Podcast appearances and mentions of Joe Beda

  • Podcasts: 38
  • Episodes: 60
  • Average duration: 42m
  • Episode frequency: infrequent
  • Latest episode: Feb 1, 2024

POPULARITY

Popularity chart: 2017 to 2024 (interactive chart not reproduced)


Best podcasts about Joe Beda

Latest podcast episodes about Joe Beda

The Engineering Room with Dave Farley
Kubernetes & Cloud Computing | Kelsey Hightower In The Engineering Room Ep. 13

The Engineering Room with Dave Farley

Play Episode Listen Later Feb 1, 2024 83:34


Kelsey is a pioneer in cloud computing and has led many advances in the implementation and adoption of cloud-based software. He is a significant contributor to open source software, involved in many incredibly popular open source projects, including, but not limited to, Kubernetes. Kelsey not only helped implement Kubernetes, but also helped to promote and spread its adoption and to build the community around it. In this episode Kelsey and Dave discuss a range of topics, centred on cloud computing, but also exploring software engineering and its nature in more detail. Find out if Dave and Kelsey disagree about stateful serverless and asynchrony.

⭐ PATREON: Join the Continuous Delivery community and access extra perks & content! JOIN HERE ➡️ https://bit.ly/ContinuousDeliveryPatreon

GOTO - Today, Tomorrow and the Future
The Current State of Software Engineering • Jez Humble & Holly Cummins

GOTO - Today, Tomorrow and the Future

Play Episode Listen Later Dec 15, 2023 35:46 Transcription Available


This interview was recorded at GOTO Aarhus for GOTO Unscripted.
gotopia.tech
Read the full transcription of this interview here

Jez Humble - SRE at Google Cloud & Lecturer at UC Berkeley
Holly Cummins - Senior Principal Software Engineer on the Red Hat Quarkus Team

RESOURCES
dora.dev
Jez
continuousdelivery.com
github.com/jezhumble
linkedin.com/in/jez-humble
@jezhumble
sre.google/resources
Holly
hollycummins.com
hollycummins.com/type/blog
@holly_cummins
hachyderm.io/@holly_cummins
github.com/holly-cummins
linkedin.com/in/holly-k-cummins

DESCRIPTION
Holly Cummins and Jez Humble explore the delicate balance of communication in the tech industry. They dissect two contrasting trends – the need for increased communication and the burden of communication overhead. Jez highlights the importance of effectively managing limited communication bandwidth, emphasizing the need to focus on the right things and automate processes when possible. They delve into the significance of good platforms and touch on the persistence of the perennial issue of code formatting standards. Despite the challenges, they remain optimistic about the potential for positive change and acknowledge the progress made through continuous integration.

RECOMMENDED BOOKS
Nicole Forsgren, Jez Humble & Gene Kim • Accelerate
Kim, Humble, Debois, Willis & Forsgren • The DevOps Handbook
Jez Humble & David Farley • Continuous Delivery
Jez Humble, Joanne Molesky & Barry O'Reilly • Lean Enterprise
Holly Cummins & Timothy Ward • Enterprise OSGi in Action
Liz Rice • Container Security
Liz Rice • Kubernetes Security
Brendan Burns, Joe Beda & Kelsey Hightower • Kubernetes: Up and Running

Twitter | Instagram | LinkedIn | Facebook

Looking for a unique learning experience?
Attend the next GOTO conference near you! Get your ticket: gotopia.tech
SUBSCRIBE TO OUR YOUTUBE CHANNEL - new videos posted almost daily

GOTO - Today, Tomorrow and the Future
War Stories from Moving to the Cloud • Holly Cummins & Lorna Jane Mitchell

GOTO - Today, Tomorrow and the Future

Play Episode Listen Later Jun 9, 2023 17:07 Transcription Available


This interview was recorded for GOTO Unscripted at GOTO Copenhagen.
gotopia.tech
Read the full transcription of this interview here

Holly Cummins - Senior Principal Software Engineer on the Red Hat Quarkus Team
Lorna Jane Mitchell - Head of Developer Relations at Aiven & Open Source Specialist

DESCRIPTION
Are you a developer ready to embark on your cloud journey but feeling overwhelmed? Fear not! The benefits of the cloud far outweigh the initial struggles. With automation and proper monitoring, you can avoid sky-high bills while elevating your company and user experience to new heights. Don't miss out on the opportunity to learn from Lorna Jane Mitchell and Holly Cummins as they share their practical war stories from their own cloud migration and operations. Join us and take your development game to the next level!

RECOMMENDED BOOKS
Holly Cummins & Timothy Ward • Enterprise OSGi in Action
Liz Rice • Container Security
Liz Rice • Kubernetes Security
Brendan Burns, Joe Beda & Kelsey Hightower • Kubernetes: Up and Running
John Arundel & Justin Domingus • Cloud Native DevOps with Kubernetes
Pini Reznik, Jamie Dobson & Michelle Gienow • Cloud Native Transformation

Twitter | LinkedIn | Facebook

Looking for a unique learning experience?
Attend the next GOTO conference near you! Get your ticket: gotopia.tech
SUBSCRIBE TO OUR YOUTUBE CHANNEL - new videos posted almost daily

GOTO - Today, Tomorrow and the Future
Expert talk: Cloud Native & Serverless • Matt Turner & Eric Johnson

GOTO - Today, Tomorrow and the Future

Play Episode Listen Later Dec 30, 2022 36:30 Transcription Available


This interview was recorded at GOTO Amsterdam 2022 for GOTO Unscripted.
gotopia.tech
Read the full transcription of this interview here

Matt Turner - DevOps Leader & Software Engineer at Tetrate
Eric Johnson - Principal Developer Advocate for Serverless at AWS

DESCRIPTION
Should everyone move to the cloud? Are all event-driven architectures serverless or is it rather the other way around? Join the two experts, Matt Turner, software engineer at Tetrate, and Eric Johnson, principal developer advocate for serverless at AWS, to discover if you should take that journey to become cloud native. Understand the power of these technologies together with some useful tips & tricks about testing and the BEAM languages.

RECOMMENDED BOOKS
Brendan Burns, Joe Beda & Kelsey Hightower • Kubernetes: Up and Running
Liz Rice • Container Security
Liz Rice • Kubernetes Security
Burns, Villalba, Strebel & Evenson • Kubernetes Best Practices
John Arundel & Justin Domingus • Cloud Native DevOps with Kubernetes
Adzic & Korac • Running Serverless
Scott Patterson • Learn AWS Serverless Computing
Peter Sbarski • Serverless Architectures on AWS
Kasun Indrasiri & Danesh Kuruppu • gRPC: Up and Running

Twitter | LinkedIn | Facebook

Looking for a unique learning experience?
Attend the next GOTO conference near you! Get your ticket: gotopia.tech
SUBSCRIBE TO OUR YOUTUBE CHANNEL - new videos posted almost daily

Kube Cuddle
Joe Beda

Kube Cuddle

Play Episode Listen Later Dec 8, 2022 73:42


Show notes:
Joe's Twitter | Joe's Mastodon
Rich's Twitter | Rich's Mastodon
Kube Cuddle Twitter

Links:
Developers, developers, developers
The Kubernetes Documentary: Part 1 | Part 2
Brendan Burns | Craig McLuckie
Dark Side of the Ring
LXC | BSD Jails | Solaris Zones
Tim Hockin | lmctfy
Docker in dev vs prod meme
Joe's slides from his 2014 Gluecon talk
Mesos
Kelsey's Tetris talk (a later version than the one I saw)
go fmt | Rubocop
Mesos
Bryan Liles | Naadir Jeewa | Kris Nova
TGIK
kubectl apply and the 3 way diff
SPIFFE
Leigh Capili's talk on auth and RBAC
Bonus link: Joe sent me this on Twitter after the interview, some notes he wrote on what a production stack should look like, from 2015.

Listener questions from Bill Mulligan, Bryan Liles, Thomas Güttler, Ross Kukulinski, and Saim Safdar. Thank you!

Episode Transcript
Logo by the amazing Emily Griffin.
Music by Monplaisir.
Thanks for listening.
★ Support this podcast on Patreon ★

GOTO - Today, Tomorrow and the Future
Driving Innovation with Kubernetes & Java • Ana-Maria Mihalceanu & Eric Johnson

GOTO - Today, Tomorrow and the Future

Play Episode Listen Later Oct 7, 2022 32:45 Transcription Available


This interview was recorded at GOTO Amsterdam 2022 for GOTO Unscripted.
gotopia.tech
Read the full transcription of this interview here

Ana-Maria Mihalceanu - Developer Advocate at Red Hat & Java Champion
Eric Johnson - Principal Developer Advocate for Serverless at AWS

DESCRIPTION
Technology can advance faster if we share our knowledge. That's the mission of a developer advocate. Ana-Maria Mihalceanu, developer advocate at Red Hat, talked to Eric Johnson, principal developer advocate at AWS, about her passion for learning, sharing knowledge, Java and Kubernetes. Discover what a Kubernetes operator is and when to use it vs Terraform.

RECOMMENDED BOOKS
Brendan Burns, Joe Beda & Kelsey Hightower • Kubernetes: Up and Running
Markus Eisele & Natale Vinto • Modernizing Enterprise Java
Kevlin Henney & Trisha Gee • 97 Things Every Java Programmer Should Know
Burns, Villalba, Strebel & Evenson • Kubernetes Best Practices
Adzic & Korac • Running Serverless
Scott Patterson • Learn AWS Serverless Computing
Peter Sbarski • Serverless Architectures on AWS

Twitter | LinkedIn | Facebook

Looking for a unique learning experience?
Attend the next GOTO conference near you! Get your ticket at gotopia.tech
SUBSCRIBE TO OUR YOUTUBE CHANNEL - new videos posted almost daily.

Dev Interrupted
What the smartest minds in engineering are thinking about, working on and investing in.
Listen on: Apple Podcasts | Spotify

Coffee and Open Source

Joe Beda is a technologist with a history of working across many parts of our industry, from web browsers to real-time communication systems to cloud computing. He started his career at Microsoft working on Internet Explorer and client platforms before moving on to Google. During his tenure at Google he started Google Compute Engine and helped start Kubernetes. He then co-founded Heptio and, after 2 years, sold it to VMware. Joe was at VMware for several years helping to create the Tanzu suite of products. Joe holds a B.S. from Harvey Mudd College and lives in Seattle, Washington with his wife Rachel (a medical doctor and also an HMC alum) and their two daughters.

You can follow Joe on social media:
https://twitter.com/jbeda
https://www.eightypercent.net/

PLEASE SUBSCRIBE TO THE PODCAST
- Spotify: http://isaacl.dev/podcast-spotify
- Apple Podcasts: http://isaacl.dev/podcast-apple
- Google Podcasts: http://isaacl.dev/podcast-google
- RSS: http://isaacl.dev/podcast-rss

You can check out more episodes of Coffee and Open Source on https://www.coffeeandopensource.com/

Coffee and Open Source is hosted by Isaac Levin (https://twitter.com/isaacrlevin)

---

Support this podcast: https://podcasters.spotify.com/pod/show/coffeandopensource/support

Software Defined Talk
Episode 363: Bad Bosses

Software Defined Talk

Play Episode Listen Later Jun 17, 2022 62:19


This week we discuss Oracle buying Cerner, drama at Coinbase and the Gartner MQ for Observability. Plus, some thoughts on European Design Style…

Runner-up Titles
Secular Winds.
Internet Hygiene
Don't Bang the Table
Do you own a motorcycle?
The Hilda
World-building Executive Retreat for Toxic Crypto Execs.
That's some flavor
Client/server Bias.
Power of Privilege

Rundown
Everyone tries to fix verticals like healthcare but can an outsider really do it?
Oracle thinks it can fix healthcare's biggest tech issue (https://www.theverge.com/2022/6/10/23162503/oracle-cerner-health-records-data-interoperability)
Oracle stock jumps 9% on strong cloud revenue (https://www.marketwatch.com/story/oracle-stock-jumps-9-on-strong-cloud-revenue-11655151953)

CEO Bias
Silicon Valley's Horrible Bosses (https://newsletters.theatlantic.com/galaxy-brain/62a7fbc951acba00209259f5/elon-musk-brian-armstrong-coinbase-crypto/)
Coinbase CEO Twitter Thread (https://twitter.com/brian_armstrong/status/1535304943728414721)
An Open letter to Elon… (https://www.theverge.com/2022/6/16/23170228/spacex-elon-musk-internal-open-letter-behavior)

Productivity
Google's changing its calendar invites to be clearer and more modern (https://www.theverge.com/2022/6/13/23166474/google-calendar-gmail-invite-redesign-updated-info)
Who the **** Enjoys Using Outlook? (https://slate.com/technology/2022/06/gmail-versus-outlook.html)
Honeycomb Cements Its Position as a Leader in 2022 Gartner® Magic Quadrant™ - Honeycomb (https://www.honeycomb.io/blog/honeycomb-leader-observability-gartnermq)

Relevant to your Interests
Introducing Achievements: recognizing the many stages of a developer's coding journey | The GitHub Blog (https://github.blog/2022-06-09-introducing-achievements-recognizing-the-many-stages-of-a-developers-coding-journey/)
Microsoft's new Xbox TV app streams games without a console later this month (https://www.theverge.com/2022/6/9/23159460/microsoft-xbox-tv-app-samsung-2022-tv-xbox-cloud-gaming-streaming)
MIT researchers uncover ‘unpatchable' flaw in Apple M1 chips – TechCrunch (https://techcrunch.com/2022/06/10/apple-m1-unpatchable-flaw/)
Spotify comes for audiobooks (https://www.theverge.com/2022/6/9/23161536/spotify-audiobooks-amazon-audible-podcasts)
Joe Beda retires, sort of. (http://:

The Cloudcast
The Kubernetes Developer Experience?

The Cloudcast

Play Episode Listen Later Feb 27, 2022 27:07


Kubernetes won the container wars and continues to grow in use across many industries. But how did something that was about cloud-native applications gain traction without a developer experience?

SHOW: 595
CLOUD NEWS OF THE WEEK - http://bit.ly/cloudcast-cnotw
CHECK OUT OUR NEW PODCAST - "CLOUDCAST BASICS"

SHOW SPONSORS:
Teleport is the easiest, most secure way to access all your infrastructure. Get started with Teleport.
CloudZero - Cloud Cost Intelligence for Engineering Teams
Datadog Kubernetes Solution: Maximum Visibility into Container Environments
Start monitoring the health and performance of your container environment with a free 14-day Datadog trial. Listeners of The Cloudcast will also receive a free Datadog T-shirt.

SHOW NOTES:
Kubernetes - The Documentary - Part 1
Kubernetes - The Documentary - Part 2
Software Defined Talk - Eps. 344 - Kubernetes Documentary

HOW DID KUBERNETES WIN WHEN IT STARTED FROM BEHIND?
Listening to this week's SDT show, and remembering listening to SDT years ago, @cote's comments about why Kubernetes "won" were always interesting. In essence it was late to market, was lacking in features vs. competitors (Mesos, Swarm, CF), and had a terrible user experience... so how did it "win"? It all seems ass-backwards.

HOW HAS KUBERNETES CONTINUED TO WIN, WITHOUT A DEVELOPER EXPERIENCE?
Mesos, CF and Swarm were all single-vendor-dominated projects, and many companies had concerns about another generation of vendor lock-in. This point is reasonably valid, but the companies that were using Mesos, CF and Swarm did all seem to love that technology.
Mesos was primarily focused on big data workloads. For each new application type, you needed to write (or use) another application-specific framework. So it was good at its niche, but couldn't easily be used for other types of apps. [Kubernetes eventually copied this model with CRDs.]
Swarm was the easiest to use, but it wasn't very good technology and didn't scale. So it got pigeon-holed for smaller projects.
CF focused on Java/Spring Boot, which is a big enterprise opportunity, but CF was super complicated to set up. And CF never really embraced containers, so companies were wary of whether they were missing this big trend (Docker).
Kubernetes comes along and becomes the good-enough platform. It's not dominated by a single vendor. It natively supports Docker, it has some built-in usage patterns so it's easier than Mesos to add apps, it scales better than Swarm, and it can support Java/Spring or even legacy Java (lift-and-shift). And as Joe Beda says, you could use it natively or you could build some PaaS-y features on top of it.

FEEDBACK?
Email: show at the cloudcast dot net
Twitter: @thecloudcastnet

GOTO - Today, Tomorrow and the Future
Hands-on Microservices • Ronnie Mitra & Mike Amundsen

GOTO - Today, Tomorrow and the Future

Play Episode Listen Later Feb 25, 2022 29:35 Transcription Available


This interview was recorded for the GOTO Book Club.
http://gotopia.tech/bookclub

Ronnie Mitra - Co-Author of "Microservices: Up and Running"
Mike Amundsen - Author of "Design and Build Great Web APIs"

DESCRIPTION
Microservices have long been a hot topic in software development, with both pros and cons associated with them. In “Microservices: Up and Running,” Ronnie Mitra and Irakli Nadareishvili go on a mission to offer guidelines for your first experience with microservices. In this Book Club discussion with Ronnie, a world-class expert in API design, security and enterprise development in general, and Mike Amundsen, author of the book "Design and Build Great Web APIs," you'll discover the decisions that you'll need to consider, the reasoning behind each, and how to get started with microservices.

The interview is based on Ronnie's book "Microservices: Up and Running": https://amzn.to/3c4HmmL
Read the full transcription of the interview here: https://gotopia.tech/bookclub/episodes/microservices-hands-on

RECOMMENDED BOOKS
Ronnie Mitra & Irakli Nadareishvili • Microservices: Up and Running • https://amzn.to/3c4HmmL
Ronnie Mitra, Irakli Nadareishvili, Matt McLarty & Mike Amundsen • Microservice Architecture • https://amzn.to/3fVNAb0
Ronnie Mitra, Mehdi Medjaoui, Erik Wilde & Mike Amundsen • Continuous API Management • https://amzn.to/3uxdypw
Ronnie Mitra & many more • DataPower SOA Appliance Administration, Deployment, and Best Practices • https://amzn.to/3t2jhD9
Brendan Burns, Joe Beda & Kelsey Hightower • Kubernetes: Up and Running • http://amzn.to/31OAhB9
Matthew Skelton & Manuel Pais • Team Topologies • http://amzn.to/3sVLyLQ
Mike Amundsen • Design and Build Great Web APIs • https://bookshop.org/a/9452/9781680506808

https://twitter.com/GOTOcon
https://www.linkedin.com/company/goto-
https://www.facebook.com/GOTOConferences

Looking for a unique learning experience?
Attend the next GOTO conference near you! Get your ticket at https://gotopia.tech
SUBSCRIBE TO OUR YOUTUBE CHANNEL - new videos posted almost daily.
https://www.youtube.com/user/GotoConferences/?sub_confirmation=1

GOTO - Today, Tomorrow and the Future
Migrating to Kubernetes + Best Practices for Cloud Native • Thomas Vitale, Lasse Højgaard & Lars Jensen

GOTO - Today, Tomorrow and the Future

Play Episode Listen Later Jan 21, 2022 37:41 Transcription Available


This interview was recorded at GOTO Copenhagen 2021 for GOTO Unscripted.
https://gotopia.tech
Read the full transcription of this interview here: https://gotopia.tech/articles/cloud-native-kubernetes-and-all-things-related

Thomas Vitale - Senior Software Engineer at Systematic & Author of "Cloud Native Spring in Action"
Lasse Højgaard - Cloud Architect & Software Pilot at Trifork
Lars Jensen - Lead Developer at GOTO

DESCRIPTION
Thinking of going cloud native and looking for the best way to do it? In this Unscripted episode, Lars Jensen talks with cloud specialists Thomas Vitale and Lasse Højgaard about their day-to-day work and experience with cloud native, Kubernetes and all things related.

RECOMMENDED BOOKS
Thomas Vitale • Cloud Native Spring in Action • https://amzn.to/3355Zy0
Brendan Burns, Joe Beda & Kelsey Hightower • Kubernetes: Up and Running • http://amzn.to/31OAhB9
Burns, Villalba, Strebel & Evenson • Kubernetes Best Practices • https://amzn.to/3gBXRsr
Sam Newman • Monolith to Microservices • https://amzn.to/2Nml96E
Sam Newman • Building Microservices • https://amzn.to/3dMPbOs
Ronnie Mitra & Irakli Nadareishvili • Microservices: Up and Running • https://amzn.to/3c4HmmL
Mitra, Nadareishvili, McLarty & Amundsen • Microservice Architecture • https://amzn.to/3fVNAb0
Chris Richardson • Microservices Patterns • https://amzn.to/2SOnQ7h
Adam Bellemare • Building Event-Driven Microservices • https://amzn.to/3yoa7TZ
Dave Farley • Continuous Delivery Pipelines • https://amzn.to/3hjiE51

https://twitter.com/GOTOcon
https://www.linkedin.com/company/goto-
https://www.facebook.com/GOTOConferences

Looking for a unique learning experience?
Attend the next GOTO conference near you! Get your ticket at https://gotopia.tech
SUBSCRIBE TO OUR YOUTUBE CHANNEL - new videos posted almost daily.
https://www.youtube.com/user/GotoConferences/?sub_confirmation=1

CarahCast: Podcasts on Technology in the Public Sector
Mission First Podcast Series: Kubernetes 2.0

CarahCast: Podcasts on Technology in the Public Sector

Play Episode Listen Later Jan 5, 2022 57:48


Hear this conversation between Joe Beda, Co-creator of Kubernetes and Principal Engineer, VMware, and Paul Puckett, Director of the Enterprise Cloud Management Office (ECMO), U.S. Army, where they discuss:
The development of Kubernetes and when Joe realized it would be a part of every major software conversation globally
Key trends, practices, and differences happening in the public sector versus private
How Kubernetes helps manage Day 2 complexities of applications spanning cloud service providers (CSPs), datacenters, and the edge
and much more...

Tune into our monthly podcast series, Mission First, where we will be focusing on the Department of Defense and National Security mission and not products. Hear from thought leaders in the industry as they discuss complex challenges and topics buzzing within federal government IT.

Screaming in the Cloud
Building Distributed Cognition into Your Business with Sam Ramji

Screaming in the Cloud

Play Episode Listen Later Dec 9, 2021 39:56


About Sam
A 25-year veteran of the Silicon Valley and Seattle technology scenes, Sam Ramji led Kubernetes and DevOps product management for Google Cloud, founded the Cloud Foundry Foundation, has helped build two multi-billion dollar markets (API Management at Apigee and Enterprise Service Bus at BEA Systems) and redefined Microsoft's open source and Linux strategy from “extinguish” to “embrace”. He is nerdy about open source, platform economics, middleware, and cloud computing with emphasis on developer experience and enterprise software. He is an advisor to multiple companies including Dell Technologies, Accenture, Observable, Fletch, Orbit, OSS Capital, and the Linux Foundation. Sam received his B.S. in Cognitive Science from UC San Diego, the home of transdisciplinary innovation, in 1994 and is still excited about artificial intelligence, neuroscience, and cognitive psychology.

Links:
DataStax: https://www.datastax.com
Sam Ramji Twitter: https://twitter.com/sramji
Open||Source||Data: https://www.datastax.com/resources/podcast/open-source-data
Screaming in the Cloud Episode 243 with Craig McLuckie: https://www.lastweekinaws.com/podcast/screaming-in-the-cloud/innovating-in-the-cloud-with-craig-mcluckie/
Screaming in the Cloud Episode 261 with Jason Warner: https://www.lastweekinaws.com/podcast/screaming-in-the-cloud/what-github-can-give-to-microsoft-with-jason-warner/

Transcript

Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.

Corey: This episode is sponsored in part by our friends at Redis, the company behind the incredibly popular open source database that is not the BIND DNS server. If you're tired of managing open source Redis on your own, or you're using one of the vanilla cloud caching services, these folks have you covered with the go-to managed Redis service for global caching and primary database capabilities: Redis Enterprise. Set up a meeting with a Redis expert during re:Invent, and you'll not only learn how you can become a Redis hero, but also have a chance to win some fun and exciting prizes. To learn more and deploy not only a cache but a single operational data platform for one Redis experience, visit redis.com/hero. That's r-e-d-i-s.com/hero. And my thanks to my friends at Redis for sponsoring my ridiculous nonsense.

Corey: Are you building cloud applications with a distributed team? Check out Teleport, an open source identity-aware access proxy for cloud resources. Teleport provides secure access to anything running somewhere behind NAT: SSH servers, Kubernetes clusters, internal web apps and databases. Teleport gives engineers superpowers! Get access to everything via single sign-on with multi-factor. List and see all SSH servers, Kubernetes clusters or databases available to you. Get instant access to them all using tools you already have. Teleport ensures best security practices like role-based access, preventing data exfiltration, providing visibility and ensuring compliance. And best of all, Teleport is open source and a pleasure to use. Download Teleport at https://goteleport.com.
That's goteleport.com.

Corey: Welcome to Screaming in the Cloud, I'm Cloud Economist Corey Quinn, and a recurring effort that this show goes to is to showcase people in their best light. Today's guest has done an awful lot: he led Kubernetes and DevOps Product Management for Google Cloud; he founded the Cloud Foundry Foundation; he set open-source strategy for Microsoft in the naughts; he advises companies including Dell, Accenture, the Linux Foundation; and tying all of that together, it's hard to present a lot of that in a great light because given my own proclivities, that sounds an awful lot like a personal attack. Sam Ramji is the Chief Strategy Officer at DataStax. Sam, thank you for joining me, and it's weird when your resume starts to read like, "Oh, I hate all of these things."

Sam: [laugh]. It's weird, but it's true. And it's the only life I could have lived apparently because here I am. Corey, it's a thrill to meet you. I've been an admirer of your public speaking, and public tweeting, and your writing for a long time.

Corey: Well, thank you. The hard part is getting over the voice saying don't do it because it turns out that there's no real other side of public shutting up, which is something that I was never good at anyway, so I figured I'd lean into it. And again, I mean that in the sense of where you have been historically in terms of your career, not, "Look what you've done," which is a subtext that I could be accused of throwing in sometimes.

Sam: I used to hear that a lot from my parents, actually.

Corey: Oh, yeah. That was my name growing up. But you've done a lot of things, and you've transitioned from notable company making significant impact on the industry, to the next one, to the next one. And you've been in high-flying roles, doing lots of really interesting stuff. What's the common thread between all those things?

Sam: I'm an intensely curious person, and the thing that I'm most curious about is distributed cognition. And that might not be obvious from what you see is kind of the… Lego blocks of my career, but I studied cognitive science in college when that was not really something that was super well known. So, I graduated from UC San Diego in '94 doing neuroscience, artificial intelligence, and psychology. And because I just couldn't stop thinking about thinking; I was just fascinated with how it worked. So, then I wanted to build software systems that would help people learn. And then I wanted to build distributed software systems. And then I wanted to learn how to work with people who were thinking about building the distributed software systems. So, you end up kind of going up this curve of, like, complexity about how do we think? How do we think alone? How do we learn to think? How do we think together? And that's the directed path through my software engineering career, into management, into middleware at BEA, into open-source at Microsoft because that's an amazing demonstration of distributed cognition, how, you know, at the time in 2007, I think, SourceForge had 100,000 open-source projects, which was, like, mind boggling. Some of them even worked together, but all of them represented these groups of people, flung around the world, collaborating on something that was just fundamentally useful, that they were curious about. Kind of did the same thing into APIs because APIs are an even better way to reuse for some cases than having the source code—at Apigee.
And kept growing up through that into, how are we building larger-scale thinking systems like Cloud Foundry, which took me into Google and Kubernetes, and then some applications of that in Autodesk and now DataStax. So, I love building companies. I love helping people build companies because I think business is distributed cognition. So, those businesses that build distributed systems, for me, are the most fascinating.Corey: You were basically handed a heck of a challenge as far as, “Well, help set open-source strategy,” back at Microsoft, in the days where that was a punchline. And credit where due, I have to look at Microsoft of today, and it's not a joke, you can have your arguments about them, but again in those days, a lot of us built our entire personality on hating Microsoft. Some folks never quite evolved beyond that, but it's a new ballgame and it's very clear that the Microsoft of yesteryear and the Microsoft of today are not completely congruent. What was it like at that point understanding that as you're working with open-source communities, you're doing that from a place of employment with a company that was widely reviled in the space.Sam: It was not lost on me. The irony, of course, was that—Corey: Well, thank God because otherwise the question where you would have been, “What do you mean they didn't like us?”Sam: [laugh].Corey: Which, on some levels, like, yeah, that's about the level of awareness I would have expected in that era, but contrary to popular opinion, execs at these companies are not generally oblivious.Sam: Yeah, well, if I'd been clever as a creative humorist, I would have given you that answer instead of my serious answer, but for some reason, my role in life is always to be the straight guy. I used to have Slashdot as my homepage, right? I love when I'd see some conspiracy theory about, you know, Bill Gates dressed up as the Borg, taking over the world. My first startup, actually in '97, was crushed by Microsoft. They copied our product, copied the marketing, and bundled it into Office, so I had lots of reasons to dislike Microsoft.But in 2004, I was recruited into their venture capital team, which I couldn't believe. It was really a place that they were like, “Hey, we could do better at helping startups succeed, so we're going to evangelize their success—if they're building with Microsoft technologies—to VCs, to enterprises, we'll help you get your first big enterprise deal.” I was like, “Man, if I had this a few years ago, I might not be working.” So, let's go try to pay it forward.I ended up in open-source by accident. I started going to these conferences on Software as a Service. This is back in 2005 when people were just starting to light up, like, Silicon Valley Forum with, you know, the CEO of Demandware would talk, right? We'd hear all these different ways of building a new business, and they all kept talking about their tech stack was Linux, Apache, MySQL, and PHP. I went to one eight-hour conference, and Microsoft technologies were mentioned for about 12 seconds in two separate chunks. So, six seconds, he was like, “Oh, and also we really like Microsoft SQL Server for our data layer.”Corey: Oh, Microsoft SQL Server was fantastic. And I know that's a weird thing for people to hear me say, just because I've been renowned recently for using Route 53 as the primary data store for everything that I can. But there was nothing quite like that as far as having multiple write nodes, being able to handle sharding effectively. 
It was expensive, and you would take a bath on the price come audit time, but people were not rolling it out unaware of those things. This was a trade off that they were making.Oracle has a similar story with databases. It's yeah, people love to talk smack about Oracle and its business practices for a variety of excellent reasons, at least in the database space that hasn't quite made it to cloud yet—knock on wood—but people weren't deploying it because they thought Oracle was warm and cuddly as a vendor; they did it because they can tolerate the rest of it because their stuff works.Sam: That's so well said, and people don't give them the credit that's due. Like, when they built hypergrowth in their business, like… they had a great product; it really worked. They made it expensive, and they made a lot of money on it, and I think that was why you saw MySQL so successful and why, if you were looking for a spec that worked, that you could talk through through an open driver like ODBC or JDBC or whatever, you could swap to Microsoft SQL Server. But I walked out of that and came back to the VC team and said, “Microsoft has a huge problem. This is a massive market wave that's coming. We're not doing anything in it. They use a little bit of SQL Server, but there's nothing else in your tech stack that they want, or like, or can afford because they don't know if their businesses are going to succeed or not. And they're going to go out of business trying to figure out how much licensing costs they would pay to you in order to consider using your software. They can't even start there. They have to start with open-source. So, if you're going to deal with SaaS, you're going to have to have open-source, and get it right.”So, I worked with some folks in the industry, wrote a ten-page paper, sent it up to Bill Gates for Think Week. Didn't hear much back. Bought a new strategy to the head of developer platform evangelism, Sanjay Parthasarathy who suggested that the idea of discounting software to zero for startups, with the hope that they would end up doing really well with it in the future as a Software as a Service company; it was dead on arrival. Dumb idea; bring it back; that actually became BizSpark, the most popular program in Microsoft partner history.And then about three months later, I got a call from this guy, Bill Hilf. And he said, “Hey, this is Bill Hilf. I do open-source at Microsoft. I work with Bill Gates. He sent me your paper. I really like it. Would you consider coming up and having conversation with me because I want you to think about running open-source technology strategy for the company.” And at this time I'm, like, 33 or 34. And I'm like, “Who me? You've got to be joking.” And he goes, “Oh, and also, you'll be responsible for doing quarterly deep technical briefings with Bill… Gates.” I was like, “You must be kidding.” And so of course I had to check it out. One thing led to another and all of a sudden, with not a lot of history in the open-source community but coming in it with a strategist's eye and with a technologist's eye, saying, “This is a problem we got to solve. How do we get after this pragmatically?” And the rest is history, as they say.Corey: I have to say that you are the Chief Strategy Officer at DataStax, and I pull up your website quickly here and a lot of what I tell earlier stage companies is effectively more or less what you have already done. 
You haven't named yourself after the open-source project that underlies the bones of what you have built so you're not going to wind up in the same glorious challenges that, for example, Elastic or MongoDB have in some ways. You have a pricing page that speaks both to the reality of, “It's two in the morning. I'm trying to get something up and running and I want you the hell out of my way. Just give me something that I can work with a reasonable free tier and don't make me talk to a salesperson.” But also, your enterprise tier is, “Click here to talk to a human being,” which is speaking enterprise slash procurement slash, oh, there will be contract negotiation on these things.It's being able to serve different ends of your market depending upon who it is that encounters you without being off-putting to any of those. And it's deceptively challenging for companies to pull off or get right. So clearly, you've learned lessons by doing this. That was the big problem with Microsoft for the longest time. It's, if I want to use some Microsoft stuff, once you were able to download things from the internet, it changed slightly, but even then it was one of those, “What exactly am I committing to here as far as signing up for this? And am I giving them audit rights into my environment? Is the BSA about to come out of nowhere and hit me with a surprise audit and find out that various folks throughout the company have installed this somewhere and now I owe more than the company's worth?” That was always the haunting fear that companies had back then.These days, I like the approach that companies are taking with the SaaS offering: you pay for usage. On some level, I'd prefer it slightly differently in a pay-per-seat model because at least then you can predict the pricing, but no one is getting surprise submarined with this type of thing on an audit basis, and then they owe damages and payment in arrears and someone has them over a barrel. It's just, “Oh. The bill this month was higher than we expected.” I like that model I think the industry does, too.Sam: I think that's super well said. As I used to joke at BEA Systems, nothing says ‘I love you' to a customer like an audit, right? That's kind of a one-time use strategy. If you're going to go audit licenses to get your revenue in place, you might be inducing some churn there. It's a huge fix for the structural problem in pricing that I think package software had, right?When we looked at Microsoft software versus open-source software, and particularly Windows versus Linux, you would have a structure where sales reps were really compensated to sell as much as possible upfront so they could get the best possible commission on what might be used perpetually. But then if you think about it, like, the boxes in a curve, right, if you do that calculus approximation of a smooth curve, a perpetual software license is a huge box and there's an enormous amount of waste in there. And customers figured out so as soon as you can go to a pay-per-use or pay-as-you-go, you start to smooth that curve, and now what you get is what you deserve, right, as opposed to getting filled with way more cost than you expect. So, I think this model is really super well understood now. Kind of the long run the high point of open-source meets, cloud, meets Software as a Service, you look at what companies like MongoDB, and Confluent, and Elastic, and Databricks are doing. And they've really established a very good path through the jungle of how to succeed as a software company. 
So, it's still difficult to implement, but there are really world-class guides right now.Corey: Moving beyond where Microsoft was back in the naughts, you were then hired as a VP over at Google. And in that era, the fact that you were hired as a VP at Google is fascinating. They preferred to grow those internally, generally from engineering. So, first question, when you were being hired as a VP in the product org, did they make you solve algorithms on a whiteboard to get there?Sam: [laugh]. They did not. I did have somewhat of an advantage [because they 00:13:36] could see me working pretty closely as the CEO of the Cloud Foundry Foundation. I'd worked closely with Craig McLuckie who notably brought Kubernetes to the world along with Joe Beda, and with Eric Brewer, and a number of others.And he was my champion at Google. He was like, “Look, you know, we need him doing Kubernetes. Let's bring Sam in to do that.” So, that was helpful. I also wrote a [laugh] 2000-word strategy document, just to get some thoughts out of my head. And I said, “Hey, if you like this, great. If you don't throw it away.” So, the interviews were actually very much not solving problems in a whiteboard. There were super collaborative, really excellent conversations. It was slow—Corey: Let's be clear, Craig McLuckie's most notable achievement was being a guest on this podcast back in Episode 243. But I'll say that this is a close second.Sam: [laugh]. You're not wrong. And of course now with Heptio and their acquisition by VMware.Corey: Ehh, they're making money beyond the wildest dreams of avarice, that's all well and good, but an invite to this podcast, that's where it's at.Sam: Well, he should really come on again, he can double down and beat everybody. That can be his landmark achievement, a two-timer on Screaming in [the] Cloud.Corey: You were at Google; you were at Microsoft. These are the big titans of their era, in some respect—not to imply that there has beens; they're bigger than ever—but it's also a more crowded field in some ways. I guess completing the trifecta would be Amazon, but you've had the good judgment never to work there, directly of course. Now they're clearly in your market. You're at DataStax, which is among other things, built on Apache Cassandra, and they launched their own Cassandra service named Keyspaces because no one really knows why or how they name things.And of course, looking under the hood at the pricing model, it's pretty clear that it really is just DynamoDB wearing some Groucho Marx classes with a slight upcharge for API level compatibility. Great. So, I don't see it a lot in the real world and that's fine, but I'm curious as to your take on looking at all three of those companies at different eras. There was always the threat in the open-source world that they are going to come in and crush you. You said earlier that Microsoft crushed your first startup.Google is an interesting competitor in some respects; people don't really have that concern about them. And your job as a Chief Strategy Officer at Amazon is taken over by a Post-it Note that simply says ‘yes' on it because there's nothing they're not going to do, or try, and experiment with. 
So, from your perspective, if you look at the titans, who is it that you see as the largest competitive threat these days, if that's even a thing?Sam: If you think about Sun Tzu and the Art of War, right—a lot of strategy comes from what we've learned from military environments—fighting a symmetric war, right, using the same weapons and the same army against a symmetric opponent, but having 1/100th of the personnel and 1/100th of the money is not a good plan.Corey: “We're going to lose money, going to be outcompeted; we'll make it up in volume. Oh, by the way, we're also slower than they are.”Sam: [laugh]. So, you know, trying to come after AWS, or Microsoft, or Google as an independent software company, pound-for-pound, face-to-face, right, full-frontal assault is psychotic. What you have to do, I think, at this point is to understand that these are each companies that are much like we thought about Linux, and you know, Macintosh, and Windows as operating systems. They're now the operating systems of the planet. So, that creates some economies of scale, some efficiencies for them. And for us. Look at how cheap object storage is now, right? So, there's never been a better time in human history to create a database company because we can take the storage out of the database and hand it over to Amazon, or Google, or Microsoft to handle it with 13 nines of durability on a constantly falling cost basis.So, that's super interesting. So, you have to prosecute the structure of the world as it is, based on where the giants are and where they'll be in the future. Then you have to turn around and say, like, “What can they never sell?”So, Amazon can never sell something that is standalone, right? They're a parts factory and if you buy into the Amazon-first strategy of cloud computing—which we did at Autodesk when I was VP of cloud platform there—everything is a primitive that works inside Amazon, but they're not going to build things that don't work outside of the Amazon primitives. So, your company has to be built on the idea that there's a set of people who value something that is purpose-built for a particular use case that you can start to broaden out, it's really helpful if they would like it to be something that can help them escape a really valuable asset away from the center of gravity that is a cloud. And that's why data is super interesting. Nobody wakes up in the morning and says, “Boy, I had such a great conversation with Oracle over the last 20 years beating me up on licensing. Let me go find a cloud vendor and dump all of my data in that so they can beat me up for the next 20 years.” Nobody says that.Corey: It's the idea of data portability that drives decision-making, which makes people, of course, feel better about not actually moving in anywhere. But the fact that they're not locked in strategically, in a way that requires a full software re-architecture and data model rewrite is compelling. I'm a big believer in convincing people to make decisions that look a lot like that.Sam: Right. And so that's the key, right? So, when I was at Autodesk, we went from our 100 million dollar, you know, committed spend with 19% discount on the big three services to, like—we started realize when we're going to burn through that, we were spending $60 million or so a year on 20% annual growth as the cloud part of the business grew. Thought, “Okay, let's renegotiate. Let's go and do a $250 million deal. 
I'm sure they'll give us a much better discount than 19%.” Short story is they came back and said, “You know, we're going to take you from an already generous 19% to an outstanding 22%.” We thought, “Wait a minute, we already talked to Intuit. They're getting a 40% discount on a $400 million spend.”So, you know, math is hard, but, like, 40% minus 22% is 18% times $250 million is a lot of money. So, we thought, “What is going on here?” And we realized we just had no credible threat of leaving, and Intuit did because they had built a cross-cloud capable architecture. And we had not. So, now stepping back into the kind of the world that we're living in 2021, if you're an independent software company, especially if you have the unreasonable advantage of being an open-source software company, you have got to be doing your customers good by giving them cross-cloud capability. It could be simply like the Amdahl coffee cup that Amdahl reps used to put as landmines for the IBM reps, later—I can tell you that story if you want—even if it's only a way to save money for your customer by using your software, when it gets up to tens and hundreds of million dollars, that's a really big deal.But they also know that data is super important, so the option value of being able to move if they have to, that they have to be able to pull that stick, instead of saying, “Nice doggy,” we have to be on their side, right? So, there's almost a detente that we have to create now, as cloud vendors, working in a world that's invented and operated by the giants.Corey: This episode is sponsored by our friends at Oracle HeatWave is a new high-performance accelerator for the Oracle MySQL Database Service. Although I insist on calling it “my squirrel.” While MySQL has long been the worlds most popular open source database, shifting from transacting to analytics required way too much overhead and, ya know, work. With HeatWave you can run your OLTP and OLAP, don't ask me to ever say those acronyms again, workloads directly from your MySQL database and eliminate the time consuming data movement and integration work, while also performing 1100X faster than Amazon Aurora, and 2.5X faster than Amazon Redshift, at a third of the cost. My thanks again to Oracle Cloud for sponsoring this ridiculous nonsense.Corey: When we look across the, I guess, the ecosystem as it's currently unfolding, a recurring challenge that I have to the existing incumbent cloud providers is they're great at offering the bricks that you can use to build things, but if I'm starting a company today, I'm not going to look at building it myself out of, “Ooh, I'm going to take a bunch of EC2 instances, or Lambda functions, or popsicles and string and turn it into this thing.” I'm going to want to tie together things that are way higher level. In my own case, now I wind up paying for Retool, which is, effectively, yeah, it runs on some containers somewhere, presumably, I think in Azure, but don't quote me on that. And that's great. Could I build my own thing like that?Absolutely not. I would rather pay someone to tie it together. Same story. Instead of building my own CRM by running some open-source software on an EC2 instance, I wind up paying for Salesforce or Pipedrive or something in that space. And so on, and so forth.And a lot of these companies that I'm doing business with aren't themselves running on top of AWS. But for web hosting, for example; if I look at the reference architecture for a WordPress site, AWS's diagram looks like a punchline. 
It is incredibly overcomplicated. And I say this as someone who ran large WordPress installations at Media Temple many years ago. Now, I have the good sense to pay WP Engine. And on a monthly basis, I give them money and they make the website work.Sure, under the hood, it's running on top of GCP or AWS somewhere. But I don't have to think about it; I don't have to build this stuff together and think about the backups and the failover strategy and the rest. The website just works. And that is increasingly the direction that business is going; things commoditize over time. And AWS in particular has done a terrible job, in my experience, of differentiating what it is they're doing in the language that their customers speak.They're great at selling things to existing infrastructure engineers, but folks who are building something from scratch aren't usually in that cohort. It's a longer story with time and, “Well, we're great at being able to sell EC2 instances by the gallon.” Great. Are you capable of going to a small doctor's office somewhere in the American Midwest and offering them an end-to-end solution for managing patient data? Of course not. You can offer them a bunch of things they can tie together to something that will suffice if they all happen to be software engineers, but that's not the opportunity.So instead, other companies are building those solutions on top of AWS, capturing the margin. And if there's one thing guaranteed to keep Amazon execs awake at night, it's the idea of someone who isn't them making money somehow somewhere, so I know that's got to rankle them, but they do not speak that language. At all. Longer-term, I only see that as a more and more significant crutch. A long enough timeframe here, we're talking about them becoming the Centurylinks of the world, the tier one backbone provider that everyone uses, but no one really thinks about because they're not a household name.Sam: That is a really thoughtful perspective. I think the diseconomies of scale that you're pointing to start to creep in, right? Because when you have to sell compute units by the gallon, right, you can't care if it's a gallon of milk, [laugh] or a gallon of oil, or you know, a gallon of poison. You just have to keep moving it through. So, the shift that I think they're going to end up having to make pragmatically, and you start to see some signs of it, like, you know, they hired but could not retain Matt [Acey 00:23:48]. He did an amazing job of bringing them to some pragmatic realization that they need to partner with open-source, but more broadly, when I think about Microsoft in the 2000s as they were starting to learn their open-source lessons, we were also being able to pull on Microsoft's deep competency and partners. So, most people didn't do the math on this. I was part of the field governance council so I understood exactly how the Microsoft business worked to the level that I was capable. When they had $65 billion in revenue, they produced $24 billion in profit through an ecosystem that generated $450 billion in revenue. So, for every dollar Microsoft made, it was $8 to partners. It was a fundamentally platform-shaped business, and that was how they're able to get into doctors offices in the Midwest, and kind of fit the curve that you're describing of all of those longtail opportunities that require so much care and that are complex to prosecute. These solved for their diseconomies of scale by having 1.2 million partner companies. 
So, will Amazon figure that out and will they hire, right, enough people who've done this before from Microsoft to become world-class in partnering, that's kind of an exercise left to the [laugh] reader, right? Where will that go over time? But I don't see another better mathematical model for dealing with the diseconomies of scale you have when you're one of the very largest providers on the planet.Corey: The hardest problem as I look at this is, at some point, you hit a point of scale where smaller things look a lot less interesting. I get that all the time when people say, “Oh, you fix AWS bills, aren't you missing out by not targeting Google bills and Azure bills as well?” And it's, yeah. I'm not VC-backed. It turns out that if I limit the customer base that I can effectively service to only AWS customers, yeah turns out, I'm not going to starve anytime soon. Who knew? I don't need to conquer the world and that feels increasingly antiquated, at least going by the stories everyone loves to tell.Sam: Yeah, it's interesting to see how cloud makes strange bedfellows, right? We started seeing this in, like, 2014, 2015, weird partnerships that you're like, “There's no way this would happen.” But the cloud economics which go back to utilization, rather than what it used to be, which was software lock-in, just changed who people were willing to hang out with. And now you see companies like Databricks going, you know, we do an amazing amount of business, effectively competing with Amazon, selling Spark services on top of predominantly Amazon infrastructure, and everybody seems happy with it. So, there's some hint of a new sensibility of what the future of partnering will be. We used to call it coopetition a long time ago, which is kind of a terrible word, but at least it shows that there's some nuance in you can't compete with everybody because it's just too hard.Corey: I wish there were better ways of articulating these things because it seems from the all the outside world, you have companies like Amazon and Microsoft and Google who go and build out partner networks because they need that external accessibility into various customer profiles that they can't speak to super well themselves, but they're also coming out with things that wind up competing directly or indirectly, with all of those partners at the same time. And I don't get it. I wish that there were smarter ways to do it.Sam: It is hard to even talk about it, right? One of the things that I think we've learned from philosophy is if we don't have a word for it, we can't be intelligent about it. So, there's a missing semantics here for being able to describe the complexity of where are you partnering? Where are you competing? Where are you differentiating? In an ecosystem, which is moving and changing.I tend to look at the tools of game theory for this, which is to look at things as either, you know, nonzero-sum games or zero-sum games. And if it's a nonzero-sum game, which I think are the most interesting ones, can you make it a positive sum game? And who can you play positive-sum games with? An organization as big as Amazon, or as big as Microsoft, or even as big as Google isn't ever completely coherent with itself. 
So, thinking about this as an independent software company, it doesn't matter if part of one of these hyperscalers has a part of their business that competes with your entire business because your business probably drives utilization of a completely different resource in their company that you can partner within them against them, effectively. Right?For example, Cassandra is an amazingly powerful but demanding workload on Kubernetes. So, there's a lot of Cassandra on EKS. You grow a lot of workload, and EKS business does super well. Does that prevent us from working with Amazon because they have Dynamo or because they have Keyspaces? Absolutely not, right?So, this is when those companies get so big that they are almost their own forest, right, of complexity, you can kind of get in, hang out, do well, and pretty much never see the competitive product, unless you're explicitly looking for it, which I think is a huge danger for us as independent software companies. And I would say this to anybody doing strategy for an organization like this, which is, don't obsess over the tiny part of their business that competes with yours, and do not pay attention to any of the marketing that they put out that looks competitive with what you have. Because if you can't figure out how to make a better product and sell it better to your customers as a single purpose corporation, you have bigger problems.Corey: I want to change gears slightly to something that's probably a fair bit more insulting, but that's okay. We're going to roll with it. That seems to be the theme of this episode. You have been, in effect, a CIO a number of times at different companies. And if we take a look at the typical CIO tenure, industry-wide, it's not long; it approaches the territory from an executive perspective of, “Be sure not to buy green bananas. You might not be here by the time they ripen.” And I'm wondering what it is that drives that and how you make a mark in a relatively short time frame when you're providing inputs and deciding on strategy, and those decisions may not bear fruit for years.Sam: CIO used to—we used say it stood for ‘Career Is Over' because the tenure is so short. I think there's a couple of reasons why it's so short. And I think there's a way I believe you can have impact in a short amount of time. I think the reason that it's been short is because people aren't sure what they want the CIO role to be.Do they want it to be a glorified finance person who's got a lot of data processing experience, but now really has got, you know, maybe even an MBA in finance, but is not focusing on value creation? Do they want it to be somebody who's all-singing, all-dancing Chief Data Officer with a CTO background who did something amazing and solved a really hard problem? The definition of success is difficult. Often CIOs now also have security under them, which is literally a job I would never ever want to have. Do security for a public corporation? Good Lord, that's a way to lose most of your life. You're the only executive other than the CEO that the board wants to hear from. Every sing—Corey: You don't sleep; you wait, in those scenarios. And oh, yeah, people joke about ablative CSOs in those scenarios. Yeah, after SolarWinds, you try and get an ablative intern instead, but those don't work as well. It's a matter of waiting for an inevitability. One of the things I think is misunderstood about management broadly, is that you are delegating work, but not the responsibility. 
The responsibility rests with you.So, when companies have these statements blaming some third-party contractor, it's no, no, no. I'm dealing with you. You were the one that gave my data to some sketchy randos. It is your responsibility that data has now been compromised. And people don't want to hear that, but it's true.Sam: I think that's absolutely right. So, you have this high risk, medium reward, very fungible job definition, right? If you ask all of the CIO's peers what their job is, they'll probably all tell you something different that represents their wish list. The thing that I learned at Autodesk, I was only there for 15 months, but we established a fundamental transformation of the work of how cloud platform is done at the company that's still in place a couple years later.You have to realize that you're a change agent, right? You're actually being hired to bring in the bulk of all the different biases and experiences you have to solve a problem that is not working, right? So, when I got to Autodesk, they didn't even know what their uptime was. It took three months to teach the team how to measure the uptime. Turned out the uptime was 97.7% for the cloud, for the world's largest engineering software company.That is 200 hours a year of unplanned downtime, right? That is not good. So, a complete overhaul [laugh] was needed. Understanding that as a change agent, your half-life is 12 to 18 months, you have to measure success not on tenure, but on your ability to take good care of the patient, right? It's going to be a lot of pain, you're going to work super hard, you're going to have to build trust with everyone, and then people are still going to hate you at the end. That is something you just have to kind of take on.As a friend of mine, Jason Warner joined Redpoint Ventures recently, he said this when he was the CTO of GitHub: “No one is a villain in their own story.” So, you realize, going into a big organization, people are going to make you a villain, but you still have to do incredibly thoughtful, careful work, that's going to take care of them for a long time to come. And those are the kinds of CIOs that I can relate to very well.Corey: Jason is great. You're name-dropping all the guests we've had. My God, keep going. It's a hard thing to rationalize and wrap heads around. It's one of those areas where you will not be measured during your tenure in the role, in some respects. And, of course, that leads to the cynical perspective as well, where well, someone's not going to be here long and if they say, “Yeah, we're just going to keep being stewards of the change that's already underway,” well, that doesn't look great, so quick, time to do a cloud migration, or a cloud repatriation, or time to roll something else out. A bit of a different story.Sam: One of the biggest challenges is how do you get the hearts and the minds of the people who are in the organization when they are no fools, and their expectation is like, “Hey, this company's been around for decades, and we go through cloud leaders or CIOs, like Wendy's goes through hamburgers.” They could just cloud-wash, right, or change-wash all their language. They could use the new language to describe the old thing because all they have to do is get through the performance review and outwait you. 
So, there's always going to be a level of defection because it's hard to change; it's hard to think about new things.So, the most important thing is how do you get into people's hearts and minds and enable them to believe that the best thing they could do for their career is to come along with the change? And I think that was what we ended up getting right in the Autodesk cloud transformation. And that requires endless optimism, and there's no room for cynicism because the cynicism is going to creep in around the edges. So, what I found on the job is, you just have to get up every morning and believe everything is possible and transmit that belief to everybody.So, if it seems naive or ingenuous, I think that doesn't matter as long as you can move people's hearts in each conversation towards, like, “Oh, this person cares about me. They care about a good outcome from me. I should listen a little bit more and maybe make a 1% change in what I'm doing.” Because 1% compounded daily for a year, you can actually get something done in the lifetime of a CIO.Corey: And I think that's probably a great place to leave it. If people want to learn more about what you're up to, how you think about these things, how you view the world, where can they find you?Sam: You can find me on Twitter, I'm @sramji, S-R-A-M-J-I, and I have a podcast that I host called Open||Source||Datawhere I invite innovators, data nerds, computational networking nerds to hang out and explain to me, a software programmer, what is the big world of open-source data all about, what's happening with machine learning, and what would it be like if you could put data in a container, just like you could put code in a container, and how might the world change? So, that's Open||Source||Data podcast.Corey: And we'll of course include links to that in the [show notes 00:35:58]. Thanks so much for your time. I appreciate it.Sam: Corey, it's been a privilege. Thank you so much for having me.Corey: Likewise. Sam Ramji, Chief Strategy Officer at DataStax. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice, along with a comment telling me exactly which item in Sam's background that I made fun of is the place that you work at.Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.Announcer: This has been a HumblePod production. Stay humble.

Continuous Delivery
Come organizzare un evento tech - con Chiara Muzzolon

Continuous Delivery

Play Episode Listen Later Nov 24, 2021 60:56


Since last time didn't go so well, here's a nice clickbait title. All that's missing is adding WE REALLY DID IT and Joe Beda's alarmed face to climb the YouTube trending charts and the Spotify recommendations. Anyway, today we really are talking about how the three guests on the show organized a successful online event, with the support of the CNCF: Kubernetes Community Days Italy 2021!
With Edoardo Dusi, Claudio Serena, Annalisa Gennaro, Paolo Mainardi, Chiara Muzzolon
/* News */
The PHP Foundation is born: https://blog.jetbrains.com/phpstorm/2021/11/the-php-foundation/
The Robinhood attack and resulting data breach is more serious than disclosed: https://www.vice.com/en/article/y3vddm/robinhood-hack-included-thousands-of-phone-numbers
"I'll pay you to delete your npm module": https://drewdevault.com/2021/11/16/Cash-for-leftpad.html
Winamp is back, sign up for the beta: https://www.ghacks.net/2021/11/22/how-to-sign-up-for-the-winamp-beta-version
NFT Bay, for downloading all the NFTs: https://www.theverge.com/2021/11/18/22790131/nft-bay-pirating-digital-ownership-piracy-crypto-art-right-click
A whole new world: Disney is latest firm to announce metaverse plans: https://www.theguardian.com/film/2021/nov/11/disney-is-latest-firm-to-announce-metaverse-plans
Meta's prototype devices give us a glimpse of the metaverse life: https://thenextweb.com/news/meta-devices-metaverse-prototype-analysis
Why Cloud Native Is About Community: https://thenewstack.io/why-cloud-native-is-about-community/
KCD Italy: https://community.cncf.io/events/details/cncf-kcd-italy-presents-kubernetes-community-days-italy-virtual-event/ and https://www.cncf.io/blog/2021/10/04/how-was-a-pizza-chosen-as-the-kcd-italy-2021-logo/
/* Newsletter & Telegram */
https://landing.sparkfabrik.com/continuous-delivery-newsletter
https://t.me/continuous_delivery
/* Links & Social */
https://www.sparkfabrik.com/ - @sparkfabrik

The Art of Modern Ops
Kubernetes has won the enterprise

The Art of Modern Ops

Play Episode Listen Later Nov 15, 2021 78:16


In the latest episode of our podcast, The Art of Modern Ops, we welcomed two renowned technologists, Michael Cote and Joe Beda, to discuss the state of Kubernetes in the enterprise right now. They compare enterprise adoption of Kubernetes with the growth it saw among start-ups and small-to-medium-sized businesses.

DevOps and Docker Talk
Docker's New Licensing Changes

DevOps and Docker Talk

Play Episode Listen Later Sep 9, 2021 60:29


Full, unedited YouTube DevOps and Docker Live Show.
Docker Desktop changes licensing to require a paid plan in medium to large commercial organizations:
- Docker blog article: Docker is Updating and Extending Our Product Subscriptions
- Docker pricing FAQ
- Who's gonna build "OpenMoby"? - Twitter thread from Joe Beda, Principal Engineer at VMware
- Bret's Docker Desktop feature list
- WSL2 Docker without Desktop - dev.to blog by Jonathan Bowman
- macOS Docker-like setup without Desktop - blog article "containerd & Lima: Open Source Alternative to Docker for Mac" by Akihiro Suda
★ Support this podcast on Patreon ★
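As a rough sketch of the Lima route from that last link (commands are recalled from the Lima and nerdctl docs and may differ by version; the nginx image is just an example):

# Run containers on macOS without Docker Desktop, using Lima + containerd + nerdctl.
brew install lima
limactl start                                    # boots the default Linux VM, which ships containerd and nerdctl
lima nerdctl run --rm -p 8080:80 nginx:alpine    # roughly the nerdctl equivalent of `docker run`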

Screaming in the Cloud
Helping Avoid the Kubernetes Hiccups with Rich Burroughs

Screaming in the Cloud

Play Episode Listen Later Aug 24, 2021 37:05


About RichRich Burroughs is a Senior Developer Advocate at Loft Labs where he's focused on improving workflows for developers and platform engineers using Kubernetes. He's the creator and host of the Kube Cuddle podcast where he interviews members of the Kubernetes community. He is one of the founding organizers of DevOpsDays Portland, and he's helped organize other community events. Rich has a strong interest in how working in tech impacts mental health. He has ADHD and has documented his journey on Twitter since being diagnosed.Links: Loft Labs: https://loft.sh Kube Cuddle Podcast: https://kubecuddle.transistor.fm LinkedIn: https://www.linkedin.com/in/richburroughs/ Twitter: https://twitter.com/richburroughs Polywork: https://www.polywork.com/richburroughs TranscriptAnnouncer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.Corey: This episode is sponsored in part my Cribl Logstream. Cirbl Logstream is an observability pipeline that lets you collect, reduce, transform, and route machine data from anywhere, to anywhere. Simple right? As a nice bonus it not only helps you improve visibility into what the hell is going on, but also helps you save money almost by accident. Kind of like not putting a whole bunch of vowels and other letters that would be easier to spell in a company name. To learn more visit: cribl.ioCorey: This episode is sponsored in part by Thinkst. This is going to take a minute to explain, so bear with me. I linked against an early version of their tool, canarytokens.org in the very early days of my newsletter, and what it does is relatively simple and straightforward. It winds up embedding credentials, files, that sort of thing in various parts of your environment, wherever you want to; it gives you fake AWS API credentials, for example. And the only thing that these things do is alert you whenever someone attempts to use those things. It's an awesome approach. I've used something similar for years. Check them out. But wait, there's more. They also have an enterprise option that you should be very much aware of canary.tools. You can take a look at this, but what it does is it provides an enterprise approach to drive these things throughout your entire environment. You can get a physical device that hangs out on your network and impersonates whatever you want to. When it gets Nmap scanned, or someone attempts to log into it, or access files on it, you get instant alerts. It's awesome. If you don't do something like this, you're likely to find out that you've gotten breached, the hard way. Take a look at this. It's one of those few things that I look at and say, “Wow, that is an amazing idea. I love it.” That's canarytokens.org and canary.tools. The first one is free. The second one is enterprise-y. Take a look. I'm a big fan of this. More from them in the coming weeks.scaCorey: Welcome to Screaming in the Cloud. I'm Corey Quinn. Periodically, I like to have, well, let's call it fun, at the expense of developer advocates; the developer relations folks; DevRelopers as I insist on pronouncing it. But it's been a while since I've had one of those come on the show and talk about things that are happening in that universe. 
So, today we're going back to change that a bit. My guest today is Rich Burroughs, who's a Senior Developer Advocate—read as Senior DevReloper—at Loft Labs. Rich, thanks for joining me.Rich: Hey, Corey. Thanks for having me on.Corey: So, you've done a lot of interesting things in the space. I think we first met back when you were at Sensu, you did a stint over at Gremlin, and now you're over at Loft. Sensu was monitoring things, Gremlin was about chaos engineering and breaking things on purpose, and when you're monitoring things that are breaking that, of course, leads us to Kubernetes, which is what Loft does. I'm assuming. That's probably not your marketing copy, though, so what is it you folks do?Rich: I was waiting for your Kubernetes trash talk. I knew that was coming.Corey: Yeah. Oh, good. I was hoping I could sort of sneak it around in there.Rich: [laugh].Corey: But yeah, you know me too well.Rich: By the way, I'm not dogmatic about tools, right? I think Kubernetes is great for some things and for some use cases, but it's not the best tool for everything. But what we do is we really focus a lot on the experience of developers who are writing applications that run in Kubernetes cluster, and also on the platform teams that are having to maintain the clusters. So, we really are trying to address the speed bumps, the things that people bang their shins on when they're trying to get their app running in Kubernetes.Corey: Part of the problem I've always found is that the thing that people bang their shins on is Kubernetes. And it's one of those, “Well, it's sort of in the title, so you can't really avoid it. The only way out is through.” You could also say, “It's better never begin; once begun, better finish.” The same thing seems to apply to technology in a whole bunch of different ways.And that's always been a strange thing for me where I would have bet against Kubernetes. In fact, I did, and—because it was incredibly complicated, and it came out of Google, not that someone needed to tell me. It was very clearly a Google-esque product. And we saw it sort of take the world by storm, and we are all senior YAML engineers now. And here we are.And now you're doing developer advocacy, which means you're at least avoiding the problem of actually working with Kubernetes day-in-day out yourself, presumably. But instead, you're storytelling about it.Rich: You know, I spent a good part of my day a couple days ago fighting with my Kubernetes cluster at Docker Desktop. So, I still feel the pain some, but it's a different kind of pain. I've not maintaining it in production. I actually had the total opposite experience to you. So, my introduction to Kubernetes was seeing Kelsey Hightower talk about it in, like, 2015.And I was just hooked. And the reason that I was hooked is because of what Kubernetes did, and I think especially the service primitive, is that it encoded a lot of these operational patterns that had developed into the actual platform. So, things like how you check if an app is healthy, if it's ready to start accepting requests. These are things that I was doing in the shops that I was working at already, but we had to roll it ourselves; we had to invent a way to do that. 
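To make the probes Rich is describing concrete, here is a minimal sketch (not taken from the episode; the image name and endpoint paths are placeholders) of how a pod spec declares its own health and readiness checks so the platform, rather than hand-rolled scripts, acts on them:

# Minimal sketch: a readiness check ("can it accept requests?") and a liveness check
# ("is the app still healthy?") declared directly in the pod spec.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: example-app
spec:
  containers:
  - name: app
    image: registry.example.com/app:1.0   # placeholder image
    ports:
    - containerPort: 8080
    readinessProbe:                       # traffic is only routed once this passes
      httpGet:
        path: /ready
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10
    livenessProbe:                        # the kubelet restarts the container if this keeps failing
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 15
      periodSeconds: 20
EOF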
But when Kelsey started talking about Kubernetes, it became apparent to me that the people who designed this thing had a lot of experience running applications in distributed systems, and they understood what you needed to be able to do that competently.Corey: There's something to be said for packaging and shipping expertise, and it does feel like we're on a bit of a cusp, where the complexity has risen and risen and risen, and it's always a sawtooth graph where things get so complicated that you then are paying people a quarter-million dollars a year to run the thing. And then it collapses in on itself. And the complexity is still there, but it's submerged to a point where you don't need to worry about it anymore. And it feels like we're a couple years away from Kubernetes hitting that, but I do view that as inevitable. Is that, basically, completely out to sea? Is that something that you think is directionally correct, or something else?Rich: I mean, I think that the thing that's been there for a long time is, how do we take this platform and make it actually usable for people? And that's a lot more about the whole CNCF ecosystem than Kubernetes itself. How do we make it so that we can easily monitor this thing, that we can have observability, that we can deploy applications to it? And I think what we've seen over the last few years is that, even more than Kubernetes itself, the tools that allow you to do those other things that you need to do to be able to run applications have exploded and gotten a lot better, I think.Corey: The problem, of course, is the explosion part of it because we look at the other side, at the CNCF landscape diagram, and it is a hilariously overwrought picture of all of the different offerings and products and tools in the space. There are something like 400 blocks on it, the last time I checked. It looks like someone's idea of a joke. I mean, I come up with various shitposts that I'm sort of embarrassed I didn't come up with one anywhere near that funny.Rich: I left SRE a few years ago, and this actually is one of the reasons. So, the explosion in tools gave me a huge amount of imposter syndrome. And I imagine I'm not the only one because you're on Twitter, you're hanging around, you're seeing people talk about all these cool tools that are out there, and you don't necessarily have a chance to play with them, let alone use them in production. And so what I would find myself doing is I would compare myself to these people who were experts on these tools. Somebody who actually invented the thing, like Joe Beda or something like that, and it's obviously unfair to do because I'm not that person. But my brain just wants to do that. You see people out there that know more than you and a lot of times I would feel bad about it. And it's an issue, I really think it is.Corey: So, one of the problems that I ran into when I left SRE was that I was solving the same problem again and again, in rapid succession. I was generally one of the first early SRE-type hires, and, “Oh, everything's on fire, and I know how to fix those things. We're going to migrate out of EC2 Classic into VPCs; we're going to set up infrastructure as code so we're not hand-building these things from scratch every time.” And in time, we wind up getting to a point where it's, okay, there are backups, and it's easy to provision stuff, and things mostly work. 
And then it becomes tedium, where the day starts to look too much alike.And I start looking for other problems elsewhere in the organization, and it turns out that when you don't have strategic visibility into what other orgs are doing but tell them what they're doing wrong, you're not a popular person; and you're often wrong. And that was a source of some angst in my case. The reason I started what I do now is because I was looking to do something different where no two days look alike, and I sort of found that. Do you find that with respect to developer advocacy, or does it fall into some repetitive pattern? Not there's anything wrong with that; I wish I had the capability to do that, personally.Rich: So, it's interesting that you mentioned this because I've talked pretty publicly about the fact that I've been diagnosed with ADHD a few months ago. You talked about the fact that you have it as well. I loved your Twitter thread about it, by the way; I still recommend it to people. But I think the real issue for me was that as I got more advanced in my career, people assumed that because you have ‘senior' in your title, that you're a good project manager. It's just assumed that as you grow technically and move into more senior roles, that you're going to own projects. And I was just never good at that. I was always very good at reactive things, I think I was good at being on call, I think I was good at responding to incidents.Corey: Firefighting is great for someone with our particular predilections. It's, “Oh, great. There's a puzzle to solve. It's extremely critical that we solve it.” And it gets the adrenaline moving. It's great, “Cool, now fill out a bunch of Jira tickets.” And those things will sit there unfulfilled until the day I die.Rich: Absolutely. And it's still not a problem that I've solved. I'll preface this with the kids don't try this at home advice because everybody's situation is different. I'm a white guy in the industry with a lot of privilege; I've developed a really good network over the years; I don't spend a lot of time worried about what happens if I lose my job, right, or how am I going to get another one. But when I got this latest job that I'm at now, I was pretty open with the CEO who interviewed me—it's a very small company, I'm like employee number four.And so when we talked to him ahead of time, I was very clear with him about the fact that bored Rich is bad. If Rich gets bored with what he's doing, if he's not engaged, it's not going to be good for anyone involved. And so—Corey: He's going to go find problems to solve, and they very well may not align with the problems that you need solved.Rich: Yeah, I think my problem is more that I disengage. Like, I lose my passion for what it is that I'm doing. And so I've been pretty intentional about trying to kind of change it up, make different kinds of content. I happen to be at this place that has four open-source projects, right, along with our commercial project. And so, so far at least, there's been plenty for me to talk about. I haven't had to worry about being bored so far.Corey: Small companies are great for that because you're everyone does everything to some extent; you start spreading out. And the larger a company gets, the smaller your remit is. The argument I always made against working at Google, for example was, let's say that I went in with evil in mind on day one. 
I would not be able—regardless of how long I was there, how high in the hierarchy I climbed—to take down google.com for one hour—the search engine piece.If I can't have that much impact intentionally, then the question really becomes how much impact can I have in a positive direction with everyone supposedly working in concert with me? And the answer I always came up with was not that much, not in the context of a company like that. It's hard for me to feel relevant to a large company. For better or worse, that is the thing that keeps me engaged is, “You know, if I get this wrong enough, we don't have a company anymore,” is sort of the right spot for me.Rich: [laugh]. Yeah, I mean, it's interesting because I had been at a number of startups last few years that were fairly early stage, and when I was looking for work this last time, my impulse was to go the opposite direction, was to go to a big company, you know, something that was going to be a little more stable, maybe. But I just was so interested in what these folks were building. And I really clicked with Lukas, the CEO, when we talked, and I ended up deciding to go this route. But there's a flip side to that.There's a lot of responsibility that comes with that, too. Part of me wanting to avoid being in that spotlight, in a way; part of me wanted to back off and be one of the million people building things. But I'm happy that I made this choice, and like I said, it's been working out really well, so far.Corey: It seems to be. You seem happy, which is always a nice thing to be able to pick up from someone in how they go about these things. Talk to me a little bit about what Loft does. You're working on some virtual cluster nonsense that mostly sails past me. Can you explain it using small words?Rich: [laugh]. Yeah, sure. So, if you talk to people who use Kubernetes, a lot, you are—Corey: They seem sad all the time. But please continue.Rich: One of the reasons that they're sad is because of multi-tenancy in Kubernetes; it just wasn't designed with that sort of model in mind. And so what you end up with is a couple of different things that happen. Either people build these shared clusters and feel a whole lot of pain trying to share them because people commonly use namespaces to isolate things, and that model doesn't completely work. Because there are objects like CRDs and things that are global, that don't live in the namespace, and so that can cause pain. Or the other option that people go with is that they just spin up a whole bunch of clusters.So, every team or every developer gets their own cluster, and then you've got all this cluster sprawl, and you've got costs, and it's not great for the environment. And so what we are really focused a lot on with the virtual cluster stuff is it provides people what looks like a full-blown Kubernetes cluster, but it just lives inside the namespace on your host cluster. So, it actually uses K3s, from the Rancher folks, the SUSE folks. And literally, this K3s API server sits in the namespace. And as a user, it looks to you like a full-blown Kubernetes cluster.Corey: Got it. So, basically a lightweight [unintelligible 00:13:31] that winds up stripping out some of the overwrought complexity. Do you find that it winds up then becoming a less high-fidelity copy of production?Rich: Sure. It's not one-to-one, but nothing ever is, right?Corey: Right. It's a question of whether people admit it or not, and where they're willing to make those trade-offs.Rich: Right. 
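A quick sketch of what that looks like in practice (not from the episode; the team names are made up, and the vcluster commands follow the Loft CLI as commonly documented, so exact flags may vary by version):

# Namespaces don't isolate everything: some objects, such as CRDs, are cluster-scoped.
kubectl api-resources --namespaced=false | head

# A virtual cluster runs its own lightweight K3s API server inside a single namespace
# of the host cluster, so each team sees what looks like a full cluster of its own.
vcluster create team-a --namespace team-a
vcluster connect team-a -- kubectl get namespaces   # answered by the virtual API server, not the host's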
And it's a lot closer to production than using Docker Compose or something like that. So yeah, like you said, it's all about trade-offs, and I think that everything that we do as technical people is about trade-offs. You can give everybody their own Kubernetes cluster, you know, would run it in GK or AWS, and there's going to be a cost associated with that, not just financially, but in terms of the headaches for the people administering things.Corey: The hard part from where I've always been sitting has just been—because again, I deal with large-scale build-outs; I come in in the aftermath of these things—and people look at the large Kubernetes environments that they've built and it's expensive, and you look at it from the cloud provider perspective, and it's just a bunch of one big noisy application that doesn't make much sense from the outside because it's obviously not a single application. And it's chatty across availability zone boundaries, so it costs two cents per gigabyte. It has no [affinity 00:14:42] for what's nearby, so instead of talking to the service that is three racks away, it talks the thing over an expensive link. And that has historically been a problem. And there are some projects being made in that direction, but it's mostly been a collective hand-waving around it.And then you start digging into it in other directions from an economics perspective, and they're at large scale in the extreme corner cases, it always becomes this, “Oh, it's more trouble than it's worth.” But that is probably unfair for an awful lot of the real-world use cases that don't rise to my level of attention.Rich: Yeah. And I mean, like I said earlier, I think that it's not the best use case for everything. I'm a big fan of the HashiCorp tools. I think Nomad is awesome. A lot of people use it, they use it for other things.I think that one of the big use cases for Nomad is, like, running batch jobs that need to be scheduled. And there are people who use Nomad and Kubernetes both. Or you might use something like Cloud Run or AppRun, whatever works for you. But like I said, from someone who spent literally decades figuring out how to operate software and operating it, I feel like the great thing about this platform is the fact that it does sort of encode those practices.I actually have a podcast of my own. It's called Kube Cuddle. I talk to people in the Kubernetes community. I had Kelsey Hightower on recently, and the thing that Kelsey will tell you, and I agree with him completely, is that, you know, we talk about the complexity in Kubernetes, but all of that complexity, or a lot of it, was there already.We just dealt with it in other ways. So, in the old days, I was the Kubernetes scheduler. I was the guy who knew which app ran on which host, and deployed them and did all that stuff. And that's just not scalable. It just doesn't work.Corey: This episode is sponsored by our friends at Oracle Cloud. Counting the pennies, but still dreaming of deploying apps instead of "Hello, World" demos? Allow me to introduce you to Oracle's Always Free tier. It provides over 20 free services and infrastructure, networking databases, observability, management, and security.And - let me be clear here - it's actually free. There's no surprise billing until you intentionally and proactively upgrade your account. 
This means you can provision a virtual machine instance or spin up an autonomous database that manages itself all while gaining the networking load, balancing and storage resources that somehow never quite make it into most free tiers needed to support the application that you want to build. With Always Free you can do things like run small scale applications, or do proof of concept testing without spending a dime. You know that I always like to put asterisks next to the word free. This is actually free. No asterisk. Start now. Visit https://snark.cloud/oci-free that's https://snark.cloud/oci-free.Corey: The hardest part has always been the people aspect of things, and I think folks have tried to fix this through a lens of, “The technology will solve the problem, and that's what we're going to throw at it, and see what happens by just adding a little bit more code.” But increasingly, it doesn't work. It works for certain problems, but not for others. I mean, take a look at the Amazon approach, where every team communicates via APIs—there's no shared data stores or anything like that—and their entire problem is a lack of internal communication. That's why the launch services that do basically the same thing as each other because no one bothers to communicate with one another. And half my job now is introducing Amazonians to one another. It empowers some amazing things, but it has some serious trade-offs. And this goes back to our ADHD aspect of the conversation.Rich: Yeah.Corey: The thing that makes you amazing is also the thing that makes you suck. And I think that manifests in a bunch of different ways. I mean, the fact that I can switch between a whole bunch of different topics and keep them all in state in my head is helpful, but it also makes me terrible, as far as an awful lot of different jobs, where don't come back to finish things like completing the Jira ticket to hit on Jira a second time in the same recording.Rich: Yeah, I'm the same way, and I think that you're spot on. I think that we always have to keep the people in mind. You know, when I made this decision to come to Loft Labs, I was looking at the tools and the tools were cool, but it wasn't just that. It's that they were addressing problems that people I know have. You hear these stories all the time about people struggling with the multi-tenancy stuff and I could see very quickly that the people building the tools were thinking about the people using them, and I think that's super important.Corey: As I check your LinkedIn profile, turns out, no, we met back in your Puppet days, the same era that I was a traveling trainer, teaching people how to Puppet and hoping not to get myself ejected from the premises for using sarcastic jokes about the company that I was conducting the training for. And that was fun. And then I worked at a bunch of places, you worked in a bunch of places, and you mentioned a few minutes ago that we share this privilege where if one of us loses our job, the next one is going to be a difficult thing for us to find, given the skill set that we have, the immense privilege that we enjoy, and the way that this entire industry works. Now, I will say that has changed somewhat since starting my own company. It's no longer the fear of, “Well, I'm going to land on my feet.” Rich: Right.Corey: Yeah, but I've got a bunch of people who are counting on me not to completely pooch this up. So, that's the thing that keeps me awake at night, now. 
But I'm curious, do you feel like that's given you the flexibility to explore a bunch of different company types and a bunch of different roles and stretch yourself a little with the understanding that, yeah, okay. If you've never last five years at the same company, that's not an inherent problem.Rich: Yeah, it's interesting. I've had conversations with people about this. If you do look up my LinkedIn, you're going to see that a lot of the recent jobs have been less than two years: year, year and a half, things like that. And I think that I do have some of that freedom, now. Those exits haven't always been by choice, right?And that's part of what happens in the industry, too. I think I've been laid off, like, four or five times now in my career. The worst one by far was when the bubble burst back in 2000. I was working at WebMD, and they ended up closing our office here.Corey: You were Doctor Google.Rich: I kind of was. So, I was actually the guy who would deploy the webmd.com site back then. And it was three big Sun servers. And I would manually go in and run shell scripts and take one out of the load balancer and roll the new code on it, and then move on to the next one. And those are early days; I started in the industry in about '95. Those early days, I just felt bulletproof because everybody needed somebody with my skills. And after that layoff in 2000, it was a lot different. The market just dried up, I went 10 months unemployed. I ended up taking a job where I took a really big pay cut in a role that wasn't really good for me, career-wise. And I guess it's been a little bit of a comfort to me, looking back. If I get laid off now, I know it's not going to be as bad as that was. But I think that's important, and one of the things that's helped me a lot and I'm assuming it's helped you, too, is building up a network, meeting people, making friends. I sort of hate the word networking because it has really negative connotations to it to me. The salespeople slapping each other on the back at the bar and exchanging business cards is the image that comes to my mind when I think of networking. But I don't think it has to be like that. I think that you can make genuine friendships with people in the industry that share the interests and passions that you have.Corey: That's part of it. People, I think, also have the wrong idea about passion and how that interplays with career. “Do a thing that you love, and the money will follow,” is terrific advice in the United States to make about $30,000 a year. I assure you, when I started this place, I was not deeply passionate about AWS billing. I developed a passion for it as I rapidly had to become an expert in this thing.I knew there was an expensive business problem there that leveraged the skill set that I already had and I could apply it to something that was valuable to more than just engineers because let's face it, engineers are generally terrible customers for a variety of reasons. And by doing that and becoming the expert in that space, I developed a passion for it. I think it was Scott Galloway who in one of his talks said he had a friend who was a tax attorney. And do you think that he started off passionate about tax law? Of course not.He was interested in making a giant pile of money. Like, his preferred seat when he flies is ‘private.' So, he's obviously very passionate about it now, but he found something that he could enjoy that would pay a bunch of money because it was an in-demand, expensive skill. 
I often wonder if instead of messing around and computers, my passion had been oil painting, for example. Would I have had anything approaching to the standard of living I have now?The answer is, “Of course not.” It would have been a very different story. And that says many deeply troubling things about our society across the board. I don't know how to fix any of them. I'm one of those people that rather than sitting here talking how the world should be; I deal with the world as I encounter it.And at times, that doesn't feel great, but it is the way that I've learned to cope, I guess, with the existential angst. I'm envious in some ways of the folks who sit here saying, “No, we demand a better world.” I wish I shared their optimism or ability to envision it being different than it is, but I just don't have it.Rich: Yeah, I mean, there are oil painters who make a whole lot of money, but it's not many of them, right?Corey: Yeah, but you shouldn't have to be dead first.Rich: [laugh]. I used to… know a painter who Jim Carrey bought one of his big canvases for quite a lot of money. So, they're not all dead. But again, your point is very valid. We are in this bubble in the tech industry where people do make on average, I think, a lot more money than people do in many other kinds of jobs.And I recently started thinking about possibly going into ADHD coaching. So, I have an ADHD coach myself; she has made a very big difference in my life so far. And I actually have started taking classes to prepare for possibly getting certified in that. And I'm not sure that I'm going to do it. I may stay in tech.I may do some of both. It doesn't have to be either-or. But it's been really liberating to just have this vision of myself working outside of tech. That's something that I didn't consider was even possible for quite a long time.Corey: I have to confess I've never had an ADHD coach. I was diagnosed when I was five years old and back then—my mother had it as well, and the way that it was dealt with in the '50s and '60s when she was growing up was, she had a teacher once physically tie her to a chair. Which—Rich: Oh, my gosh.Corey: —is generally frowned upon these days. And coaching was never a thing. They decided, “Oh, we're going to medicate you to the gills,” in my case. And that was great. I was basically a zombie for a lot of my childhood.When I was 17, I took myself off of it and figured I'd white-knuckle it for the next 10 years or so. Again, everyone's experience is different, but for me, didn't work, and it led to some really interesting tumultuous times in my '20s. I've never explored coaching just because it feels like so much of what I do is the weirdest aspects of different areas of ADHD. I also have constraints upon me that most folks with ADHD wouldn't have. And conversely, I have a tremendous latitude in other areas.For example, I keep dropping things periodically from time to time; I have an assistant. Turns out that most people, they bring in an assistant to help them with stuff will find themselves fired because you're not supposed to share inside company data with someone who is not an employee of that company. But when you own the company, as I do, it's well, okay, I'm not supposed to share confidential client data or give access to it to someone who's not an employee here. “Da da da da da. Welcome aboard. Your first day is Monday.”And now I've solved that problem in a way that is not open to most people. 
That is a tremendous benefit and I'm constantly aware of how much privilege is just baked into that. It's a hard thing for me to reconcile, so I've never explored the coaching angle. I also, on some level—and this is an area that I understand is controversial and I in no way, shape or form, mean any—want anyone to take anything negative away from this. There are a number of people I know where ADHD is a cornerstone of their identity, where that is the thing that they are.That is the adjective that gets hung on them the most—by choice, in many cases—and I'm always leery about going down that path because I'm super strange ever on a whole bunch of different angles, and even, “Oh, well he has ADHD. Does that explain it?” No, not really. I'm still really, really strange. But I never wanted to go down that path of it being, “Oh, Corey. The guy with ADHD.”And again, part of this is growing up with that diagnosis. I was always the odd kid, but I didn't want to be quote-unquote, “The freak” that always had to go to the nurse's office to wind up getting the second pill later in the day. I swear people must have thought I had irritable bowel syndrome or something. It was never, “Time to go to the nurse, Corey.” It was one of those [unintelligible 00:27:12]. “Wow, 11:30. Wow, he is so regular. He must have all the fiber in his diet.” Yeah, pretty much.Rich: I think that from reading that Twitter thread of yours, it sounds like you've done a great job at mitigating some of the downsides of ADHD. And I think it's really important when we talk about this that we acknowledge that everybody's experience is different. So, your experience with ADHD is likely different than mine. But there are some things that a lot of us have in common, and you mentioned some of them, that the idea of creating that Jira ticket and never following through, you put yourself in a situation where you have people around you and structures, external structures, that compensate for the things that you might have trouble with. And that's kind of how I'm looking at it right now.My question is, what can I do to be the most successful Rich Burroughs that I can be? And for me right now, having that coach really helps a lot because being diagnosed as an adult, there's a lot of self-image problems that can come from ADHD. You know that you failed at a lot of things over time; people have often pointed that out to you. I was the kid in high school who the counselors or my teachers were always telling me I needed to apply myself.Corey: “If you just tried harder and suck a little less, then you'll be much better off.” Yeah, “Just to apply yourself. You have so much potential, Rich.” Does any of that ring a bell?Rich: Yeah, for sure. And, you know, something my coach said to me not too long ago, I was talking about something and I said to her, I can't do X. Like, I'm just not—it's not possible. And her response was, “Well, what if you could?” And I think that's been one of the big benefits to me is she helps me think outside of my preconceptions of what I can do.And then the other part of it, that I'm personally finding really valuable, is having the goal setting and some level of accountability. She helps with those things as well. So, I'm finding it really useful. I'm sure it's not for everybody. 
And like we said, everybody's experience with ADHD isn't the same, but one of the things that I've had happened since I started talking about getting diagnosed, and what I've learned since then, is I've had a bunch of people come to me.And it's usually on Twitter; it's usually in DMs; you know, they don't want to talk about it publicly themselves, but they'll tell me that they saw my tweets and they went out and got diagnosed or their kid got diagnosed. And when I think about the difference that could make in someone's life, if you're a kid and you actually get diagnosed and hopefully get better treatment than it sounds like you did, it could make a really big positive impact in someone's life and that's the reason that I'm considering putting doing it myself is because I found that so rewarding. Some of these messages I get I'm almost in tears when I read them.Corey: Yeah. The reason I started talking about it more is that I was hoping that I could serve as something of, if not a beacon of inspiration, at least a cautionary tale of what not to do. But you never know if you ever get there or not. People come up and say that things you've said or posted have changed the trajectory of how they view their careers and you've had a positive impact on their life. And, I mean, you want to talk about weird Gremlins in our own minds?I always view that as just the nice things people say because they feel like they should. And that is ridiculous, but that's the voice in my head that's like, “You aren't shit, Corey, you aren't shit,” that's constantly whispering in my ear. And it's, I don't know if you can ever outrun those demons.Rich: I don't think I can outrun them. I don't think that the self-image issues I have are ever going to just go away. But one thing I would say is that since I've been diagnosed, I feel like I'm able to be at least somewhat kinder to myself than I was before because I understand how my brain works a little bit better. I already knew about the things that I wasn't good at. Like, I knew I wasn't a good project manager; I knew that already.What I didn't understand is some of the reasons why. I'm not saying that it's all because of ADHD, but it's definitely a factor. And just knowing that there's some reason for why I suck, sometimes is helpful. It lets me let myself off the hook, I guess, a little bit.Corey: Yeah, I don't have any answers here. I really don't. I hope that it becomes more clear in the fullness of time. I want to thank you for taking so much time to speak with me about all these things. If people want to learn more, where can they find you?Rich: I'm @richburroughs on Twitter, and also on Polywork, which I've been playing around with and enjoying quite a bit.Corey: I need to look into that more. I have an account but I haven't done anything with it, yet.Rich: It's a lot of fun and I think that, speaking of ADHD, one of the things that occurred to me is that I'm very bad at remembering the things that I accomplish.Corey: Oh, my stars, yes. People ask me what I do for a living and I just stammer like a fool.Rich: Yeah. And it's literally this map of, like, all the content I've been making. And so I'm able to look at that and, I think, appreciate what I've done and maybe pat myself on the back a little bit.Corey: Which is important. Thank you so much again, for your time, Rich. I really appreciate it.Rich: Thanks for having me on, Corey. This was really fun.Corey: Rich Burroughs, Senior Developer Advocate at Loft Labs. 
I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice along with a comment telling me what the demon on your shoulder whispers into your ear and that you can drive them off by using their true name, which is Kubernetes.Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.Announcer: This has been a HumblePod production. Stay humble.

The Swyx Mixtape
The Origin of Kubernetes and Heptio [Joe Beda]

The Swyx Mixtape

Play Episode Listen Later Aug 24, 2021 11:26


Listen to the OSS Startup Podcast for the full episode.
References:
- Heptio's acquisition in 2018: "It's not clear how many customers Heptio worked with but they included large, tech-forward businesses like Yahoo Japan."

linkmeup. Подкаст про IT и про людей
sysadmins №32. Погружение в K8s c VMware. Tanzuют все!

linkmeup. Подкаст про IT и про людей

Play Episode Listen Later Jul 30, 2021


That's one mysterious term fewer for us: we dug into what kind of beast VMware Tanzu is. Most of us know VMware only for its ESXi hypervisor and the vSphere platform. But dig a little deeper and it turns out they rank second by number of commits to Kubernetes, and one of the founders of Kubernetes, Joe Beda, is actually on their staff. A natural question arises: what is the virtualization leader doing in the seemingly competing world of containers? That's what we talked about. More specifically:
- VMware + K8s and VMware's support for open source: what for and why.
- Why have a commercial release at all when open source exists? Pros and cons of both approaches.
- Who will be responsible for the infrastructure of the future, and provide support, security, and the rest.
- How things stand with networking.
- How to get started if a company decides it wants this and has to begin somewhere.
The post sysadmins №32. Погружение в K8s c VMware. Tanzuют все! appeared first on linkmeup.

GOTO - Today, Tomorrow and the Future
Is Cloud Native & Kubernetes the Same Nowadays? • Lars Jensen, Frederik Mogensen, Lasse Højgaard & Kasper Nissen

GOTO - Today, Tomorrow and the Future

Play Episode Listen Later Jul 9, 2021 40:51


This interview was recorded for the GOTO Podcast.
https://gotopia.tech/podcast
Lars Jensen - Lead Developer at GOTO
Frederik Mogensen - Software Pilot at Trifork
Lasse Højgaard - Software Pilot at Trifork
Kasper Nissen - Lead Platform Architect at Lunar
DESCRIPTION
In this GOTO Podcast you'll discover the cloud native and Kubernetes ecosystem.
RECOMMENDED BOOKS
Brendan Burns, Joe Beda & Kelsey Hightower • Kubernetes: Up and Running • https://amzn.to/3wrtwlp
John Arundel & Justin Domingus • Cloud Native DevOps with Kubernetes • https://amzn.to/3hKZvI5
Kasun Indrasiri & Sriskandarajah Suhothayan • Design Patterns for Cloud Native Applications • https://amzn.to/3yCFxWE
Michael Hausenblas & Stefan Schimanski • Programming Kubernetes • https://amzn.to/3qTvKch
Alexander Raul • Cloud Native with Kubernetes • https://amzn.to/3yw9ckc
Nigel Poulton • The Kubernetes Book • https://amzn.to/3dW8ViU
Marko Luksa • Kubernetes in Action • https://amzn.to/3dXk2Im
https://twitter.com/GOTOcon
https://www.linkedin.com/company/goto-
https://www.facebook.com/GOTOConferences
Looking for a unique learning experience?
Attend the next GOTO conference near you! Get your ticket at http://gotopia.tech
SUBSCRIBE TO OUR YOUTUBE CHANNEL - new videos posted almost daily.
https://www.youtube.com/user/GotoConferences/?sub_confirmation=1

Open Source Startup Podcast
E5: Open-Sourcing Kubernetes & Building a Company (Heptio) Around It

Open Source Startup Podcast

Play Episode Listen Later Jun 24, 2021 38:53


Joe Beda, Founder & CTO, Heptio
2:00: Developing the Kubernetes project at Google & deciding to open-source it
4:48: The origins of Heptio & building a company around Kubernetes
13:29: Open-source business models & 'open extensibility' as the new model
24:36: Scaling open-source businesses: metrics to track & interacting with the community
31:55: Good areas for successful open-source products & advice for founders

Future of Tech
The Future of Kubernetes

Future of Tech

Play Episode Listen Later May 24, 2021 44:23


If you want to know about Kubernetes, you should probably talk to the guy who built, pitched, and then implemented K8s at Google. So that's who we called for this episode of Future of Tech. Joe Beda is one of the fathers of Kubernetes, and on this episode he takes us behind the scenes of developing K8s, including why they decided to open-source the technology to level the playing field of app deployment. Today, Joe is a principal engineer at VMware, and he's still making waves in tech, particularly in the world of open source. Joe explains that working on open-source projects fosters a sense of community and leads to more win-win scenarios, including integrated solutions that work for every vendor. Plus, Joe explains the future of edge computing and how service mesh and edge will work together. And he talks about the future of Kubernetes and why the ultimate goal is for Kubernetes to become boring. Enjoy this episode!
Main Takeaways:
Problem Solver: One of the reasons Kubernetes is so popular is that it solves many problems at the same time. Using K8s, you can efficiently use an entire fleet of machines dynamically across the whole company, and you can improve workflows with a set of APIs that app developers can access virtually anywhere.
Getting to Win-Win: The key to a successful open-source project is to find win-win scenarios where multiple developers across vendors can agree on a vision and integrate solutions from every corner of the project to create something that is greater than the sum of its parts and that every vendor can benefit from. Open source fosters a sense of community in this way and allows all kinds of developers to contribute even if they aren't directly involved with the problem at hand.
Getting Edgy: Moving forward, there will be a greater need for more computing power in more locations to keep operations moving fast. Edge is one solution, but managing and scaling edge compute to make it more accessible to enterprises is one of the main challenges for IT leaders today.
---
Future of Tech is brought to you by Amdocs Tech. Amdocs Tech is Amdocs's R&D and technology center, paving the way to a better-connected future by creating open, innovative, best-in-class products and continuously evolving the way we work, learn, and live. To learn more about Amdocs Tech, visit the Amdocs Technology page on LinkedIn.

Screaming in the Cloud
The Switzerland of the Cloud with Sanjay Poonen

Screaming in the Cloud

Play Episode Listen Later May 18, 2021 40:46


About SanjaySanjay Poonen is the former COO of VMware, where he was responsible for worldwide sales, services, support, marketing and alliances. He was also responsible for the Security strategy and business at VMware. Prior to SAP, Poonen held executive roles at SAP, Symantec, VERITAS and Informatica, and he began his career as a software engineer at Microsoft, followed by Apple. Poonen holds two patents as well as an MBA from Harvard Business School, where he graduated a Baker Scholar; a master's degree in management science and engineering from Stanford University; and a bachelor's degree in computer science, math and engineering from Dartmouth College, where he graduated summa cum laude and Phi Beta Kappa.Links: VMware: https://www.vmware.com/ leadership values: https://www.youtube.com/watch?v=lxkysDMBM0Q Twitter: https://twitter.com/spoonen LinkedIn: https://www.linkedin.com/in/sanjaypoonen/ spoonen@vmware.com: mailto:spoonen@vmware.com TranscriptAnnouncer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.Corey: This episode is sponsored in part by Thinkst. This is going to take a minute to explain, so bear with me. I linked against an early version of their tool, canarytokens.org in the very early days of my newsletter, and what it does is relatively simple and straightforward. It winds up embedding credentials, files, that sort of thing in various parts of your environment, wherever you want to; it gives you fake AWS API credentials, for example. And the only thing that these things do is alert you whenever someone attempts to use those things. It’s an awesome approach. I’ve used something similar for years. Check them out. But wait, there’s more. They also have an enterprise option that you should be very much aware of canary.tools. You can take a look at this, but what it does is it provides an enterprise approach to drive these things throughout your entire environment. You can get a physical device that hangs out on your network and impersonates whatever you want to. When it gets Nmap scanned, or someone attempts to log into it, or access files on it, you get instant alerts. It’s awesome. If you don’t do something like this, you’re likely to find out that you’ve gotten breached, the hard way. Take a look at this. It’s one of those few things that I look at and say, “Wow, that is an amazing idea. I love it.” That’s canarytokens.org and canary.tools. The first one is free. The second one is enterprise-y. Take a look. I’m a big fan of this. More from them in the coming weeks.Corey: Let’s be honest—the past year has been a nightmare for cloud financial management. The pandemic forced us to move workloads to the cloud sooner than anticipated, and we all know what that means—surprises on the cloud bill and headaches for anyone trying to figure out what caused them. The CloudLIVE 2021 virtual conference is your chance to connect with FinOps and cloud financial management practitioners and get a behind-the-scenes look into proven strategies that have helped organizations like yours adapt to the realities of the past year. 
Hosted by CloudHealth by VMware on May 20th, the CloudLIVE 2021 conference will be 100% virtual and 100% free to attend, so you have no excuses for missing out on this opportunity to connect with the cloud management community. Visit cloudlive.com/coreyto learn more and save your virtual seat today. That’s cloud-l-i-v-e.com/corey to register.Corey: Welcome to Screaming in the Cloud. I’m Corey Quinn. I talk a lot about cloud in a variety of different contexts; this show is about the business of cloud. But, fundamentally, where cloud comes from was this novel concept, once upon a time, of virtualization. And that gave rise to a whole bunch of other things that later became, then containers, now it becomes Kubernetes, and if you want to go down the serverless path, you can.But it’s hard to think of a company that has had more impact on virtualization and that narrative than VMware. My guest today is Sanjay Poonen, Chief Operating Officer of VMware. Thank you for joining me.Sanjay: Thanks, Corey Quinn, it’s great to be with you and with your audience on this show.Corey: So, let’s start with the fun slash difficult questions. It’s easy to look at VMware as a way of virtualizing existing bare-metal workloads and moving those VMs around, but in many respects, that is perceived by some—ehem, ehem—to be something of a legacy model of cloud interaction where it solves the problem of on-premises, which is I’m really bad at running data centers so I’m just going to treat the cloud like a data center. And for some companies and some workloads, where, great, that’s fine. But isn’t that, I guess, a V1 vision of cloud, and if it is, why is VMware relevant to that?Sanjay: Great question, Corey. And I think it’s great to be straight up on a topic [unintelligible 00:02:01]. Yeah, I think you’re right. Listen, the ‘V’ in VMware is virtualization. The ‘VM’ is virtual machines.A lot of what is the underpinning of what made the private cloud, as we call it today, but the data center of the past successful was this virtualization technology. In the old days, people would send us electricity bills, before and after VMware, and how much they’re saving. So, this energy-saving concept of virtualization has been profound in the modernization of the data center and the advent of what’s called the private cloud. But as you looked at the public cloud innovate, whether it was AWS or even the SaaS applications—I mean, listen, the most popular capability initially on AWS was EC2 and S3, and the core of EC2 is virtualization. I think what we had to do, as this happened, was the foundation was certainly those services like EC2 and S3, but very quickly, the building phenomenon that attracted hundreds of thousands and I think now probably a few million customers to AWS was the large number of services, probably now 150, 200-odd services, that were built on top of that for everything from data, to AI, to a variety of other things that every year Andy Jassy and the team would build up.So, we had to make sure that over the course of the last, I’d say, certainly the last five to maybe eight years, we were becoming relevant to our customers that were a mix. There were customers who were large—I mean, we have about half a million customers—and in many cases, they have about 80, 90% of their workloads running on-prem and they want to move those workloads to the cloud, but they can’t just refactor and re-platform all of those apps that are running in the on-premise world. 
When they will try to do it by the end of the year—they may have 1000 applications—they got 10 done.Corey: Oh, and it’s not realistic and it’s unfair. I mean, there’s the idea of, “Oh, that’s legacy,” which is condescending engineering speak for it actually makes money because it’s been around for longer than six months. And sure you can have Twitter For Pets roll stuff out every day that you want; when you’re a bank, you have different constraints forced upon you. And I’m very sympathetic to folks who are in scenarios where they aren’t, for whatever reason, able to technically, culturally, or for regulatory reasons, be able to do continuous deployment of everything. I want to be very clear that I’ve in no way passing judgment on an entire sector of enterprise.Sanjay: But while that sector is important, there was also another sector starting to emerge: the Airbnbs, the Pinterests, the modern companies who may not need VMware at all as they’re building native, but may need some of our container in a new open-source capabilities. SaltStack was one of them; we will talk about that, I’m sure. So, we needed to be relevant to both customer communities because the Airbnbs of today, will be the Marriotts of tomorrow. So, we had to really rethink what is the future of VMware, what’s our existence in a public cloud phenomenon? That’s really what led to a complete watershed moment.I called publicly in the past sort of a Berlin Wall moment where Amazon and VMware were positioned pretty much as competitors for a long period of time when AWS was first started. Not that Andy was going around talking negatively about VMware, but I think people view these as two separate doors, and never the twain would meet. But when we decided to partner with them—I then quite frankly, the precursor to that was us divesting our public cloud strategy. We’d tried to build a competitive public cloud called vCloud Air between the period of 2012 and 2015, 2016—we had to reach an end of that movement, and catharsis of that, divest that asset, and it opened the door for a strategic partnership. But now we can go back to those customers and help them move their applications in a way that’s highly efficient, almost like a house on wheels, and then once it’s in that location in AWS—or one of the other public clouds—you can modernize it, too.So, then you get to both get the best of both worlds: get it into the public cloud, maybe retire some of your data centers if that’s what you want to do, and then modernize it with all the beautiful services. And that’s the best of both worlds. Now, if you have 1000 applications, you’re moving hundreds of them into the public cloud, and then using all of the powerful developer services on that VMware stack that’s built on the bare metal of AWS. So, we started out with AWS, but very quickly then, all the other public clouds, maybe the five or six that are named in the Gartner Magic Quadrant, came to us and said, “Well, if you’re doing that with AWS, would you consider doing that with us, too?”Corey: There’s definitely been an evolution of VMware. I mean, it’s in the name; you have the term VM sitting there. It’s easy to, at least from where I sit, think of, “Oh, VMware, back when running virtual machines was novel.” And there was a lot of skepticism around the idea. I’m going to level with you; I was a skeptic around virtualization. Then around cloud. 
Then around containers.And now I’m trying—all right I’m going to be in favor of serverless, which is almost certain to doom it because everything else that I’ve been skeptical of in this sense beyond any reasonable measure. So, there is this idea that VMs are this sort of old-school thinking. And that’s great if you have an existing workload that needs to be migrated, but there are a finite number of those in the world. As we turn towards net-new and greenfield build-outs, a lot of things are a lot more cloud-native than just hosting a bunch of—if you take the AWS example—EC2 instances hanging out in the network talking to other EC2 instances. Taking advantage of native offerings definitely seems to be on the rise. And there have been acquisitions that VMware has made. You talk about SaltStack, which was a great example, given that I wrote part of that very early on, and I don’t think the internet’s ever forgiven me for it. But also Bitnami—or BittenAMI, as I insist on pronouncing it—and you also acquired Wavefront. There’s a lot of interesting stuff that feels almost like a setting up a dichotomy of new VMware versus old VMware. What are the points of commonality there? What is the vision for the next 15 years of the company?Sanjay: Yeah, I think when we think about it, it’s very important that, first off, we acknowledge that our roots are what gives us sustenance because we have a large customer base that uses us. We have 80 million workloads running on that VMware infrastructure, formerly ESX, now vSphere. And that’s our heritage, and those customers are happy. In fact, they’re not, like, fleeing like birds into there, so we want to care for those customers.But we have to have a north star, like a magnet that pulls us into the modern world. And that’s been—you know, I talked about phase one was this really charting of the future of VMware for the cloud. Just as important has been focused on cloud-native and containers the last three, four years. So, we acquired Heptio. As you know, Heptio was founded by some of the inventors of Kubernetes who left Google, Joe Beda, and Craig McLuckie.And with that came a strong I would say relevancy, and trust to the Kubernetes, we’ve become one of the leading contributors to open-source Kubernetes. And that brain trust now, some of whom are at VMWare and many are in the community think of us very differently. And then we’ve supplemented that with many other moves that are much more cloud-native. You mentioned two or three of them: Bitnami, for that sort of marketplace; and then SaltStack for what we have been able to do in configuration management and infrastructure automation; Wavefront for container-based workloads. And we’re not done, and we think, listen, there will be many, many more things that the first 10, 15 years of VMware was very much about optimizing the private cloud, the next 10, 15 years could be optimizing for that app modernization cloud-native world.And we think that customers will want something that can work in a multi-cloud fashion. Now, multi-cloud for us is certainly private cloud and edge cloud, which may have very little to do with hardware that’s in the public cloud, but also AWS, Azure, and two or three other clouds. 
And if you think of each of these public clouds as mini skyscrapers—so AWS has 50 billion in revenue; I’m going to guess Azure is, like, 30, and then Google is I don’t know 12, 13; and then everyone else, and they’re all skyscrapers are different—it’s like, if we can be that company that fills the crevices between them with cement that’s valuable so that people can then build their houses on top of that, you’re probably not going to be best served with a container Stack that’s trapped to just one cloud. And then over time, you don’t have reasonable amount of flexibility if you choose to change that direction. Now, some people might say, “Listen, multi-cloud is—who cares about that?”But I think increasingly, we’re hearing from customers a desire to have more than just one cloud for a variety of reasons. They want to have options, portability, flexibility, negotiating price, in addition to their private cloud. So, it’s a two plus one, sometimes it might be a two plus two, meaning it’s a private cloud and the edge cloud. And I think VMware is a tremendous proposition to be that Switzerland-type company that’s relevant in a private cloud, one or two public clouds, and an edge cloud environment, Corey.Corey: Are you seeing folks having individual workloads that they want to flow from one cloud to another in a seamless way, or is it more aligned along an approach of having workload A lives in this cloud and workload B lives in this cloud? And you’re in a terrific position to opine on that more than most, given who you are.Sanjay: Yeah. We’re not yet as yet seeing these floating workloads that start here and move around, that’s—usually you build an application with purpose. Like, it sits here in this cloud and of course. But we’re seeing, increasingly, interest at customers’ not tethering it to proprietary services only. I mean, certainly, if you’re going to optimize it for AWS, you’re going to take advantage of EC2, S3, and then many of the, kind of, very capable [unintelligible 00:11:24], Aurora, there are others that might be there.But over time, especially the open-source movement that brings out open-source data services, open-source tooling, containers, all of that stuff, give ultimately customers the hope that certainly they should add economic value and developer productivity value, but they should also create some potential portability so that if in the future you wanted to make a change, you’re not bound to that cloud platform. And a particular cloud may not like us saying this, but that’s just the fact of how CIOs today are starting to think much more so as they build these up and as many of the other public clouds start to climb in functionality. Now, there are other use cases where particular SaaS applications of SaaS services are optimized for a particular [unintelligible 00:12:07], for example, Office 365, someone’s using a collaboration app, typically, there’s choices of one or two, you’re either using a G Suite and then it’s tied to Google, or it’s Office 365. But even there, we’re starting to see some nibbling around the edges. Just the phenomenon of Zoom; that wasn’t a capability that Microsoft brought very—and the services from Google, or Amazon, or Microsoft was just not as good as Zoom.And Zoom just took off and has become the leading video collaboration platform because they’re just simple, easy to use, and delightful. It doesn’t matter what infrastructure they run on, whether it’s AWS, I mean, now they’re running some of their workloads on Oracle. Who cares? 
It’s a SaaS service. So, I think increasingly, I think there will be a propensity towards SaaS applications over custom building. If I can buy it why would I want to build a video collaboration app myself internally, if I can buy it as a SaaS service from Zoom, or whoever have you?Corey: Oh, building it yourself would be ludicrous unless that was one of your core competencies.Sanjay: Exactly.Corey: And Zoom seems to have that on lock.Sanjay: Right. And so similarly, to the extent that I think IT folks can buy applications that are more SaaS than custom-built, or even on-prem, I mean, Salesforce—the success of Salesforce, and Workday, and Adobe, and then, of course, the smaller ones like Zoom, and Slack, and so on. So, it’s clear evidence that the world is going to move towards SaaS applications. But where you have to custom build an application because it’s very unique to your business or to something you need to very snap quickly together, I think there’s going to be increasingly a propensity towards using open-source types of tooling, or open-source platforms—Kubernetes being the best example of that—that then have some multi-cloud characteristics.Corey: In a similar note, I know that the term is apparently, at least this week on Twitter, being argued against, but what about cloud repatriation? A lot of noise has been made about people moving workloads from public cloud back to private cloud. And the example they always give is Dropbox moving its centralized storage service into an on-prem environment, and the second example is basically a pile of tumbleweeds because people don’t really have anything concrete to point at. Does that align with your experience? Is there a, I guess, a hidden wave of people doing a reverse cloud migration that just doesn’t get discussed?Sanjay: I think there’s a couple of phenomenons, Corey, that we watch here. Now, clearly a company of the scale of Dropbox has economics on data and storage, and I’ve talked to Drew and a variety of the folks there, as well as Box, on how they think about this because at that scale, they probably could get some advantages that I’m sure they’ve thought through in both the engineering and the cost. I mean, there’s both engineering optimization and costs that I’m sure Drew and the folks there are thinking through. But there’s a couple of phenomena that we do—I mean, if you go back to, I think, maybe three or four quarters ago, Brian Moynihan, the CEO of Bank of America, I think in 2019, mid to late 2019 made a statement in his earnings call, he was asked, “How do you think about cloud?” And he said, “Listen, I can run a private cloud cheaper and better than any of the public clouds, and I save 240%,” if I remember the data right.Now, his private cloud and Bank of America is a key customer [unintelligible 00:15:04] of us, we find that some of the bigger companies at scale are able to either get hardware at really good pricing, are able to engineer—because they have hundreds of thousands—they’re almost mini VMware, right, [unintelligible 00:15:18] themselves because they’ve got so many engineers. They can do certain things that a company that doesn’t want to hire those many—companies, Pinterest, Airbnb may not do. So, there are customers who are going to basically say, even prior to repatriation, that the best opportunity is a private cloud. 
And in that place, we have to work with our private cloud partners, whether it’s Dell or others, to make sure that stack of hardware from them plus the software VMware in the containers on top of that is as competitive and is best cost of ownership, best ROI. Now, when you get to your second—your question around repatriation, what we have found in certain regions outside the US because of sovereign data, sovereign clouds, sometimes some distrust of some of those countries of the US public cloud, are they worried about them getting too big, fear by monopoly, all those types of things, lead certain countries outside the US to think about something that they would need that’s sovereign to their country.And the idea of sovereign data and sovereign clouds does lead those to then investing in local cloud providers. I mean, for example in France, there is a provider called OVH that’s kind of trying to do some of that. In China, there’s a whole bunch of them, obviously, Alibaba being the biggest. And I think that’s going to continue to be a phenomenon where there’s a [federated said 00:16:32], we have a cloud provider program with this 4000 cloud providers, Corey, who built their stack on VMware; we’ve got to feed them. Now, while they are an individual revenue way smaller than the public clouds were, but collectively, they represent a significant mass of where those countries want to run in a local cloud provider.And from our perspective, we spent years and years enabling that group to be successful. We don’t see any decline. In fact, that business for us has been growing. I would have thought that business would just completely decline with the hyperscalers. If anything, they’ve grown.So, there’s a little bit of the rising tide is helping all boats rise, so to speak. And the hyperscaler’s growth has also relied on many of these, sort of, sovereign clouds. So, there’s repatriation happening; I think those sovereign clouds will benefit some, and it could also be in some cases where customers will invest appropriately in private cloud. But I don’t see that—I think if anything, it’s going to be the public cloud growing, the private cloud, and edge cloud growing. And then some of these, sort of, country-specific sovereign clouds also growing. I don’t see this being in a huge threat to the public cloud phenomena that we’re in.Corey: This episode is sponsored in part by our friends at Lumigo. If you've built anything from serverless, you know that if there's one thing that can be said universally about these applications, it's that it turns every outage into a murder mystery. Lumigo helps make sense of all of the various functions that wind up tying together to build applications. It offers one-click distributed tracing so you can effortlessly find and fix issues in your serverless and microservices environment. You've created more problems for yourself. Make one of them go away. To learn more, visit lumigo.io. Corey: I want to very clear, I think that there’s a common misconception that there’s this, somehow, ongoing fight between all the cloud providers, and all this cloud growth, and all this revenue is coming at the expense of other cloud providers. I think that it is simultaneously workloads that are being migrated from on-premises environments—yes—but a lot of it also feels like it’s net-new. It’s not just about increasingly capturing ever larger portions of the market but rather about the market itself expanding geometrically. For a long time, it felt like that was what tech was doing. 
Looking at the global IT spend numbers coming out of Gartner and other places, it seems like it’s certainly not slowing down. Does that align with your perception of it? Or are there clear winners and losers that are I guess, differentiating out?Sanjay: I think, Corey, you’re right. I think if you just use some of the data, the entire IT market, let’s just say it’s about $1 trillion, some estimates have it higher than that. Let’s break it down a little bit. Inside that 1 trillion market it is growing—I mean, obviously COVID, and GDP declined last year in calendar 2020 did affect overall IT, but I think let’s assume that we have some kind of U-shape or other kind of recovery, going into the second half of certainly into next year; technology should lead GDP in terms of its incline. But inside that trillion-dollar market, if you add up the SaaS market, it’s about $115 billion market.And these are companies like Salesforce, and Adobe, and Workday, and ServiceNow. You add them all up, and those are growing, I think the numbers were in the order of 15 or 20% in aggregate. But that SaaS market is [unintelligible 00:19:08]. And that’s growing, certainly faster than the on-prem applications market, just evidenced by the growth of those companies relative to on-premise investments in SAP or Oracle. And then if you look at the infrastructure market, it’s slightly bigger, it’s about $125 billion, growing slightly faster—20, 25%—and there you have the companies like AWS, Azure, and Google, and Alibaba, and whoever have you. And certainly, that growth is faster than some of the on-premise growth, but it’s not like the on-premise folks are declining. They’re growing at slower paces.Corey: It is harder to leave an on-premise environment running and rack up charges and blow out the bill that way, but it—not impossible, I suppose, but it’s harder to do than it is in public cloud. But I definitely agree that the growth rate surpasses what you would see if it were just people turning things on and forgetting to turn them off all the time.Sanjay: Yeah, and I think that phenomenon is a shift in spending where certainly last year we saw more spending in the cloud than on-premise. I think the on-premise vendors have a tremendous opportunity in front of them, which is to optimize every last dollar that is going to be spent in the data centers, private cloud. And between us and our partners like Dell and others, we’ve got to make sure we do that for our customer base that we’ve accumulated over last 10, 15 years. But there’s also a significant investment now moving to the edge. When I look at retailers, CPG companies—consumer packaged good companies—manufacturers, the conversation that I’m having with their C-level tech or business executives is all about putting compute in the stores.I mean, listen, what is the retailer concerned about? Fraud, and some of those other things, and empowering a quick self-service experience for a consumer who comes in and wants to check out of a Safeway or Walmart really quickly. These are just simple applications with local compute in the store, and the more that we can make that possible on top of almost like a nano data center or micro data center, running in the store with those applications resident there, talking—you know, you can’t just take all of that data, go back and forth to the cloud, but with resident services and capability right there, that’s a beautiful opportunity for the VMware and the Dells of the world. 
And that’s going to be a significant place where I think you’re going to see expansion of their focus. The Edge market today is I think, projected to be about $6 or $8 billion this year, and growing to $25 billion the next four or five years.So, much smaller than the previous numbers I shared—you know, $125, $115 billion for SaaS and IaaS—but I think the opportunity there, especially these industries that are federated: CPG, consumer packaged goods, manufacturing, retail, and logistics, too—you know, FedEx made a big announcement with VMware and Dell a few months ago about how they’re thinking about putting compute and local infrastructure at their distribution sites. I think this phenomenon, Corey, is going to happen in a number of different [unintelligible 00:21:48], and is a tremendous opportunity. Certainly, the public cloud vendors are trying to do that with Outposts and Azure Stack, but I think it does favor the on-premise vendors also having a very strong proposition for the edge cloud.Corey: I assumed that the whole discussion with FedEx started by someone dramatically misunderstanding what it meant to ship code to production.Sanjay: [laugh]. I mean, listen, at the end of the day, all of these folks who are in traditional industries are trying to hire world-class developers—like software companies—because all of them are becoming software companies. And I think the open-source movement, and all of these ways in which you have a software supply chain that’s more modernized, it’s affecting every company. So, I think if you went into the engineering product teams of Rob Carter, who runs technology for FedEx, you’ll find them and they may not have all of the sophistication as a world-class software company, but they’re getting increasingly very much digital in their focus of next generation. And same thing with UPS.I was talking to the CEO of UPS, we had her come and speak at our kickoff. It’s amazing how much her lingo—she was the former CFO of Home Depot—I felt like I was talking to a software executive, and this is the CEO of UPS, a logistics company. So, I think increasingly, every company is becoming a software company at their core. And you don’t need to necessarily know all the details of containers and virtualization, but you need to understand how software and digital transformation, how technology can power your digital transformation.Corey: One thing that I’ve noticed the more I get to talk to people doing different things in different roles was, at first I was excited because I get to talk to the people where they’re really doing it right and everything’s awesome. And I’ve increasingly of the opinion that those sites don’t actually exist. Everyone talks about the great thing is that they’re doing and aspirationally in certain areas in the terms of conference-ware, but you get down into the weeds, and everyone views their environment as being a burning tire fire of sadness and regret. Everyone thinks other people are doing it way better than they are. And in some cases they’re embarrassed about it, in some cases they’re open about it, but I feel like we’re still in the early days where no one is doing things in the quote-unquote, “Right ways,” but everyone thinks everyone else is.Sanjay: Yeah, I think, Corey, that’s absolutely right. We are very much early days in all of this phenomenon. 
I mean, listen, even the public cloud, Andy himself would say it’s [laugh]—he wouldn’t say it’s quite day one, but he would say it’s very early [unintelligible 00:24:03], even though they’ve had 15 years of incredible success and a $50 billion business. I would agree. And when you look at the customers and their persona—when I ask a CIO what percentage of—of an established company, not one of the modern ones who are built all cloud-native—but what percentage of your workloads are in a public cloud versus private cloud, the vast majority is still in a data center or private cloud.But with the intent—if it’s 90/10, let’s say 90 private 10—for that to become 70/30, 50/50. But very rarely do I hear a one of these large companies say it’s going to be 10/90 the opposite way in three, five years. Now, listen, I think every company as it grows that is more modern. I mean the Zooms of the world, the Modernas, the Airbnbs, as they get bigger and bigger, they represent a completely new phenomenon of how they are building applications that are all cloud-native. And the beautiful thing for me is just as a former engineering and developer, I mean, I grew up writing code in C, and C++ and then came BEA WebLogic, and IBM WebSphere, and [JGUI 00:25:04].And I was so excited for these frameworks. I’m not writing code, thankfully, anymore because it would create lots of problems if I did. But when I watched the phenomena, I think to myself, “Man, if I was a 22 year old entering the workforce now, it’s one of the most exciting times to write code and be a developer because what’s available to you, both in the combination of these cloud frameworks and open-source frameworks, is immense.” To be able to innovate much, much faster than we did 25, 30 years ago when I was a developer.Corey: It’s amazing there’s the pace of innovation, if cloud has changed nothing else, from my perspective, it’s been the idea that you can provision things without these hefty waiting periods. But I want to shift gears slightly because we’ve been talking about cloud for a bit in the context of infrastructure, and containers, and the rest, but if we start moving up the stack a little bit, that’s also considered cloud, which just seems to have that naming problem of namespace collision, just to confuse folks. But VMware is also active in this space, too. You’ve got things like Workspace ONE, you’ve got a bunch of other endpoint options as well that are focused on the security space. Is that aligned?Is that just sort of a different business unit? How does that, I guess, resonate between the various things that you folks do? Because it turns out, you’re kind of a big company, and it’s difficult to keep it all straight from an external perspective.Sanjay: Well, I think—listen, we’re roughly a little less than $12 billion in revenue last year. You can think of us in two buckets: everything in the first bucket is all that we talked about. Think of that as modernization of applications and cloud infrastructure, or what people might think about PaaS and IaaS without the underlying hardware; we’re not trying to build servers and storage and networking at the hardware level, you know, and so and so. But the software layer is about, that’s the first conversation we had for the last 15, 20 minutes. The second part of our business is where we’re touching end-users and infrastructure, and securing it.And we think that’s an important part because that also is something through software, and the cloud could be optimized. 
And we’ve had a long-standing digital workspace. In fact, when I came to VMware, it was the first business I was running in terms of all the products and end-user computing. And our thesis was many of the current tools, whether it’s the virtual desktop technology that people have from existing vendors, or even today, the security tools that they use is just too cumbersome. It’s too heavy.In many cases, people complain about the number of agents they have on their laptops, or the way in which they secure firewalls is too expensive and too many. We felt we could radically—VMware gets involved in problems where we can radically simplify thing with some disruptive innovation. And the idea was, first in the digital workspace was to radically reduce cost with software that was built for the cloud. And Workspace ONE and all of those things radically reduce the need for disparate technologies for virtual desktops, identity management, and endpoint management. We’ve done very well in that.We’re a leader in that segment, if you look at any of the analysts ratings, whether it’s Gardner or others. But security has been a more recent phenomenon where we felt like it leads us very quickly into securing those laptops because on those same laptops, you have antivirus, you have a variety of tools, and on the average, the CSOs, the Chief Security Officers tell me they have way too many agents, way too many consoles, way too many alerts, and if we could reduce that and have a single agent on a laptop, or maybe even agentless technology that secure this, that’s the Nirvana. And if you look at some of the recent things that have happened with SolarWinds, or Petya, WannaCry in the past, security’s of top concern, Corey, to boards. And the more that we could do to clean that up, I think we can emerge—which we’re already starting to—as a cybersecurity layer. So, that’s a smaller part of our business, but, I mean, it’s multi-billion now, and we think it’s a tremendous opportunity for us to take what we’re doing in workspace and security and make that a growth vector.So, I think both of these core areas, the cloud infrastructure, and modern applications—topic number one—workspace and security—topic number two—I’m both tremendous opportunities for VMware in our journey to grow from a $12 billion company to one day, hopefully, a $20 billion company.Corey: Would that we all had such problems, on some level. It’s really interesting seeing the evolution of companies going from relatively small companies and humble beginnings to these giant—I guess, I want to use the term Colossus, but I’m not sure if that’s insulting or [laugh] not—it’s phenomenal just to see the different areas of business that VMware has expanded into. I mean, I’ve had other folks from your org talking about what a Tanzu is or might be, so we aren’t even going to go down that rabbit hole due to time constraints at this point, but one thing that I do want to get into, slightly, has been a recurring theme in the show, which is where does the next generation of leaders come from? Where do the next generation engineers come from? And you’ve been devoting a bit of time to this. I think I saw one of your YouTube videos somewhat recently about your leadership values. Talk to me a little bit about that.Sanjay: Yeah. Corey, listen, I’m glad that we’re closing out this on some of the soft topics because I love talking to you, or other talented analysts and thought leaders around technology. It’s my roots; I’m a technical person at heart. I love technology. 
But I think the soft stuff is often the hard stuff.And the hard stuff is often the soft stuff. And what I mean by that is, when all this peels away, what your lasting legacy to the company are the people you invest in, the character you build. And, I mean, as an immigrant who came to this country, when I was 18 years old, $50 in my pocket, I was very fortunate to have a scholarship to go to a really nice University, Dartmouth College, to study computer science. I mean, I grew up in India and if it wasn’t for the opportunity to come here on a scholarship, I wouldn’t have [been here 00:30:32]. So, everything I consider a blessing and a learning opportunity where I’m looking at the advent of life as a growth mindset: what can I learn? And we all need to cultivate more and more aspects of that growth mindset where we move from being know-it-alls to learn-it-alls.And one of the key things that I talk about—and all of your listeners on this, listening to this, I welcome to go to YouTube and search Sanjay Poonen and leadership, it’s a 10-minute video—I’ll pick one of them. Most often as we get higher and higher in an organization, leaders tend to view things as a pyramid, and they’re kind of like this chief bird sitting at the top of the pyramid, and all these birds that are looking—below them on branches are looking up and all they see is crap falling down. Literally. That’s what happens when you look at the bird up. And our job as leaders is to invert that pyramid.And to actually think about the person who is on the front lines. In a software company, it’s an engineer and a sales rep. They are the folks on the frontline: they’re writing code or selling code. They are the true people who are making things happen. And when we as leaders look at ourselves as the bottom of the pyramid—some people call that, “Servant leadership.”Whatever way you call it, the phrase isn’t the point—the point is, invert that pyramid and to take obstacles out of people from the frontline. You really become not interested as much around what your own personal wellbeing, it’s about ensuring that those people in the middle layers and certainly at the leaf levels of the organization are enormously successful. Their success becomes your joy, and it becomes almost like a parent, right? I mean, Corey, you have kids; I’ve got kids. Imagine if you were a parent and you were jealous of your kid’s success.I mean, I want my three children, my daughter, my two children to do better than me, running races or whatever it is that they do. And I think as a leader, the more that we celebrate the successes of our teams and people, and our lasting legacy is not our own success; it’s what we have left behind, other people. I’ve say often there’s no success without successors. So, that mindset takes a lot of work because the natural tendency of the human mind and the human behavior is to be selfish and think about ourselves. But yeah, it’s a natural phenomenon.We’re born that way, we live in act that way, but the more that we start to create that, then taking that not just to our team, but also to the community allows us to build a better society. And that’s something I’m deeply passionate about, try to do my small piece for it, and in fact, I’m sometimes more excited about these topics of leadership than even technology.Corey: It feels like it’s the stuff that lasts; it has staying power. 
I could record a video now about technology choices and how to work with those technologies and unless it’s about Git, it’s probably not going to be too relevant in 10 years. But leadership is one of those eternal things where it’s, once you’ve experienced a certain level of success, you can really see what people do with that the people that I like to surround myself with, generally make it a point to send the elevator back down, so to speak.Sanjay: I agree, Corey, it’s—glad that you do it. I’m always looking for people that I can learn from, and it doesn’t matter where they are in society. I mean, I think you often—I mean, this is classic Dale Carnegie; one of the books that my dad gave to me at a young age that I encourage everyone to read, How to Win Friends and Influence People, talked about how you can detect a person’s character based on the way they treat the receptionist, or their assistants, the people who might be lower down the totem pole from them. And most often you have people who kiss up and kick down. And I think when you build an organization that’s that typical.A lot of companies are built that way where they kiss up and kick down, you actually have an inverted sense of values. And I think you have to go back to some of those old-school ways that Dale Carnegie or Steven Covey talked about because you don’t have to build a culture that’s obnoxious; you can build a company that’s both nice and competitive. It doesn’t mean that anything we’ve talked about for the last few minutes means that I’m any less competitive and I don’t want to beat the competition and win a deal. What you can do it nicely. And even that’s something that I’ve had to grow in.So, I think when we all look at ourselves as sculptures, work in progress, and we’re perfecting our craft, so to speak, both on the technical front, and the product front and customer relationship, but then also on the leadership and the personal growth front, we actually become both better people and then we also build better companies.Corey: And sometimes that’s really all that we can ask for. If people want to learn more about what you have to say and get your opinion on these things, okay can they find you?Sanjay: Listen, I’m very approachable. You can follow me on Twitter, I’m on LinkedIn [unintelligible 00:34:54], or my email spoonen@vmware.com. I’m out there.I read voraciously, and probably not as responsive, sometimes, but I try—certainly, customers will hear from me within 24 hours because I try to be very responsive to our customers. But you can connect with me on social media. And I’m honored to be on your show, Corey. I’ve been reading your stuff since it first came out, and then, obviously, a fan of the way you’re thinking about things. Sometimes I feel I need to correct your opinion, and some of that we did today. [laugh]. But you’ve been very—Corey: Oh, I would agree. I come out of this conversation with a different view of VMware than I went into it with. I’m being fully transparent on that.Sanjay: And you’ve helped us. I mean, quite frankly, your blogs and your focus on this and, like, is the V in VMware, like, a bad word? Is it legacy? It’s forced us to think, so I think it’s iron sharpens iron. I’m very delighted that we connected, I don’t know if it was a year or two years ago.And I’ve been a fan; I watch the stuff that you do at re:Invent, so keep going with what you’re doing. I think all of what you write and what you talk about is hopefully making an impact on people who read and listen. 
And look forward to continuing this dialogue, not just with me, but I think you’re talking to other people in VMware in the future. I’m not the smartest person at VMware, but I’m very fortunate to be [laugh] surrounded by many of them. So hopefully, you get to talk to them, also, in the near future.Corey: [laugh]. I will, of course, will put links to all that in the [show notes 00:36:11]. Thank you so much for taking the time to speak with me today. I really appreciate it.Sanjay: Thanks, Corey, and all the best of you and your organization.Corey: Sanjay Poonen, Chief Operating Officer of VMware, I’m Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you’ve enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you’ve hated this podcast, please leave a five-star review on your podcast platform of choice, along with a condescending comment telling me that in fact, it is a best practice to ship your code to production via FedEx.Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.This has been a HumblePod production. Stay humble.

Founded and Funded
Building Companies on Open Source, Conversation with co-creator of Kubernetes, Joe Beda

Founded and Funded

Play Episode Listen Later Mar 4, 2021 35:34


Building a company on open-source software is not easy.  As one of the creators of Kubernetes, and a founder of Heptio, Joe Beda has experience building open source, cultivating a community and founding a company.  In this conversation he talks with Madrona Partner, Anu Sharma about the varying states of open source and how founders can think about building companies around or on top of these powerful technologies.  This is a co-production with Traction, Anu’s blog and video conversations on GTM for technical products. 

The Rabbi Palacci Podcast
The Story Of An Amazing Chesed – Joe Beda A”H

The Rabbi Palacci Podcast

Play Episode Listen Later Nov 18, 2020 3:58


CIO Exchange Podcast
Bridging Your Application Gaps – Guest: Joe Beda, VMware Principal Engineer and James Waters, CTO, Modern Application Platforms

CIO Exchange Podcast

Play Episode Listen Later Aug 18, 2020 28:31


Guest: Joe Beda, VMware Principal Engineer, and James Waters, CTO, Modern Application Platforms
Joe Beda on Twitter: https://twitter.com/jbeda
James Waters on Twitter: https://twitter.com/wattersjames
CIO Exchange on Twitter: https://twitter.com/vmwcioexchange
Yadin Porter de León on Twitter: https://twitter.com/porterdeleon
[Subscribe to the Podcast]
On Apple Podcast: https://podcastsconnect.apple.com/my-podcasts
For more podcasts, video and in-depth research go to https://www.vmware.com/cio

The Podlets - A Cloud Native Podcast
Orchestrators and Runtimes (Ep 22)

The Podlets - A Cloud Native Podcast

Play Episode Listen Later Jul 20, 2020 41:47


Due to its vast array of capabilities and applications, it can be challenging to pinpoint the exact essence of what Kubernetes does. Today we are asking questions about orchestration and runtimes and trying to see if we can agree on whether Kubernetes primarily does one or the other, or even something else. Kubernetes might instead be described as a platform, for instance! In order to answer these questions, we look at what constitutes an orchestrator, thinking about management, workflow, and security across a network of machines. We also get into the nitty-gritty of runtimes and how Kubernetes can fit into this model too. The short answer to our initial question is that defining a platform, orchestrator or a runtime depends on your perspective and tasks, and Kubernetes can fulfill any one of these buckets. We also look at other platforms, past or present, that might be compared to Kubernetes in certain areas and see what this might tell us about the definitions. Ultimately, we come away with the message that the exact way you slice what Kubernetes does is not all-important. Rigid definitions might not serve us so well; rather, our focus should be on an evolving understanding of these terms and the broadening horizon of what Kubernetes can achieve. For a really interesting meditation on how far we can take the Kube, be sure to join us today!
Follow us: https://twitter.com/thepodlets
Website: https://thepodlets.io
Feedback: info@thepodlets.io https://www.notion.so/thepodlets/The-Podlets-Guest-Central-9cec18726e924863b559ef278cf695c9
Hosts: https://twitter.com/mauilion https://twitter.com/joshrosso https://twitter.com/embano1 https://twitter.com/opowero
Key Points From This Episode:
What defines an orchestrator?
Different kinds of management, workflow, and security.
Considerations in a big company that go into licensing, security and desktop content.
Can we actually call Kubernetes an orchestrator or a runtime?
How managing things at scale increases the need for orchestration.
An argument for Kubernetes being considered an orchestrator and a runtime.
Understanding runtimes as part of the execution environment and not the entire environment (see the sketch after this list).
How platforms, orchestration, and runtimes change positions according to perspective.
Remembering the 'container orchestration wars' between Mesos, Swarm, Nomad, and Kubernetes.
The effect of containerization and faster release cycles on the application of updates.
Instances where Kubernetes might not be used for orchestration currently.
The increasingly lower levels at which you can view orchestration and containers.
The great job that Kubernetes is able to do in the orchestration and automation layer.
How Kubernetes removes the need to reinvent everything over and over again.
Breaking down rigid definitions and allowing some space for movement. 
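To make the orchestrator-versus-runtime split in the key points above a little more tangible, here is a brief sketch using the official Kubernetes Python client. It is an illustration under assumed conditions (a reachable cluster in your kubeconfig), not material from the episode: the orchestrator's view is the API objects and where they were scheduled, while the runtime is whatever container engine each node reports.

```python
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

# Orchestrator's view: the API server tracks which pods exist and where they were placed.
for pod in core.list_pod_for_all_namespaces().items:
    print(f"pod {pod.metadata.namespace}/{pod.metadata.name} -> node {pod.spec.node_name}")

# Runtime's view: each node reports the engine that actually executes those containers.
for node in core.list_node().items:
    info = node.status.node_info
    print(f"node {node.metadata.name} runs {info.container_runtime_version} ({info.os_image})")
```

Seen this way, Kubernetes plays the orchestrator, deciding where workloads go and reconciling desired state, while containerd or another engine on each node does the actual running, which is why the episode concludes that the label depends on where you stand.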
Quotes:
“Obviously, orchestrator is a very big word, it means lots of things but as we’ve already described, it’s hard to fully encapsulate what orchestration means at a lower level.” — @mauilion [0:16:30]
“I wonder if there is any guidance or experiences we have with determining when you might need an orchestrator.” — @joshrosso [0:28:32]
“Sometimes there is an elemental over-automation some people don’t want all of these automation happening in the background.” — @opowero [0:29:19]
Links Mentioned in Today’s Episode:
Apache Airflow — https://airflow.apache.org/
SSCM — https://www.lynda.com/Microsoft-365-tutorials/SSCM-Office-Deployment-Tool/5030992/2805770-4.html
Ansible — https://www.ansible.com/
Docker — https://www.docker.com/
Joe Beda — https://www.vmware.com/latam/company/leadership/joe-beda.html
Jazz Improvisation over Orchestration — https://blog.heptio.com/core-kubernetes-jazz-improv-over-orchestration-a7903ea92ca?gi=5c729e924f6c
containerd — https://containerd.io/
AWS — https://aws.amazon.com/
Fleet — https://coreos.com/fleet/docs/latest/launching-containers-fleet.html
Puppet — https://puppet.com/
HashiCorp Nomad — https://nomadproject.io/
Mesos — http://mesos.apache.org/
Swarm — https://docs.docker.com/get-started/swarm-deploy/
Red Hat — https://www.redhat.com/en
Zalando — https://zalando.com/
See omnystudio.com/listener for privacy information.

B2B Tech Talk with Ingram Micro
Ep. 76 How to Operate Legacy Apps Alongside Modern Apps

B2B Tech Talk with Ingram Micro

Play Episode Listen Later Jul 15, 2020 16:22 Transcription Available


“Cloud isn’t where you operate, it’s how you operate.” — Joe Beda, principal engineer at VMware & co-creator of Kubernetes
With the right platform, you should be able to operate and leverage legacy applications alongside modern apps. This is just one of the many ideas Keri chats about with Jim Potts, sr. systems engineer at VMware. They also discuss:
-How the definition of an application has evolved
-Challenges companies are facing when combining legacy apps with more modern apps
-Why vSphere 7 + Kubernetes is the best way to deploy containers at scale
Find out what’s new for vSphere partners and why customers should upgrade to vSphere 7. To join the discussion, follow us on Twitter @IngramTechSol #B2BTechTalk. Listen to this episode and more like it by subscribing to B2B Tech Talk on Spotify, Apple Podcasts, or Stitcher. You can also listen on our website.

A Bootiful Podcast
Kubernetes co-creator Joe Beda

A Bootiful Podcast

Play Episode Listen Later Apr 3, 2020 68:12


Hi, Spring fans! In this installment Josh Long (@starbuxman) talks to fellow cloud native at VMware (@VMware) and Kubernetes (@KubernetesIO) co-creator Joe Beda (@jbeda).

Pivotal Insights
Episode 165: Accidental universal control plane, with Joe Beda

Pivotal Insights

Play Episode Listen Later Mar 25, 2020 45:17


In this episode, we talk with Joe Beda about, of course, kubernetes, but also about an organization's platform, the roles that work in the software supply chain in enterprises, types of developers, and other topics like what DevOps "is" now. This discussion will give you a good view of how to model and think about enterprise software development and operations now-a-days, and thinking through the strategy you want to take to transform your organization to be cloud native.

Cloud & Culture
Episode 165: Accidental universal control plane, with Joe Beda

Cloud & Culture

Play Episode Listen Later Mar 25, 2020 45:17


In this episode, we talk with Joe Beda about, of course, kubernetes, but also about an organization's platform, the roles that work in the software supply chain in enterprises, types of developers, and other topics like what DevOps "is" now. This discussion will give you a good view of how to model and think about enterprise software development and operations now-a-days, and thinking through the strategy you want to take to transform your organization to be cloud native.

Cloud Native in 15 Minutes
Episode 165: Accidental universal control plane, with Joe Beda

Cloud Native in 15 Minutes

Play Episode Listen Later Mar 25, 2020 45:17


In this episode, we talk with Joe Beda about, of course, kubernetes, but also about an organization's platform, the roles that work in the software supply chain in enterprises, types of developers, and other topics like what DevOps "is" now. This discussion will give you a good view of how to model and think about enterprise software development and operations now-a-days, and thinking through the strategy you want to take to transform your organization to be cloud native.

Pivotal Podcasts
Accidental universal control plane, with Joe Beda

Pivotal Podcasts

Play Episode Listen Later Mar 24, 2020


In this episode, we talk with Joe Beda about, of course, kubernetes, but also about an organization's platform, the roles that work in the software supply chain in enterprises, types of developers, and other topics like what DevOps "is" now. This discussion will give you a good view of how to model and think about enterprise software development and operations now-a-days, and thinking through the strategy you want to take to transform your organization to be cloud native.

Pivotal Conversations
Accidental universal control plane, with Joe Beda

Pivotal Conversations

Play Episode Listen Later Mar 24, 2020 45:17


In this episode, we talk with Joe Beda about, of course, kubernetes, but also about an organization's platform, the roles that work in the software supply chain in enterprises, types of developers, and other topics like what DevOps "is" now. This discussion will give you a good view of how to model and think about enterprise software development and operations now-a-days, and thinking through the strategy you want to take to transform your organization to be cloud native.

Kubernetes Podcast from Google
Kubernetes 1.18, with Jorge Alarcon

Kubernetes Podcast from Google

Play Episode Listen Later Mar 24, 2020 34:24


Kubernetes 1.18 is out - almost! A bug has pushed it back a day. While you’re waiting, release team lead Jorge Alarcon will tell you all about the fit and finish you can expect in the release when it’s out tomorrow. Adam and Craig bring you the other community news of the week, as well as some podcast follow-up.
Do you have something cool to share? Some questions? Let us know:
web: kubernetespodcast.com
mail: kubernetespodcast@google.com
twitter: @kubernetespod
Chatter of the week
Shoe Dog
What the fox really says
News of the week
Kubernetes 1.18 is out! Well, not quite yet: this regression is being fixed
Enhancement tracker
Windows features: containerd, kubeadm, RuntimeClass, GMSA
Ingress API
kubectl diff and APIServer dry-run (see the sketch after these show notes)
kubectl debug
CNCF SIG Contributor Strategy
Kong ingress controller and Istio service mesh by Kevin Chen
KubeCF becomes a Cloud Foundry Foundation incubation project
Platform9 adds two new tiers, and adds free JFrog Private Container Registry
Backyards 1.2
Red Hat adds support for installing OpenShift on top of RHV
Google Cloud Game Servers
Kubei, a new open source runtime vulnerability scanner by Portshift
Azure Container Registry adds customer managed keys
AKS adds Ubuntu 18.04
Kubernetes security announcements:
CVE-2020-8551 - kubelet
CVE-2020-8552 - API server
Using Inspektor Gadget to add network policies
okteto push
D2iQ changes CEOs
Spectro Cloud comes out of stealth
Links from the interview
Kubernetes 1.18 release blog
1.18.0 announcement e-mail
Computational biology and folding proteins
Data for Democracy
Kubernetes Up and Running by Joe Beda, Kelsey Hightower, and “the other guy”
The Kubernetes Slack
Searchable.ai
A bit about them
Home slice
Episode 72, with Lachlan Evenson
Emeritus Adviser
Release logo
Sidecar containers
Tim Hockin’s thoughts on Sidecar Containers not making 1.18
1.19 release lead: Taylor Dolezal
Jorge on Twitter and alejandrox1 on the Kubernetes Slack
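Two of the 1.18-era items in the notes above, kubectl diff and API server dry run, are easy to poke at from the API as well. The sketch below uses the official Kubernetes Python client to ask the API server to validate and admit a change without persisting it; the Deployment name and the patch are hypothetical, and from the command line `kubectl diff -f <file>` drives the same server-side dry-run machinery.

```python
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

# Hypothetical change: scale a Deployment named "hello-web" to five replicas.
patch = {"spec": {"replicas": 5}}

# dry_run="All" runs the request through validation and admission on the API server,
# but nothing is persisted; the response shows what the object would look like.
would_be = apps.patch_namespaced_deployment(
    name="hello-web",
    namespace="default",
    body=patch,
    dry_run="All",
)
print(f"server would accept replicas={would_be.spec.replicas}")
```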

The Podlets - A Cloud Native Podcast
Should I Kubernetes? (Ep 18)

The Podlets - A Cloud Native Podcast

Play Episode Listen Later Feb 24, 2020 46:29


The question of diving into Kubernetes is something that faces us all in different ways. Whether you are already on the platform, are considering transitioning, or are thinking about what is best for your team moving forward, the possibilities and the learning-curve make it a somewhat difficult question to answer. In this episode, we discuss the topic and ultimately believe that an individual is the only one who can answer that question well. That being said, the capabilities of Kubernetes can be quite persuasive and if you are tempted then it is most definitely worth considering very seriously, at least. In our discussion, we cover some of the problems that Kubernetes solves, as well as some of the issues that might arise when moving into the Kubernetes space. The panel shares their thoughts on learning a new platform and compare it with other tricky installations and adoption periods. From there, we look at platforms and how Kubernetes fits and does not fit into a traditional definition of what a platform constitutes. The last part of this episode is spent considering the future of Kubernetes and how fast that future just might arrive. So for all this and a bunch more, join us on The Podlets Podcast, today! Follow us: https://twitter.com/thepodlets Website: https://thepodlets.io Feeback: info@thepodlets.io https://github.com/vmware-tanzu/thepodlets/issues Hosts: Carlisia Campos Josh Rosso Duffie Cooley Bryan Liles Key Points From This Episode: The main problems that Kubernetes solves and poses. Why you do not need to understand distributed systems in order to use Kubernetes. How to get around some of the concerns about installing and learning a new platform. The work that goes into readying a Kubernetes production cluster. What constitutes a platform and can we consider Kubernetes to be one? The two ways to approach the apparent value of employing Kubernetes. Making the leap to Kubernetes is a personal question that only you can answer. Looking to the future of Kubernetes and its possible trajectories. The possibility of more visual tools in the UI of Kubernetes. Understanding the concept of conditions in Kubernetes and its objects. Considering appropriate times to introduce a team to Kubernetes. Quotes: “I can use different tools and it might look different and they will have different commands but what I’m actually doing, it doesn’t change and my understanding of what I’m doing doesn’t change.” — @carlisia [0:04:31] “Kubernetes is a distributed system, we need people with expertise across that field, across that whole grouping of technologies.” — @mauilion [0:10:09] “Kubernetes is not just a platform. 
Kubernetes is a platform for building platforms.” — @bryanl [0:18:12] Links Mentioned in Today’s Episode: Weave — https://www.weave.works/docs/net/latest/overview/ AWS — https://aws.amazon.com/ DigitalOcean — https://www.digitalocean.com/ Heroku — https://www.heroku.com/ Red Hat — https://www.redhat.com/en Debian — https://www.debian.org/ Canonical — https://canonical.com/ Kelsey Hightower — https://github.com/kelseyhightower Joe Beda — https://www.vmware.com/latam/company/leadership/joe-beda.html Azure — https://azure.microsoft.com/en-us/ CloudFoundry — https://www.cloudfoundry.org/ JAY Z — https://lifeandtimes.com/ OpenStack — https://www.openstack.org/ OpenShift — https://www.openshift.com/ KubeVirt — https://kubevirt.io/ VMware — https://www.vmware.com/ Chef and Puppet — https://www.chef.io/puppet/ tgik.io — https://www.youtube.com/playlist?list=PL7bmigfV0EqQzxcNpmcdTJ9eFRPBe-iZa Matthias Endler: Maybe You Don't Need Kubernetes - https://endler.dev/2019/maybe-you-dont-need-kubernetes Martin Tournoij: You (probably) don’t need Kubernetes - https://www.arp242.net/dont-need-k8s.html Scalar Software: Why most companies don't need Kubernetes - https://scalarsoftware.com/blog/why-most-companies-dont-need-kubernetes GitHub: Kubernetes at GitHub - https://github.blog/2017-08-16-kubernetes-at-github Debugging network stalls on Kubernetes - https://github.blog/2019-11-21-debugging-network-stalls-on-kubernetes/ One year using Kubernetes in production: Lessons learned - https://techbeacon.com/devops/one-year-using-kubernetes-production-lessons-learned Kelsey Hightower Tweet: Kubernetes is a platform for building platforms. It's a better place to start; not the endgame - https://twitter.com/kelseyhightower/status/935252923721793536?s=2 Transcript: EPISODE 18 [INTRODUCTION] [0:00:08.7] ANNOUNCER: Welcome to The Podlets Podcast, a weekly show that explores Cloud Native one buzzword at a time. Each week, experts in the field will discuss and contrast distributed systems concepts, practices, tradeoffs and lessons learned to help you on your cloud native journey. This space moves fast and we shouldn’t reinvent the wheel. If you’re an engineer, operator or technically minded decision maker, this podcast is for you. [EPISODE] [0:00:41.9] JR: Hello everyone and welcome to The Podlets Podcast where we are going to be talking about should I Kubernetes? My name is Josh Rosso and I am very pleased to be joined by, Carlisia Campos. [0:00:55.3] CC: Hi everybody. [0:00:56.3] JR: Duffy Cooley. [0:00:57.6] DC: Hey folks. [0:00:58.5] JR: And Brian Lyles. [0:01:00.2] BL: Hi. [0:01:03.1] JR: All right everyone. I’m really excited about this episode because I feel like as Kubernetes has been gaining popularity over time, it’s been getting its fair share of promoters and detractors. That’s fair for any piece of software, right? I’ve pulled up some articles and we put them in the show notes about some of the different perspectives on both success and perhaps failures with Kub. But before we dissect some of those, I was thinking we could open it up more generically and think about based on our experience with Kubernetes, what are some of the most important things that we think Kubernetes solves for? [0:01:44.4] DC: All right, my list is very short and what Kubernetes solves for my point of view is that it allows or it actually presents an interface that knows how to run software and the best part about it is that it doesn’t – the standard interface. I can target Kubernetes rather than targeting the underlying hardware. 
I know certain things are going to be there, I know certain networking’s going to be there. I know how to control memory and actually, that’s the only reason that I really would give, say for Kubernetes, we need that standardization and you don’t want to set up VM’s, I mean, assuming you already have a cluster. This simplifies so much. [0:02:29.7] BL: For my part, I think it’s life cycle stuff that’s really the biggest driver for my use of it and for my particular fascination with it. I’ve been in roles in the past where I was responsible for ensuring that some magical mold of application on a thousand machines would magically work and I would have all the dependencies necessary and they would all agree on what those dependencies were and it would actually just work and that was really hard. I mean, getting to like a known state in that situation, it’s very difficult. Having something where either both the abstractions of containers and the abstraction of container orchestration, the ability to deploy those applications and all those dependencies together and the ability to change that application and its dependencies, using an API. That’s the killer part for me. [0:03:17.9] CC: For me, from a perspective of a developer is very much what Duffy just said but more so the uniformity that comes with all those bells and whistles that we get by having that API and all of the features of Kubernetes. We get such a uniformity across such a really large surface and so if I’m going to deploy apps, if I’m going to allow containers, what I have to do for one application is the same for another application. If I go work for another company, that uses Kubernetes, it is the same and if that Kubernetes is a hosted Kubernetes or if it’s a self-managed, it will be the same. I love that consistency and that uniformity that even so I can – there are many tools that help, they are customized, there’s help if you installing and composing specific things for your needs. But the understanding of what you were doing is it’s the same, right? I can use different tools and it might look different and they will have different commands but what I’m actually doing, it doesn’t change and my understanding of what I’m doing doesn’t change. I love that. Being able to do my work in the same way, I wish, you know, if that alone for me makes it worthwhile. [0:04:56.0] JR: Yeah, I think like my perspective is pretty much the same as what you all said and I think the one way that I kind of look at it too is Kubernetes does a better job of solving the concerns you just listed, then I would probably be able to build myself or my team would be able to solve for ourselves in a lot of cases. I’m not trying to say that specialization around your business case or your teams isn’t appropriate at times, it’s just at least for me, to your point Carlisia, I love that abstraction that’s consistent across environments. It handles a lot of the things, like Brian was saying, about CPU, memory, resources and thinking through all those different pieces. I wanted to take what we just said and maybe turn it a bit at some of the common things that people run in to with Kubernetes and just to maybe hit on a piece of low hanging fruit that I think is oftentimes a really fair perspective is Kubernetes is really hard to operate. Sure, it gives you all the benefits we just talked about but managing a Kubernetes cluster? That is not a trivial task. And I just wanted to kind of open that perspective up to all of us, you know? 
What are your thoughts on that? [0:06:01.8] DC: Well, the first thought is it doesn’t have to be that way. I think that’s a fallacy that a lot of people fall into, it’s hard. Guess what? That’s fine, we’re in the sixth year of Kubernetes, we’re not in the sixth year of stability of a stable release. It’s hard to get started with Kubernetes and what happens is we use that as an excuse to say well, you know what? It’s hard to get started with so it’s a failure. You know something else that was hard to get started with? Whenever I started with it in the 90s? Linux. You download it and downloading it on 30 floppy disks. There was the download corruption, real things, Z modem, X modem, Y modem. This is real, a lot of people don’t know about this. And then, you had to find 30 working flopping disk and you had to transfer 30, you know, one and a half megabyte — and it still took a long time to floppy disk and then you had to run the installer. And then most likely, you had to build a kernel. Downloading, transferring, installing, building a kernel, there was four places where just before you didn’t have windows, this was just to get you to a log in prompt, that could fail. With Kubernetes, we had this issue. People were installing Kubernetes, there’s cloud vendors who are installing it and then there’s people who were installing it on who knows what hardware. Guess what? That’s hard and it’s not even now, it’s not even they physical servers that’s networking. Well, how are you going to create a network that works across all your servers, well you’re going to need an overlay, which one are you going to use, Calico? Use Weave? You’re going to need something else that you created or something else if it works. Yeah, just we’re still figuring out where we need to be but these problems are getting solved. This will go away. [0:07:43.7] BL: I’m living that life right now, I just got a new laptop and I’m a Linux desktop kind of guy and so I’m doing it right now. What does it take to actually get a recent enough kernel that the hardware that is shipped with this laptop is supported, you know? It’s like, those problems continue, even though Linux has been around and considered stable and it’s the underpinning of much of what we do on the internet today, we still run into these things, it’s still a very much a thing. [0:08:08.1] CC: I think also, there’s a factor of experience, for example. This is not the first time you have to deal with this problem, right Duffy? Been using Linux on a desktop so this is not the first hardware that you had to setup Linux on. So you know where to go to find that information. Yeah, it’s sort of a pain but it’s manageable. I think a lot of us are suffering from gosh, I’ve never seen Kubernetes before, where do I even start and – or, I learned Kubernetes but it’s quite burdensome to keep up with everything as opposed to let’s say, if 10 years from now, we are still doing Kubernetes. You’ll be like yeah, okay, whatever. This is no big deal. So because we have done these things for a few years that we were not possibly say that it’s hard. I don’t’ think we would describe it that way. [0:09:05.7] DC: I think there will still be some difficulty to it but to your point, it’s interesting, if I look back like, five years ago, I was telling all of my friends. Look, if you’re a system’s administrator, go learn how to do other things, go learn how to be, go learn an API centric model, go play with AWS, go play with tools like this, right? 
If you’re a network administrator, learn to be a system’s administrator but you got to branch out. You got to figure out how to ensure that you’re relevant in the coming time. With all the things that are changing, right? This is true, I was telling my friend this five years ago, 10 years ago, continues, I continue to tell my friends that today. If I look at the Kubernetes platform, the complexity that represents in operating it is almost tailor made to those people though did do that, that decided to actually branch out and to understand why API’s are interesting and to understand, you know, can they have enough of an understanding in a generalist way to become a reasonable systems administrator and a network administrator and you know, start actually understanding the paradigms around distributed systems because those people are what we need to operate this stuff right now, we’re building – I mean, Kubernetes is a distributed system, we need people with expertise across that field, across that whole grouping of technologies. [0:10:17.0] BL: Or, don’t. Don’t do any of that. [0:10:19.8] CC: Brian, let me follow up on that because I think it’s great that you pointed that out Duffy. I was thinking precisely in terms of being a generalist and understanding how Kubernetes works and being able to do most of it but it is so true that some parts of it will always be very complex and it will require expertise. For example, security. Dealing with certificates and making sure that that’s working, if you want to – if you have particular needs for networking, but, understanding the whole idea of this systems, as it sits on top of Kubernetes, grasping that I think is going to – have years of experience under their belt. Become relatively simple, sorry Brian that I cut you off. [0:11:10.3] BL: That’s fine but now you gave me something else to say in addition to what I was going to say before. Here’s the killer. You don’t need to know distributed systems to use Kubernetes. Not at all. You can use a deployment, you can use a [inaudible] set, you can run a job, you can get workloads up on Kubernetes without having to understand that. But, Kubernetes also gives you some good constructs either in the Kubernetes API's itself or in its client libraries where you could build distributed systems in easier way but what I was going to say before that though is I can’t build a cluster. Well don’t. You know what you should do? Use a cloud vendor, use AWS, use Google, use Microsoft or no, I mean, did I say Microsoft? Google and Microsoft. Use Digital Ocean. There’s other people out there that do it as well, they can take care of all the hard things for you and three, four minutes or 10 minutes if you’re on certain clouds, you can have Kubernetes up and running and you don’t even have to think about a lot of these networking concerns to get started. I think that’s a little bit of the thud that we hear, "It’s hard to install." Well, don’t install it, you install it whenever you have to manage your own data centers. Guess what? When you have to manage your own data centers and you’re managing networking and storage, there’s a set of expertise that you already have on staff and maybe they don’t want to learn a new thing, that’s a personal problem, that’s not really a Kubernetes problem. Let’s separate those concerns and not use our lack or not wanting to, to stop us from actually moving forward. [0:12:39.2] DC: Yeah. Maybe even taking that example step forward. 
I think where this problem compounds or this perspective sometimes compounds about Kubernetes being hard to operate is coming from of some shops who have the perspective of are operational concerns today, aren’t that complex. Why are we introducing this overhead, this thing that we maybe don’t need and you know, to your point Brian, I wonder if we’d all entertain the idea, I’m sure we would that maybe even, speaking to the cloud vendors, maybe even just a Heroku or something. Something that doesn’t even concern itself with Kube but can get your workload up and running and successful as quickly as possible. Especially if you’re like, maybe a small startup type persona, even that’s adequate, right? It could have been not a failure of Kubernetes but more so choosing the wrong tool for the job, does that resonate with you all as well, does that make sense? [0:13:32.9 DC: Yeah, you know, you can’t build a house with a screwdriver. I mean, you probably could, it would hurt and it would take a long time. That’s what we’re running into. What you’re really feeling is that operationally, you cannot bridge the gap between running your application and running your application in Kubernetes and I think that’s fair, that’s actually a great thing, we prove that the foundations are stable enough that now, we can actually do research and figure out the best ways to run things because guess what? RPM’s from Red Hat and then you have devs from the Debian project, different ways of getting things, you have Snap from Canonical, it works and sometimes it doesn’t, we need to actually figure out those constructs in Kubernetes, they’re not free. These things did not exist because someone says, "Hey, I think we should do this." Many years. I was using RPM in the 90s and we need to remember that. [0:14:25.8] JR: On that front, I want to maybe point a question to you Duffy, if you don’t mind. Another big concern that I know you deal with a lot is that Kubernetes is great. Maybe I can get it up no problem. But to make it a viable deployment target at my organization, there’s a lot of work that goes into it to make a Kubernetes cluster production ready, right? That could be involving how you integrate storage and networking and security and on and on. I feel like we end up at this tradeoff of it’s so great that Kubernetes is super extensible and customizable but there is a certain amount of work that that kind of comes with, right? I’m curious Duff, what’s your perspective on that? [0:15:07.3] DC: I want to make a point that bring back to something Brian mentioned earlier, real quick, before I go on to that one. The point is that, I completely agree that yo do not have to actually be a distributed systems person to understand how to use Kubernetes and if that were a bar, we would have set that bar and incredibly, the inappropriate place. But from the operational perspective, that’s what we were referring to. I completely also agree that especially when we think about productionalizing clusters, if you’re just getting into this Kubernetes thing, it may be that you want to actually farm that out to another entity to create and productionalize those clusters, right? You have a choice to make just like you had a choice to make what when AWS came along. Just like you had a choice to make — we’re thinking of virtual machines, right? You have a choice and you continue to have a choice about how far down that rabbit hole as an engineering team of an engineering effort your company wants to go, right? 
Do you want to farm everything out to the cloud and not have to deal with the operations, the day to day operations of those virtual machines and take the constraints that have been defined by that platform, or do you want to operate that stuff locally, are you required by the law to operate locally? What does production really mean to you and like, what are the constraints that you actually have to satisfy, right? I think that given that choice, when we think about how to productionalize Kubernetes, it comes down to exactly that same set of things, right? Frequently, productionalizing – I’ve seen a number of different takes on this and it’s interesting because I think it’s actually going to move on to our next topic in line here. Frequently I see that productionizing or productionalizing Kubernetes means to provide some set of constraints around the consumption of the platform such that your developers or the folks that are consuming that platform have to operate within those rails, right? They could only define deployments and they can only define deployments that look like this. We’re going to ask them a varied subset of questions and then fill out all the rest of it for them on top of Kubernetes. The entry point might be CICD, it might be a repository, it might be code repository, very similar to a Heroku, right? The entry point could be anywhere along that thing and I’ve seen a number of different enterprises explore different ways to implement that. [0:17:17.8] JR: Cool. Another concept that I wanted to maybe have us define and think about, because I’ve heard the term platform quite a bit, right? I was thinking a little bit about you know, what the term platform means exactly? Then eventually, whether Kubernetes itself should be considered a platform. Backing up, maybe we could just start with a simple question, for all of us, what makes something a platform exactly? [0:17:46.8] BL: Well, a platform is something that provides something. That is a Bryan Liles exclusive. But really, what it is, what is a platform, a platform provides some kind of service that can be used to accomplish some task and Kubernetes is a platform and that thing, it provides constructs through its API to allow you to perform tasks. But, Kubernetes is not just a platform. Kubernetes is a platform for building platforms. The things that Kubernetes provides, the workload API, the networking API, the configuration and storage API’s. What they provide is a facility for you to build higher level constructs that control how you want to run the code and then how you want to connect the applications. Yeah, Kubernetes is actually a platform for platforms. [0:18:42.4] CC: Wait, just to make sure, Bryan. You’re saying, because Kelsey Hightower for example is someone who says Kubernetes is a platform of platforms. Now, is Kubernetes both a platform of platforms, at the same time that it’s also a platform to run apps on? [0:18:59.4] BL: It’s both. Kelsey tweeted that there is some controversy on who said that first, it could have been Joe Beda, it could have been Kelsey. I think it was one of those two so I want to give a shout out to both of those for thinking in the same line and really thinking about this problem. But to go back to what you said, Carlisia, is it a platform for providing platforms and a platform? Yes, I will explain how. If you have Kubernetes running and what you can do is you can actually talk to the API, create a deployment. That is a platform for running a workload. 
But, also what you can do is you can create through Kubernetes API mechanisms, ie. CRD’s, custom resource definitions. You can create custom resources that I want to have something called an application. You can basically extend the Kubernetes API. Not only is Kubernetes allowing you to run your workloads, it’s allowing you to specify, extend the API, which then in turn can be run with another controller that’s running on your platform that then gives you this thing when you cleared an application. Now, it creates deployment which creates a replica set, which creates a pod, which creates containers, which downloads images from a container registry. It actually is both. [0:20:17.8] DC: Yeah, I agree with that. Another quote that I remember being fascinated by which I think kind of also helps define what a platform is Kelsey put on out quote that said, Everybody wants platform at a service with the only requirement being that they’ve built it themselves." Which I think is awesome and it also kind of speaks, in my opinion to what I think the definition of a platform is, right? It’s an interface through which we can define services or applications and that interface typically will have some set of constraints or some set of workflows or some defined user experience on top of it. To Brian's point, I think that Kubernetes is a platform because it provides you a bunch of primitive s on the back end that you can use to express what that user experience might be. As we were talking earlier about what does it take to actually – you might move the entry point into this platform from the API, the Kubernetes API server, back down into CICD, right? Perhaps you're not actually defining us and called it a deployment, you’re just saying, I want so many instances off this, I don’t want it to be able to communicate with this other thing, right? It becomes – so my opinion, the definition about of a platform it is that user experience interface. It’s the constraints that we know things that you're going to put on top of that platform. [0:21:33.9] BL: I like that. I want to throw out a disclaimer right here because we’re here, because we’re talking about platforms. Kubernetes is not a platform, it’s as surface. That is actually, that’s different, a platform as a service is – from the way that we look at it, is basically a platform that can run your code, can actually make your code available to external users, can scale it up, can scale it down and manages all the nuances required for that operation to happen. Kubernetes does not do that out of the box but you can build a platform as a surface on Kubernetes. That’s actually, I think, where we’ll be going next is actually people, stepping out of the onesy-twosy, I can deploy a workload, but let’s actually work on thinking about this level. And I’ll tell you what. DEUS who got bought by Azure a few years ago, they actually did that, they built a pass that looks like Heroku. Microsoft and Azure thought that was a good idea so they purchased them and they’re still over there, thinking about great ideas but I think as we move forward, we will definitely see different types of paths on Kubernetes. The best thing is that I don’t think we’ll see them in the conventional sense of what we think now. We have a Heroku, which is like the git-push Heroku master, we share code through git. 
And then we have CloudFoundry idea of a paths which is, you can run CFPush and that actually is more of an extension of our old school Java applications, where we could just push [inaudible] here but I think at least I am hoping and this is something that I am actually working on not to toot my own horn too much but actually thinking about how do we actually – can we build a platform as a service toolkit? Can I actually just build something that’s tailing to my operation? And that is something that I think we’ll see a lot more in the next 18 months. At least you will see it from me and people that I am influencing. [0:23:24.4] CC: One thing I wanted to mention before we move onto anything else, in answering “Is Kubernetes right for me?” We are so biased. We need to play devil’s advocate at some point. But in answering that question that is the same as in when we need to answer, “Is technology x right for me?” and I think there is at a higher level there are two camps. One camp is very much of the thinking that, "I need to deliver value. I need to allow my software and if the tools I have are solving my problem I don’t need to use something else. I don’t need to use the fancy, shiny thing that’s the hype and the new thing." And that is so right. You definitely shouldn't be doing that. I am divided on this way of thinking because at the same time at that is so right. You do have to be conscious of how much money you’re spending on things and anyway, you have to be efficient with your resources. But at the same time, I think that a lot of people who don’t fully understand what Kubernetes really can do and if you are listening to this, if you maybe could rewind and listen to what Brian and Duffy were just saying in terms of workflows and the Kubernetes primitives. Because those things they are so powerful. They allow you to be so creative with what you can do, right? With your development process, with your roll out process and maybe you don’t need it now. Because you are not using those things but once you understand what it is, what it can do for your used case, you might start having ideas like, “Wow, that could actually make X, Y and Z better or I could create something else that could use these things and therefore add value to my enterprise and I didn’t even think about this before.” So you know two ways of looking at things. [0:25:40.0] BL: Actually, so the topic of this session was, “Should I Kubernetes” and my answer to that is I don’t know. That is something for you to figure out. If you have to ask somebody else I would probably say no. But on the other side, if you are looking for great networking across a lot of servers. If you are looking for service discovery, if you are looking for a system that can restart workloads when they fail, well now you should probably start thinking about Kubernetes. Because Kubernetes provides all of these things out of the box and are they easy to get started with though? Some of these things are harder. Service discovery is really easy but some of these things are a little bit harder but what Kubernetes does is here comes my hip-hop quote, Jay Z said this, basically he’s talking about difficult things and he basically wants difficult things to take a little bit of time and impossible things or things we thought that were impossible to take a week. So basically making difficult things easy and making things that you could not even imagine doing, attainable. 
And I think that is what Kubernetes brings to the table then I’ll go back and say this one more time. Should you use Kubernetes? I don’t know that is a personal problem that is something you need to answer but if you’re looking for what Kubernetes provides, yes definitely you should use it. [0:26:58.0] DC: Yeah, I agree with that I think it is a good summary there. But I also think you know coming back to whether you should Kubernetes part, from my perspective the reason that I Kubernetes, if you will, I love that as a verb is that when I look around at the different projects in the infrastructure space, as an operations person, one of the first things I look for is that API that pattern around consumption, what's actually out there and what’s developing that API. Is it a the business that is interested in selling me a new thing or is it an API that’s being developed by people who are actually trying to solve real problems, is there a reasonable way to go about this. I mean when I look at open stack, OpenStack was exactly the same sort of model, right? OpenStack existed as an API to help you consume infrastructure and I look at Kubernetes and I realize, “Wow, okay well now we are developing an API that allows us to think about the life cycle and management of applications." Which moves us up the stack, right? So for my part, the reason I am in this community, the reason I am interested in this product, the reason I am totally Kubernetes-ing is because of that. I realized that fundamentally infrastructure has to change to be able to support the kind of load that we are seeing. So whether you should Kubernetes, is the API valuable to you? Do you see the value in that or is there more value in continuing whatever paradigm you’re in currently, right? And judging that equally I think is important. [0:28:21.2] JR: Two schools of thoughts that I run into a lot on the API side of thing is whether overtime Kubernetes will become this implementation detail, where 99% of users aren’t even aware of the API to any extent. And then another one that kind of talks about the API is consistent abstraction with tons of flexibility and I think companies are going in both directions like OpenShift from Red Hat is perhaps a good example. Maybe that is one of those layer two platforms more so Brian that you were talking about, right? Where Kubernetes is the platform that was used to build it but the average person that interacts with it might not actually be aware of some of the Kubernetes primitives and things like that. So if we could all get out of our crystal balls for a second here, what do you all think in the future? Do you see the Kubernetes API becoming just a more prevalent industry standard or do you see it fading away in favor of some other abstraction that makes it easier? [0:29:18.3] BL: Oh wow, well I already see it as I don’t have to look too far in the future, right? I can see the Kubernetes API being used in ways that we could not imagine. The idea that I will think of is like KubeVirt. KubeVirt allows you to boot basically pods on whatever implements that it looks like a Kubelet. So it looks like something that could run pods. But the neat thing is that you can use something like KubeVirt with a virtual Kubelet and now you can boot them on other things. So ideas in that space, I don’t know VMware is actually going on that, “Wow, what if we can make virtual machines look like pods inside of Kubernetes? Pretty neat." 
Azure has definitely led work on this as now, we can just bring up either bring up containers, we can bring up VM’s and you don’t actually need a Kube server anymore. Now but the crazy part is that you can still use a workloads API’s, storage API’s with Kubernetes and it does not matter what backs it. And I’ll throw out one more suggestion. So there is also projects like AWS operators in [inaudible] point and what they allow you to do is to use the Kubernetes API or actually in cluster API, I'll use all three. But I use the Kubernetes API to boot things that aren’t even in the cluster and this will be AWS services or this could be databases across multiple clouds or guess what? More Kubernetes services. Yeah, so we are on that path but I just can’t wait to see what people are going to do with that. The power of Kubernetes is this API, it is just so amazing. [0:30:50.8] DC: For my part, I think is that I agree that the API itself is being extended in all kinds of amazing ways but I think that as I look around in the crystal ball, I think that the API will continue to be foundational to what is happening. If I look at the level two or level three platforms that are coming, I think those will continue to be a thing for enterprises because they will continue to innovate in that space and then they will continue to consume the underlying API structure and that portability Kubernetes exposes to define what that platform might look like for their own purpose, right? Giving them the ability to effectively have a platform as a service that they define themselves but using and under – you know, using a foundational layer that it’s like consistent and extensible and extensive I think that that’s where things are headed. [0:31:38.2] CC: And also more visual tools, I think is in our future. Better, actual visual UI's that people can use I think that’s definitely going to be in our future. [0:31:54.0] BL: So can I talk about that for a second? [0:31:55.9] CC: Please, Brian. [0:31:56.8] BL: I am wearing my octant hoodie today, which is a visual tool for Kubernetes and I will talk now as someone who has gone down this path to actually figure this problem out. As a prediction for the future, I think we’ll start creating better API’s in Kubernetes to allow for more visual things and the reason that I say that this is going to happen and it can’t really happen now is because for inside of an octant and whenever creating new eye views, pretty much happened now what that optic is. But what is going to happen and I see the rumblings from the community, I see the rumblings from K-native community as well is that we are going to start standardizing on conditions and using conditions as a way that we can actually say what’s going on. So let me back it up for a second so I can explain to people what conditions are. So Kubernetes, we think of Kubernetes as YAML and in a typical object in Kubernetes, you are going to have your type meta data. What is this, you are going to have your object meta data, what’s name this and then you are going to have a spec, how is this thing configured and then you are going to have a status and the status generally will say, “Well what is the status of this object? Is it deployment? How many references out? If it is a pod, am I ready to go?" But there is also this concept and status called conditions, which are a list of things that say how your thing, how your object is working. And right now, Kubernetes uses them in two ways, they use them in the negative way and the positive way. 
I think we are actually going to figure out which one we want to use and we are going to see more API’s just say conditions. And now from a UI developer, from my point of view, now I can just say, “I don’t really care what your optic is. You are going to give me conditions in a format that I know and I can just basically report on those in the status and I can tell you if the thing is working or not.” That is going to come too. And that will be neat because that means that we get basically, we can start building UI’s for free because we just have to learn the pattern. [0:33:52.2] CC: Can you talk a little bit more about conditions? Because this is not something I hear frequently and that I might know but then not know what you are talking about by this name. [0:34:01.1] BL: Oh yeah, I will give you the most popular one. So everything in Kubernetes is an object and that even means that the nodes that your workloads run on, are objects. If you run KubeControl, KubeCuddle, Kube whatever, git nodes, it will show you all the nodes in your cluster if you have permission to see that and if you do KubeCTL, gitnode, node name and then you actually have the YAML output what you will see in the bottom is an object called 'conditions'. And inside of there it will be something like is there sufficient memory, is the node – I actually don’t remember all of them but really what it is, they’re line items that say how this particular object is working. So do I have enough memory? Do I have enough storage? Am I out of actual pods that can be launched on me and what conditions are? It is basically saying, “Hey Brian, what is the weather outside?” I could say it's nice. Or I could be like, “Well, it’s 75 degrees, the wind is light but variable. It is not humid and these are what the conditions are.” They allow the object to specify things about itself that might be useful to someone who is consuming it. [0:35:11.1] CC: All right that was useful. I am actually trying to bring one up here. I never paid attention to that. [0:35:18.6] BL: Yeah and you will see it. So the two ones that are most common right now, there is some competition going on in Kubernetes architecture, trying to figure out how they are going to standardize on this but with pods and nodes you will see conditions on there and those are just telling you what is going on but the problem is that a condition is a type, a message, a status and something else but the problem is that the status can be true of false — oh and a reason, the status can be true or false but sometimes the type is a negative type where it would be like “node not ready”. And then it will say false because it is. And now whenever you’re inspecting that with automated code, you really want the positive condition to be true and the negative condition to be false and this is something that the K-native community is really working on now. They have the whole facility of this thing called duck typing. Which they can actually now pattern-match inside of optics to find all of these neat things. It is actually pretty intriguing. [0:36:19.5] CC: All right, it is interesting because I very much status is everything for objects and that is very much a part of my work flow. But I never noticed that there was some of the objects had conditions. I never noticed that and just a plug, we are very much going to have the K-native folks here to talk about duck typing. I am really excited about that. [0:36:39.9] BL: Yeah, they’re on my team. They’ll be happy to come. 
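As a concrete illustration of the conditions Bryan describes here: running kubectl get node <node-name> -o yaml prints a status.conditions list on the node object. The sketch below is trimmed, and the exact condition types, reasons and timestamps vary by Kubernetes version, but it shows the positive/negative asymmetry he mentions — MemoryPressure and DiskPressure are "negative" conditions that are healthy when False, while Ready is a "positive" condition that is healthy when True.

status:
  conditions:
  - type: MemoryPressure          # negative-style condition: healthy when status is "False"
    status: "False"
    reason: KubeletHasSufficientMemory
    message: kubelet has sufficient memory available
  - type: DiskPressure            # negative-style condition: healthy when status is "False"
    status: "False"
    reason: KubeletHasNoDiskPressure
    message: kubelet has no disk pressure
  - type: Ready                   # positive-style condition: healthy when status is "True"
    status: "True"
    reason: KubeletReady
    message: kubelet is posting ready status

Tooling like the Octant views Bryan mentions can report on any object that exposes conditions in this shape without knowing anything else about the object, which is why standardizing on the pattern matters.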
[0:36:42.2] CC: Oh yes, they are awesome. [0:36:44.5] JR: So I was thinking maybe we could wrap this conversation up and I think we have acknowledged that “Should I Kubernetes?” is a ridiculously hard question for us to answer for you and we should clearly not be the ones answering it for you but I was wondering if we could give some thoughts around — for the Podlet listener who is sitting at their desk right now thinking like, “Is now the right time for my organization to bring this in?” And I will start with some thought and then open it all up to you. So one common thing I think that I run into a lot is you know your current state and you know your desired state to steal a Kubernetes concept for a moment. And the desired state might be more decoupled services that are more scalable and so on and I think oftentimes at orgs we get a little bit too obsessed with the desired state that we forget about how far the gap is between the current state and the desired state. So as an example, you know maybe your shop’s biggest issue is the primary revenue generating application is a massive dot-net framework monolith, which isn’t exactly that easy to just port over into Kubernetes, right? So if a lot of your friction right now is teams collaborating on this tool, updating this tool, scaling this tool, maybe before even thinking about Kubernetes, being honest with the fact that a lot of value can be derived right now from some amount of application architecture changes. Or even sorry to use a buzzword but some amount of modernization of aspects of that application before you even get to the part of introducing Kubernetes. So that is one common one that I run into with orgs. What are some other kind of suggestion you have for people who are thinking about, “Is it the right time to introduce Kube?” [0:38:28.0] BL: So here is my thought, if you work for a small startup and you’re working on shipping value and you have no Kubernetes experience and staff and you don’t want to use for some reason you don’t want to use the cloud, you know go figure out your other problems then come back. But if you are an enterprise and especially if you work in a central enterprise group and you are thinking about “modernization”, I actually do suggest that you look at Kubernetes and here is the reason why. My guess is that if you’re a business of a certain size, you run VMware in your data center. I am just guessing that because I haven’t been to a company that doesn’t. Because we learned a long time ago that using virtual machines in many cases is way more efficient than just running hardware because what happens is we can’t use our compute capacity. So if you are working for a big company or even like a medium sized company, I don’t think – I am not telling you to run for it but I am telling you to at least have someone go look at it and investigate if this could ultimately be something that could make your stack easier to run. [0:39:31.7] DC: I think I am going to take the kind of the operations perspective. I think if you are in the business of coming up with a way to deploy applications on the servers and you are looking at trying to handle the lifecycle of that and you’re pretty fed up with the tooling that is out there and things like Puppet and Chef and tooling like that and you are looking to try and understand is there something in Kubernetes for me? 
Is there some model that could help me improve the way that I actually handle a lifecycle of those applications, be they databases or monoliths or compostable services? Any which way you want to look at it like are there tools there that can be expressed. Is the API expressive enough to help me solve some of those problems? In my opinion the answer is yes. I look at things like DaemonSet and the things like scheduling [inaudible] that are exposed by Kubernetes. And there is actually quite a lot of power there, quite a lot of capability in just the traditional model of how do I get this set of applications onto that set of servers or some subset they’re in. So I think it is worth evaluating if that is the place you’re in as an organization and if you are looking at fleets of equipment and trying to handle that magical recipe of multiple applications and dependencies and stuff. See what is the water is like on this side, it is not so bad. [0:40:43.1] CC: Yes, I don’t think there is a way to answer this question. It is Kubernetes for me without actually trying it, giving it a try yourself like really running something of maybe low risk. We can read blogposts to the end of the world but until you actually do it and explore the boundaries is what I would say, try to learn what else can you do that maybe you don’t even need but maybe might become useful once you know you can use. Yeah and another thing is maybe if you are a shop that has one or two apps and you don’t need full blown, everything that Kubernetes has to offer and there is a much more scaled down tool that will help you deploy and run your apps, that’s fine. But if you have more, a certain number, I don’t know what that number would be but multiple apps and multiple services just think about having that uniformity across everything. Because for example, I’ve worked in shops where the QA machines were taking care by a group of dev ops people and the production machines, oh my god they were taken care by other groups and now the different group of people and the two sides of these groups used were different and I as a developer, I had to know everything, you know? How to deploy here, how to deploy there and I had to have my little notes and recipes because whenever I did it – First of all I wasn’t doing that multiple times a day. I had to read through the notes to know what to do. I mean just imagine if it was one platform that I was deploying to with the CLI comments there, it is very easy to use like Kubernetes has, gives us with Kubes ETL. You know you have to think outside of the box. Think about these other operations that you have that people in your company are going to have to do. How is this going to be taught in the future? Having someone who knows your stack because your stack is the same that people in your industry are also using. I think about all of these things not just – I think people have to take it across the entire set of problems. [0:43:01.3] BL: I wanted to mention one more thing and this is we are producing lots of content here with The Podlets and with our coworkers. So I want to actually give a shout out to the TGIK. We want to know what you can do in Kubernetes and you want to have your imagination expanded a little bit. Every Friday we make a new video and actually funny enough, three fourths of the people on this call have actually done this. Where, on Friday, we pick a topic and we go in and it might be something that would be interesting to you or it might not and we are all over the place. 
We are not just doing applications but we are applications low level, mapping applications on Kubernetes, new things that just came out. We have been doing this for a 101 episodes now. Wow. So you can go look at that if you need some examples of what things you could do on Kubernetes. [0:43:51.4] CC: I am so glad to tgik.io maybe somebody, an English speaker should repeat that because of my accent but let me just say I am so glad you mentioned that Brian because I was sitting here as we are talking and thinking there should be a catalog of use cases of what Kubernetes can do not just like the rice and beans but a lot of different use cases, maybe things that are unique that people don’t think about to use because they haven’t run into that need yet. But they could use it as a pause, okay that would enable me to do these things that I didn’t even think about. That is such a great catalog of use cases. It is probably the best resource. Somebody say the website again? Duffie what is it? [0:44:38.0] DC: tgik.io and it is every Friday at 1 PM Pacific. [0:44:43.2] CC: And it is live. It’s live and it’s recorded, so it is uploaded to the VMware Cloud Native YouTube and everything is going to be on the show notes too. [0:44:52.4] DC: It’s neat, you can come ask us questions there is a live chat inside of that and you can use that live chat. You can ask us questions. You can give us ideas, all kinds of crazy things just like you can with The Podlets. If you have an idea for an episode or something that you want us to cover or if you have something that you are interested in, you can go to thepodlets.io that will link you to our GitHub pages where you can actually open an issue about things you’d love to hear more about. [0:45:15.0] JR: Awesome and then maybe on that note, Podlets, is there anything else you all would like to add on “Should I Kubernetes?” or do you think we’ve – [0:45:22.3] BL: As best as our bias will allow it I would say. [0:45:27.5] JR: As best as we can. [0:45:27.9] CC: We could go another hour. [0:45:29.9] JR: It’s true. [0:45:30.8] CC: Maybe we’ll have “Should I Kubernetes?” Part 2. [0:45:34.9] JR: All right everyone, well that wraps it up for at least Part 1 of “Should I Kubernetes?” and we appreciate you listening. Thanks so much. Be sure to check out the show notes as Duffie mentioned for some of the articles we read preparing for this episode and TGIK links and all that good stuff. So again, I am Josh Rosso signing out, with us also Carlisia Campos. [0:45:55.8] CC: Bye everybody, it was great to be here. [0:45:57.7] JR: Duffie Cooley. [0:45:58.5] DC: Thank you all. [0:45:59.5] JR: And Bryan Liles. [0:46:00.6] BL: Until next time. [0:46:02.1] JR: Bye. [END OF EPISODE] [0:46:03.5] ANNOUNCER: Thank you for listening to The Podlets Cloud Native Podcast. Find us on Twitter at https://twitter.com/ThePodlets and on the http://thepodlets.io/ website, where you'll find transcripts and show notes. We'll be back next week. Stay tuned by subscribing. [END] See omnystudio.com/listener for privacy information.

Virtual Stack
30 - Virtual Stack - VMware Tanzu and Project Pacific

Virtual Stack

Play Episode Listen Later Jan 17, 2020 55:50


Back in August 2019, Pat Gelsinger and Joe Beda were on stage at VMworld US to announce VMware Tanzu and Project Pacific, the products and solutions that are designed to change the way customers build, run and manage their Kubernetes-based applications. On the 30th episode of the Virtual Stack Podcast, I’m joined by Scott Lowe (Staff Kubernetes Architect at VMware, author, blogger and fellow podcaster) and we discuss these recent announcements and check under the hood. Hope you enjoy the show. You can reach out to Scott and follow his work on his blog and Twitter. Scott hosts a popular podcast called Full Stack Journey and he’s also contributing to the KubeAcademy initiative. Virtual Stack is available on all major apps: Apple Podcast, Spotify, Google Podcast, Stitcher and more. As usual, feel free to share your feedback via Twitter (@emregirici), LinkedIn or virtualstack.tech. Links: Live announcement at VMworld US (33:00 - 43:00) Project Pacific product page VMware Tanzu product page Scott's author profile; the books that he authored and co-authored

The Podlets - A Cloud Native Podcast
CI and CD in Cloud Native (Ep 11)

The Podlets - A Cloud Native Podcast

Play Episode Listen Later Jan 6, 2020 43:15


A warm welcome to John Harris who will be joining us for his first time on the show today to discuss our exciting topic, CI and CD in cloud native! CI and CD are two terms that usually get spoken about together but are actually two different things entirely if you think about them. We begin by getting into exactly what these differences are, highlighting the regulatory aspects of CD in contrast to the future-focussed nature of CI. We then move on to a deep exploration of their benefits in optimizing processes in cloud native space through automation and surveillance from development to production environments. You’ll hear about the benefits of automatic building in container orchestration, the value of make files and local test commands, and the evolution of CI from its ‘rubber chicken’ days with Martin Fowler and Jez Humble. We take a deep dive into the many ways that containers differ from regular binary as far as deployment methods, build speed, automation, run targets, realtime reflections of changes, and regulation. Moreover, we talk to the challenges of transitioning between testing and production environments, getting past human error through automation, and using sealed secrets to manage clusters. We also discuss the benefits and drawbacks of different CI tools such as Kubebuilder, Argo, Jenkins X, and Tekton. Our conversation gets wrapped up by looking at some of the exciting developments on the horizon of CI and CD, so make sure to tune in! Follow us: https://twitter.com/thepodlets Website: https://thepodlets.io Feeback: info@thepodlets.io https://github.com/vmware-tanzu/thepodlets/issues Hosts: Bryan Liles Nicholas Lane Key Points From This Episode: • The difference between CI and CD.• Understanding the meaning of CD: ‘continuous delivery’ and ‘continuous deployment’.• Building an artifact that can be deployed in the future is termed ‘continuous integration’.• The benefits of continuous integration for container orchestration: automatic building.• What to do before starting a project regarding make files and local test commands.• Kubebuilder is a tool that scaffolds out the creation of controllers and web hooks.• Where CI has got to as far as location since its ‘rubber chicken’ co-located days.• The prescience of Martin Fowler and Jez Humble regarding continuous integration.• The value of running tests in a CI process for quality maintenance purposes.• What makes containers great as far as architecture, output, deployment, and speed.• The benefits of CD regarding deployment automation, reflection, and regulation.• Transitioning between testing and production environments using targets, clusters, pipelines.• Getting past human error through automation via continuous deployment.• What containers mean for the traditional idea of environments.• How labeling factors into the simplicity of transitioning from development to production.• What GitOps means for keeping track of changes in environments using tags.• How sealed secrets stop the need to change an app when managing clusters.• The tools around CD and what a good CD system should look like.• Using Argo and Spinnaker to take better advantage of hardware.• How JenkinsX helps mediate YAML when installing into clusters.• Why the customizable nature of CI tools can be seen as negative.• The benefits of using cloud native-built tools like Tekton.• Perspectives on what is missing in the cloud native space.• A definition of blue-green deployments and how they operate in service meshes.• The business abstraction elements of CI tools that 
are lacking.• Testing and data storage-related aspects of CI/CD that need to be developed. Quotes: “With the advent of containers, now it’s as simple as identifying the images you want and basically running that image in that environment.” — @bryanl [0:18:32] “The whole goal whenever you’re thinking about continuous delivery or continuous deployment is that any human intervention on the actual moving of code is a liability and is going to break.” — @bryanl [0:21:27] “Any time you’re in developer tooling, everyone wants to do something slightly differently. All of these tools are so tweak-able that they become so general.” — @johnharris85 [0:34:23] Links Mentioned in Today’s Episode: John Harris — https://www.linkedin.com/in/johnharris85/Jenkins — https://jenkins.io/CircleCI — https://circleci.com/Drone — https://drone.io/Travis — https://travis-ci.org/GitLab — https://about.gitlab.com/Docker — https://www.docker.com/Go — https://golang.org/Rust — https://www.rust-lang.org/Kubebuilder — https://github.com/kubernetes-sigs/kubebuilderMartin Fowler — https://martinfowler.com/Jez Humble — https://continuousdelivery.com/about/David Farley — https://dfarley.com/index.htmlAMD — https://www.amd.com/enIntel — https://www.intel.com/content/www/us/en/homepage.htmlWindows — https://www.microsoft.com/en-za/windowsLinux — https://www.linux.org/Intel 386 — http://www.computinghistory.org.uk/det/6192/Introduction-of-Intel-386/386SX — https://www.computerworld.com/article/2475341/flashback--remembering-the-386sx.html386DX — https://en.wikipedia.org/wiki/Intel_80386Pentium — https://www.intel.com/content/www/us/en/products/processors/pentium.htmlAMD64 — https://www.webopedia.com/TERM/A/AMD64.htmlARM — https://en.wikipedia.org/wiki/ARM_architectureTomcat — http://tomcat.apache.org/Netflix — https://www.netflix.com/za/GitOps — https://www.weave.works/technologies/gitops/Weave — https://www.weave.works/Argo — https://www.intuit.com/blog/technology/introducing-argo-flux/Spinnaker — https://www.spinnaker.io/Google X — https://x.company/Jenkins X — https://jenkins.io/projects/jenkins-x/YAML — https://yaml.org/Tekton — https://github.com/tektonCouncourse CI — https://concourse-ci.org/ Transcript: EPISODE 11 [INTRODUCTION] [0:00:08.7] ANNOUNCER: Welcome to The Podlets Podcast, a weekly show that explores Cloud Native one buzzword at a time. Each week, experts in the field will discuss and contrast distributed systems concepts, practices, tradeoffs and lessons learned to help you on your cloud native journey. This space moves fast and we shouldn’t reinvent the wheel. If you’re an engineer, operator or technically-minded decision maker, this podcast is for you. [EPISODE] [00:00:41] BL: Back to the Kubelets Podcast, episode 11. I’m Bryan Liles, and today we have Nicholas Lane. [00:00:50] NL: Hello! [00:00:51] BL: And joining us for the first time, we have John Harris. [00:00:55] JH: Hey everyone. How is it going? [00:00:56] BL: All right! So today we’re going to talk about CI and CD in cloud native. I want to start this off with this whole term CI and CD. We talk about them together, that are two different things almost entirely if you think about them. But CI stands for continuous integration, and then we have CD. What does CD stand for? [00:01:19] NL: Compact disk. [00:01:20] BL: Right. True, and actually I’ve used that term before. I actually do agree. But what else does CD stand for? [00:01:28] NL: It’s continuous deployment right? [00:01:30] BL: Yeah, and? [00:01:31] JH: Continuous delivery. [00:01:32] NL: Oh! 
I forgot about that one. [00:01:35] BL: Yeah, that’s the interesting thing, is that as we talk about tech and we give things acronyms, CD is just a great one. Change in directories, compact disk, continuous delivery and continuous deployment. Here’s the bonus question, does anyone here know the difference between continuous delivery and continuous deployment? [00:01:58] NL: Now that’s interesting. [00:01:59] JH: I would go ahead and say continuous delivery is the ability to move changes through the pipeline, but you still have the ability to do human intervention at any stage, and usually deployments production and continuous delivery would be a business decision, whereas continuous deployment is no gating and everything just go straight to product. [00:02:18] BL: Oh, John! Gold start for you, because that is one of the common ones. I just like to bring that up because we always talk about CI and CD as they are just one thing, but they’re actually way bigger topics and we’ve already introduced three things here. Let’s start at the beginning and let’s talk about continuous integration, a.k.a CI. I’ll start off. We have CI, and what is the goal of CI? I think that we always get boggled down with tech terms and all these technology and all these packages from all these companies. But I’d like to boil CI down to one simple thing. The process of continuous integration is to build an artifact that can be deployed somewhere at some future date at some future time by some future person, process. Everything else is a detail of the system you choose to use. Whether you use Jenkins, or CircleCI, or Drone, or you built your own thing, or you’re using Travis, or any of the other online CI tools. At the end of the day, you’re building either – If you’re doing web development. Maybe you’re building out Docker files, because we’re in cloud native. I mean docker images, because we’re in cloud native. But if you’re not, maybe you’re just building JARs, WARs, or EARs, or a ZIP file, or a binary, or something. I’d just like to start off, start this off with there. Any more thoughts on continuous integration? [00:03:48] NL: Yeah. I think the only times that I’ve ever used something that’s like continuous integration is when I’ve been doing like more container orchestration, like development, things on top of like things like Kubernetes, for instance. The thing I really like about it is like the concept of being able to like, from my computer, save and do an automatic save and push to a local repo and have all of the pieces get built for me automatically somewhere else, and I just love that so much because it saves so much brain thinky juice to run every command to make the binary you need. [00:04:28] BL: So did you actually create those scripts yourself? [00:04:30] NL: Some of them. When I’ve used things like GitLab, I use the pipeline that exists there and just fiddled around with like a little bit of code, like some bash there, but like not too much because GitLab has a pretty robust pipeline. Travis — I don’t think I needed to actually. Travis had a pretty good just go make Docker build, scripts already templated out for you. [00:04:53] JH: Yeah. I’d like to tell people whenever you start any project, whether it’s big or small, especially if it’s on – Not on Windows. I’ll tell you something different if it’s on Windows. 
But if you’re developing on a Mac or developing on Linux, the first thing you should do in your project is create a make file or your programming language equivalent of a make file, and then in that make file what you should do is write a command that will build your software that runs its tests locally, and also builds – whatever the process is. I mean, if you’re running in Go, you do a Go build. If you’re using Rust, build with Rust, or C++, or whatever before you even write any code. The reason why is because the hardest part is making your code build, and if you leave that to the end, you’re actually making it harder on yourself. If your code build works from the beginning, all you have to do is change it to fit what you’re doing rather than thinking about it when it’s crunch time. [00:05:57] NL: I actually ran into that exact scenario recently, because I’ve been building some tooling around some Kubernetes stuff, and the first one I did, I built it all manually by hand. Then at the end I was like – I gave it to the person who wanted it and they’re like, “So, where’s the make file?” I’m like, “Where’s the what?” So I had go in and like fill in the make file, and that was a huge pain in the butt. Then recently the other thing I’ve been using is Kubebuilder. John, you and I have been talking about Kubebuilder quite a bit, but using Kubebuilder, and one of the things it does for you is it scaffolds out and a make file for you, and that was like going from me doing it by myself to having it already exist for you or just having it at the beginning was so much better. I totally agree with you, Brian. [00:06:42] BL: So quick point of order here. For those of us who don’t know what Kubebuilder is. What is Kubebuilder? [00:06:48] NL: Kubebuilder is a tool that was created by members of the Kubernetes Community to scaffold out the creation of controllers and web hooks. What a controller is in Kubernetes is a piece of software that waits, sort of watches a specific object or many specific objects and reconciles them. If they noticed that something has changed and you want to make an action based on that change, the controller does that for you. [00:07:17] JH: Okay. So it actually makes the action of working with CRDs and Kubernetes much easier than creating it all yourself. [00:07:26] NL: Correct. Yeah. So, for instance, the one that I made for myself was a tool that watched, updated and watched a specific CRD, but it wasn’t necessarily a controller. It was just like flagging on whether or not a change occurred, and I used the dynamic client, and that was a huge headache on of itself. Kubebuilder has like the ability to watch not just CRDs, but any object in Kubernetes and then reconcile them based on changes. [00:07:53] NL: It’s pretty great. [00:07:54] BL: All right. So back to CI. John, do you have any opinions on CI or anecdotes or anything like that? [00:07:59] JH: Yeah. I think one of the interesting things about the original kind of philosophy of CI outside of tooling was like trunk-based development that every develop changes get integrated into trunk as soon as possible. You don’t get into integration hell and rebasing. I guess it’s kind of interesting when you apply that to a cloud native landscape where like when that stuff came out with like Martin Fowler or Jez Humble probably 10, 15 years ago almost now, a lot of dev teams were co-located. You could do CI. I think there was a rubber chicken method where you didn’t use a tool. 
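To make the make file advice above concrete, a minimal starter for a Go project might look like the sketch below; the binary name, paths, and image name are placeholders rather than anything from the episode, and recipe lines must be indented with tabs:

# Hypothetical starter Makefile for a Go project
BINARY := myapp
IMAGE  := registry.example.com/myapp:dev

.PHONY: all test build image

all: test build

test:              # run the same unit tests locally and in CI
	go test ./...

build: test        # only produce the binary if the tests pass
	go build -o bin/$(BINARY) ./cmd/$(BINARY)

image: build       # package the artifact as a container image
	docker build -t $(IMAGE) .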
It was just whoever had the chicken that’s responsible for the build. Just to pull everyone else’s changes. But now it seems like everything is branch-based. When you look at a project like Kubernetes, there’s a huge number of contributors all geographically displaced, different time zones, lots of different branches and features going on at the same time. It’s interesting how these original principles of continuous integration from the beginning now apply to these huge projects in the cloud native landscape. [00:08:56] BL: Yeah, that’s actually a great point of how prescient Martin Fowler has been for many, many years, and even with Jez Humble being able to see these problems 10, 15 years ago and be able to describe them. I believe Jez Humble wrote the CD book, the continuous delivery book. [00:09:15] JH: Yeah, with David Farley, I think. [00:09:18] NL: Yeah. Yeah, he did. So, John, you brought up some good things about CI. I try to simplify everything. I think the mark of someone who really knows what they’re talking about is being able to explain everything in the simplest words possible, and then you can work backwards when people understand. I started off by saying that CI produces an artifact. I didn’t talk about branches or anything like that, or even the integration piece. But now let’s go into that a little bit. There are a lot of misconceptions about CI in general, but one of the things that we talk about is that you have to run test. No, you don’t have to run test, but should you? Yes, 100% of the time. Your CI process, your integration process should actually build your software and run the test, because running the test on this dedicated service or hardware wherever it is ensures that the quality of your software is there at least as much as your developers have insured the quality in the test. It’s very important those run, and a lot of bugs of course can be spotted by running a CI. I mean, we are all sorts of developers here, and I tell you what, sometimes I forget to run the test locally and CI catches me before a commit makes it into master and it has a huge typo or a whole bunch of print lines in there. Moving on here, thinking about CI and cloud native. Whenever you’re creating a cloud native app, have you ever thought about the differences between let’s say creating just a regular binary that maybe runs on a server, but not in a container on somebody’s cloud native stack, i.e. Kubernetes? Have you ever thought about the differences of things to think about? [00:11:04] BL: Yeah. So part of it is – I would imagine or I believe it’s like things like resource, like what resources you need or what architecture you’re deploying into. You need the binary to make like run in this – With containerization, it’s easy because you’re like, “I know that the container is going to be this architecture,” but you can’t necessarily guarantee that outside of a containerized world. I mean, I suppose you can being like with the right tooling setup you can be like, “I only want to run on this.” But that isn’t necessarily guaranteed, because any computer that runs on could be just whatever architecture that happens to land on, right? Also, something to – I think of is like how do you start processes on disparate computers in a controlled fashion? Something like, again, with containers, you can trust that the container runtime will run it for you. But without that, it seems like a much harder task. [00:12:01] NL: Yeah, I would agree. 
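The CI side of that advice is then just a thin wrapper that calls the same make targets on every push. A Travis-style configuration, assuming the Go project and Makefile sketched earlier, might be as small as:

language: go
go:
  - "1.13"
services:
  - docker
script:
  - make test    # the same tests developers run locally gate every commit
  - make image   # fail the build if the container image cannot be produced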
Then I said that containers in general just help us out, because most of our workloads go on some AMD or Intel 64-bit and it's Linux. We know what our output is going to be. So it's not like in the old days where you had to actually figure out what your run target was. I mean, that's even on Intel stacks. I mean, I'm dating myself here where you had like – When the 386 was out and then you had the 386SX and the 386DX, there were different things there, and you actually compiled your code differently. Then when the 486 came out and then when we had the introduction of Pentium chips, things were different. But now we can pretty much all target AMD64, and in some cases, I mean, there are some chip things like the bigger encryption things that are in the newer chips. But for the most part, we know what our deployed target is going to be. But the cool thing is also that we don't have to have Intel or AMD64. It could be ARM32 or ARM64, and with the addition of a lot of the work that has been going on in Windows land lately, we can have Windows images. I don't know if so many people are doing that yet. I'm not out in the field, but I like that the opportunity is there. [00:13:25] JH: Oh! I think one of the interesting things is the deployment method as well. Now with containers, everything is kind of an immutable rip and replace. Like if we develop an application, we know that the old container is going to stop when I deploy a new one. I think Netflix were doing a little bit of this before containers and some other folks with like baking AMIs and using that immutable method. But I think before that it was if we had a WAR file, we had to throw it back into Tomcat, let Tomcat pick it up or whatever. Everything was a little bit more flaky in terms of deployment. We had to do a lot of checks around deployment rather than just bring something out, bring something back in blue/green, whatever. [00:13:59] BL: Well, I actually like that you brought that up, because that's actually one of the greatest parts of this whole cloud native thing, is that when we're using containers and we're deploying with containers, we know what our file system is going to look like, because we created it. There would not be some rogue file or another configuration there that will trip up our deployment, because at build time, we've created the environment. It's much better than that facility that Netflix was doing with baking AMIs. In a previous life, I actually ran the facility for baking AMIs at a large company where we had thousands of developers on more than a thousand dev teams, and we had a lot of spyware. Whenever you had to build an image, it was fine in one account, but if you had let's say a thousand accounts with the way that AWS works and encrypted images, you actually had to copy all the images to all the accounts. You couldn't actually boot it from your account. That process would literally take all night to get it done across all of our accounts. If you made a mistake, guess what? You get to do it again. So I am glad that we actually have this thing called a container and all these things based on CRI, the container runtime interface, that we are able to quickly build containers. I don't want to just limit this conversation to continuous integration. Let's get into the other parts too with deployment and delivery. What is so novel about CD in the cloud native world? [00:15:35] NL: I think to me it's the ability to have your code or your artifact or whatever it is, whatever you're working on.
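The "we know what our file system is going to look like, because we created it" property comes from the image build itself. A minimal multi-stage sketch, with placeholder base images, paths, and binary name:

# build stage: compile for a known target
FROM golang:1.13 AS build
WORKDIR /src
COPY . .
RUN go build -o /out/myapp ./cmd/myapp

# runtime stage: the final filesystem contains exactly what we copy into it
FROM debian:buster-slim
COPY --from=build /out/myapp /usr/local/bin/myapp
ENTRYPOINT ["/usr/local/bin/myapp"]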
When you make a change, you can see the change reflected in reality, whatever your reality looks like, without your intervention. I mean, you might have had to set up all the pipelines and all that jargon, but when you press save in VS code and it creates a branch and runs all your tests and then deploys it for you or delivers it for you into what you’d define as reality, that’s just so nice, because it really kind of sucks having to do the like, “Okay, I’ve got a new deployment. Destroy the old deployment. Put in the new one or like rev the new image tag or whatever in the deployment you’re doing.” All these manual steps, again, thinky-brain juice, it takes pieces of your attention away, and having these pieces like added for you is just so nice. [00:16:30] BL: Yeah, what do you think, John? [00:16:32] JH: Yeah. I think just something in the state of DevOps we’ve bought one of the best predictors for a company’s success is like cycle time of feature from ideation to production. I think like the faster we can get that cycle – It kind of gets me interested. How long does an application take to build? If it takes two hours, how good are you at getting features out there quickly? Maybe one of the drivers with microservices, smaller pieces independently deployed, we can get features out to production quicker, because I think the name of the game is just about enabling developers to put the decision in the hands of the business to decide when the customer should see that feature. I think the tighter we can make that cycle, the better for everyone. [00:17:14] BL: Oh, no! I agree. I love and hate web services, but what I do like is the idea of making these abstractions smaller, and if the abstractions are smaller, it’s less code. A lot of the languages we use now are faster compiling, let’s say, a large C++ project. That could take literally two hours to compile. But now when we have languages like Go, and Rust is not as fast, but it’s not slow as well. Then we have all of our interpret languages, whether it’d be Python, or JavaScript, or TypeScript, where we can actually go from an idea, run the test in a few minutes and build this image that we can actually run and see it almost in real-time. Now with the complexity of the tools, I mean, the features that are built in the tools, we can now easily manage multiple deployment environments, because think about before, you would have a dev environment, and that would be the Wild West. That would be literally where it would be awful. You might have to rebuild it every couple of months. Then you would have staging, and then maybe you would have some kind of pre-prod environment just as like your final smoke test, and then you would have your production. Maintaining all the software on all those was extremely hard. But now with the advent of containers, now it’s as simple as identifying the images you want and basically running that image in that environment. I like where we’ve ended up. But with all power comes new problems, and just because we can deploy quicker means we just run into a lot of different problems we didn’t run into before. The first one that I’ll bring up is the complexity. Auto conversion between environments, so moving code between test staging and production. How do we do that? Any ideas before I throw some out there? [00:19:11] NL: I guess you would have different, or maybe the same pipeline but different targets for like if say you’re using something like Kubernetes. 
You could have one part of your pipeline deploy initially to this Kubernetes context, which points to like one cluster. It’s building up clusters by environment type and then deploying into those, running your tests, see if it runs properly and then switch over to the next context to apply that image tag and that information and then just go down the chain until you go to production. [00:19:44] BL: Well, that’s interesting. One thing I’d like to throw out there, and I’m not advocating any particular product. But the idea of having pipelines for continuous integration and your CD process is great, where you can now have gates and you can basically automate the whole thing. Code goes into CI and we built an artifact, and a message can go out automatically to an approver or not, and that message could say, “Hey! This code is going to be integrated into our trunk or our master branch.” They can either do it themselves manually as a lot of people do or they can actually maybe click on a link or check a checkbox and this gets integrated in. Then what automatically could happen at this point is, and I’ve seen a lot of companies doing this, is now we take that software and we spin up a new whole environment and we just install that software. For that one particular feature that you worked on, you can actually get an automatic environment for that. Then what we can do is we can take that environment itself and we can now merge this maybe into a staging branch or tag it with a staging label, and that automatically gets moved to staging. Depending on how complicated you are, how advanced you are, now you can actually have it go out to your product people or people who make decisions, maybe your executives, and they can view the software in whatever context it happens to be in. Then they can say, “Okay.” Now that’s when we’re talking about now we can hit okay and the software just keeps on moving to the pipeline and it gets into production. The whole goal here, and this is actually where your goal should be just in general whenever you’re thinking about continuous delivery or continuous deployment is that any human intervention on the actual moving of code is a liability and is going to break, and it’s going to break because on Friday afternoon at 5:25 PM, someone’s thinking about the weekend and they’re not thinking about code, and they’re going to break your build. Our goal is to build these delivery systems that are Friday afternoon proof. We can push code anytime. It doesn’t matter. We trust our process. [00:22:03] JH: I think it’s a great point about environments. I think back in the day, an environment used to be a set of machines, and then test used to be – staging was where there were kind of more stable versions of APIs and folks were more coordinated pushing things into them. What really is an environment? Like you said, when we push micro services or whatever service, we can spin up an entire Kubernetes cluster just for that service. We can set it up. We can run whatever tests we want. We could tear it down. With the advent of Elastic compute, and now containers, they really enabled this world where like the traditional idea of an environment and what constitutes an environment is starting to get a bit kind of sloppy and blend into each other. [00:22:42] BL: I like it though. I think it’s progress. [00:22:45] NL: I totally agree. The one that scares me but I also find like really interesting, is the idea of having all of your environments in one set of machines. So clusters. 
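To make the per-environment pipeline idea concrete, a promotion stage that assumes one cluster per environment might be little more than applying the same manifests against different kubectl contexts; the context names, manifest path, and smoke-test script here are placeholders:

# hypothetical promotion flow: same manifests, different cluster contexts
kubectl --context dev-cluster apply -f manifests/
./smoke-test.sh https://dev.example.com        # stop here if dev looks broken
kubectl --context staging-cluster apply -f manifests/
./smoke-test.sh https://staging.example.com    # an approval gate could sit here
kubectl --context prod-cluster apply -f manifests/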
Having a multi-tenanted set of machines for like dev, staging and production, they're all running in the same place and they're all just separated by what configuration of connectivity, networking and things like that you set up. When a user hits your website, bryanliles.com, they should go to the production images, but those are binaries, and those binaries should be running in the same space essentially as the development ones. It's scary, but it also allows for some really fast testing and integration. I find it to be very fascinating. [00:23:33] BL: I mean that's where we want to be. I find more often than not that people have separate clusters for dev and staging and production. But using the Kubernetes API, you don't have to do that, because what we can do is we can force a deployment or workload to a set of machines based on their label. That's actually one of the very strong positives for Kubernetes. Forget all the complexity. One of the things that makes it easy is to say that I want this particular deployment to only live on my development machines. Well, which development machine? I don't care. What if we increase our development pool size? We just re-label nodes. It doesn't matter. Now we can just control that. When it comes down to controlling cost and complexity, this is actually one idea where Kubernetes is leading and just making it easier to actually use more of your hardware. [00:24:31] NL: Yeah. Absolutely. That's so great because if you think about it from a CI/CD standpoint, at that point all you have to do is just change the label to where you're applying this piece of code. So you're like, "Node selector, label equals dev. Okay, now it's staging. Okay, now it's prod." [00:24:47] BL: So this brings me into the next part of what I want to talk about or introduce to you all today. We're on a journey as you probably can tell. Now whenever we have our CI process and we're building and we're deploying, where do we store our configurations? [00:25:04] NL: [inaudible 00:25:04]. [00:25:06] BL: Ever thought about that? [00:25:08] NL: Okay. I mean, in a Kubernetes perspective, you might be using something like etcd to sort of – But like everything else, what if you're using Travis? [inaudible 00:25:16] store everything. Everything should be versioned, right? Everything should be – [00:25:20] BL: Yeah, 100%. [00:25:24] NL: I would store everything that way as much as possible. Now, do I do that all the time? God, no! Absolutely not. I'm a human being after all. [00:25:32] BL: I mean, that's what I actually want to bring up, is this concept of GitOps. GitOps was a term coined by my friend, Alexis, who works at Weave. I think Weave created this. Really what it's about is, instead of having – basically, Kubernetes is declarative, and our configurations can be declarative too, because what we can do is make sure we have text-based configurations, and for one reason: text-based means it can be versioned. It can be diffed. We take those text versions and we put them in the same repository we put our code in. How do we know what's in production at any given time or any given time in the past? We just look at the tags of what we did. We had a push at 5:15 on August 13th. Of course, this is 5:15 UTC time, because any other time doesn't exist in computer land. So what we could do is we could just basically tag that particular version as, like, 2019-08-13. If I said 5-17-55, then we call it 01 just so we could have 100 deploys in a day.
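The label-based targeting described above boils down to labelling node pools once and letting each workload pick its pool; a minimal sketch with illustrative label keys, names, and image tag:

# label the pools once, e.g. kubectl label node <node-name> env=dev
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      nodeSelector:
        env: dev        # change to staging or prod to move environments
      containers:
        - name: myapp
          image: registry.example.com/myapp:2019-08-13-01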
If we started doing that, now not only can we control what we have, but we can also know what was on in any given environment at any given time. Because with Git and with Mercurial and any other of these – Well, only the popular ones, with Git and Mercurial, you can definitely do this. Any given commit can have multiple tags. You could actually have a tag that hit dev and then a tag that, let’s say, hits staging, and then a tag that hit production, the exact same code but three different tags. So you know at any given time what happened. [00:27:18] JH: Yeah, the config thing is so important. I think that was another Jez Humble quote where it was like, “Give me three hours access to your code and I’ll break it. But give me 5 minutes with your configurations and I’ll break it.” Almost like every big bug is, right, someone was accidentally pointing the prod server to the staging database like, “Oops! Their API was pointing to the wrong port, and everything came down,” or we changed the wrong versions or whatever. I think that’s one of the intersections of developers and operations folks. We kind of talked about like Dev Ops and things like that. I really love the idea of everything being kept in Git and using GitOps, but then we’ve got things like secrets and configuration that shouldn’t be seen or being able to be edited by developers, but need to be for ops folks. But we still want to keep the single point of truth. Things like sealed secrets have really enabled us to move along in this area where we can keep everything in text-based version. [00:28:08] BL: All right. Quick point of order here. Sealed secrets is a controller/CRD created by Bitnami. What it allows you do is, John – [00:28:23] JH: It allows you – It creates a CRD, which is sealed secret, which is a special resource type in your cluster and also creates a key, which is only available to that operator running in your cluster. You can submit a sealed secret in plain text or you can submit a secret in plain text and it will throw it back out as an encrypted secret with that key and then you can check that into version control. Then when you go to deploy your software, you can deploy that encrypted secret into the cluster. The operator will pick it up, decrypt it using only the key that it has access to and then put it back in the cluster as a regular secret. Your application just interacts with regular Kubernetes secrets. You don’t need to change your app. They deal with all the encryption outside of the user intervention. [00:29:03] BL: I think the most important part of what you said is that this allows us to have no excuses about what we can store in our repositories for our configuration, because someone is going to make the argument, “No, we can’t store secrets, because someone’s going to be able to see them.” Well, guess what? We never even stored an unencrypted secret in our repository. They’re all encrypted, and it’s still secrets. It’s [inaudible 00:29:25]. I don’t know if anyone’s cracked yet. I’m sure maybe a state level actor has thought of it. But for us regular people, even our companies, like even at VMware, or even at Google, they have not done it yet. So it’s still pretty safe. Thinking even further now, and really what I’m trying to paint the picture of is not just how do you do CD, but really what CD could look like and how it can actually make you happy rather than sad. The next item I wanted to think about was tools around CD and creating tools and what does a good continuous delivery system look like. 
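The sealed-secrets flow John walks through above roughly comes down to three commands with the kubeseal CLI that accompanies the Bitnami controller; the resource names here are placeholders and exact flags vary between versions:

# create a plain secret locally, without sending it to the cluster
kubectl create secret generic db-creds --from-literal=password=s3cr3t \
  --dry-run=client -o yaml > secret.yaml

# encrypt it against the controller's public key; the output is safe to commit
kubeseal --format yaml < secret.yaml > sealed-secret.yaml

# apply the encrypted version; the controller decrypts it into a normal Secret in-cluster
kubectl apply -f sealed-secret.yaml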
I kind of hinted about this earlier whenever I was talking about pipelines. The ability to take advantage of your hardware, so we’re deploying to let’s say 100 servers. We’re pulling 5 or 6 services to 100 node cluster. We can do those all at once, and what we can do is you want to have a system that can actually run like this. I could think of a couple. From Intuit, there is Argo, and they have Argo CD. There is the tool created by Google and maybe Netflix. I want to have to look that one up. It’s funny, because they quoted – [00:30:40] JH: Spinnaker? [00:30:42] BL: Spinnaker. They quoted me in their book, and I don’t remember their name. I’m sorry anyone from Spinnaker product listening. Once again, not advocating any products, but they have the concept of doing pipelines. Then you also have other things for your projects, like if you’re using open source, Drone. Another X Google – I think it was X-Googler that made this. Basically, they have ways you can do more than one thing at a time. The most important piece about this is not only can you do more than one thing at a time, is that you have a programmatic check that it’ll make sure that you can verify that whatever you did was successful. We deployed to staging or we deployed to our smoke test servers for our smoke test, and that requires our testing people and an executive signoff. They can actually just wait until they get their signoff or maybe if it goes over a day or so, they can actually – It just fails, and now the build is done. But that part is pretty neat. Any other topics over here before I start throwing out more? [00:31:45] NL: I think I just have thoughts on some of the tools that we’ve used. Everyone Jenkins. Jenkins can do anything that you want it to do, but you really have to tighten the screws on it. It is super powerful. It’s kind of like Bash, like Bash scripting. It’s super powerful, but you have to know precisely what you’re doing, otherwise it can really hurt you. Actually, I have used Spinnaker in the past, and I’ve really liked it. It has a good UI, very good pipelines. Easy blue/green or canary deployment mechanism, I thought that was great. I’ve looked at Drone, believe it or not, but Drone is actually pretty cool. Check out Drone. I really liked it. [00:32:25] BL: Well, since we’re throwing out products, Jenkins, does have JenkinsX. I have not given it the full rundown yet. But what I do like about it, and I think everyone should pay attention to this if you’re doing a product in this space, is that when you install JenkinsX, you install it locally to your machine. You basically get this binary called JX, and you then tell JX to install it into your cluster. Instead of just doing kubectl apply-f a whole bunch of YAML, it actually ask you questions and it sets up GitHub repositories or wherever you need these repositories. It sets up [inaudible 00:33:01] spaces for you. There’s no just [inaudible 00:33:05] kubectl apply-f HTTPS: I just owned your system, because that’s actually a problem. Then it solves the YAML sprawl, because YAML and Kubernetes is something that is complained about a lot, but it’s how it’s configured. But it’s also just a detail what we’re supposed to be doing, and we actually work with Joe Beda and I could talk about this all the time, is that the YAML is the implementation, but it’s not the idea. The idea is that we build tools on top of that that create YAML so users have to see less YAML. 
I think that’s a problem with Jenkins, is that it’s so powerful and they’re like, “Well, we want powerful people or smart people to be able to do smart things. So here you go.” The problem with that is, where do I start? It’s a little daunting. So I do think that they definitely came with a much stronger game with this JX command. Just as a little sidebar, we do it as well with our Velero project, and I think that should be like the bar for anything. If you’re installing something into a cluster, you should come up with a command line tool that helps you manage the lifecycle of whatever you’re installing – the operator, YAML, whatever. [00:34:18] JH: I think what’s interesting about the options, this is definitely one area where there’s so much nuance. Any time you’re in developer tooling, everyone wants to do something slightly differently. All of these tools are so tweak-able that they become so general. I think it’s probably one of the criticisms that could be leveled against Jenkins is that you can do everything, and that’s actually a negative as well as a positive. Sometimes it’s too overwhelming. There are too many ways of doing things. I’m a fan of some of the more kind of opinionated tools in that space. [00:34:45] BL: Yeah. I like opinionated tools as well, but the problem that we’re having in this cloud native space is that, yeah, Kubernetes is five years old now. We are just getting to the point where we actually understand what a good decision is, because there were a lot of guesses before and we’ve done a lot of things, and some of these have been good ideas, but in some cases they have not been great ideas. Even I ran a project – case in point. Great idea on paper, but implementation – it required people to know too many things. We learned a lot of lessons from that. That’s what I think we’re going to find out in this space, is that we’re going to learn little lessons. The last project that I was going to bring up is something that I think has learned some of these lessons. Google sponsors a project called Tekton, and if you go to – it’s like I believe – and they have some continuous delivery stuff in there and they implement pipelines. But the neat part is, and this is actually the best part, it’s actually a cloud native-built service. So every step of your delivery process, from creating images to actually putting them on clusters, is backed by a Docker image or a container, and I think that part is pretty neat. So now you can define your steps. What is your step? Well, you can use one of their pre-baked “run this command” steps, or if you have something special, like the example before I was giving where you would say that you need an approval, maybe it’s a Slack approval. You send something with Slack and it has a checkbox, check yes if you like me. What we can do now is we can actually control that, and it’s easy to write a little Docker image that can actually make that call and then get the request and then it can move it on. If you’re looking at more of a toolkit full of good ideas, I do think that Tekton definitely has a lot of industry people looking at it, and it’s probably the best example of getting it right in the cloud native way. Because a lot of the products we have now are not cloud native. We’re talking about Jenkins. We’re talking about Spinnaker and we talk about Drone and Travis, which is totally a SaaS product. They’re not cloud native.
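To make the Tekton description concrete, "every step is backed by a container image" looks roughly like the Task below; the API version is from a later release than the episode, and the approval step's image and command are hypothetical:

apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: test-and-notify
spec:
  steps:
    - name: run-tests                # each step runs in its own container image
      image: golang:1.13
      script: |
        go test ./...
    - name: ask-for-approval         # placeholder image and command
      image: registry.example.com/slack-approver:latest
      script: |
        request-approval --channel deploys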
Actually, the neat part about Tekton is that it actually comes with its own controllers and its own CRDs. So you can actually build these things up using your familiar Kubernetes tooling, which means in theory we could actually use the tooling that we are deploying. We can actually control it in the same way as our applications, because it’s just yet another object that goes in our cluster. [00:37:21] NL: That does sound pretty cool. One other that I meant to bring up was Concourse. Have you check out Concourse yet? [00:37:27] BL: CouncourseCI. I have not. I have used it, but never in a way where I would have a big opinion on it. [00:37:34] NL: I’m kind of in the same place. I think it’s a good idea. It seems really neat, but I need to kick the tires a little more. I will say that I really like the UI. The structure of the UI is really nice. Everything makes sense, and anything you can click on like drills into something a bit deeper. I think that’s pretty cool, but it is one of the shout that I went out to as well as like another tool that I’m aware of. [00:37:52] BL: Yeah, that’s pretty interesting. So we’ve gone about 40 minutes now. Let’s actually start winding this down, and the way that I’m going to suggest that we wind this down is thinking about where we are now. What’s missing in this space and what else could we actually be doing in the cloud native space to make this work out better? [00:38:12] NL: I think I’d like to see better structured or better examples of blue-green or canary deployments with tests associated, and that might just be like me not looking hard enough at this problem. But anytime I began looking at blue-green, I get the idea of what someone’s done, but I would love to see some implementation details, or any of these opinionated tools having opinions around blue-green and what they specifically do to test it. I feel like I’m just not seeing that. [00:38:41] BL: With blue-green, blue-green is hard to do in Kubernetes without an external tool, because for everyone, a blue-green deployment is, I have a software deployment and we’ll give it a color. We’ll call it blue, and I have the next version, and we’ll call it green. Really what I can do is I basically have two versions of my application deployed and I can use my load balancer, or in this case, my service to just change the label or the selector in my service and now I can point at at my green from my blue. Then I want to deploy again, I can just deploy another blue and then change my label selector again. The problem with this is that you can do it in Kubernetes, just fine. But out of the box with Kubernetes, you will drop traffic, because guess what? What happens to a connection that was initiated or a session that was initiated on the blue cluster when you went to green? Actually, this is a whole conversation in itself about service meshes and this is actually one of the reasons service mesh is a big topic, because you can do this blue-green, or another example would be Netflix and Redblack, or you get the creative people who are like rainbow deployments, because just having two is not good enough for them. So they want to have any number of deployments going at one time. I agree with that 100%. [00:39:57] JH: I think, yeah, integrating tools like launch. [inaudible 00:40:01] and I think there are more which enable – I think we’re missing the business abstractions on this stuff so far. 
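The Service-selector switch Bryan describes for blue-green might look like the sketch below: two Deployments labelled by color and one Service whose selector gets flipped (names and labels are illustrative). As he notes, this cuts new connections over instantly but does nothing graceful for in-flight ones, which is where service meshes come in:

apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp
    color: blue       # flip this to green to send traffic to the new version
  ports:
    - port: 80
      targetPort: 8080

# e.g. kubectl patch service myapp -p '{"spec":{"selector":{"app":"myapp","color":"green"}}}'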
Like you said, it’s kind of hard to do if you need to go into the gritty of it right now, but I think the business abstractions of if we deploy a different version to a certain subset of customers, can we get all of those metrics? Can we get those traces back in? Will you automate it, roll it out? Can we increase the percentage of customers that are seeing those things? Have that all controlled in a Kubernetes native way, but having roll it up to a business and more of an abstraction. I think that stuff is currently missing. I think the underpinning kind of technologies are coming up, stuff like service mesh, but I think it’s the abstraction that’s really going to make it useful, which doesn’t exist today. [00:40:39] BL: Yeah. Actually, that’s pretty close to what I was going to say. We built all these tooling that helps us basically as technologists, but really what it comes down to is the business. A lot of the things we’re talking about where we’re talking about CD is important to the business, but when we’re talking about metrics or trace collection, that’s not important to the business, because they only care about the SLA. This is on the SLO side. What we really need to do is mature our processes enough that we can actually marry our outputs to something that other people can understand that has no jargon and it’s sales going up, sales going down. Everything else is just a detail. So, anything else? [00:41:20] NL: Something I think I’d like to see is in our testing, if there was a good way to accurately show the effect of something at load in a CI/CD component. Because one of the things that I’ve run into is like I’ve got this great idea for how this code should work and when I deploy it, it works great. The like a thousand people touch it all at once and it doesn’t work right anymore. I’d love to have some tool along the way that can test things out of load and like show me something that I could fix before all those people touch it. [00:41:57] BL: Yes, that would be a good tool to have. So John, anything else for you? [00:42:02] JH: I’ll open a can of worms right at the end and say the biggest problem here is probably going to be data when we have a lot of systems we need to talk to each other and we need the data to align between those systems and we have now proliferation of environments and clusters. Like how do we get that data reliably into the place that it needs to be to make up testing robust enough to get things out there? It’s probably an episode on some – [00:42:23] BL: Yeah, that’s a big conversation that if we could answer it, we wouldn’t working at VMware. We would have our own companies doing all these great things. But we can definitely iterate on it. So with that, I think we’re going to wrap it up. Thanks for listening to the Kubelets. I’m Bryan Liles, and with me today was Nicholas Lane and John – Yeah, and John Harris. [00:42:47] JH: Thanks everyone. [00:42:47] BL: All right, we’ll see you next time. [END OF EPISODE] [00:42:50] ANNOUNCER: Thank you for listening to The Podlets Cloud Native Podcast. Find us on Twitter at https://twitter.com/ThePodlets and on the http://thepodlets.io/ website, where you'll find transcripts and show notes. We'll be back next week. Stay tuned by subscribing. [END]See omnystudio.com/listener for privacy information.

Cloud & Culture
Highlights episode No. 1 (cloud-native architecture and technologies)

Cloud & Culture

Play Episode Listen Later Dec 12, 2019 19:51


Original episodes and blog posts:
• Learn the 'whats' and 'whys' of Apache Kafka in 15 minutes
• Learn about Kubernetes and digital transformation in 15 minutes
• Learn about the nexus of MongoDB and microservices in 15 minutes
• How to navigate the nuanced world of PaaS, CaaS, and Kubernetes
• Securing applications in the era of speed, scale, and open source
• The next generation of SQL is about simplicity and global scale

Follow everyone on Twitter:
• Derrick Harris
• Neha Narkhede
• Joe Beda
• Eliot Horowitz
• Cornelia Davis
• Guy Podjarny
• Peter Mattis

Pivotal Insights
Highlights episode No. 1 (cloud-native architecture and technologies)

Pivotal Insights

Play Episode Listen Later Dec 12, 2019 19:51


Original episodes and blog posts:
• Learn the 'whats' and 'whys' of Apache Kafka in 15 minutes
• Learn about Kubernetes and digital transformation in 15 minutes
• Learn about the nexus of MongoDB and microservices in 15 minutes
• How to navigate the nuanced world of PaaS, CaaS, and Kubernetes
• Securing applications in the era of speed, scale, and open source
• The next generation of SQL is about simplicity and global scale

Follow everyone on Twitter:
• Derrick Harris
• Neha Narkhede
• Joe Beda
• Eliot Horowitz
• Cornelia Davis
• Guy Podjarny
• Peter Mattis

The Podlets - A Cloud Native Podcast
[BONUS] A conversation with Joe Beda (Ep 6)

The Podlets - A Cloud Native Podcast

Play Episode Listen Later Nov 22, 2019 47:21


For this special episode, we are joined by Joe Beda, who is currently Principal Engineer at VMware. He is also one of the founders of Kubernetes from his days at Google! We use this open table discussion to look at a bunch of exciting topics from Joe's past, present, and future. He shares some of the invaluable lessons he has learned and offers some great tips and concepts from his vast experience building platforms over the years. We also talk about personal things like stress management, avoiding burnout and what is keeping him up at night with excitement and confusion! Large portions of the show are obviously spent discussing different aspects of and questions about Kubernetes, including its relationship with etcd and Docker, its reputation as a very complex platform and Joe's thoughts for investing in the space. Joe opens up on some interesting new developments in the tech world and his wide-ranging knowledge is so insightful and measured, you are not going to want to miss this! Join us today for this great episode!

Follow us: https://twitter.com/thepodlets
Website: https://thepodlets.io
Feedback: info@thepodlets.io
https://github.com/vmware-tanzu/thepodlets/issues

Special guest: Joe Beda
Hosts:
• Carlisia Campos
• Bryan Liles
• Michael Gasch

Key Points From This Episode:
• A quick history of Joe and his work at Google on Kubernetes.
• The one thing that Joe thinks sometimes gets lost in translation on these topics.
• Lessons that Joe has learned in the different companies where he has worked.
• How Joe manages mental stress and maintains enough energy for all his commitments.
• Reflections on Kubernetes’ relationship with and usage of etcd.
• Is Kubernetes supposed to be complex? Why are people so divided about it?
• Joe’s experience as a platform builder and the most important lessons he has learned.
• Thoughts for venture capitalists looking to invest in the Kubernetes space.
• Joe’s thoughts on a few different recent developments in the tech world.
• The relationship between Kubernetes and Docker and possible ramifications of this.
• The tech that is most exciting and alien to Joe at the moment!

Quotes:
“These things are all interrelated. At a certain point, the technology and the business and career and work-life – all those things really impact each other.” — @jbeda [0:03:41]
“I think one of the things that I enjoy is actually to be able to look at things from all those various different angles and try and find a good path forward.” — @jbeda [0:04:19]
“It turns out that as you bounced around the industry a little bit, there’s actually probably more alike than there is different.” — @jbeda [0:06:16]
“What are the things that people can do now that they couldn’t do pre-Kubernetes?
Those are the things where we're going to see the explosion of growth.” — @jbeda [0:32:40] “You can have the most beautiful technology, if you can't tell the human story about it, about what it does for folks, then nobody will care.” — @jbeda [0:33:27] Links Mentioned in Today’s Episode: The Podlets on Twitter — https://twitter.com/thepodlets Kubernetes — https://kubernetes.io/Joe Beda — https://www.linkedin.com/in/jbedaEighty Percent — https://www.eightypercent.net/Heptio — https://heptio.cloud.vmware.com/Craig McLuckie — https://techcrunch.com/2019/09/11/kubernetes-co-founder-craig-mcluckie-is-as-tired-of-talking-about-kubernetes-as-you-are/Brendan Burns — https://thenewstack.io/kubernetes-co-creator-brendan-burns-on-what-comes-next/Microsoft — https://www.microsoft.comKubeCon — https://events19.linuxfoundation.org/events/kubecon-cloudnativecon-europe-2019/re:Invent — https://reinvent.awsevents.com/etcd — https://etcd.io/CosmosDB — https://docs.microsoft.com/en-us/azure/cosmos-db/introductionRancher — https://rancher.com/PostgresSQL — https://www.postgresql.org/Linux — https://www.linux.org/Babel — https://babeljs.io/React — https://reactjs.org/Hacker News — https://news.ycombinator.com/BigTable — https://cloud.google.com/bigtable/Cassandra — http://cassandra.apache.org/MapReduce — https://www.ibm.com/analytics/hadoop/mapreduceHadoop — https://hadoop.apache.org/Borg — https://kubernetes.io/blog/2015/04/borg-predecessor-to-kubernetes/Tesla — https://www.tesla.com/Thomas Edison — https://www.biography.com/inventor/thomas-edisonNetscape — https://isp.netscape.com/Internet Explorer — https://internet-explorer-9-vista-32.en.softonic.com/Microsoft Office — https://www.office.comVB — https://docs.microsoft.com/en-us/visualstudio/get-started/visual-basic/tutorial-console?view=vs-2019Docker — https://www.docker.com/Uber — https://www.uber.comLyft — https://www.lyft.com/Airbnb — https://www.airbnb.com/Chromebook — https://www.google.com/chromebook/Harbour — https://harbour.github.io/Demoscene — https://www.vice.com/en_us/article/j5wgp7/who-killed-the-american-demoscene-synchrony-demoparty Transcript: BONUS EPISODE 001 [INTRODUCTION] [0:00:08.7] ANNOUNCER: Welcome to The Podlets Podcast, a weekly show that explores Cloud Native one buzzword at a time. Each week, experts in the field will discuss and contrast distributed systems concepts, practices, tradeoffs and lessons learned to help you on your cloud native journey. This space moves fast and we shouldn’t reinvent the wheel. If you’re an engineer, operator or technically minded decision maker, this podcast is for you. [EPISODE] [0:00:41.9] CC: Hi, everybody. Welcome back to The Podlets. We have a new name. This is our first episode with a new name. Don’t want to go much into it, other than we had to change from The Kubelets to The Podlets, because the Kubelets conflicts with an existing project and we’ve thought it was just better to change. The show, the concept, the host, everything stays the same. I am super excited today, because we have a special guest, Joe Beda and Bryan Liles, Michael Gasch. Joe, just give us a brief introduction. The other hosts have been on the show before. People should know about them. Everybody should know about you too, but there's always newcomers in the space, so give us a little bit of a background. [0:01:29.4] JB: Yeah, sure. I'm Joe Beda. I was one of the founders of Kubernetes back when I was at Google, along with Craig McLuckie and Brendan Burns, with a bunch of other folks joining on soon after. 
I'm currently Principal Engineer at VMware, helping to cover all things Kubernetes and Tanzu related across the company. I came into VMware via the acquisition of Heptio, whose shirt Bryan's wearing today. Left Google, did that with Craig for about two years. Then it's almost a full year here at VMware. We're at 11 months officially as of two days ago. Yeah, really excited to be here. [0:02:12.0] CC: Yeah, I am so excited. Your name is Joe Beda. I always say Joe Beda. [0:02:16.8] JB: You know what? It's four letters and it's easy – it's amazing how many different ways there are to pronounce it. I don't get picky about it. [0:02:23.4] CC: Okay, cool. Well, today I learned. I am very excited about this show, because basically, I get to ask you anything I want. [0:02:35.9] JB: I’ll do my best to answer. [0:02:37.9] CC: Yeah. You can always not answer. There are so many interviews of you out there on YouTube and podcasts. We are going to try to do something different. Let me fire the first question I have for you. When people interview you, they ask you, yeah, the usual questions, the questions that are very useful for the community. What I want to ask you is this: what are people asking you that you think are the wrong questions? [0:03:08.5] JB: I don't think there's any bad questions like this. I think that there's a ton of interest when we're talking about technical stuff at different parts of the Kubernetes stack. I think that there's a lot of business context around the container ecosystem and the companies and around forming Heptio, all that. A lot of times, I'll have discussions around career and what led me to where I'm at now. I think those are all a lot of really interesting things to talk about. The one thing that I think doesn't always come across is that these things are all interrelated. At a certain point, the technology and the business and career and work-life – all those things really impact each other. I think it's a mistake to try and take these things in isolation. There's a ton of bleed-over. I think one of the things that we tried to do at Heptio, and I think we did a good job, is recognize that for anybody senior enough inside of any organization, they really have to be able to play all roles, right? At a certain point, everybody is a business person, fundamentally, in terms of actually moving the ball forward for the company, for the business as a whole. Yeah. I think one of the things that I enjoy is actually to be able to look at things from all those various different angles and try and find a good path forward. [0:04:28.7] BL: All right. Taking that, so you've gone from big co to big co, to VC to small co to big co. What has that unique experience taught you and what can you share with us? [0:04:45.5] JB: Bryan, you know my resume better than I do apparently. I started my career at Microsoft and cut my teeth working on Internet Explorer and doing client-side stuff there. I then went to Google in the office up here in Seattle. It was actually in Kirkland, this little hole-in-the-wall, temporary office, pre-WeWork type of thing. I’m thinking, “Hey, I want to do some server-side stuff.” Worked on Google Talk, worked on ads, worked on cloud, started Kubernetes, was a little burned out. Took some time off, goofed off. Did this entrepreneur-in-residence thing for a VC and then started Heptio and then sold it to VMware.
[0:05:23.7] BL: When you're in a big company, especially when you're more junior, it's easy to get caught up in playing the game inside of that company. When I say the game, what I mean is that there are measures of success within big companies and there are ways to advance see approval, see rewards that are all very specific to that company. I think the culture of a company is really defined by what are the parameters and what are the successes, the success factors for getting ahead inside of each of those different companies. I think a lot of times, especially when as a Microsoft straight out at college, I did a couple internships at Microsoft and then joining – leaving Microsoft that first time was actually really, really difficult because there is this fear of like, “Oh, my God. Everything's going to be super different.” It turns out that as you bounced around the industry a little bit, there's actually probably more alike than there is different. The biggest difference I think between large company and small company is really, and I'll throw out some science analogies here. I think, oftentimes organizations are a little bit like the ideal gas law. Okay, maybe going past y'all, but this is – PV = nRT. Pressure times volume equals number of molecules times temperature and the R is a constant. The idea here is that this is an equation where as you add more molecules to a constrained space, that will actually change the temperature and the pressure and these things all rise. What happens is inside of a large company, you end up with so many people within a constrained space in terms of the product space. When you add more people to the organization, or when you're looking to get ahead, it feels very zero-sum. It very much feels like, “Hey, for me to advance, somebody else has to lose.” That's not how the real world works, but oftentimes that's how it feels inside of the big company, is that if it feels zero-sum like that. The liberating thing for being at a startup and I think why so many people get addicted to working at startups is that startups are fundamentally not zero-sum. Everybody succeeds and fails together. When a new person shows up, your thought process is naturally like, “Awesome, we got more cylinders in the engine. We’re going to go faster,” which is not always the case inside of a big company. Now, I think as you get senior enough, all of a sudden these things changes, because you're not just operating within the confines of that company. You're actually again, playing a role in the business, you're looking at the ecosystem, you're looking at the community, you're looking at the competitive landscape and that's where you have your eye on the ball and that's what defines success for you, not the internal company metrics, but really the business metrics is what defines success for you. The thing that I'm trying to do, here at VMware now is as we do Tanzu is make sure that we recognize the unbounded possibilities in front of us inside of this world, make sure that we actually focus our energy on serving customers. In doing so, out-compete others in the market. It's not a zero-sum game, it's not something where as we bring more folks on that we feel we're competing with them. That's a little rambling of an answer. I don't know if that links together for you, Bryan. [0:08:41.8] BL: No, no. That was pretty good. [0:08:44.1] JB: Thanks. [0:08:46.6] MG: Joe, that's probably going to be a context switch now. You touched on the time when you went through the burnout phase. 
Then last week, I think you put out a tweet on there's so much stuff going on, which tweet I'm talking about. Yeah. In the Kubernetes community, you’re a rock star. At VMware, you're already a rock star being on stage at VMware shaking hands with Pat. I mean, there's so many people, so many e-mails, so many slacks, whatever that you get every day, but still I feel you are able to keep the balance, stay grounded and always have a chat, even though sometimes I don't want to approach you, but sometimes I do when I have some crazy questions maybe. Still you’re not pushing people away. How do you manage with mental stress preventing another burnout? What is the secret sauce here? Because I feel I need to work on that. [0:09:37.4] JB: Well, I mean it's hard. The tweet that I put out was last week I was coming back from Barcelona and tired of travel. I'm looking forward to right now, we're recording this just before KubeCon. Then after KubeCon, planning to go to re:Invent in Vegas, which is just a social denial-of-service. It's just overwhelming being with that. I was tired of traveling. I posted something and came across a little stronger than I wanted to. That I just hate people, right? I was at that point where it's just you're traveling and you just don't want to deal with anybody and every little thing is really bugging you and annoying you. I think burnout is an interesting thing. For me and I think there's different causes for different folks. Number one is that it's always fascinating when you start a new job, your calendar is empty, your responsibilities are low. Then as you are successful and you integrate yourself into the organization, all of a sudden you find that you have more work than you have time to do. Then you hit this point where you try and like, “I'm just going to keep doing it. I'm going to power through.” Then you finally hit this point where you're like, “This is just not humanly possible.” Then you go into a triage mode and then you have to decide what's important. I know that there's more to be done than I can do. I have to be very thoughtful about prioritizing what I'm doing. There's a lot of techniques that you can bring to bear there. Being explicit about what your goals are and what your priorities are, writing those things down, whether it's an OKR process, or whether it's just here's the my top three things that I'm focusing on. Making sure that those things are purposefully meaningful to you, right? Understanding the difference between urgent and important, which these are business booky type of things, but it's this idea of there are things that feel they have to get done right now and then there are things that are long-term important. If you're not thoughtful about how you do things, you spend all your time doing the urgent things, but you never get to the stuff that's the actually long-term important. That's a really easy trap to get yourself into. Finding ways to delegate to folks is really, really helpful here, in terms of empowering others, trusting them. It's hard to let go sometimes, but I think being able to set the stage for other people to be successful is really empowering. Then just recognizing it's not all going to get done and that's okay. You can't hold yourself to expect that. 
Now with respect to burnout, for me, the biggest driver for burnout in my career has been when I felt personal responsibility over something, but I haven't had the tools, or the authority, or the ability to impact it. When you feel in your bones ownership over something, but yet you can't actually really own it, that is what causes burnout for me.

I think there are studies talking about how the worst job is middle management. It's not being the CEO. It's not being new to the organization, being junior. It's actually being stuck in the middle, because you're given a certain amount of responsibility, but you aren't always given the tools necessary to be able to drive that. Whereas the folks at the top oftentimes don't have those constraints, so they actually own stuff and have agency to be able to take care of it. I think when you're starting out more junior in the organization, the scope of ownership that you feel is relatively minor. That being stuck in the middle is the biggest driver of burnout for me. A big part of that is just recognizing that sometimes you have to take a step back and personally divest that feeling of ownership when really it's not yours to own.

I'll give you an example. I started Google Compute Engine at Google, which is arguably the foundational cloud service for GCP. As it grew, as it became more important to Google, as it got reorged, more and more of the leadership and responsibilities and decision-making – I'm up here in Seattle – moved down to Mountain View, to folks who had been in the cloud market, or had been at Google for 10 or 15 years, coming in and saying, "Okay, that's cute. We got it from here," right?

That was a case where it was my thing. I felt a lot of ownership over it. It was clear after a certain amount of time, hey, you know what? I just work here. I'm just doing my job and I do what I do, but really it's these other folks that are driving the bus. That's a painful transition, to actually go from that feeling of ownership to "I just work here." That I think is one of the reasons why oftentimes people leave these companies. I think that was one of the big drivers for why I ended up leaving Google, was that lack of agency to be able to impact things that I cared about quite a bit.

[0:13:59.8] CC: I think that's one reason why – well, I think that working in companies where things are moving fast, because they have a very clear, very worthwhile goal, gives you the opportunity to have so much work that you have to say no to a lot of things, like you were saying, and also to take ownership of pieces of that work, because there's more work to go around than there are people to do it. For example, since Heptio and VM – okay, I'm plugging. This is a big plug for VMware I guess, but it definitely is a place that's moving fast. It's not crazy. It's reasonable, because pretty much every one of us is a grown-up. There is so much to do and people are glad when you take ownership of things. That really for me is a big source of work satisfaction.

[0:14:51.2] JB: Yeah. I think it's that zero-sum versus positive-sum game. I think that there's a lot more room for you to actually feel that ownership, have that agency, have that responsibility when you're in a positive-sum environment, versus a zero-sum environment.

[0:15:04.9] BL: All right, so now I want to ask you a technical question.

[0:15:08.1] JB: All right.

[0:15:09.5] BL: Not a really hard one. Just more of how you think about this.
Kubernetes is five, almost five and a half, years old. One of the key components of Kubernetes is etcd. Now, knowing what we know now in 2019, would you have used etcd as its key store? Or would you have gone another direction?

[0:15:32.1] JB: I think etcd is a good fit. The truth of the matter is that we didn't give that decision as much thought as we probably should have early on. We saw that it was relatively easy to stand up and get going with. At least on paper, it had the qualities that we were looking for, so we started building with it and then just ran with it.

Something like ZooKeeper was also something we could have taken on, but the operational overhead of ZooKeeper at the time was very different from etcd. I think we could have gone in the direction of – and this is what [inaudible 0:15:58.5] does for a lot of their tools – where they actually build the data store into the tool in a native way. I think that can lead in some ways to a simpler getting-started experience, because there's just one thing to boot up, but also it's more monolithic from a backup, maintenance, recovery type of thing.

The one thing that I think we probably should have done there in retrospect is to try and create a little bit more of an arm's length relationship between Kubernetes and etcd, in terms of having some cleaner interfaces, some more contracts and stuff, so that we could have actually swapped something else out. There's folks that are doing it, so it's not impossible, but it's definitely not something that's easy to do, or well-supported. I think that's probably the thing that I would change in that space.

Another thing we might want to change: I think it might have been good to be more explicit about being able to actually shard things out, so that you could have multiple data stores for multiple resources and actually find a way to horizontally scale. Now we do that with events, because we were writing events into etcd and that's just a totally different stream of data, but everything else right now is in the one store. I think there's room to do this in the future. I think we've been able to push etcd vertically up until now. There will come a time where we need to find ways to shard that thing out horizontally.

[0:17:12.0] CC: Is it possible, though, to use a different data store than etcd for Kubernetes?

[0:17:18.4] JB: The things that I'm aware of here – and there may be more, and I may not be a 100% up to date – I do know that the Azure folks created a proxy layer that speaks the etcd protocol, but that is actually implemented on the backend using Cosmos DB. That approach there was to essentially create a translation layer. Then Rancher created this project, which is a little bit of a fork of Kubernetes, where they're, I believe, using PostgreSQL as the database for Kubernetes. I haven't looked to see exactly how they ended up swapping that in. My guess is that there's some chewing gum and baling wire, and it's quite a bit of effort for each version upgrade to be able to actually adapt that moving forward. Don't know for sure. I haven't looked deeply.
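For readers who want to see what "etcd as the key store" means concretely, here is a minimal sketch using the official Go etcd client to list the keys the API server keeps under its registry prefix. The endpoint, the lack of TLS, and the /registry prefix are assumptions about a particular cluster setup, not something stated in the conversation.

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	// Connect to etcd. The endpoint is an assumption; a real cluster
	// typically requires TLS client certificates to reach etcd at all.
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"localhost:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// The API server stores objects under a prefix (commonly "/registry").
	// Listing keys only, to see how resources map onto the key space.
	resp, err := cli.Get(ctx, "/registry/", clientv3.WithPrefix(), clientv3.WithKeysOnly())
	if err != nil {
		log.Fatal(err)
	}
	for _, kv := range resp.Kvs {
		fmt.Println(string(kv.Key))
	}
}
```

The event sharding Joe describes surfaces today as the API server's --etcd-servers-overrides flag, which can point a high-volume resource such as events at a separate etcd cluster.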
[0:18:06.0] CC: Okay. Now I would love to philosophize a little bit, or maybe a lot, about Kubernetes. In the spirit of thinking of different questions to ask, I had a bunch of questions and then I was thinking, "How could I ask this question in a different way?" Maybe this is not the right "question." Here is the way I came up with this question. We're so divided out there. One camp loves Kubernetes; another camp: "So hard, so complicated, it's so complex. Why even bother with it? I don't understand why people are using this." Basically, there is that sentiment that Kubernetes is complicated. I don't think anybody would refute that. Now, is that even the right way to talk about Kubernetes? Is it even supposed to not be complicated? I mean, what kind of a tool is it that we are thinking it should just work, it should just be super simple? Is it true that it should be a super simple tool to use?

[0:19:09.4] JB: I mean, that's a loaded question [inaudible]. Let me just first say that, number one, if people are complaining – I mean, I'm stealing this from Tim [inaudible], and I think this is the way he takes some of these things in stride – if people are complaining, then you're relevant, right? If nobody is complaining, then nobody cares about what you're doing. I think that it's a good thing that folks are taking a critical look at Kubernetes. That means that they're taking a look at it, right? Five years in, Kubernetes is on an upswing. That's not necessarily going to last forever. I think we have work to do to continually earn Kubernetes's place in the technology stack over time.

Now, that being said, Kubernetes is a super, super flexible tool. It can do so many things in so many different situations. It's used for everything from retail stores – across tens of thousands of stores – to all types of solutions. People are looking at it for telco, 5G. People are looking at even running it inside cars, which scares me, right? Then all the way up to folks at CERN using it to do data analytics for high energy physics, right?

The technology that I look at that's probably most comparable to that is something like Linux. Linux is actually scalable from everything from a phone all the way up to an IBM mainframe, but it's not easy, right? I mean, to be able to adapt it across all those things, you have to essentially download the kernel, type make config and then answer 5,000 questions, right, for those who haven't done that. It's not an easy thing to do. I think that a lot of times, people might be looking at Kubernetes at the wrong level to be able to say this should be simple. Nobody looks at the Linux kernel that you get from git cloning Linus's tree and compiling it and says, "Yeah, this is too hard." Of course it's hard. It's the Linux kernel. You expect that you're going to have a curated experience if you want something easy, right? Whether that be an Android phone or Ubuntu or what have you. I think to some degree, we're still in the early days where people are dealing with it perhaps at too raw a level, versus actually dealing with it in a more opinionated way.

Now, I think the fascinating thing for Kubernetes is that it provides a lot of the extension points and patterns, so that we don't know exactly what those higher-level, easier-to-use abstractions are going to look like, but we know, or at least we're pretty confident, that we have the right tools and the right environment to be able to experiment our way there. I think we're not there yet, but we're set up for success. That's the first thing.

The second thing is that Kubernetes introduces a whole bunch of different concepts and ideas, and these things are different and uncomfortable for folks. It's hard to learn new things. It's hard for me to learn new things and it's hard for everybody to learn new things.
When you compare Kubernetes to, say, getting started with the modern front-end web development stack, with things like Babel and React, and how do you deploy this and what are all these different options, and it changes on a weekly basis – there's a hell of a lot in common actually between these two ecosystems. They're both really hard, they both introduce all these new concepts, and you have to be embedded in it to really get it.

Now, that being said, if you just want to take raw JavaScript, or jQuery, and have at it, you can do it, and you'll see Hacker News articles every once in a while where people are like, "Hey, I've programmed my site with jQuery and it's just fine. I don't need all this new stuff," right? Just like you'll see folks saying, "I just SSH'd in and actually ran some stuff and it works fine. I don't need all this Kubernetes stuff." If that works for you, that's great. Kubernetes doesn't have to solve every problem for every person.

Then the next thing is that I think there's a lot of people who've been solving these problems again and again and again and again, but they've been solving them in their own way. It's not uncommon, when you look at back-end systems, to join a company, look at what they've built and find that it's a complicated, bespoke system of chewing gum and baling wire, with maybe a little bit of Ansible, maybe a little bit of Puppet and Bash. Everybody has built their own complex, overwrought system to do a lot of the stuff that Kubernetes does. I think one of the values that we see here is that these things are complex, and it takes complexity to do them, but shared complexity is more valuable than personal complexity. If we can agree on some of these concepts, then that's something that can be leveraged widely and it will fade to the background over time, versus having everybody invent their own complex system every time they need to solve these problems.

With all that said, we've got a ton of work to do. It's not like we're done here, and I'm not going to actually sit here and say Kubernetes is easy, or that every complex thing is absolutely necessary and that we can't find ways to simplify it. We clearly can. I just think that when folks say, "Hey, I just want this to be easy," I think they're being a little bit too naïve, because it's a very difficult problem domain.

[0:23:51.9] BL: I'd like to add on to that. I think about this a lot as well. Something that Joe said to me a few years back, that Kubernetes is the platform for creating platforms, is very applicable here. As an industry, we need to stop looking at Kubernetes as the destination. Your destination is really running the applications that give you pleasure, or make your business money. Kubernetes is a tool to enable us to think about our applications more, rather than the underlying ecosystem. We don't think about servers. We don't want to think about storage and networking, or even things like finding things in your cluster. You don't think about that. Kubernetes gives it to you.

If we start thinking about Kubernetes as a way to enable us to do better things, we can go back to what Joe said about Linux. Back whenever I started using Linux in the mid-90s, guess what? We compiled it. Make them big. That stuff was hard and it was slow. Now think about this: in my office I have three different Linux distributions running. You know what? I don't even think about it anymore. I don't think about configuring X. I don't think about anything.
One thing that from Kubernetes is going to grow is it's going to – we're going to figure out these problems and it's going to allow us to think of these other crazy things, which is going to push the industry further. Think maybe 20 years from now if we're still running Kubernetes, who cares? It's just going to be there. We're going to think about some other problem and it could be amazing. This is good times. [0:25:18.2] JB: At one point. Sorry, the dog’s going to bark here. I mean, at one point people cared about some of the BIOS that they were running on our computers, right? That was something that you stressed out about. I mean, back in the bad old days when I was doing DOS gaming and you're like, “Oh, well this BIOS is incompatible with the –” IRQ's and all that. It's just background now. [0:25:36.7] CC: Yeah, I think about this too as a developer. I might have mentioned this before in this podcast. I have never gone from one job to another job and had to use the same deployment system. Every single job I've ever had, the deployment system is completely different, completely different set of tooling and completely different process. Just being able to walk out from one job to another job and be able to use the same platform for deployment, it must be amazing. On the flip side, being able to hire people that will join your organization already know how your deployment works, that has value in itself. It's a huge value that I don't think people talk about enough. [0:26:25.5] JB: Well honestly, this was one of the motivations for creating Kubernetes, is that I looked around Google early on and Google is really good at importing open source, circa 2000, right? This is like, “Hey, you want to use libpng, or you want to use this library, or whatever.” That was the type of open source that Google is really, really good at using. Then Google did things, like say release the Big Table paper. Then somebody went through and then created Cassandra out of it. Maybe there's some ideas in Cassandra that actually build on top of big table, or you're looking at MapReduce versus Hadoop. All of a sudden, you found that these things diverge and Google had zero ability to actually import open source, circa 2010, right? It could not back import systems, because the operational characteristics of these things were solely alien when compared to something like Borg. You see this also, like we would acquire companies and it would take those companies way too long to be able to essentially re-platform themselves on top of Borg, because it was just so different. This is one of the reasons, honestly, why we ended up doing something like GCE is to actually have a platform that was actually more familiar from acquisition. It's one of the reasons we did it. Then also introducing Kubernetes, it's not Borg. It's a cousin of Borg inside of Google. For those who don't know, Borg is the container system that’s been in production at Google for probably 15 years now, and the spiritual grandfather to Kubernetes in a lot of ways. A lot of the ideas that you learn from Kubernetes are applicable to Borg. It's not nearly as big a leap for people to actually change between them, as it was before, Kubernetes was out there. [0:27:58.6] MG: Joe, I got a similar question, because it seems to be like you're a platform builder. You've worked on GCE, Kubernetes obviously. If you would be talking to another platform architect or builder, what would be something that you would recommend to them based on your experiences? 
What is a key ingredient, technically speaking of a platform that you should be building today, or the main thing, or the lesson learned that you had from building those platforms, like technical advice, if you will? [0:28:26.8] JB: I mean, that's a really good question. I think in my mind, the mark of a good platform is when people can use it to do things that you hadn't imagined when you were building it, right? The goal here is that you want a platform to be a force multiplier. You wanted to enable people to do amazing things. You compare, again the Linux kernel, even something as simple as our electrical grid, right? The folks who established those standards, God knows how long ago, right? A 150 years ago or whenever, the whole Tesla versus Thomas Edison, [inaudible]. Nobody had any idea the long-term impact that would have on society over time. I think that's the definition of a successful platform in my mind. You got to keep that in mind, right? I think that for me, a lot of times people design for the first five minutes at the expense of the next five years. I've seen in a lot of times where you design for hey, I'm getting a presentation. I want to be able to fit something amazing on one slot. You do it, but then all of a sudden somebody wants to do something different. They want to go off course, they want to go off the rails, they want to actually experiment and the thing is just brittle. It's like, “Hey, it does this. It doesn't do anything else. Do you want to do something else? Sorry, this isn't the tool for you.” For me, I think that's a trap, right? Because it's easy to get it early users based on that very curated experience. It's hard to keep those users as they actually start using the thing in anger, as they start interfacing with the real world, as they deal with things that you didn't think of as a platform. I'm always thinking about how can every that you put in the platform be used in multiple ways? How can you actually make these things be composable building blocks, because then that gives you the opportunity for folks to actually compose them in ways that you didn't imagine, starting out. I think that's some of it. I started my career at Microsoft working on Internet Explorer. The fascinating thing about Microsoft is that through and through and through and through Microsoft is a platform company. It started with DOS and Windows and Office, but even though Office is viewed as a platform inside of Microsoft. They fundamentally understand in their bones the benefit of actually starting that platform flywheel. It was really interesting to actually be doing this for the first browser wars of IE versus Netscape when I started my own career, to actually see the fact that Microsoft always saw Internet Explorer as a platform, whereas I think Netscape didn't really get it in the same way, right? They didn't understand the potential, I think in the way that Microsoft did it. For me, I mean, just being where you start your career, oftentimes you actually sets your patterns in terms of how you look at things over time. I think a lot of this platform thinking comes from just imprinting when I was a baby developer, I think. I don't know. It takes a lot of time to really internalize that stuff. [0:31:14.1] BL: The lesson here is this a good one, is that when we're building things that are way bigger than us, don't think of your product as the end goal. Think of it as an enabler. When it's an enabler, that's where you get that X multiplier. 
Then that's where you get all the residuals. Microsoft actually is a great example of it. My gosh. Just think of what Microsoft has been able to do with the power of Office? [0:31:39.1] JB: Yeah. I look at something like VB in the Microsoft world. We still don't have VB for the cloud era. We still haven't created that. I think there's still opportunity there to actually strike. VB back in the day, for those who weren't there, struck this amazing balance of being easy to get started with, but also something that could actually grow with you over time, because it had all these extension mechanisms where you could actually – there's the marketplace controls that you could buy, you could partner with other developers that were writing C or C++. It was an incredible platform. Then they leverage to Office to extend the capabilities of VB. It's an amazing ecosystem. Sorry. I didn't mean to interrupt you, Bryan. [0:32:16.0] BL: Oh, no. That's all good. I get as excited about it as you do whenever I think about it. It's a pretty exciting place to be. [0:32:21.8] JB: Yeah. I'll talk to VC's, because I did a startup and the EIR thing and I'll have them ask me things like, “Hey, where should we invest in the Kubernetes space?” My answer is using the BS analogy like, “You got to go where the puck is going.” Invest in the things that Kubernetes enables. What are the things that people can do now that they couldn't do pre-Kubernetes? Those are the things where we're going to see the explosion of growth. It's not about the Kubernetes. It's really about a larger ecosystem that Kubernetes is the seed crystal for. [0:32:56.2] BL: For those of you listening, if you want to get anything out of here, rewind back about 20 seconds and play that over and over again, what Joe just said. [0:33:04.2] MG: Yeah. This was brilliant. [0:33:05.9] BL: It’s where the puck is going. It's not where we are now. We're building for the future. We're not building for now. [0:33:11.1] MG: I'm looking at this tweetable quotes here, the last 20 seconds, so many tweetable quotes. We have to decide which ones to tweet then. [0:33:18.5] CC: Well, we’ll tweet them all. [0:33:20.0] MG: Oh, yes. [0:33:21.3] JB: Here’s another thing. Here’s another piece of career advice. Successful people are good storytellers. You can have the most beautiful technology, if you can't tell the human story about it, about what it does for folks, then nobody will care. I spend a lot of the time on Twitter and probably too much time, if you ask my family. That medium of being able to actually distill your thoughts down into something that is tweetable, quotable, really potent, that is a skill that's worth developing and it's a skill that's worth valuing. Because there's things that are rolling around in my head and I still haven't found a way to get them into a tweet. At some point, I'll figure it out and it'll be a thing. It takes a lot of time to build that skill to be able to refine like that. [0:34:08.5] CC: I want to say an anecdote of myself. I interview a small – so tiny startup, maybe less than 10 people at the time in Cambridge back when I lived up there. The guy was borderline wanting to hire me and no. I sent him an e-mail to try to influence his decision and it was a long-ass e-mail. They said, “No, thank you.” Then I think we had a good rapport. I said, well, anything you can tell me about your decision then? He said something along the lines like, I was too verbose. That was pre-Twitter. 
Twitter I think existed, but it was at the very beginning, I wasn't using it. Yeah, people. Be concise. Decision-makers don't have time to read long things. You need to be able to convey your message in short sentences, few sentences. It's crucial. [0:35:07.5] BL: All right, so we're nearing the end. I want to ask another question, because these are random questions for Joe. Joe, it is the week before KubeCon North America 2019 and today is actually an interesting day. A couple of neat things happened today. We had Docker. It was neat. Docker split somewhat and it sold part of it and now they're going to be a tools company. That's neat. We're all still trying decoding what that actually is. Here's the neat piece, Apple released a laptop that can have 64 gigabytes of memory. [0:35:44.4] MG: Has an escape key. [0:35:45.7] BL: It has an escape key. [0:35:47.6] MG: This is brilliant. [0:35:48.6] BL: Yeah. I think the question was what do you think about that? [0:35:52.8] JB: Okay. Well, so first of all, I mean, Docker is fascinating and I think this is – there's a lot of lessons there and I'm not sure I'm the one to tell them. I think it's easy to armchair-quarterback these things. It's hard to live that story. I think that it's fun to play that what-if game. I think it does show that this stuff is hard. You can have everything in your grasp and then just have it all slip away. I think that's not anybody's fault. It's just there's different strategies, different approaches in how this stuff plays out over time. On the laptop thing, I think my current laptop has 16 gigs of RAM. One of the things that we're seeing is that as we move towards a microservices world, I gave a talk about this probably three or four years ago. As we move to a microservices world, I think there's one stage where you create a bunch of microservices, but you still view those things as an app. You say, "This microservice belongs to this app." Within a mature organization, those things start to grow and eventually what you find is that you have services that are actually useful for multiple apps. Your entire production infrastructure becomes this web of services that are calling each other. Apps are just entry points into these things at different points of that web of infrastructure. This is the way that things work at Google. When you see companies that are microservices-based, let's take an Uber, or Lyft or an Airbnb. As they diversify the set of products that they're offering, you know they're not running completely independent stacks. You know that there's places where these things connect to behind the scenes in a microservices world. What does that mean for developers? What it means is that you can no longer fit an entire company's worth of infrastructure on your laptop anymore. Within a certain constraint, you can go through and actually say, “Hey, I can bring up this canonical cut of microservices. I can bring that up on my laptop, but it will have dependencies that I either have to actually call into the prod dependencies, call into specialized staging, or mock those things out, so that I can actually run this thing locally and develop it.” With 64 gig of RAM, I can run more on my laptop, right? There's a little bit of kick in that can down the road in terms of okay, there's this race between more microservicey versus how much I can port on my laptop. The interesting thing is that where is this going to end? Are we going to have the ability to bring more and more with your laptop? 
Are you going to be able to run in the split brain thing across like there's people who will create network connections between these things? Or are we going to move to a world where you're doing more development on cluster, in the cloud and your laptop gets thinner and thinner, right? Either you absolutely need 64 gig because you're pushing up against the boundaries of what you can do on your laptop, or you've given up and it's all running in the cloud. Yet anyways, you might as well just use a Chromebook. It's fascinating that we're seeing this divergence of scaling up, versus actually moving stuff to the cloud. I can tell you at Google, a lot of folks, even developers can actually be super, super productive with something relatively thin like Chromebook, because there's so many tools there that really are targeted at doing all that stuff remotely, in Google's production data centers and such. That's I think the interesting implication from a developer point of view with 64 gigabytes of RAM. What you going to do Bryan? You're going to get the 64 gig Mac? You’re going to do it? [0:39:11.2] BL: It’s already coming. They'll be here week after next. [0:39:13.2] JB: You already ordered it? You are such an Apple fanboy. Oh, man. [0:39:18.6] BL: Oh, I'm actually so not to go too much into it. I am a fan of lots of memory. You know what? We work in this cloud native world. Any given week, I’ll work on four to five projects. I'm lazy. I don't want to shut any of them down. Now with 64 gigs, I don't have to shut anything down. [0:39:37.2] JB: It was so funny. When I was at Microsoft, everybody actually focused on Microsoft Windows boot time. They’re like, “We got to make it boot faster. We got to make it boot faster.” I'm like, I don't boot that often. I just want the thing to resume from sleep, right? If you can make that reliable on that theme. [0:39:48.7] CC: Yeah. I frequently have to restart my computer, because of memory issues. I don't want to know which app is taking up memory. I have a tool that I can look up, but I just shut it down, flush the memory. I do have a question related to Docker. Kubernetes, I don't know if it's right to say that Kubernetes is so reliant on Docker, because I know it works with other container technologies as well. In the worst case scenario, it's obviously, I have no reason to predict this, but in the worst case scenario where Docker, let's say is discontinued, how would that affect Kubernetes? [0:40:25.3] JB: Early on when we were doing Kubernetes and you're in this relationship with a company like Docker, I looked at what Docker was doing and you're like, “Okay, where is the real value here over time?” In my mind, I thought that the interface with developers that distributed kernel, that API surface area of Kubernetes, that was really the thing and that a lot of the Docker stuff was over time going to fade to the background. I think we've seen that happen, because when we talk about production systems, we definitely have moved past Docker and we have the CRI, we have Container D, which it was essentially built by Docker, donated to the CNCF as it made its way towards graduation. I think it's graduated now. The governance ties to Docker have been severed at this point. In production systems for Kubernetes, we've moved past that. I still think that there's developer experiences oftentimes reliant on Docker and things like Docker files. I think we're moving past that also. 
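Joe's point that production clusters have moved past Docker to CRI runtimes such as containerd is easy to check for yourself. Below is a minimal client-go sketch that prints the runtime each node advertises; the kubeconfig path is an assumption about your environment, and the snippet is illustrative rather than anything referenced on the show.

```go
package main

import (
	"context"
	"fmt"
	"log"
	"os"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load credentials from the default kubeconfig location (an assumption).
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Each node advertises its CRI runtime, e.g. "containerd://1.7.0".
	nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, n := range nodes.Items {
		fmt.Printf("%s\t%s\n", n.Name, n.Status.NodeInfo.ContainerRuntimeVersion)
	}
}
```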
I think that if Docker were to disappear off the face of the earth, there would be some adjustment, but I think we have the right toolkits and the right systems to be able to do that. Some of that is open sourced by Docker as part of the Moby project. The whole Dockerfile evaluation flow is actually in this thing called BuildKit that you can actually use in different contexts outside of Docker itself. I think that covers a lot of the building action.

The thing that I think is the most influential, that will actually stand the test of time, is the Docker container image format: that artifact that you upload, that you download, the registry APIs. Now those things have been codified and are moving forward slowly under the OCI, the Open Container Initiative project, which is a little bit of a sister-foundation type of thing to the CNCF. I think that's the influence over time.

Then related to that, I think the world should be a little bit worried about Docker Hub and what that means for Docker Hub over time, because that is not a cheap service to run. It's done as a public good, similar to GitHub. If the commercial aspects of that are not healthy, then I think it might be disruptive if we see something bad happen with Docker Hub itself. I don't know what exactly the replacement for that would be overnight. That'd be incredibly disruptive.

[0:42:35.8] CC: Should be Harbor.

[0:42:37.7] JB: I mean, Harbor is a thing, but somebody's got to run it and somebody's got to pay the bandwidth bills, right? Thank you to Docker for paying those bandwidth bills, because it's actually been good for not just Docker, but for our entire ecosystem to be able to do that. I don't know what that looks like moving forward. I think it's going to be – I mean, maybe GitHub with GitHub artifacts is going to pick up the slack. We're going to have to see.

[0:42:58.6] MG: Good. I have one last question from my end. Totally different topic, not Docker at all. Or maybe – depends on your answer to it. The question is, you're a very technical person; what is the technology, or the stuff, that your brain is currently spinning on, if you can disclose? Obviously, no secrets. What keeps you awake at night, in your brain?

[0:43:20.1] JB: I mean, I think the thing that – a couple of things. Stuff that's just completely different from our world, I think, is interesting. I think we've entered a place where programming computers is so specialized. Again, I talk about how if you made me be a front-end developer, I would flail for several months trying to figure out how to even be productive, right? I think similarly, when we look at something like machine learning, there's a lot of stuff happening there really fast. I understand the broad strokes, but I can't say that I understand it to any deep degree. I think it's fascinating and exciting, the amount of diversity in this world and stuff to learn.

Bryan's asked me in the past, "Hey, if you're going to quit and start a new career and do something different, what would it be?" I think I would probably do something like generative art, right? Essentially, there's folks out there writing these programs to generate art, a little bit of the moral descendant of the Demoscene. That was – I don't know. I wonder when the Demoscene happened, Bryan. When was that?

[0:44:19.4] BL: Oh, mid 90s, or early 90s.

[0:44:22.4] JB: That's right. I was never super into that. I don't think I was smart enough. It's crazy stuff.
[0:44:27.6] MG: I actually used to write demoscenes. [0:44:28.8] JB: I know you did. I know you did. Okay, so just for those not familiar, the Demoscene was essentially you wrote essentially X86 assembly code to do something cool on screen. It was all generated so that the amount of code was vanishingly small. It was this puzzle/art/technical tour de force type of thing. [0:44:50.8] BL: We wrote trigonometry in a similar – that's literally what we did. [0:44:56.2] JB: I think a lot of that stuff ends up being fun. Stuff that's related to our world, I think about how do we move up the stack and I think a lot of folks are focused on the developer experience, how do we make that easier. I think one of the things through the lens of VMware and Tanzu is looking at how does this stuff start to interface with organizational mechanics? How does the typical enterprise work? How do we actually make sure that we can start delivering a toolset that works with that organization, versus working against the organization? That I think is an interesting area, where it's hard because it involves people. Back-end people like programmers, they love it because they don't have to deal with those pesky people, right? They get to define their interfaces and their interfaces are pure and logical. I think that UI work, UX work, anytime when you deal with people, that's the hardest thing, because you don't get to actually tell them how to think. They tell you how to think and you have to adapt to it, which is actually different from a lot of back-end here in logical type of folks. I think there's an aspect of that that is user experience at the consumer level. There's developer experience and there's a whole class of things, which is maybe organizational experience. How do you interface with the organization, versus just interfacing, whether it's individuals in the developer, or the end-user point of view? I don't know if as an industry, we actually have our heads wrapped around that organizational limits. [0:46:16.6] CC: Well, we have arrived at the end. Makes me so sad, because we could talk for easily two more hours. [0:46:24.8] JB: Yeah, we could definitely keep going. [0:46:26.4] CC: We’re going to bring you back, Joe. Don’t worry. [0:46:28.6] JB: For sure. Anytime. [0:46:29.9] CC: Or do worry. All right, so we are going to release these episodes right after KubeCon. Glad everybody could be here today. Thank you. Make sure to subscribe and follow us on Twitter. Follow us everywhere and suggest episode topics for us. Bye and until next time. [0:46:52.3] JB: Thank you so much. [0:46:52.9] MG: Bye. [0:46:54.1] BL: Bye. Thank you. [END OF EPISODE] [0:46:55.1] ANNOUNCER: Thank you for listening to The Podlets Cloud Native Podcast. Find us on Twitter at https://twitter.com/ThePodlets and on the http://thepodlets.io/ website, where you'll find transcripts and show notes. We'll be back next week. Stay tuned by subscribing. [END]See omnystudio.com/listener for privacy information.

The Podlets - A Cloud Native Podcast

Welcome to the first episode of The Podlets Podcast! On the show today we’re kicking it off with some introductions to who we all are, how we got involved in VMware and a bit about our career histories up to this point. We share our vision for this podcast and explain the unique angle from which we will approach our conversations, a way that will hopefully illuminate some of the concepts we discuss in a much greater way. We also dive into our various experiences with open source, share what some of our favorite projects have been and then we define what the term “cloud native” means to each of us individually. The contribution that the Cloud Native Computing Foundation (CNCF) is making in the industry is amazing, and we talk about how they advocate the programs they adopt and just generally impact the community. We are so excited to be on this podcast and to get feedback from you, so do follow us on Twitter and be sure to tune in for the next episode! Note: our show changed name to The Podlets. Follow us: https://twitter.com/thepodlets Hosts: Carlisia Campos Kris Nóva Josh Rosso Duffie Cooley Nicholas Lane Key Points from This Episode: An introduction to us, our career histories and how we got into the cloud native realm. Contributing to open source and everyone’s favorite project they have worked on. What the purpose of this podcast is and the unique angle we will approach topics from. The importance of understanding the “why” behind tools and concepts. How we are going to be interacting with our audience and create a feedback loop. Unpacking the term “cloud native” and what it means to each of us. Differentiating between the cloud native apps and cloud native infrastructure. The ability to interact with APIs as the heart of cloud natives. More about the Cloud Native Computing Foundation (CNCF) and their role in the industry. Some of the great things that happen when a project is donated to the CNCF. The code of conduct that you need to adopt to be part of the CNCF. And much more! 
Quotes:

"If you tell me the how before I understand what that even is, I'm going to forget." — @carlisia [0:12:54]

"I firmly believe that you can't – that you don't understand a thing if you can't teach it." — @mauilion [0:13:51]

"When you're designing software and you start your main function to be built around the cloud, or to be built around what the cloud enables us to do in the services a cloud to offer you, that is when you start to look at cloud native engineering." — @krisnova [0:16:57]

Links Mentioned in Today's Episode:

Kubernetes — https://kubernetes.io/
The Podlets on Twitter — https://twitter.com/thepodlets
VMware — https://www.vmware.com/
Nicholas Lane on LinkedIn — https://www.linkedin.com/in/nicholas-ross-lane
Red Hat — https://www.redhat.com/
CoreOS — https://coreos.com/
Duffie Cooley on LinkedIn — https://www.linkedin.com/in/mauilion
Apache Mesos — http://mesos.apache.org/
Kris Nova on LinkedIn — https://www.linkedin.com/in/kris-nova
SolidFire — https://www.solidfire.com/
NetApp — https://www.netapp.com/us/index.aspx
Microsoft Azure — https://azure.microsoft.com/
Carlisia Campos on LinkedIn — https://www.linkedin.com/in/carlisia
Fastly — https://www.fastly.com/
FreeBSD — https://www.freebsd.org/
OpenStack — https://www.openstack.org/
Open vSwitch — https://www.openvswitch.org/
Istio — https://istio.io/
The Kubelets on GitHub — https://github.com/heptio/thekubelets
Cloud Native Infrastructure on Amazon — https://www.amazon.com/Cloud-Native-Infrastructure-Applications-Environment/dp/1491984309
Cloud Native Computing Foundation — https://www.cncf.io/
Terraform — https://www.terraform.io/
KubeCon — https://www.cncf.io/community/kubecon-cloudnativecon-events/
The Linux Foundation — https://www.linuxfoundation.org/
Sysdig — https://sysdig.com/opensource/falco/
OpenEBS — https://openebs.io/
Aaron Crickenberger — https://twitter.com/spiffxp

Transcript:

[INTRODUCTION]

[0:00:08.1] ANNOUNCER: Welcome to The Podlets Podcast, a weekly show that explores cloud native one buzzword at a time. Each week, experts in the field will discuss and contrast distributed systems concepts, practices, tradeoffs and lessons learned to help you on your cloud native journey. This space moves fast and we shouldn't reinvent the wheel. If you're an engineer, operator or technically minded decision maker, this podcast is for you.

[EPISODE]

[0:00:41.3] KN: Welcome to the podcast.

[0:00:42.5] NL: Hi. I'm Nicholas Lane. I'm a cloud native architect.

[0:00:45.0] CC: Who do you work for, Nicholas?

[0:00:47.3] NL: I've worked for VMware, formerly of Heptio.

[0:00:50.5] KN: I think we're all VMware, formerly Heptio, aren't we?

[0:00:52.5] NL: Yes.

[0:00:54.0] CC: That is correct. It just happened that way. Now Nick, why don't you tell us how you got into this space?

[0:01:02.4] NL: Okay. I originally got into the cloud native realm working for Red Hat as a consultant. At the time, I was doing OpenShift consultancy. Then my boss, Paul – Paul London – left Red Hat and I decided to follow him to CoreOS, where I met Duffie and Josh. We were on the field engineering team there and the sales engineering team. Then from there, I found myself at Heptio and now with VMware. Duffie, how about you?

[0:01:30.3] DC: My name is Duffie Cooley. I'm also a cloud native architect at VMware, also recently Heptio and CoreOS. I've been working in technologies like cloud native for quite a few years now. I started my journey moving from virtual machines into containers with Mesos.
I spent some time working on Mesos and actually worked with a team of really smart individuals to try and develop an API in front of that crazy Mesos thing. Then we realized, “Well, why are we doing this? There is one that's called Kubernetes. We should jump on that.” That's the direction in my time with containerization and cloud native stuff has taken. How about you Josh? [0:02:07.2] JR: Hey, I’m Josh. I similar to Duffie and Nicholas came from CoreOS and then to Heptio and then eventually VMware. Actually got my start in the middleware business oddly enough, where we worked on the Egregious Spaghetti Box, or the ESB as it’s formally known. I got to see over time how folks were doing a lot of these, I guess, more legacy monolithic applications and that sparked my interest into learning a bit about some of the cloud native things that were going on. At the time, CoreOS was at the forefront of that. It was a natural progression based on the interests and had a really good time working at Heptio with a lot of the folks that are on this call right now. Kris, you want to give us an intro? [0:02:48.4] KN: Sure. Hi, everyone. Kris Nova. I've been SRE DevOps infrastructure for about a decade now. I used to live in Boulder, Colorado. I came out of a couple startups there. I worked at SolidFire, we went to NetApp. I used to work on the Linux kernel there some. Then I was at Deis for a while when I first started contributing to Kubernetes. We got bought by Microsoft, left Microsoft, the Azure team. I was working on the original managed Kubernetes there. Left that team, joined up with Heptio, met all of these fabulous folks. I think, I wrote a book and I've been doing a lot of public speaking and some other junk along the way. Yeah. Hi. What about you, Carlisia? [0:03:28.2] CC: All right. I think it's really interesting that all the guys are lined up on one call and all the girls on another call. [0:03:34.1] NL: We should have probably broken it up more. [0:03:36.4] CC: I am a developer and have always been a developer. Before joining Heptio, I was working for Fastly, which is a CDN company. They’re doing – helping them build the latest generation of their TLS management system. At some point during my stay there, Kevin Stuart was posting on Twitter, joined Heptio. At this point, Heptio was about, I don't know, between six months in a year-old. I saw those tweets go by I’m like, “Yeah, that sounds interesting, but I'm happy where I am.” I have a very good friend, Kennedy actually. He saw those tweets and here he kept saying to me, “You should apply. You should apply, because they are great people. They did great things. Kubernetes is so hot.” I’m like, “I'm happy where I am.” Eventually, I contacted Kevin and he also said, “Yeah, that it would be a perfect match.” two months later decided to apply. The people are amazing. I did think that Kubernetes was really hard, but my decision-making went towards two things. The people are amazing and some people who were working there I already knew from previous opportunities. Some of the people that I knew – I mean, I love everyone. The only thing was that it was an opportunity for me to work with open source. I definitely could not pass that up. I could not be happier to have made that decision now with VMware acquiring Heptio, like everybody here I’m at VMware. Still happy. [0:05:19.7] KN: Has everybody here contributed to open source before? [0:05:22.9] NL: Yup, I have. [0:05:24.0] KN: What's everybody's favorite project they've worked on? 
[0:05:26.4] NL: That's an interesting question. From a business aspect, I really like Dex. Dex is an identity provider, or middleware for identity providers. It provides an OIDC endpoint for multiple different identity providers, so you can absorb them into Kubernetes. Since Kubernetes only has an OIDC – only accepts OIDC JWT tokens for authentication – that functionality that Dex provides is probably my favorite thing. Although, if I'm going to be truly honest, I think right now the thing that I'm the most excited about working on is my own project, which is starting to tie into my interest in doing chaos engineering. What about you guys? What's your favorite?

[0:06:06.3] KN: I understood some of those words.

NL: Those are things we'll touch on in different episodes.

[0:06:12.0] KN: Yeah. I worked on FreeBSD for a while. That was my first welcome to open source. I mean, that was back in the olden days of IRC clients and writing C. I had a lot of fun, and still I'm really close with a lot of folks in the FreeBSD community, so that always has a special place in my heart. I think that was my first experience of, "Oh, this is how you work on a team, and you work collaboratively together, and it's okay to fail and be open."

[0:06:39.5] NL: Nice.

[0:06:40.2] KN: What about you, Josh?

[0:06:41.2] JR: I worked on a project at CoreOS – well, a project that's still out there – called the ALB Ingress controller. It was a way to bring the AWS ALBs, which are just layer 7 load balancers, together with the Kubernetes Ingress API, attaching those two so that the ALB could serve ingress. The reason that it was the most interesting, technology aside, is just that it went from something that we started, just myself and a colleague, and eventually gained community adoption. We had to go through the process of just being us two worrying about our concerns, to having to bring on a large community that had their own business requirements and needs, and having to say no at times, and having to encourage individuals to contribute when they had ideas and issues, because we didn't have the bandwidth to solve all those problems. It was interesting not necessarily from a technical standpoint, but just to see what it actually means when something starts to gain traction. That was really cool. Yeah, how about you, Duffie?

[0:07:39.7] DC: I've worked on a number of projects, but I find that generally where I fit into the ecosystem is basically helping other people adopt open source technologies. I spent quite a bit of my time working on OpenStack and I spent some time working on Open vSwitch and recently on Kubernetes. Generally speaking, I haven't found myself to be much of a contributor of code to those projects per se, but more like my work is just enabling people to adopt those technologies, because I understand the breadth of the project more than the detail of some particular aspect. Lately, I've been spending some time working more on the SIG Network and SIG Cluster Lifecycle stuff. Some of the projects that have really caught my interest are things like Kind, which is Kubernetes in Docker, and working on kubeadm itself, just making sure that we don't miss anything obvious in the way that kubeadm is being used to manage the infrastructure.
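Since Nicholas brings up Dex and OIDC above, a small sketch of what an OIDC token actually is may help: an ID token is a signed JWT whose claims the API server maps to a Kubernetes user and groups. The token below is fabricated, and signature verification is deliberately skipped; this only decodes the payload to show the claims that flags like --oidc-username-claim and --oidc-groups-claim point at.

```go
package main

import (
	"encoding/base64"
	"encoding/json"
	"fmt"
	"log"
	"strings"
)

func main() {
	// A JWT is three base64url sections: header.payload.signature.
	// This token is a made-up example; in real use it would come from an
	// OIDC provider such as Dex and must be signature-verified.
	token := "eyJhbGciOiJSUzI1NiJ9." +
		base64.RawURLEncoding.EncodeToString([]byte(
			`{"iss":"https://dex.example.com","sub":"CgNib2IS","email":"bob@example.com","groups":["dev"]}`)) +
		".sig"

	parts := strings.Split(token, ".")
	if len(parts) != 3 {
		log.Fatal("not a JWT")
	}
	payload, err := base64.RawURLEncoding.DecodeString(parts[1])
	if err != nil {
		log.Fatal(err)
	}

	var claims map[string]interface{}
	if err := json.Unmarshal(payload, &claims); err != nil {
		log.Fatal(err)
	}
	// These are the fields the API server's OIDC flags are configured to read.
	fmt.Println("issuer:", claims["iss"])
	fmt.Println("user:  ", claims["email"])
	fmt.Println("groups:", claims["groups"])
}
```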
[0:08:34.2] KN: What about you, Carlisia?

[0:08:36.0] CC: I realize it's a mission – what I'm working on at VMware is coincidentally the project, the open source project, that is my favorite. I didn't have a lot of experience with open source, just minor contributions here and there before this project. I'm working with Velero. It's a disaster recovery tool for Kubernetes. Like I said, it's open source. We're coming up to version 1 pretty soon. The other maintainers are amazing, super knowledgeable and very experienced, mature. It is such a joy to work with them. My favorites.

[0:09:13.4] NL: That's awesome.

[0:09:14.7] DC: Should we get into the concept of cloud native and start talking about what we each think of this thing? Seems like a pretty loaded topic. There are a lot of people who would think of cloud native as just a generic term, so we should probably try and nail it down here.

[0:09:27.9] KN: I'm excited for this one.

[0:09:30.1] CC: Maybe we should talk about what this podcast show is going to be?

[0:09:34.9] NL: Sure. Yeah. Totally.

[0:09:37.9] CC: Since this is our first episode.

[0:09:37.8] NL: Carlisia, why don't you tell us a little bit about the podcast?

[0:09:40.4] CC: I will be glad to. The idea that we had was to have a show where we can discuss cloud native concepts. As opposed to talking about a particular tool or a particular project, we are going to aim to talk about the concepts themselves and approach them from the perspective of a distributed system idea, or issue, or concept, or a cloud native concept. From there, we can talk about what really is this problem, which people or companies have this problem, what usually are the solutions, and what are the alternative ways to solve this problem. Then we can talk about tools that are out there that people can use. I don't think there is a show that approaches things from this angle. I'm really excited about bringing this to the community.

[0:10:38.9] KN: It's almost like TGIK, but turned inside out, or flipped around, where in TGIK we do tools first and we talk about what exactly this tool is and how you use it, but in this one we're spinning that around and we're saying, "No, let's pick a broader idea and then let's explore all the different possibilities with this broader idea."

[0:10:59.2] CC: Yeah, I would say so.

[0:11:01.0] JR: From the field standpoint, I think this is something we oftentimes run into with people who are just getting started with larger projects, like Kubernetes perhaps, or anything really, where a lot of times they hear something like the word Istio come out, or some technology. Oftentimes, the why behind it isn't really considered upfront; it's just, this tool exists, it's being talked about, clearly we need to start looking at it. Really diving into the concepts and the why behind it hopefully will bring some light to a lot of these things that we're all talking about day-to-day.

[0:11:31.6] CC: Yeah. Really focusing on the what and the why. The how is secondary. That's what my vision of this show is.

[0:11:41.7] KN: I like it.

[0:11:43.0] NL: That's something that really excites me, because there are a lot of these concepts that I talk about in my day-to-day life, but some of them, I don't actually think that I understand very well. It's those words that you've heard a million times, so you know how to use them, but you don't actually know the definition of them.

[0:11:57.1] CC: I'm super glad to hear you say that, mister, because as a developer, not having a sysadmin background – of course, I did sysadmin things as a developer, but it wasn't my day-to-day thing ever.
When I started working with Kubernetes, a lot of things I didn't quite grasp, and that's a super understatement. I notice that – I mean, I can ask questions. No problem. I will dig through and find out and learn. The problem is in talking to experts. A lot of the time when people – well, let me talk about myself. A lot of the time when I ask a question, the experts jump right to the how. What is this? "Oh, this is how you do it." I don't know what this is. Back off a little bit, right? Back up. I don't know what this is. Why is this doing this? I don't know. If you tell me the how before I understand what that even is, I'm going to forget. That's what's going to happen. I mean, it's great you're trying to make an effort and show me how to do something, but this is personal, the way I learn. I need to understand the what first. This is why I'm so excited about this show. It's going to be awesome. This is what we're going to talk about.

[0:13:19.2] DC: Yeah, I agree. This is definitely one of the things that excites me about this topic as well. I find my secret superpower is troubleshooting. That means that I can actually understand what the expected relationships between things should be, right? Rather than just trying to figure it out – without really digging into the actual problem, and into what the people who were developing the code were trying to actually solve, or how they thought about it, it's hard to get to the point where you fully understand that distributed system. I think this is a great place to start.

The other thing I'll say is that I firmly believe that you can't – that you don't understand a thing if you can't teach it. This podcast for me is about that. Let's bring up all the questions, and we should enable our audience to actually ask us questions somehow, and get to a place where we can get as many perspectives on a problem as we can, such that we can really dig into the detail of what the problem is before we ever talk about how to solve it. Good stuff.

[0:14:18.4] CC: Yeah, absolutely.

[0:14:19.8] KN: Speaking of a feedback loop from our audience, and taking the problem first and the solution second, how do we plan on interacting with our audience? Do we want to maybe start a GitHub repo, or what are we thinking?

[0:14:34.2] NL: I think a GitHub repo makes a lot of sense. I also wouldn't mind doing some social media malarkey, maybe having a Twitter account that we run or something like that, where people can ask questions too.

[0:14:46.5] CC: Yes. Yes to all of that. Yeah. Having an issue list in a repo that people can just add comments to – praise, thank-yous, questions, suggestions for concepts to talk about – and say, "Hey, I have no clue what this means. Can you all talk about it?" Yeah, we'll talk about it. Twitter, yes. Interact with us on Twitter. I believe our Twitter handle is TheKubelets.

[0:15:12.1] KN: Oh, we already have one. Nice.

[0:15:12.4] NL: Yes. See, I'm learning something new already.

[0:15:15.3] CC: We already have. I thought you all were joking. We have the Kubernetes repo. We have a GitHub repo called –

[0:15:22.8] NL: Oh, perfect.

[0:15:23.4] CC: heptio/thekubelets.
I think maybe what we could do is have it so that when people listen to the recording, they could go to the HackMD document, put questions in or comments around things they would like to hear more about, or maybe share their perspectives about these topics. Maybe in the following week, we could just go back and review what came in during that period of time, or during the next session. [0:15:57.7] KN: Yeah. Maybe we're merging the HackMD on the next recording. [0:16:01.8] DC: Yeah. [0:16:03.3] KN: Okay. I like it. [0:16:03.6] DC: Josh, you have any thoughts? Friendster, MySpace, anything like that? [0:16:07.2] JR: No. I think we could pass on MySpace for now, but everything else sounds great. [0:16:13.4] DC: Do we want to get into the meat of the episode? [0:16:15.3] KN: Yeah. [0:16:17.2] DC: Our true topic, what does cloud native mean to all of us? Kris, I'm interested to hear your thoughts on this. You might have written a book about this? [0:16:28.3] KN: I co-authored a book called Cloud Native Infrastructure, which – it means a lot of things to a lot of people. It's one of those umbrella terms, like DevOps. It's up to you to interpret it. I think in the past couple of years of working in the cloud native space and working directly at the CNCF as a CNCF ambassador – Cloud Native Computing Foundation, they're the open source nonprofit folks behind this term cloud native – I think the best definition I've been able to come up with is when you're designing software and you start your main function to be built around the cloud, or to be built around what the cloud enables us to do and the services a cloud has to offer you, that is when you start to look at cloud native engineering. I think all cloud native infrastructure is, is designing software that manages and mutates infrastructure in that same way. I think the underlying theme here is we're no longer catting configuration to disk and doing systemd restarts. Now we're just sending HTTPS API requests and getting messages back. Hopefully, if the cloud has done what we expect it to do, that broadcasts some broader change. As software engineers, we can count on those guarantees to design our software around. I really think that you need to understand that it's starting with the main function first and completely engineering your app around these new ideas and these new paradigms, and not necessarily a migration of a non-cloud native app. I mean, you technically could go through and do it. Sure, we've seen a lot of people do it, but I don't think that's technically cloud native. That's cloud alien. Yeah. I don't know. That's just my thought. [0:18:10.0] DC: Are you saying that the cloud native approach is a greenfield approach generally? To be a cloud native application, you're going to take that into account in the DNA of your application? [0:18:20.8] KN: Right. That's exactly what I'm saying. [0:18:23.1] CC: It's interesting that you mentioned cloud alien, because that plays into the way I would describe the meaning of cloud native. I mean, what it is – I think Nova described it beautifully and it's a lot of – it really shows her know-how. For me, if I have to describe it, I will just parrot things that I have read, including her book. What it means to me, what it means really – I'm going to use a metaphor to explain what it means to me. Given my accent, I'm obviously not American born, and so I'm a foreigner. Although I do speak English pretty well, I'm not native. English is not my native tongue. 
I speak English really well, but there are certain hiccups that I'm going to have every once in a while. There are things that I'm not going to know what to say, or it's going to take me a bit longer to remember. I rarely run into not understanding something in English, but it happens sometimes. That's the same with a cloud native application. If it hasn't been built to run on cloud native platforms and systems, you can migrate an application to a cloud native environment, but it's not going to fully utilize the environment like a native app would. That's my take. [0:19:56.3] KN: Cloud immigrant. [0:19:57.9] CC: Cloud immigrant. Is Nick a cloud alien? [0:20:01.1] KN: Yeah. [0:20:02.8] CC: Are they cloud native alien, or cloud native aliens. Yeah. [0:20:07.1] JR: On that point, I'd be curious if you all feel there is a need to discern the notion of cloud native infrastructure, or platforms, from the notion of cloud native apps themselves. Where I'm going with this – it's funny hearing the greenfield thing and what you said, Carlisia, with the immigration, if you will, notion. Oftentimes, you see these very cloud native platforms, things like Kubernetes, or even Mesos, or whatever it might be. Then you see the applications themselves. Some people are using these platforms that are cloud native to be a forcing function, to make a lot of their legacy stuff adopt more cloud native principles, right? There's this push and pull. It's like, "Do I make my app more cloud native? Do I make my infrastructure more cloud native? Do I do them both at the same time?" Be curious what your thoughts are on that, or if that resonates with you at all. [0:21:00.4] KN: I've got a response here, if I can jump in. Of course, Nova with opinions. Who would have thought? I think what I'm hearing here, Josh, is as we're using these cloud native platforms, we're forcing the hand of our engineers. In a world where we may be used to just sending this blind DNS request out to whatever, and we would be ignorant of where that was going, now in the cloud native world, we know there's a specific DNS implementation that we can count on. It has this feature set that we can build our software around. I think it's a little bit of both and I think that there is definitely an art to understanding, yes, this is a good idea to do both applications and infrastructure. I think that's where you get into what it means to be a cloud native engineer. Just as in the traditional legacy infrastructure stack, there's going to be good engineering choices you can make and there's going to be bad ones, and there are many different schools of thought over, do I go minimalist? Do I go all in at once? What does that mean? I think we're seeing a lot of folks try a lot of different patterns here. I think there's pros and cons though. [0:22:03.9] CC: Do you want to talk about these pros and cons? Do you see patterns that are more successful for some kinds of companies versus others? [0:22:11.1] KN: I mean, I think going back to the greenfield thing that we were talking about earlier, I think if you are lucky enough to build out a greenfield application, you're able to bake in greenfield infrastructure management as well. That's where you get these really interesting hybrid applications, just like Kubernetes, that span the course of infrastructure and application. 
If we were to go into Kubernetes and say, “I wanted to define a service of type load balancer,” it’s actually going to go and create a load balancer for you and actually mutate that underlying infrastructure. The only way we were able to get that power and get that paradigm is because on day one, we said we're going to do that as software engineers; taking the infrastructure where you were hidden behind the firewall, or hidden behind the load balancer in the past. The software would have no way to reason about it. They’re blind and greenfield really is going to make or break your ability to even you take the infrastructure layers. [0:23:04.3] NL: I think that's a good distinction to make, because something that I've been seeing in the field a lot is that the users will do cloud native practices, but they’ll use a tool to do the cloud native for them, right? They'll use something along the lines of HashiCorp’s Terraform to create the VMs and the load balancers for them. It's something I think that people forget about is that the application themselves can ask for these resources as well. Terraform is just using an API and your code can use an API to the same API, in fact. I think that's an important distinction. It forces the developer to think a little bit like a sysadmin sometimes. I think that's a good melding of the dev and operations into this new word. Regrettably, that word doesn't exist right now. [0:23:51.2] KN: That word can be cloud native. [0:23:53.3] DC: Cloud here to me breaks down into a different set of topics as well. I remember seeing a talk by Brandon Phillips a few years ago. In his talk, he was describing – he had some numbers up on the screen and he was talking about the fact that we were going to quickly become overwhelmed by the desire to continue to develop and put out more applications for our users. His point was that every day, there's another 10,000 new users of the Internet, new consumers that are showing up on the Internet, right? Globally, I think it's something to the tune of about 350,000 of the people in this room, right? People who understand infrastructure, people who understand how to interact with applications, or to build them, those sorts of things. There really aren't a lot of people who are in that space today, right? We're surrounded by them all the time, but they really just globally aren't that many. His point is that if we don't radically change the way that we think about the development as the deployment and the management of all of these applications that we're looking at today, we're going to quickly be overrun, right? There aren't going to be enough people on the planet to solve that problem without thinking about the problem in a fundamentally different way. For me, that's where the cloud native piece comes in. With that, comes a set of primitives, right? You need some way to automate, or to write software that will manage other software. You need the ability to manage the lifecycle of that software in a resilient way that can be managed. There are lots of platforms out there that thought about this problem, right? There are things like Mesos, there are things like Kubernetes. There's a number of different shots on goal here. There are lots of things that I've really tried to think about that problem in a fundamentally different way. 
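To make that idea of driving infrastructure through the API concrete, here is a minimal sketch in Go using client-go. It shows an application asking the Kubernetes API for a Service of type LoadBalancer, the same API that a tool like kubectl or Terraform would ultimately call, instead of a human provisioning the load balancer by hand. The names here (the "checkout" app, the namespace, the ports) are invented for illustration and are not taken from anything discussed in the episode.

// Minimal sketch: an application asking the Kubernetes API for a
// Service of type LoadBalancer, rather than a person doing it with
// kubectl or Terraform. Names ("checkout", port numbers) are invented.
package main

import (
	"context"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Running inside the cluster, the pod's service account provides credentials.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "checkout", Namespace: "default"},
		Spec: corev1.ServiceSpec{
			Type:     corev1.ServiceTypeLoadBalancer, // the cloud provider provisions a real load balancer
			Selector: map[string]string{"app": "checkout"},
			Ports: []corev1.ServicePort{
				{Port: 80, TargetPort: intstr.FromInt(8080)},
			},
		},
	}

	// Declare the desired state; Kubernetes and the cloud mutate the infrastructure to match.
	created, err := client.CoreV1().Services("default").Create(context.TODO(), svc, metav1.CreateOptions{})
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("requested a load balancer for service %s", created.Name)
}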
I think of those primitives that being able to actually manage the lifecycle of software, being able to think about packaging that software in such a way that it can be truly portable, the idea that you have some API abstraction that brings again, that portability, such that you can make use of resources that may not be hosted on your infrastructure on your own personal infrastructure, but also in the cloud, like how do we actually make that API contract so complete that you can just take that application anywhere? These are all part of that cloud native definition in my opinion. [0:26:08.2] KN: This is so fascinating, because the human race totally already learned this lesson with the Linux kernel in the 90s, right? We had all these hardware manufacturers coming out and building all these different hardware components with different interfaces. Somebody said, “Hey, you know what? There's a lot of noise going on here. We should standardize these and build a contract.” That contract then implemented control loops, just like in Kubernetes and then Mesos. Poof, we have the Linux kernel now. We're just distributed Linux kernel version 2.0. The human race is here repeating itself all over again. [0:26:41.7] NL: Yeah. It seems like the blast radius of Linux kernel 2.0 is significantly higher than the Linux kernel itself. That made it sound like I was like, pooh-poohing what you're saying. It’s more like, we're learning the same lesson, but at a grander scale now. [0:27:00.5] KN: Yeah. I think that's a really elegant way of putting it. [0:27:03.6] DC: You do raise a good point. If you are embracing on a cloud native infrastructure, remember that little changes are big changes, right? Because you're thinking about managing the lifecycle of a thousand applications now, right? If you're going full-on cloud native, you're thinking about operating at scale, it's a byproduct of that. Little changes that you might be able to make to your laptop are now big changes that are going to affect a fleet of thousand machines, right? [0:27:30.0] KN: We see this in Kubernetes all the time, where a new version of Kubernetes comes out and something totally unexpected happens when it is ran at scale. Maybe it worked on 10 nodes, but when we need to fire up a thousand nodes, what happens then? [0:27:42.0] NL: Yeah, absolutely. That actually brings up something that to me, defines cloud native as well. A lot of my definition of cloud native follows in suit with Kris Nova's book, or Kris Nova, because your book was what introduced me to the phrase cloud native. It makes sense that your opinion informs my opinion, but something that I think that we were just starting to talk about a little bit is also the concept of stability. Cloud native applications and infrastructure means coding with instability in mind. It's not being guaranteed that your VM will live forever, because it's on somebody else's hardware, right? Their hardware could go down, and so what do you do? It has to move over really quickly, has to figure out, have the guarantees of its API and its endpoints are all going to be the same no matter what. All of these things have to exist for the code, or for your application to live in the cloud. That's something that I find to be very fascinating and that's something that really excites me, is not trying to make a barge, but rather trying to make a schooner when you're making an app. Something that can, instead of taking over the waves, can be buffeted by the waves and still continue. [0:28:55.6] KN: Yeah. 
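As a rough illustration of that schooner idea, a process built to be killed and moved rather than pampered, here is a small Go sketch of a service that exposes a health endpoint the platform can probe and that shuts down cleanly when it receives SIGTERM. The port, path and timeout are arbitrary choices for the example, not anything Kubernetes prescribes.

// Minimal sketch of a process designed to be moved around: it exposes a
// health endpoint the platform can probe and exits cleanly on SIGTERM,
// so losing a node is a non-event. Paths and timeouts are illustrative.
package main

import (
	"context"
	"log"
	"net/http"
	"os"
	"os/signal"
	"syscall"
	"time"
)

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK) // "I'm alive"; wire real checks in here
	})
	srv := &http.Server{Addr: ":8080", Handler: mux}

	go func() {
		if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
			log.Fatal(err)
		}
	}()

	// When the scheduler evicts or reschedules the pod, it sends SIGTERM first.
	stop := make(chan os.Signal, 1)
	signal.Notify(stop, syscall.SIGTERM, os.Interrupt)
	<-stop

	// Finish in-flight requests, then go quietly; a replacement is already starting elsewhere.
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	_ = srv.Shutdown(ctx)
	log.Println("shut down cleanly; the cluster will run another copy")
}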
It's a little more reactive. I think we see this in Kubernetes a lot. When I interviewed Joe a couple years ago – Joe Beda, for the book, to get a quote from him – he said this magic phrase that has stuck with me over the past few years, which is "goal-seeking behavior." If you look at a Kubernetes object, they all use this concept in Go called embedding. Every Kubernetes object has a status and a spec. All it is is what's actually going on, versus what did I tell it, what do I want to be going on. Then all we're doing is, just like you said with your analogy, we're just trying to be reactive to that and build to that. [0:29:31.1] JR: That's something I wonder if people don't think about a lot. They think about the spec, but not the status part. I think the status part is as important, or more important maybe, than the spec. [0:29:41.3] KN: It totally is. Because I mean, a status – like, if you have one potentiality for status, your control loop is going to be relatively trivial. As you start understanding more of the problems that you could see and your code starts to mature and harden, those statuses get more complex and you get more edge cases and your code matures and your code hardens. Then we can take that and apply it globally in these larger cloud native patterns. It's really cool. [0:30:06.6] NL: Yeah. Carlisia, you're a developer who's now just getting into the cloud native ecosystem. What are your thoughts on developing with cloud native practices in mind? [0:30:17.7] CC: I'm not sure I can answer that. When I started developing for Kubernetes, I was like, "What is a pod?" What comes first? How does this all fit together? I joined the project [inaudible 00:30:24]. I don't have to think about that. It's basically moving the project along. I don't have to think what I have to do differently from the way I did things before. [0:30:45.1] DC: One thing that I think you probably ran into in working with the application is the management of state and how that relates to – where you actually end up coupling that state. Before, in development, you might just assume that there is a database somewhere that you would have to interact with. That database is a way of actually pushing that state off of the code that you're actually going to work with. In this way, you might think of being able to write multiple consumers of state, or multiple things that are going to mutate state and all share that same database. This is one of the patterns that comes up all the time when we start talking about cloud native architectures, because we have to really be very careful about how we manage that state, mainly because one of the other big benefits of it is the ability to horizontally scale things that are going to mutate, or consume, state. [0:31:37.5] CC: My brain is in its infancy as it relates to Kubernetes. All that I see is APIs all the way down. It's just APIs all the way down. It's not very different for me as a developer – it's not very much more complex than developing against the database that sits behind. Ask me again a year from now and I will have a more interesting answer. [0:32:08.7] KN: This is so fascinating, right? I remember a couple years ago when Kubernetes was first coming out and listening to some of the original "Elders of Kubernetes," and even some of the stuff that we were working on at the time. One of the things that they said was, we hope one day somebody doesn't have to care about what's past these APIs and gets to look at Kubernetes as APIs only. 
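A toy version of that goal-seeking, spec-versus-status idea might look like the following Go sketch. These are not the real Kubernetes types, just the shape of the pattern: an embedded metadata struct, a Spec that records intent, a Status that records what is observed, and a loop that nudges one toward the other.

// Toy illustration of "goal-seeking behavior": an object with a Spec
// (what I asked for) and a Status (what is actually true), plus a loop
// that keeps nudging reality toward the spec. Not the real Kubernetes
// types, just the shape of the idea.
package main

import (
	"fmt"
	"time"
)

type ObjectMeta struct {
	Name string
}

type ReplicaSpec struct {
	Replicas int // desired
}

type ReplicaStatus struct {
	ReadyReplicas int // observed
}

type ReplicaSet struct {
	ObjectMeta // embedded, like TypeMeta/ObjectMeta in real API objects
	Spec       ReplicaSpec
	Status     ReplicaStatus
}

// reconcile compares desired and observed state and acts on the difference.
func reconcile(rs *ReplicaSet) {
	switch {
	case rs.Status.ReadyReplicas < rs.Spec.Replicas:
		rs.Status.ReadyReplicas++ // stand-in for "start another pod"
	case rs.Status.ReadyReplicas > rs.Spec.Replicas:
		rs.Status.ReadyReplicas-- // stand-in for "stop one"
	}
}

func main() {
	rs := &ReplicaSet{ObjectMeta: ObjectMeta{Name: "web"}, Spec: ReplicaSpec{Replicas: 3}}
	for i := 0; i < 5; i++ {
		reconcile(rs)
		fmt.Printf("want %d, have %d\n", rs.Spec.Replicas, rs.Status.ReadyReplicas)
		time.Sleep(10 * time.Millisecond)
	}
}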
Then to hear that come from you authentically, it's like, "Hey, that's our success statement there. We nailed it." It's really cool. [0:32:37.9] CC: Yeah. I don't understand their patterns and I probably should be more cognizant about what these patterns are, even if it's just to articulate them. To me, my day-to-day challenge is understanding the API, understanding what library call do I make to make this happen and how – which is just programming 101, almost. Not different from any other regular project. [0:33:10.1] JR: Yeah. That is something that's nice about programming with Kubernetes in mind, because a lot of times you can use the source code as documentation. I hate to say that, particularly as a non-developer. I'm a sysadmin first getting into development, and documentation is key in my mind. There's been more than a few times where I'm like, "How do I do this?" You can look in the source code for pretty much any application that you're using that's in Kubernetes, or around the Kubernetes ecosystem. The API for that application is there and it'll tell you what you need to do, right? It's like, "Oh, this is how you format your config file. Got it." [0:33:47.7] CC: At the same time, I don't want to minimize that knowing what the patterns are is very useful. I haven't had to do any design for Velero for our projects. Maybe if I had, I would have been forced to look into that. I'm still getting to know the codebase and developing features, but no major design that I had to lead, at least. I think with time, I will recognize those patterns and it will make it easier for me to understand what is happening. What I was saying is that not understanding the patterns that are behind the design of those APIs doesn't preclude me at all from coding against it – against them. [0:34:30.0] KN: I feel this is the heart of cloud native. I think we totally nailed it. The heart of cloud native is in the APIs and your ability to interact with the APIs. That's what makes it programmable and that's what makes – gives you the interface for you and your software to interact with that. [0:34:45.1] DC: Yeah, I agree with that. API first. On the topic of cloud native, what about the Cloud Native Computing Foundation? What are our thoughts on the CNCF and what is the CNCF? Josh, you have any thoughts on that? [0:35:00.5] JR: Yeah. I haven't really been as close to the CNCF as I probably should be, to be honest with you. One of the great things that the CNCF has put together are programs around getting projects into this – I don't know if you would call it a vendor-neutral type program. Maybe somebody can correct me on that. Effectively, there's a lot of different categories, like networking and storage and runtimes for containers and things of that nature. There's a really cool landscape that can show off a lot of these different technologies. A lot of the categories, I'm guessing we'll be talking about on this podcast too, right? Things like, what does it mean to do cloud native networking and so on and so forth? That's my purview of the CNCF. Of course, they put on KubeCon, which is the most important thing to me. I'm sure someone else on this call can talk deeper, at an organizational level, about what they do. [0:35:50.5] KN: I'm happy to jump in here. I've been working with them for I think three years now. I think first, it's important to know that they are a subsidiary of the Linux Foundation. 
The Linux Foundation is the original open source nonprofit here, and then the CNCF is one of many – like Apache is another one – that is underneath the broader Linux Foundation umbrella. I think the whole point of the CNCF is to be this neutral party that can help us as we start to grow and mature the ecosystem. Obviously, money is going to be involved here. Obviously, companies are going to be looking out for their best interest. It makes sense to have somebody managing the software that is outside, or external to, these revenue-driven companies. That's where I think the CNCF comes into play. I think that's what its main responsibility is. What happens when somebody from company A and somebody from company B disagree with the direction that the software should go? The CNCF can come in and say, "Hey, you know what? Let's find a happy medium in here and let's find a solution that works for both folks and let's try to do this the best we can." I think a lot of this came from lessons we learned the hard way with Linux. In a weird way, we did – we are in version 2.0, but we were able to take advantage of some of the prior art here. [0:37:05.4] NL: Do you have any examples of a time when the CNCF jumped in and mediated between two companies? [0:37:11.6] KN: Yeah. I think the steering committee, the Kubernetes steering committee, is a great example of this. It's a relatively new thing. It hasn't been around for a very long time. You look at the history of Kubernetes and we used to have this incubation process that has since been retired. We've tried a lot of solutions and the CNCF has been pretty instrumental in guiding the shape of how we're going to manage and solve governance for such a monolithic project. As Kubernetes grows, the problem space grows and more people get involved. We're having to come up with new ways of managing that. I think that's not necessarily a concrete example of two specific companies, but I think that's more of, as people get involved, the things that used to work for us in the past are no longer working. The CNCF is able to recognize that and guide us out of that. [0:37:57.2] DC: Cool. That's such a good perspective on the CNCF that I didn't have before. Because like Josh, my perspective with the CNCF was, well, they put on that really cool party three times a year. [0:38:07.8] KN: I mean, they definitely are great at throwing parties. [0:38:12.6] NL: They are that. [0:38:14.1] CC: My perspective of the CNCF is from participating in the Kubernetes meetup here in San Diego. I'm trying to revive our meetup, which is really hard to do, but different topic. I know that they try to make it easier for people to find meetups, because on meetup.com, they have an organization. I don't know what the proper name is, but if you go there and you put in your zip code, you'll find any meetup that's associated with them. My meetup here in San Diego is associated, so it can be easily found. They try to give a little bit of money for swag. We also give out ads for the meetup. They offer help for finding speakers and they also have a speaker catalog on their website. They try to help in those ways, which I think is very helpful, very valuable. [0:39:14.9] DC: Yeah, I agree. I know about the CNCF mostly just from interacting with folks who are working on its behalf – meeting a bunch of the people who are working on the Kubernetes project on behalf of the CNCF, folks like Ihor and people like that, who constantly amaze me with the amount of work that they do on behalf of the CNCF. 
I think it's been really good seeing what it means to provide governance over a project. I think that really highlights – that's really highlighted by the way that Kubernetes itself is managed. I think a lot of us on the call have probably worked with OpenStack and remember some of the crazy battles that went on between vendors around particular components in that stack. I've yet to actually really see that level of noise creep into the Kubernetes situation. I think that falls squarely on the CNCF around managing governance, and also on running the community – just making it accessible enough that people can plug into it, without actually having to get into a battle about taking ownership of CNI, for example. Nobody should own CNI. That should be its own project under its own governance. How you satisfy the needs for something like container networking should be a project that you develop as a company, and you can make the very best one that you can make and attract as many customers to that as you want. Fundamentally, the way that you interface to that major project should be something that is abstracted in such a way that it isn't owned by any one company. There should be a contract and an API, that sort of thing. [0:40:58.1] KN: Yeah. I think the best analogy I ever heard was like, "We're just building USB plugs." [0:41:02.8] DC: That's actually really great. [0:41:05.7] JR: To that point, Duffie, I think what's interesting is more and more companies are looking to the CNCF to determine what they're going to place their bets on from a technology perspective, right? Because they've been so burned historically by some project owned by one vendor, and they don't really know where it's going to end up and so on and so forth. It's really become a very serious thing when people consider the technologies they're going to bet their business on. [0:41:32.0] DC: Yeah. When a project is absorbed into the CNCF, or donated to the CNCF, I guess – there are a number of projects that this has happened to. Obviously, if you see that eye chart that is the CNCF landscape, there's just tons of things happening inside of there. It's a really interesting process, but I think that for my part, I remember recently seeing Sysdig Falco show up in that list, and seeing them donate – seeing Sysdig donate Falco to the CNCF was probably one of the first times that I've actually really tried to see what happens when that happens. I think that some of the neat stuff here that happens is that now this is an open source project. It's under the governance of the CNCF. It feels to me like a more approachable project, right? I don't feel I have to deal with Sysdig directly to interact with Falco, or to contribute to it. It opens that ecosystem up around this idea, or the genesis of the idea that they built around Falco, which I think is really powerful. What do you all think of that? [0:42:29.8] KN: I think, to look at it from a different perspective, that's one example of when the CNCF helps a project liberate itself. There's plenty of other examples out there where the CNCF is an opt-in feature that is only there if we need it. I think cluster API – which I'm sure we're going to talk about in a later episode; I mean, just a quick overview is a lot of different vendors implementing the same API and making that composable and modular. I mean, nowhere along the way in the history of that project has the CNCF had to come and step in. We've been able to operate independently of that. 
I think because the CNCF is even there, we all are under this working agreement of, we're going to take everybody's concerns into consideration and we're going to take everybody's use case into consideration, and work together as an ecosystem. I think it's just even having that in place – whether you use it or not is a different story. [0:43:23.4] CC: Do you all know any projects under the CNCF? [0:43:26.1] KN: I have one. [0:43:27.7] JR: Well, I've heard of this one. It's called Kubernetes. [0:43:30.1] CC: Is it called Kubernetes or Kubernetes? [0:43:32.8] JR: It's called Kubernetes. [0:43:36.2] CC: Wow. That's not what Duffie thinks. [0:43:38.3] DC: I don't say it that way. No, it's been pretty fascinating seeing just the breadth of projects that are under there. In fact, I was just recently noticing that OpenEBS is up for joining the CNCF. There seems to be – it's fascinating that the things that are being generated through the CNCF and going through that life cycle as a project sometimes overlap with one another, and it's very – it seems it's a delicate balance that the CNCF would have to play to keep from playing favorites. Because part of the charter of the CNCF is to promote the projects, right? I'm always curious to see and I'm fascinated to see how this plays out as we see projects that are normally competitive with one another under the auspices of the same organization, like the CNCF. How do they play this in such a way that they remain neutral? Even – it seems like it would take a lot of intention. [0:44:39.9] KN: Yeah. Well, there's a difference between just being a CNCF project and being an official project, or a graduated project. There's different tiers. For instance, Kubicorn, a tool that I wrote – we just adopted the CNCF, like I think a code of conduct and there was another file I had to include in the repo, and poof, we're magically CNCF now. It's easy to get onboard. Once you're onboard, there's legal implications that come with that. There totally is this tiered ladder structure that I'm not even super familiar with. That's how officially CNCF you can be as your product grows and matures. [0:45:14.7] NL: What are some of the code of conduct things that you have to do to be part of the CNCF? [0:45:20.8] KN: There's a repo on it. I can maybe find it and add it to the notes after this, but there's this whole tutorial that you can go through and it tells you everything you need to add and what the expectations are and what the implications are for everything. [0:45:33.5] NL: Awesome. [0:45:34.1] CC: Well, Velero is a CNCF project. We follow the – what is it? The covenant? [0:45:41.2] KN: Yeah, I think that's what it is. [0:45:43.0] CC: Yes. Which is the same one that Kubernetes follows. I am not sure if there are others that can be adopted, but this is definitely one. [0:45:53.9] NL: Yeah. According to Aaron Crickenberger, who was the release lead for Kubernetes 1.14, the CNCF code of conduct can be summarized as "don't be a jerk." [0:46:06.6] KN: Yeah. I mean, there's more to it than that, but – [0:46:10.7] NL: That was him. [0:46:12.0] KN: Yeah. This is something that I remember seeing in open source my entire career: open source comes with this implication of, you need to be well-rounded and polite and listen and be able to take others' thoughts and concerns into consideration. I think we are just getting used to working like that as an engineering industry. [0:46:32.6] NL: Agreed. Yeah. Which is a great point. It's something that I hadn't really thought of. 
The idea of development back in the day, it seems like before, there was such a thing as the CNCF are cloud native. It seemed that things were combative, or people were just trying to push their agenda as much as possible. Bully their way through. That doesn't seem that happens as much anymore. Do you guys have any thoughts on that? [0:46:58.9] DC: I think what you're highlighting is more the open source piece than the cloud native piece, which I – because I think that when you're working – open source, I think has been described a few times as a force multiplier for software development and software adoption. I think of these things are very true. If you look at a lot of the big successful closed source projects, they have – the way that people in this room and maybe people listening to this podcast might perceive them, it's definitely just fundamentally differently than some open source project. Mainly, because it feels it's more of a community-driven thing and it also feels you're not in a place where you're beholden to a set of developers that you don't know that are not interested in your best, and in what's best for you, or your organization to achieve whatever they set out to do. With open source, you can be a part of the voice of that project, right? You can jump in and say, “You know, it would really be great if this thing has this feature, or I really like how you would do this thing.” It really feels a lot more interactive and inclusive. [0:48:03.6] KN: I think that that is a natural segue to this idea of we build everything behind the scenes and then hey, it's this new open source project, that everything is done. I don't really think that's open source. We see some of these open source projects out there. If you go look at the git commit history, it's all everybody from the same company, or the same organization. To me, that's saying that while granted the source code might be technically open source, the actual act of engineering and architecting the software is not done as a group with multiple buyers into it. [0:48:37.5] NL: Yeah, that's a great point. [0:48:39.5] DC: Yeah. One of the things I really appreciate about Heptio actually is that all of the projects that we developed there were – that the developer chat for that was all kept in some neutral space, like the Kubernetes Slack, which I thought was really powerful. Because it means that not only is it open source and you can contribute code to a project, but if you want to talk to people who are also being paid to develop that project, you can just go to the channel and talk to them, right? It's more than open source. It's open community. I thought that was really great. [0:49:08.1] KN: Yeah. That's a really great way of putting it. [0:49:10.1] CC: With that said though, I hate to be a party pooper, but I think we need to say goodbye. [0:49:16.9] KN: Yeah. I think we should wrap it up. [0:49:18.5] JR: Yeah. [0:49:19.0] CC: I would like to re-emphasize that you can go to the issues list and add requests for what you want us to talk about. [0:49:29.1] DC: We should also probably link our HackMD from there, so that if you want to comment on something that we talked about during this episode, feel free to leave comments in it and we'll try to revisit those comments maybe in our next episode. [0:49:38.9] CC: Exactly. That's a good point. We will drop a link the HackMD page on the corresponding issue. There is going to be an issue for each episode, so just look for that. [0:49:51.8] KN: Awesome. 
Well, thanks for joining everyone. [0:49:54.1] NL: All right. Thank you. [0:49:54.6] CC: Thank you. I'm really glad to be here. [0:49:56.7] DC: Hope you enjoyed the episode and I look forward to a bunch more. [END OF EPISODE] [0:50:00.3] ANNOUNCER: Thank you for listening to The Podlets Cloud Native Podcast. Find us on Twitter at https://twitter.com/ThePodlets and on the web at https://thepodlets.io, where you'll find transcripts and show notes. We'll be back next week. Stay tuned by subscribing. [END] See omnystudio.com/listener for privacy information.

Go Time
Kubernetes and Cloud Native

Go Time

Play Episode Listen Later Nov 1, 2019 59:46 Transcription Available


Johnny and Mat are joined by Kris Nova and Joe Beda to talk about Kubernetes and Cloud Native. They discuss the rise of “Cloud Native” applications as facilitated by Kubernetes, good places to use Kubernetes, the challenges faced running such a big open source project, Kubernetes’ extensibility, and how Kubernetes fits into the larger Cloud Native world.

Changelog Master Feed
Kubernetes and Cloud Native (Go Time #105)

Changelog Master Feed

Play Episode Listen Later Nov 1, 2019 59:46 Transcription Available


Johnny and Mat are joined by Kris Nova and Joe Beda to talk about Kubernetes and Cloud Native. They discuss the rise of “Cloud Native” applications as facilitated by Kubernetes, good places to use Kubernetes, the challenges faced running such a big open source project, Kubernetes’ extensibility, and how Kubernetes fits into the larger Cloud Native world.

Cloud Native in 15 Minutes
Kubernetes (with Joe Beda)

Cloud Native in 15 Minutes

Play Episode Listen Later Jul 9, 2019 17:25


Learn more: Kubernetes Pivotal Container Service (PKS) VMware Enterprise PKS VMware Open Source (formerly Heptio) The CIO's guide to Kubernetes Follow everyone on Twitter: Intersect (@IntersectIT) Pivotal (@pivotal) Joe Beda (@jbeda) Derrick Harris (@derrickharris) Kubernetes (@kubernetesio) VMware (@VMware)

Pivotal Insights
Kubernetes (with Joe Beda)

Pivotal Insights

Play Episode Listen Later Jul 9, 2019 17:24


Learn more: Kubernetes Pivotal Container Service (PKS) VMware Enterprise PKS VMware Open Source (formerly Heptio) The CIO's guide to Kubernetes Follow everyone on Twitter: Intersect (@IntersectIT) Pivotal (@pivotal) Joe Beda (@jbeda) Derrick Harris (@derrickharris) Kubernetes (@kubernetesio) VMware (@VMware)

Cloud & Culture
Kubernetes (with Joe Beda)

Cloud & Culture

Play Episode Listen Later Jul 9, 2019 17:24


Learn more: Kubernetes Pivotal Container Service (PKS) VMware Enterprise PKS VMware Open Source (formerly Heptio) The CIO's guide to Kubernetes Follow everyone on Twitter: Intersect (@IntersectIT) Pivotal (@pivotal) Joe Beda (@jbeda) Derrick Harris (@derrickharris) Kubernetes (@kubernetesio) VMware (@VMware)

Cloud Engineering – Software Engineering Daily
Kubernetes Vision with Joe Beda

Cloud Engineering – Software Engineering Daily

Play Episode Listen Later Jun 11, 2019 72:18


Google Cloud was started with a vision of providing Google infrastructure to the masses. In 2008, it was not obvious that Google should become a cloud provider. Amazon Web Services was finding success among startups who needed on-demand infrastructure, but the traditional enterprise market was not yet ready to buy cloud resources. Googlers liked the The post Kubernetes Vision with Joe Beda appeared first on Software Engineering Daily.

The InfoQ Podcast
Joe Beda on Kubernetes & the CNCF

The InfoQ Podcast

Play Episode Listen Later Feb 12, 2019 30:12


Today on The InfoQ Podcast, Wes talks with Joe Beda. Joe is one of the co-creators of Kubernetes. What started in the fall of 2013 with Craig McLuckie, Joe Beda, and Brendan Burns working on cloud infrastructure has become the default orchestrator for cloud native architectures. Today on the show, the two discuss the recent purchase of Heptio by VMWare, the Kubernetes Privilege Escalation Flaw (and the response to it), Kubernetes Enhancement Proposals, the CNCF/organization of Kubernetes, and some of the future hopes for the platform. Why listen to this podcast: - Heptio, the company Joe and Craig McLuckie co-founded, viewed themselves as not a Kubernetes company, but more of a cloud native company. Joining VMWare allowed the company to continue a mission of helping people decouple “moving to cloud/taking advantage of cloud” patterns (regardless of where you’re running). - Re:Invent 2017 when EKS was announced was a watershed moment for Kubernetes. It marked a time where enough customers were asking for Kubernetes that the major cloud providers started to offer first-class support. - Kubernetes 1.13 included a patch for the Kubernetes Privilege Escalation Flaw Patch. While the flaw was a bad thing, it demonstrated product maturity in the way the community-based security response. - Kubernetes has an idea of committees, sigs, and working groups. Security is one of the committees. There were a small group of people who coordinated the security response. From there, trusted sets of vendors validated and test patches. Most of the response is based on how many other open source projects handle security response. - Over the last couple of releases, Kubernetes has introduced a Sig Architecture special interest group. It’s an overarching review for changes that sweep across Kubernetes. As part of Sig Architecture, the Kubernetes community has introduced Kubernetes Enhancement Proposal process (or KEPs). It’s a way for people to propose architectural changes to Kubernetes. - The goal of the CNCF is to curate and provide support to a set of projects (of which Kubernetes is one). The TOC (Technical Oversight Committee) decides which projects are going to be part of the CNCF and how those projects are supported. - Kubernetes was always viewed by the creators as something to be build on. It was never really viewed as the end goal. You can also subscribe to the InfoQ newsletter to receive weekly updates on the hottest topics from professional software development. bit.ly/24x3IVq Subscribe: www.youtube.com/infoq Like InfoQ on Facebook: bit.ly/2jmlyG8 Follow on Twitter: twitter.com/InfoQ Follow on LinkedIn: www.linkedin.com/company/infoq

THE ARCHITECHT SHOW
Ep. 70: Joe Beda on creating Kubernetes and Compute Engine, and where it's all headed

THE ARCHITECHT SHOW

Play Episode Listen Later Sep 20, 2018 53:26


In this episode of the ARCHITECHT Show, Heptio co-founder and CTO Joe Beda discusses a litany of cloud-native topics, including his roles helping to create Kubernetes and Compute Engine while at Google. Aside from going in depth on those two projects, Beda also shares his thoughts on where the container space is headed; the role of serverless computing; the still-tricky art of doing open source right; and straddling the lines between developers, operators and executives when it comes to building, marketing and selling enterprise software.

Kubernetes Podcast from Google
Supporting Kubernetes, with Ken Massada

Kubernetes Podcast from Google

Play Episode Listen Later Aug 28, 2018 25:19


What does it take to support Kubernetes for other users? Kenneth Massada, a lead for GKE support at Google Cloud, tells Craig and Adam his story. Do you have something cool to share? Some questions? Let us know: web: kubernetespodcast.com mail: kubernetespodcast@google.com twitter: @kubernetespod Chatter Adam lives in Seattle, which is on fire Craig baked some tasty cookies Using this recipe But not using Vegemite, British Marmite or New Zealand Marmite, which are three totally separate things. Only one of which is nice. Hint: it’s the last one News of the week 2018 Kubernetes Steering Committee Elections Binary Authorization on Google Kubernetes Engine kube-hunter from Aqua Security Video Blog Kubernetes issues and solutions from Alexander Lukyanchenko at Avito Cilium 1.2 released Accelerating Envoy with the Linux Kernel James Lee’s blogs on Kubernetes networking Amazon EKS supports GPU-Enabled EC2 instances Links from the interview etcd is hard: Configuration flags OpenAI suggestions on scaling Kubernetes to 2,500 nodes includes a separate events database Kubernetes docs on configuring and upgrading etcd Tina and Fred from Google SRE also discussed etcd on Episode 9 (Or use GKE, where we do it all for you) Other hard concepts: apiVersion: is hard spec: is hard Liveliness and readiness probes - don’t make them the same! Joe Beda thinks of YAML as machine code in Episode 12 What would Ken like to see changed in Kubernetes? Affinity and anti-affinity rules and topology keys Kenneth Massada on Twitter Or summon him with a GCP support case!

Kubernetes Podcast from Google
Knative, with Oren Teich

Kubernetes Podcast from Google

Play Episode Listen Later Jul 31, 2018 22:43


One of the most interesting announcements from Google Cloud Next was Knative, a framework for building serverless products on top of Kubernetes. Craig and Adam talk to Google Director of Product Management, Oren Teich, about the launch. Do you have something cool to share? Some questions? Let us know: web: kubernetespodcast.com mail: kubernetespodcast@google.com twitter: @kubernetespod News of the week Google’s Cloud Services Platform: Recapping GKE On-Prem and Knative Cloud Services Platform session video with Chen Goldberg and Aparna Sinha Google Cloud Build GitHub integration Knative analysis: Joe Beda’s TGI Kubernetes on Knative Using the Knative build system by itself Visual descriptions: Kubernetes: the theme park analogy The Kubernetes Comic Kubernetes blog posts: KubeVirt: Extending Kubernetes with CRDs for Virtualized Workloads Feature highlight: CPU Manager Links from the interview Oren Teich on Twitter About Knative: Launch blog post Knative page at Google Cloud GitHub Slack Google Cloud Next videos: Serverless at Google Cloud, with Oren Teich High-level video intro to GKE Serverless add-on and Knative, with DeWitt Clinton and Ryan Gregg Request early access to the Serverless add-on for GKE Developer video intro to Knative, with Ville “Fifth Beatle” Aikas and Mark Chmarny Mark’s Knative samples IBM “Zed Series”

Kubernetes Podcast from Google
Kubernetes Origins, with Joe Beda

Kubernetes Podcast from Google

Play Episode Listen Later Jul 17, 2018 44:48


Joe Beda, Craig McLuckie and Brendan Burns are considered the “co-founders” of Kubernetes; working with the cluster management teams at Google, they made the case that their implementation of the Borg and Omega patterns should become a proper product. Joe and Craig now run Heptio, a company working to bring Kubernetes to the enterprise. Your hosts talk to Joe Beda about the history of Kubernetes, creating a diverse company, and what exactly is wrong with YAML. Do you have something cool to share? Some questions? Let us know: web: kubernetespodcast.com mail: kubernetespodcast@google.com twitter: @kubernetespod News of the week Minimal Ubuntu Sysdig security blog series Why Red Hat think Kubernetes is the new application server Deep dive blog posts for Kubernetes 1.11: IPVS-Based in cluster load balancing CoreDNS for Kubernetes Cluster DNS Resizing Persistent Volumes Dynamic Kubelet configuration Interview transcript blog post for Episode 10 with Josh Berkus and Tim Pepper Elastifile announce Kubernetes and Tensorflow integration Heptio Ark v0.9.0 Links from the interview Joe Beda on Twitter Heptio Heptio Blog 4 years of Kubernetes blog post Heptio open source projects: ksonnet Heptio Ark Heptio Sonobuoy Heptio Contour Heptio Gimbal What’s wrong with YAML? YAML as machine language Metaparticle kustomize TGI Kubernetes video series

All Things Devops Podcast
Ep. 7: Kubernetes and Microservices with Joe Beda

All Things Devops Podcast

Play Episode Listen Later Mar 7, 2018 45:09


In this Episode, Joe and Rahul discuss Kubernetes and Joe discusses the history of Kubernetes and tools he is building for Kuberntes ecosystem issues.

PodCTL - Kubernetes and Cloud-Native
Kubernetes Roles & Personas

PodCTL - Kubernetes and Cloud-Native

Play Episode Listen Later Mar 4, 2018 29:05


Show: 28Show Overview: Brian and Tyler talk about Joe Beda's "More Usable Kubernetes" presentation at KubeCon focused on Roles and Personas of Kubernetes environments. They look at how Cluster and Applications are separated, and how Operators and Developers distribute roles, as well as the intersection of those four areas. Show Notes:Kubernetes Personas (via Joe Beda, KubeCon 2017 Austin)Spring Cloud for Microservices Compared to Kubernetes (by Bilgin Ibryam)Topics - On today's show, we looked at the four quadrants outlined by Joe Beda in his talk "More Usable Kubernetes" at KubeCon 2017 Austin. He looked at each role and how well the Kubernetes community has addressed that functional area in both tooling and clear definition of the tasks required. We explored where areas are doing well (green) and where there are still areas that need improvement (yellow or red). Feedback?Email: PodCTL at gmail dot comTwitter: @PodCTL Web: http://podctl.com

The New Stack Analysts
#158: Exploring Kubernetes Abstractions

The New Stack Analysts

Play Episode Listen Later Feb 15, 2018 42:47


On today's episode of The New Stack Analysts, TNS Founder Alex Williams, TNS Correspondent TC Curie, and  Janakiram MSV, Principal Analyst at Janakiram & Associates were joined by Heptio Co-Founder and CTO and Kubernetes co-founder Joe Beda, alongside Sebastien Goasguen, Kubernetes Tech Lead at Bitnami. The discussion this week centered around the many abstractions available to developers working with Kubernetes, and how these impact developer teams both large and small. “What I'm seeing is there is this full effort to bring in another abstraction layer on top of Kubernetes to encourage users, beginners, and even large enterprise IT teams to target Kubernetes without understanding the nuts and bolts of Kubernetes certificates," said MSV.

Cloud Engineering – Software Engineering Daily
Kubernetes Usability with Joe Beda

Cloud Engineering – Software Engineering Daily

Play Episode Listen Later Jan 8, 2018 68:24


Docker was released in 2013, and popularized the use of containers. A container is an abstraction for isolating a well-defined portion of an operating system. Developers quickly latched onto containers as a way to cut down on the cost of virtual machines–as well as isolate code and simplify deployments. Developers began deploying so many containers The post Kubernetes Usability with Joe Beda appeared first on Software Engineering Daily.

The Frontside Podcast
070: Kubernetes with Joe Beda

The Frontside Podcast

Play Episode Listen Later May 18, 2017 40:16


Kubernetes Joe Beda @jbeda | Heptio | eightypercent.net Show Notes: 00:51 - What is Kubernetes? Why does it exist? 07:32 - Kubernetes Cluster; Cluster Autoscaling 11:43 - Application Abstraction 14:44 - Services That Implement Kubernetes 16:08 - Starting Heptio 17:58 - Kubernetes vs Services Like Cloud Foundry and OpenShift 22:39 - Getting Started with Kubernetes 27:37 - Working on the Original Internet Explorer Team Resources: Google Compute Engine Google Container Engine Minikube Kubernetes: Up and Running: Dive into the Future of Infrastructure by Kelsey Hightower, Brendan Burns, and Joe Beda Joe Beda: Kubecon Berlin Keynote: Scaling Kubernetes: How do we grow the Kubernetes user base by 10x? Wordpress with Helm Sock Shop: A Microservices Demo Application Kelsey Hightower Keynote: Kubernetes Federation Joe Beda: Kubernetes 101 AWS Quick Start for Kubernetes by Heptio Open Source Bridge: Enter the coupon code PODCAST to get $50 off a ticket! The conference will be held June 20-23, 2017 at The Eliot Center in downtown Portland, Oregon. Transcript: CHARLES: Hello everybody and welcome to The Frontside Podcast, Episode 70. With me is Elrick Ryan. ELRICK: Hey, what's going on? CHARLES: We're going to get started with our guest here who many of you may have heard of before. You probably heard of the technology that he created or was a key part of creating, a self-described medium deal. [Laughter] JOE: Thanks for having me on. I really appreciate it. CHARLES: Joe, here at The Frontside most of what we do is UI-related, completely frontend but obviously, the frontend is built on backend technology and we need to be running things that serve our clients. Kubernetes is something that I think I started hearing about, I don't know maybe a year ago. All of a sudden, it just started popping up in my Twitter feed and I was like, "Hmm, that's a weird word," and then people started talking more and more about it and move from something that was behind me into something that was to the side and now it's edging into our peripheral vision more and more as I think more and more people adopt it, to build things on top of it. I'm really excited to have you here on the show to just talk about it. I guess we should start by saying what is the reason for its existence? What are the unique set of problems that you were encountering or noticed that everybody was encountering that caused you to want to create this? JOE: That's a really good set up, I think just for way of context, I spent about 10 years at Google. I learned how to do software on the server at Google. Before that, I was at Microsoft working on Internet Explorer and Windows Presentation Foundation, which maybe some of your listeners had to actually go ahead and use that. I learned how to write software for the server at Google so my experience in terms of what it takes to build and deploy software was really warped by that. It really doesn't much what pretty much anybody else in the industry does or at least did. As my career progressed, I ended up starting this project called Google Compute Engine which is Google's virtual machine as a service, analogous to say, EC2. Then as that became more and more of a priority for the company. There was this idea that we wanted internal Google developers to have a shared experience with external users. Internally, Google didn't do anything with virtual machines hardly. 
Everything was with containers and Google had built up some really sophisticated systems to be able to manage containers across very large clusters of computers. For Google developers, the interface to the world of production and how you actually launched off and monitor and maintain it was through this toolset, Borg and all these fellow travelers that come along with it inside of Google. Nobody really actually managed machines using traditional configuration management tools like Puppet or Chef or anything like that. It's a completely different experience. We built a compute engine, GCE and then I had a new boss because of executive shuffle and he spun up a VM and he'd been at Google for a while. His reaction to the thing was like, "Now, what?" I was like I'm sitting there at the root prompt go and like, "I don't know what to do now." It turns out that inside of Google that was actually a common thing. It just felt incredibly primitive to actually have a raw VM that you could have SSH into because there's so much to be done above that to get to something that you're comfortable with building a production grade service on top of. The choice as Google got more and more serious about cloud was to either have everybody inside of Google start using raw VMs and live the life that everybody outside of Google's living or try and bring the experience around Borg and this idea of very dynamic, container-centric, scheduled-cluster thinking bring that outside of Google. Borg was entangled enough with the rest of Google systems that sort of porting that directly and externalizing that directly wasn't super practical. Me and couple of other folks, Brendan Burns and Craig McLuckie pitched this crazy idea of starting a new open source project that borrowed from a lot of the ideas from Borg but really melded it with a lot of the needs for folks outside of Google because again, Google is a bit of a special case in so many ways. The core problem that we're solving here is how do you move the idea of deploying software from being something that's based on these physical concepts like virtual machines, where the amount of problems that you have to solve, to actually get that thing up and running is actually pretty great. How do we move that such that you have a higher, more logical set of abstractions that you're dealing with? Instead of worrying about what kernel you're running on, instead of worrying about individual nodes and what happens if a node goes down, you can instead just say, "Make sure this thing is running," and the system will just do its best to make sure that things are running and then you can also do interesting things like make sure 10 of these things are running, which is at Google scale that ends up being important. CHARLES: When you say like a thing, you're talking about like a database server or API server or --? JOE: Yeah, any process that you could want to be running. Exactly. The abstraction that you think about when you're deploying stuff into the cloud moves from a virtual machine to a process. When I say process, I mean like a process plus all the things that it needs so that ends up being a container or a Docker image or something along those lines. Now the way that Google does it internally slightly different than how it's done with Docker but you can squint at these things and you can see a lot of parallels there. When Docker first came out, it was really good. I think at Docker and containers people look for three things out of it. 
The first one is that they want a packaged artifact: something that I can create, run on my laptop, run in a data center, and it's mostly the same thing running in both places, and that's an incredibly useful thing. Like on your Mac, you have a .app and it's really a directory, but the Finder treats it as one thing: you can just drag it around and the thing runs. Containers are that for the server. You just have this thing where you can say, "Run this thing on the server," and you're pretty sure that it's going to run. That's a huge step forward and I think that's what most folks really see as the value with respect to Docker. Another thing that folks look at with container technology is a level of efficiency, of being able to pack a lot of stuff onto a little bit of hardware. That was the main driver for Google. Google has so many computers that if you improve utilization by 1%, that ends up being real money. Then the last thing is, I think a lot of folks look at this as a security boundary, and I think there are some real nuanced conversations to have around that. The goal is to take that logical infrastructure and make it such that, instead of talking about raw VMs, you're actually talking about containers and processes and how these things relate to each other. Yet you still have the flexibility of a toolbox that you get with an infrastructure level system, versus if you look at something like Heroku or App Engine or these other platforms as a service, those things are relatively fixed function in terms of the architectures that you can build. I think the container cluster stuff that you see with things like Kubernetes is a nice middle ground between raw VMs and a very, very opinionated platform as a service type of thing. It ends up being a building block for building these more specialized experiences. There's a lot to digest there so I apologize. CHARLES: Yeah, there's a lot to digest there but we can jump right into digesting it. You were talking about the different abstractions where you have your hardware, your virtual machine and the containers that are running on top of that virtual machine and then you mentioned -- I think I'm all the way up there -- but then you said Kubernetes cluster. What is the anatomy of a Kubernetes cluster and what does that entail? And what can you do with it? JOE: When folks talk about Kubernetes, I think there are two different audiences and it's important to talk about the experience from each audience. There's the audience from the point of view of what it takes to actually run a cluster -- this is the cluster operator audience -- then there's the audience in terms of what it takes to use a cluster. Assuming that somebody else is running a cluster for me, what does it look like for me to go ahead and use this thing? This is really different from a lot of dev ops tools, which really mash these things together. We've tried to create a clean split here. I'm going to skip past what it means to launch and run a Kubernetes cluster because it turns out that over time, this is going to be something that you can just have somebody else do for you. It's like running your own MySQL database versus using RDS in Amazon. At some point, you're going to be like, "You know what, that's a pain in the butt. I want to make that somebody else's problem." When it comes to using the cluster, pretty much what it comes down to is what you can tell a cluster to do. There's an API to a cluster, and that API is sort of a spiritual cousin to something like the EC2 API. 
You can talk to this API -- it's a RESTful API -- and you can say, "Make sure that you have 10 of these container images running," and then Kubernetes will make sure that ten of those things are running. If a node goes down, it'll start another one up and it will maintain that. That's the first piece of the puzzle. [A few illustrative code sketches of these ideas appear after the transcript.] That creates a very dynamic environment where you can actually program these things coming and going, scaling up and down. The next piece of the puzzle that really, really starts to be necessary then is that if you have things moving around, you need a way to find them. There are built-in ideas of defining what a service is and then doing service discovery. Service discovery is a fancy name for naming. It's like, I have a name for something and I want to look that up to get an IP address so that I can talk to it. Traditionally we use DNS. DNS is problematic in these super dynamic environments, so a lot of folks, as they build backend systems within the data center, really start moving past DNS to something that's a lot more dynamic and purpose-built for that. But you can think about it in your mind as a fancy, super-fast DNS. CHARLES: The cluster is itself something that's abstract, so I can change its state and configure it and say, "I want 10 instances of Postgres running," or, "I want between five and 15 and it will handle all of that for you." How do you then make it smart so that you can react to load? For example, all of a sudden this thing is handling more load so I need to say... What's the word I'm looking for, I need to handle -- JOE: Autoscale? CHARLES: Yeah, autoscale. Are there primitives for that? JOE: Exactly. Kubernetes itself was meant to be a toolbox that you can build on top of. There are some common community-built primitives for doing this. It's called -- excuse the nomenclature here because there's a lot of it in Kubernetes and I can define it -- Horizontal Pod Autoscaling. It's this idea that you can have a set of pods and you want to tune the number of replicas of that pod based on load. That's something that's built in. But now maybe your cluster doesn't have enough nodes as you go up and down, so there's this idea of cluster autoscaling, where I want to add more capacity that I'm actually launching these things into. Fundamentally, Kubernetes is built on top of virtual machines, so at the base there's a bunch of virtual or physical machine hardware that's running, and then it's the idea of how do I schedule stuff into that and then pack things into that cluster. There's this idea of scaling the cluster but then also scaling workloads running on top of the cluster. If you find that some of these algorithms or methods -- for how you want to scale things, when you want to launch things, how you want to hook them up -- don't work for you, the Kubernetes system itself is programmable, so you can build your own algorithms for how you want to launch and control things. It's really built from the get-go to be an extensible system. CHARLES: One question that keeps coming up as I hear you describing these things: the Kubernetes cluster, then, is not application-oriented, so you could have multiple applications running on a single cluster? JOE: Very much so. CHARLES: How do you then layer your application abstraction on top of this cluster abstraction? JOE: An application is made up of a bunch of running bits, whether it be a database. I think as we move towards microservices, it's not just going to be one set of code. 
It can be a bunch of sets of code that are working together or a bunch of servers that are working together. There are these ideas like: I want to run 10 of these things, I want to run five of these things, I want to run three of these things, and then I want them to be able to find each other, and then I want to take this thing and expose it out to the internet through a load balancer on Amazon, for example. Kubernetes can help to set up all those pieces. It turns out that Kubernetes doesn't have an idea of an application. There is actually no object inside of Kubernetes called an application. There is this idea of running services and exposing services, and if you bring a bunch of services together, that ends up being an application. But in a modern world, you actually have services that can play double duty across applications. One of the things that I think is exciting about Kubernetes is that it can grow with you as you move from a single application to something that really becomes a service mesh as your application and your company grow. Imagine that you have some sort of app and then you have your customer service portal for your internal employees. You can have those both being frontend applications, both running on a Kubernetes cluster, talking to a common backend with a hidden API that you don't expose to customers but that is exposed to both of those frontends, and then that API may talk to a database. Then as you understand your problems, you can actually spawn off different microservices that can be managed separately by different teams. Kubernetes becomes a platform where you can actually start with something relatively simple and then grow with that and have it stretch from a single application, to a multiple-service, microservice-based application, to a larger cluster that can actually stretch across multiple teams, and there are a bunch of facilities for folks not stepping on each other's toes as they do this stuff. Just to be clear, this is what Kubernetes is at its base. I think one of the powerful things that you can do is that there's a whole host of folks that are building more platform-as-a-service-like abstractions on top of Kubernetes. I'm not going to say it's a trivial thing, but it's a relatively straightforward thing to build a Heroku-like experience on top of Kubernetes. But the great thing is that if you find that that Heroku-like experience, if some of the opinions that were made as part of that, don't work for you, you can actually drop down to a level that's more useful than going all the way down to a raw VM, because right now, if you're running on Heroku and something doesn't work for you, it's like, "Here's a raw VM. Good luck with that." There's a huge cliff as you actually want to start coloring outside the lines, as I mix my metaphors here, for these platform services. ELRICK: What services are out there that you can use that implement Kubernetes? JOE: That's a great question. There are a whole host there. One of the folks in the community has pulled together a spreadsheet of all the different ways to install and run Kubernetes and I think there were something like 60 entries on it. It's an open source system. It's incredibly adaptable in terms of running via all sorts of different mechanisms and places, and there are really active startups that are helping folks to run that stuff. In terms of the easiest turnkey things, I would probably start with Google Container Engine, which is honestly one click. It fits within a Free Tier. 
It can get you up and running so that you can actually play with Kubernetes super easily. There's this thing from the folks at CoreOS called minikube that lets you run it on your laptop as a development environment. That's a great way to kick the tires. If you're on Amazon, my company Heptio has a quick start that we did with some of the Amazon community folks. It's a CloudFormation template that launches a Kubernetes stack that you can get up and running and really understand what's happening. I think as users understand what value it brings at the user level, then they'll figure out whether they want to invest in figuring out what the best place to run it and the best way to run it is for them. I think my advice to folks would be to find some way to start getting familiar with it and then decide if you have to go deep in terms of how to be a cluster operator and how to run the thing. ELRICK: Yup. That was going to be my next question. You just brought up your company, Heptio. What was the reason for starting that startup? JOE: Heptio was founded by Craig McLuckie, one of the other Kubernetes founders, and me. We started about six or seven months ago now. The goal here is to bring Kubernetes to enterprises and bridge the gap of bringing some of this technology-forward company thinking to more mainstream enterprises. Think about companies like Google and Twitter and Facebook: they have a certain way of thinking about building and deploying software. How do we bring those ideas into the more mainstream enterprise? How do we bridge that gap? We're really using Kubernetes as the tool to do that. We're doing a bunch of things to make that happen. The first being that we're offering training, support and services, so right now, if companies want to get started today, they can engage with us and we can help them understand what makes sense there. Over time, we want to make that more self-service, easier to do, so that you actually don't have to hire someone like us to get started and to be successful there. We want to invest in the community in terms of making Kubernetes easier to approach, easier to run and then more applicable to a more diverse set of audiences. This conversation that we're having here, I'm hoping that at some point we won't have to have it, because Kubernetes will be easy enough and self-describing enough that folks won't feel like they have to dig deep to get started. Then the last thing that we're going to be doing is offering commercial services and software that really help stitch Kubernetes into the fabric of how large companies work. I think there's a set of tools that you need as you move from being a startup or a small team to actually dealing with the structure of a large enterprise, and that's really where we're going to be looking to create and sell product. ELRICK: Gotcha. CHARLES: How does Kubernetes then compare and contrast with other technologies that we hear about when we talk about integrating with the enterprise and having enterprise clients manage their own infrastructure -- things like Cloud Foundry, for example? From someone who's kind of ignorant of both, how do you discriminate between the two? JOE: Cloud Foundry is more of a traditional platform as a service. There's a lot to like there, and there are some places where the Kubernetes community and the Cloud Foundry community are starting to cooperate. There is a common way for provisioning and creating external services so you can say, "I want a MySQL database." 
We're trying to make that idea of, "Give me a MySQL database. I don't care who's running it or where." We're trying to make those mechanisms common across Cloud Foundry and Kubernetes, so there is some effort going on there. But Cloud Foundry is more of a traditional platform as a service. It's opinionated in terms of the right way to create, launch, roll out and hook services together. Whereas Kubernetes is more of a building block type of thing. Kubernetes -- at least raw Kubernetes -- is in some ways more of a lower-level building block technology than something like Cloud Foundry. The most applicable competitor in this world to Cloud Foundry, I would say, would be OpenShift from Red Hat. OpenShift is a set of extensions built on top of Kubernetes. Right now, it's a little bit of a modified version of Kubernetes, but over time that team is working to make it a set of pure extensions on top of Kubernetes that adds a platform as a service layer on top of the container cluster layer. The experience for OpenShift will be comparable to the experience for Cloud Foundry. There are other folks, like Microsoft, which just bought a small company called Deis. They offer a thing called Workflow which gives you a little bit of the flavor of a platform as a service also. There are multiple flavors of platforms built on top of Kubernetes that would be more apples-to-apples comparable to something like Cloud Foundry. Now, the interesting thing with Deis' Workflow or OpenShift or some of the other platforms built on top of Kubernetes is that, again, if you find yourself where that platform doesn't work for you for some reason, you don't have to throw out everything. You can actually start picking and choosing what primitives you want to drop down to in the Kubernetes world without having to go down to raw VMs. Whereas Cloud Foundry really doesn't have a widely supported, sort of more raw interface for running containers and services. It's kind of subtle. CHARLES: Yeah, it's kind of subtle. This is an analogy that just popped into my head while I was listening to you and I don't know if this is way off base. But when you were describing having... What was the word you used? You said a container clast --? It was a container clustered... JOE: Container orchestrator, container cluster. These are all -- CHARLES: Right, and then kind of hearkening back to the beginning of our conversation where you were talking about being able to specify, "I want 10 of these processes," or an elastic amount of these processes, that reminded me of the Erlang VM and how kind of baked into that thing is the concept of these lightweight processes, and being able to manage communication between these lightweight processes and also supervise these processes and have layers of supervisors supervising other supervisors, to be able to declare a configuration for a set of processes to always be running, and then also propagate failure of those processes and escalate and stuff like that. Would you say that there is an analogy there? I know they are completely separate beasts but is there a co-evolution there? JOE: I've never used Erlang in anger so it's hard for me to speak super knowledgeably about it. From what I understand, I think there is a lot in common there. 
I think Erlang was originally built by Nokia for telecom switches, I believe, where you have these strong availability guarantees, so any time you're aiming for high availability, you need to decouple things with outside control loops and ways to actually coordinate across pieces of hardware and software so that when things fail, you can isolate that and have a blast radius for a failure, and then have higher level mechanisms that can help recover. That's very much what happens with something like Kubernetes and a container orchestrator. I think there's a ton of parallels there. CHARLES: I'm just trying to grasp at analogies of things that might be -- ELRICK: I think they call that OTP, the Open Telecom Platform or something like that, in Erlang. CHARLES: Yeah, but it just got a lot of these things -- ELRICK: Very similar. CHARLES: Yeah, it seems very similar. ELRICK: Interestingly enough, for someone that's starting from the bottom, someone uninitiated to Kubernetes, containers, Docker images, Docker, where would they start to ramp themselves up? I know you mentioned that you are writing a book --? JOE: Yes. ELRICK: -- 'Kubernetes: Up and Running'. Would that be a good place to start when it comes out, or is there another place they should start before they get there? What are your thoughts on that? JOE: Definitely check out the book. This is a book that I'm writing with Kelsey Hightower, who's one of the developer evangelists for Google. He is the most dynamic speaker I've ever seen, so if you ever have a chance to see him live, it's pretty great. But Kelsey started this and he's a busy guy, so he brought in Brendan Burns, one of the other Kubernetes co-founders, and me to help finish that book off, and that should be coming out soon. It's Kubernetes: Up and Running. Definitely check that out. There's a bunch of good tutorials out there also that start introducing you to a lot of the concepts in Kubernetes. I'm not going to go through all of those concepts right now. There's probably like half a dozen different concepts and pieces of terminology that you have to learn to really get going with it, and I think that's a problem right now. There's a lot to import before you can get started. I gave a talk at the Kubernetes conference in Berlin a month or two ago and it was essentially like, yeah, we've got our work cut out for us to actually make this stuff applicable to a wider audience. But if you want to see the power, I think one of the things that you can do is there's a system built on top of Kubernetes called Helm, H-E-L-M, like a ship's helm, because we love our nautical analogies here. Helm is a package manager for Kubernetes. Just like you can log into, say, an Ubuntu machine and do apt-get install mysql and you have a database up and running, with Helm you can say, create and install a WordPress install on my Kubernetes cluster and it'll just make that happen. It takes this idea of package management, of describing applications, up to the next level. When you're doing regular sysadmin stuff, you can actually go through and do the system to [Inaudible] files or to [Inaudible] files and copy stuff out and use Puppet and Chef to orchestrate all of that stuff. Or you can take the stuff that the package maintainers for the operating system have done and actually just go ahead and say, "Get that installed." We want to be able to offer a similar experience at the cluster level. I think that's a great way to start seeing the power. 
After you understand all these concepts here is how easy you can make it to bring up and run these distributed systems that are real applications. The Weaveworks folks, there are company that do container networking and introspection stuff based out of London. They have this example application called Sock Shop. It's like the pet shop example but distributed and built to show off how you can build an application on top of Kubernetes that pulls a lot of moving pieces together. Then there's some other applications out there like that that give you a little bit of an idea of what things look like as you start using this stuff to its fullest extent. I would say start with something that feels concrete where you can start poking around and seeing how things work before you commit. I know some people are sort of depth first learners and some are breadth first learners. If you're depth first, go and read the book, go to Kubernetes documentation site. If you're breadth first, just start with an application and go from there. ELRICK: Okay. CHARLES: I think I definitely fall into that breadth first. I want to build something with it first before trying to manage my own cluster. ELRICK: Yeah. True. I think I watched your talk and I did watch one of Kelsey's talks: container management. There was stuff about replicators and schedulers and I was like, "The ocean just getting deeper and deeper," as I listened to his talk. JOE: Actually, I think this is one of the cultural gaps to bridge between frontend and backend thinking. I think a lot of backend folks end up being these depths first types of folks, where when they want to use a technology, they want to read all the source code before they first apply it. I'm sure everybody has met those type of developers. Then I think there's folks that are breadth first where they really just want to understand enough to be effective, they want to get something up and running, they want to like if they hit a problem, then they'll go ahead and fix that problem but other than that, they're very goal-oriented towards, I want to get this thing running. Kubernetes right now is kind of built by systems engineers for systems engineers and it shows so we have our work cut out for us, I think to bridge that gap. It's going to be an ongoing thing. ELRICK: Yeah, I'm like a depth first but I have to keep myself in check because I have to get work done as a developer. [Laughter] JOE: That sounds about right, yeah. Yeah, so you're held accountable for writing code. CHARLES: Yeah. That's where real learning happens when you're depth first but you've got deadlines. ELRICK: Yes. CHARLES: I think that's a very effective combination. Before we go, I wanted to switch topic away from Kubernetes for just a little bit because you mentioned something when we were emailing that, I guess in a different lifetime you were actually on the original IE team or at the very beginning of the Internet Explorer team at Microsoft? JOE: Yes, that's where I started my career. Back in '97, I've done a couple of internships at Microsoft and then went to join full time, moved up here to Seattle and I had a choice between joining the NT kernel team or the Internet Explorer team. This was after IE3 before IE4. I don't know if this whole internet thing is going to pan out but it looks like that gives you a lot of interesting stuff. You got to understand the internet, it wasn't an assumed thing back then, right? ELRICK: Yeah, that's true. JOE: I don't know, this internet thing. CHARLES: I know. 
I was there and I know that like old school IE sometimes gets a bad rap. It does get a bad rap for being a little bit of an albatross but if you were there for the early days of IE, it really was the thing that blew it wide open like people do not give credit. It was extraordinarily ahead of its time. That was [Inaudible] team that coin DHTML back to when it was called DHTML. I remember, actually using it for the first time, I think about '97 is about what I was writing raw HTML for everything. CSS wasn't even a thing hardly. When I realized, all these static things when we render them, they're etched in stone. The idea that every one of these properties which I already knew is now dynamic and completely reflected, just moment to moment. It was just eye-opening. It was mind blowing and it was kind of the beginning of the next 20 years. I want to just talk a little bit about that, about where those ideas came from and what was the impetus for that? JOE: Oh, man. There's so much history here. First of all, thank you for calling out. I think we did a lot of really interesting groundbreaking work then. I think the sin was not in IE6 as it was but in [inaudible]. I think the fact that -- CHARLES: IE6 was actually an amazing browser. Absolutely an amazing browser. JOE: And then the world moved past it, right? It didn't catch up. That was the problem. For its time when it was released, I was proud of that release. But four years on, things get a little bit long in the tooth. I think IE3 was based on rendering engine that was very static, very similar to Netscape at the time. The thing to keep in mind is that Netscape at that time, it would download a webpage, parse it and display it. There was no idea of a DOM at Netscape at that point so it would throw away a lot of the information and actually only store stuff that was very specific to the display context. Literally, when you resize the window for Netscape back then, it would actually reparse the original HTML to regenerate things. It wasn't even able to actually resize the window without going through and reparsing. What we did with IE4 -- and I joined sort of close to the tail on IE4 so I can't claim too much credit here -- is bringing some of the ideas from something like Visual Basic and merge those into the idea of the browser where you actually have this programming model which became the DOM of where your controls are, how they fit together, being able to live modify these things. This was all part and parcel of how people built Windows applications. It turns out that IE4 was the combination of the old IE3 rendering engine, sort of stealing stuff from there but then this project that was built as a bunch of Active X controls for Office called [inaudible]. As you smash that stuff together and turn it into a browser rendering engine, that browser rendering engine ended up being called Trident. That's the thing that got a nautical theme. I don't think it's connected and that's the thing that that I joined and started working on at the time. This whole idea that you have actually have this DOM, that you can modify a programmable representation of DHTML and have it be live updated on screen, that was only with IE4. I don't think anybody had done it at that point. The competing scheme from Netscape was this thing called layers where it was essentially multiple HTML documents where you could replace one of the HTML documents and they would be rendered on top of each other. It was awful and it lost to the mist of time. 
CHARLES: I remember marketing material about layers and hearing how layers was just going to be this wonderful thing but I don't ever remember actually, did they ever even ship it? JOE: I don't know if they did or not. The thing that you got to understand is that anybody who spent any significant amount of time at Microsoft, you just really internalize the idea of a platform like no place else. Microsoft lives and breathes platforms. I think sometimes it does them a disservice. I've been out of Microsoft for like 13 years now so maybe some of my knowledge is a little outdated here but I still have friends over there. But Microsoft is like the poor schmuck that goes to Vegas and pulls the slot machine and wins the jackpot on the first pull. I'm not saying that there wasn't a lot of hard work that went behind Windows but like they hit the goldmine with that from a platform point of view and then they essentially did it again with Office. You have these two incredibly powerful platforms that ended up being an enormous growth engine for the company over time so that fundamentally changed the world view of Microsoft where they really viewed everything as a platform. I think there were some forward thinking people at Netscape and other companies but I think, Microsoft early on really understood what it meant to be a platform and we saw back then what the web could be. One of the original IE team members, I'm going to give a shout out to him, Chris Wilson who's now on the Chrome team, I think. I don't know where he is these days. Chris was on the original IE team. He's still heavily involved in web standards. None of this stuff is a surprise to us. I look at some of the original so after we finished IE6, a lot of the IE team rolled off to doing Avalon which became Windows Presentation Foundation, which was really looking to sort of reinvent Windows UI, importing a bunch of the ideas from web and modern programming there. That's where we came up with XAML and eventually begat Silverlight for good or ill. But some of our original demos for Avalon, if you go back in time and look at that, that was probably... I don't know, 2000 or something like that. They're exactly the type of stuff that people are building with the web platform today. Back then, they'll flex with the thing. We're reinventing this stuff over and over again. I like where it's going. I think we're in a good spot right now but we see things like the Shadow DOM come up and I look at that and I'm like, "We had HTC controls which did a lot of Shadow DOM stuff like stuff in IE early on." These things get reinvented and refined over time and I think it's great but it's fascinating to be in the industry long enough that you can see these patterns repeat. CHARLES: It is actually interesting. I remember doing UI in C++ and in Java. We did a lot of Java and it was a long time. I felt like I was wandering in the wilderness of the web where I was like, "Oh, man. I just wish we had these capabilities of things that we could do in swing, 10 or 15 years ago," but the happy ending is that I really actually do feel we are in a place now, finally where you have options for it really is truly competitive as a developer experience to the way it was, these many years ago and it's also a testament just how compelling the deployment model of the web is, that people were willing to forgo all of that so they could distribute their applications really easily. JOE: Never underestimate the power of view source. CHARLES: Yeah. 
[Laughter] ELRICK: I think that's why this sort of conversations are very powerful, like going back in time and looking at the development up until now because like they say that people that don't know their history, they're doomed to repeat it. I think this is a beautiful conversation. JOE: Yeah. Because I've done that developer focused frontend type of stuff. I've done the backend stuff. One of the things that I noticed is that you see patterns repeat over and over again. Let's be honest, it probably more like a week, I was going to say a weekend and learn the React the other day and the way that it encapsulate state up and down, model view, it's like these things are like there's different twists on them that you see in different places but you see the same patterns repeat again and again. I look at the way that we do scheduling in Kubernetes. Scheduling is this idea that you have a bunch of workloads that have a certain amount of CPU and RAM that they require like you want to play this Tetris game of being able to fit these things in, you look at scheduling like that and there are echoes for how layout happens in a browser. There is a deeper game coming on here and as you go through your career and if you're like me and you always are interested in trying new things, you never leave it all behind. You always see things that influence your thinking moving forward. CHARLES: Absolutely. I kind of did the opposite. I started out on the backend and then moved over into the frontend but there's never been any concept that I was familiar with working on server side code that did not come to my aid at some point working on the frontend. I can appreciate that fully. ELRICK: Yup. I can agree with the same thing. I jump all around the board, learning things that I have no use currently but somehow, they come back to help me. CHARLES: That will come back to help you. You thread them together at some point. ELRICK: Yup. CHARLES: As they said in one of my favorite video games in high school, Mortal Kombat there is no knowledge that is not power. JOE: I was all Street Fighter. CHARLES: Really? [Laughter] JOE: I cut class in high school and went to play Street Fighter at the mall. CHARLES: There is no knowledge that isn't power except for... I'm not sure that the knowledge of all these little mashy key buttons combinations, really, I don't think there's much power in that. JOE: Well, the Konami code still shows up all the time, right? [Laughter] CHARLES: I'm surprised how that's been passed down from generation to generation. JOE: You still see it show up in places that you wouldn't expect. One of the sad things that early on in IE, we had all these Internet Explorer Easter eggs where if you type this right combination into the address bar, do this thing and you clicked and turn around three times and face west, you actually got this cool DHTML thing and those things are largely disappearing. People don't make Easter eggs like they used to. I think there's probably legal reasons for making sure that every feature is as spec. But I kind of missed those old Easter eggs that we used to find. CHARLES: Yeah, me too. I guess everybody save their Easter eggs for April 1st but -- JOE: For the release notes, [inaudible]. CHARLES: All right. Well, thank you so much for coming by JOE. I know I'm personally excited. 
I'm going to go find one of those Kubernetes as a services that you mentioned and try and do a little breadth first learning but whether you're depth first or breadth first, I say go to it and thank you so much for coming on the show. JOE: Well, thank you so much for having me on. It's been great. CHARLES: Before we go, there is actually one other special item that I wanted to mention. This is the Open Source Bridge which is a conference being held in Portland, Oregon on the 20th to 23rd of June this year. The tracks are activism, culture hacks, practice and theory and podcast listeners may be offered a discount code for $50 off of the ticket by entering in the code 'podcast' on the Event Brite page, which we will link to in the show notes. Thank you, Elrick. Thank you, Joe. Thank you everybody and we will see you next week.
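The code sketches referenced in the transcript follow. They are not from the episode; they are minimal, hedged illustrations of the ideas Joe describes, written against the official Kubernetes Go client (client-go, assuming roughly v0.18-or-later call signatures). All names, images and numbers are placeholder assumptions. First, the declarative "make sure 10 of these things are running" idea, expressed as a Deployment:

```go
// Sketch: declare desired state (10 replicas of a containerized process) and
// let the Kubernetes controllers converge on it, restarting replicas when
// nodes go away. Names and the image are placeholders.
package main

import (
	"context"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	// Load credentials from the local kubeconfig (e.g. ~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// The desired state: 10 copies of a single-container pod labeled app=web.
	deploy := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "web"},
		Spec: appsv1.DeploymentSpec{
			Replicas: int32Ptr(10),
			Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"app": "web"}},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"app": "web"}},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{
						{Name: "web", Image: "nginx:1.25"},
					},
				},
			},
		},
	}

	created, err := client.AppsV1().Deployments("default").Create(context.TODO(), deploy, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("created deployment %q with %d replicas\n", created.Name, *created.Spec.Replicas)
}
```

The same thing is more commonly done by applying an equivalent YAML manifest with kubectl; the point here is only that desired state is declared once and the cluster's control loops keep converging on it.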
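Second, the "service discovery is a fancy name for naming" idea from the conversation. This sketch (same assumptions as above) gives the pods created by that Deployment a stable name inside the cluster and, via a LoadBalancer-type Service, exposes them to the internet:

```go
// Sketch: a Service gives a stable name and virtual IP to whatever pods match
// its selector, so other workloads can find "web" without caring which nodes
// the pods landed on. Names and ports are placeholders.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "web"},
		Spec: corev1.ServiceSpec{
			// Route to any pod labeled app=web, wherever the scheduler put it.
			Selector: map[string]string{"app": "web"},
			Ports: []corev1.ServicePort{
				{Name: "http", Port: 80, TargetPort: intstr.FromInt(8080)},
			},
			// LoadBalancer asks the cloud provider (for example an Amazon ELB)
			// to expose the service externally; ClusterIP would keep it internal.
			Type: corev1.ServiceTypeLoadBalancer,
		},
	}

	created, err := client.CoreV1().Services("default").Create(context.TODO(), svc, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	// Inside the cluster, other pods can now reach this via the DNS name
	// web.default.svc.cluster.local (or just "web" from the same namespace).
	fmt.Printf("created service %q\n", created.Name)
}
```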
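Finally, Charles's "between five and 15" scenario roughly maps to the Horizontal Pod Autoscaling primitive Joe mentions. A sketch using the simple CPU-based autoscaling/v1 API; the target name and threshold are placeholders, and a metrics source such as metrics-server has to be running in the cluster for the autoscaler to act:

```go
// Sketch: tune the number of replicas of the "web" Deployment between 5 and 15
// based on observed CPU load.
package main

import (
	"context"
	"fmt"

	autoscalingv1 "k8s.io/api/autoscaling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	hpa := &autoscalingv1.HorizontalPodAutoscaler{
		ObjectMeta: metav1.ObjectMeta{Name: "web"},
		Spec: autoscalingv1.HorizontalPodAutoscalerSpec{
			// Point the autoscaler at the Deployment from the earlier sketch.
			ScaleTargetRef: autoscalingv1.CrossVersionObjectReference{
				APIVersion: "apps/v1",
				Kind:       "Deployment",
				Name:       "web",
			},
			MinReplicas:                    int32Ptr(5),
			MaxReplicas:                    15,
			TargetCPUUtilizationPercentage: int32Ptr(80),
		},
	}

	created, err := client.AutoscalingV1().HorizontalPodAutoscalers("default").Create(context.TODO(), hpa, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("created autoscaler %q (min %d, max %d)\n", created.Name, *created.Spec.MinReplicas, created.Spec.MaxReplicas)
}
```

Cluster autoscaling, the other half of what Joe describes, is handled by a separate component (typically the cluster autoscaler provided by the cloud platform) rather than by this API object.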

Cloud Engineering – Software Engineering Daily
Google Cloudbuilding with Joe Beda

Cloud Engineering – Software Engineering Daily

Play Episode Listen Later Oct 20, 2016 60:47


Google Compute Engine is the public cloud built by Google. It provides infrastructure- and platform-as-a-service capabilities that rival Amazon Web Services. Today’s guest Joe Beda was there from the beginning of GCE, and he was also one of the early engineers on the Kubernetes project. Google’s internal systems have made it easy for employees to The post Google Cloudbuilding with Joe Beda appeared first on Software Engineering Daily.