Podcasts about KubeCon + CloudNativeCon

  • 32 podcasts
  • 105 episodes
  • 35m average duration
  • Infrequent episodes
  • Latest episode: Nov 25, 2024




Latest podcast episodes about KubeCon + CloudNativeCon

OpenObservability Talks
CNCF Ambassadors Share the Best of KubeCon 2024 - OpenObservability Talks S5E06

Nov 25, 2024 · 59:54


Catch up on everything you missed at KubeCon North America 2024! Join us for a special recap that brings you closer to the action. This is a special episode in collaboration with the Cloud Native Computing Foundation (CNCF), the foundation behind KubeCon+CloudNativeCon and the cloud-native projects. Dotan Horovits, our host and a CNCF Ambassador, is joined by an all-star panel of cloud-native experts, CNCF Ambassadors Viktor Farcic and Max Körbächer, each bringing their own insights and takeaways from the conference. Together, they unpack the major project announcements and key themes from this year's event: the standout talks, co-located events, maintainer meetings and those memorable hallway conversations. Get insights from the experts who know the cloud-native space inside out.
Viktor Farcic is a lead rapscallion at Upbound and a published author. He hosts the YouTube channel DevOps Toolkit and co-hosts DevOps Paradox. Max Körbächer is Co-Founder at Liquid Reply, Co-Chair of the CNCF Environmental Sustainability Technical Advisory Group, and served three years on the Kubernetes release team. He runs the Munich Kubernetes Meetup as well as the Munich and Ukraine Kubernetes Community Days. Dotan Horovits is a DevOps specialist with a special focus on observability solutions and related open source projects such as OpenTelemetry, Jaeger, Prometheus and OpenSearch. He runs the OpenObservability Talks podcast, now in its fifth year.
Don't miss this expert-led KubeCon recap, in collaboration with the Cloud Native Computing Foundation's official channel! The episode was live-streamed on 19 November 2024 in collaboration with the CNCF. The video is available at https://www.youtube.com/watch?v=1TrPev5IzB8 and the recap post at https://medium.com/@horovits/1362959030c1
OpenObservability Talks episodes are released monthly, on the last Thursday of each month, and are available for listening on your favorite podcast app and on YouTube. We live-stream the episodes on Twitch and YouTube Live - tune in to see us live, and chime in with your comments and questions on the live chat.
https://www.youtube.com/@openobservabilitytalks
https://www.twitch.tv/openobservability
Show Notes:
00:00 - episode and speaker intro
02:45 - KubeCon Salt Lake City stats and trends
05:26 - The cloud-native stack is maturing
08:12 - KubeCon's role in the cloud-native space
11:23 - Platform Engineering trend
14:07 - Open specifications and Kubernetes API
18:44 - Flatcar joins the CNCF with a container-focused OS
24:54 - wasmCloud moves to CNCF incubation and WASMCon highlights
31:49 - CNCF Ambassador program and recent Community Awards
35:24 - KubeCon event plan and expansion, and local KCDs
43:34 - Environmental Sustainability TAG
47:46 - Dapr and cert-manager reached CNCF graduation
51:11 - Cloud Native Reference Architectures
54:39 - observability updates for Jaeger, Prometheus and more
58:53 - episode outro
Resources:
CNCF community awards: https://www.cncf.io/announcements/2024/11/14/cloud-native-computing-foundation-announces-the-2024-community-awards-winners/
Dapr graduation: https://www.cncf.io/announcements/2024/11/12/cloud-native-computing-foundation-announces-dapr-graduation/
wasmCloud moves to incubation: https://www.cncf.io/blog/2024/11/12/cncf-welcomes-wasmcloud-to-the-cncf-incubator/
More on wasmCloud: https://medium.com/p/02a5025c6115
OpenTelemetry expands into CI/CD observability: https://www.linkedin.com/feed/update/urn:li:share:7259200802689273856
Jaeger v2 unveiled: https://medium.com/p/be612dbee774
Prometheus 3.0 unveiled: https://medium.com/p/1c5edca32c87
Flatcar joins the CNCF: https://www.linkedin.com/feed/update/urn:li:share:7257278073824288768/
OpenCost matured into incubation: https://www.linkedin.com/feed/update/urn:li:share:7257826394179522562
New Cloud Native Reference Architecture hub: https://architecture.cncf.io/
CNCF upcoming events: https://www.cncf.io/events/
Kubernetes Community Days events around the world: https://www.cncf.io/kcds/
Socials:
Twitter: https://twitter.com/OpenObserv
YouTube: https://www.youtube.com/@openobservabilitytalks
Dotan Horovits: Twitter @horovits · LinkedIn www.linkedin.com/in/horovits · Mastodon @horovits@fosstodon · BlueSky @horovits.bsky.social
Viktor Farcic: Twitter https://twitter.com/vfarcic · LinkedIn https://www.linkedin.com/in/viktorfarcic · BlueSky https://bsky.app/profile/vfarcic.bsky.social
Max Körbächer: Twitter https://twitter.com/mkoerbi · LinkedIn https://www.linkedin.com/in/maxkoerbaecher · BlueSky https://bsky.app/profile/mkoerbi.bsky.social · Mastodon https://fosstodon.org/@mkorbi@mastodon.social

Kubernetes Podcast from Google
Leading Kubernetes into its Second Decade

Jun 11, 2024 · 63:18


We talk with Nikhita Raghunath, Nabarun Pal, and Paco Xu. Nikhita, Nabarun, and Paco have each held various leadership positions related to the Kubernetes project. They talk about their journeys, the various leadership roles they've been in, and offer advice for new contributors and those who want to move into leadership in the project.
Nikhita is a Staff Software Engineer at Broadcom. She is currently a member of the CNCF Technical Oversight Committee (TOC), overseeing all technical matters of the CNCF. In the past, she was a member of the Kubernetes Steering Committee and a technical lead for SIG Contributor Experience, and she has also won the CNCF Top Committer Award. She is currently also a co-chair of the KubeCon+CloudNativeCon conference.
Nabarun is a Staff Software Engineer at Broadcom, a maintainer of the Kubernetes project, a member of the Kubernetes Steering Committee and a chair of Kubernetes SIG Contributor Experience. In the past, he was the release lead for Kubernetes 1.21 and has served on eight release teams. Nabarun also works actively with the Python community by organizing PyCon India and has been recognized in media publications for his work.
Paco is an open source team lead at DaoCloud. He started working on containers and Docker in 2016 and began participating in the Kubernetes community in 2018. He is a current member of the Kubernetes Steering Committee and works mainly on kubeadm and SIG Node. He is Co-chair of KubeCon+CloudNativeCon China 2024.
Do you have something cool to share? Some questions? Let us know:
- web: kubernetespodcast.com
- mail: kubernetespodcast@google.com
- twitter: @kubernetespod
News of the week:
Blog: 10 Years of Kubernetes
CNCF-Hosted Co-Located Events Overview
CFP for CNCF-hosted Co-located Events
Kubernetes Community Days
Links from the interviews:
CNCF Technical Oversight Committee
SIG ContribEx
Google Summer of Code
CNCF Top Committer Award 2021 - Nikhita Raghunath
Blog Post: Google Summer of Code with Kubernetes by Nikhita Raghunath
Kubernetes Docs: Extend the Kubernetes API with CustomResourceDefinitions
SIG API Machinery
SIG Testing
SIG Release
CNCF Chop Wood Carry Water Award 2018 - Nikhita Raghunath
Kubernetes Steering Committee
KubeCon India
KubeCon NA
Kubernetes 1.21: Power to the Community
PyCon India
Kubernetes Python Client on GitHub
Kubernetes Contributor Summit 2019 YouTube Playlist
Kubernetes Release Team
KubeCon NA 2024 Scholarships (applications due by September 1, 2024)
Kubeadm
SIG Node
KubeCon China 2024
Kubelet
Kubernetes Production Readiness Review Process
Kubernetes Release Team CI Signal Lead Runbook

The IaC Podcast
Cloud-Native Security and Networking with Liz Rice

May 30, 2024 · 26:00


How are modern cloud-native environments changing the way we handle security? Liz Rice, Chief Open Source Officer at Isovalent, explains why traditional IP-based network policies are becoming outdated and how game-changers like Cilium and eBPF, which leverage Kubernetes identities, offer more effective and readable policies. We also discuss the role of community-driven projects under the CNCF, and she shares tips for creating strong, future-proof solutions. What challenges should we expect next? Tune in to find out!
Liz Rice is Chief Open Source Officer with eBPF specialists Isovalent, creators of the Cilium cloud native networking, security and observability project. She is the author of Container Security and Learning eBPF, both published by O'Reilly, and she sits on the CNCF Governing Board and on the Board of OpenUK. She was Chair of the CNCF's Technical Oversight Committee in 2019-2022, and Co-Chair of KubeCon + CloudNativeCon in 2018. She has a wealth of software development, team, and product management experience from working on network protocols and distributed systems, and in digital technology sectors such as VOD, music, and VoIP. When not writing code, or talking about it, Liz loves riding bikes in places with better weather than her native London, competing in virtual races on Zwift, and making music under the pseudonym Insider Nine.
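To make the identity-based policy idea concrete, here is a minimal, hypothetical sketch of a Kubernetes NetworkPolicy that allows traffic based on pod labels rather than IP ranges. It is not code from the episode; the namespace, labels and port are made up for illustration, and the manifest is emitted with PyYAML so it can be reviewed and applied with kubectl.

```python
# Hypothetical sketch: select workloads by label (identity), not by IP address.
# Pod IPs churn on every reschedule; labels follow the workload.
import yaml  # PyYAML

policy = {
    "apiVersion": "networking.k8s.io/v1",
    "kind": "NetworkPolicy",
    "metadata": {"name": "allow-frontend-to-api", "namespace": "demo"},
    "spec": {
        "podSelector": {"matchLabels": {"app": "api"}},  # pods the policy protects
        "policyTypes": ["Ingress"],
        "ingress": [
            {
                # Only pods labelled app=frontend may connect; an ipBlock rule
                # here would go stale as soon as pod IPs were recycled.
                "from": [{"podSelector": {"matchLabels": {"app": "frontend"}}}],
                "ports": [{"protocol": "TCP", "port": 8080}],
            }
        ],
    },
}

print(yaml.safe_dump(policy, sort_keys=False))
```

Cilium builds on the same label-based model with its own policy resources, adding capabilities such as L7-aware rules on top of what the stock NetworkPolicy API expresses.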

Open at Intel
Better Than the Sum of Our Parts

Apr 24, 2024 · 28:11 (transcript available)


Stephen Augustus, Head of Open Source at Cisco, and Liz Rice, Chief Open Source Officer at Isovalent, discuss Cisco's acquisition of Isovalent, which has closed since recording, bringing together two teams with long-standing expertise in open source cloud native technologies, observability, and security. The two share their excitement about working together, emphasizing the alignment of Isovalent with Cisco's security division and the potential enhancements this acquisition brings to open source projects like Cilium and eBPF. They explore the implications for the open source community, and the continuous investment and development in these projects under Cisco's umbrella. We discuss the ways this merger could innovate security practices, enhance infrastructure observability, and leverage AI for more intelligent networking solutions.
00:00 Welcome and Introduction
00:22 Cisco's Acquisition of Isovalent
00:53 The Excitement and Potential of the Acquisition
02:14 Strategic Alignment and Future Vision
04:03 Open Source Commitment and Community Impact
06:53 The Road Ahead: Integration and Innovation
19:49 Exploring AI and Future Technologies at Cisco
26:03 Reflections and Closing Thoughts
Resources:
Cilium, eBPF and Beyond | Open at Intel (podbean.com)
The Art of Open Source: A Conversation with Stephen Augustus | Open at Intel (podbean.com)
Guests:
Liz Rice is Chief Open Source Officer with eBPF specialists Isovalent, creators of the Cilium cloud native networking, security and observability project. She was Chair of the CNCF's Technical Oversight Committee in 2019-2022, and Co-Chair of KubeCon + CloudNativeCon in 2018. She is also the author of Container Security, published by O'Reilly. She has a wealth of software development, team, and product management experience from working on network protocols and distributed systems, and in digital technology sectors such as VOD, music, and VoIP. When not writing code, or talking about it, Liz loves riding bikes in places with better weather than her native London, competing in virtual races on Zwift, and making music under the pseudonym Insider Nine.
Stephen Augustus is a Black engineering director and leader in open source communities. He is the Head of Open Source at Cisco, working within the Strategy, Incubation, & Applications (SIA) organization. For Kubernetes, he has co-founded transformational elements of the project, including the KEP (Kubernetes Enhancements Proposal) process, the Release Engineering subproject, and Working Group Naming. Stephen has also previously served as a chair for both SIG PM and SIG Azure. He continues his work in Kubernetes as a Steering Committee member and a Chair for SIG Release. Across the wider LF (Linux Foundation) ecosystem, Stephen has the pleasure of serving as a member of the OpenSSF Governing Board and the OpenAPI Initiative Business Governing Board. Previously, he was a TODO Group Steering Committee member, a CNCF (Cloud Native Computing Foundation) TAG Contributor Strategy Chair, and one of the Program Chairs for KubeCon / CloudNativeCon, the cloud native community's flagship conference. He is a maintainer for the Scorecard and Dex projects, and a prolific contributor to CNCF projects, amongst the top 40 (as of writing) code/content committers, all-time. In 2020, Stephen co-founded the Inclusive Naming Initiative, a cross-industry group dedicated to helping projects and companies make consistent, responsible choices to remove harmful language across codebases, standards, and documentation. He has previously held positions at VMware (via Heptio), Red Hat, and CoreOS. Stephen is based in New York City.

De Nederlandse Kubernetes Podcast
#46: Exploring the Intersection of Minecraft, Kubernetes, and Education

Mar 26, 2024 · 33:33


[English episode] In episode 46 of our podcast, we dive into a fascinating conversation recorded at KubeCon + CloudNativeCon with two remarkable guests: Enrico Bartz of SVA GmbH, an open source and Minecraft enthusiast, and Jenny Bartz of MINNY.IO, a Minecraft Education Ambassador.
Enrico's journey in the IT world began in 2008, evolving from traditional operations to DevOps, in which he excels today. Jenny, a former IT specialist turned graphic designer, shifted her focus to education and aims to make a meaningful impact by combining traditional and digital learning.
Our conversation explores the intersection of Minecraft, Kubernetes and modern technology. Discover how a Kubernetes-native learning platform combined with Minecraft enables children aged 8 to 11 to become digital pioneers. Learn how hosting Minecraft servers and understanding programming concepts can spark a passion for technology.
We also explore the integration of the Kubernetes-based platform HobbyFarm with Minecraft education, turning learning into an adventurous, code-rich experience. This innovative approach inspires young developers and makes education engaging and immersive.
Join us on this unique adventure as we combine Minecraft, Kubernetes and coding education to unlock a world of endless possibilities and inspire the next generation of makers and innovators.
ebartz (Enrico Bartz) · GitHub
GitHub - minny-io/kubecon-paris-24
Minny.io – Spielbasiertes Lernen für Kinder

Next in Tech
Ep. 159 - KubeCon Preview

Mar 19, 2024 · 32:14


The spring edition of the Cloud Native Computing Foundation's KubeCon/CloudNativeCon is about to get underway, and Jean Atelsek and William Fellows return to discuss what will be playing out in the sessions and on the exhibit floor with host Eric Hanselman. The next stage of licensing pullback from a number of open source providers is on display as Open Core revenue models are realigned, FinOps efforts try to tame operational costs, and the evolution of platform engineering approaches continues.

The New Stack Podcast
Hello, GitOps -- Boeing's Open Source Push

Dec 12, 2023 · 19:14


Boeing, with around 6,000 engineers, is emphasizing open source engagement by focusing on three main themes, according to Damani Corbin, who heads Boeing's Open Source office. He joined our host, Alex Williams, for a discussion at KubeCon+CloudNativeCon in Chicago.
The first priority Corbin talks about is simplifying the consumption of open source software for developers. Second, Boeing aims to facilitate developer contributions to open source projects, fostering involvement in communities like the Cloud Native Computing Foundation and the Linux Foundation. The third theme involves identifying opportunities for "inner sourcing" to share internally developed solutions across different groups.
Boeing is actively working to break down barriers and encourage code reuse across the organization, promoting participation in open source initiatives. Corbin highlights the importance of separating business-critical components from those that can be shared with the community, prioritizing security and extending efforts to enhance open source security practices. The organization is consolidating its open source strategy by collaborating with legal and information security teams.
Corbin emphasizes the goal of making open source involvement accessible and attractive, with a phased approach to encourage meaningful contributions and, ultimately, enabling the compensation of engineers for open source work in the future.
Learn more from The New Stack about Boeing and CNCF open source projects:
How Boeing Uses Cloud Native
How Open Source Has Turned the Tables on Enterprise Software
Scaling Open Source Community by Getting Closer to Users
Mercedes-Benz: 4 Reasons to Sponsor Open Source Projects

Open at Intel
Cilium, eBPF and Beyond

Dec 7, 2023 · 22:58


In this podcast, Isovalent's Liz Rice discusses her involvement with several open source projects, such as the Cilium project and the eBPF platform. With Cilium now a graduated CNCF project, Liz explains its networking and security capabilities and how it benefits the cloud-native ecosystem. She also dives into eBPF and discusses the implications of AI. The talk concludes with an exploration of open source communities, recommendations on emerging trends in the open source world, and Liz's anticipation for the future of Cilium and the impact of eBPF.
00:00 Introduction and Guest Background
01:10 Understanding Cilium and its Role in Networking
02:15 Exploring the Origins and Impact of eBPF
04:21 Insights into the eBPF Summit and Community Events
08:00 The Role of Open Source in Technology Development
12:40 The Intersection of AI and Open Source
18:21 Future Developments in Cilium and Open Source
21:02 Conclusion and Final Thoughts
Guest:
Liz Rice is Chief Open Source Officer with eBPF specialists Isovalent, creators of the Cilium cloud native networking, security and observability project. She was Chair of the CNCF's Technical Oversight Committee in 2019-2022, and Co-Chair of KubeCon + CloudNativeCon in 2018. She is also the author of Container Security, published by O'Reilly. She has a wealth of software development, team, and product management experience from working on network protocols and distributed systems, and in digital technology sectors such as VOD, music, and VoIP. When not writing code, or talking about it, Liz loves riding bikes in places with better weather than her native London, competing in virtual races on Zwift, and making music under the pseudonym Insider Nine.
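As a rough illustration of what instrumenting the kernel with eBPF looks like in practice, the sketch below uses the BCC Python bindings to attach a tiny program to the execve syscall and print a line whenever any process on the host launches a new program. This is a hypothetical minimal example rather than code from the episode or from Cilium, and it assumes the bcc package, kernel headers and root privileges are available.

```python
# Minimal eBPF sketch with BCC: attach a kprobe to the execve syscall and
# emit a trace line for every program started anywhere on the host.
from bcc import BPF

BPF_PROGRAM = r"""
int trace_execve(struct pt_regs *ctx) {
    // Runs inside the kernel each time the probe fires.
    bpf_trace_printk("execve observed\n");
    return 0;
}
"""

b = BPF(text=BPF_PROGRAM)
# Resolve the architecture-specific kernel symbol for the execve syscall.
b.attach_kprobe(event=b.get_syscall_fnname("execve"), fn_name="trace_execve")

print("Tracing execve()... press Ctrl-C to stop")
b.trace_print()  # stream the lines written by bpf_trace_printk
```

Production tools such as Cilium do far more in the kernel (maps, CO-RE, verifier-friendly helpers), but the basic shape of attaching a small program to a kernel event is the same.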

Open at Intel
The Art of Open Source: A Conversation with Stephen Augustus

Dec 6, 2023 · 30:58


Stephen Augustus, the Head of Open Source at Cisco, shares his experiences and insights about contributing to and maintaining open source projects, including Kubernetes and OpenSSF Scorecard. Stephen highlights the importance of building sustainable practices and the value of having product, program, and project management skills in open source projects. Discussions delve into the inner workings of the Kubernetes project, the role and functionality of the OpenSSF Scorecard, and the process of incorporating new contributors and projects. He further emphasizes the importance of transparency and intentionality in corporations' involvement in open source projects.
00:00 Introduction and Guest Background
00:22 Stephen's Journey into Open Source and Kubernetes
05:41 The Success Factors of Kubernetes
06:09 Maintaining the Maintainers: The Balance of Work in Open Source
06:28 The Role of Corporations in Open Source
09:03 The Overwhelming Nature of Open Source Contribution
10:10 The Impact of Kubernetes on Other Open Source Projects
10:59 The Increasing Complexity in Full Stack Development
12:29 The Importance of Open Source Project Management
20:27 OpenSSF Scorecard
Guest:
Stephen Augustus is a Black engineering director and leader in open source communities. He is the Head of Open Source at Cisco, working within the Strategy, Incubation, & Applications (SIA) organization. For Kubernetes, he has co-founded transformational elements of the project, including the KEP (Kubernetes Enhancements Proposal) process, the Release Engineering subproject, and Working Group Naming. Stephen has also previously served as a chair for both SIG PM and SIG Azure. He continues his work in Kubernetes as a Steering Committee member and a Chair for SIG Release. Across the wider LF (Linux Foundation) ecosystem, Stephen has the pleasure of serving as a member of the OpenSSF Governing Board and the OpenAPI Initiative Business Governing Board. Previously, he was a TODO Group Steering Committee member, a CNCF (Cloud Native Computing Foundation) TAG Contributor Strategy Chair, and one of the Program Chairs for KubeCon / CloudNativeCon, the cloud native community's flagship conference. He is a maintainer for the Scorecard and Dex projects, and a prolific contributor to CNCF projects, amongst the top 40 (as of writing) code/content committers, all-time. In 2020, Stephen co-founded the Inclusive Naming Initiative, a cross-industry group dedicated to helping projects and companies make consistent, responsible choices to remove harmful language across codebases, standards, and documentation. He has previously held positions at VMware (via Heptio), Red Hat, and CoreOS. Stephen is based in New York City.

theCUBE Insights
Analyst ANGLE | KubeCon + CloudNativeCon NA 2023

Nov 8, 2023 · 18:09


theCUBE hosts John Furrier, Rob Strechay, Dustin Kirkland and Savannah Peterson wrap up our coverage of day one of KubeCon + CloudNativeCon NA 2023.

Techzine Talks
Kubernetes is no longer exciting (and that's a good thing)

May 8, 2023 · 36:09


Kubernetes is becoming more and more mainstream, even though it is still complex. That is one of the conclusions after KubeCon + CloudNativeCon, which the Cloud Native Computing Foundation recently organized in Amsterdam.
We were at the RAI in Amsterdam to immerse ourselves once again in the world of Kubernetes and cloud-native development. It was busier than ever this year. We heard that around 12,000 people attended, and also that tickets had sold out well before the event, so there was even more interest than that. Besides the official programme at the RAI, there are also all kinds of so-called off-sites, some under the CNCF flag and some not, part of which take place before the event formally begins. The topics discussed at KubeCon + CloudNativeCon clearly resonate.
It is less and less about Kubernetes
One of the observations from KubeCon + CloudNativeCon is that Kubernetes as a whole has matured another step. We recorded an episode of Techzine Talks about this last year, and the trend has continued over the past year. The sweet spot for the number of Kubernetes clusters per organization is going up, while the extreme outliers in cluster counts are becoming rarer. Research by VMware showed this, and Red Hat's findings on application modernization seemed to confirm it: rehosting (lift-and-shift) is becoming less popular in favour of replatforming and refactoring. You would not see this shift if there were not more knowledge and expertise around the possibilities of Kubernetes and cloud-native in general.
All in all, Kubernetes is becoming less and less exciting and interesting, CNCF CTO Chris Aniszczyk tells us in conversation. He sees the same movement we saw with Linux: huge conferences used to be organized around it, and now that is far less the case, even though Linux has only become more important in the meantime. The same will happen with Kubernetes.
Besides Kubernetes (which the event is less and less about), we spotted several trends at KubeCon that we discuss in this episode of Techzine Talks, including OTEL (OpenTelemetry), Wasm (WebAssembly) and, above all, IDPs (Internal Development Platforms) and the role Backstage plays in them. And of course we talk about the skills gap.

Screaming in the Cloud
Learning eBPF with Liz Rice

May 2, 2023 · 33:59


Liz Rice, Chief Open Source Officer at Isovalent, joins Corey on Screaming in the Cloud to discuss the release of her newest book, Learning eBPF, and the exciting possibilities that come with eBPF technology. Liz explains what got her so excited about eBPF technology, and what it was like to write a book while also holding a full-time job. Corey and Liz also explore the learning curve that comes with kernel programming, and Liz illustrates why it's so important to be able to explain complex technologies in simple terminology.
About Liz
Liz Rice is Chief Open Source Officer with eBPF specialists Isovalent, creators of the Cilium cloud native networking, security and observability project. She sits on the CNCF Governing Board, and on the Board of OpenUK. She was Chair of the CNCF's Technical Oversight Committee in 2019-2022, and Co-Chair of KubeCon + CloudNativeCon in 2018. She is also the author of Container Security, and Learning eBPF, both published by O'Reilly. She has a wealth of software development, team, and product management experience from working on network protocols and distributed systems, and in digital technology sectors such as VOD, music, and VoIP. When not writing code, or talking about it, Liz loves riding bikes in places with better weather than her native London, competing in virtual races on Zwift, and making music under the pseudonym Insider Nine.
Links Referenced:
Isovalent: https://isovalent.com/
Learning eBPF: https://www.amazon.com/Learning-eBPF-Programming-Observability-Networking/dp/1098135121
Container Security: https://www.amazon.com/Container-Security-Fundamental-Containerized-Applications/dp/1492056707/
GitHub for Learning eBPF: https://github.com/lizRice/learning-eBPF
Transcript
Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.
Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. Our returning guest today is Liz Rice, who remains the Chief Open Source Officer with Isovalent. But Liz, thank you for returning, suspiciously closely timed to when you have a book coming out. Welcome back.
Liz: [laugh]. Thanks so much for having me. Yeah, I've just—I've only had the physical copy of the book in my hands for less than a week. It's called Learning eBPF. I mean, obviously, I'm very excited.
Corey: It's an O'Reilly book; it has some form of honeybee on the front of it as best I can tell.
Liz: Yeah, I was really pleased about that. Because eBPF has a bee as its logo, so getting a [early 00:01:17] honeybee as the O'Reilly animal on the front cover of the book was pretty pleasing, yeah.
Corey: Now, this is your second O'Reilly book, is it not?
Liz: It's my second full book. So, I'd previously written a book on Container Security. And I've done a few short reports for them as well. But this is the second, you know, full-on, you can buy it on Amazon kind of book, yeah.
Corey: My business partner wrote Practical Monitoring for O'Reilly and that was such an experience that he got entirely out of observability as a field and ran running to AWS bills as a result. So, my question for you is, why would anyone do that more than once?
Liz: [laugh]. I really like explaining things. And I had a really good reaction to the Container Security book.
I think already, by the time I was writing that book, I was kind of interested in eBPF. And we should probably talk about what that is, but I'll come to that in a moment.Yeah, so I've been really interested in eBPF, for quite a while and I wanted to be able to do the same thing in terms of explaining it to people. A book gives you a lot more opportunity to go into more detail and show people examples and get them kind of hands-on than you can do in their, you know, 40-minute conference talk. So, I wanted to do that. I will say I have written myself a note to never do a full-size book while I have a full-time job because it's a lot [laugh].Corey: You do have a full-time job and then some. As we mentioned, you're the Chief Open Source Officer over at Isovalent, you are on the CNCF governing board, you're on the board of OpenUK, and you've done a lot of other stuff in the open-source community as well. So, I have to ask, taking all of that together, are you just allergic to things that make money? I mean, writing the book as well on top of that. I'm told you never do it for the money piece; it's always about the love of it. But it seems like, on some level, you're taking it to an almost ludicrous level.Liz: Yeah, I mean, I do get paid for my day job. So, there is that [laugh]. But so, yeah—Corey: I feel like that's the only way to really write a book is, in turn, to wind up only to just do it for—what someone else is paying you to for doing it, viewing it as a marketing exercise. It pays dividends, but those dividends don't, in my experience from what I've heard from everyone say, pay off as of royalties on book payments.Liz: Yeah, I mean, it's certainly, you know, not a bad thing to have that income stream, but it certainly wouldn't make you—you know, I'm not going to retire tomorrow on the royalty stream unless this podcast has loads and loads of people to buy the book [laugh].Corey: Exactly. And I'm always a fan of having such [unintelligible 00:03:58]. I will order it while we're on the call right now having this conversation because I believe in supporting the things that we want to see more of in the world. So, explain to me a little bit about what it is. Whatever you talking about learning X in a title, I find that that's often going to be much more approachable than arcane nonsense deep-dive things.One of the O'Reilly books that changed my understanding was Linux Kernel Internals, or Understanding the Linux Kernel. Understanding was kind of a heavy lift at that point because it got very deep very quickly, but I absolutely came away understanding what was going on a lot more effectively, even though I was so slow I needed a tow rope on some of it. When you have a book that started with learning, though, I imagined it assumes starting at zero with, “What's eBPF?” Is that directionally correct, or does it assume that you know a lot of things you don't?Liz: Yeah, that's absolutely right. I mean, I think eBPF is one of these technologies that is starting to be, particularly in the cloud-native world, you know, it comes up; it's quite a hot technology. What it actually is, so it's an acronym, right? EBPF. That acronym is almost meaningless now.So, it stands for extended Berkeley Packet Filter. But I feel like it does so much more than filtering, we might as well forget that altogether. And it's just become a term, a name in its own right if you like. And what it really does is it lets you run custom programs in the kernel so you can change the way that the kernel behaves, dynamically. 
And that is… it's a superpower. It's enabled all sorts of really cool things that we can do with that superpower.Corey: I just pre-ordered it as a paperback on Amazon and it shows me that it is now number one new release in Linux Networking and Systems Administration, so you're welcome. I'm sure it was me that put it over the top.Liz: Wonderful. Thank you very much. Yeah [laugh].Corey: Of course, of course. Writing a book is one of those things that I've always wanted to do, but never had the patience to sit there and do it or I thought I wasn't prolific enough, but over the holidays, this past year, my wife and business partner and a few friends all chipped in to have all of the tweets that I'd sent bound into a series of leather volumes. Apparently, I've tweeted over a million words. And… yeah, oh, so I have to write a book 280 characters at a time, mostly from my phone. I should tweet less was really the takeaway that I took from a lot of that.But that wasn't edited, that wasn't with an overall theme or a narrative flow the way that an actual book is. It just feels like a term paper on steroids. And I hated term papers. Love reading; not one to write it.Liz: I don't know whether this should make it into the podcast, but it reminded me of something that happened to my brother-in-law, who's an artist. And he put a piece of video on YouTube. And for unknowable reasons if you mistyped YouTube, and you spelt it, U-T-U-B-E, the page that you would end up at from Google search was a YouTube video and it was in fact, my brother-in-law's video. And people weren't expecting to see this kind of art movie about matches burning. And he just had the worst comment—like, people were so mean in the comments. And he had millions of views because people were hitting this page by accident, and he ended up—Corey: And he made the cardinal sin of never read the comments. Never break that rule. As soon as you do that, it doesn't go well. I do read the comments on various podcast platforms on this show because I always tell people to insulted all they want, just make sure you leave a five-star review.Liz: Well, he ended up publishing a book with these comments, like, one comment per page, and most of them are not safe for public consumption comments, and he just called it Feedback. It was quite something [laugh].Corey: On some level, it feels like O'Reilly books are a little insulated from the general population when it comes to terrible nonsense comments, just because they tend to be a little bit more expensive than the typical novel you'll see in an airport bookstore, and again, even though it is approachable, Learning eBPF isn't exactly the sort of title that gets people to think that, “Ooh, this is going to be a heck of a thriller slash page-turner with a plot.” “Well, I found the protagonist unrelatable,” is not sort of the thing you're going to wind up seeing in the comments because people thought it was going to be something different.Liz: I know. One day, I'm going to have to write a technical book that is also a murder mystery. I think that would be, you know, quite an achievement. 
But yeah, I mean, it's definitely aimed at people who have already come across the term, want to know more, and particularly if you're the kind of person who doesn't want to just have a hand-wavy explanation that involves boxes and diagrams, but if, like me, you kind of want to feel the code, and you want to see how things work and you want to work through examples, then that's the kind of person who might—I hope—enjoy working through the book and end up with a possible mental model of how eBPF works, even though it's essentially kernel programming.Corey: So, I keep seeing eBPF in an increasing number of areas, a bunch of observability tools, a bunch of security tools all tend to tie into it. And I've seen people do interesting things as far as cost analysis with it. The problem that I run into is that I'm not able to wind up deploying it universally, just because when I'm going into a client engagement, I am there in a purely advisory sense, given that I'm biasing these days for both SaaS companies and large banks, that latter category is likely going to have some problems if I say, “Oh, just take this thing and go ahead and deploy it to your entire fleet.” If they don't have a problem with that, I have a problem with their entire business security posture. So, I don't get to be particularly prescriptive as far as what to do with it.But if I were running my own environment, it is pretty clear by now that I would have explored this in some significant depth. Do you find that it tends to be something that is used primarily in microservices environments? Does it effectively require Kubernetes to become useful on day one? What is the onboard path where people would sit back and say, “Ah, this problem I'm having, eBPF sounds like the solution.”Liz: So, when we write tools that are typically going to be some sort of infrastructure, observability, security, networking tools, if we're writing them using eBPF, we're instrumenting the kernel. And the kernel gets involved every time our application wants to do anything interesting because whenever it wants to read or write to a file, or send receive network messages, or write something to the screen, or allocate memory, or all of these things, the kernel has to be involved. And we can use eBPF to instrument those events and do interesting things. And the kernel doesn't care whether those processes are running in containers, under Kubernetes, just running directly on the host; all of those things are visible to eBPF.So, in one sense, doesn't matter. But one of the reasons why I think we're seeing eBPF-based tools really take off in cloud-native is that you can, by applying some programming, you can link events that happened in the kernel to specific containers in specific pods in whatever namespace and, you know, get the relationship between an event and the Kubernetes objects that are involved in that event. And then that enables a whole lot of really interesting observability or security tools and it enables us to understand how network packets are flowing between different Kubernetes objects and so on. So, it's really having this vantage point in the kernel where we can see everything and we didn't have to change those applications in any way to be able to use eBPF to instrument them.Corey: When I see the stories about eBPF, it seems like it's focused primarily on networking and flow control. That's where I'm seeing it from a security standpoint, that's where I'm seeing it from cost allocation aspect. 
Because, frankly, out of the box, from a cloud provider's perspective, Kubernetes looks like a single-tenant application with a really weird behavioral pattern, and some of that crosstalk gets very expensive. Is there a better way than either using eBPF and/or VPC flow logs to figure out what's talking to what in the Kubernetes ecosystem, or is BPF really your first port of call?Liz: So, I'm coming from a position of perspective of working for the company that created the Cilium networking project. And one of the reasons why I think Cilium is really powerful is because it has this visibility—it's got a component called Hubble—that allows you to see exactly how packets are flowing between these different Kubernetes identities. So, in a Kubernetes environment, there's not a lot of point having network flows that talk about IP addresses and ports when what you really want to know is, what's the Kubernetes namespace, what's the application? Defining things in terms of IP addresses makes no sense when they're just being refreshed and renewed every time you change pods. So yeah, Kubernetes changes the requirements on networking visibility and on firewalling as well, on network policy, and that, I think, is you don't have to use eBPF to create those tools, but eBPF is a really powerful and efficient platform for implementing those tools, as we see in Cilium.Corey: The only competitor I found to it that gives a reasonable explanation of why random things are transferring multiple petabytes between each other in the middle of the night has been oral tradition, where I'm talking to people who've been around there for a while. It's, “So, I'm seeing this weird traffic pattern at these times a day. Any idea what that might be?” And someone will usually perk up and say, “Oh, is it—” whatever job that they're doing. Great. That gives me a direction to go in.But especially in this era of layoffs and as environments exist for longer and longer, you have to turn into a bit of a data center archaeologist. That remains insufficient, on some level. And some level, I'm annoyed with trying to understand or needing to use tooling like this that is honestly this powerful and this customizable, and yes, on some level, this complex in order to get access to that information in a meaningful sense. But on the other, I'm glad that that option is at least there for a lot of workloads.Liz: Yeah. I think, you know, that speaks to the power of this new generation of tooling. And the same kind of applies to security forensics, as well, where you might have an enormous stream of events, but unless you can tie those events back to specific Kubernetes identities, which you can use eBPF-based tooling to do, then how do you—the forensics job of tying back where did that event come from, what was the container that was compromised, it becomes really, really difficult. And eBPF tools—like Cilium has a sub-project called Tetragon that is really good at this kind of tying events back to the Kubernetes pod or whether we want to know what node it was running on what namespace or whatever. That's really useful forensic information.Corey: Talk to me a little bit about how broadly applicable it is. 
Because from my understanding from our last conversation, when you were on the show a year or so ago, if memory serves, one of the powerful aspects of it was very similar to what I've seen some of Brendan Gregg's nonsense doing in his kind of various talks where you can effectively write custom programming on the fly and it'll tell you exactly what it is that you need. Is this something that can be instrument once and then effectively use it for basically anything, [OTEL 00:16:11]-style, or instead, does it need to be effectively custom configured every time you want to get a different aspect of information out of it?Liz: It can be both of those things.Corey: “It depends.” My least favorite but probably the most accurate answer to hear.Liz: [laugh]. But I think Brendan did a really great—he's done many talks talking about how powerful BPF is and built lots of specific tools, but then he's also been involved with Bpftrace, which is kind of like a language for—a high-level language for saying what it is that you want BPF to trace out for you. So, a little bit like, I don't know, awk but for events, you know? It's a scripting language. So, you can have this flexibility.And with something like Bpftrace, you don't have to get into the weeds yourself and do kernel programming, you know, in eBPF programs. But also there's gainful employment to be had for people who are interested in that eBPF kernel programming because, you know, I think there's just going to be a whole range of more tools to come, you know>? I think we're, you know, we're seeing some really powerful tools with Cilium and Pixie and [Parker 00:17:27] and Kepler and many other tools and projects that are using eBPF. But I think there's also a whole load of more to come as people think about different ways they can apply eBPF and instrument different parts of an overall system.Corey: We're doing this over audio only, but behind me on my wall is one of my least favorite gifts ever to have been received by anyone. Mike, my business partner, got me a thousand-piece puzzle of the Kubernetes container landscape where—Liz: [laugh].Corey: This diagram is psychotic and awful and it looks like a joke, except it's not. And building that puzzle was maddening—obviously—but beyond that, it was a real primer in just how vast the entire container slash Kubernetes slash CNCF landscape really is. So, looking at this, I found that the only reaction that was appropriate was a sense of overwhelmed awe slash frustration, I guess. It's one of those areas where I spend a lot of time focusing on drinking from the AWS firehose because they have a lot of products and services because their product strategy is apparently, “Yes,” and they're updating these things in a pretty consistent cadence. Mostly. And even that feels like it's multiple full-time jobs shoved into one.There are hundreds of companies behind these things and all of them are in areas that are incredibly complex and difficult to go diving into. EBPF is incredibly powerful, I would say ridiculously so, but it's also fiendishly complex, at least shoulder-surfing behind people who know what they're doing with it has been breathtaking, on some level. How do people find themselves in a situation where doing a BPF deep dive make sense for them?Liz: Oh, that's a great question. So, first of all, I'm thinking is there an AWS Jigsaw as well, like the CNCF landscape Jigsaw? There should be. And how many pieces would it have? 
[It would be very cool 00:19:28].Corey: No, because I think the CNCF at one point hired a graphic designer and it's unclear that AWS has done such a thing because their icons for services are, to be generous here, not great. People have flashcards that they've built for is what services does logo represent? Haven't a clue, in almost every case because I don't care in almost every case. But yeah, I've toyed with the idea of doing it. It's just not something that I'd ever want to have my name attached to it, unfortunately. But yeah, I want someone to do it and someone else to build it.Liz: Yes. Yeah, it would need to refresh every, like, five minutes, though, as they roll out a new service.Corey: Right. Because given that it appears from the outside to be impenetrable, it's similar to learning VI in some cases, where oh, yeah, it's easy to get started with to do this trivial thing. Now, step two, draw the rest of the freaking owl. Same problem there. It feels off-putting just from a perspective of you must be at least this smart to proceed. How do you find people coming to it?Liz: Yeah, there is some truth in that, in that beyond kind of Hello World, you quite quickly start having to do things with kernel data structures. And as soon as you're looking at kernel data structures, you have to sort of understand, you know, more about the kernel. And if you change things, you need to understand the implications of those changes. So, yeah, you can rapidly say that eBPF programming is kernel programming, so why would anybody want to do it? The reason why I do it myself is not because I'm a kernel programmer; it's because I wanted to really understand how this is working and build up a mental model of what's happening when I attach a program to an event. And what kinds of things can I do with that program?And that's the sort of exploration that I think I'm trying to encourage people to do with the book. But yes, there is going to be at some point, a pretty steep learning curve that's kernel-related but you don't necessarily need to know everything in order to really have a decent understanding of what eBPF is, and how you might, for example—you might be interested to see what BPF programs are running on your existing system and learn why and what they might be doing and where they're attached and what use could that be.Corey: Falling down that, looking at the process table once upon a time was a heck of an education, one week when I didn't have a lot to do and I didn't like my job in those days, where, “Oh, what is this Avahi daemon that constantly running? MDNS forwarding? Who would need that?” And sure enough, that tickled something in the back of my mind when I wound up building out my networking box here on top of BSD, and oh, yeah, I want to make sure that I can still have discovery work from the IoT subnet over to whatever it is that my normal devices live. Ah, that's what that thing always running for. Great for that one use case. Almost never needed in other cases, but awesome. Like, you fire up a Raspberry Pi. It's, “Why are all these things running when I'm just want to have an embedded device that does exactly one thing well?” Ugh. Computers have gotten complicated.Liz: I know. It's like when you get those pop-ups on—well certainly on Mac, and you get pop-ups occasionally, let's say there's such and such a daemon wants extra permissions, and you think I'm not hitting that yes button until I understand what that daemon is. 
And it turns out, it's related, something completely innocuous that you've actually paid for, but just under a different name. Very annoying. So, if you have some kind of instrumentation like tracing or logging or security tooling that you want to apply to all of your containers, one of the things you can use is a sidecar container approach. And in Kubernetes, that means you inject the sidecar into every single pod. And—Corey: Yes. Of course, the answer to any Kubernetes problem appears to be have you tried running additional containers?Liz: Well, right. And there are challenges that can come from that. And one of the reasons why you have to do that is because if you want a tool that has visibility over that container that's inside the pod, well, your instrumentation has to also be inside the pod so that it has visibility because your pod is, by design, isolated from the host it's running on. But with eBPF, well eBPF is in the kernel and there's only one kernel, however many containers were running. So, there is no kind of isolation between the host and the containers at the kernel level.So, that means if we can instrument the kernel, we don't have to have a separate instance in every single pod. And that's really great for all sorts of resource usage, it means you don't have to worry about how you get those sidecars into those pods in the first place, you know that every pod is going to be instrumented if it's instrumented in the kernel. And then for service mesh, service mesh usually uses a sidecar as a Layer 7 Proxy injected into every pod. And that actually makes for a pretty convoluted networking path for a packet to sort of go from the application, through the proxy, out to the host, back into another pod, through another proxy, into the application.What we can do with eBPF, we still need a proxy running in userspace, but we don't need to have one in every single pod because we can connect the networking namespaces much more efficiently. So, that was essentially the basis for sidecarless service mesh, which we did in Cilium, Istio, and now we're using a similar sort of approach with Ambient Mesh. So that, again, you know, avoiding having the overhead of a sidecar in every pod. So that, you know, seems to be the way forward for service mesh as well as other types of instrumentation: avoiding sidecars.Corey: On some level, avoiding things that are Kubernetes staples seems to be a best practice in a bunch of different directions. It feels like it's an area where you start to get aligned with the idea of service meesh—yes, that's how I pluralize the term service mesh and if people have a problem with that, please, it's imperative you've not send me letters about it—but this idea of discovering where things are in a variety of ways within a cluster, where things can talk to each other, when nothing is deterministically placed, it feels like it is screaming out for something like this.Liz: And when you think about it, Kubernetes does sort of already have that at the level of a service, you know? Services are discoverable through native Kubernetes. There's a bunch of other capabilities that we tend to associate with service mesh like observability or encrypted traffic or retries, that kind of thing. But one of the things that we're doing with Cilium, in general, is to say, but a lot of this is just a feature of the networking, the underlying networking capability. 
So, for example, we've got next generation mutual authentication approach, which is using SPIFFE IDs between an application pod and another application pod. So, it's like the equivalent of mTLS.But the certificates are actually being passed into the kernel and the encryption is happening at the kernel level. And it's a really neat way of saying we don't need… we don't need to have a sidecar proxy in every pod in order to terminate those TLS connections on behalf of the application. We can have the kernel do it for us and that's really cool.Corey: Yeah, at some level, I find that it still feels weird—because I'm old—to have this idea of one shared kernel running a bunch of different containers. I got past that just by not requiring that [unintelligible 00:27:32] workloads need to run isolated having containers run on the same physical host. I found that, for example, running some stuff, even in my home environment for IoT stuff, things that I don't particularly trust run inside of KVM on top of something as opposed to just running it as a container on a cluster. Almost certainly stupendous overkill for what I'm dealing with, but it's a good practice to be in to start thinking about this. To my understanding, this is part of what AWS's Firecracker project starts to address a bit more effectively: fast provisioning, but still being able to use different primitives as far as isolation boundaries go. But, on some level, it's nice to not have to think about this stuff, but that's dangerous.Liz: [laugh]. Yeah, exactly. Firecracker is really nice way of saying, “Actually, we're going to spin up a whole VM,” but we don't ne—when I say ‘whole VM,' we don't need all of the things that you normally get in a VM. We can get rid of a ton of things and just have the essentials for running that Lambda or container service, and it becomes a really nice lightweight solution. But yes, that will have its own kernel, so unlike, you know, running multiple kernels on the same VM where—sorry, running multiple containers on the same virtual machine where they would all be sharing one kernel, with Firecracker you'll get a kernel per instance of Firecracker.Corey: The last question I have for you before we wind up wrapping up this episode harkens back to something you said a little bit earlier. This stuff is incredibly technically nuanced and deep. You clearly have a thorough understanding of it, but you also have what I think many people do not realize is an orthogonal skill of being able to articulate and explain those complex concepts simply an approachably, in ways that make people understand what it is you're talking about, but also don't feel like they're being spoken to in a way that's highly condescending, which is another failure mode. I think it is not particularly well understood, particularly in the engineering community, that there are—these are different skill sets that do not necessarily align congruently. Is this something you've always known or is this something you've figured out as you've evolved your career that, oh I have a certain flair for this?Liz: Yeah, I definitely didn't always know it. And I started to realize it based on feedback that people have given me about talks and articles I'd written. I think I've always felt that when people use jargon or they use complicated language or they, kind of, make assumptions about how things are, it quite often speaks to them not having a full understanding of what's happening. 
If I want to explain something to myself, I'm going to use straightforward language to explain it to myself [laugh] so I can hold it in my head. And I think people appreciate that.And you can get really—you know, you can get quite in-depth into something if you just start, step by step, build it up, explain everything as you go along the way. And yeah, I think people do appreciate that. And I think people, if they get lost in jargon, it doesn't help anybody. And yeah, I very much appreciate it when people say that, you know, they saw a talk or they read something I wrote and it meant that they finally grokked whatever that concept was that that I was trying to explain. I will say at the weekend, I asked ChatGPT to explain DNS in the style of Liz Rice, and it started off, it was basically, “Hello there. I'm Liz Rice and I'm here to explain DNS in very simple terms.” I thought, “Okay.” [laugh].Corey: Every time I think I've understood DNS, there's another level to it.Liz: I'm pretty sure there is a lot about DNS that I don't understand, yeah. So, you know, there's always more to learn out there.Corey: There's certainly is. I really want to thank you for taking time to speak with me today about what you're up to. Where's the best place for people to find you to learn more? And of course, to buy the book.Liz: Yeah, so I am Liz Rice pretty much everywhere, all over the internet. There is a GitHub repo that accompanies the books that you can find that on GitHub: lizRice/learning-eBPF. So, that's a good place to find some of the example code, and it will obviously link to where you can download the book or buy it because you can pay for it; you can also download it from Isovalent for the price of your contact details. So, there are lots of options.Corey: Excellent. And we will, of course, put links to that in the [show notes 00:32:08]. Thank you so much for your time. It's always great to talk to you.Liz: It's always a pleasure, so thanks very much for having me, Corey.Corey: Liz Rice, Chief Open Source Officer at Isovalent. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice, along with an angry comment that you have somehow discovered this episode by googling for knitting projects.Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.
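A recurring point in this conversation is that eBPF tooling becomes most useful once a kernel-level event can be tied back to a Kubernetes pod. Tools like Cilium's Tetragon resolve that relationship in the kernel; the sketch below is only a simplified userspace approximation that guesses a pod UID by parsing /proc/<pid>/cgroup. The regular expression and cgroup layout are assumptions that vary by container runtime and cgroup driver, so treat it as an illustration of the idea rather than how those tools actually work.

```python
# Rough userspace approximation: map a PID to a Kubernetes pod UID via its
# cgroup path. Real eBPF tools do this in-kernel and far more robustly;
# cgroup path formats differ between runtimes and cgroup drivers.
import os
import re
from pathlib import Path

# Pod UIDs often appear as "pod<uid>" segments in the cgroup path, sometimes
# with dashes replaced by underscores (an assumption, not a guarantee).
POD_UID_RE = re.compile(
    r"pod([0-9a-f]{8}[-_][0-9a-f]{4}[-_][0-9a-f]{4}[-_][0-9a-f]{4}[-_][0-9a-f]{12})"
)

def pod_uid_for_pid(pid: int) -> str | None:
    """Return the pod UID a process appears to belong to, or None."""
    try:
        cgroup_text = Path(f"/proc/{pid}/cgroup").read_text()
    except (FileNotFoundError, PermissionError):
        return None
    match = POD_UID_RE.search(cgroup_text)
    return match.group(1).replace("_", "-") if match else None

if __name__ == "__main__":
    print(pod_uid_for_pid(os.getpid()))  # None unless running inside a pod
```

The recovered UID can then be matched against pod metadata from the Kubernetes API to name the pod, namespace and labels behind an observed event.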

Roaring Elephant
Episode 346 – KubeCon 2023

Roaring Elephant

Play Episode Listen Later Apr 25, 2023 33:19


Hear some highlights of last week's KubeCon / CloudNativeCon 2023, which took place in Amsterdam. We did have some audio issues with this recording; our apologies for that!

KubeCon CloudNativeCon 2023 - Amsterdam

Please use the Contact Form on this blog or our Twitter feed to send us your questions, or to suggest future episode topics you would like us to cover.

De Nederlandse Kubernetes Podcast
#13 Terugblik KubeCon + CloudNativeCon Europe 2023

De Nederlandse Kubernetes Podcast

Play Episode Listen Later Apr 24, 2023 25:44


In this episode, Ronald and Jan look back on the biggest Kubernetes event in Europe, which took place last week in Amsterdam: KubeCon + CloudNativeCon 2023. They dig into the most notable developments, trends, and new Kubernetes features.

The New Stack Podcast
KubeCon + CloudNativeCon EU 2023: Hello Amsterdam

The New Stack Podcast

Play Episode Listen Later Apr 5, 2023 25:09


Hoi Europe and beyond! Once again it is time for cloud native enthusiasts and professionals to converge and discuss cloud native computing in all its efficiency and complexity. The Cloud Native Computing Foundation's KubeCon+CloudNativeCon 2023 is being held later this month in Amsterdam, April 18-21, at the RAI Convention Centre. In this latest edition of The New Stack podcast, we spoke with two of the event's co-chairs who helped define this year's themes for the show, which is expected to draw over 9,000 attendees: Aparna Subramanian, Shopify's Director of Production Engineering for Infrastructure, and Cloud Native Infra and Security Enterprise Architect Frederick Kautz.

Kubernetes Podcast from Google
Kubernetes Registry with Benjamin Elder

Kubernetes Podcast from Google

Play Episode Listen Later Feb 14, 2023 47:51


Benjamin Elder is a Senior Software Engineer at Google, a Kubernetes SIG Testing Chair and Tech Lead, and a Kubernetes Steering Committee member. In this episode we chat with Benjamin about the new Kubernetes registry migration from k8s.gcr.io to registry.k8s.io. We also had an opportunity to discuss the community, the various SIGs (Special Interest Groups) Benjamin is involved with, and the amount of work needed to drive the project forward.

Do you have something cool to share? Some questions? Let us know:
- web: kubernetespodcast.com
- mail: kubernetespodcast@google.com
- twitter: @kubernetespod

Chatter of the week
- Google Developer Experts program
- ChatGPT
- OpenAI Case Study
- Kubernetes Jobs API
- Job Tracking, to Support Massively Parallel Batch Workloads, Is GA in Kubernetes 1.26
- Stateful apps on Kubernetes
- Kelsey Hightower's take on Databases on Kubernetes Twitter space
- Kubernetes Resource Model

News of the week
- Linkerd published a 2022 recap
- The CNCF Cloud Native Maturity Model
- The CNCF Cloud Native Maturity Model website
- Using Amazon EKS with Google Workspace identities
- CNCF Ambassador 2.0 program
- Cloud Native Security Con NA 2023 (website - recordings)
- The CNCF important updates for KubeCon + CloudNativeCon 2023 and co-located events

Kubernetes 1.26 news: https://kubernetes.io/blog/
- Eviction policy for unhealthy pods guarded by PodDisruptionBudgets: https://kubernetes.io/blog/2023/01/06/unhealthy-pod-eviction-policy-for-pdbs/
- Retroactive Default StorageClass: https://kubernetes.io/blog/2023/01/05/retroactive-default-storage-class/
- Alpha support for cross-namespace storage data sources: https://kubernetes.io/blog/2023/01/02/cross-namespace-data-sources-alpha/
- Advancements in Kubernetes Traffic Engineering: https://kubernetes.io/blog/2022/12/30/advancements-in-kubernetes-traffic-engineering/
- Job Tracking, to Support Massively Parallel Batch Workloads, Is Generally Available: https://kubernetes.io/blog/2022/12/29/scalable-job-tracking-ga/
- CPUManager goes GA: https://kubernetes.io/blog/2022/12/27/cpumanager-ga/
- Pod Scheduling Readiness: https://kubernetes.io/blog/2022/12/26/pod-scheduling-readiness-alpha/
- Support for Passing Pod fsGroup to CSI Drivers At Mount Time: https://kubernetes.io/blog/2022/12/23/kubernetes-12-06-fsgroup-on-mount/
- GA Support for Kubelet Credential Providers: https://kubernetes.io/blog/2022/12/22/kubelet-credential-providers/
- Introducing Validating Admission Policies: https://kubernetes.io/blog/2022/12/20/validating-admission-policies-alpha/
- Device Manager graduates to GA: https://kubernetes.io/blog/2022/12/19/devicemanager-ga/
- Non-Graceful Node Shutdown Moves to Beta: https://kubernetes.io/blog/2022/12/16/kubernetes-1-26-non-graceful-node-shutdown-beta/
- Alpha API For Dynamic Resource Allocation: https://kubernetes.io/blog/2022/12/15/dynamic-resource-allocation/
- Windows HostProcess Containers Are Generally Available: https://kubernetes.io/blog/2022/12/13/windows-host-process-containers-ga/
- We're now signing our binary release artifacts!: https://kubernetes.io/blog/2022/12/12/kubernetes-release-artifact-signing/

Links from the interview
- Benjamin Elder: LinkedIn, GitHub, Twitter
- Kubernetes Steering Committee
- Kubernetes SIG Testing
- Kubernetes IN Docker (KIND)
- Benjamin on the podcast episode 96
- Paris Pittman: LinkedIn, Twitter
- Kubernetes registry move from k8s.gcr.io to registry.k8s.io
- Archeio is the tool used to redirect to GCR or S3 depending on the client
- The design of how requests are handled
- Doc detailing the background of this migration
- Kubernetes SIG Contributor Experience
- Kubernetes Slack channel
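The registry move discussed in the episode mostly comes down to finding and updating image references that still point at the old host. As a rough illustration only (not an official migration tool; the manifests/ directory is a placeholder path), a few lines of Python can flag manifests that still pull from k8s.gcr.io:

```python
# Sketch: report Kubernetes manifests that still reference the old registry.
from pathlib import Path

OLD_REGISTRY = "k8s.gcr.io"
NEW_REGISTRY = "registry.k8s.io"

for manifest in sorted(Path("manifests").rglob("*.yaml")):  # placeholder directory
    text = manifest.read_text()
    if OLD_REGISTRY in text:
        print(f"{manifest}: still references {OLD_REGISTRY}; "
              f"update image references to {NEW_REGISTRY}")
```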

The New Stack Podcast
Hazelcast and the Benefits of Real Time Data

The New Stack Podcast

Play Episode Listen Later Dec 28, 2022 14:31


In this latest podcast from The New Stack, we interview Manish Devgan, chief product officer for Hazelcast, which offers a real-time stream processing engine. This interview was recorded at KubeCon+CloudNativeCon, held last October in Detroit. "'Real time' means different things to different people, but it's really a business term," Devgan explained. In the business world, time is money, and the more quickly you can make a decision, using the right data, the more quickly one can take action. Although we have many "batch-processing" systems, the data itself rarely comes in batches, Devgan said. "A lot of times I hear from customers that are using a batch system, because those are the things which are available at that time. But data is created in real time sensors, your machines, espionage data, or even customer data — right when customers are transacting with you."

What is a Real Time Data Processing Engine?

A real-time data processing engine can analyze data as it is coming in from the source. This is different from traditional approaches that store the data first, then analyze it later. Bank loans are one example of this approach. With a real-time data processing engine in place, a bank can offer a loan to a customer using an automated teller machine (ATM) in real time, Devgan suggested. "As the data comes in, you can actually take action based on context of the data," he argued. Such a loan app may combine real-time data from the customer alongside historical data stored in a traditional database. Hazelcast can combine historical data with real-time data to make workloads like this possible. In this interview, we also debated the merits of Kafka, the benefits of using a managed service rather than running an application in house, Hazelcast's users, and features in the latest release of the Hazelcast platform.
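To make the ATM example concrete, here is a purely conceptual sketch in plain Python. It deliberately uses no Hazelcast APIs, and the accounts, thresholds, and events are invented; the point is only the pattern Devgan describes, which is reacting to an event as it arrives while consulting historical context:

```python
# Conceptual sketch: act on a streaming event using historical context.
from dataclasses import dataclass

@dataclass
class Withdrawal:
    account_id: str
    amount: float

# "Historical" data that would normally live in a store alongside the stream.
account_history = {
    "acct-123": {"avg_balance": 5200.0, "missed_payments": 0},  # made-up data
}

def on_event(event: Withdrawal) -> None:
    history = account_history.get(event.account_id)
    if history is None:
        return
    # Combine the real-time event with historical context to decide,
    # in the moment, whether to extend an offer (the ATM loan example).
    if history["avg_balance"] > 1000 and history["missed_payments"] == 0:
        print(f"{event.account_id}: offer a pre-approved loan at the ATM")

on_event(Withdrawal("acct-123", 200.0))
```

A stream processing engine does the same kind of join at scale, continuously and with low latency, instead of one event at a time in application code.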

The New Stack Podcast
Open Source Underpins A Home Furnishings Provider's Global Ambitions

The New Stack Podcast

Play Episode Listen Later Dec 1, 2022 16:03


Wayfair describes itself as “the destination for all things home: helping everyone, anywhere create their feeling of home.” It provides an online platform to acquire home furniture, outdoor decor and other furnishings. It also supports its suppliers so they can use the platform to sell their home goods, explained Natali Vlatko, global lead, open source program office (OSPO), and senior software engineering manager for Wayfair, as the featured guest in Detroit during KubeCon + CloudNativeCon North America 2022. “It takes a lot of technical, technical work behind the scenes to kind of get that going,” Vlatko said. This is especially true as Wayfair scales its operations worldwide. The infrastructure must be highly distributed, relying on containerization, microservices, Kubernetes, and especially, open source to get the job done. “We have technologists throughout the world, in North America and throughout Europe as well,” Vlatko said. “And we want to make sure that we are utilizing cloud native and open source, not just as technologies that fuel our business, but also as the ways that are great for us to work in now.” Open source has served as a “great avenue” for creating and offering technical services, and to accomplish that, Vlatko amassed the requisite talent, she said. Vlatko was able to assemble a small team of engineers to focus on platform work, advocacy, community management and, internally, on compliance with licenses. About five years ago, when Vlatko joined Wayfair, the company had yet to go “full tilt into going all cloud native,” Vlatko said. Wayfair had a hybrid mix of on-premise and cloud infrastructure. After decoupling from a monolith into a microservices architecture, “that journey really began where we understood the really great benefits of microservices and got to a point where we thought, ‘okay, this hybrid model for us actually would benefit our microservices being fully in the cloud,'” Vlatko said. In late 2020, Wayfair made the decision to “get out of the data centers” and shift operations to the cloud, which was completed in October, Vlatko said. The company culture is such that engineers have room to experiment without major fear of failure by doing a lot of development work in a sandbox environment. “We've been able to create production environments that are close to our production environments so that experimentation in sandboxes can occur. Folks can learn as they go without actually fearing failure or fearing a mistake,” Vlatko said. “So, I think experimentation is a really important aspect of our own learning and growth for cloud native. Also, coming to great events like KubeCon + CloudNativeCon and other events [has been helpful]. We're hearing from other companies who've done the same journey and process and are learning from the use cases.”

The New Stack Podcast
How Boeing Uses Cloud Native

The New Stack Podcast

Play Episode Listen Later Nov 23, 2022 12:03


In this latest podcast from The New Stack, we spoke with Ricardo Torres, who is the chief engineer of open source and cloud native for aerospace giant Boeing. Torres also joined the Cloud Native Computing Foundation in May to serve as a board member. In this interview, recorded at KubeCon+CloudNativeCon last month, Torres speaks about Boeing's use of open source software, as well as its adoption of cloud native technologies. While we may think of Boeing as an airplane manufacturer, it would be more accurate to think of the company as a large-scale system integrator, one that uses a lot of software. So, like other large-scale companies, Boeing sees a distinct advantage in maintaining good relations with the open source community. "Being able to leverage the best technologists out there in the rest of the world is of great value to us strategically," Torres said. This strategy allows Boeing to "differentiate on what we do as our core business rather than having to reinvent the wheel all the time on all of the technology." Like many other large companies, Boeing has created an open source office to better work with the open source community. Although Boeing is primarily a consumer of open source software, it still wants to work with the community. "We want to make sure that we have a strategy around how we contribute back to the open source community, and then leverage those learnings for inner sourcing," he said. Boeing also manages how it uses open source internally, keeping tight controls on the supply chain of open source software it uses. "As part of the software engineering organization, we partner with our internal IT organization to look at our internet traffic and assure nobody's going out and downloading directly from an untrusted repository or registry. And then we host instead; we have approved sources internally." It's not surprising that Boeing, which deals with a lot of government agencies, embraces the practice of using software bills of materials (SBOMs), which provide a full listing of what components are being used in a software system. In fact, the company has been working to extend the comprehensiveness of SBOMs, according to Torres. "I think one of the interesting things now is the automation," he said of SBOMs. "And so we're always looking to beef up the heuristics because a lot of the tools are relatively naïve, in that they trust that the dependencies that are specified are actually representative of everything that's delivered. And that's not good enough for a company like Boeing. We have to be absolutely certain that what's there is exactly what we expected to be there."

Cloud Native Computing

While Boeing builds many systems that reside in private data centers, the company is also increasingly relying on the cloud as well. Earlier this year, Boeing signed agreements with the three largest cloud service providers (CSPs): Amazon Web Services, Microsoft Azure and the Google Cloud Platform. "A lot of our cloud presence is about our development environments. And so, you know, we have cloud-based software factories that are using a number of CNCF and CNCF-adjacent technologies to enable our developers to move fast," Torres said.
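As a rough illustration of what consuming an SBOM can look like in practice (this is not Boeing's tooling; it assumes a CycloneDX-style JSON file with a top-level "components" array, and the file name and allow-list are placeholders), a short Python script can list components and flag anything that did not come from an approved source:

```python
# Sketch: read a CycloneDX-format SBOM and flag components from
# outside a hypothetical internal allow-list.
import json

ALLOWED_PREFIXES = ("pkg:golang/internal.example.com/",)  # hypothetical allow-list

with open("sbom.json") as f:          # placeholder file name
    sbom = json.load(f)

for component in sbom.get("components", []):
    name = component.get("name", "<unknown>")
    version = component.get("version", "<unknown>")
    purl = component.get("purl", "")
    ok = purl.startswith(ALLOWED_PREFIXES)
    status = "OK   " if ok else "CHECK"
    print(f"{status} {name}@{version} ({purl or 'no purl'})")
```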

The New Stack Podcast
OpenTelemetry Properly Explained and Demoed

The New Stack Podcast

Play Episode Listen Later Nov 15, 2022 18:16


The OpenTelemetry project offers vendor-neutral integration points that help organizations obtain the raw materials — the "telemetry" — that fuel modern observability tools, and with minimal effort at integration time. But what does OpenTelemetry mean for those who use their favorite observability tools but don't exactly understand how it can help them? How might OpenTelemetry be relevant to the folks who are new to Kubernetes (the majority of KubeCon attendees during the past years) and those who are just getting started with observability? Austin Parker, head of developer relations at Lightstep, and Morgan McLean, director of product management at Splunk, discuss during this podcast at KubeCon + CloudNativeCon 2022 how the OpenTelemetry project has created demo services to help cloud native community members better understand cloud native development practices and test out OpenTelemetry, as well as Kubernetes, observability software, etc. At this juncture in DevOps history, there has been considerable hype around observability for developers and operations teams, and more recently, much attention has been given to helping combine the different observability solutions out there in use through a single interface, and to that end, OpenTelemetry has emerged as a key standard. DevOps teams today need OpenTelemetry since they typically work with a lot of different data sources for observability processes, Parker said. "If you want observability, you need to transform and send that data out to any number of open source or commercial solutions and you need a lingua franca to be consistent. Every time I have a host, or an IP address, or any kind of metadata, consistency is key and that's what OpenTelemetry provides." Additionally, as a developer or an operator, OpenTelemetry serves to instrument your system for observability, McLean said. "OpenTelemetry does that through the power of the community working together to define those standards and to provide the components needed to extract that data among hundreds of thousands of different combinations of software and hardware and infrastructure that people are using," McLean said. Observability and OpenTelemetry, while conceptually straightforward, do require a learning curve to use. To that end, the OpenTelemetry project has released a demo to help. It is intended both to better understand cloud native development practices and to test out OpenTelemetry, as well as Kubernetes, observability software, etc., the project's creators say. The OpenTelemetry Demo v1.0 general release is available on GitHub and on the OpenTelemetry site. The demo helps with learning how to add instrumentation to an application to gather metrics, logs and traces for observability. There is heavy instruction for open source projects like Prometheus for Kubernetes and Jaeger for distributed tracing. It also shows how to acquaint yourself with tools such as Grafana to create dashboards. The demo also extends to scenarios in which failures are created and OpenTelemetry data is used for troubleshooting and remediation. The demo was designed for the beginner- or intermediate-level user, and can be set up to run on Docker or Kubernetes in about five minutes. "The demo is a great way for people to get started," Parker said. "We've also seen a lot of great uptake from our commercial partners as well, who have said 'we'll use this to demo our platform.'"
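For readers who want a feel for what "adding instrumentation" means in practice, here is a minimal sketch using the OpenTelemetry Python SDK. It assumes the opentelemetry-api and opentelemetry-sdk packages are installed; the service and span names are made up for illustration, and spans are simply printed to the console rather than exported to a backend:

```python
# Minimal manual-tracing sketch with the OpenTelemetry Python SDK.
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

# Wire up the SDK: a provider, a processor, and an exporter.
provider = TracerProvider(resource=Resource.create({"service.name": "checkout"}))
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("checkout-instrumentation")

def charge_card(amount: float) -> None:
    # Each unit of work becomes a span; attributes carry the consistent
    # metadata (the "lingua franca") that any backend can agree on.
    with tracer.start_as_current_span("charge-card") as span:
        span.set_attribute("payment.amount", amount)

if __name__ == "__main__":
    charge_card(42.50)
```

Swapping the console exporter for an OTLP exporter is what lets the same instrumentation feed whichever open source or commercial backend a team prefers.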

The New Stack Podcast
KubeCon+CloudNativeCon 2022 Rolls into Detroit

The New Stack Podcast

Play Episode Listen Later Oct 13, 2022 27:01


It's that time of the year again, when cloud native enthusiasts and professionals assemble to discuss all things Kubernetes. KubeCon+CloudNativeCon 2022 is being held later this month in Detroit, October 24-28. In this latest edition of The New Stack Makers podcast, we spoke with Priyanka Sharma, general manager of the Cloud Native Computing Foundation — which organizes KubeCon — and CERN computer engineer and KubeCon co-chair Ricardo Rocha. For this show, we discussed what we can expect from the upcoming event. This year, there will be a focus on Kubernetes in the enterprise, Sharma said. "We are reaching a point where Kubernetes is becoming the de facto standard when it comes to container orchestration. And there's a reason for it. It's not just about Kubernetes. Kubernetes spawned the cloud native ecosystem and the heart of the cloud native movement is building fast, resiliently observable software that meets customer needs. So ultimately, it's making you a better provider to your customers, no matter what kind of business you are." Of this year's topics, security will be a big theme, Rocha said. Technologies such as Falco and Cilium will be discussed. The Linux kernel add-on eBPF is popping up in a lot of topics, especially around networking. Observability and hybrid deployments also weigh heavily on the agenda. "The number of solutions [around hybrid] is quite large, so it's interesting to see what people come up with," he said. In addition to KubeCon itself, this year there are a number of co-located events, held during or before the conference itself. Some of them are hosted by CNCF, while others are hosted by other companies such as Canonical. They include Network Application Day, BackstageCon, CloudNative eBPF Day, CloudNativeSecurityCon, CloudNative WASM Day, Data-on-Kubernetes Day, EnvoyCon, gRPCConf, KNativeCon, Spinnaker Summit, Open Observability Day, Cloud Native Telco Day, Operator Day, and The Continuous Delivery Summit, among others. What's amazing is not only the number of co-located events, but the high quality of talks being held there. "Co-located events are a great way to know what's exciting to folks in the ecosystem right now," Sharma said. "Cloud native has really become the scaffolding of future progress. People want to build on cloud native, but have their own focus areas." WebAssembly (WASM) is a great example of this. "In the beginning, you wouldn't have thought of WebAssembly as part of the cloud native narrative, but here we are," Sharma said. "The same thinking from professionals who conceptualized cloud native in the beginning are now taking it a step further." "There's a lot of value in co-located events, because you get a group of people for a longer period in the same room, focusing on one topic," Rocha said. Other topics discussed in the podcast include the choice of Detroit as a conference hub, the fun activities that CNCF has planned in between the technical sessions, surprises at the keynotes, and so much more! Give it a listen.

Eficode
Why sidecar-less Cilium Mesh is a game-changer

Eficode

Play Episode Listen Later Aug 4, 2022 45:26


The Cilium project - best known as a networking plugin for Kubernetes - just released a service mesh functionality. We've invited Liz Rice for a technical conversation around Cilium Service Mesh. Liz Rice is Chief Open Source Officer with eBPF specialists Isovalent, creators of the Cilium cloud native networking, security and observability project. She was Chair of the CNCF's Technical Oversight Committee in 2019-2022, and Co-Chair of KubeCon + CloudNativeCon in 2018. She is also the author of Container Security, published by O'Reilly. Liz Rice in LinkedIn https://www.linkedin.com/in/lizrice A guided tour of Cilium Service Mesh: https://www.youtube.com/watch?v=e10kDBEsZw4 Guide: Make the most of DevOps and Cloud https://hubs.li/Q01jd2fZ0 The Future of Kubernetes - a blog post: https://hubs.li/Q01jd0ZD0 Takeaways and Trends in CloudNativeCon 2022 - a blog post: https://hubs.li/Q01jd0-x0

Console DevTools
eBPF, with Liz Rice (Isovalent) - S03E02

Console DevTools

Play Episode Listen Later Jun 16, 2022 32:23


In this episode we speak to Liz Rice, Chief Open Source Officer at Isovalent, the company behind the open source eBPF product Cilium. We discuss why it's such a revolutionary approach to developing low-level kernel applications, how BPF can be used for observability, networking and security, how developers should think about application security, and why all of these technologies are open source.

About Liz Rice
Liz Rice is Chief Open Source Officer at eBPF pioneers Isovalent, creators of the Cilium project, which provides cloud native networking, observability and security. Prior to Isovalent she was VP Open Source Engineering with security specialists Aqua Security. She is also Chair of the CNCF's Technical Oversight Committee, has co-chaired KubeCon / CloudNativeCon, and is an Ambassador for OpenUK.

Other things mentioned: Isovalent, Berkeley Lab, Dave Thaler, Kubernetes, Firecracker, Lambda, M1 MacBook, VS Code.

Let us know what you think on Twitter:
https://twitter.com/consoledotdev
https://twitter.com/davidmytton
https://twitter.com/lizrice
Or by email: hello@console.dev

About Console
Console is the place developers go to find the best tools. Our weekly newsletter picks out the most interesting tools and new releases. We keep track of everything - dev tools, devops, cloud, and APIs - so you don't have to. Sign up for free at: https://console.dev

Recorded: 2022-05-05.

Techzine Talks
OpenInfra Summit: Heeft OpenStack nog toekomst?

Techzine Talks

Play Episode Listen Later Jun 13, 2022 28:30


The OpenInfra Foundation is changing fundamentally. What does that mean for OpenStack? And how relevant is what the OIF does in Europe, and will it stay that way? A few weeks ago we attended the European edition of KubeCon + CloudNativeCon, run by the Cloud Native Computing Foundation (CNCF), in Valencia. This week it was the OpenInfra Foundation's (OIF) turn to bring us up to speed, during the OpenInfra Summit in Berlin, on everything it has in store for the open-source projects that fall under the foundation.

More than OpenStack

When you hear the name OpenInfra Foundation, it may not be immediately clear what we are talking about. The name is not that old: before 2021, the foundation went by the name OpenStack Foundation. The name change was a clear signal to the market. With it, the OIF indicated, and still indicates, that it has more in its portfolio than just OpenStack. Admittedly, for us the name OpenStack had not been a synonym for innovation and exciting developments for quite some time, so we understand the renaming very well. In fact, as far as we are concerned, the OIF could have done it a bit earlier.

We are not saying anything outrageous when we state that OpenStack has not quite become what people expected of it a decade or so ago. That is partly due to developments in the rest of the market, with the unstoppable and rapid rise of the hyperscalers. It is also partly due to the image that clings to OpenStack as an environment that is not exactly easy to set up and manage. You have to put quite a few people on it, something many organizations cannot or do not want to do.

That image is no longer entirely deserved these days, by the way. In 2022 it is a lot easier to get started with OpenStack than it was in the past. Like other open-source projects such as Kubernetes, however, it will never be dead simple. The operational side will always be a challenge with open source.

In any case, the name change has above all made it clear that there is more than OpenStack within the OIF. The CI/CD platform Zuul is the oldest project next to OpenStack; it celebrates its tenth anniversary this year.

In this podcast we dig into the developments around the OpenInfra Foundation. Will we still think of OpenStack in the future when we talk about the OpenInfra Foundation, or will it mainly be about the other projects? And can we expect a certain degree of integration between the various large(r) open-source communities?

Kubernetes Bytes
KubeCon + CloudNativeCon Europe 2022 Recap

Kubernetes Bytes

Play Episode Listen Later May 27, 2022 46:13


In this episode, Ryan and Bhavin talk about KubeCon + CloudNativeCon Europe 2022 and discuss all the vendor announcements from the past couple of weeks. KubeCon Europe had close to 7,500 attendees and shows a continuous increase in the adoption of containers and Kubernetes. Below, you can find links to the things discussed during the podcast:
- The State of Cloud-Native Development Report - Q3 2021 (came out in May 2022): https://www.cncf.io/wp-content/uploads/2022/05/Q3-2021-State-of-Cloud-Native-development_FINAL.pdf
- Akuity raises $20M Series A to take the Argo project to the next level: https://siliconangle.com/2022/05/16/kubernetes-startup-akuity-raises-20m-take-argo-project-next-level/
- Teleport raises $110M Series C at a $1.1B valuation: https://goteleport.com/blog/series-c/
- Snapt launches Nova: https://aithority.com/it-and-devops/cloud/snapt-announces-the-one-security-package-to-run-kubernetes-in-public-cloud/
- Kasten K10 v5: https://www.storagereview.com/news/kasten-k10-v5-0-offers-enhanced-kubernetes-security-and-more
- Datadog: https://containerjournal.com/news/news-releases/datadog-enhances-monitoring-and-security-for-kubernetes/
- Sysdig launches Sysdig Advisor: https://containerjournal.com/kubecon-cnc-eu-2022/sysdig-introduces-sysdig-advisor-to-drastically-simplify-kubernetes-troubleshooting/
- Red Hat open sources StackRox: https://techcrunch.com/2022/05/17/red-hat-open-sources-stackrox-the-kubernetes-security-platform-it-acquired-last-year
- Portworx - PDS and BaaS: https://portworx.com/blog/announcing-general-availability-of-portworx-data-services/ and https://portworx.com/blog/fast-and-simple-data-protection-with-portworx-backup-as-a-service/
- DataCore launches Bolt, based on OpenEBS after the MayaData acquisition: https://blocksandfiles.com/2022/05/18/datacore-bolt-kubernetes/
- Kubecost - 1-click Request Sizing to Automatically Optimize Kubernetes Clusters and Eliminate Wasted Spend: https://www.yahoo.com/now/kubecost-launches-1-click-request-060000724.html
- SUSE open sources NeuVector container security platform: https://containerjournal.com/features/suse-integrates-container-security-platform-with-rancher/
- Lacework: https://containerjournal.com/features/lacework-dives-deeper-into-kubernetes-security/
- NetApp Astra Data Store: https://blocksandfiles.com/2022/05/25/netapp-per-ardua-ad-as

CloudSkills.fm
145: Taylor Dolezal on the Importance of Community in Tech

CloudSkills.fm

Play Episode Listen Later May 5, 2022 56:33


In this episode Mike Pfeiffer chats with Taylor Dolezal about the importance of community in tech. Taylor specializes in Kubernetes, Terraform, cloud, and distributed systems, and currently serves as Head of Ecosystem at the CNCF.

In this episode we talk about:
- Taylor's career origin story as a developer, systems engineer, and technology advocate working at places like Disney and HashiCorp.
- Taylor's current work at the CNCF, focused on serving the cloud native end-user community, giving back through research & assets, and running events like KubeCon + CloudNativeCon.
- The importance of community in tech, including the challenges of building a community and how to get organizations to recognize the value.

Taylor Dolezal
LinkedIn: https://www.linkedin.com/in/onlydole/
Twitter: https://twitter.com/onlydole
CNCF Bio: https://www.cncf.io/people/staff/?p=taylor-dolezal

Mike Pfeiffer
https://linktr.ee/mikepfeiffer

The New Stack Podcast
KubeCon + CloudNativeCon 2022 Europe, in Valencia: Bring a Mask

The New Stack Podcast

Play Episode Listen Later Apr 26, 2022 29:21


Last week, the country of Spain dropped its mandate for residents and visitors to wear masks to ward off further infections of the Coronavirus. So, for this year's KubeCon + CloudNativeCon Europe conference, to be held May 16-20 in Valencia, Spain, the Cloud Native Computing Foundation dropped its own original mandate that attendees wear masks, a rule that had been in place for its other recent conferences. This turned out to be the wrong decision, CNCF admitted a week later. A lot of people who had already bought tickets were upset at this relaxing of the rules for the conference, which could put them in greater danger of contracting the disease. So the CNCF put the mandate back in place, and offered refunds for those who felt Spain's own decision would put them in harm's way. CNCF will even send you a week's worth of N95 masks if you request them. So, long story short: bring a mask to KubeCon. And, as always, it is still a requirement to show proof of vaccination, and temperature checks will be made as well. Tricky business running a conference in this time, no? In this latest episode of The New Stack Makers podcast, we take a look at what to expect from this year's KubeCon EU 2022. Our guests for this podcast are Priyanka Sharma, the executive director of CNCF, and Ricardo Rocha, who is a KubeCon co-chair and computer engineer at CERN. TNS Editor-in-chief Joab Jackson hosted this podcast. We recorded this podcast prior to the discussion around masks, and at the time, Sharma said that the CNCF based the mask ruling on Spain's own country-wide mandates. "So we are being very cautious with the health requirements for the event," she said. The conference team is also keeping an eye on Russia's aggressive moves in Ukraine, though it is unlikely that the chaos will reach all the way to Spain. Still, "this is why it's essential to always have the hybrid option .. [to] have the virtual elements sorted," Sharma said. As the CNCF flagship conference, KubeCon brings together managers and users of a wide variety of cloud native technologies, including containerd, CoreDNS, Envoy, etcd, Fluentd, Harbor, Helm, Istio, Jaeger, Kubernetes, Linkerd, Open Policy Agent, Prometheus, Rook, Vitess, Argo, CRI-O, Crossplane, dapr, Dragonfly, Falco, Flagger, Flux, gRPC, KEDA, SPIFFE, SPIRE, and Thanos, and many more. Most have been featured on TNS at one time or another. In this podcast, we also discuss what to expect from the virtual sessions at the conference, what to do in Valencia, the current state of Kubernetes, and we get some unofficial picks from Sharma and Rocha as to what keynotes not to miss and what sessions to attend. "The virtual option is great," Rocha said. "But I think the in-person conferences have their own value. And there's a lot to be gained about meeting people directly and exchanging ideas and going to these events on the side of the conference as well."

Screaming in the Cloud
Siphoning through the Acronyms with Liz Rice

Screaming in the Cloud

Play Episode Listen Later Mar 8, 2022 37:12


About LizLiz Rice is Chief Open Source Officer with cloud native networking and security specialists Isovalent, creators of the Cilium eBPF-based networking project. She is chair of the CNCF's Technical Oversight Committee, and was Co-Chair of KubeCon + CloudNativeCon in 2018. She is also the author of Container Security, published by O'Reilly.She has a wealth of software development, team, and product management experience from working on network protocols and distributed systems, and in digital technology sectors such as VOD, music, and VoIP. When not writing code, or talking about it, Liz loves riding bikes in places with better weather than her native London, and competing in virtual races on Zwift.Links: Isovalent: https://isovalent.com/ Container Security: https://www.amazon.com/Container-Security-Fundamental-Containerized-Applications/dp/1492056707/ Twitter: https://twitter.com/lizrice GitHub: https://github.com/lizrice Cilium and eBPF Slack: http://slack.cilium.io/ CNCF Slack: https://cloud-native.slack.com/join/shared_invite/zt-11yzivnzq-hs12vUAYFZmnqE3r7ILz9A TranscriptAnnouncer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.Corey: Today's episode is brought to you in part by our friends at MinIO the high-performance Kubernetes native object store that's built for the multi-cloud, creating a consistent data storage layer for your public cloud instances, your private cloud instances, and even your edge instances, depending upon what the heck you're defining those as, which depends probably on where you work. It's getting that unified is one of the greatest challenges facing developers and architects today. It requires S3 compatibility, enterprise-grade security and resiliency, the speed to run any workload, and the footprint to run anywhere, and that's exactly what MinIO offers. With superb read speeds in excess of 360 gigs and 100 megabyte binary that doesn't eat all the data you've gotten on the system, it's exactly what you've been looking for. Check it out today at min.io/download, and see for yourself. That's min.io/download, and be sure to tell them that I sent you.Corey: This episode is sponsored in part by our friends at Sysdig. Sysdig is the solution for securing DevOps. They have a blog post that went up recently about how an insecure AWS Lambda function could be used as a pivot point to get access into your environment. They've also gone deep in-depth with a bunch of other approaches to how DevOps and security are inextricably linked. To learn more, visit sysdig.com and tell them I sent you. That's S-Y-S-D-I-G dot com. My thanks to them for their continued support of this ridiculous nonsense.Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. One of the interesting things about hanging out in the cloud ecosystem as long as I have and as, I guess, closely tied to Amazon as I have been, is that you learned that you never quite are able to pronounce things the way that people pronounce them internally. In-house pronunciations are always a thing. 
My guest today is Liz Rice, the Chief Open Source Officer at Isovalent, and they're responsible for, among other things, the Cilium open-source project, which is around eBPF, which I can only assume is internally pronounced as ‘Ehbehpf'. Liz, thank you for joining me today and suffering my pronunciation slings and arrows.Liz: I have never heard ‘Ehbehpf' before, but I may have to adopt it. That's great.Corey: You also are currently—in a term that is winding down if I'm not misunderstanding—you were the co-chair of KubeCon and CloudNativeCon at the CNCF, and you are also currently on the technical oversight committee for the foundation.Liz: Yeah, yeah. I'm currently the chair, in fact, of the technical oversight committee.Corey: And now that Amazon has joined, I assumed that they had taken their horrible pronunciation habits, like calling AMIs ‘Ah-mies' and whatnot, and started spreading them throughout the ecosystem with wild abandon.Liz: Are we going to have to start calling CNCF ‘Ka'Nff' or something?Corey: Exactly. They're very frugal, by which I mean they never buy a vowel. So yeah, it tends to be an ongoing challenge. Joking and all the rest aside, let's start, I guess, at the macro view. The CNCF does an awful lot of stuff, where if you look at the CNCF landscape, for example, like, I think some of my jokes on the internet go a bit too far, but you look at this thing and last time I checked, there were something like four or 500 different players in various spaces.And it's a very useful diagram, don't get me wrong by any stretch of the imagination, but it also is one of those things that is so staggeringly vast that I've got a level with you on this one, given my old, ancient sysadmin roots, “The hell with it. I'm going to run some VMs in a three-tiered architecture just like grandma and grandpa used to do,” and call it good. Not really how the industry is evolved, but it's overwhelming.Liz: But that might be the right solution for your use case so, you know, don't knock it if it works.Corey: Oh, yeah. If it's a terrible architecture and it works, is it really that terrible of an architecture? One wonders.Liz: Yeah, yeah. I mean, I'm definitely not one of those people who thinks, you know, every solution has the same—you know, is solved by the same hammer, you know, all problems are not the same nail. So, I am a big fan of a lot of the CNCF projects, but that doesn't mean to say I think those are the only ways to deploy software. You know, there are plenty of things like Lambda are a really great example of something that is super useful and very applicable for lots of applications and for lots of development teams. Not necessarily the right solution for everything. And for other people, they need all the bells and whistles that something like Kubernetes gives them. You know, horses for courses.Corey: It's very easy for me to make fun of just about any company or service or product, but the thing that always makes me set that aside and get down to brass tacks has been, “Okay, great. You can build whatever you want. You can tell whatever glorious marketing narrative you wish to craft, but let's talk to a real customer because once we do that, then if you're solving a problem that someone is having in the wild, okay, now it's no longer just this theoretical exercise and PowerPoint. Now, let's actually figure out how things work when the rubber meets the road.”So, let's start, I guess, with… I'll leave it to you. 
Isovalent are the creators of the Cilium eBPF-based networking project.Liz: Yeah.Corey: And eBPF is the part of that I think I'm the most familiar with having heard the term. Would you rather start on the company side or on the eBPF side?Liz: Oh, I don't mind. Let's—why don't we start with eBPF? Yeah.Corey: Cool. So easy, ridiculous question. I know that it's extremely important because Brendan Gregg periodically gets on stage and tells amazing stories about this; the last time he did stuff like that, I went stumbling down into the rabbit hole of DTrace, and I have never fully regretted doing that, nor completely forgiven him. What is eBPF?Liz: So, it stands for extended Berkeley Packet Filter, and we can pretty much just throw away those words because it's not terribly helpful. What eBPF allows you to do is to run custom programs inside the kernel. So, we can trigger these programs to run, maybe because a network packet arrived, or because a particular function within the kernel has been called, or a tracepoint has been hit. There are tons of places you can attach these programs to, or events you can attach programs to.And when that event happens, you can run your custom code. And that can change the behavior of the kernel, which is, you know, great power and great responsibility, but incredibly powerful. So Brendan, for example, has done a ton of really great pioneering work showing how you can attach these eBPF programs to events, use that to collect metrics, and lo and behold, you have amazing visibility into what's happening in your system. And he's built tons of different tools for observing everything from, I don't know, memory use to file opens to—there's just endless, dozens and dozens of tools that Brendan, I think, was probably the first to build. And now this sort of new generations of eBPF-based tooling that are kind of taking that legacy, turning them into maybe more, going to say user-friendly interfaces, you know, with GUIs, and hooking them up to metrics platforms, and in the case of Cilium, using it for networking and hooking it into Kubernetes identities, and making the information about network flows meaningful in the context of Kubernetes, where things like IP addresses are ephemeral and not very useful for very long; I mean, they just change at any moment.Corey: I guess I'm trying to figure out what part of the stack this winds up applying to because you talk about, at least to my mind, it sounds like a few different levels all at once: You talk about running code inside of the kernel, which is really close to the hardware—it's oh, great. It's adventures in assembly is almost what I'm hearing here—but then you also talk about using this with GUIs, for example, and operating on individual packets to run custom programs. When you talk about running custom programs, are we talking things that are a bit closer to, “Oh, modify this one field of that packet and then call it good,” or are you talking, “Now, we launch Microsoft Word.”Liz: Much more the former category. 
So yeah, let's inspect this packet and maybe change it a bit, or send it to a different—you know, maybe it was going to go to one interface, but we're going to send it to a different interface; maybe we're going to modify that packet; maybe we're going to throw the packet on the floor because we don't—there's really great security use cases for inspecting packets and saying, “This is a bad packet, I do not want to see this packet, I'm just going to discard it.” And there's some, what they call ‘Packet of Death' vulnerabilities that have been mitigated in that way. And the real beauty of it is you just load these programs dynamically. So, you can change the kernel or on the fly and affect that behavior, just immediately have an effect.If there are processes already running, they get instrumented immediately. So, maybe you run a BPF program to spot when a file is opened. New processes, existing processes, containerized processes, it doesn't matter; they'll all be detected by your program if it's observing file open events.Corey: Is this primarily used from a security perspective? Is it used for—what are the common use cases for something like this?Liz: There's three main buckets, I would say: Networking, observability, and security. And in Cilium, we're kind of involved in some aspects of all those three things, and there are plenty of other projects that are also focusing on one or other of those aspects.Corey: This is where when, I guess, the challenge I run into the whole CNCF landscape is, it's like, I think the danger is when I started down this path that I'm on now, I realized that, “Oh, I have to learn what all the different AWS services do.” This was widely regarded as a mistake. They are not Pokémon; I do not need to catch them all. The CNCF landscape applies very similarly in that respect. What is the real-world problem space for which eBPF and/or things like Cilium that leverage eBPF—because eBPF does sound fairly low-level—that turn this into something that solves a problem people have? In other words, what is the problem that Cilium should be the go-to answer for when someone says, “I have this thing that hurts.”Liz: So, at one level, Cilium is a networking solution. So, it's Kubernetes CNI. You plug it in to provide connectivity between your applications that are running in pods. Those pods have to talk to each other somehow and Cilium will connect those pods together for you in a very efficient way. One of the really interesting things about eBPF and networking is we can bypass some of the networking stack.So, if we are running in containers, we're running our applications in containers in pods, and those pods usually will have their own networking namespace. And that means they've got their own networking stack. So, a packet that arrives on your machine has to go through the networking stack on that host machine, go across a virtual interface into your pod, and then go through the networking stack in that pod. And that's kind of inefficient. But with eBPF, we can look at the packet the moment it's come into the kernel—in fact in some cases, if you have the right networking interfaces, you can do it while it's still on the network interface card—so you look at that packet and say, “Well, I know what pod that's destined for, I can just send it straight there.” I don't have to go through the whole networking stack in the kernel because I already know exactly where it's going. And that has some real performance improvements.Corey: That makes sense. 
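To make the packet-dropping idea concrete, here is a minimal sketch (not Cilium's actual datapath) that uses BCC's Python bindings to attach an XDP program which discards ICMP packets before the kernel's networking stack ever sees them. It assumes BCC is installed, root privileges, and "eth0" is just a placeholder interface name:

```python
#!/usr/bin/env python3
# Sketch: drop ICMP at the XDP hook with an eBPF program written in C,
# compiled and loaded via BCC.
import time
from bcc import BPF

prog = r"""
#define KBUILD_MODNAME "xdp_drop_icmp"
#include <uapi/linux/bpf.h>
#include <linux/if_ether.h>
#include <linux/ip.h>
#include <linux/in.h>

int drop_icmp(struct xdp_md *ctx) {
    void *data     = (void *)(long)ctx->data;
    void *data_end = (void *)(long)ctx->data_end;

    // Bounds checks like these are what keep the verifier happy.
    struct ethhdr *eth = data;
    if ((void *)(eth + 1) > data_end)
        return XDP_PASS;
    if (eth->h_proto != htons(ETH_P_IP))
        return XDP_PASS;

    struct iphdr *ip = (void *)(eth + 1);
    if ((void *)(ip + 1) > data_end)
        return XDP_PASS;

    // "Throw the packet on the floor" before the network stack sees it.
    if (ip->protocol == IPPROTO_ICMP)
        return XDP_DROP;

    return XDP_PASS;
}
"""

device = "eth0"  # placeholder interface
b = BPF(text=prog)
fn = b.load_func("drop_icmp", BPF.XDP)
b.attach_xdp(device, fn, 0)

print(f"Dropping ICMP on {device}; press Ctrl-C to stop")
try:
    while True:
        time.sleep(1)
except KeyboardInterrupt:
    pass
finally:
    b.remove_xdp(device, 0)  # detaching immediately restores normal behavior
```

Detaching the program restores normal traffic handling immediately, which is the dynamic load-and-unload property described above.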
In my explorations—we'll call it—with Kubernetes, it feels like the universe—at least at the time I went looking into it—was, “Step One, here's how to wind up launching Kubernetes to run a blog.” Which is a bit like using a chainsaw to wind up cutting a sandwich. Okay, massively overpowered but I get the basic idea, like, “Okay, what's project Step Two?” It's like, “Oh, great. Go build Google.”Liz: [laugh].Corey: Okay, great. It feels like there's some intermediary steps that have been sort of glossed over here. And at the small-scale that I kicked the tires on, things like networking performance never even entered the equation; it was more about get the thing up and running. But yeah, at scale, when you start seeing huge numbers of containers being orchestrated across a wide variety of hosts that has serious repercussions and explains an awful lot. Is this the sort of thing that gets leveraged by cloud providers themselves, is it something that gets built in mostly on-prem environments, or is it something that rides in, almost, user-land for most of these use cases that customers coming to bringing to those environments? I'm sorry, users, not customers. I'm too used to the Amazonian phrasing of everyone as a customer. No, no, they are users in an open-source project.Liz: [laugh]. Yeah, so if you're using GKE, the GKE Dataplane V2 is using Cilium. Alibaba Cloud uses Cilium. AWS is using Cilium for EKS Anywhere. So, these are really, I think, great signals that it's super scalable.And it's also not just about the connectivity, but also about being able to see your network flows and debug them. Because, like you say, that day one, your blog is up and running, and day two, you've got some DNS issue that you need to debug, and how are you going to do that? And because Cilium is working with Kubernetes, so it knows about the individual pods, and it's aware of the IP addresses for those pods, and it can map those to, you know, what's the pod, what service is that pod involved with. And we have a component of Cilium called Hubble that gives you the flows, the network flows, between services. So, you know, we've probably all seen diagrams showing Service A talking to Service B, Service C, some external connectivity, and Hubble can show you those flows between services and the outside world, regardless of how the IP addresses may be changing underneath you, and aggregating network flows into those services that make sense to a human who's looking at a Kubernetes deployment.Corey: A running gag that I've had is that one of the drawbacks and appeals of Kubernetes, all at once, is that it lets you cosplay as a cloud provider, even if you don't happen to work for one of them. And there's a bit of truth to it, but let's be serious here, despite what a lot of the cloud providers would wish us to believe via a bunch of marketing, there's a tremendous number of data center environments out there, hybrid environments, and companies that are in those environments are not somehow laggards, or left behind technologically, or struggling to digitally transform. Believe it or not—I know it's not a common narrative—but large companies generally don't employ people who lack critical thinking skills and strategic insight. There's usually a reason that things are the way that they are and when you don't understand that my default approach is that, oh context that gets missing, so I want to preface this with the idea there is nothing wrong in those environments. 
But in a purely cloud-native environment—which means that I'm very proud about having no single points of failure as I have everything routing to a single credit card that pays the cloud providers—great. What is the story for Cilium if I'm using, effectively, the managed Kubernetes options that Name Any Cloud Provider will provide for me these days? Is it at that point no longer for me or is it something that instead expresses itself in ways I'm not seeing, yet?Liz: Yeah, so I think, as an open-source project—and it is the only CNI that's at incubation level or beyond, so you know, it's CNCF-supported networking solution; you can use it out of the box, you can use it for your tiny blog application if you've decided to run that on Kubernetes, you can do so—things start to get much more interesting at scale. I mean, that… continuum between you know, there are people purely on managed services, there are people who are purely in the cloud, hybrid cloud is a real thing, and there are plenty of businesses who have good reasons to have some things in their own data centers, something's in the public cloud, things distributed around the world, so they need connectivity between those. And Cilium will solve a lot of those problems for you in the open-source, but also, if you're telco scale and you have things like BGP networks between your data centers, then that's where the paid versions of Cilium, the enterprise versions of Cilium, can help you out. And, as Isovalent, that's our business model to have, like—we fully support or we contribute a lot of resources into the open-source Cilium, and we want that to be the best networking solution for anybody, but if you are an enterprise who wants those extra bells and whistles, and the kind of scale that, you know, a telco, or a massive retailer, or a large media organization, or name your vertical, then we have solutions for that as well. And I think it was one of the really interesting things about the eBPF side of it is that, you know, we're not bound to just Kubernetes, you know? We run in the kernel, and it just so happens that we have that Kubernetes interface for allocating IP addresses to endpoints that happened to be pods. But—Corey: So, back to my crappy pile of VMs—because the hell with all this newfangled container nonsense—I can still benefit from something like Cilium?Liz: Exactly, yeah. And there's plenty of people using it for just load-balancing, which, why not have an eBPF-based high-performance load balancer?Corey: Hang on, that's taking me a second to work my way through. What is the programming language for eBPF? It is something custom?Liz: Right. So, when you load your BPF program into the kernel, it's in the form of eBPF bytecode. There are people who write an eBPF bytecode by hand; I am not one of those people.Corey: There are people who used to be able to write Sendmail configs without running through the M four preprocessor, and I don't understand those people either.Liz: [laugh]. So, our choices are—well, it has to be a language that can be compiled into that bytecode, and at the moment, there are two options: C, and more recently, Rust. So, the C code, I'm much more familiar with writing BPF code in C, it's slightly limited. 
So, because these BPF programs have to be safe to run, they go through a verification process which checks that you're not going to crash the kernel, that you're not going to end up in some hardware loop, and basically make your machine completely unresponsive, we also have to know that BPF programs, you know, they'll only access memory that they're supposed to and that they can't mess up other processes. So, there's this BPF verification step that checks for example that you always check that a pointer isn't nil before you dereference it.And if you try and use a pointer in your C code, it might compile perfectly, but when you come to load it into the kernel, it gets rejected because you forgot to check that it was non-null before.Corey: You try and run it, the whole thing segfaults, you see the word ‘fault' there and well, I guess blameless just went out the window there.Liz: [laugh]. Well, this is the thing: You cannot segfault in the kernel, you know, or at least that's a bad [day 00:19:11]. [laugh].Corey: You say that, but I'm very bad with computers, let's be clear here. There's always a way to misuse things horribly enough.Liz: It's a challenge. It's pretty easy to segfault if you're writing a kernel module. But maybe we should put that out as a challenge for the listener, to try to write something that crashes the kernel from within an eBPF because there's a lot of very smart people.Corey: Right now the blood just drained from anyone who's listening, in the kernel space or the InfoSec space, I imagine.Liz: Exactly. Some of my colleagues at Isovalent are thinking, “Oh, no. What's she brought on here?” [laugh].Corey: What have you done? Please correct me if I'm misunderstanding this. So, eBPF is a very low-level tool that requires certain amounts of braining in order [laugh] to use appropriately. That can be a heavy lift for a lot of us who don't live in those spaces. Cilium distills this down into something that is all a lot more usable and understandable for folks, and then beyond that, you wind up with Isovalent, that winds up effectively productizing and packaging this into something that becomes a lot more closer to turnkey. Is that directionally accurate?Liz: Yes, I would say that's true. And there are also some other intermediate steps, like the CLI tools that Brendan Gregg did, where you can—I mean, a CLI is still fairly low-level, but it's not as low-level as writing the eBPF code yourself. And you can be quite in-dep—you know, if you know what things you want to observe in the kernel, you don't necessarily have to know how to write the eBPF code to do it, but if you've got these fairly low-level tools to do it. You're absolutely right that very few people will need to write their own… BPF code to run in the kernel.Corey: Let's move below the surface level of awareness; the same way that most of us don't need to know how to compile our own kernel in this day and age.Liz: Exactly.Corey: A few people very much do, but because of their hard work, the rest of us do not.Liz: Exactly. And for most of us, we just take the kernel for granted. You know, most people writing applications, it doesn't really matter if—they're just using abstractions that do things like open files for them, or create network connections, or write messages to the screen, you don't need to know exactly how that's accomplished through the kernel. Unless you want to get into the details of how to observe it with eBPF or something like that.Corey: I'm much happier not knowing some of the details. 
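Here is a small sketch of the null check described above, again using BCC's Python bindings (assuming BCC is installed and root privileges). The program counts execve() calls per process ID in a BPF hash map, and the verifier will refuse to load it if the map lookup result is dereferenced without first checking that it is not null:

```python
#!/usr/bin/env python3
# Sketch: count execve() calls per PID, illustrating the verifier's
# insistence on checking map lookups before dereferencing them.
import time
from bcc import BPF

prog = r"""
#include <uapi/linux/ptrace.h>

BPF_HASH(exec_counts, u32, u64);

int count_exec(struct pt_regs *ctx) {
    u32 pid = bpf_get_current_pid_tgid() >> 32;
    u64 *val = exec_counts.lookup(&pid);

    // Without this null check, the C still compiles, but the kernel
    // rejects the program at load time.
    if (val) {
        (*val)++;
    } else {
        u64 one = 1;
        exec_counts.update(&pid, &one);
    }
    return 0;
}
"""

b = BPF(text=prog)
b.attach_kprobe(event=b.get_syscall_fnname("execve"), fn_name="count_exec")

print("Counting execve() calls for 10 seconds...")
time.sleep(10)

for pid, count in sorted(b["exec_counts"].items(),
                         key=lambda kv: kv[1].value, reverse=True):
    print(f"pid {pid.value}: {count.value} exec(s)")
```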
I did a deep dive once into Linux system kernel internals, based on an incredibly well-written but also obnoxiously slash suspiciously thick O'Reilly book, Linux Systems Internalsand it was one of those, like, halfway through, “Can I please be excused? My brain is full.” It's one of those things that I don't use most of it on a day-to-day basis, but it's solidified by understanding of what the computer is actually doing in a way that I will always be grateful for.Liz: Mmm, and there are tens of millions of lines of code in the Linux kernel, so anyone who can internalize any of that is basically a superhero. [laugh].Corey: I have nothing but respect for people who can pull that off.Corey: Couchbase Capella Database-as-a-Service is flexible, full-featured and fully managed with built in access via key-value, SQL, and full-text search. Flexible JSON documents aligned to your applications and workloads. Build faster with blazing fast in-memory performance and automated replication and scaling while reducing cost. Capella has the best price performance of any fully managed document database. Visit couchbase.com/screaminginthecloud to try Capella today for free and be up and running in three minutes with no credit card required. Couchbase Capella: make your data sing.In your day job, quote-unquote—which is sort of a weird thing to say, given that you are working at an open-source company; in fact, you are the Chief Open Source Officer, so what you're doing in the community, what you're exploring on the open-source project side of things, it is all interrelated. I tend to have trouble myself figuring out where my job starts and stops most weeks; I'm sympathetic to it. What inspired you folks to launch a company that is, “Ah, we're going to be in the open-source space?” Especially during a time when there's been a lot of pushback, in some respects, about the evolution of open-source and the rise of large cloud providers, where is open-source a viable strategy or a tactic to get to an outcome that is pleasing for all parties?Liz: Mmm. So, I wasn't there at the beginning, for the Isovalent journey, and Cilium has been around for five or six years, now, at this point. I very strongly believe in open-source as an effective way of developing technology—good technology—and getting really good feedback and, kind of, optimizing the speed at which you can innovate. But I think it's very important that businesses don't think—if you're giving away your code, you cannot also sell your code; you have to have some other thing that adds value. Maybe that's some extra code, like in the Isovalent example, the enterprise-related enhancements that we have that aren't part of the open-source distribution.There's plenty of other ways that people can add value to open-source. They can do training, they can do managed services, there's all sorts of different—support was the classic example. But I think it's extremely important that businesses don't just expect that I can write a bunch of open-source code, and somehow magically, through building up a whole load of users, I will find a way to monetize that.Corey: A bunch of nerds will build my product for me on nights and weekends. Yeah, that's a bit of an outmoded way of thinking about these things.Liz: Yeah exactly. And I think it's not like everybody has perfect ability to predict the future and you might start a business—Corey: And I have a lot of sympathy for companies who originally started with the idea of, “Well, we are the project leads. 
We know this code the best, therefore we are the best people in the world to run this as a service." The rise of the hyperscale cloud providers has called that into significant question. And I feel for them because it's difficult to completely pivot your business model when you're already a publicly-traded company. That's a very fraught and challenging thing to do. It means that you're left with a bunch of options, none of them great.

Cilium as a project is not that old, and neither is Isovalent, but it's new enough in the iterative process that you were able to avoid that particular pitfall. Instead, you're looking at some level of making this understandable and useful to humans, almost to the point where it disappears from their level of awareness that they need to think about. There's huge value in something like that. Do you think that there is a future in which projects, and companies built upon projects, that follow this model are similarly going to be having challenges with hyperscale cloud providers, or other emergent threats to the ecosystem—sorry, 'threat' is an unfair and unkind word here—but changes to the ecosystem, as we see the world evolving in ways that most of us did not foresee?

Liz: Yeah, we've certainly seen some examples in the last year or two, I guess, of companies that maybe didn't anticipate, and who necessarily has a crystal ball to anticipate how cloud providers might use their software? And I think in some cases, the cloud providers have not always been the most generous or most community-minded in their approach to how they've done that. But I think for a company like Isovalent, our strong point is talent. It would be extremely rare to find the level of expertise in, you know, what is a pretty specialized area. You know, the people at Isovalent who are working on Cilium are also working on eBPF itself, and that level of expertise is, I think, pretty unrivaled.

So, we're in such a new space with eBPF; we've only in the last year or so got to the point where pretty much everyone is running a kernel that's new enough to use eBPF. Startups do have a kind of agility that I think gives them an advantage, which I hope we'll be able to capitalize on. I think sometimes when businesses get upset about their code being used, they probably could have anticipated it. You know, if it's open-source, people will use your software, and you have to think of that.

Corey: "What do you mean you're using the thing we gave away for free and you're not paying us to use it?"

Liz: Yeah.

Corey: "Uh, did you hear what you just said?" Some of this was predictable, let's be fair.

Liz: Yeah, and I think you really have to, as a responsible business, think about, well, what does happen if they use all the open-source code? You know, is that a problem? And as far as we're concerned, everybody using Cilium is a fantastic… thing. We fully welcome everyone using Cilium as their data plane because the vast majority of them would use that open-source code, and that would be great, but there will be people who need the extra features and the expertise that I think we're in a unique position to provide. So, I joined Isovalent just about a year ago, and I did that because I believe in the technology, I believe in the company, I believe in, you know, the foundations that it has in open-source.

It's very much an open-source-first organization, which I love, and that resonates with me and how I think we can be successful. So, you know, I don't have that crystal ball. I hope I'm right, we'll find out.
We should do this again, you know, a couple of years and see how that's panning out. [laugh].

Corey: I'll book out the date now.

Liz: [laugh].

Corey: Looking back at our conversation just now, you talked about open-source, and business strategy and how that's going to be evolving. We talked about the company, we talked about an incredibly in-depth, technical product that honestly goes significantly beyond my current level of technical awareness. And at no point in any of those aspects of the conversation did you talk about it in a way that I did not understand, nor did you come off in any way as condescending. In fact, you wrote an O'Reilly book on Container Security that's written very much the same way. How did you learn to do that? Because it is, frankly, an incredibly rare skill.

Liz: Oh, thank you. Yeah, I think I have never been a fan of jargon. I've never liked it when people use a complicated acronym, or really early days in my career, there was a bit of a running joke about how everything was TLAs. And you think, well, I understand why we use an acronym to shorten things, but I don't think we need to assume that everybody knows what everything stands for. Why can't we explain things in simple language? Why can't we just use ordinary terms?

And I found that really resonates. You know, if I'm doing a presentation or if I'm writing something, using straightforward language and explaining things, making sure that people understand the, kind of, fundamentals that I'm going to build my explanation on. I just think that has a—it results in people understanding, and that's my whole point. I'm not trying to explain something to—you know, my goal is that they understand it, not that they've been blown away by some kind of magic. I want them to go away going, "Ah, now I understand how this bit fits with that bit," or, "How this works." You know?

Corey: The reason I bring it up is that it's an incredibly undervalued skill because when people see it, they don't often recognize it for what it is. Because when people don't have that skill—which is common—people just write it off as oh, that person's a bad communicator. Which I think is a little unfair. Being able to explain complex things simply is one of the most valuable yet undervalued skills that I've found in this entire space.

Liz: Yeah, I think people sometimes have this sort of wrong idea that vocabulary and complicated terms are somehow inherently smarter. And if you use complicated words, you sound smarter. And I just don't think that's accessible, and I don't think it's true. And sometimes I find myself listening to someone, and they're using complicated terms or analogies that are really obscure, and I'm thinking, but could you explain that to me in words of one syllable? I don't think you could. I think you're… hiding—not you [laugh]. You know, people—

Corey: Yeah. No, no, that's fair. I'll take the accusation as [unintelligible 00:31:24] as I can get it.

Liz: [laugh]. But I think people hide behind complex words because they don't really understand them sometimes. And yeah, I would rather people understood what I'm saying.

Corey: To me—I've done it through conference talks, but the way I generally learn things is by building something with them. But the way I really learn to understand something is I give a conference talk on it because, okay, great. I can now explain Git—which was one of my early technical talks—to folks who built Git. Great. Now, how about I explain it to someone who is not immersed in the space whatsoever?
And if I can make it that accessible, great, then I've succeeded. It's a lot harder than it looks.

Liz: Yeah, exactly. And one of the reasons why I enjoy building a talk is because I know I've got a pretty good understanding of this, but by the time I've got this talk nailed, I will know this. I might have forgotten it in six months time, you know, but [laugh] while I'm giving that talk, I will have a really good understanding of that because the way I want to put together a talk, I don't want to put anything in a talk that I don't feel I could explain. And that means I have to understand how it works.

Corey: It's funny, this whole don't give talks about things you don't understand seems like there's really a nouveau concept, but here we are, we're [working on it 00:32:40].

Liz: I mean, I have committed to doing talks that I don't fully understand, knowing that—you know, with the confidence that I can find out between now and the [crosstalk 00:32:48]—

Corey: I believe that's called a forcing function.

Liz: Yes. [laugh].

Corey: It's one of those very high-risk stories, like, "Either I'm going to learn this in the next three months, or else I am going to have some serious egg on my face."

Liz: Yeah, exactly, definitely a forcing function. [laugh].

Corey: I really want to thank you for taking so much time to speak with me today. If people want to learn more, where can they find you?

Liz: So, I am online pretty much everywhere as lizrice, and I am on Twitter. I'm on GitHub. And if you want to come and hang out, I am on the Cilium and eBPF Slack, and also the CNCF Slack. Yeah. So, come say hello.

Corey: There. We will put links to all of that in the [show notes 00:33:28]. Thank you so much for your time. I appreciate it.

Liz: Pleasure.

Corey: Liz Rice, Chief Open Source Officer at Isovalent. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice, along with an angry comment containing an eBPF program that on every packet fires off a Lambda function. Yes, it will be extortionately expensive; almost half as much money as a Managed NAT Gateway.

Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.

Announcer: This has been a HumblePod production. Stay humble.
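Liz's point earlier in this conversation, that an eBPF program can compile cleanly and still be rejected when it is loaded into the kernel, is easy to see end to end. Below is a minimal Go sketch using the cilium/ebpf library (a tooling choice assumed here for illustration; the episode doesn't prescribe one), loading a hypothetical pre-compiled object file named probe.bpf.o. The restricted-C program inside that object, not shown, is the part that must nil-check pointers such as map-lookup results; if that check is missing, the failure only surfaces at this step, as a verifier error at load time.

```go
package main

import (
	"errors"
	"fmt"
	"log"

	"github.com/cilium/ebpf"
)

func main() {
	// Parse a pre-compiled BPF ELF object. The filename is a placeholder;
	// the restricted-C source it was built from is not shown here.
	spec, err := ebpf.LoadCollectionSpec("probe.bpf.o")
	if err != nil {
		log.Fatalf("reading BPF object: %v", err)
	}

	// NewCollection hands every program in the object to the kernel, which
	// runs the verifier. A program that dereferences a possibly-nil pointer
	// (for example, a map-lookup result used without a nil check) compiles
	// fine but is rejected here, at load time.
	coll, err := ebpf.NewCollection(spec)
	if err != nil {
		var ve *ebpf.VerifierError
		if errors.As(err, &ve) {
			// The verifier log points at the offending instruction.
			fmt.Printf("verifier rejected the program:\n%+v\n", ve)
			return
		}
		log.Fatalf("loading BPF collection: %v", err)
	}
	defer coll.Close()

	fmt.Println("all programs passed verification and are loaded")
}
```

Running a loader like this against an object whose program skips the nil check is a quick, safe way to see the verifier behavior Liz describes, without ever risking the kernel itself.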

The New Stack Podcast
Why CI/CD Continues to Evolve

The New Stack Podcast

Play Episode Listen Later Dec 14, 2021 11:10


Continuous integration and delivery (CI/CD) has seen some radical changes during the past few years, especially for continuous delivery. Not so long ago, application development and delivery was almost exclusively about monolithic stacks; delivering software for microservices and container environments is a very different animal. In this episode of The New Stack Makers podcast, recorded at KubeCon+CloudNativeCon in October, guest Rob Zuber, chief technology officer at CircleCI, discusses the evolution of CI/CD from the perspective of CircleCI's more than a decade of experience. Alex Williams, founder and publisher of The New Stack, hosted this podcast.

Greater Than Code
259: Continuous Iteration, Continuous Improvement – Always Evolving Over Time with Rin Oliver

Greater Than Code

Play Episode Listen Later Nov 17, 2021 43:14


01:42 - Rin's Superpower: Writing, Public Speaking, and Being Neurodivergent + Awesome! 02:18 - GitHub Actions (https://github.com/features/actions) * Concurrent Actions * CICD (Continuous Improvement, Continuous Deployment) * Security * Trivy (https://aquasecurity.github.io/trivy/v0.17.0/) * Building Secure Open Source Communities From the Ground Up (https://www.youtube.com/watch?v=DtDitJyd-3s) * Camunda Community Hub (https://github.com/camunda-community-hub) * community-action-maven-release (https://github.com/camunda-community-hub/community-action-maven-release) 07:47 - Improving Developer Experience * Kubernetes Community Contributor Experience Special Interest Group (https://github.com/kubernetes/community/blob/master/sig-contributor-experience/README.md) * Contributing Code * Kubernetes.dev (https://www.kubernetes.dev/) 11:33 - Neurodivergence + Autistic Burnout * A Vulnerable Tale About Burnout - Julia Simon (https://www.youtube.com/watch?v=lpiXbfOTNYw) * CNCF Slack (https://events.linuxfoundation.org/kubecon-cloudnativecon-north-america/attend/slack-guidelines/) * Articles From Rin (https://muckrack.com/kiran-oliver/articles) * John K. Sawers: Hacking Your Emotional API (https://www.youtube.com/watch?v=OGDRUI8biTc) * CPTSD (https://www.healthline.com/health/cptsd) * EMDR (https://www.emdr.com/) 17:04 - Mentoring and Reviewing for Kubernetes (https://kubernetes.io/) * KubeCon + CloudNativeCon (https://events.linuxfoundation.org/kubecon-cloudnativecon-north-america/) 20:49 - Open Source Contribution * Paying Maintainers * Getting Hired Based on Contributions * Getting Started with DevOps/DevSecOps Contributing * MiniKube (https://minikube.sigs.k8s.io/docs/start/) * The Diana Initiative (https://www.dianainitiative.org/) * Trivy (https://aquasecurity.github.io/trivy/v0.17.0/) * Auditing 29:04 - Mentoring (Cont'd) * Pod Mentoring (https://github.com/kubernetes/community/blob/master/mentoring/programs/mentoring-events.md) * Ruby Central Scholarship Program (http://rubycentral.org/scholarships) 32:46 - Evaluating Open Source Projects: Tips For Newbies * Contributor Licence Agreements (CLAs) * Codes of Conduct (CoCs) * Evaluate the Community Reflections: John: Technical Mentorship vs Social Mentorship. Mando: Providing a welcoming sense of community for people with non-traditional backgrounds. Rin: Being intentional about helping others, but also helping others means helping yourself. John 2: The distinction between technical and autistic burnout. This episode was brought to you by @therubyrep (https://twitter.com/therubyrep) of DevReps, LLC (http://www.devreps.com/). To pledge your support and to join our awesome Slack community, visit patreon.com/greaterthancode (https://www.patreon.com/greaterthancode) To make a one-time donation so that we can continue to bring you more content and transcripts like this, please do so at paypal.me/devreps (https://www.paypal.me/devreps). You will also get an invitation to our Slack community this way as well. Transcript: Coming Soon! Special Guest: Rin Oliver.

The New Stack Podcast
Why Cloud Native Is About Community

The New Stack Podcast

Play Episode Listen Later Nov 16, 2021 16:31


Cloud native is really only as good as the support and input the community provides. It is in this spirit that the Cloud Native Computing Foundation (CNCF) continues to invest heavily in the community to support new and existing projects, including Kubernetes, Prometheus and Envoy, which are among the cornerstones of cloud native today. During this latest episode of The New Stack Makers podcast, held live at KubeCon + CloudNativeCon last month, CNCF Marketing Manager Bill Mulligan and CNCF Developer Advocate Ihor Dvoretskyi spoke of the CNCF's Cloud Native Credits and Kubernetes Community Day programs, as well as why these and other initiatives are vital to building the cloud native tools and infrastructure of today and the future. Alex Williams, founder and publisher of The New Stack, hosted this podcast.

The New Stack Podcast
Chainguard, a 'Zero Trust' Supply Chain Security Company

The New Stack Podcast

Play Episode Listen Later Oct 27, 2021 14:31


Five former Googlers recently started Chainguard, a newly minted supply chain security company focusing on Zero Trust principles. Their mission is to help support DevOps teams with their monumental struggles of securing application code across the development, deployment and management cycle. "Supply chain security by default is our mission and making it really easy for developers to do the right thing," Kim Lewandowski, founder and product, Chainguard, said during an episode of The New Stack Makers podcast recorded live at KubeCon + CloudNativeCon in October. Alex Williams, founder and publisher of TNS, hosted the podcast.

The New Stack Podcast
How GitOps Benefits from Security-as-Code

The New Stack Podcast

Play Episode Listen Later Oct 26, 2021 33:37


Security-as-code is the practice of "building security into DevOps tools and workflows by mapping out how changes to code and infrastructure are made and finding places to add security checks, tests, and gates without introducing unnecessary costs or delays," according to tech publisher O'Reilly. In this latest "pancakes and podcast" special episode — recorded during a pancake breakfast at KubeCon + CloudNativeCon in October — we discuss how security-as-code can benefit emerging GitOps practices. The guests were Sean O'Dell, director of developer advocacy, Accurics; Sara Joshi, who was an associate software engineer for Accurics when this recording was made; Parminder Singh, chief information security officer (CISO) for hybrid-cloud digital-transformation services provider DigitalOnUs; Brendan O'Leary, staff developer evangelist, GitLab; Cindy Blake, senior security evangelist, GitLab; and Emily Omier, contributor to The New Stack and owner of marketing consulting provider Emily Omier Consulting. Alex Williams, founder and publisher of TNS, hosted the podcast.

Kubernetes Podcast from Google
Engineering Effectiveness and KubeCon NA 2021, with Jasmine James

Kubernetes Podcast from Google

Play Episode Listen Later Oct 21, 2021 44:07


Jasmine James is an Engineering Manager within the Engineering Effectiveness organization at Twitter, focused on their internal developer experience. She is also the latest co-chair of KubeCon + CloudNativeCon, starting with the North America event last week. Jasmine joins us to talk about being in the same room as other people - up to 3,000 of them - for the first time in a long while. The cover art for this show is courtesy of the CNCF and licensed under CC-BY. Do you have something cool to share? Some questions? Let us know: web: kubernetespodcast.com mail: kubernetespodcast@google.com twitter: @kubernetespod Chatter of the last wee while KubeCon NA 2021 Google Cloud Next ‘21 SREcon21 William Shatner’s words after touching the edge of the final frontier Adele to release a new album Common People Shatner’s new album “Bill” News of the recent past Google Cloud Next: Google Distributed Cloud Edge and Hosted BigQuery Omni is GA Anthos for VMs Managed Service for Prometheus VMworld VMware Tanzu Community Edition Cartographer for supply chain choreography KubeCon + CloudNativeCon CNCF announces record number of new silver members KCNA entry-level certification Cilium joins the CNCF Triggermesh becomes open source Codefresh replatforms on upstream Argo Cloud Native security microsurvey results Introducing Chainguard Episode 152, guest hosted by Dan Lorenc Episode 47, with Kim Lewandowski Kubernetes documentary trailer Links from the interview Atlanta AT&T Delta Air Lines Avoiding the weeds in the Cloud Native Landscape at KubeCon NA 2018 Q&A with Jasmine James, newest KubeCon co-chair The selection process for KubeCon NA 2021 Upcoming CNCF events Co-co-chairs: Episode 117, with Constance Caramanolis Episode 130, with Stephen Augustus Keynotes of note: Three Developer Experience keynotes from Constance, Jasmine, and Robert Duffy A Vulnerable Tale about Burnout by Julia Simon The Road to Multicluster by Kaslin Fields Episode 62, with Ricardo Rocha, Lukas Heinrch and Clemens Lange Interaction wristbands Horseback riding and fishing Jasmine James on Twitter

The New Stack Podcast
What to Expect at KubeCon+CloudNativeCon

The New Stack Podcast

Play Episode Listen Later Sep 27, 2021 26:25


It's that time of the year again, when we gather to discuss all matters related to Kubernetes and the other assorted tooling necessary to make cloud native computing happen. KubeCon+CloudNativeCon will be held in Los Angeles next month, October 11-15. A key difference at this year's event — the first onsite event from the Cloud Native Computing Foundation since the beginning of the pandemic — is that the flagship cloud native conference will offer a much more significant virtual experience for those unable to travel to the venue in L.A. The virtual aspect of this year's KubeCon+CloudNativeCon "is expected to continue indefinitely," Priyanka Sharma, general manager of the CNCF, said in this edition of The New Stack Makers podcast. Sharma was joined by conference co-chair Jasmine James, who is the Twitter developer experience lead and manager for engineering effectiveness. They discussed this year's schedule and agenda, how it will all compare to KubeCon+CloudNativeCon of years past and general cloud native trends. TNS Editor-In-Chief Joab Jackson hosted this episode of The New Stack Makers.

StormForge Fireside Chats
Series Trailer

StormForge Fireside Chats

Play Episode Listen Later Sep 24, 2021 0:41


Join Noah Abrahams and Cody Crudgington from Stormforge.io for a series of interviews leading up to #KubeCon #CloudNativeCon this fall. They'll be discussing pain points within the Cloud Native ecosystem, Kubernetes, and a whole lot more.

Ship It! DevOps, Infra, Cloud Native
Shipping KubeCon EU 2021

Ship It! DevOps, Infra, Cloud Native

Play Episode Listen Later May 28, 2021 61:31 Transcription Available


This week on Ship It! Gerhard is joined by Constance Caramanolis, Principal Engineer at Splunk and former maintainer of Envoy Proxy, and Stephen Augustus, Head of Open Source at Cisco & self-proclaimed Caesar of Systems. Constance and Stephen are the KubeCon + CloudNativeCon co-chairs. Join us to find out what happens before and after KubeCon gets shipped.

Changelog Master Feed
Shipping KubeCon EU 2021 (Ship It! #2)

Changelog Master Feed

Play Episode Listen Later May 28, 2021 61:31 Transcription Available


This week on Ship It! Gerhard is joined by Constance Caramanolis, Principal Engineer at Splunk and former maintainer of Envoy Proxy, and Stephen Augustus, Head of Open Source at Cisco & self-proclaimed Caesar of Systems. Constance and Stephen are the KubeCon + CloudNativeCon co-chairs. Join us to find out what happens before and after KubeCon gets shipped.

Kubernetes Podcast from Google
Putting on a KubeCon, with Colleen Mickey

Kubernetes Podcast from Google

Play Episode Listen Later May 6, 2021 32:08


A small army of community volunteers is necessary to host a KubeCon, but behind them is a professional events team. Colleen Mickey is Director of Event Services at the Linux Foundation and is responsible for KubeCon + CloudNativeCon, as well as other events like Hyperledger Global Forum and cdCon. She talks to us about hosting, feeding and watering 10,000 people, as well as the change to virtual events. We also bring the round-up of the KubeCon news, including our famous Lightning Round. Do you have something cool to share? Some questions? Let us know: web: kubernetespodcast.com mail: kubernetespodcast@google.com twitter: @kubernetespod Chatter of the week Episode 29, with Janet Kuo Looking back at KubeCon Shanghai 2018 News of the week New Relic and Pixie Labs blogs on Pixie being open sourced New Relic joins CNCF as a Platinum Member Red Hat launches the Stackrox community at stackrox.io OpenShift GitOps and OpenShift Pipelines Snyk’s State of Cloud Native Application Security report announcement and results OCI Distribution Specification reaches 1.0 Prometheus to launch conformance program New CNCF sandbox projects: Vineyard, an in-memory immutable data manager WasmEdge Runtime, a WebAssembly Virtual Machine for cloud, AI, and blockchain applications ChaosBlade, an open-source version of Alibaba’s chaos tools Fluid, a data and storage abstraction for AI and cloud-native applications Submariner, a cross-cluster overlay of overlay networks Antrea, a Kubernetes CNI plugin Episode 128, with Antonin Bas CNCF Edge survey results and free Kubernetes on Edge Training Episode 116, with Alex Ellis Inclusive Naming Initiative receives Honorable Mention at Fast Company’s 2021 World Changing Ideas Awards ‘Master,’ ‘Slave’ and the Fight Over Offensive Terms in Computing by Kate Conger of the New York Times Episode 130, with Stephen Augustus Spotify wins CNCF Top End User Award Episode 50, with David Xia Episode 136, with Lee Mills and Matt Clarke. 
Lightning round Accuknox secured $4.6m in seed funding Accurics announced Terrascan integrates with Argo CD Ambassador introduced a Developer Control Plane Armory introduced mini-Spinnaker installation Minnaker, built on k3s Arrikto announced MiniKF 1.3 and Eenterprise Kubeflow for Azure Avesha launched Smart Application Cloud Framework Bridgecrew published security trends from analyzing Helm charts CAST AI announced Amazon EKS cost optimizer Civo launched K3s-as-a service to early adopters Cloudical introduced version 1.8 of VanillaStack DataStax announced that k8ssandra supports all distributions Dynatrace added the ability to ingest OpenTelemetry traces HAProxy launched version 1.6 Kubernetes ingress controller Kasten added ransomware protection with v4.0 of K10 Kubermatic Kubernetes Platform 2.17 Kubernative says that KubeOps is now a full-fledged Managed Kubernetes Framework Netdata has added Kubernetes monitoring features to their Cloud service Nirmata announced Nirmata Policy Manager, based on Kyverno OpenNebula released a new K3s Virtual Appliance for running Edge Clouds Portainer raised $6M in a Series A round to Accelerate their global expansion Portworx pre-announced PX-Backup 2.0 with support for external auth services Rancher launched a new Rancher Desktop tool in Alpha for Windows and Mac Rafay launched new features to its Kubernetes Management Cloud Splunk announced their Observability Cloud is Generally Available StackPulse announced a Kubernetes-centric operations center StorageOS version 2.4 brings encryption at rest and rapid application recovery StormForge introduced automatic scanning of in-cluster resources StreamNative open sourced Function Mesh for running Apache Pulsar functions Sysdig added runtime detection and response for AWS Fargate Tigera released Calico Enterprise 3.5 with Dynamic Service Graph and eBPF data plane Timescale raised $40m Series B for Postgres-based TSDB and Prometheus cloud Trilio announced Kubernetes Backup Monitoring for Velero users Vitess launched version 10, with support for the Ruby on Rails framework Wanclouds launched multi-cloud Disaster Recovery as a Service Weaveworks launched Weave Kubernetes Platform 2.5 with multi cluster observability platform Zebrium now automatically perform Root Cause Analysis with integration into Opsgenie Links from the interview The first KubeCon in 2015 KubeCon donated to the CNCF CNCF presents CloudNativeCon and hosts future KubeCon events (2016) Dreamforce brings in cruise ships KubeCon NA 2017 in Austin, TX Linux Foundation Climate Finance Foundation Diamond sponsor lottery Diversity and inclusion at KubeCon EU Sponsorship open for KubeCon NA 2021 Event platforms: Intrado MeetingPlay KubeCon + CloudNativeCon Europe 2021 KubeCon + CloudNativeCon North America 2021 GopherCon EU 2018 in Iceland Colleen Mickey on LinkedIn

The New Stack Podcast
The Insider's Guide to KubeCon+CloudNativeCon EU 2021

The New Stack Podcast

Play Episode Listen Later Apr 20, 2021 34:23


In this episode of The New Stack Makers podcast, hosted by Joab Jackson, managing editor for The New Stack, we speak with two of the fabled conference's key organizers about what to expect and what the organizers' goals are: Priyanka Sharma, general manager for the CNCF, and Stephen Augustus, engineering director and head of open source at Cisco. This is no business-as-usual KubeCon conference, of course. Last year's KubeCon EU was cancelled just a few weeks before the event was scheduled to take place. Then, many question marks remained during the early days of the pandemic about not only the future of conferences but how workers in the IT industry would continue to live and work. As it turns out, this year's event is virtual, of course, and at the very least, there is no shortage of talks and events. All told, for KubeCon, experts from organizations including Adobe, Apple, CERN, Nvidia and OVHcloud will deliver more than 100 sessions, keynotes, lightning talks, and breakout sessions. There will also be more than 60 sessions hosted by project maintainers – spanning beginner-level introductions, end-user case studies and technical deep dives.

Yalla To The Cloud
Episode 43: Cloud-Native #8

Yalla To The Cloud

Play Episode Listen Later Jan 25, 2021 8:21


In this segment, we bring you information about day-to-day work in a cloud environment from our point of view. This time, together with Oren, we continue our cloud-native series! In the previous episode, the seventh in the cloud-native series, we hosted Oren and recapped the most interesting events of the past few months in the space: KubeCon and CloudNativeCon! We talked about what these conferences actually are, the important points that came up, and what you need to know. In this episode, the eighth in the series, we talked about containers and Docker images, and dug into the details of the image build process. Avi showed us how to manage the process and what things look like hands-on, and we discussed the implications of the decisions made during the build. See you in the next episodes! Want to stay up to date with more content about cloud and advanced technologies? Sign up now for our newsletter and stay in the loop > IsraelCloudsNewsletter

Yalla To The Cloud
Episode 42: Cloud-Native #7

Yalla To The Cloud

Play Episode Listen Later Jan 17, 2021 21:28


In this segment, we bring you information about day-to-day work in a cloud environment from our point of view. This time, we take a break from FinOps and return to our cloud-native series! In the previous episode, the sixth (and for now the last) in the FinOps series, we decided to take the conversation about Governance a step further and focus on it: we talked about how to manage it properly in the cloud (Dvir told us what they do at Wix), which tools can help make the process more efficient, and how to handle tags. In this episode, the seventh in the cloud-native series, we hosted Oren and recapped the most interesting events of the past few months in the space: KubeCon and CloudNativeCon! We talked about what these conferences actually are, the important points that came up, and what you need to know. See you in the next episodes! Want to stay up to date with more content about cloud and advanced technologies? Sign up now for our newsletter and stay in the loop > IsraelCloudsNewsletter

The New Stack Podcast
The AWS Viewpoint on Open Source and Kubernetes

The New Stack Podcast

Play Episode Listen Later Dec 22, 2020 49:37


KubeCon+CloudNativeCon sponsored this podcast. Kubernetes is certainly evolving, but it will be some time before organizations deploy and run applications seamlessly in cloud native environments without today's associated challenges of its adoption and maintenance. Amazon Web Services (AWS), of course, is both an early proponent of Kubernetes and a leading provider of cloud native services and support, and has thus been implicitly involved with its changes over the past few years. In this The New Stack Makers podcast, AWS' Bob Wise, general manager of Kubernetes, and Peder Ulander, head of product marketing for enterprise, developer and open source initiatives, described AWS' role in Kubernetes and how cloud native plays into the company's open source strategy. They also discussed how Kubernetes is evolving in the market, including in terms of how customer needs are changing, and why open source technologies are critical to fill in gaps in order for cloud native to realize its full potential. Alex Williams, founder and publisher of the New Stack, hosted this episode.

The New Stack Podcast
Why IAM is a Pain Point in Kubernetes

The New Stack Podcast

Play Episode Listen Later Dec 18, 2020 43:46


Prisma Cloud from Palo Alto Networks sponsored this podcast. Identity and access management (IAM) was previously relatively straightforward. Often delegated as a low-level management task to the local area network (LAN) or wide area network (WAN) admin, the process of setting permissions for tiered data access was definitely not one of the more challenging security-related duties. However, in today's highly distributed and relatively complex computing environments, network and associated IAM are exponentially more complex. As application creation and deployment become more distributed, often among multicloud containerized environments, the resulting dependencies, as well as vulnerabilities, continue to proliferate as well, thus widening the scope of potential attack surfaces. How to manage IAM in this context was the main topic of this episode of The New Stack Analysts podcast, as KubeCon + CloudNativeCon attendees joined TNS Founder and Publisher Alex Williams and guests live for the latest “Virtual Pancake & Podcast.” They discussed why IAM has become even more difficult to manage than in the past and offered their perspectives about potential solutions. They also showed how enjoying pancakes — or other variations of breakfast — can make IAM challenges more manageable. The event featured Lin Sun, senior technical staff member and Master Inventor, Istio/IBM; Joab Jackson, managing editor, The New Stack and Nathaniel “Q” Quist, senior threat researcher (Public Cloud Security – Unit 42), Palo Alto Networks. Jackson noted how the evolution of IAM has not been conducive to handling the needs of present-day distributed computing. Previously, it was “not exactly a security thing” nor a “developer problem,” and wasn't even “a security problem, he said. “[IAM] really almost was a network problem: if a certain individual or a certain process wants to access another process or a resource online, then you have to have the permissions in place to meet all the policy requirements about who can ask for these particular resources,” Jackson said. “And this is an entirely new problem with distributed computing on a massive and widespread scale…it's almost a mindset, number one, about who can figure out what to do and then how to go about doing it.”

The New Stack Analysts
Why IAM is a Pain Point in Kubernetes

The New Stack Analysts

Play Episode Listen Later Dec 18, 2020 43:45


Prisma Cloud from Palo Alto Networks sponsored this podcast. Identity and access management (IAM) was previously relatively straightforward. Often delegated as a low-level management task to the local area network (LAN) or wide area network (WAN) admin, the process of setting permissions for tiered data access was definitely not one of the more challenging security-related duties. However, in today's highly distributed and relatively complex computing environments, network and associated IAM are exponentially more complex. As application creation and deployment become more distributed, often among multicloud containerized environments, the resulting dependencies, as well as vulnerabilities, continue to proliferate as well, thus widening the scope of potential attack surfaces. How to manage IAM in this context was the main topic of this episode of The New Stack Analysts podcast, as KubeCon + CloudNativeCon attendees joined TNS Founder and Publisher Alex Williams and guests live for the latest “Virtual Pancake & Podcast.” They discussed why IAM has become even more difficult to manage than in the past and offered their perspectives about potential solutions. They also showed how enjoying pancakes — or other variations of breakfast — can make IAM challenges more manageable. The event featured Lin Sun, senior technical staff member and Master Inventor, Istio/IBM; Joab Jackson, managing editor, The New Stack and Nathaniel “Q” Quist, senior threat researcher (Public Cloud Security – Unit 42), Palo Alto Networks. Jackson noted how the evolution of IAM has not been conducive to handling the needs of present-day distributed computing. Previously, it was “not exactly a security thing” nor a “developer problem,” and wasn't even “a security problem, he said. “[IAM] really almost was a network problem: if a certain individual or a certain process wants to access another process or a resource online, then you have to have the permissions in place to meet all the policy requirements about who can ask for these particular resources,” Jackson said. “And this is an entirely new problem with distributed computing on a massive and widespread scale…it's almost a mindset, number one, about who can figure out what to do and then how to go about doing it.”

The New Stack Podcast
On the Tech Radar: Database Storage

The New Stack Podcast

Play Episode Listen Later Dec 16, 2020 51:29


KubeCon+CloudNativeCon sponsored this podcast. How to manage database storage in cloud native environments continues to be a major challenge for many organizations. Database storage also came to the fore as the issue to explore in the latest Cloud Native Computing Foundation (CNCF) Tech Radar report. In this edition of The New Stack Analysts podcast, host Alex Williams, founder and publisher of The New Stack and co-hosts Cheryl Hung, vice president of ecosystem at Cloud Native Computing Foundation (CNCF) and Dave Zolotusky, senior staff engineer at Spotify discuss stateless database storage, recent results of the report findings and perspectives from the user community. The podcast guests — who both contributed to the CNCF Tech Radar report and hail from the database storage user community — were Jackie Fong, engineering leader, Kubernetes and developer experience for Ticketmaster, and Mya Pitzeruse, software engineer, OSS contributor, effx.

The New Stack Analysts
On the Tech Radar: Database Storage

The New Stack Analysts

Play Episode Listen Later Dec 16, 2020 51:28


KubeCon+CloudNativeCon sponsored this podcast. How to manage database storage in cloud native environments continues to be a major challenge for many organizations. Database storage also came to the fore as the issue to explore in the latest Cloud Native Computing Foundation (CNCF) Tech Radar report. In this edition of The New Stack Analysts podcast, host Alex Williams, founder and publisher of The New Stack and co-hosts Cheryl Hung, vice president of ecosystem at Cloud Native Computing Foundation (CNCF) and Dave Zolotusky, senior staff engineer at Spotify discuss stateless database storage, recent results of the report findings and perspectives from the user community. The podcast guests — who both contributed to the CNCF Tech Radar report and hail from the database storage user community — were Jackie Fong, engineering leader, Kubernetes and developer experience for Ticketmaster, and Mya Pitzeruse, software engineer, OSS contributor, effx.

Bol.com - Techlab
Service Mesh - a must have for a dynamic infrastructure

Bol.com - Techlab

Play Episode Listen Later Dec 3, 2020 51:13


Introduction
Weaving a Service Mesh for Multiple Clusters at bol.com - that was the title of the presentation our guests gave during the virtual KubeCon and CloudNativeCon 2020 earlier this year. This triggered our curiosity. We have questions like: what is this service mesh about? Why do we need it? How did we implement it?
Guests
Remco Overdijk; Tech Lead Provisioning Fleet / Expert System Engineer at bol.com. You might know him from the episode where we spoke about Kubernetes.
James Brook; Cloud Solutions Architect at Google
Notes
CNCF
Istio
Remco and James presenting at KubeCon/CloudNativeCon 2020

The Art of Modern Ops
Navigating the Kubernetes Hype Cycle with Cornelia Davis, Weaveworks & Liz Rice, Aqua Security

The Art of Modern Ops

Play Episode Play 24 sec Highlight Listen Later Apr 23, 2020 31:37


Anyone who looks at the growth of KubeCon + CloudNativeCon attendance over the past four years can recognize that Kubernetes is more than a passing fad. Instead, Kubernetes represents the way forward for companies to scale, go faster and be more competitive. Many question whether we've achieved 'peak KubeCon'. Listen in as Cornelia Davis and Liz Rice discuss the current state of cloud native, its ecosystem of projects and how the CNCF can help you navigate the complexity of piecing together a Kubernetes solution that's right for your team. Most organizations find open source software appealing because of the choice it offers. However, there are tradeoffs; an abundance of choice can also increase complexity. An ideal situation would consist of an integrated plug-and-play solution with open source standards and solutions. One of the most pressing questions is whether we'll see a more integrated solution for delivering Kubernetes in the enterprise versus a straight do-it-yourself approach.

The Podlets - A Cloud Native Podcast
Stateful and Stateless Workloads (Ep 9)

The Podlets - A Cloud Native Podcast

Play Episode Listen Later Dec 23, 2019 43:34


This week on The Podlets Cloud Native Podcast we have Josh, Carlisia, Duffie, and Nick on the show, and are also happy to be joined by a newcomer, Bryan Liles, who is a senior staff engineer at VMware! The purpose of today's show is coming to a deeper understanding of the meaning of 'stateful' versus 'stateless' apps, and how they relate to the cloud native environment. We cover some definitions of 'state' initially and then move to consider how ideas of data persistence and co-ordination across apps complicate or elucidate understandings of 'stateful' and 'stateless'. We then think about the challenging practice of running databases within Kubernetes clusters, which effectively results in an ephemeral system becoming stateful. You'll then hear some clarifications of the meaning of operators and controllers, the role they play in mediating and regulating states, and also how important they are in a rapidly evolving but skills-scarce environment. Another important theme in this conversation is the CAP theorem, or the impossibility of having consistency, availability and partition tolerance all at once, and the way different databases allow for different combinations of two out of the three. We then move on to chat about the fundamental connection between workloads and state and then end off with a quick consideration of how ideas of stateful and stateless play out in the context of networks. Today's show is a real deep dive offering perspectives from some of the most knowledgeable in the cloud native space, so make sure to tune in!
Follow us: https://twitter.com/thepodlets
Website: https://thepodlets.io
Feedback: info@thepodlets.io, https://github.com/vmware-tanzu/thepodlets/issues
Hosts: Carlisia Campos, Duffie Cooley, Bryan Liles, Josh Rosso, Nicholas Lane
Key Points From This Episode:
• What 'stateful' means in comparison to 'stateless'.
• Understanding 'state' as a term referring to data which must persist.
• Examples of stateful apps such as databases or apps that revolve around databases.
• The idea that 'persistence' is debatable, which then problematizes the definition of 'state'.
• Considerations of the push for cloud native to run stateless apps.
• How inter-app coordination relates to definitions of stateful and stateless applications.
• Considering stateful data as data outside of a stateless cloud native environment.
• Why it is challenging to run databases in Kubernetes clusters.
• The role of operators in running stateful databases in clusters.
• Understanding CRDs and controllers, and how they relate to operators.
• Controllers mediate between actual and desired states.
• Operators are codified system administrators.
• The importance of operators as the number of apps grows in a skills-scarce environment.
• Mechanisms around stateful apps are important because they ensure data integrity.
• The CAP theorem: the impossibility of consistency, availability, and partition tolerance all at once.
• Why different databases allow for different trade-offs within the CAP theorem.
• When partition tolerance can and can't get sacrificed.
• Recommendations on when to run stateful or stateless apps through Kubernetes.
• The importance of considering models when thinking about how to run a stateful app.
• Varying definitions of workloads.
• Pods can run multiple workloads.
• Workloads create state, so you can't have one without the other.
• The term 'workloads' can refer to multiple processes running at once.
• Why the ephemerality of Kubernetes systems makes it hard to run stateful applications.
• Ideas of stateful and stateless concerning networks.
• The shift from server to browser in hosting stateful sessions.
Quotes:
"When I started envisioning this world of stateless apps, to me it was like, 'Why do we even call them apps? Why don't we just call them a process?'" — @carlisia [0:02:60]
"'State' really is just that data which must persist." — @joshrosso [0:04:26]
"From the best that I can surmise, the operator pattern is the combination of a CRD plus a controller that will operate on events from the Kubernetes API based on that CRD's configuration." — @bryanl [0:17:00]
"Once again, don't let developers name them anything." — @bryanl [0:17:35]
"Data integrity is so important" — @apinick [0:22:31]
"You have to really be careful about the different models that you're evaluating when trying to think about how to manage a stateful application like a database." — @mauilion [0:31:34]
Links Mentioned in Today's Episode:
KubeCon+CloudNativeCon — https://events19.linuxfoundation.org/events/kubecon-cloudnativecon-north-america-2019/
Google Spanner — https://cloud.google.com/spanner/
CockroachDB — https://www.cockroachlabs.com/
CoreOS — https://coreos.com/
Red Hat — https://www.redhat.com/en
Metacontroller — https://metacontroller.app/
Brandon Philips — https://www.redhat.com/en/blog/authors/brandon-phillips
MySQL — https://www.mysql.com/
Transcript:
EPISODE 009
[INTRODUCTION]
[0:00:08.7] ANNOUNCER: Welcome to The Podlets Podcast, a weekly show that explores Cloud Native one buzzword at a time. Each week, experts in the field will discuss and contrast distributed systems concepts, practices, tradeoffs and lessons learned to help you on your cloud native journey. This space moves fast and we shouldn't reinvent the wheel. If you're an engineer, operator or technically minded decision maker, this podcast is for you.
[INTERVIEW]
[00:00:41] JR: All right! Hello, everybody, and welcome to episode 6 of The Cubelets Podcast. Today we are going to be discussing the concept of stateful and stateless and what that means in this crazy cloud native landscape that we all work in. I am Josh Rosso. Joined with me today is Carlisia. [00:00:59] CC: Hi, everybody. [00:01:01] JR: We also have Duffie. [00:01:03] D: Hey, everybody. [00:01:04] JR: Nicholas. [00:01:05] NL: Yo! [00:01:07] JR: And a newcomer to the podcast, we also have Brian. Brian, you want to give us a little intro about yourself? [00:01:12] BL: Hi! I'm Brian. I work at VMware. I do lots of community stuff, including chairing KubeCon+CloudNativeCon. [00:01:22] JR: Awesome! Cool. All right. We've got a pretty good cast this week. So let's dive right into it. I think one of the first things that we've been talking a bit about is the concept of what makes an application stateful? And of course in reverse, what makes an application stateless? Maybe we could try to start by discerning those two. Maybe starting with stateless if that makes sense? Does someone want to take that on? [00:01:45] CC: Well, I'm going to jump right in. I have always been a developer, as opposed to some of you, or all of you, who have system admin backgrounds. The first time that I heard the term stateless app, I was like, "What?" That wasn't recent, okay? It was a long time ago, but that was a knot in my head. Why would you have a stateless app? If you have an app, you're going to need state. I couldn't imagine what that was. But of course it makes a lot of sense now. That was also when we were more in the monolithic world. [00:02:18] BM: Actually that's a good point.
Before you go into that, it’s a great point. Whenever we start with apps or we start developing apps, we think of an application. An application does everything. It takes input and it does stuff and it gives output. But now in this new world where we have lots of apps, big apps, small apps, we start finding that there’s apps that only talk and coordinate with other apps. They don’t do anything else. They don’t save any data. They don’t do anything. That’s what – where we get into this thing called stateless apps. Apps don’t have any type of data that they store locally. [00:02:53] CC: Yeah. It’s more like when I envision in my head. You said it brilliantly, Brian. It’s almost like a process. When I started envisioning this world of stateless apps, to me it was like, “Why do we even call them apps? Why don’t we just call them a process?” They’re just shifting back data and forth but they’re not – To me, at the beginning, apps were always stateless. They went together. [00:03:17] D: I think, frequently, people think of applications that have only locally relevant stuff that is actually not going to persist to disc, but maybe held in memory or maybe only relevant to the type of connection that’s coming through that application also as stateless, which is interesting, because there’s still some state there, but the premise is that you could lose that state and not lose the functionality of that code. [00:03:42] NL: Something that we might want to dive into really quickly when talking about stateless and stateful apps. What do we mean by the word state? When I first learned about these things, that was what always screwed me up. I’m like, “What do you mean state? Like Washington? Yeah. We got it over here.” [00:03:57] JR: Oh! State. That’s that word. State is one of those words that we use to sound smarter than we actually are 95% of the time, and that’s a number I just made up. When people are talking about state, they mean databases. Yeah. But there are other types of state as well. If you maintain local cache that needs to be persistent, if you have local files that you’re dealing with, like you’re opening files. That’s still state. State really is just that it’s data that must persist. [00:04:32] D: I agree with that definition. I think that state, whether persisted to memory or persisted to disc or persisted to some external system, that’s still what we refer to as state. [00:04:41] JR: All right. Makes sense and sounds about like what I got from it as well. [00:04:45] CC: All right. So now we have this world where we talk about stateless apps and stateful apps. Are there even stateful apps? Do we call a database an app? If we have a distributed system where we have one stateless app over here, another stateless app over there and then we have the database that’s connected to the two of them, are we calling the database a stateful app or is that whole thing – How do we call this? [00:05:15] NL: Yeah. The database is very much a state as an app with state. I’m very much – [00:05:19] D: That’s a close definition. Yeah. [00:05:21] NL: Yeah. Literally, it’s the epitome of a stateful app. But then you also have these apps that talk to databases as well and they might have local data, like data that – they start a transaction and then complete it or they have a long distributed type transaction. Any apps that revolve around a database, if they store local data, whether it’s within a transaction or something else, they’re still stateful apps. [00:05:46] D: Yup. 
I think you can modify and input data or modify state that has to be persisted in some way I think is a stateful app, even though I do think it’s confusing because of what – As I said before, I think that there are a bunch of applications that we think of, like not everybody considers Spark jobs to be stateful. Spark jobs, for example, are something that would bring data in, mutate that data in some way, produce some output and go away. The definition there is that Spark would generally push the resulting data into some other external system. It’s interesting, because in that model, Spark is not considered to be a stateful app because the Spark job could fail, go away, get recreated, pick up the pieces where it left off or just redo that work until all of the work is done. In many cases, people consider that to be a stateless application. That’s I think is like the crux – In my opinion, the crux of the confusion around what a stateful and stateless application is, is that people frequently – I think it’s more about where you store – what you mean by persistence and how that actually realizes in your application. If you’re pushing your state to an external database, is your application still stateful? [00:06:58] NL: I think it’s a good question, or if you are gathering data from an external source and mutating it in some way, but you don’t need data to be present when you start up, is that a stateful app or a stateless app? Even though you are taking in data, modifying it and checking it, sending out to some other mechanism or serving it in your own way, does that become like a stateless app? If that app gets killed and it comes back and it’s able to recover, is it stateful or stateless? That’s a bit of a gray area, I think. [00:07:26] JR: Yeah. I feel like a lot of the customers I work with, if the application can get killed even if it has some type of local state, they still refer to it as stateless usually, to me at least, when we talk about it because they think, “I can kind of restart this application and I’m not too worried about losing whatever it may have had.” Let’s say cached for simplicity, right? I think that kind of leads us into an interesting question. We’ve talked a lot on this podcast about cloud native infrastructure and cloud native applications and it seems like since the inception of cloud native, there’s always been this push that a stateless app is the best candidate to run or the easiest candidate to run. I’m just curious if we could dive into that for a moment. Why in the cloud native infrastructure area has there always been this push for running stateless applications? Why is it simpler? Those kinds of things. [00:08:15] BL: Before we dive into that, we have to realize – And this is just a problem of our whole ecosystem, this whole cloud native. We’re very hand-wavy in our descriptions for things. There’re a lot of ambiguous descriptions, and state is one of those. Just keep that in mind, that when we’re talking today, we’re really just talking about these things that store data and when that’s the state. Just keep that in mind as you’re listening to this. But when it comes to distributed systems in general, the easiest system is a system that doesn’t need coordination with any other system. If it happens to die, that’s okay. We can just restart it. People like to start there. It’s the easiest thing to start. [00:08:58] NL: Yeah, that was basically what I was going to say. 
If your application needs to tie into other applications, it becomes significantly more complicated to implement it, at least for your first time and in your system. These small applications that only – They don’t care about anybody else, they just take in data or not, they just do whatever. Those are super easy to start with because they’re just like, “Here. Start this up. Who cares? Whatever happens, it happens.” [00:09:21] CC: That could be a good boundary to define – I don’t want to jump back too far, but to define where is the stateless app to me is part of a system and just say it depends for it to come back up. Does it depend on something else that has state? [00:09:39] BL: I’ll give you an example. I can give you a good example of a stateless app that we use every day, every single one of us, none of us on this call, but when you search Google. You go to google.com and you go to the bar and you type in a search, what’s happening is there is a service at the beginning that collects that search and it federates the search over many different probably clusters of computers so they can actually do the search currently. That app that actually coordinates all that work is a stateless app most likely. All it does is just splits it up and allows more CPUs to do the work. Probably, that goes away. Probably not a problem. You probably have 10 more of them. That’s what I consider stateless. It doesn’t really own any of the data. It’s the coordinator. [00:10:25] CC: Yeah. If it goes down, it comes back up. It doesn’t need to reset itself to the state where it was before. It can truly be considered a stateless because it can just, “Okay. I reset. I’m starting from the beginning from this clear state.” [00:10:43] BL: Yes. That’s a good summary of that. [00:10:45] CC: Because another way to think about stateless – What makes an app stateful app, does it have to be combined or like deployed and shipped together with the part that maintains the state? That’s a more clear cut definition. Then that app is definitely a stateful app. [00:11:05] D: What we frequently talk about in like the cloud native space is like you know that you have a stateless app if you can just create 20 of them and not have to worry about the coordination of them. They are all workers. They are all going to take input. You could spread the load across those 20 in an identical way and not worry about which one you landed on. That’s stateless application. A stateful application is a very different thing. You have to have some coordination. You have to say how many databases can you have on a backend? Because you’re persisting data there, you have to be really careful about that you only write to the master database or to the writing database and you could read of any other memories of that database cluster, that sort of stuff. [00:11:44] CC: It might seem that we are going so deep into this differentiating between stateful and stateless, but this is so important because clusters are usually designed to be ephemeral. Ephemeral means obviously they die down, they are brought back up, the nodes, and you should worry as least as possible with the state of things. Then going back to what Joshua is saying, when we are in this cloud native world, usually we are talking about stateless apps, stateless workloads and then we’re going to just talk about what workload means. But then if that’s the case, where are the stateful apps? It’s like we have this vision that the stateful apps live outside the cloud native world? How does it work? 
But it’s supposed to work.

[00:12:36] BL: Yup. This is the question that keeps a lot of people employed: making sure my state is available when I need it. You know what? I’m not going to even use that word, state. Making sure my data is available wherever I need it and when I need it. I don’t want to go too deep into it right now, but this is actually a huge problem in the Kubernetes community in general, and we see it because there’s been lots of advice given, “Don’t run things like databases in your clusters.” This is why we see people taking the ideas of Google Spanner, and like CockroachDB, and actually going through a lot of work to make sure that you can run databases in Kubernetes clusters. The interesting piece about this is that we’re actually at the point where we can run these types of workloads in our clusters, but with a caveat, big star at the end: it’s very difficult and you have to know what you’re doing.

[00:13:34] JR: Yeah. I want to dovetail on that, Brian, because it’s something that we see all the time. I feel like when we first started setting up, let’s call them clusters, but in our case it was Kubernetes, right? We always saw that data level being delegated to, if you’re in Amazon, some service that they hosted, and so on. But now more and more of the customers, at least that I’m seeing – I’m sure Nicholas and Duffie see it too – are interested in doing exactly what you just described. Cockroach is an example I literally just worked with recently, and it’s just interesting how much more thoughtful they have to be about their cluster operations. Going back to what you said, Carlisia, it’s not as easy as just trashing a cluster and instantiating a new one anymore, like they’re used to. They need to be more thoughtful about keeping that data integrity intact through things like upgrades and disaster recovery.

[00:14:18] D: Another interesting point, kind of to your point, Brian, is that, frequently, people are starting to have conversations and concerns around data gravity, which means that I have a whole bunch of data that I need to work with, like for a Spark job, which I mentioned earlier. I need to basically put my compute where that data is – whether I store that data inside the cluster and use Kubernetes to manage it, or whether I just have to make sure that I have some way of bringing up compute workloads close to that data. It’s actually kind of introducing a whole new layer to this whole thing.

[00:14:48] BL: Yeah! A whole new layer of work and a whole new layer of complexity, because that’s actually – the crux of all this is where we slide the complexity to, but this is interesting, and I don’t want to go too far into this one. This is why we’re seeing more people creating operators around managing data. I’ve seen operators that bring databases up inside of Kubernetes. I’ve seen operators that can actually bring up resources outside of Kubernetes using the Kubernetes API. The interesting thing about this is that I looked at both solutions and I said, “I still don’t know what the answer is,” and that’s great. That means that we have a lot to learn about the problem, and at least we have some paths for it.

[00:15:29] NL: Actually, that kind of reminds me of the first time I ever heard the words stateful or stateless – I’m an infrastructure guy. It was around the discussion of operators, which was only a couple of years ago, when operators were first introduced at CoreOS and some people were like, “Oh!
Well, this is how you now operate a stateful mechanism inside of Kubernetes. This is the way forward that we want to propose.” I was just like, “Cool! What is that? What’s state? What do you mean, stateful and stateless?” I had no idea. Josh, you were there. You’re like, “Your frontend doesn’t care about state and your backend does.” I’m like, “Does it? I don’t know. I’m not a developer.”

[00:16:10] JR: Let’s talk about exactly that, because I think these patterns we’re starting to see are coming out of the needs that we’re all talking about, right? We’ve seen, at least in the Kubernetes community, a lot of push for these different constructs, like something called a stateful [inaudible 00:16:21], which isn’t that important right now, but then also something like an operator. Maybe we can start by defining: what is an operator? What is that pattern and why does it relate to stateful apps?

[00:16:31] CC: I think that would be great. I am not clear on what an operator is. I know there’s going to be a controller involved. I know it’s not a CRD. I am not clear on that at all, because I only work with CRDs, and we don’t define – like the project I work on, Velero, we don’t categorize it as an operator. I guess an operator uses a specific framework that exists out there. Is it a Kubernetes library? I have no idea.

[00:16:56] BL: We did it to ourselves again. We’re all doing this to ourselves. From the best that I can surmise, the operator pattern is the combination of a CRD plus a controller that will operate on events from the Kubernetes API based on that CRD’s configuration. That’s what an operator is.

[00:17:17] NL: That’s exactly right.

[00:17:18] BL: To conflate things further, Red Hat created the operator SDK, and then you have [inaudible 00:17:23] and you have Metacontroller, which can help you build operators. Then we actually sometimes conflate things and call CRDs operators, and that’s pretty confusing for everyone. Once again, don’t let developers name anything.

[00:17:41] CC: Wait. So let’s back up a little. Okay. There is an actual library that’s called an operator.

[00:17:46] BL: Yes. There’s an operator SDK.

[00:17:47] CC: Referred to as an operator. I heard that. Okay. Great. But let me back up a little because –

[00:17:49] D: The word operator can –

[00:17:50] CC: Because if you are developing an app for Kubernetes, if you’re extending Kubernetes, you are – okay, you might not use CRDs, but if you are using CRDs, you need a controller, right? Because how will you do actions? Then every app that has a CRD – because the alternative to having CRDs is just using the API directly, without creating CRDs to reflect your resources. If you’re creating CRDs to reflect your resources, you need controllers. All of those apps that have CRDs are operators.

[00:18:24] D: Yup, [inaudible 00:18:25] is an operator.

[00:18:26] CC: [inaudible 00:18:26] not an operator. How can you extend Kubernetes and not be qualified [inaudible 00:18:31] operator?

[00:18:32] BL: Well, there’s a way. There is a way. You can actually just create a CRD and use that CRD for data storage, you know, store state, and you can actually query the Kubernetes API for that information. You don’t need a controller, but we couple them with controllers a lot to perform actions based on that state we’ve saved to etcd.

[00:18:50] CC: Duffie.

[00:18:51] D: I want to back up just for a moment and talk about the controller pattern and what it is, and then go from there to operators, because I think it makes it easier to get it in your head.
A control pattern is effectively a way to understand desired state and real state, and to provide some logic or business code that will allow you to converge those two states – your actual state and your desired state. This is a pattern that we see used in almost everything within a distributed system. It’s within Kubernetes, within most of the kind of more interesting systems that are out there. This control pattern describes a pretty good way of actually managing application flow across distributed systems. Now, operators, when they were initially introduced, were talked about as a slightly different thing. Operators, when we introduced the idea, came more from the operational burden of these stateful applications, things like databases and those sorts of things. With a database, etcd for example, you have a whole bunch of operational and runtime concerns around managing the lifecycle of that system. How do I add a new member to the cluster? What do I do when a member dies? How do I take action? Right now, that’s somebody like myself waking up at 2 in the morning and working through a run book to basically make sure that that service remains operational through the night. But the idea of an operator was to take that control pattern that we described earlier and make it wake up at 2 in the morning to fix this stuff. We’re going to actually codify the operational knowledge of managing the burden of these stateful applications so that we don’t have to wake up at 2 in the morning and do it anymore. Nobody wants to do that.

[00:20:32] BL: Yeah. That makes sense. Remember back at KubeCon years ago, I know it was the one in Seattle, where Brandon Philips was on stage talking about operators. He was basically saying that if we think about SysOps, system operators, this was a way to automate or capture the knowledge of our system administrators in scripts, or in a process, or in code – a la operators.

[00:20:57] D: The last part that I’ll add to this, which I think is actually what really describes the value of this idea to me, is that there are only so many people on the planet that do what the people in this blog post do. Maybe you’re one of them listening to this podcast. People who are operating software or operating infrastructure at scale – there just aren’t that many of us on the planet. So as we add more applications, as more people adopt the cloud native regime or start coming to a place where they can crank out more applications more quickly, we’re going to have to get to a place where we are able to automate the burden of managing those applications, because there just aren’t enough of us to be able to support the load that is coming. There just aren’t enough people on the planet that do this to be able to support that. That’s the thing that excites me most about the operator pattern: it gives us a place to start. It gives us a place to actually start thinking about managing that burden over time, because if we don’t start changing the way we think about managing that burden, we’re going to run out of people. We’re not going to be able to do it.

[00:22:05] NL: Yeah. It’s interesting. With stateful apps – we keep kind of coming back to stateful apps, because stateful apps are hard and stateless apps are easy, and we’ve created all these mechanisms around operating things with state because of how complicated it is to make sure that your data is ready, accessible and has integrity.
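As a rough illustration of the control pattern Duffie describes above – desired state, actual state, and logic that converges the two – here is a small, self-contained Go sketch. The replica counts and the polling loop are stand-ins of our own; a real controller or operator would watch the Kubernetes API for events on its resources and would put the run-book steps (add a member, replace a failed one, rejoin it to the database cluster) inside the reconcile function.

```go
package main

import (
	"fmt"
	"time"
)

// Toy state for the sketch: what we want versus what we currently observe.
type state struct {
	desiredReplicas int
	actualReplicas  int
}

// reconcile compares actual state to desired state and takes one step toward
// convergence. This is where an operator would codify operational knowledge.
func reconcile(s *state) {
	switch {
	case s.actualReplicas < s.desiredReplicas:
		s.actualReplicas++ // stand-in for "create a pod" or "add a database member"
		fmt.Println("scaled up, actual =", s.actualReplicas)
	case s.actualReplicas > s.desiredReplicas:
		s.actualReplicas-- // stand-in for "delete a pod" or "remove a member"
		fmt.Println("scaled down, actual =", s.actualReplicas)
	default:
		fmt.Println("converged, nothing to do")
	}
}

func main() {
	s := &state{desiredReplicas: 3, actualReplicas: 0}
	for i := 0; i < 5; i++ {
		reconcile(s)
		time.Sleep(100 * time.Millisecond) // real controllers are event-driven rather than polling
	}
}
```

The point of the pattern is that the 2 a.m. run book becomes the body of reconcile, so the loop, not a person, is what brings the system back to its desired state.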
That’s the big one that I keep not thinking about as a SysOps person coming into the Dev world. Data integrity is so important, and making sure that your data is exactly what it needs to be, and what it was the last time you checked it, is super important. It’s only something I’m really starting to grasp. That’s why all these things, like operators and all these mechanisms that we keep creating and recreating and recreating, keep coming about: because making sure that your stateful apps have the right data at the right time is so important.

[00:22:55] BL: Since you brought this up, and we just talked about why state is so hard, I want to introduce a new term to this conversation, the whole CAP theorem, where, in a distributed system at least, your data can be consistent, or your data can be available, or, if your distributed system splits into multiple parts, you can have partition tolerance. This is one of those computer science things where you can actually only pick two. You can have it be available and have partition tolerance, but your data won’t be consistent, or you can have consistency and availability, but you won’t have partition tolerance. If your cluster splits into two for some reason, the data will be bad. This is why it’s hard, this is why people have written lots of PhD dissertations on this subject, and this is why we are talking about it here today: because managing state, and particularly managing distributed state, is actually a very, very hard problem. But there’s software out there that will help us, and Kubernetes is definitely part of that, and stateful sets are definitely part of that as well.

[00:24:05] JR: I was just going to say, on those three points – consistency, availability and partition tolerance – obviously, we’d want all three if we could have them. Is there one that we most commonly trade off and give up, or does it go case-by-case?

[00:24:17] BL: Actually, it’s been proven. You can’t have all three. It’s literally impossible. It depends. If you have a MySQL server and you’re using MySQL to actually serve data out of it, you’re going to most likely get consistency and availability. If you have it replicated, you might not have partition tolerance. That’s something to think about, and there are different databases, and this is actually one of the reasons why there are different databases. This is why people use things like relational databases and key value stores: not because we really like the interfaces, but because they have different properties around the data.

[00:24:55] NL: That’s an interesting point and something that I had recently just been thinking about, like why are there so many different types of databases. I just didn’t know. I only recently heard of the CAP theorem as well, just before you mentioned it. I’m like, “Wow! That’s so fascinating.” The whole thing where you only pick two. You can’t get three. Josh, to kind of go back to your question really quickly, I think that partition tolerance is the one that we throw away the most. We’re willing to not be able to segregate our database as much as possible because C and A are just too important, I think. At least that’s what I’m saying, like I am wearing an [inaudible 00:25:26] shirt and [inaudible 00:25:27] is not partition tolerant. It’s bad at it.

[00:25:31] BL: This is why Google introduced Spanner, and Spanner in some situations can get all three, with tradeoffs and a lot of really, really smart stuff, but most people can’t run at this scale.
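To make the “pick two” tradeoff Brian describes a bit more concrete, here is a toy Go sketch – not a real database; the two-replica cluster, keys and modes are invented purely for illustration – of what the choice looks like once the replicas can no longer talk to each other: either refuse writes to stay consistent, or accept them and let the copies diverge.

```go
package main

import (
	"errors"
	"fmt"
)

// A toy two-replica key/value store. Once a partition separates the replicas,
// a write can either be rejected (consistent but not available) or accepted
// on one side (available but no longer consistent across replicas).
type replica struct {
	name string
	data map[string]string
}

type cluster struct {
	a, b        *replica
	partitioned bool
	cpMode      bool // true: prefer consistency; false: prefer availability
}

func (c *cluster) write(key, value string) error {
	if c.partitioned {
		if c.cpMode {
			return errors.New("rejecting write: cannot replicate during partition")
		}
		// Availability mode: take the write on one side only; the replicas now disagree.
		c.a.data[key] = value
		return nil
	}
	// Healthy cluster: both replicas see the write.
	c.a.data[key] = value
	c.b.data[key] = value
	return nil
}

func main() {
	c := &cluster{
		a: &replica{name: "a", data: map[string]string{}},
		b: &replica{name: "b", data: map[string]string{}},
	}

	_ = c.write("stock:widget", "10") // before the partition, both replicas agree

	c.partitioned = true

	c.cpMode = true
	fmt.Println("CP during partition:", c.write("stock:widget", "9"))

	c.cpMode = false
	_ = c.write("stock:widget", "9")
	fmt.Println("AP during partition: a =", c.a.data["stock:widget"], "b =", c.b.data["stock:widget"])
}
```

Real systems sit at many points between these two extremes (quorum writes, read replicas, conflict resolution), which is part of why so many different databases exist.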
But we do need to think about partition tolerance, especially with data, whenever – let’s say you run a store and you have multiple instances across the world and someone buys something from inventory, what does your inventory look like at any particular point? You don’t have to answer my question, of course, but think about that. These are still very important problems: if fiber gets cut across the Atlantic, now I’ve sold more things than I have. Carlisia, speaking to you as someone who’s only been a developer, have you moved your thoughts on state any further?

[00:26:19] CC: Well, I feel that I’m clear on – well, I think you need to clarify your question better for me. If you’re asking if I understand what it means, I understand what it means. But I actually was thinking of asking this question to all of you, because I don’t know the answer, if that’s the question you’re asking me. I want to put that to the group. Do you recommend people, as in like now-ish, to run stateful workloads? We need to talk about what workloads mean. Run stateful apps or databases inside, if they’re running a Kubernetes cluster or if they’re planning for that – do you all as experts recommend that they should already be looking into doing that, or should they, for now, be running their stateful apps or databases outside of the cloud native ecosystem and just connecting the two? Because if that’s what your question was, I don’t know.

[00:27:21] BL: Well, I’ll take this first. I think that we should be spending lots more time than we are right now coming up with community-tested solutions around using stateful sets to their best ability. What that means is, let’s say you’re running a database inside of Kubernetes and you’re using a stateful set to manage it. What we need to figure out is what happens when my database goes down. The pod just gets killed? When I bring up a new version, I need to make sure that I have the correct software to verify integrity, rebuild things, so that when it comes back up, it comes back up correctly. That’s what I think we should be doing.

[00:27:59] JR: For me, I think, working with customers, at least Kubernetes-oriented folks, when they’re trying to introduce Kubernetes as the orchestration part of their overall platform, I’m usually just trying to kind of meet them where they’re at. If they’re new to Kubernetes and distributed systems as a whole, and we have stateless – let’s call them maybe simpler – applications to start with, I generally have them lean into those first, because we already have so much in front of us to learn about. I think it was either Brian or Duffie who said it introduces a whole bunch more complexity. You have to know what you’re doing. You have to know how to operate these things. If they’re new to Kubernetes, I generally will still advise starting with stateless. But that being said, so many of the customers that we work with are very interested in running stateful workloads on Kubernetes.

[00:28:42] CC: But just to clarify what you said, Josh, because you spoke like an expert, but I still have beginner’s ears. You said something that sounded to me like you recommend that you go stateless. It sounded to me like that. What you really said is that they take the stateless part of what they have – which they might already have, or they might have to change things to make it stateless – and put that in. You’re not suggesting that, “Oh! You can’t do stateful anymore.
You need to just do everything stateless.” What you’re saying is take the stateless part of your system, put that in Kubernetes, because that is really well-tested, and keep the stateful part outside of that ecosystem. Is that right?

[00:29:27] JR: I think that’s a better way to put it. Again, it’s not that Kubernetes can’t do stateful. It’s more a concept of biting off more than you can chew. We still work with a lot of people who are very new to these distributed systems concepts, and to take on running stateful workloads – if we could just delegate that to some other layer, like outside of the cluster, that could be a better place to start, at least in my experience. Nicholas and Duff might have different –

[00:29:51] NL: Josh, you basically nailed what I was going to say, where it’s like, if the team that I’m working with is interested in taking on the complexity of maintaining their databases, their stateful sets, and making sure that they have data integrity and availability, then I’m all for them using Kubernetes for a stateful set. Kubernetes can run stateful applications, but there is all this complexity that we keep talking about around maintaining data and all that. If they’re willing to take on that complexity, great, it’s there for you. If they’re not, if they’re a little bit kind of behind – not behind, but if they’re kind of starting out on their Kubernetes journey or their distributed systems journey, I would recommend that they move that complexity to somebody else and start with something a little bit easier, like a stateless application. There are a lot of good services that provide data as a service, right? You’ve got things like RDS, which is great for creating stateful applications. You can leverage it anytime, and you’ve got like dedicated wires too. I would point them there first if they don’t want to take on that complexity.

[00:30:51] D: I completely agree with that. An important thing I would add, in response to the stateful set piece here, is that, as we’ve already described, managing a stateful application like a database does come with some complexity. So you should really carefully look at just what these different models provide you. Whether that model is making use of a stateful set, which provides you ordinality – ensuring that things start up in a particular order – and some of the other capabilities around that stuff. But it won’t, for example, manage some of the complexity. A stateful set won’t, for example, try and issue a command to a new member to make sure that it’s part of an existing database cluster. It won’t manage that kind of stuff. So you have to really be careful about the different models that you’re evaluating when trying to think about how to manage a stateful application like a database. I think that’s actually why the topic of an operator came up earlier: there are a lot of primitives within Kubernetes in general that provide you a lot of capability for managing things like stateful applications, but they may not entirely suit your needs. Because of the complexity of stateful applications, you have to be really careful about what you adopt and where you jump in.

[00:32:04] CC: Yeah. I know just from working with Velero, which is a tool for doing backup, recovery and migration of Kubernetes clusters – I know that we back up volumes. So if you have something mounted on a volume, we can back that up. I know for a fact that people are using that to back up stateful workloads.
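For readers who want to see what the stateful set primitive Duffie mentions actually gives you, here is a sketch that builds one with the Kubernetes Go API types and prints it as YAML. This is only an illustration: it assumes the k8s.io/api, k8s.io/apimachinery and sigs.k8s.io/yaml modules, the image and names are placeholders, and a real volume claim template would also set a storage class and size. Note that nothing here encodes the database-level logic of joining a new member or rebuilding a failed one – that gap is exactly what operators are meant to fill.

```go
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	replicas := int32(3)
	labels := map[string]string{"app": "demo-db"}

	sts := appsv1.StatefulSet{
		TypeMeta:   metav1.TypeMeta{APIVersion: "apps/v1", Kind: "StatefulSet"},
		ObjectMeta: metav1.ObjectMeta{Name: "demo-db"},
		Spec: appsv1.StatefulSetSpec{
			// A headless Service of this name gives each pod a stable identity:
			// demo-db-0, demo-db-1, demo-db-2, brought up in order.
			ServiceName: "demo-db",
			Replicas:    &replicas,
			Selector:    &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "db",
						Image: "example/db:latest", // placeholder image
						VolumeMounts: []corev1.VolumeMount{{
							Name:      "data",
							MountPath: "/var/lib/db",
						}},
					}},
				},
			},
			// Each replica gets its own claim (data-demo-db-0, data-demo-db-1, ...)
			// that survives the pod being rescheduled. Size and storage class omitted.
			VolumeClaimTemplates: []corev1.PersistentVolumeClaim{{
				ObjectMeta: metav1.ObjectMeta{Name: "data"},
				Spec: corev1.PersistentVolumeClaimSpec{
					AccessModes: []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
				},
			}},
		},
	}

	out, err := yaml.Marshal(sts)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```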
We need to talk about workloads. But in any case, one thing that I think one of you mentioned is that you definitely also need to look at a backup and recovery strategy, which is ever more important if you’re doing stateful workloads.

[00:32:46] NL: That’s the only time it’s important. If you’re doing stateless, who cares?

[00:32:49] BL: Have we defined what a workload is?

[00:32:50] CC: Yeah. But let me say something. Yeah, I think we should do an episode on that – maybe, maybe not. We should do an episode on GitOps and that type of related thing, because even though – things are stateless, but I don’t want to get into it. Your cluster will change state. You can recover stuff from a fresh version. But as it goes through a lifecycle, it will change state and you might want to keep that state. I don’t know. I’m not the expert in that area, but let’s talk about workloads, Brian. Okay. Let me start talking about workloads. I never heard the term workload until I came into the cloud native world, and that was about a year ago or so, when I started looking at this space more closely. Maybe a little bit before a year ago. It took me forever to understand what a workload was. Now I understand – especially today, we were talking about it a little bit before we started recording. Let me hear from you all what it means to you.

[00:34:00] BL: This is one of those terms, and I’m sure if you ask any ex-Googlers about this, they’ll probably agree: this is a Google term that we actually have zero context about why it’s a term. I’m sure we could ask somebody and they would tell us, but workloads, to me personally, are anything that ultimately creates a pod. Deployments create replica sets, which create pods. That whole thing is a workload. That’s how I look at it.

[00:34:29] CC: Before there were pods, were there workloads, or is a workload a new thing that came along with pods?

[00:34:35] BL: Once again, these words don’t make any sense to us, because they’re Google terms. I think that a pod is a part of a workload, like a deployment is a part of a workload, like a replica set is part of a workload. Workload is the term that encompasses an entire set of objects.

[00:34:52] D: I think of a workload as a subset of an application. When I think of an application or a set of microservices, I might think of each of the services that make up that entire application as a workload. I think of it that way because that’s generally how I would divide it up, to Brian’s point, into different deployments or different stateful sets or different – that sort of stuff. Thinking of them each as their own autonomous piece, and altogether they form an application. That’s how I think of it.

[00:35:20] CC: To connect to what Brian said, a deployment will always run in pods, which is super confusing if you’re not looking at these things – just so people understand, because it took me forever to understand the connection between a workload, a deployment and a pod. Pods contain – if you have a deployment that you’re going to ship to Kubernetes – I don’t know if ship is the right word – you’re going to need to run it on Kubernetes. That deployment needs to run somewhere, in some artifact, and that artifact is called a pod.

[00:35:56] NL: Yeah. Going back to what Duffie said really quickly. A workload to me was always a process, kind of like not just a pod necessarily, but whatever it is that, if you’re like, “I just need to get this to run,” whatever that is. To me that was always a workload, but I think I’m wrong.
I think I’m oversimplifying it. I’m just like, whatever your process is.

[00:36:16] BL: Yeah. I would give you – the reason why I would not say that is because a pod can run multiple containers at once, which ergo is multiple processes. That’s why I say it that way.

[00:36:29] NL: Oh! You changed my mind.

[00:36:33] BL: The reason I bring this up, and this is probably a great idea for a future show, is all the jargon and terminology that we use in this land that we just take as everyone knows, but we don’t all know it, and it should be a great conversation to have around that. But the reason I always bring up the whole workload thing is because, when we think about state, you can’t have state without workloads, really. I just wanted to make sure that we tied those two things together.

[00:36:58] CC: Why can you not have state without workloads? What does that mean?

[00:37:01] BL: Well, the reason you can’t have state without workloads is because something is going to have to create that state, whether that workload is running in or outside a cluster. Something is going to have to create it. It just doesn’t come out of nowhere.

[00:37:11] CC: That goes back to what Nick was saying, that he thinks a workload is a process. Was that what you said, Nick?

[00:37:18] NL: It is, yeah, but I’m reneging on that.

[00:37:23] CC: At least I could see why you said that. Sorry, Brian. I cut you off.

[00:37:28] BL: What I was saying is a workload ultimately is one or more processes. It’s not just a process. It’s not a single process. It could be 10, it could be 1.

[00:37:39] JR: I have one final question, and we can bail on this and edit it out if it’s not a good one to end with. I hope it’s not too big, but I think maybe one thing we overlooked is just why it’s hard to run stateful workloads in these new systems like Kubernetes. We talked about how there’s more complexity and stuff, but there might be some room to talk about – people have been spinning up an EC2 server, a server on the web, and running MySQL on it forever. Why, in the Kubernetes world of pods and things, is it a little bit harder to run, say, MySQL just [inaudible 00:38:10]. Is that something worth diving into?

[00:38:13] NL: Yeah, I think so. I would say that applications like databases particularly are less resilient to outages. While Kubernetes itself is dedicated to – or most container orchestrators, but Kubernetes specifically, are dedicated to – running your pods continuously as long as they can, it is still somewhat of a shifting landscape. You do have priority and preemption. If you don’t set those things up properly, or if there’s just a total failure of your system at large, your stateful application can just go down at any time. Then how do you reconcile the outage and the data, whatever data might have gotten lost? Those sorts of things become significantly more complicated in an environment like Kubernetes, where you don’t necessarily have access to a command line to run the commands to recover as easily. You may, but it’s not the same.

[00:39:01] BL: Yes. You’ve got to understand what databases do. Disk is slow, whether you have spinning disk or you have disk on chip, like SSD. What databases do in a lot of cases is they store things in memory. So if it goes away, that data didn’t get stored. In other cases, what databases do is they have these huge transactional logs. Maybe they write them out to files, and then they process the transaction log whenever they have CPU time.
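A toy version of the transaction-log pattern Brian is describing, in plain Go (the file name and entries are invented for illustration): each change is appended to a log and flushed to disk before it is acknowledged, which is what makes recovery after a sudden kill possible, and it is also why anything still sitting only in memory at that moment is lost.

```go
package main

import (
	"fmt"
	"os"
)

// appendEntry writes one change to the log and forces it to disk before the
// caller treats the write as done. Databases do something similar (much more
// efficiently) so they can replay the log after a crash.
func appendEntry(f *os.File, entry string) error {
	if _, err := f.WriteString(entry + "\n"); err != nil {
		return err
	}
	// Without this sync the bytes may still be in the OS page cache when the
	// process is killed, which is one way an abrupt kill loses data.
	return f.Sync()
}

func main() {
	f, err := os.OpenFile("demo.wal", os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0o644)
	if err != nil {
		panic(err)
	}
	defer f.Close()

	for _, e := range []string{"SET stock:widget 10", "SET stock:widget 9"} {
		if err := appendEntry(f, e); err != nil {
			panic(err)
		}
		fmt.Println("acknowledged:", e)
	}
}
```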
If a database dies just suddenly, maybe its state is inconsistent because it had items in a queue that were to be processed and haven’t been processed. Now it doesn’t know what’s going on, which is why –

[00:39:39] NL: That’s interesting. I didn’t know that.

[00:39:40] BL: If you kill MySQL, like kill mysqld with a -9, that’s why it might not come back up.

[00:39:46] JR: Yeah. Going back to Kubernetes as an example, we are living in this newer world where things can get rescheduled and moved around and killed and their IPs changed and things. It seems like this environment is, should I say, more ephemeral, and those types of considerations are becoming more complex.

[00:40:04] NL: I think that really nails it. Yeah. I didn’t know that there were transactional logs in databases. I feel like I should have known that, but I just had no idea.

[00:40:11] D: There’s one more part to the whole stateful, stateless thing that I think is important to cover, but I don’t know if we’ll be able to cover it entirely in the time that we have left, and that is the network perspective. If you think about the types of connections coming into an application, we refer to some of those connections as stateful and stateless. I think that’s something we could tackle in our remaining time, or what’s everybody’s thought?

[00:40:33] JR: Why don’t you try giving us maybe a quick summary of it, Duffie, and then we can end on that.

[00:40:36] CC: Yeah. I think it’s a good idea to talk about networking and address this in that context – I’m just thinking of an idea for an episode. But give us a quick rundown.

[00:40:45] D: Sure. With a lot of the kind of older monolithic applications, the way that you would scale these things is you would have multiple of them, and then you would have some intelligence in the way that you’re routing connections down to those applications that would ensure that when Bob accesses a website and he authenticates, he’s going to authenticate to one specific instance of this application, and the intelligence up in the frontend is going to handle the routing to make sure that Bob’s connection always comes back to that same instance. This is an older pattern. It’s been around for a very long time, and it’s certainly the way that we first learned to scale applications before we decided to break things into microservices and handle a lot of this routing in a more resilient way. That was kind of one of the early versions of how we do this, and that is a pretty good example of a stateful session, in that there is actually some – perhaps Bob has authenticated and he has a cookie that ensures that when he comes back to that particular application, a lot of the settings, his browser settings, whether he’s using the dark theme or the light theme, that sort of stuff, is persisted on the server side rather than on the client side. That’s kind of what I mean by stateful sessions. Stateless sessions mean it doesn’t really matter whether the user is terminating at the same endpoint, because we’ve managed to keep the state with the client. We’re handling state on the browser side of things rather than on the server side of things. So you’re not necessarily gaining anything by pushing that connection back to the same specific instance, but just to a service that is more widely available. There are lots of examples of this. I mean, Brian’s example of Google earlier. Obviously, when I come back to Google, there are some things I want it to remember.
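A minimal Go sketch of the two session styles Duffie describes. The paths, cookie names and the theme example are assumptions, and a production version would sign or encrypt anything it trusts from the client. The first handler keeps Bob’s setting in this replica’s memory, keyed by a session cookie, so the load balancer has to keep sending him back to the same instance; the second carries the setting in the cookie itself, so any replica can answer and none of them has to remember him.

```go
package main

import (
	"fmt"
	"net/http"
	"sync"
)

// Server-side (stateful) sessions: the theme lives in this replica's memory,
// so losing the replica, or routing Bob elsewhere, loses or misses his state.
var (
	mu       sync.Mutex
	sessions = map[string]string{} // session ID -> theme
)

func statefulTheme(w http.ResponseWriter, r *http.Request) {
	c, err := r.Cookie("session-id")
	if err != nil {
		http.Error(w, "no session", http.StatusUnauthorized)
		return
	}
	mu.Lock()
	theme := sessions[c.Value]
	mu.Unlock()
	fmt.Fprintln(w, "theme from server-side session:", theme)
}

// Client-side (stateless) sessions: the preference travels with the request,
// so any replica can serve it and nothing is lost when one goes away.
func statelessTheme(w http.ResponseWriter, r *http.Request) {
	c, err := r.Cookie("theme")
	if err != nil {
		fmt.Fprintln(w, "theme from client: default")
		return
	}
	fmt.Fprintln(w, "theme from client:", c.Value)
}

func main() {
	http.HandleFunc("/stateful/theme", statefulTheme)
	http.HandleFunc("/stateless/theme", statelessTheme)
	http.ListenAndServe(":8080", nil)
}
```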
I want it to remember that I’m logged in as myself. I want it to remember that I’ve used a particular – I want it to remember my history. I want it to remember that kind of stuff so that I can go back and find things that I looked at before. There are a ton of examples of this when we think about it.

[00:42:40] JR: Awesome! All right, everyone. Thank you for joining us for episode 6, Stateful and Stateless. Signing off. I’m Josh Rosso, and going across the line, thank you, Nicholas Lane.

[00:42:54] NL: Thank you so much. This was really informative for me.

[00:42:56] JR: Carlisia Campos.

[00:42:57] CC: This was a great conversation. Bye, everybody.

[00:42:59] JR: Our newcomer, Brian Liles.

[00:43:01] BL: Until next time.

[00:43:03] JR: And Duffie Cooley.

[00:43:05] D: Thank you so much, everybody.

[00:43:06] JR: Thanks all.

[00:43:07] CC: Bye!

[END OF EPISODE]

[0:50:00.3] ANNOUNCER: Thank you for listening to The Podlets Cloud Native Podcast. Find us on Twitter at https://twitter.com/ThePodlets and on the http://thepodlets.io/ website, where you'll find transcripts and show notes. We'll be back next week. Stay tuned by subscribing.

[END]

See omnystudio.com/listener for privacy information.