In this episode of Inside the Network, we sit down with Jeetu Patel, Cisco's Executive Vice President and Chief Product Officer. Jeetu previously led Cisco's Security and Collaboration business units. Under his leadership, these divisions have become major growth engines fueled by AI-driven innovation, strategic acquisitions, and a renewed focus on user experience.

With a market cap of $250 billion and a security business generating $10 billion in revenue, Cisco is evolving into a different kind of startup, one that moves with speed and urgency. Jeetu shares why he joined Cisco to spearhead this transformation and how the company is positioning itself in the cybersecurity space, competing with incumbents like Palo Alto Networks and Zscaler, as well as disruptors like Wiz and Cato Networks. We explore how Cisco's $28 billion acquisition of Splunk, along with key deals like Armorblox, Isovalent, and Robust Intelligence, is reshaping its security business. Jeetu also dives into the challenges CISOs face today - tool sprawl, talent shortages, and AI-driven threats - and how Cisco plans to simplify security at scale.

For founders, Jeetu breaks down his six-vector rubric for evaluating opportunities, the key factors Cisco considers in acquisitions, and what startup leaders need to do to get Cisco's attention. He also provides an inside look at Cisco's legendary distribution machine and how startups can leverage it for hypergrowth. Finally, we discuss Jeetu's concept of a "personal board" and his views on navigating geopolitical challenges in the tech industry.
In this episode, we sit down with Donia Chaiehloudj, a Senior Software Engineer at Isovalent, to discuss her diverse career journey, from working in image processing and electronics to becoming a Go developer in the cloud-native space. Donia shares her experiences transitioning into software development, her work with Go and Kubernetes, and her leadership role in the GDG Sophia-Antipolis community. She also touches on her passion for public speaking, open-source contributions, and balancing her career with life as a new mother. This episode is perfect for anyone interested in tech, community building, and career growth in software engineering.

00:00 Introduction
1:57 What is Donia Doing Today?
14:00 High School Interests
18:34 Engineering School
35:31 Internship Work / Software Transition
42:15 Mental Health in School
50:00 Graduating University / Job Searching
58:00 Becoming a Java Developer
1:10:00 Public Speaking / Community
1:23:50 Contact Information

Connect with Donia:
Twitter: https://twitter.com/doniacld
Linkedin: https://www.linkedin.com/in/donia-chaiehloudj/

Mentioned in today's episode:
Isovalent: https://isovalent.com/
TinyGo: https://tinygo.org/

Want more from Ardan Labs? You can learn Go, Kubernetes, Docker & more through our video training, live events, or through our blog!
Online Courses: https://ardanlabs.com/education/
Live Events: https://www.ardanlabs.com/live-training-events/
Blog: https://www.ardanlabs.com/blog
Github: https://github.com/ardanlabs
How are modern cloud-native environments changing the way we handle security? Liz Rice, Chief Open Source Officer at Isovalent, explains why traditional IP-based network policies are becoming outdated and how game-changers like Cilium and eBPF, which leverage Kubernetes identities, offer more effective and readable policies. We also discuss the role of community-driven projects under the CNCF, and she shares tips for creating strong, future-proof solutions. What challenges should we expect next? Tune in to find out!

Liz Rice is Chief Open Source Officer with eBPF specialists Isovalent, creators of the Cilium cloud native networking, security and observability project. She is the author of Container Security and Learning eBPF, both published by O'Reilly, and she sits on the CNCF Governing Board and on the Board of OpenUK. She was Chair of the CNCF's Technical Oversight Committee in 2019-2022, and Co-Chair of KubeCon + CloudNativeCon in 2018. She has a wealth of software development, team, and product management experience from working on network protocols and distributed systems, and in digital technology sectors such as VOD, music, and VoIP. When not writing code, or talking about it, Liz loves riding bikes in places with better weather than her native London, competing in virtual races on Zwift, and making music under the pseudonym Insider Nine.
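The contrast Liz draws between IP-based and identity-based policies can be sketched in miniature. This is a toy illustration only, not Cilium's actual datapath, and the function names are hypothetical: the point is simply that a rule keyed on workload labels survives a pod being rescheduled to a new IP, while an IP-keyed rule does not.

```python
# Toy sketch: IP-based vs identity (label)-based policy matching.
# Not Cilium's implementation; names and structures are illustrative.

def ip_policy_allows(allowed_ips, src_ip):
    # Traditional firewall-style rule: tied to a concrete address.
    return src_ip in allowed_ips

def identity_policy_allows(required_labels, src_labels):
    # Identity-based rule: matches the workload's labels, not its address.
    # dict.items() views support subset comparison in Python 3.
    return required_labels.items() <= src_labels.items()

# A frontend pod is rescheduled and comes back with a new IP.
old_ip, new_ip = "10.0.1.7", "10.0.2.42"
labels = {"app": "frontend", "env": "prod"}

assert ip_policy_allows({old_ip}, old_ip)            # worked before reschedule
assert not ip_policy_allows({old_ip}, new_ip)        # IP rule silently breaks
assert identity_policy_allows({"app": "frontend"}, labels)  # label rule still holds
```

The readability benefit Liz mentions falls out of the same idea: "allow app=frontend" says what the rule means, where "allow 10.0.1.7" does not.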
What if the future of cloud-native networking could revolutionize everything you thought you knew about Kubernetes? Join us on this episode of Cables 2 Clouds as we continue our "Cloud Demystified" series with a deep dive into Kubernetes networking. We're thrilled to have Nicolas Vibert, a seasoned pro from Isovalent with nearly two decades of experience at Cisco, VMware, and HashiCorp. Together, we explore the essentials of Kubernetes networking through the innovative lens of Cilium, a CNI specifically designed for cloud-native environments. Nico shares his unique journey of learning Kubernetes from a network engineer's perspective, emphasizing the critical role of hands-on experience and mentorship. We also discuss the creation of hands-on labs and educational materials tailored for network engineers. This segment is loaded with analogies to help traditional network professionals grasp key Kubernetes concepts with ease.

Ever wondered how Kubernetes orchestrates its complex networking operations? We break down the intricacies of the Kubernetes control plane, likening it to traditional network engineering concepts for clarity. Discover the limitations of Kubernetes' default networking tool, kube-proxy, and why modern CNIs like Cilium offer a more efficient solution for large-scale deployments. Nico explains how Cilium leverages eBPF maps for effective traffic routing and load balancing within Kubernetes clusters. Tune in for invaluable insights into the evolving landscape of cloud-native networking solutions.

Check out the Fortnightly Cloud Networking News
Visit our website and subscribe: https://www.cables2clouds.com/
Follow us on Twitter: https://twitter.com/cables2clouds
Follow us on YouTube: https://www.youtube.com/@cables2clouds/
Follow us on TikTok: https://www.tiktok.com/@cables2clouds
Merch Store: https://store.cables2clouds.com/
Join the Discord Study group: https://artofneteng.com/iaatj
Art of Network Engineering (AONE): https://artofnetworkengineering.com
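Nico's contrast between kube-proxy and Cilium's eBPF maps can be approximated with a toy lookup. This is an assumed simplification (real iptables chains and eBPF service maps differ in many details): the scaling argument is that rule-list evaluation grows with the number of services, while a hash-map lookup keyed by service address stays constant.

```python
# Toy sketch: linear rule walk (iptables-style) vs hash-map lookup (eBPF-map-style).
# Illustrative only; not how kube-proxy or Cilium are actually implemented.
import random

def iptables_style_lookup(rules, dst):
    # Walk the rule list until one matches: cost grows with service count.
    for match, backends in rules:
        if match == dst:
            return random.choice(backends)
    return None

def ebpf_map_style_lookup(service_map, dst):
    # Single hash lookup keyed by (service IP, port), independent of scale.
    backends = service_map.get(dst)
    return random.choice(backends) if backends else None

# 99 toy services, each with one backend pod.
rules = [(("10.96.0.%d" % i, 80), ["10.0.0.%d" % i]) for i in range(1, 100)]
service_map = dict(rules)

dst = ("10.96.0.50", 80)
assert iptables_style_lookup(rules, dst) == "10.0.0.50"
assert ebpf_map_style_lookup(service_map, dst) == "10.0.0.50"
```

Both return the same backend; the difference that matters at large scale is how much work each one does to get there.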
eBPF is a kernel technology enabling high-performance, low-overhead tools for networking, security, and observability. In simpler terms: eBPF makes the kernel programmable!

Tune in to this episode whether you have never heard about eBPF, use eBPF-based tools such as bcc, Cilium, Falco, Tetragon, Inspektor Gadget ... or develop your own eBPF programs!

Liz Rice, Chief Open Source Officer at Isovalent, kicks this episode off with a brief introduction of eBPF, explains how it works, which use cases it has enabled, and why eBPF can truly give you superpowers! In our conversation we dive deeper into the performance aspects of eBPF: how and why tools like Cilium outperform classical network load balancers, how performance engineers can use it, and how the kernel internally handles eBPF executions.

We discussed a lot of follow-up material - here are all the relevant links:
Liz's slide deck on "Unleashing the kernel with eBPF": https://speakerdeck.com/lizrice/unleashing-the-kernel-with-ebpf
eBPF Documentary on YouTube: https://www.youtube.com/watch?v=Wb_vD3XZYOA
Learning eBPF GitHub repo accompanying her book: https://github.com/lizrice/learning-ebpf
eBPF website: https://ebpf.io
Liz on LinkedIn: https://www.linkedin.com/in/lizrice/
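The "eBPF makes the kernel programmable" idea can be loosely illustrated in user space. This is a rough analogy only: real eBPF programs are bytecode that the kernel verifies for safety and JIT-compiles, attached to hook points like socket or tracepoint events. Here a plain dict of callbacks stands in for those hooks.

```python
# Loose analogy (NOT real eBPF): small programs attached to named hook points,
# run on each event, returning a verdict - without modifying the "kernel" itself.

hooks = {}

def attach(hook_name, program):
    # Like loading an eBPF program and attaching it to a kernel hook.
    hooks.setdefault(hook_name, []).append(program)

def fire(hook_name, event):
    # Every attached program sees the event; any DROP verdict wins.
    verdicts = [prog(event) for prog in hooks.get(hook_name, [])]
    return "DROP" if "DROP" in verdicts else "PASS"

# Attach a tiny filter to a socket hook: block telnet, pass everything else.
attach("socket_recv", lambda ev: "DROP" if ev["port"] == 23 else "PASS")

assert fire("socket_recv", {"port": 80}) == "PASS"
assert fire("socket_recv", {"port": 23}) == "DROP"
```

The key property the analogy tries to capture: behavior at the hook point changes by attaching a new program, not by patching and rebuilding the kernel.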
How is eBPF impacting Kubernetes network security? In this episode, recorded LIVE at KubeCon EU Paris 2024, Liz Rice, Chief Open Source Officer at Isovalent, took us through the technical nuances of eBPF and its role in enabling dynamic, efficient network policies that go beyond traditional security measures. She also discusses Tetragon, the new subproject under Cilium, designed to enhance runtime security with deeper forensic capabilities. A great conversation for anyone involved in Kubernetes workload management, offering a peek into the future of cloud-native technologies and the evolving landscape of network security.

Guest Socials: Liz's LinkedIn
Podcast Twitter - @CloudSecPod

If you want to watch videos of this LIVE STREAMED episode and past episodes, check out our other Cloud Security social channels:
- Cloud Security Podcast - YouTube
- Cloud Security Newsletter
- Cloud Security BootCamp

Questions asked:
(00:00) Introduction
(01:46) A bit about Liz Rice
(02:11) What is eBPF and Cilium?
(03:24) SELinux vs eBPF
(04:11) Business use case for Cilium
(06:37) Cilium vs Cloud Managed Services
(08:51) Why was there a need for Tetragon?
(11:20) Business use case for Tetragon
(11:32) Projects related to Multi-Cluster Deployment
(12:45) Where can you learn more about eBPF and Tetragon
(13:50) Hot Topics from KubeCon EU 2024
(15:07) The Fun Section
(15:35) How has KubeCon changed over the years?

Resources spoken about during the interview: Cilium, Tetragon, eBPF
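Tetragon's runtime-security idea, matching kernel events against policy while keeping forensic context attached to the alert, could be sketched like this. This is a hypothetical toy, not Tetragon's actual policy format or event schema; the field names are invented for illustration.

```python
# Toy sketch of runtime-security event matching (NOT Tetragon's real policy model):
# flag a process-exec event that violates policy, keeping the full event as
# forensic context (which binary, which args, which pod).

def check_exec(policy, event):
    if event["binary"] in policy["deny_binaries"]:
        # Keep the whole event with the alert so responders have context.
        return {"action": "alert", "context": event}
    return {"action": "allow"}

policy = {"deny_binaries": {"/usr/bin/nc"}}

ok = check_exec(policy, {"binary": "/bin/ls", "args": [], "pod": "web-1"})
bad = check_exec(policy, {"binary": "/usr/bin/nc", "args": ["-l"], "pod": "web-1"})

assert ok["action"] == "allow"
assert bad["action"] == "alert" and bad["context"]["pod"] == "web-1"
```

The "deeper forensic capabilities" Liz mentions correspond, in this sketch, to carrying the event context with the verdict rather than emitting a bare deny.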
After the news of Cisco's recent acquisition of Isovalent, as well as last week's exciting announcement of Cisco Hypershield, we sat down with Natilik's Security Solutions Director, Rob Eldridge, and Multi-Cloud Solutions Director, Matyáš Prokop, to get their initial thoughts on what this means for the industry.
Stephen Augustus, Head of Open Source at Cisco, and Liz Rice, Chief Open Source Officer at Isovalent, discuss Cisco's acquisition of Isovalent, which has closed since recording, bringing together two teams with long-standing expertise in open source cloud native technologies, observability, and security. The two share their excitement about working together, emphasizing the alignment of Isovalent with Cisco's security division and the potential enhancements this acquisition brings to open source projects like Cilium and eBPF. They explore the implications for the open source community, and the continuous investment and development in these projects under Cisco's umbrella. We discuss the ways this merger could innovate security practices, enhance infrastructure observability, and leverage AI for more intelligent networking solutions.

00:00 Welcome and Introduction
00:22 Cisco's Acquisition of Isovalent
00:53 The Excitement and Potential of the Acquisition
02:14 Strategic Alignment and Future Vision
04:03 Open Source Commitment and Community Impact
06:53 The Road Ahead: Integration and Innovation
19:49 Exploring AI and Future Technologies at Cisco
26:03 Reflections and Closing Thoughts

Resources:
Cilium, eBPF and Beyond | Open at Intel (podbean.com)
The Art of Open Source: A Conversation with Stephen Augustus | Open at Intel (podbean.com)

Guests:
Liz Rice is Chief Open Source Officer with eBPF specialists Isovalent, creators of the Cilium cloud native networking, security and observability project. She was Chair of the CNCF's Technical Oversight Committee in 2019-2022, and Co-Chair of KubeCon + CloudNativeCon in 2018. She is also the author of Container Security, published by O'Reilly. She has a wealth of software development, team, and product management experience from working on network protocols and distributed systems, and in digital technology sectors such as VOD, music, and VoIP.
When not writing code, or talking about it, Liz loves riding bikes in places with better weather than her native London, competing in virtual races on Zwift, and making music under the pseudonym Insider Nine. Stephen Augustus is a Black engineering director and leader in open source communities. He is the Head of Open Source at Cisco, working within the Strategy, Incubation, & Applications (SIA) organization. For Kubernetes, he has co-founded transformational elements of the project, including the KEP (Kubernetes Enhancements Proposal) process, the Release Engineering subproject, and Working Group Naming. Stephen has also previously served as a chair for both SIG PM and SIG Azure. He continues his work in Kubernetes as a Steering Committee member and a Chair for SIG Release. Across the wider LF (Linux Foundation) ecosystem, Stephen has the pleasure of serving as a member of the OpenSSF Governing Board and the OpenAPI Initiative Business Governing Board. Previously, he was a TODO Group Steering Committee member, a CNCF (Cloud Native Computing Foundation) TAG Contributor Strategy Chair, and one of the Program Chairs for KubeCon / CloudNativeCon, the cloud native community's flagship conference. He is a maintainer for the Scorecard and Dex projects, and a prolific contributor to CNCF projects, amongst the top 40 (as of writing) code/content committers, all-time. In 2020, Stephen co-founded the Inclusive Naming Initiative, a cross-industry group dedicated to helping projects and companies make consistent, responsible choices to remove harmful language across codebases, standards, and documentation. He has previously held positions at VMware (via Heptio), Red Hat, and CoreOS. Stephen is based in New York City.
Take a Network Break! This week we cover Hypershield, a new Cisco security product that uses technology from its Isovalent acquisition. We parse a blog from Broadcom CEO Hock Tan on the company’s VMware strategy, and discuss China’s latest counter-punch in its tech infrastructure fight with the United States. A KPMG survey reveals that executives... Read more »
Welcome back to the Focus On DevOps podcast! We've left our short winter break behind and are back in the new year with full energy. And we have a little surprise for you right away: Volkmar and Enrico have gotten reinforcement - Moritz Meid, our security Zivi, is now part of the team! In our first news episode of the year, we dive straight into the features of Kubernetes 1.29. What's new? What has improved, and why should you care? Alongside the recent acquisitions of PingSafe and Isovalent, we discuss the latest container security vulnerabilities in the form of the "Leaky Vessels" flaws, which Snyk identified at the end of January.
Prepare to navigate the ever-shifting seas of network infrastructure with us as we chart a course through the recently announced partnership between Frontier Communications and Nile's network-as-a-service. We're unboxing the mechanics behind Nile's service model and assessing how this alliance could revolutionize the managed service realm for enterprises and MSPs. If you're mulling over an agile networking solution that's managed with precision, you won't want to miss this in-depth analysis of what this collaboration means for the industry.

Our journey doesn't end there; we're also sizing up the cloud service expansion race with Equinix's new Fabric Cloud Router. By placing it side-by-side with competitors like Megaport, we uncover the hidden intricacies and potential costs that come with scaling cloud capabilities. And for a twist, we spotlight Google Cloud's dance with Hugging Face in a partnership that could potentially transform AI development, offering a communal playground for innovation that could be the next GitHub for machine learning pros.

Finally, we cast an analytical lens on the Biden administration's latest chess move in international tech policy, aiming to restrict China's leverage of American computing might for AI development. We'll unravel the threads of 'Know Your Customer' regulations and their potential impact on the titans of tech, from AWS to Azure. Wrapping up, we decode a recent Isovalent blog post, heralding a new chapter for service mesh technologies in network engineering.
Stay tuned for a riveting wrap-up as we preview what's on the horizon for our next episode's enlightening discourse.

Check out the Fortnightly Cloud Networking News
Visit our website and subscribe: https://www.cables2clouds.com/
Follow us on Twitter: https://twitter.com/cables2clouds
Follow us on YouTube: https://www.youtube.com/@cables2clouds/
Follow us on TikTok: https://www.tiktok.com/@cables2clouds
Merch Store: https://store.cables2clouds.com/
Join the Discord Study group: https://artofneteng.com/iaatj
Art of Network Engineering (AONE): https://artofnetworkengineering.com
In this episode, join us as we delve into the intricate world of Platform Engineering with Aparna Subramanian, Director of Production Engineering at Shopify. Discover how Shopify, a powerhouse in e-commerce, masters the art of scaling platform engineering. Gain invaluable insights into their strategies, innovations, and lessons learned while navigating the complexities of sustaining and evolving a robust infrastructure to support millions, even through special peak events like Black Friday and Cyber Monday. If you're keen on understanding the backbone of a thriving online platform, don't miss out on this episode.

Aparna started her career as a Software Engineer and has spent most of her almost two decades of technology experience specializing in Infrastructure and Data Platforms. In her current role she leads Shopify's Cloud Native Production Platform. Previously, she was Director of Engineering at VMware, where she was a founding member of Tanzu on vSphere, a Kubernetes Platform for the hybrid cloud. She also serves as co-chair of the "CNCF End User Developer Experience" SIG and as a member of the CNCF End User technical advisory board.

The episode was live-streamed on 11 January 2024 and the video is available at https://www.youtube.com/watch?v=6ShtsTTUizI

OpenObservability Talks episodes are released monthly, on the last Thursday of each month, and are available for listening on your favorite podcast app and on YouTube. We live-stream the episodes on Twitch and YouTube Live - tune in to see us live, and chime in with your comments and questions on the live chat.
https://www.youtube.com/@openobservabilitytalks
https://www.twitch.tv/openobservability

Show Notes:
00:00 - Show intro & 2023 stats
01:49 - Episode and guest intro
04:15 - Shopify's scale
06:09 - Shopify's journey to Platform Engineering
08:56 - Shopify's platform structure
11:49 - division of responsibility
13:51 - golden path vs flexibility
17:58 - balancing flexibility and abstraction
19:56 - platform group structure
23:28 - handling load spikes
28:55 - FinOps in Platform Engineering
38:38 - avoiding silos and the cultural aspect
41:13 - CNCF end-user SIG and community challenges
49:24 - KubeCon Paris and guest contact
51:03 - OpenTofu reached GA
53:33 - Isovalent acquired by Cisco
55:00 - year-end summary articles
57:07 - .NET Aspire released preview2
58:58 - Episode and show outro

Resources:
Shopify Engineering Blog: https://shopify.engineering/
Performance wins at Shopify: https://www.shopify.com/news/performance%F0%9F%91%86-complexity%F0%9F%91%87-killer-updates-from-shopify-engineering
CNCF End User SIG: https://github.com/cncf/enduser-public
OpenTofu has reached GA: https://logz.io/blog/terraform-is-no-longer-open-source-is-opentofu-opentf-the-successor/?utm_source=devrel&utm_medium=devrel
Observability in 2024: https://thenewstack.io/observability-in-2024-more-opentelemetry-less-confusion/
OpenTelemetry in 2024: https://www.apmdigest.com/2024-application-performance-management-apm-predictions-4
.NET Aspire preview2: https://devblogs.microsoft.com/dotnet/announcing-dotnet-aspire-preview-2/

Socials:
Twitter: https://twitter.com/OpenObserv
YouTube: https://www.youtube.com/@openobservabilitytalks

Dotan Horovits
Twitter: @horovits
LinkedIn: in/horovits
Mastodon: @horovits@fosstodon

Aparna Subramanian
Twitter: @aparnastweets
LinkedIn: https://www.linkedin.com/in/subramanianaparna/
Guest is Bill Mulligan. Bill is Community Pollinator at Isovalent, working on Cilium and eBPF. We learned how to properly pronounce Isovalent and what it actually means. We also spoke in depth about eBPF, Cilium, network function in Kubernetes, and more.

Do you have something cool to share? Some questions? Let us know:
- web: kubernetespodcast.com
- mail: kubernetespodcast@google.com
- twitter: @kubernetespod

News of the week
- The Kubernetes legacy Linux package repositories are going away in January 2024
- Kubernetes 1.29 is now available on GKE in the Rapid Channel
- The VMware Tanzu Application Catalog is fully compliant with SLSA Level 3
- AWS extended support for Kubernetes minor versions pricing update
- The Kubernetes Contributor Summit Paris CFP is open, closes Feb 4th
- KubeCon and CloudNativeCon EU 2024 co-located events agenda is live
- The Cloud Native Glossary is now available in French
- Blixt, a new experimental LoadBalancer based on the Gateway API and eBPF

Links from the interview
- Bill Mulligan: LinkedIn, Twitter/X
- Covalent bonds on Wikipedia
- Isovalent Hybridization on Wikipedia
- Isovalent company site
- BPF - Berkeley Packet Filtering
- eBPF project site
- Fast by Friday: Why eBPF is Essential - Brendan Gregg
- GKE Dataplane V2
- Cilium project site
- Hubble documentation
- Cilium Service Mesh
- Cilium annual report
- Cilium Certified Associate (CCA)
- CCA Study Guide from Isovalent on GitHub
- Istio Certified Associate (ICA)
- Certified Kubernetes Administrator (CKA)
- Certified Kubernetes Application Developer (CKAD)
- Kubernetes and Cloud Native Associate (KCNA)
- Resources to prepare for the CCA certification
- Isovalent library: The World of Cilium
- Cisco acquired Isovalent
- Developing eBPF Apps in Java
- BGP in eBPF
This week’s Network Break examines why Cisco bought eBPF startup Isovalent (hint: it’s about cloud-native networking), why Broadcom is cranking up pressure on VMware resellers and customers (hint: it’s about money), and why Google Cloud is sort of dropping fees for customers who want to exit the cloud (hint: it’s about getting out ahead of... Read more »
Get ready to have your mind blown as we navigate the tectonic shifts within the cloud and networking realm, stirring up the landscape with Cisco's latest power move. The acquisition of Isovalent, the wizards behind eBPF and Cilium, isn't just a headline; it's a signpost towards a future where Cisco could become the Gandalf of the cloud ecosystem. We're peeling back the layers of this strategic play, and we're not shying away from the hard questions: What does this mean for container network interfaces and the open-source community? Will Cisco's embrace of Cilium lead to a magical blend of innovation and tradition? There's no crystal ball, but we've got the next best thing: insights, predictions, and a bit of laughter at the absurdity of network gear that might just ask for printer toner.

Then, we pivot to another industry shockwave as we scrutinize Hewlett Packard Enterprise's bold acquisition of Juniper Networks. It's not just a plot twist; it's a full-blown narrative reshuffle that has us pondering HPE's hunger for Juniper's network prowess and AI treasure trove. Does this signal a new dawn for HPE in the arena of AI-driven networking? How will Aruba Networks and Mist Systems fit into this newly-drawn map? And just when you thought we might take a breather, we're diving into the implications of VMware's reimagined partner program, forecasting what the future holds for VMware Certified Professionals and the swirling eddy of industry changes.
Stay tuned, as we tackle these industry-shaking developments with our trademark mix of depth and wit, because who says tech talk can't have a few laughs along the way?

Check out the Fortnightly Cloud Networking News
Visit our website and subscribe: https://www.cables2clouds.com/
Follow us on Twitter: https://twitter.com/cables2clouds
Follow us on YouTube: https://www.youtube.com/@cables2clouds/
Follow us on TikTok: https://www.tiktok.com/@cables2clouds
Merch Store: https://store.cables2clouds.com/
Join the Discord Study group: https://artofneteng.com/iaatj
Art of Network Engineering (AONE): https://artofnetworkengineering.com
It has been a big year in the enterprise IT industry for many reasons. Pat Gelsinger and Intel continue to refocus its business, the SEC gets serious with SolarWinds, Broadcom's deal to acquire VMware, HPE Aruba Ascendant, the incredible rise of generative AI, the battle of open source licenses, and of course, the Futurum Group acquiring Gestalt IT and Tech Field Day. Let's discuss on this year-end edition of the Gestalt IT Rundown. Time Stamps: 0:00 - Welcome to the Gestalt IT Rundown 0:42 - Intel Continues Business Focus 4:52 - SEC Getting Serious, Especially with SolarWinds 9:24 - Broadcom VMware Deal 12:55 - HPE Aruba Ascendant 17:09 - Gen AI is the Rage 22:18 - The Open Source Licensing Battle 27:48 - The Futurum Group Acquires Tech Field Day and Gestalt IT 29:32 - Thanks for Watching 31:26 - Thanking the Community 33:32 - Coming in 2024 Follow our Hosts on Social Media Tom Hollingsworth: https://www.twitter.com/NetworkingNerd Stephen Foskett: https://www.twitter.com/SFoskett Follow Gestalt IT Website: https://www.GestaltIT.com/ Twitter: https://www.twitter.com/GestaltIT LinkedIn: https://www.linkedin.com/company/Gestalt-IT Tags: #Rundown, #AI, #CXL, #AIPCs, #EmeraldRapids, #CFD19, #CiscoLiveEMEA, #AIFD4, #NFD34, #VMware, @Cisco, @RedHat, @Isovalent, @VMware, @Broadcom, @HashiCorp, @IntelBusiness, @Adobe, @Figma, @SFoskett, @NetworkingNerd, @GestaltIT,
Leading global tech analysts Patrick Moorhead (Moor Insights & Strategy) and Daniel Newman (Futurum Research) are front and center on The Six Five, analyzing the tech industry's biggest news each and every week and conducting interviews with tech industry "insiders" on a regular basis. The Six Five represents six (6) handpicked topics that will be covered for five (5) minutes each.

Welcome to this week's edition of The Six Five. I'm Patrick Moorhead with Moor Insights & Strategy, co-host, joined by Daniel Newman with Futurum Research. On this week's show we will be talking:

New York Times Sues Microsoft & OpenAI Over Copyright
https://twitter.com/danielnewmanUV/status/174005092432292295
https://x.com/PatrickMoorhead/status/1740066678841462799?s=20

Micron Q1 Earnings
https://twitter.com/MoorInsStrat/status/1737926233860554998
https://twitter.com/danielnewmanUV/status/1737881348151406782
https://x.com/YahooFinance/status/1737592460870840827?s=20

Cisco Acquires Isovalent
https://twitter.com/PatrickMoorhead/status/1737844460246196634
https://www.techtarget.com/searchitoperations/news/366564394/Cisco-Security-Cloud-adds-Isovalent-for-multi-cloud-networks

2024 Watch List in Semis
2024 Watch List in AI
2024 Watch List in Cloud

Disclaimer: This show is for information and entertainment purposes only. While we will discuss publicly traded companies on this show, the contents of this show should not be taken as investment advice.
Cisco has officially announced its plans to acquire Isovalent, a private company specializing in open source cloud-native networking and security solutions.
Cisco announced this morning that it intends to acquire Isovalent, a cloud-native security and networking startup that should fit well with the company's core networking and security strategy.
In this podcast, Isovalent's Liz Rice discusses her involvement with several open source projects, such as the Cilium project and the eBPF platform. With the graduation of Cilium in the CNCF, Liz explains its networking and security capabilities and how it benefits the cloud-native ecosystem. She also dives into eBPF and discusses the implications of AI. The talk concludes with an exploration of open source communities, recommendations regarding emerging trends in the open source world, and Liz's anticipation for the future of Cilium and the impact of eBPF.

00:00 Introduction and Guest Background
01:10 Understanding Cilium and its Role in Networking
02:15 Exploring the Origins and Impact of eBPF
04:21 Insights into the eBPF Summit and Community Events
08:00 The Role of Open Source in Technology Development
12:40 The Intersection of AI and Open Source
18:21 Future Developments in Cilium and Open Source
21:02 Conclusion and Final Thoughts

Guest: Liz Rice is Chief Open Source Officer with eBPF specialists Isovalent, creators of the Cilium cloud native networking, security and observability project. She was Chair of the CNCF's Technical Oversight Committee in 2019-2022, and Co-Chair of KubeCon + CloudNativeCon in 2018. She is also the author of Container Security, published by O'Reilly. She has a wealth of software development, team, and product management experience from working on network protocols and distributed systems, and in digital technology sectors such as VOD, music, and VoIP. When not writing code, or talking about it, Liz loves riding bikes in places with better weather than her native London, competing in virtual races on Zwift, and making music under the pseudonym Insider Nine.
Cilium, the eBPF-based project from Isovalent, is a recent graduate in the CNCF family, and this means the world of cloud and Kubernetes networking looks very different, with more security capabilities. It has been gaining traction for Kubernetes network security as blind spots have been called out in managed Kubernetes deployments. This episode was recorded at KubeCon NA with Thomas Graf from Isovalent, who shares what those blind spots are and why eBPF provides better network security capability for Kubernetes deployments of any scale.

Guest Socials: Thomas's LinkedIn (@ThomasGraf)
Podcast Twitter - @CloudSecPod

If you want to watch videos of this LIVE STREAMED episode and past episodes, check out our other Cloud Security social channels:
- Cloud Security Newsletter
- Cloud Security BootCamp

Questions asked:
(00:00) Introduction
(03:42) A bit about Thomas
(04:11) Traditional Networking in Kubernetes
(06:52) What is Cilium?
(07:52) What is eBPF?
(08:46) What do people use Cilium for?
(11:31) Starting with network security in Kubernetes
(13:02) Complexities with Scale
(16:02) How do projects graduate?
(17:02) The eBPF documentary
(17:27) Opensource to Company
(18:52) Practitioner to Founder
(19:57) Building an open source project
(21:13) The Fun Questions!

You can check out the eBPF Documentary here
In today's episode, Raymond De Jong, Field CTO for EMEA at Isovalent, is here to talk about their Cilium project. Cilium provides networking, observability, and security for cloud-native applications using eBPF (extended Berkeley Packet Filter). He also explains the differences and specialties of Cilium compared to other network solutions. We hope you enjoy the episode!
Anand Babu "AB" Periasamy is the cofounder and CEO of MinIO, a high-performance object store for AI built for large-scale workloads. They have raised $126M in funding from the likes of General Catalyst, Softbank, Intel Capital, and Nexus Venture Partners. It's the world's fastest-growing object storage company, with more than 1 billion Docker pulls and more than 35K stars on GitHub. He's also an angel investor with investments in companies like H2O.ai, Isovalent, Starburst, Postman, and many more. He was previously the cofounder and CTO of Gluster, which got acquired by Red Hat. In this episode, we cover a range of topics including: - Why is storage important for AI workflows - What are the characteristics of a good data storage product - Repatriation of data from public cloud to on-prem - Running ML experiments in parallel - AI compute offerings from data infrastructure providers - Making data infrastructure faster and cheaper AB's favorite book: An Awesome Book! (Author: Dallas Clayton)--------Where to find Prateek Joshi: Newsletter: https://prateekjoshi.substack.com Website: https://prateekj.com LinkedIn: https://www.linkedin.com/in/prateek-joshi-91047b19 Twitter: https://twitter.com/prateekvjoshi
Intro Mike: Hello and welcome to Open Source Underdogs! I’m your host, Mike Schwartz, and this is episode 63, with Liz Rice, Chief Open Source Officer at Isovalent, the software startup behind Cilium, an eBPF-based Networking, Security and Observability project. This episode was recorded in early February at the inaugural State of Open Source Conference or SoCon,... The post Episode 63: EBPF Networking Isovalent with Liz Rice – Chief Open Source Officer first appeared on Open Source Underdogs.
We've had conversations with Liz Rice on the DevOps Sauna podcast before and hosted her eBPF talk at The DEVOPS Conference Global 2023. It seems that many people don't know much about the work Isovalent is doing, or about eBPF in general, even though both are huge worldwide. Bill Mulligan, the Community Pollinator of Isovalent, is here to tell a few stories about their work and impact. - DevOps Sauna episode with Liz Rice, Why sidecar-less Cilium Service Mesh is a game-changer: https://hubs.li/Q01WTQ8H0 - Liz Rice's keynote at The DEVOPS Conference Global 2023: https://hubs.li/Q01WTR150
#217: Extended Berkeley Packet Filter, or eBPF, has been making waves in the tech industry over the past few years. It's a technology that enables you to extend the functionality of the Linux kernel without having to write kernel modules. But what exactly is eBPF, and how does it impact our systems, networks, and security? In this episode, we speak with Liz Rice, Chief Open Source Officer with eBPF pioneers Isovalent, about where eBPF started and why you may never write a line of (byte)code of eBPF yourself. Liz's contact information: Twitter: https://twitter.com/lizrice LinkedIn: https://www.linkedin.com/in/lizrice/ YouTube channel: https://youtube.com/devopsparadox/ Books and Courses: Catalog, Patterns, And Blueprints https://www.devopstoolkitseries.com/posts/catalog/ Review the podcast on Apple Podcasts: https://www.devopsparadox.com/review-podcast/ Slack: https://www.devopsparadox.com/slack/ Connect with us at: https://www.devopsparadox.com/contact/
On this week's episode of Dev Interrupted, we talk to Liz Rice, Chief Open Source Officer at Isovalent, and author of the book Learning eBPF: Programming the Linux Kernel for Enhanced Observability, Networking, and Security. Liz is an expert on open source, containers, and cloud-native technologies, and joins us to discuss her book, what she describes as some of the eBPF "superpowers" people are talking about, and some of the fascinating projects surrounding eBPF like Project Kepler. Liz also gives advice to engineers looking to try their hand at writing a book. Show Notes: Register for our summer series! Check out Liz's book: https://isovalent.com/learning-ebpf/ Isovalent's labs: https://isovalent.com/resource-library/labs/ Support the show: Subscribe to our Substack Follow us on YouTube Review us on Apple Podcasts or Spotify Follow us on Twitter or LinkedIn Offers: Learn about Continuous Merge with gitStream Want to try LinearB? Book a Demo & use discount code "Dev Interrupted Podcast"
Liz Rice, Chief Open Source Officer at Isovalent, joins Corey on Screaming in the Cloud to discuss the release of her newest book, Learning eBPF, and the exciting possibilities that come with eBPF technology. Liz explains what got her so excited about eBPF technology, and what it was like to write a book while also holding a full-time job. Corey and Liz also explore the learning curve that comes with kernel programming, and Liz illustrates why it's so important to be able to explain complex technologies in simple terminology. About Liz: Liz Rice is Chief Open Source Officer with eBPF specialists Isovalent, creators of the Cilium cloud native networking, security and observability project. She sits on the CNCF Governing Board, and on the Board of OpenUK. She was Chair of the CNCF's Technical Oversight Committee in 2019-2022, and Co-Chair of KubeCon + CloudNativeCon in 2018. She is also the author of Container Security, and Learning eBPF, both published by O'Reilly. She has a wealth of software development, team, and product management experience from working on network protocols and distributed systems, and in digital technology sectors such as VOD, music, and VoIP. When not writing code, or talking about it, Liz loves riding bikes in places with better weather than her native London, competing in virtual races on Zwift, and making music under the pseudonym Insider Nine. Links Referenced: Isovalent: https://isovalent.com/ Learning eBPF: https://www.amazon.com/Learning-eBPF-Programming-Observability-Networking/dp/1098135121 Container Security: https://www.amazon.com/Container-Security-Fundamental-Containerized-Applications/dp/1492056707/ GitHub for Learning eBPF: https://github.com/lizRice/learning-eBPF

Transcript

Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. 
This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. Our returning guest today is Liz Rice, who remains the Chief Open Source Officer with Isovalent. But Liz, thank you for returning, suspiciously closely timed to when you have a book coming out. Welcome back.Liz: [laugh]. Thanks so much for having me. Yeah, I've just—I've only had the physical copy of the book in my hands for less than a week. It's called Learning eBPF. I mean, obviously, I'm very excited.Corey: It's an O'Reilly book; it has some form of honeybee on the front of it as best I can tell.Liz: Yeah, I was really pleased about that. Because eBPF has a bee as its logo, so getting a [early 00:01:17] honeybee as the O'Reilly animal on the front cover of the book was pretty pleasing, yeah.Corey: Now, this is your second O'Reilly book, is it not?Liz: It's my second full book. So, I'd previously written a book on Container Security. And I've done a few short reports for them as well. But this is the second, you know, full-on, you can buy it on Amazon kind of book, yeah.Corey: My business partner wrote Practical Monitoring for O'Reilly and that was such an experience that he got entirely out of observability as a field and ran running to AWS bills as a result. So, my question for you is, why would anyone do that more than once?Liz: [laugh]. I really like explaining things. And I had a really good reaction to the Container Security book. I think already, by the time I was writing that book, I was kind of interested in eBPF. And we should probably talk about what that is, but I'll come to that in a moment.Yeah, so I've been really interested in eBPF, for quite a while and I wanted to be able to do the same thing in terms of explaining it to people. 
A book gives you a lot more opportunity to go into more detail and show people examples and get them kind of hands-on than you can do in their, you know, 40-minute conference talk. So, I wanted to do that. I will say I have written myself a note to never do a full-size book while I have a full-time job because it's a lot [laugh].Corey: You do have a full-time job and then some. As we mentioned, you're the Chief Open Source Officer over at Isovalent, you are on the CNCF governing board, you're on the board of OpenUK, and you've done a lot of other stuff in the open-source community as well. So, I have to ask, taking all of that together, are you just allergic to things that make money? I mean, writing the book as well on top of that. I'm told you never do it for the money piece; it's always about the love of it. But it seems like, on some level, you're taking it to an almost ludicrous level.Liz: Yeah, I mean, I do get paid for my day job. So, there is that [laugh]. But so, yeah—Corey: I feel like that's the only way to really write a book is, in turn, to wind up only to just do it for—what someone else is paying you to for doing it, viewing it as a marketing exercise. It pays dividends, but those dividends don't, in my experience from what I've heard from everyone say, pay off as of royalties on book payments.Liz: Yeah, I mean, it's certainly, you know, not a bad thing to have that income stream, but it certainly wouldn't make you—you know, I'm not going to retire tomorrow on the royalty stream unless this podcast has loads and loads of people to buy the book [laugh].Corey: Exactly. And I'm always a fan of having such [unintelligible 00:03:58]. I will order it while we're on the call right now having this conversation because I believe in supporting the things that we want to see more of in the world. So, explain to me a little bit about what it is. 
Whenever you're talking about learning X in a title, I find that that's often going to be much more approachable than arcane nonsense deep-dive things.One of the O'Reilly books that changed my understanding was Linux Kernel Internals, or Understanding the Linux Kernel. Understanding was kind of a heavy lift at that point because it got very deep very quickly, but I absolutely came away understanding what was going on a lot more effectively, even though I was so slow I needed a tow rope on some of it. When you have a book that starts with learning, though, I imagine it assumes starting at zero with, “What's eBPF?” Is that directionally correct, or does it assume that you know a lot of things you don't?Liz: Yeah, that's absolutely right. I mean, I think eBPF is one of these technologies that is starting to be, particularly in the cloud-native world, you know, it comes up; it's quite a hot technology. What it actually is, so it's an acronym, right? EBPF. That acronym is almost meaningless now.So, it stands for extended Berkeley Packet Filter. But I feel like it does so much more than filtering, we might as well forget that altogether. And it's just become a term, a name in its own right if you like. And what it really does is it lets you run custom programs in the kernel so you can change the way that the kernel behaves, dynamically. And that is… it's a superpower. It's enabled all sorts of really cool things that we can do with that superpower.Corey: I just pre-ordered it as a paperback on Amazon and it shows me that it is now number one new release in Linux Networking and Systems Administration, so you're welcome. I'm sure it was me that put it over the top.Liz: Wonderful. Thank you very much. Yeah [laugh].Corey: Of course, of course. 
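Liz's description of eBPF, small custom programs attached to kernel events that change behavior dynamically, can be sketched as a toy model in plain Python. This is an illustration of the idea only, not real eBPF: the event names and the hook mechanism here are invented, and real eBPF programs are verified bytecode running inside the kernel.

```python
# Toy model of the eBPF idea: programs attach to named kernel events
# and run whenever the event fires, with no kernel rebuild or module.
# Purely illustrative; not how the Linux kernel actually implements it.

class ToyKernel:
    def __init__(self):
        self.hooks = {}  # event name -> list of attached programs

    def attach(self, event, program):
        """Attach a program to an event (like attaching to a kprobe)."""
        self.hooks.setdefault(event, []).append(program)

    def fire(self, event, ctx):
        """The 'kernel' hits an event and runs every attached program."""
        for program in self.hooks.get(event, []):
            program(ctx)

kernel = ToyKernel()
opened_files = []

# "Instrument" file opens without touching the application at all.
kernel.attach("sys_openat", lambda ctx: opened_files.append(ctx["path"]))

kernel.fire("sys_openat", {"path": "/etc/hosts", "pid": 1234})
kernel.fire("sys_openat", {"path": "/var/log/syslog", "pid": 5678})
print(opened_files)
```

The point of the sketch is the shape of the mechanism: the observed program never changes, only the set of programs attached to events does.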
Writing a book is one of those things that I've always wanted to do, but never had the patience to sit there and do it or I thought I wasn't prolific enough, but over the holidays, this past year, my wife and business partner and a few friends all chipped in to have all of the tweets that I'd sent bound into a series of leather volumes. Apparently, I've tweeted over a million words. And… yeah, oh, so I have to write a book 280 characters at a time, mostly from my phone. I should tweet less was really the takeaway that I took from a lot of that.But that wasn't edited, that wasn't with an overall theme or a narrative flow the way that an actual book is. It just feels like a term paper on steroids. And I hated term papers. Love reading; not one to write it.Liz: I don't know whether this should make it into the podcast, but it reminded me of something that happened to my brother-in-law, who's an artist. And he put a piece of video on YouTube. And for unknowable reasons if you mistyped YouTube, and you spelt it, U-T-U-B-E, the page that you would end up at from Google search was a YouTube video and it was in fact, my brother-in-law's video. And people weren't expecting to see this kind of art movie about matches burning. And he just had the worst comment—like, people were so mean in the comments. And he had millions of views because people were hitting this page by accident, and he ended up—Corey: And he made the cardinal sin of never read the comments. Never break that rule. As soon as you do that, it doesn't go well. I do read the comments on various podcast platforms on this show because I always tell people to insult all they want, just make sure you leave a five-star review.Liz: Well, he ended up publishing a book with these comments, like, one comment per page, and most of them are not safe for public consumption comments, and he just called it Feedback. 
It was quite something [laugh].Corey: On some level, it feels like O'Reilly books are a little insulated from the general population when it comes to terrible nonsense comments, just because they tend to be a little bit more expensive than the typical novel you'll see in an airport bookstore, and again, even though it is approachable, Learning eBPF isn't exactly the sort of title that gets people to think that, “Ooh, this is going to be a heck of a thriller slash page-turner with a plot.” “Well, I found the protagonist unrelatable,” is not sort of the thing you're going to wind up seeing in the comments because people thought it was going to be something different.Liz: I know. One day, I'm going to have to write a technical book that is also a murder mystery. I think that would be, you know, quite an achievement. But yeah, I mean, it's definitely aimed at people who have already come across the term, want to know more, and particularly if you're the kind of person who doesn't want to just have a hand-wavy explanation that involves boxes and diagrams, but if, like me, you kind of want to feel the code, and you want to see how things work and you want to work through examples, then that's the kind of person who might—I hope—enjoy working through the book and end up with a possible mental model of how eBPF works, even though it's essentially kernel programming.Corey: So, I keep seeing eBPF in an increasing number of areas, a bunch of observability tools, a bunch of security tools all tend to tie into it. And I've seen people do interesting things as far as cost analysis with it. 
The problem that I run into is that I'm not able to wind up deploying it universally, just because when I'm going into a client engagement, I am there in a purely advisory sense, given that I'm biasing these days for both SaaS companies and large banks, that latter category is likely going to have some problems if I say, “Oh, just take this thing and go ahead and deploy it to your entire fleet.” If they don't have a problem with that, I have a problem with their entire business security posture. So, I don't get to be particularly prescriptive as far as what to do with it.But if I were running my own environment, it is pretty clear by now that I would have explored this in some significant depth. Do you find that it tends to be something that is used primarily in microservices environments? Does it effectively require Kubernetes to become useful on day one? What is the onboard path where people would sit back and say, “Ah, this problem I'm having, eBPF sounds like the solution.”Liz: So, when we write tools that are typically going to be some sort of infrastructure, observability, security, networking tools, if we're writing them using eBPF, we're instrumenting the kernel. And the kernel gets involved every time our application wants to do anything interesting because whenever it wants to read or write to a file, or send receive network messages, or write something to the screen, or allocate memory, or all of these things, the kernel has to be involved. And we can use eBPF to instrument those events and do interesting things. And the kernel doesn't care whether those processes are running in containers, under Kubernetes, just running directly on the host; all of those things are visible to eBPF.So, in one sense, doesn't matter. 
But one of the reasons why I think we're seeing eBPF-based tools really take off in cloud-native is that you can, by applying some programming, you can link events that happened in the kernel to specific containers in specific pods in whatever namespace and, you know, get the relationship between an event and the Kubernetes objects that are involved in that event. And then that enables a whole lot of really interesting observability or security tools and it enables us to understand how network packets are flowing between different Kubernetes objects and so on. So, it's really having this vantage point in the kernel where we can see everything and we didn't have to change those applications in any way to be able to use eBPF to instrument them.Corey: When I see the stories about eBPF, it seems like it's focused primarily on networking and flow control. That's where I'm seeing it from a security standpoint, that's where I'm seeing it from cost allocation aspect. Because, frankly, out of the box, from a cloud provider's perspective, Kubernetes looks like a single-tenant application with a really weird behavioral pattern, and some of that crosstalk gets very expensive. Is there a better way than either using eBPF and/or VPC flow logs to figure out what's talking to what in the Kubernetes ecosystem, or is BPF really your first port of call?Liz: So, I'm coming from a position of perspective of working for the company that created the Cilium networking project. And one of the reasons why I think Cilium is really powerful is because it has this visibility—it's got a component called Hubble—that allows you to see exactly how packets are flowing between these different Kubernetes identities. So, in a Kubernetes environment, there's not a lot of point having network flows that talk about IP addresses and ports when what you really want to know is, what's the Kubernetes namespace, what's the application? 
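The linkage Liz describes, from a raw kernel event to the pod and namespace behind it, can be sketched as a toy lookup in Python. The cgroup-path layout and the index format below are illustrative assumptions, not Cilium's actual implementation; real tools build this mapping from the container runtime and the kubelet.

```python
# Toy sketch: enriching raw kernel events with Kubernetes identity.
# Field names and paths are invented for illustration only.

def pod_identity(cgroup_path, pod_index):
    """Find which pod owns the container a cgroup path belongs to."""
    for container_id, identity in pod_index.items():
        if container_id in cgroup_path:
            return identity
    return {"namespace": "unknown", "pod": "unknown"}

def enrich(events, pod_index):
    """Attach namespace and pod labels to each raw kernel event."""
    return [{**ev, **pod_identity(ev["cgroup"], pod_index)} for ev in events]

# Hypothetical index of container IDs to Kubernetes identities.
pod_index = {
    "abc123": {"namespace": "payments", "pod": "api-7f9c"},
    "def456": {"namespace": "frontend", "pod": "web-5d2a"},
}

# Raw events as the kernel sees them: no Kubernetes concepts at all.
raw_events = [
    {"syscall": "openat", "cgroup": "/kubepods/pod1/abc123"},
    {"syscall": "connect", "cgroup": "/kubepods/pod2/def456"},
]

enriched = enrich(raw_events, pod_index)
for ev in enriched:
    print(ev["syscall"], ev["namespace"], ev["pod"])
```

The kernel only knows about processes and cgroups; the Kubernetes meaning is layered on afterwards, which is why the same instrumentation works whether the workload runs in a pod, a plain container, or directly on the host.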
Defining things in terms of IP addresses makes no sense when they're just being refreshed and renewed every time you change pods. So yeah, Kubernetes changes the requirements on networking visibility and on firewalling as well, on network policy, and that, I think, is you don't have to use eBPF to create those tools, but eBPF is a really powerful and efficient platform for implementing those tools, as we see in Cilium.Corey: The only competitor I found to it that gives a reasonable explanation of why random things are transferring multiple petabytes between each other in the middle of the night has been oral tradition, where I'm talking to people who've been around there for a while. It's, “So, I'm seeing this weird traffic pattern at these times a day. Any idea what that might be?” And someone will usually perk up and say, “Oh, is it—” whatever job that they're doing. Great. That gives me a direction to go in.But especially in this era of layoffs and as environments exist for longer and longer, you have to turn into a bit of a data center archaeologist. That remains insufficient, on some level. And some level, I'm annoyed with trying to understand or needing to use tooling like this that is honestly this powerful and this customizable, and yes, on some level, this complex in order to get access to that information in a meaningful sense. But on the other, I'm glad that that option is at least there for a lot of workloads.Liz: Yeah. I think, you know, that speaks to the power of this new generation of tooling. And the same kind of applies to security forensics, as well, where you might have an enormous stream of events, but unless you can tie those events back to specific Kubernetes identities, which you can use eBPF-based tooling to do, then how do you—the forensics job of tying back where did that event come from, what was the container that was compromised, it becomes really, really difficult. 
And eBPF tools—like Cilium has a sub-project called Tetragon that is really good at this kind of tying events back to the Kubernetes pod, or, whether we want to know what node it was running on, what namespace, or whatever. That's really useful forensic information.Corey: Talk to me a little bit about how broadly applicable it is. Because from my understanding from our last conversation, when you were on the show a year or so ago, if memory serves, one of the powerful aspects of it was very similar to what I've seen some of Brendan Gregg's nonsense doing in his kind of various talks where you can effectively write custom programming on the fly and it'll tell you exactly what it is that you need. Is this something that can be instrument once and then effectively use it for basically anything, [OTEL 00:16:11]-style, or instead, does it need to be effectively custom configured every time you want to get a different aspect of information out of it?Liz: It can be both of those things.Corey: “It depends.” My least favorite but probably the most accurate answer to hear.Liz: [laugh]. But I think Brendan did a really great—he's done many talks talking about how powerful BPF is and built lots of specific tools, but then he's also been involved with Bpftrace, which is kind of like a language for—a high-level language for saying what it is that you want BPF to trace out for you. So, a little bit like, I don't know, awk but for events, you know? It's a scripting language. So, you can have this flexibility.And with something like Bpftrace, you don't have to get into the weeds yourself and do kernel programming, you know, in eBPF programs. But also there's gainful employment to be had for people who are interested in that eBPF kernel programming because, you know, I think there's just going to be a whole range of more tools to come, you know? 
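The "awk but for events" analogy can be made concrete with a tiny filter-and-aggregate pipeline in plain Python. This is a loose analogy only: bpftrace's real syntax and probes are different, and the event stream here is hard-coded rather than coming from the kernel.

```python
# Loose analogy for the spirit of a bpftrace one-liner: filter a stream
# of events and aggregate counts, without writing kernel code yourself.
from collections import Counter

# Stand-in for events a tracer would see (fields are illustrative).
events = [
    {"probe": "sys_openat", "comm": "bash"},
    {"probe": "sys_openat", "comm": "python3"},
    {"probe": "sys_read",   "comm": "bash"},
    {"probe": "sys_openat", "comm": "bash"},
]

# Count file opens per process name, in the spirit of an
# "attach to the openat tracepoint and count by comm" script.
opens_by_process = Counter(
    ev["comm"] for ev in events if ev["probe"] == "sys_openat"
)
print(dict(opens_by_process))
```

The appeal of a scripting layer like this is exactly what Liz describes: you express the question (filter, group, count) and the tooling handles the kernel side for you.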
I think we're, you know, we're seeing some really powerful tools with Cilium and Pixie and [Parker 00:17:27] and Kepler and many other tools and projects that are using eBPF. But I think there's also a whole load of more to come as people think about different ways they can apply eBPF and instrument different parts of an overall system.Corey: We're doing this over audio only, but behind me on my wall is one of my least favorite gifts ever to have been received by anyone. Mike, my business partner, got me a thousand-piece puzzle of the Kubernetes container landscape where—Liz: [laugh].Corey: This diagram is psychotic and awful and it looks like a joke, except it's not. And building that puzzle was maddening—obviously—but beyond that, it was a real primer in just how vast the entire container slash Kubernetes slash CNCF landscape really is. So, looking at this, I found that the only reaction that was appropriate was a sense of overwhelmed awe slash frustration, I guess. It's one of those areas where I spend a lot of time focusing on drinking from the AWS firehose because they have a lot of products and services because their product strategy is apparently, “Yes,” and they're updating these things in a pretty consistent cadence. Mostly. And even that feels like it's multiple full-time jobs shoved into one.There are hundreds of companies behind these things and all of them are in areas that are incredibly complex and difficult to go diving into. EBPF is incredibly powerful, I would say ridiculously so, but it's also fiendishly complex, at least shoulder-surfing behind people who know what they're doing with it has been breathtaking, on some level. How do people find themselves in a situation where doing a BPF deep dive make sense for them?Liz: Oh, that's a great question. So, first of all, I'm thinking is there an AWS Jigsaw as well, like the CNCF landscape Jigsaw? There should be. And how many pieces would it have? 
[It would be very cool 00:19:28].Corey: No, because I think the CNCF at one point hired a graphic designer and it's unclear that AWS has done such a thing because their icons for services are, to be generous here, not great. People have flashcards that they've built for: what service does this logo represent? Haven't a clue, in almost every case because I don't care in almost every case. But yeah, I've toyed with the idea of doing it. It's just not something that I'd ever want to have my name attached to it, unfortunately. But yeah, I want someone to do it and someone else to build it.Liz: Yes. Yeah, it would need to refresh every, like, five minutes, though, as they roll out a new service.Corey: Right. Because given that it appears from the outside to be impenetrable, it's similar to learning VI in some cases, where oh, yeah, it's easy to get started with to do this trivial thing. Now, step two, draw the rest of the freaking owl. Same problem there. It feels off-putting just from a perspective of you must be at least this smart to proceed. How do you find people coming to it?Liz: Yeah, there is some truth in that, in that beyond kind of Hello World, you quite quickly start having to do things with kernel data structures. And as soon as you're looking at kernel data structures, you have to sort of understand, you know, more about the kernel. And if you change things, you need to understand the implications of those changes. So, yeah, you can rapidly say that eBPF programming is kernel programming, so why would anybody want to do it? The reason why I do it myself is not because I'm a kernel programmer; it's because I wanted to really understand how this is working and build up a mental model of what's happening when I attach a program to an event. And what kinds of things can I do with that program?And that's the sort of exploration that I think I'm trying to encourage people to do with the book. 
But yes, there is going to be at some point, a pretty steep learning curve that's kernel-related but you don't necessarily need to know everything in order to really have a decent understanding of what eBPF is, and how you might, for example—you might be interested to see what BPF programs are running on your existing system and learn why and what they might be doing and where they're attached and what use could that be.Corey: Falling down that, looking at the process table once upon a time was a heck of an education, one week when I didn't have a lot to do and I didn't like my job in those days, where, “Oh, what is this Avahi daemon that constantly running? MDNS forwarding? Who would need that?” And sure enough, that tickled something in the back of my mind when I wound up building out my networking box here on top of BSD, and oh, yeah, I want to make sure that I can still have discovery work from the IoT subnet over to wherever my normal devices live. Ah, that's what that thing is always running for. Great for that one use case. Almost never needed in other cases, but awesome. Like, you fire up a Raspberry Pi. It's, “Why are all these things running when I just want to have an embedded device that does exactly one thing well?” Ugh. Computers have gotten complicated.Liz: I know. It's like when you get those pop-ups on—well certainly on Mac, and you get pop-ups occasionally, let's say there's such and such a daemon wants extra permissions, and you think I'm not hitting that yes button until I understand what that daemon is. And it turns out, it's related to something completely innocuous that you've actually paid for, but just under a different name. Very annoying. So, if you have some kind of instrumentation like tracing or logging or security tooling that you want to apply to all of your containers, one of the things you can use is a sidecar container approach. And in Kubernetes, that means you inject the sidecar into every single pod. And—Corey: Yes. 
Of course, the answer to any Kubernetes problem appears to be have you tried running additional containers?Liz: Well, right. And there are challenges that can come from that. And one of the reasons why you have to do that is because if you want a tool that has visibility over that container that's inside the pod, well, your instrumentation has to also be inside the pod so that it has visibility because your pod is, by design, isolated from the host it's running on. But with eBPF, well eBPF is in the kernel and there's only one kernel, however many containers were running. So, there is no kind of isolation between the host and the containers at the kernel level.So, that means if we can instrument the kernel, we don't have to have a separate instance in every single pod. And that's really great for all sorts of resource usage, it means you don't have to worry about how you get those sidecars into those pods in the first place, you know that every pod is going to be instrumented if it's instrumented in the kernel. And then for service mesh, service mesh usually uses a sidecar as a Layer 7 Proxy injected into every pod. And that actually makes for a pretty convoluted networking path for a packet to sort of go from the application, through the proxy, out to the host, back into another pod, through another proxy, into the application.What we can do with eBPF, we still need a proxy running in userspace, but we don't need to have one in every single pod because we can connect the networking namespaces much more efficiently. So, that was essentially the basis for sidecarless service mesh, which we did in Cilium, Istio, and now we're using a similar sort of approach with Ambient Mesh. So that, again, you know, avoiding having the overhead of a sidecar in every pod. 
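The convoluted path Liz describes can be made concrete with a schematic hop count. The hop names below are illustrative only, not an exact packet trace, and the sidecarless case still assumes a shared userspace proxy per node, as she notes.

```python
# Schematic comparison of the packet path between two pods:
# per-pod sidecar proxies versus a sidecarless approach where eBPF
# connects network namespaces and one shared proxy runs per node.
# Hop names are invented for illustration.

def sidecar_path():
    return ["app A", "sidecar proxy A", "host network",
            "sidecar proxy B", "app B"]

def sidecarless_path():
    return ["app A", "shared node proxy", "app B"]

print("sidecar:     ", " -> ".join(sidecar_path()))
print("sidecarless: ", " -> ".join(sidecarless_path()))
```

The resource argument follows the same shape: one proxy per node instead of one per pod, so the per-pod overhead and the injection problem both disappear.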
So that, you know, seems to be the way forward for service mesh as well as other types of instrumentation: avoiding sidecars.Corey: On some level, avoiding things that are Kubernetes staples seems to be a best practice in a bunch of different directions. It feels like it's an area where you start to get aligned with the idea of service meesh—yes, that's how I pluralize the term service mesh and if people have a problem with that, please, it's imperative you not send me letters about it—but this idea of discovering where things are in a variety of ways within a cluster, where things can talk to each other, when nothing is deterministically placed, it feels like it is screaming out for something like this.Liz: And when you think about it, Kubernetes does sort of already have that at the level of a service, you know? Services are discoverable through native Kubernetes. There's a bunch of other capabilities that we tend to associate with service mesh like observability or encrypted traffic or retries, that kind of thing. But one of the things that we're doing with Cilium, in general, is to say a lot of this is just a feature of the networking, the underlying networking capability. So, for example, we've got a next generation mutual authentication approach, which is using SPIFFE IDs between an application pod and another application pod. So, it's like the equivalent of mTLS.But the certificates are actually being passed into the kernel and the encryption is happening at the kernel level. And it's a really neat way of saying we don't need… we don't need to have a sidecar proxy in every pod in order to terminate those TLS connections on behalf of the application. We can have the kernel do it for us and that's really cool.Corey: Yeah, at some level, I find that it still feels weird—because I'm old—to have this idea of one shared kernel running a bunch of different containers. 
I got past that just by not requiring that [unintelligible 00:27:32] workloads need to run isolated, having containers run on the same physical host. I found that, for example, running some stuff, even in my home environment for IoT stuff, things that I don't particularly trust run inside of KVM on top of something, as opposed to just running it as a container on a cluster. Almost certainly stupendous overkill for what I'm dealing with, but it's a good practice to be in to start thinking about this. To my understanding, this is part of what AWS's Firecracker project starts to address a bit more effectively: fast provisioning, but still being able to use different primitives as far as isolation boundaries go. But, on some level, it's nice to not have to think about this stuff, but that's dangerous.

Liz: [laugh]. Yeah, exactly. Firecracker is a really nice way of saying, “Actually, we're going to spin up a whole VM,” but we don't ne—when I say ‘whole VM,' we don't need all of the things that you normally get in a VM. We can get rid of a ton of things and just have the essentials for running that Lambda or container service, and it becomes a really nice lightweight solution. But yes, that will have its own kernel, so unlike, you know, running multiple kernels on the same VM where—sorry, running multiple containers on the same virtual machine where they would all be sharing one kernel, with Firecracker you'll get a kernel per instance of Firecracker.

Corey: The last question I have for you before we wind up wrapping up this episode harkens back to something you said a little bit earlier. This stuff is incredibly technically nuanced and deep.
You clearly have a thorough understanding of it, but you also have what I think many people do not realize is an orthogonal skill of being able to articulate and explain those complex concepts simply and approachably, in ways that make people understand what it is you're talking about, but also don't feel like they're being spoken to in a way that's highly condescending, which is another failure mode. I think it is not particularly well understood, particularly in the engineering community, that these are different skill sets that do not necessarily align congruently. Is this something you've always known or is this something you've figured out as you've evolved your career that, oh, I have a certain flair for this?

Liz: Yeah, I definitely didn't always know it. And I started to realize it based on feedback that people have given me about talks and articles I'd written. I think I've always felt that when people use jargon or they use complicated language or they, kind of, make assumptions about how things are, it quite often speaks to them not having a full understanding of what's happening. If I want to explain something to myself, I'm going to use straightforward language to explain it to myself [laugh] so I can hold it in my head. And I think people appreciate that. And you can get really—you know, you can get quite in-depth into something if you just start, step by step, build it up, explain everything as you go along the way. And yeah, I think people do appreciate that. And I think if people get lost in jargon, it doesn't help anybody. And yeah, I very much appreciate it when people say that, you know, they saw a talk or they read something I wrote and it meant that they finally grokked whatever that concept was that I was trying to explain. I will say at the weekend, I asked ChatGPT to explain DNS in the style of Liz Rice, and it started off, it was basically, “Hello there.
I'm Liz Rice and I'm here to explain DNS in very simple terms.” I thought, “Okay.” [laugh].

Corey: Every time I think I've understood DNS, there's another level to it.

Liz: I'm pretty sure there is a lot about DNS that I don't understand, yeah. So, you know, there's always more to learn out there.

Corey: There certainly is. I really want to thank you for taking time to speak with me today about what you're up to. Where's the best place for people to find you to learn more? And of course, to buy the book.

Liz: Yeah, so I am Liz Rice pretty much everywhere, all over the internet. There is a GitHub repo that accompanies the book; you can find it on GitHub at lizrice/learning-ebpf. So, that's a good place to find some of the example code, and it will obviously link to where you can download the book or buy it, because you can pay for it; you can also download it from Isovalent for the price of your contact details. So, there are lots of options.

Corey: Excellent. And we will, of course, put links to that in the [show notes 00:32:08]. Thank you so much for your time. It's always great to talk to you.

Liz: It's always a pleasure, so thanks very much for having me, Corey.

Corey: Liz Rice, Chief Open Source Officer at Isovalent. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice, along with an angry comment that you have somehow discovered this episode by googling for knitting projects.

Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.
Cloud Security Podcast - This month we are talking about "Kubernetes Security & KubeCon EU 2023" and for the third episode in this series, we spoke to Liz Rice (Liz's Linkedin). Liz Rice from Isovalent speaks about how network security can be done in Kubernetes, and how Kubernetes network security with eBPF and Cilium can be better than SELinux, seccomp, and tcpdump - yes, the Linux networking and security tools. Yes, you read that right. Episode ShowNotes, Links and Transcript on Cloud Security Podcast: www.cloudsecuritypodcast.tv FREE CLOUD BOOTCAMPs on www.cloudsecuritybootcamp.com Host Twitter: Ashish Rajan (@hashishrajan) Guest Socials: Liz Rice (Liz's Linkedin) Podcast Twitter - @CloudSecPod @CloudSecureNews If you want to watch videos of this LIVE STREAMED episode and past episodes - Check out our other Cloud Security Social Channels: - Cloud Security News - Cloud Security BootCamp Spotify TimeStamp for Interview Questions (00:00) Introduction (00:15) A word from our sponsor snyk.io/csp (03:36) A bit about Liz Rice (04:36) Liz's path into Cloud Native (06:22) What is eBPF? (08:12) Use case for eBPF on premises (10:37) SELinux and eBPF (11:28) Why are we solving this now with Kubernetes? (13:22) eBPF in managed vs unmanaged Kubernetes (15:37) Implementation of eBPF (17:38) Access Management and Network Security (21:02) Challenges with multi-cluster Kubernetes deployment (24:03) Key management in multi-cluster (25:11) Current gaps in Kubernetes security (27:41) Developer first in the cloud native space (32:47) The future of eBPF (34:36) Where can you learn more about eBPF (36:25) The fun questions See you at the next episode!
In this episode, Michael catches up with Stephane Karagulmez, Senior Solution Architect at Isovalent (founded by the creators of Cilium). Michael spent a lot of time working with Cilium, which is open-source software that provides networking and observability capabilities for Kubernetes workloads. Cilium is based on another open-source project, eBPF. It's important to understand the details and performance changes when implementing eBPF and removing kube-proxy. The post Kubernetes Unpacked 022: Kubernetes Networking And Abstraction With Cilium And eBPF appeared first on Packet Pushers.
AB Periasamy, Co-Founder and CEO of MinIO, joins Corey on Screaming in the Cloud to discuss what it means to be truly open source and the current and future state of multi-cloud. AB explains how MinIO was born from the idea that the world was going to produce a massive amount of data, and what it's been like to see that come true and continue to be the future outlook. AB and Corey explore why some companies are hesitant to move to cloud, and AB describes why he feels the move is inevitable regardless of cost. AB also reveals how he has helped create truly free open-source software, and how his partnership with Amazon has been beneficial.

About AB
AB Periasamy is the co-founder and CEO of MinIO, an open-source provider of high-performance object storage software. In addition to this role, AB is an active investor and advisor to a wide range of technology companies, from H2O.ai and Manetu, where he serves on the board, to advisor or investor roles with Humio, Isovalent, Starburst, Yugabyte, Tetrate, Postman, Storj, Procurify, and Helpshift. Successful exits include Gitter.im (GitLab), Treasure Data (ARM) and Fastor (SMART). AB co-founded Gluster in 2005 to commoditize scalable storage systems. As CTO, he was the primary architect and strategist for the development of the Gluster file system, a pioneer in software-defined storage. After the company was acquired by Red Hat in 2011, AB joined Red Hat's Office of the CTO. Prior to Gluster, AB was CTO of California Digital Corporation, where his work led to scaling commodity cluster computing to supercomputing-class performance. His work there resulted in the development of Lawrence Livermore Laboratory's “Thunder” code, which, at the time, was the second fastest in the world.
AB holds a Computer Science Engineering degree from Annamalai University, Tamil Nadu, India. AB is one of the leading proponents and thinkers on the subject of open-source software, articulating the difference between the philosophy and the business model. An active contributor to a number of open-source projects, he is a board member of India's Free Software Foundation.

Links Referenced: MinIO: https://min.io/ Twitter: https://twitter.com/abperiasamy LinkedIn: https://www.linkedin.com/in/abperiasamy/ Email: mailto:ab@min.io

Transcript

Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.

Corey: This episode is sponsored in part by our friends at Chronosphere. When it costs more money and time to observe your environment than it does to build it, there's a problem. With Chronosphere, you can shape and transform observability data based on need, context and utility. Learn how to only store the useful data you need to see in order to reduce costs and improve performance at chronosphere.io/corey-quinn. That's chronosphere.io/corey-quinn. And my thanks to them for sponsoring my ridiculous nonsense.

Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn, and I have taken a somewhat strong stance over the years on the relative merits of multi-cloud, and when it makes sense and when it doesn't. And it's time for me to start modifying some of those. To have that conversation and several others as well, with me today on this promoted guest episode is AB Periasamy, CEO and co-founder of MinIO. AB, it's great to have you back.

AB: Yes, it's wonderful to be here again, Corey.

Corey: So, one thing that I want to start with is defining terms.
Because when we talk about multi-cloud, there are—to my mind at least—smart ways to do it and ways that are frankly ignorant. The thing that I've never quite seen is, it's greenfield, day one. Time to build something. Let's make sure we can build and deploy it to every cloud provider we might ever want to use. And that is usually not the right path. Whereas different workloads in different providers, that starts to make a lot more sense. When you do mergers and acquisitions, as big companies tend to do in lieu of doing anything interesting, it seems like they find, oh, we're suddenly in multiple cloud providers, should we move this acquisition to a new cloud? No. No, you should not. One of the challenges, of course, is that there's a lot of differentiation between the baseline offerings that cloud providers have. MinIO is interesting in that it starts and stops with an object store that is mostly S3 API-compatible. Have I nailed the basic premise of what it is you folks do?

AB: Yeah, it's basically an object store. Amazon S3 versus us, it's actually—that's the comparable, right? Amazon S3 is hosted cloud storage as a service, but underneath, the underlying technology is called object store. MinIO is software, and it's also open-source, and it's software that you can deploy on the cloud, deploy on the edge, deploy anywhere, and both Amazon S3 and MinIO are exactly S3 API compatible. It's a drop-in replacement. You can write applications on MinIO and take it to AWS S3, and do the reverse. Amazon made the S3 API a standard inside AWS; we made the S3 API the standard across the whole cloud, all the cloud edge, everywhere, rest of the world.

Corey: I want to clarify two points because otherwise I know I'm going to get nibbled to death by ducks on the internet.
When you say open-source, it is actually open-source; you're AGPL, not source-available, or, “We've decided now we're going to change our model for licensing because oh, some people are using this without paying us money,” as so many companies seem to fall into that trap. You are actually open-source and no one reasonable is going to be able to disagree with that definition. The other pedantic part of it is when something says that it's S3-compatible on an API basis, like, the question is always, does that include the weird bugs that we wish it wouldn't have, or some of the more esoteric stuff that seems to be a constant source of innovation? To be clear, I don't think that you need to be particularly compatible with those very corner and vertex cases. For me, it's always been the basic CRUD operations: can you store an object? Can you give it back to me? Can you delete the thing? And maybe an update, although generally object stores tend to be atomic. How far do you go down that path of being, I guess, a faithful implementation of what the S3 API does, and at which point do you decide that something is just, honestly, lunacy and you feel no need to wind up supporting that?

AB: Yeah, the unfortunate part of it is we have to be very, very deep. It only takes one API to break. And it's not even, like, one API we did not implement; one API under a particular circumstance, right? Like, even if you see, like, the AWS SDKs, right, the Java SDK, different versions of the Java SDK will interpret the same API differently. And AWS S3 is an API, it's not a standard. And Amazon has published the REST specifications, API specs, but they are more like religious text. You can interpret it in many ways. Amazon's own SDKs have interpreted, like, this in several ways, right? The only way to get it right is, like, you have to have a massive ecosystem around your application.
And if one thing breaks—today, if I commit code and it introduces a regression, I will immediately hear from a whole bunch of the community what I broke. There's no certification process here. There is no industry consortium to control the standard, but then there is an accepted standard. Like, if the application works, then it works. And one way to get it right is, like, Amazon SDKs, all of those language SDKs, to be cleaner, simpler, but applications can even use the MinIO SDK to talk to Amazon and the Amazon SDK to talk to MinIO. Now, there is a clear, cooperative model. And I actually have tremendous respect for Amazon engineers. They have only been kind and meaningful, like, a reasonable partnership. Like, if our community reports a bug that Amazon rolled out a new update in one of the regions and the S3 API broke, they will actually go fix it. They will never argue, “Why are you using MinIO SDK?” Their engineers, they do everything by reason. That's the reason why they gained credibility.

Corey: I think, on some level, that we can trust that the API is not going to meaningfully shift, just because so much has been built on top of it over the last 15, almost 16 years now that even slight changes require massive coordination. I remember there was a little bit of a kerfuffle when they announced that they were going to be disabling the BitTorrent endpoint in S3 and it was no longer going to be supported in new regions, and eventually they were turning it off. There were still people pushing back on that. I'm still annoyed by some of the documentation around the API that says that it may not return a legitimate error code when it errors with certain XML interpretations. It's… it's kind of become very much its own thing.

AB: [unintelligible 00:06:22] a problem, like, we have seen, like, even stupid errors similar to that, right?
Like, HTTP headers are supposed to be case-insensitive, but then some language SDKs will send us a certain type of casing and they expect the case to be—the response to be the same way. And that's not the HTTP standard. If we have to accept that bug and respond in the same way, then we are asking a whole bunch of the community to go fix that application. And Amazon's problems are our problems too. We have to carry that baggage. But some places where we actually take a hard stance is, like, Amazon introduced that initially, the bucket policies, like access control lists, then finally came IAM, then we actually, for us, like, the best way to teach the community is make best practices the standard. The only way to do it. We have been, like, educating them that we actually implemented ACLs, but we removed them. So, the customers will no longer use it. The scale at which we are growing, if I keep it, then I can never force them to remove it. So, we have been pedantic about, like, how, like, certain things that if it's good advice, force them to do it. That approach has paid off, but the problem is still quite real. Amazon also admits that the S3 API is no longer simple, but at least it's not like POSIX, right? POSIX is a rich set of APIs, but doesn't do the useful things that we need to do. So, Amazon's APIs are built on top of simple primitive foundations that got the storage architecture correct, and then doing sophisticated functionalities on top of the simple primitives, these atomic RESTful APIs, you can finally do it right and you can take it to great lengths and still not break the storage system. So, I'm not so concerned. I think it's time for both of us to slow down and then make sure that the ease of operation and adoption is the goal, rather than trying to create an API Bible.

Corey: Well, one differentiation that you have that frankly I wish S3 would wind up implementing is this idea of bucket quotas.
I would give a lot in certain circumstances to be able to say that this S3 bucket should be able to hold five gigabytes of storage and no more. Like, you could fix a lot of free-tier problems, for example, by doing something like that. But there's also the problem that you'll see in data centers where, okay, we've now filled up whatever storage system we're using. We need to either expand it at significant cost and it's going to take a while, or it's time to go and maybe delete some of the stuff we don't necessarily need to keep in perpetuity. There is no moment of reckoning in traditional S3 in that sense because, oh, you can just always add one more gigabyte at 2.3 or however many cents it happens to be, and you wind up with an unbounded growth problem that you're never really forced to wrestle with. Because it's infinite storage. They can add drives faster than you can fill them in most cases. So, it just feels like there's an economic story, if nothing else, just from a governance and control perspective, to make sure this doesn't run away from me, and alert me before we get into the multi-petabyte style of storage for my Hello World WordPress website.

AB: Mm-hm. Yeah, so I always thought that Amazon did not do this—it's not just Amazon, the cloud players, right—they did not do this because they want—it's good for their business; they want all the customers' data, like, unrestricted growth of data. Certainly it is beneficial for their business, but there is an operational challenge. When you set a quota—this is why we grudgingly introduced this feature. We did not have quotas and we didn't want to, because the Amazon S3 API doesn't talk about quotas, but the enterprise community wanted this so badly. And eventually we [unintelligible 00:09:54] it and we gave in. But there is one issue to be aware of, right?
The problem with quota is that you, as an object storage administrator, you set a quota, let's say this bucket, this application, I don't see more than 20TB; I'm going to set a 100TB quota. And then you forget it. And then you think in six months they will reach 20TB. The reality is, in six months they reach 100TB. And then when nobody expected it—everybody has forgotten that there was a quota in a certain place—suddenly the application starts failing. And when it fails, it doesn't—even though the S3 API responds back saying insufficient space, the application doesn't really pass that error all the way up. When applications fail, they fail in unpredictable ways. By the time the application developer realizes that it's actually the object storage that ran out of space, they've lost time and it's downtime. So, as long as they have proper observability—because, I mean, I would also ask of observability that it can alert you that you are going to run out of space soon. If you have those systems in place, then go for quota. If not, I would agree with the S3 API standard that it is not about cost. It's about operational, unexpected accidents.

Corey: Yeah, on some level, we wound up having to deal with the exact same problem with disk volumes, where my default for most things was, at 70%, I want to start getting pings on it and at 90%, I want to be woken up for it. So, for small volumes, if you wind up with a runaway log or whatnot, you have a chance to catch it, and for the giant multi-petabyte things, okay, well, why would you alert at 70% on that? Well, because procurement takes a while when we're talking about buying that much disk for that much money. It was a roughly good baseline for these things. The problem, of course, is when you have none of that, and well, it got full, so oops-a-doozy. On some level, I wonder if there's a story around soft quotas that just scream at you, but let you keep adding to it.
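The 70%/90% thresholds Corey describes, applied to bucket usage instead of disk volumes, amount to a few lines of logic. A minimal sketch (the threshold values are the ones from the conversation; the function and labels are hypothetical, not any real MinIO or AWS API):

```python
# Sketch of soft-quota alerting: no hard limit that fails writes, just
# escalating alerts as usage crosses the 70% and 90% thresholds above.
def usage_alert(used_bytes: int, capacity_bytes: int,
                warn: float = 0.70, page: float = 0.90) -> str:
    """Classify storage usage into ok / warn / page severity levels."""
    ratio = used_bytes / capacity_bytes
    if ratio >= page:
        return "page"   # wake someone up: procurement or cleanup is urgent
    if ratio >= warn:
        return "warn"   # ping during business hours
    return "ok"

assert usage_alert(50, 100) == "ok"
assert usage_alert(75, 100) == "warn"
assert usage_alert(95, 100) == "page"
```

The point of the soft quota is exactly that the `page` branch never blocks a write the way a hard quota would; it only makes the growth impossible to ignore.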
But that turns into implementation details, and you can build something like that on top of any existing object store if you don't need the hard-limit aspect.

AB: Actually, that is the right way to do it. That's what I would recommend customers do. Even though there is a hard quota, I will tell them, don't use it; use a soft quota. And instead of even a soft quota, you monitor them. On the cloud, at least you have some kind of restriction that the more you use, the more you pay; eventually, at the month's end, the bill shows up. On MinIO, when it's deployed in these large data centers, where it's unrestricted access, quickly you can use a lot of space, no one knows what data to delete, and no one will tell you what data to delete. The way to do this is there has to be some kind of accountability. The way to do it is—actually [unintelligible 00:12:27] have some chargeback mechanism based on the bucket growth. And the business units have to pay for it, right? IT doesn't run for free, right? IT has to have a budget and it has to be sponsored by the applications team. And you measure, instead of setting a hard limit, you actually charge them: based on the usage of your bucket, you're going to pay for it. And this is an observability problem. And you can call it soft quotas, but it has to trigger an alert in observability. It's an observability problem. But it actually is interesting to hear it framed as soft quotas, which makes a lot of sense.

Corey: It's one of those problems that I think people only figure out after they've experienced it once. And then they look like wizards from the future who say, “Oh, yeah, you're going to run into a quota storage problem.” Yeah, we all find that out because the first time we smack into something, we live to regret it. Now, we can talk a lot about the nuances and implementation and low-level detail of this stuff, but let's zoom out of it. What are you folks up to these days?
What is the bigger picture that you're seeing of object storage and the ecosystem?

AB: Yeah. So, when we started, right, our idea was that the world is going to produce an incredible amount of data. In ten years from now, we are going to drown in data. We've been saying that, and it will be true. Every year, you say ten years from now and it will still be valid, right? That was the reason for us to play this game. And we saw that every one of these cloud players was incompatible with the others. It's like the early Unix days, right? Like, a bunch of operating systems, everything was incompatible, and applications were beginning to adopt this new standard, but they were stuck. And then the cloud storage players, whatever they had, like, GCS can only run inside Google Cloud, S3 can only run inside AWS, and the cloud players' game was bring all the world's data into the cloud. And that actually requires an enormous amount of bandwidth. And moving data into the cloud at that scale, if you look at the amount of data the world is producing, if the data is produced inside the cloud, it's a different game, but the data is produced everywhere else. MinIO's idea was that instead of introducing yet another API standard, Amazon got the architecture right and that's the right way to build large-scale infrastructure. If we stick to the Amazon S3 API instead of introducing yet another standard, [unintelligible 00:14:40] API, and then go after the world's data. When we started in 2014 November—it's really 2015 we started—it was laughable. People thought that there won't be a need for MinIO because the whole world will basically go to AWS S3 and they will be the world's data store. Amazon is capable of doing that; the race is not over, right?

Corey: And it still couldn't be done now. The thing is that they would need to fundamentally rethink their, frankly, usurious data egress charges.
The problem is not that it's expensive to store data in AWS; it's that it's expensive to store data and then move it anywhere else for analysis or use on something else. So, there are entire classes of workload where people should not consider the big three cloud providers as the place that data should live, because you're never getting it back.

AB: Spot on, right? Even if network is free, right, Amazon makes, like, okay, zero egress-ingress charge, the data we're talking about, like, most MinIO deployments, they start at petabytes. Like, one to ten petabytes—it makes 100 terabytes feel small. Even if network is free, try moving a ten-petabyte infrastructure into the cloud. How are you going to move it? Even with FedEx and UPS giving you a lot of bandwidth in their trucks, it is not possible, right? I think the data will continue to be produced everywhere else. So, our bet was that we will be [unintelligible 00:15:56]—instead of you moving the data, you can run MinIO where there is data, and then the whole world will look like an AWS S3-compatible object store. We took a very different path. But now, when I tell the same story that we started with on day one, it is no longer laughable, right? People believe that, yes, MinIO is there, because our market footprint is now larger than Amazon S3's. And as it goes to production, customers are now realizing it's basically growing inside shadow IT, and eventually businesses realize the bulk of their business-critical data is sitting on MinIO, and that's how it's surfacing up. So now, what we are seeing this year particularly is that all of these customers are hugely concerned about cost optimization. And as part of that journey, there are also multi-cloud and hybrid-cloud initiatives. They want to make sure that their applications can run on any cloud, or the same software can run on their colos like Equinix, or, like, a bunch of, like, Digital Realty, anywhere. And MinIO's software, this is what we set out to do.
MinIO can run anywhere inside the cloud, all the way to the edge, even on a Raspberry Pi. Whatever we started with has now become reality; the timing is perfect for us.

Corey: One of the challenges I've always had with the idea of building an application with the idea to run it anywhere is you can make explicit technology choices around that, and for example, object store is a great example because most places you go now will or can have an object store available for your use. But there seem to be implementation details that get lost. For example, even load balancers wind up being implemented in different ways with different scaling times and whatnot in various environments. And past a certain point, it's okay, we're just going to have to run it ourselves on top of HAProxy or Nginx, or something like it, running in containers themselves; you're reinventing the wheel. Where is that boundary between, we're going to build this in a way that we can run anywhere, and the reality that I keep running into, which is we tried to do that but we implicitly, without realizing it, built in a lot of assumptions that everything would look just like this environment that we started off in?

AB: The good part is that if you look at the S3 API, every request has the site name, the endpoint, bucket name, the path, and the object name. Every request is completely self-contained. It's literally an HTTP call away. And this means that whether your application is running on Android, iOS, inside a browser, a JavaScript engine, anywhere across the world, they don't really care whether the bucket is served from EU or us-east or us-west. It doesn't matter at all, so it actually allows you, by API, to build a globally unified data infrastructure, some buckets here, some buckets there. That's actually not the problem. The problem comes when you have multiple clouds.
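AB's point that every S3 request is self-contained can be sketched in a few lines: the bucket and object key ride in every call, so switching providers is just an endpoint swap. A minimal sketch, assuming a hypothetical `S3_ENDPOINT` environment variable and made-up hostnames (this is not any real SDK's configuration mechanism):

```python
import os

# Sketch: a path-style S3 object URL carries endpoint, bucket, and key in
# every request, so the same application code can target AWS, MinIO, or a
# Snowball-style appliance by changing only the endpoint. Hostnames are
# illustrative, and S3_ENDPOINT is an assumed variable name.
def object_url(bucket: str, key: str) -> str:
    """Build a path-style S3 object URL against a configurable endpoint."""
    endpoint = os.environ.get("S3_ENDPOINT", "https://s3.us-east-1.amazonaws.com")
    return f"{endpoint}/{bucket}/{key}"

# Default endpoint (no environment override):
assert object_url("my-bucket", "data/report.csv") == \
    "https://s3.us-east-1.amazonaws.com/my-bucket/data/report.csv"

# Same code, pointed at a self-hosted MinIO deployment:
os.environ["S3_ENDPOINT"] = "https://minio.example.com:9000"
assert object_url("my-bucket", "data/report.csv") == \
    "https://minio.example.com:9000/my-bucket/data/report.csv"
```

This is also the workaround shape for the tooling complaint later in the conversation: when a tool won't accept a per-profile endpoint, an environment variable read once at startup does the job.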
Different teams, partly through M&A, and even without M&A, no two data engineers will agree on the same software stack. So they all end up with different cloud players, and some are still running on old legacy environments.

When you combine them, the problem is, like, let's take just the cloud, right? How do I even apply an access control policy? How do I establish unified identity? Because I want to know that this application is the only one allowed to access this bucket. Can I have that same policy on Google Cloud or Azure, even though they are different teams? If that employee, that project, or that admin leaves the job, how do I make sure that it's all protected?

You want unified identity, you want unified access control policies. Where is the encryption key store? And then the load balancer itself; the load balancer is not the problem. But unless you adopt the S3 API as your standard, the definition of what a bucket is differs from Microsoft to Google to Amazon.

Corey: Yeah, the PUTs and retrieval of actual data is one thing, but then you have the control plane layer of the object store, and how do you rationalize that? What are the naming conventions? How do you address it? I even ran into something similar somewhat recently when I was doing an experiment with one of the Amazon Snowball Edge devices to move some data into S3 on a lark. The thing shows up and presents itself on the local network as an S3 endpoint, but none of their tooling can accept a different endpoint built into the configuration files; you have to explicitly pass it as an environment variable or as a parameter on every invocation of something that talks to it, which is incredibly annoying.

I would give a lot to just be able to say, oh, when you're talking in this profile, that's always going to be your S3 endpoint. Go. But no, of course not.
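The "unified access control policy" AB describes is, in the S3 world, typically expressed as a JSON policy document. A minimal sketch follows; the bucket name, account ID, and principal are invented for illustration. The point is that against any S3-compatible store the same document shape applies, whereas each cloud's native policy language differs:

```python
import json

# Sketch of an S3-style bucket policy granting exactly one principal
# read access to one bucket. All names and ARNs are illustrative.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": ["arn:aws:iam::111122223333:user/app-reader"]},
        "Action": ["s3:GetObject"],
        "Resource": ["arn:aws:s3:::analytics-bucket/*"],
    }],
}

# Against an S3-compatible store this document is applied verbatim; the
# unified-policy problem appears when another cloud speaks a different
# policy language entirely.
print(json.dumps(policy, indent=2))
```

Standardizing on one policy grammar is what makes "if that admin leaves, revoke one principal everywhere" tractable across clouds.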
Because that would make it easier to use something that wasn't them, so why would they ever be incentivized to bake that in?

AB: Yeah. Snowball is an important element to move data, right? That's the UPS-and-FedEx way of moving data. But what I find customers doing is they actually use the tools that we built for MinIO, because the Snowball appliance also looks like an S3 API-compatible object store. And in fact, I've been told that when you want to ship multiple Snowball appliances, they actually put MinIO on them to make them look like one unit, because MinIO can erasure-code objects across multiple Snowball appliances. And the mc tool, unlike the AWS CLI, which is really meant for developers, like, low-level calls, mc gives you [scoring 00:21:08] tools, like ls, cp, rsync-like tools, and it's easy to move and copy and migrate data. Actually, that's how people deal with it.

Corey: Oh, God. I hadn't even considered the problem of having a fleet of Snowball Edges here that you're trying to do a mass data migration on, which is basically how you move petabyte-scale data: a whole bunch of parallelism. But having to figure that out on a case-by-case basis would be nightmarish. You're right, there is no good way to do that natively.

AB: Yeah. In fact, Western Digital and a few other players, too: Western Digital created a Snowball-like appliance and put MinIO on it. And they are actually working with some system integrators to help customers move lots of data. Snowball-like functionality is important, and more and more customers need it.

Corey: This episode is sponsored in part by Honeycomb. I'm not going to dance around the problem. Your. Engineers. Are. Burned. Out. They're tired from pagers waking them up at 2 a.m. for something that could have waited until after their morning coffee. Ring ring, who's there? It's Nagios, the original call of duty!
They're fed up with relying on two or three different "monitoring tools" that still require them to manually trudge through logs to decipher what might be wrong. Simply put, there's a better way. Observability tools like Honeycomb (and very little else, because they do admittedly set the bar) show you the patterns and outliers of how users experience your code in complex and unpredictable environments, so you can spend less time firefighting and more time innovating. It's great for your business, great for your engineers, and, most importantly, great for your customers. Try free today at honeycomb.io/screaminginthecloud. That's honeycomb.io/screaminginthecloud.

Corey: Increasingly, it felt like, back in the on-prem days, you'd have a file server somewhere that was either a SAN or a NAS. The question was only whether it was presented to various things as a volume or as a file share. Then in cloud, the default storage mechanism, unquestionably, was object store. And now we're starting to see it come back again. So, it started to increasingly feel, in a lot of ways, like cloud is no longer so much a place that is somewhere else, but instead much more of an operating model for how you address things.

I'm wondering when the next generation of prosumer networking equipment, for example, is going to say, "Oh, and send these logs over to what object store?" Because right now, it's still write a file and SFTP it somewhere else, at least the good ones; some of the crap ones still want old unencrypted FTP, which is neither here nor there. But I feel like it's coming back around again. Like, when do even home users get, instead of "where do you save this file to," a cloud abstraction, one which hopefully never makes them deal with an S3-style endpoint, but which can underpin an awful lot of things? It feels like it's coming back, and that cloud is the de facto way of thinking about things. Is that what you're seeing?
Does that align with your belief on this?

AB: I fundamentally believe that, in the long run, applications will go SaaS, right? Like, if you remember the days when you used to install QuickBooks and ACT and stuff in your data center, when you used to run your own Exchange servers: those days are gone. I think these applications will become SaaS. But the infrastructure building blocks for that SaaS, whether they are cloud or their own colo, I think in the long run it will be multi-cloud and colo all combined, and all of them will look alike.

But what I find from the customer's journey is that the old world and the new world are incompatible. When they shifted from bare metal to virtualization, they didn't have to rewrite their applications. But this time, it is a tectonic shift. Every single application, you have to rewrite. If you retrofit your application into the cloud, bad idea, right? It's going to cost you more, and I would rather not do it.

Even though cloud players are trying to make file and block, like, file system services [unintelligible 00:24:01] and stuff, available, they make them ten times more expensive than object. It's just to [integrate 00:24:07] some legacy applications, but it's still a bad idea to just move legacy applications there. What I'm finding is that if you still run your infrastructure with an enterprise IT mindset, you're out of luck. It's going to be super expensive, and you're going to be left out. Modern infrastructure, because of the scale, has to be treated as code. You have to run infrastructure with software engineers. And this cultural shift has to happen.

And that's why, in the long run, everyone will look like AWS. We always said that, and it's now becoming true. Kubernetes and MinIO are basically leveling the ground everywhere, giving you ECS- and S3-like infrastructure inside AWS or outside AWS, everywhere.
But what I find to be the challenging part is the cultural mindset. If they still have the old cultural mindset and they want to adopt cloud, it's not going to work.

You have to change the DNA, the culture, the mindset, everything. The best way to do it is to go cloud-first. Adopt it, modernize your application, learn how to run and manage infrastructure, then ask the economics question, the unit economics. Then you will find the answers yourself.

Corey: On some level, that is the path forward. I feel like there's just a very long tail of systems that have been working and have been meeting the business objective. And, well, "we should go and refactor this because, I don't know, a couple of folks on a podcast said we should" isn't the most compelling business case for doing a lot of it. It feels like these things sit there until there is more upside than just cost-cutting to changing the way they're built and run. That's the reason people have been talking about getting off the mainframe since the '90s in some companies, and the mainframe is very much still there. It is so ingrained in the way they do business that they'd have to rethink a lot of the architectural things that have sprung up around it.

I'm not trying to shame anyone for the [laugh] state their environment is in. I've never yet met a company that was super proud of its internal infrastructure. Everyone's always apologizing because it's a fire. But they think someone else has figured this out somewhere and it all runs perfectly. I don't think that exists.

AB: What I am finding is that if you are running it the enterprise IT style, where you are the one telling the application developers, "Here you go, you have this many VMs, and you have a VMware license and, like, JBoss, like, WebLogic, and a SQL Server license, now go build your application," you won't be able to do it.
Because application developers talk about Kafka and Redis and Kubernetes; they don't speak the same language. And that's when these developers go to the cloud, finish their application, and take it live from zero lines of code before IT can procure and provision infrastructure for them. The change that has to happen is: how can you give the developers what they want? Now that reverse journey is also starting. In the long run, everything will look alike, but what I'm finding is that people running enterprise IT infrastructure, traditional infrastructure, are ashamed to talk about it.

But then you go to the cloud, and at scale there are some parts of it you want to move; now you really know why you want to move. For economic reasons: particularly, the data-intensive workloads become very expensive. And for that part, they go to a colo but leave the applications on the cloud. So, the multi-cloud model, I think, is inevitable. The expensive pieces: if you are looking at yourself as a hyperscaler and your data is growing, if your business focus is a data-centric business, then parts of the data and data analytics, the ML workloads, will actually go out, if you're looking at unit economics. If all you are focused on is productivity, stick to the cloud and you're still better off.

Corey: I think that's a divide that gets lost sometimes. When people say, "Oh, we're going to move to the cloud to save money," it's, "No, you're not." At a five-year time horizon, I would be astonished if that juice were worth the squeeze in almost any scenario. The reason you go there is for a capability story, when it's right for you.

That also means that steady-state workloads that are well understood can often be run more economically in a place that is not the cloud. Everyone thinks for some reason that I tend toward "it's cloud or it's trash."
No, I'm a big fan of doing things that are sensible, and cloud is not the right answer for every workload under the sun. Conversely, when someone says, "Oh, I'm building a new e-commerce store," or whatnot, "and I've decided cloud is not for me," it's, "Ehh, you sure about that?"

That sounds like you are smack-dab in the middle of the cloud use case. But all these things wind up acting as constraints and strategic objectives. And technology and single-vendor answers are rarely going to be the panacea their sales teams say they will.

AB: Yeah. And I find that organizations that have SREs, DevOps, and software engineers running the infrastructure are actually ready to go multi-cloud or go to a colo, because they know exactly what they're doing. They have the containers, Kubernetes, and microservices expertise. If you are still on a traditional SAN, NAS, and VM architecture: go to the cloud, rewrite your application.

Corey: I think there's a misunderstanding in the ecosystem around what cloud repatriation actually looks like. Everyone claims it doesn't exist because there are basically no companies worth mentioning that say, "Yep, we've decided the cloud is terrible, we're taking everything out, and we are going to data centers. The end." In practice, it's individual workloads that do not make sense in the cloud. Sometimes the back-of-the-envelope analysis means it's not going to work out; other times it's during a proof of concept; and other times, as things hit a certain point of scale, pulling an individual workload back makes an awful lot of sense. But everything else is probably going to stay in the cloud, and these companies don't want to antagonize the cloud providers by talking about it in public. But that model is very real.

AB: Absolutely.
Actually, what we are finding is that on the application side, parts of their overall ecosystem, right, within the company, run on the cloud, but on the data side, in some of the examples, these are in the range of 100 to 500 petabytes. The 500-petabyte customer actually started at 500 petabytes, and their plan is to go to exascale. And they are actually doing repatriation because, for them, their business is consumer-facing and extremely price-sensitive; when you're consumer-facing, every dollar you spend counts. And if you don't manage it at scale, it matters a lot, right? It will kill the business.

Particularly in the last two years, the cost became an important element of their infrastructure, and they knew exactly what they wanted. They are thinking of themselves as hyperscalers. They get commodity hardware, the same hardware, right, just a server with a bunch of [unintelligible 00:30:35] and network, and put it in a colo, or even lease these boxes; they know what their demand is. Even at ten petabytes, the economics start to matter. If you're processing it, on the data side, we have several customers now moving from cloud to colo, and this is the range we are talking about.

They don't talk about it publicly because, sometimes, you don't want to be seen as anti-cloud. But they're also not anti-cloud; they don't want to leave the cloud. Completely leaving the cloud is a different story, and that's not the case. Applications stay there. It's the data lakes, the data infrastructure, the object store particularly, that go to a colo.

Now, your applications from all the clouds can access this centralized store: one object store you run in a colo, and the colos themselves have worldwide data centers. So, you can keep the data infrastructure in a colo, but applications can run on any cloud. Some of them, surprisingly, have a global customer base. And not all of them are cloud.
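The unit-economics argument AB is making can be sketched with a toy cost model. Every price below is an assumption invented for illustration, not a quote from the episode or from any provider's price list; the point is only the shape of the comparison, where cloud cost scales with both stored bytes and bytes moved out, while a colo's all-in rate does not charge for egress:

```python
# Back-of-the-envelope storage economics at petabyte scale.
# ALL prices are illustrative assumptions, not real price-list figures.

PB_IN_GB = 1_000_000  # decimal: 1 PB = 1,000,000 GB

def monthly_cloud_cost(pb: float, storage_per_gb=0.021, egress_per_gb=0.05,
                       egress_fraction=0.10) -> float:
    """Cloud object storage plus egress on a fraction of the data each month."""
    gb = pb * PB_IN_GB
    return gb * storage_per_gb + gb * egress_fraction * egress_per_gb

def monthly_colo_cost(pb: float, all_in_per_gb=0.008) -> float:
    """Colo: amortized hardware, space, and power rolled into one assumed rate."""
    return pb * PB_IN_GB * all_in_per_gb

pb = 10
print(f"cloud: ${monthly_cloud_cost(pb):,.0f}/mo")
print(f"colo:  ${monthly_colo_cost(pb):,.0f}/mo")
```

Under these made-up rates, ten petabytes runs about $260,000/month in the cloud versus $80,000/month in a colo, which is the kind of gap that makes data-heavy workloads the first candidates for repatriation even while applications stay put.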
Sometimes, if you ask what type of edge devices or edge data centers the applications themselves run on, they say it's a mix of everything. What really matters is not the infrastructure. Infrastructure, in the end, is CPU, network, and drives; it's a commodity. It's really the software stack: you want to make sure it's containerized and easy to deploy and roll out updates to, and you have to learn the Facebook-and-Google style of running a SaaS business. That change is coming.

Corey: It's a matter of time and a matter of inevitability. Now, nothing ever stays the same. Everything always changes in the full sweep of things, but I'm pretty happy with where I see the industry going these days. I want to start seeing a little bit less centralization around one or two big companies, but I am confident that we're starting to see an awareness of doing these things for the right reasons more broadly permeating.

AB: Right. Competition is always great for customers; they get to benefit from it. So, decentralization is a path to commoditizing the infrastructure. The bigger picture for me, what I'm particularly happy about, is that for a long time we carried industry baggage in the infrastructure space.

No one wants to change; no one wants to rewrite applications. As part of that equation, we carried the POSIX baggage, like SAN and NAS. You can't even do [unintelligible 00:32:48] as a service, NFS as a service; it's too much baggage. All of that is getting thrown out. The cloud players helped customers start with a clean slate. To me, that's the biggest advantage. And now that we have a clean slate, we can go on a whole new evolution of the stack, keeping it simpler, and everyone can benefit from this change.

Corey: Before we wind up calling this an episode, I do have one last question for you.
As I mentioned at the start, you're very much open-source, as in legitimate open-source, which means that anyone who wants to can grab an implementation and start running it. How do you, I guess, make peace with the fact that the majority of your user base is not paying you? And how do you get people to decide, "You know what? We like the cut of his jib. Let's give him some money"?

AB: Mm-hm. Yeah, if I looked at it that way, right, I have both the [unintelligible 00:33:38], right, on the open-source side as well as the business. But I don't see them as conflicting. If I ran it as a charity, right, like, I take donations: if you love the product, here is the donation box. That doesn't work at all, right?

I shouldn't take investor money, and I shouldn't have a team, because I have a job to pay their bills, too. But I actually find open-source to be incredibly beneficial. For me, it's about delivering value to the customer. If you pay me $5, I ought to make you feel $50 worth of value. The same software you would buy from a proprietary vendor: if I'm a customer, given software equal in functionality, one proprietary and one open-source, I would actually prefer the open-source one and pay even more.

But why are customers really paying me now, and what's our view on open-source? I'm actually the free-software guy. Free software and open-source are actually not exactly equal, right? We are among the purest of the open-source community, and we have strong views on what open-source means. That's why we call it free software. And free here means freedom, right? Free does not mean gratis, free of cost. It's actually about freedom, and I deeply care about it.

For me, it's a philosophy and a way of life. That's why I don't believe in open core and other models: holding back, giving crippleware, is not open-source, right? "I give you some freedom, but not all" breaks the spirit.
So, MinIO is a hundred percent open-source, and it's open-source for the open-source community. We did not take some community-developed code and then add commercial support on top.

We built the product, we believed in open-source, we still believe, and we will always believe. Because of that, we open-sourced our work. And it's open-source for the open-source community. As you build applications, the AGPL license applies to the derivative works; they have to be compatible with the AGPL, because we are the creator. If you cannot open-source your application's derivative works, you can buy a commercial license from us. We are the creator, so we can give you a dual license. That's how the business model works.

That way, the open-source community completely benefits, and it's about software freedom. There are customers for whom open-source is a good thing and who want to pay because it's open-source. There are some customers who want to pay because they can't open-source their applications and derivative works, so they pay. It's a happy medium; that way, I actually find open-source to be incredibly beneficial.

Open-source gave us trust, more than adoption. It's not just free to download and use. More than that, for the customers that matter, the community that matters, they can see the code and everything we did. It's not "because I said so," not marketing and sales telling you to believe whatever they say. You download the product, experience it, and fall in love with it, and when it becomes an important part of your business, that's when they engage with us, because then license compatibility and data loss or a data breach all become important. I don't see open-source as conflicting with the business; it is actually incredibly helpful. And customers see that value in the end.

Corey: I really want to thank you for being so generous with your time.
If people want to learn more, where should they go?

AB: I was on Twitter, and now I think I'm spending more time on, maybe, LinkedIn. They can send me a request and then we can chat. I'm always spending time with other entrepreneurs, architects, and engineers, sharing what I've learned and what I know, and learning from them. There is also a [community open channel 00:37:04]. And just send me a mail at ab@min.io; I'm always interested in talking to our user base.

Corey: And we will, of course, put links to that in the [show notes 00:37:12]. Thank you so much for your time. I appreciate it.

AB: It's wonderful to be here.

Corey: AB Periasamy, CEO and co-founder of MinIO. I'm Cloud Economist Corey Quinn, and this has been a promoted guest episode of Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice, which presumably will also include an angry, loud comment that we can access from anywhere because of shared APIs.

Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.
In this episode of Tech. Strong. Women., hosts Jodi Ashley and Tracy Ragan talk with Liz Rice, chief open source officer at Isovalent, about extended Berkeley Packet Filter (eBPF) and its impact on the open source ecosystem and observability, the challenges open source developers face with the upcoming Cyber Resilience Act legislation, and the significant impact AI is having on open source. Rice discusses how AI opens up numerous opportunities for further automation and optimization of open source development, integration, and delivery all along the SDLC. Finally, Rice explores how the modern remote, work-from-anywhere technology landscape will empower more women and underrepresented groups to pursue tech careers.
Liz Rice, Chief Open Source Officer at Isovalent and Emeritus Chair of the CNCF Technical Oversight Committee, talks with Lisa Martin and John Furrier as part of their coverage of Cloud Native SecurityCon 2023.
Reza Rahman, a Principal Program Manager for Java on Azure, talks to us about the work his team has done to enable customers using legacy Java frameworks to easily migrate to Azure or modernize in a variety of ways.

Media file: https://azpodcast.blob.core.windows.net/episodes/Episode449.mp3
YouTube: https://youtu.be/OqjDtcT6c9E

Resources:
https://speakerdeck.com/reza_rahman/why-jakarta-ee-developers-are-first-class-citizens-on-azure
https://learn.microsoft.com/en-us/azure/developer/java/ee/
https://aka.ms/migration-survey

Other updates:
Public preview: Call automation capabilities for Azure Communication Services | Azure updates | Microsoft Azure
Improve speech-to-text accuracy with Azure Custom Speech: https://azure.microsoft.com/en-us/blog/improve-speechtotext-accuracy-with-azure-custom-speech/
Microsoft and Isovalent partner to bring next generation eBPF dataplane for cloud-native applications in Azure: https://azure.microsoft.com/en-us/blog/microsoft-and-isovalent-partner-to-bring-next-generation-ebpf-dataplane-for-cloudnative-applications-in-azure/
Microsoft Cost Management updates, November 2022: https://azure.microsoft.com/en-us/blog/microsoft-cost-management-updates-november-2022/
Free Services | Microsoft Azure
Public preview: Azure SQL Trigger for Azure Functions | Azure updates | Microsoft Azure
Bret is joined by Liz Rice, Chief Open Source Officer at Isovalent, the makers of Cilium, to discuss Cilium and eBPF. Liz Rice is back to give us more insight into eBPF and the Cilium project. Isovalent is the company that created and manages the Cilium project, which does an increasing number of things for Kubernetes, including networking, CNI support, security, advanced networking, and observability, as well as other things like load balancing. Liz is one of my go-to experts on how low-level Linux internals work. She's been speaking about container internals since the early days of Docker.

Streamed live on YouTube on September 8, 2022. Unedited live recording of this show on YouTube (Ep #183)

★Topics★
Cilium website
Isovalent website
eBPF
Network Policy Editor

★Liz Rice★
Liz Rice on Twitter
Liz Rice's website
Books on Containers, eBPF, Kubernetes and Go

★Join my Community★
Best coupons for my Docker and Kubernetes courses
Chat with us and fellow students on our Discord Server DevOps Fans
Homepage bretfisher.com

★ Support this podcast on Patreon ★
Tracy Holmes is a Technical Community Advocate at Isovalent who takes us through her path to reaching her present position. In this episode we watch her story develop, from being involved in music and theater in high school to working for a prestigious technology company.

00:00 Introduction
05:40 What is Tracy doing today?
08:55 First memory of a computer
14:00 Interests in high school
22:22 Growing up / raising kids
30:00 Struggles in college / anxiety
36:25 Switching majors in college
42:25 Social responsibility with jobs
46:39 After-college jobs
58:38 Working as a system admin
1:02:00 Anxiety and fixing things
1:16:00 Participating in meetups
1:22:20 Finding Isovalent
1:29:00 Future projects with technology
1:33:20 Things happen for a reason
1:38:20 Contact info

Connect with Tracy:
Twitter: https://twitter.com/tracypholmes
Linkedin: https://www.linkedin.com/in/tracypholmes/
Github: https://github.com/tracypholmes

Mentioned in today's episode:
Isovalent: https://isovalent.com
HashiCorp: https://www.hashicorp.com

Want more from Ardan Labs? You can learn Go, Kubernetes, Docker & more through our video training, live events, or through our blog!
Online Courses: https://ardanlabs.com/education/
Live Events: https://www.ardanlabs.com/live-training-events/
Blog: https://www.ardanlabs.com/blog
Github: https://github.com/ardanlabs
Liz Rice, Chief Open Source Officer at Isovalent, joins me to discuss the business model behind Cilium and the enjoyment she has found working in open source. In this episode, Liz and I discuss why Isovalent decided to donate Cilium to the CNCF, and the decisions behind developing open-source Cilium versus Cilium for Enterprise. Tune in to hear how entrepreneurship taught Liz what she didn't enjoy doing so she could focus on work she enjoys, and what she finds most rewarding about working in open source.

Highlights:
Liz introduces herself and describes her role as Chief Open Source Officer at Isovalent (00:55)
Why Isovalent decided to donate Cilium to CNCF (02:11)
What Liz sees as the relationship between cloud native and open source (07:43)
Liz's past experiences as an entrepreneur and how they led her to where she is now (10:05)
How Isovalent has evolved and grown into a company with enterprise product offerings (17:17)
How decisions are made differently when developing the open-source version of Cilium versus the enterprise version (22:12)
The gratification and value Liz has found working in open source (25:58)

Links:
LinkedIn: https://www.linkedin.com/in/lizrice
Twitter: https://twitter.com/lizrice
Github: https://github.com/lizrice
Company: https://isovalent.com/
Cilium: www.cilium.io
eBPF: www.eBPF.io
How much on-premises IT is there? Is it 25% of all IT, or more like 90%? We try to figure that out at the start, and then move on to our selected news items since last time: a new take on service meshes known as "ambient mesh," a look at GitLab, and two surveys covering cloud security and management/observability.

And, check out Ben's video on getting Kubernetes set up for developers with the Tanzu Application Platform. Check out SpringOne, which is coming up Dec 6th to 8th in San Francisco; you can get $200 off if you register with the code COTE200. If you prefer visuals, check out the livestream recording of this episode, which includes some bonus after-show content where we dream about the alternative reality where the Amiga still existed and hear about Coté's stroopwafel/coffee incident.

Topics:
How much on-premises vs. public cloud IT is out there? Is it, like, 50/50, or more like 90/10? AWS CEO throws out feels that 90% of IT is still on-premises. A lot of words that go nowhere, from Coté, on this topic.
News: In service mesh land, there's a new idea: Ambient Mesh. Related, check out this talk from Duffie Cooley of Isovalent.
What does GitLab do, exactly? 'Cause they seem to be doing it very well.
New Relic Observability report
Snyk State of Cloud Security Report: organizations of varying sizes and industries reported being impacted by major cloud security events over the last 12 months, with startups (89%) and public sector organizations (88%) the most affected.
About Martin

Martin Casado is a general partner at the venture capital firm Andreessen Horowitz, where he focuses on enterprise investing. He was previously the cofounder and chief technology officer at Nicira, which was acquired by VMware for $1.26 billion in 2012. While at VMware, Martin was a fellow, and served as senior vice president and general manager of the Networking and Security Business Unit, which he scaled to a $600 million run-rate business by the time he left VMware in 2016.

Martin started his career at Lawrence Livermore National Laboratory, where he worked on large-scale simulations for the Department of Defense before moving over to work with the intelligence community on networking and cybersecurity. These experiences inspired his work at Stanford, where he created the software-defined networking (SDN) movement, leading to a new paradigm of network virtualization. While at Stanford he also cofounded Illuminics Systems, an IP analytics company, which was acquired by Quova Inc. in 2006.

For his work, Martin was awarded both the ACM Grace Murray Hopper Award and the NEC C&C Award, and he's an inductee of the Lawrence Livermore Lab's Entrepreneur's Hall of Fame. He holds both a PhD and a master's degree in computer science from Stanford University.

Martin serves on the board of ActionIQ, Ambient.ai, Astranis, dbt Labs, Fivetran, Imply, Isovalent, Kong, Material Security, Netlify, Orbit, Pindrop Security, Preset, RapidAPI, Rasa, Tackle, Tecton, and Yubico.

Links:
Yet Another Infra Group Discord Server: https://discord.gg/f3xnJzwbeQ
"The Cost of Cloud, a Trillion Dollar Paradox": https://a16z.com/2021/05/27/cost-of-cloud-paradox-market-cap-cloud-lifecycle-scale-growth-repatriation-optimization/

Transcript

Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn.
This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.Corey: This episode is sponsored in part by Honeycomb. When production is running slow, it's hard to know where problems originate. Is it your application code, users, or the underlying systems? I've got five bucks on DNS, personally. Why scroll through endless dashboards while dealing with alert floods, going from tool to tool to tool that you employ, guessing at which puzzle pieces matter? Context switching and tool sprawl are slowly killing both your team and your business. You should care more about one of those than the other; which one is up to you. Drop the separate pillars and enter a world of getting one unified understanding of the one thing driving your business: production. With Honeycomb, you guess less and know more. Try it for free at honeycomb.io/screaminginthecloud. Observability: it's more than just hipster monitoring.Corey: This episode is sponsored in part by our friends at Sysdig. Sysdig secures your cloud from source to run. They believe, as do I, that DevOps and security are inextricably linked. If you wanna learn more about how they view this, check out their blog, it's definitely worth the read. To learn more about how they are absolutely getting it right from where I sit, visit Sysdig.com and tell them that I sent you. That's S Y S D I G.com. And my thanks to them for their continued support of this ridiculous nonsense.Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. I'm joined today by someone who has taken a slightly different approach to being—well, we'll call it cloud skepticism here. 
Martin Casado is a general partner at Andreessen Horowitz and has been on my radar starting a while back, based upon a piece that he wrote focusing on the costs of cloud and how repatriation is going to grow. You wrote that in conjunction with your colleague, Sarah Wang. Martin, thank you so much for joining me. What got you onto that path?Martin: So, I want to be very clear, just to start with is, I think cloud is the biggest innovation that we've seen in infrastructure, probably ever. It's a core part of the industry. I think it's very important, I think every company's going to be using cloud, so I'm very pro-cloud. I just think the nature of how you use clouds is shifting. And that was the focus.Corey: When you first put out your article in conjunction with your colleague as well, like, I saw it and I have to say that this was the first time I'd really come across any of your work previously. And I have my own biases that I started from, so my opening position on reading it was this is just some jerk who's trying to say something controversial and edgy to get attention. That's my frickin job. Excuse me, sir. And who is this clown?So, I started digging, and what I found really changed my perspective because as mentioned at the start of the show, you are a general partner at Andreessen Horowitz, which means you are a VC. You are definitionally almost the archetype of a VC in that sense. And to me, being a venture capitalist means the most interesting thing about you is that you write a large check consisting of someone else's money. And that's never been particularly interesting.Martin: [laugh].Corey: You kind of cut against that grain and that narrative. 
You have a master's and a PhD in computer science from Stanford; you started your career at one of the national labs—Lawrence Livermore, if memory serves—you wound up starting a business, Nicira, if I'm pronouncing that correctly—Martin: Yeah, yeah, yeah.Corey: That you then sold to VMware in 2012, back at a time when that was a noble outcome, rather than a state of failure because VMware is not exactly what it once was. You ran a $600 million a year business while you were there. Basically, the list of boards that you're on is lengthy enough and notable enough that it sounds almost like you're professionally bored, so I don't—Martin: [laugh].Corey: So, looking at this, it's okay, this is someone who actually knows what he is talking about, not just, "Well, I talked to three people in pitch meetings and I now think I know what is going on in this broader industry." You pay attention, and you're connected, disturbingly well, to what's going on, to the point where if you see something, it is almost certainly rooted in something that is happening. And it's a big enough market that I don't think any one person can keep their finger on the pulse of everything. So, that's when I started really digging into it, paying attention, and more or less took a lot of what you wrote as there are some theses in here that I want to prove or disprove. And I spent a fair bit of time basically threatening, swindling, and bribing people with infinite cups of coffee in order to start figuring out what is going on.And I am begrudgingly left with no better conclusion than you have a series of points in here that are very challenging to disprove. So, where do you stand today, now that, I guess, the whole rise and fall of the hype around your article on cloud repatriation—which yes, yes, we'll put a link to it in the show notes if people want to go there—but you've talked about this in a lot of different contexts.
Having had the conversations that you've had, and I'm sure some very salty arguments with people who have a certain vested interest in you being wrong, do you wind up continuing to stand by the baseline positions that you've laid out, or have they evolved into something more nuanced?Martin: So yeah, I definitely want to point out, so this was work done with Sarah Wang, who was also at Andreessen Horowitz; she's also a GP. She actually did the majority of the analysis and she's way smarter than I am. [laugh]. And so, I'm just very—feel very lucky to work with her on this. And I want to make sure she gets due credit on this.So, let's talk about the furor. So like, I actually thought that this was kind of interesting and it started a good discussion, but instead, like, [laugh] the amount of, like, response pieces and, like, angry emails I got, and [laugh] like, I mean it just—and I kind of thought to myself, like, "Why are people so upset?" I think there's three reasons. I'm going to go through them very quickly because they're interesting.So, the first one is, like, you're right, like, I'm a VC. I think people see a VC and they're like, oh, lack of credibility, lack of accountability, [laugh], you know, doesn't know what they're doing, broad pattern matcher. And, like, I will say, like, I did not necessarily write this as a VC; I wrote this as somebody that's, like, listen, my PhD is in infrastructure; my company was in infrastructure. It's all data center stuff. I had a $600 million a year data center business that sold infrastructure into data centers. I've worked with all of the above. Like, I've worked with Amazon, I've—Corey: So, you sold three Cisco switches?Martin: [laugh]. That's right.Corey: I remember those days. Those were awesome, but not inexpensive.Martin: [laugh]. That's right. Yeah, so like, you know, I had 15 years. It's kind of a culmination of that experience.
So, that was one; I just think that people see VC and they have a reaction.The second one is, I think people still have the first cloud wars fresh in their memories and so they just don't know how to think outside of that. So, a lot of the rebuttals were first cloud war rebuttals. Like, “Well, but internal IT is slow and you can't have the expertise.” But like, they just don't apply to the new world, right? Like, listen, if you're Cloudflare, to say that you can't run, like, a large operation is just silly. If you went to Cloudflare and you're like, “Listen, you can't run your own infrastructure,” like, they'd take out your sucker and pat you on the head. [laugh].Corey: And not for nothing, if you try to run what they're doing on other cloud providers from a pure bandwidth perspective, you don't have a company anymore, regardless of how well funded you are. It's a never-full money pit that just sucks all of the money. And I've talked to a number of very early idea stage companies that aren't really founded yet about trying to do things like CDN-style work or streaming video, and a lot of those questions start off with well, we did some back-of-the-envelope math around AWS data transfer pricing, and if our numbers are right, when we scale, we'll be spending $65,000 on data transfer every minute. What did we get wrong?And it's like, “Oh, yeah, you realize that one thing is per hour not per minute, so slight difference there. But no, you're basically correct. Don't do it.” And yeah, no one pays retail price at that volume, but they're not going to give you a 99.999% discount on these things, so come up with a better plan. Cloudflare's business will not work on AWS, full stop.Martin: Yep, yep. 
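Corey's back-of-the-envelope egress math above is worth writing down, because the per-minute versus per-hour confusion is easy to make. This is a hedged sketch: the $0.09/GB rate is an assumed retail internet-egress figure used only for illustration, not a quoted AWS price, and real bills involve tiering and negotiated discounts.

```python
# Back-of-the-envelope cloud egress math, as discussed above.
# ASSUMPTION: ~$0.09/GB retail internet egress; real pricing is tiered
# and heavily discounted at volume, so treat this as a rough upper bound.
EGRESS_PRICE_PER_GB = 0.09  # USD, assumed retail rate

def egress_cost_per_hour(gbps: float, price_per_gb: float = EGRESS_PRICE_PER_GB) -> float:
    """Cost of one hour of sustained egress at the given throughput in Gbps."""
    gb_per_hour = gbps / 8 * 3600  # Gbps -> GB/s -> GB per hour
    return gb_per_hour * price_per_gb

# A CDN-style workload pushing a sustained terabit per second:
hourly = egress_cost_per_hour(1000)
print(f"~${hourly:,.0f} per hour, ~${hourly * 24 * 30:,.0f} per month")
```

At that throughput the sketch lands around $40,000 an hour at the assumed list rate, which is the whole point: a bandwidth-heavy business like Cloudflare's cannot carry that line item on a hyperscaler, even after deep discounts.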
So, I legitimately know, basically, household name public companies that are software companies that anybody listening to this knows the name of these companies, who have product lines who have 0% margins because they're [laugh] basically, like, for every dollar they make, they pay a dollar to Amazon. Like, this is a very real thing, right? And if you go to these companies, these are software infrastructure companies; they've got very talented teams, they know how to build, like, infrastructure. To tell them that like, “Well, you know, you can't build your own infrastructure,” or something is, I mean, it's like telling, like, an expert in the business, they can't do what they do; this is what they do. So, I just think that part of the furor, part of the uproar, was like, I just think people were stuck in this cloud war 1.0 mindset.I think the third thing is, listen, we've got an oligopoly, and they employ a bunch of people, and they've convinced a bunch of people they're right, and it's always hard to change that. And I also think there's just a knee-jerk reaction to these big macro shifts. And it was the same thing we did to software-defined networking. You know, like, my grad school work was trying to change networking to go from hardware to software. I remember giving a talk at Cisco, and I was, like, this kind of like a naive grad student, and they literally yelled at me out of the room. They're like, it'll never work.Corey: They tried to burn you as a witch, as I recall.Martin: [laugh]. And so, your specific question is, like, have our views evolved? But the first one is, I think that this macro downturn really kind of makes the problem more acute. And so, I think the problem is very, very real. 
And so, I think the question is, "Okay, so what happens?"So, let's say if you're building a new software company, and you have a choice of using, like, one of the Big Three public clouds, but it impacts your margins so much that it depresses your share price, what do you do? And I think that we thought a lot more about what the answers there are. And the ones that I think that we're seeing is, some actually are; companies are building their own infrastructure. Like, very famously MosaicML is building their own infrastructure. Fly.io is building their own infrastructure.Mighty—you know, Suhail's company—building his own infrastructure. Cloudflare has their own infrastructure. So, I think if you're an infrastructure provider, a very reasonable thing to do is to build your own infrastructure. If you're not a core infrastructure provider, you can still use somebody's infrastructure that's built at a better cost point.So, for example, if I'm looking at a CDN tier, I'm going to use Fly.io, right? I mean, it's like, it's way cheaper, the multi-region is way better, and so, like, I do think that we're seeing, like, almost verticalized clouds getting built out that address this price point and, like, these new use cases. And I think this is going to start happening more and more now. And we're going to see basically almost the delamination of the cloud into these verticalized clouds.Corey: I think there's also a question of scale, where if you're starting out in the evening tonight, to—I want to build, I don't know, Excel as a service or something. Great. You're pretty silly if you're not going to start off with a cloud provider, just because you can get instant access to resources, and if your product catches on, you scale out without having to ever go back and build it as quote-unquote "Enterprise grade," as opposed to having to build it on cheap servers or Raspberry Pis or something floating around.
By the time that costs hit a certain point—and what that point is going to depend on your stage of company and lifecycle—you're remiss if you don't at least do an analysis on is this the path we want to continue on for the service that we're offering?And to be clear, the answer to this is almost entirely going to be bounded by the context of your business. I don't believe that companies as a general rule, make ill-reasoned decisions. I think that when we see a decision a company makes, by and large, there's context or constraints that we don't see that inform that. I know, it's fun to dunk on some of the large companies' seemingly inscrutable decisions, but I will say, having had the privilege to talk to an awful lot of execs in an awful lot of places—particularly on this show—I don't find myself encountering a whole lot of people in those roles who I come away with thinking that they're a few fries short of a Happy Meal. They generally are very well reasoned in why they do what they do. It's just a question of where we think the future is going on some level.Martin: Yep. So, I think that's absolutely right. So, to be a little bit more clear on what I think is happening with the cloud, which is I think every company that gets created in tech is going to use the cloud for something, right? They'll use it for development, the website, test, et cetera. And many will have everything in the cloud, right?So, the cloud is here to stay, it's going to continue to grow, it's a very important piece of the ecosystem, it's very important piece of IT. I'm very, very pro cloud; there's a lot of value. But the one area that's under pressure is if your product is SaaS if your product is selling Software as a Service, so then your product is basically infrastructure, now you've got a product cost model that includes the infrastructure itself, right? And if you reduce that, that's going to increase your margin. 
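Martin's share-price argument comes down to simple gross-margin arithmetic. The sketch below uses entirely hypothetical figures (the revenue, COGS, and savings numbers are invented for illustration, not taken from any real company) to show how cloud spend sitting inside COGS moves the margin line that public markets price against:

```python
# Gross-margin arithmetic behind the COGS argument above.
# ASSUMPTION: all dollar figures are hypothetical, chosen only to
# illustrate the mechanics.
def gross_margin(revenue: float, cloud_cogs: float, other_cogs: float) -> float:
    """Gross margin as a fraction of revenue."""
    return (revenue - cloud_cogs - other_cogs) / revenue

revenue = 500e6      # hypothetical $500M ARR SaaS company
other_cogs = 50e6    # support, licenses, and other non-cloud COGS

on_cloud = gross_margin(revenue, cloud_cogs=150e6, other_cogs=other_cogs)
repatriated = gross_margin(revenue, cloud_cogs=60e6, other_cogs=other_cogs)

print(f"on cloud:    {on_cloud:.0%} gross margin")    # 60%
print(f"repatriated: {repatriated:.0%} gross margin") # 78%
```

An 18-point margin swing on the same revenue is exactly the kind of delta that shows up in a valuation multiple, which is why this analysis belongs at the board level rather than in an engineering retro.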
And so, every company that's doing that should ask the question, like, A, is the Big Three the right one for me?Maybe a verticalized cloud—like for example, whatever Fly or Mosaic or whatever is better because the cost is better. And I know how to, you know, write software and run these things, so I'll use that. They'll make that decision or maybe they'll build their own infrastructure. And I think we're going to see that decision happening more and more, exactly because now software is being offered as a service and they can do that. And I just want to make the point, just because I think it's so important, that the clouds did exactly this to the hardware providers. So, I just want to tell a quick story, just because for me, it's just so interesting. So—Corey: No, please, I was only really paying attention to this market from 2016 or so. There was a lot of the early days that I was using as a customer, but I wasn't paying attention to the overall industry trends. Please, storytime. This is how I learned things. I hang out with smart people and I come away a little bit smarter than when I started.Martin: [laugh]. This is, like, literally my fa—this is why this is one of my favorite topics is what I'm about to tell you, which is, so the clouds have always had this argument, right? The big clouds, three clouds, they're like, “Listen, why would you build your own cloud? Because, like, you don't have the expertise, and it's hard and you don't have economies of scale.” Right?And the answer is you wouldn't unless it impacts your share price, right? If it impacts your share price, then of course you would because it makes economic sense. So, the clouds had that exact same dilemma in 2005, right? So, in 2005, Google and Amazon and Microsoft, they looked at their COGS, they looked like, “Okay, I'm offering a cloud. If I look at the COGS, who am I paying?”And it turns out, there was a bunch of hardware providers that had 30% margins or 70% margins. 
They're like, "Why am I paying Cisco these big margins? Why am I paying Dell these big margins?" Right? So, they had the exact same dilemma.And all of the arguments that they use now applied then, right? So, the exact same arguments, for example, "AWS, you know nothing about hardware. Why would you build hardware? You don't have the expertise. These guys sell to everybody in the world, you don't have the economies of scale."So, all of the same arguments applied to them. And yet… and yes, because it was part of COGS and it impacted the share price, they could make the economic argument to actually build hardware teams and build chips. And so, they verticalized, right? And so, it just turns out if the infrastructure becomes part of COGS, it makes sense to optimize that infrastructure. And I would say, the Big Three's foray into OEMs and hardware is a much, much, much bigger leap than an infrastructure company foraying into building their own infrastructure.Corey: There's a certain startup cost inherent to all these things. And the small version of that we had in every company that we started in a pre-cloud era: renting virtual computers from vendors was a thing, but it was still fraught and challenging, and things that we used then, like GoGrid, no longer exist, for good reason. But the alternative was, "Great, I'm going to start building and seeing if this thing has any traction." Well, you need to go lease a rack somewhere and buy servers from Dell, and they're going to do the fast expedited option, which means only six short weeks until they show up in the data center and then get sent away because they weren't expecting to receive them. And you wind up with this entire universe of hell between cross-connects and all the rest.And that's before you can ever get anything in front of customers or users to see what happens. Now, it's a swipe of a credit card away and your evening's experiments round up to 25 cents. That was significant.
Having to make these significant tens-of-thousands-of-dollars investments just to launch is no longer true. And I feel like that was a great equalizer in some respects.Martin: Yeah, I think that—Corey: And that cost has been borne by the astonishing level of investment that the cloud providers themselves have made. And that basically means that we don't have to. But it does come at a cost.Martin: I think it's also worth pointing out that it's much easier to stand up your own infrastructure now than it has been in the past, too. And so, I think that there's a gradient here, right? So, if you're building a SaaS app, [laugh] you would be crazy not to use the cloud; you'd just be absolutely insane, right? Like, what do you know about core infrastructure? You know, what do you know about building a back-end? Like, what do you know about operating these things? Go focus on your SaaS app.Corey: The calluses I used to have from crimping my own Ethernet patch cables in data centers have faded by now. I don't want them to come back. Yeah, we used to know how to do these things. Now, most people in most companies do not have that baseline of experience, for excellent reasons. And I wouldn't wish that on the current generation of engineers, except for the ones I dislike.Martin: However, that is if you're building an application. Almost all of my investments are people that are building infrastructure. [laugh]. They're already doing these hardcore backend things; that's what they do: they sell infrastructure. Do you think, like, someone, like, at Databricks doesn't understand how to run infr—of course they do. I mean, like, or Snowflake or whatever, right?And so, this is a gradient. On the extreme app end, you shouldn't be thinking about infrastructure; just use the cloud. Somewhere in the middle, maybe you start on the cloud, maybe you don't.
As you get closer to being a cloud service, of course you're going to build your own infrastructure.Like, for example—listen, I mean, I've been mentioning Fly; I just think it's a great example. I mean, Fly is a next-generation CDN that you can run compute on, where they build their own infrastructure—it's a great developer experience—and they would just be silly not to. Like, they couldn't even make the cost model work if they did it on the cloud. So clearly, there's a gradient here, and I just think that you would be remiss and probably negligent if you're selling software not to have this conversation, or at least do the analysis.Corey: This episode is sponsored in part by our friends at EnterpriseDB. EnterpriseDB has been powering enterprise applications with PostgreSQL for 15 years. And now EnterpriseDB has you covered wherever you deploy PostgreSQL on-premises, private cloud, and they just announced a fully-managed service on AWS and Azure called BigAnimal, all one word. Don't leave managing your database to your cloud vendor because they're too busy launching another half-dozen managed databases to focus on any one of them that they didn't build themselves. Instead, work with the experts over at EnterpriseDB. They can save you time and money, they can even help you migrate legacy applications—including Oracle—to the cloud. To learn more, try BigAnimal for free. Go to biganimal.com/snark, and tell them Corey sent you.Corey: I think there's also a philosophical shift, where a lot of the customers that I talk to about their AWS bills want to believe something that is often not true. And what they want to believe is that their AWS bill is a function of how many customers they have.Martin: Oh yeah.Corey: In practice, it is much more closely correlated with how many engineers they've hired. And it sounds like a joke, except that it's not.
The challenge that you have when you choose to build in a data center is that you have bounds around your growth because there are capacity concerns. You are going to run out of power, cooling, and space to wind up having additional servers installed. In cloud, you have an unbounded growth problem.S3 is infinite storage, and the reason I'm comfortable saying that is that they can add hard drives faster than you can fill them. For all effective purposes, it is infinite amounts of storage. There is no forcing function that forces you to get rid of things. You spin up an instance, and the natural state of it in a data center, as a virtual machine or a virtual instance, is that it's going to stop working in two to three years if left unmaintained, when a raccoon hauls it off into the woods to make a nest or whatever the hell raccoons do. In cloud, you will retire before that instance does, as it gets migrated to different underlying hosts, continuing to cost you however many cents per hour every hour until the earth crashes into the sun, or Amazon goes bankrupt.That is the trade-off you're making. There is no forcing function. And it's only money, which is a weird thing to say, but the failure mode of turning something off mistakenly that takes things down, well, that's disastrous to your brand and your company. Just leaving it up, well, it's only money. It's never a top-of-mind priority, so it continues to build and continues to build and continues to build until you're really forced to reckon with a much larger problem.It is a form of technical debt, where you've kicked the can down the road until you can no longer kick that can. Then your options are either go ahead and fix it or go back and talk to you folks, and it's time for more money.Martin: Yeah. Or talk to you. [laugh].Corey: There is that.Martin: No seriously, I think everybody should, honestly. I think this is a board-level concern for every compa—I sit on a lot of boards; I see this.
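The "it's only money" failure mode Corey describes compounds quietly. Here is a small sketch, using an assumed on-demand rate (a made-up price; real rates vary widely by instance type and region), of what one forgotten instance accrues while nobody is looking:

```python
# Cumulative cost of an instance nobody remembers to turn off.
# ASSUMPTION: the hourly rate below is a hypothetical on-demand price
# for a mid-size instance; substitute one from your provider's price list.
def forgotten_instance_cost(hourly_rate: float, years: float) -> float:
    """Cost of an instance left running 24/7 for the given number of years."""
    return hourly_rate * 24 * 365 * years

cost = forgotten_instance_cost(hourly_rate=0.192, years=3)
print(f"${cost:,.2f} quietly accrued over three years")
```

One instance is noise; the same pattern across a fleet, with no forcing function to clean it up, is how bills compound.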
And this has organically become a board-level concern. I think it should become a conscious board-level concern of, you know, cloud costs impact COGS. Any software company has it; it always becomes an issue, and so it should be treated as a first-class problem.And if you're not thinking through your options—and I think by the way, your company is a great option—but if you're not thinking through the options, then you're almost fiduciarily negligent. I think the vast, vast majority of people and vast majority of companies are going to stay on the cloud and just do some basic cost controls and some just basic hygiene and they're fine and, like, this doesn't touch them. But there are a set of companies, particularly those that sell infrastructure, where they may have to get more aggressive. And that ecosystem is now very vibrant, and there's a lot of shifts in it, and I think it's the most exciting place [laugh] in all of IT, like, personally in the industry.Corey: One question I have for you is: where do you draw the line around infrastructure companies? I tend to have an evolving view of it myself, where things that are hard and difficult do not become harder with time. It used to require a deep-level engineer with a week to kill to wind up compiling and building a web server. Now, it is evolved and evolved and evolved; it is check a box on a webpage somewhere and you're serving a static website. Managed databases, I used to think, were something that were higher up the stack and not infrastructure. Today, I'd call them pretty clearly infrastructure.Things seem to be continually, I guess, slipping beneath the waves, to borrow an iceberg analogy. And it's only the stuff that you can see that is interesting and differentiated, on some level. I don't know where the industry is going at all, but I continue to think of infrastructure companies as being increasingly broad.
[laugh].Corey: This was not planned, to be clear.Martin: No, no, no. Listen, I am such an infrastructure maximalist. And I've changed my opinion on this so much in the last three years. So, it used to be the case—and infrastructure has a long history of, like, calling the end of infrastructure. Like, every decade has been the end of infrastructure. It's like, you build the primitives and then everything else becomes an app problem, you know?Like, you build a cloud, and then we're done, you know? You build the PC and then we're done. And so, there are even very famous talks where people talk about the end of systems, once we've built everything. And I've totally changed my view. So, here's my current view.My current view is, infrastructure is the only, really, differentiation in systems, in all IT, in all software. It's just infrastructure. And the app layer is very important for the business, but the app layer always sits on infrastructure. And the differentiation in the app is provided by the infrastructure. And so, the start of value is basically infrastructure.And the design space is so huge, so huge, right? I mean, we've moved from, like, PCs to cloud to data. Now, the cloud is decoupling and moving to the CDN tier. I mean, like, the front-end developers are building stuff in the browser. Like, there's just so much stuff to do that I think the value is always going to accrue to infrastructure.So, in my view, anybody that's improving the app accuracy or performance or correctness with technology is an infrastructure company, right? And the more of that you do, [laugh] the more infrastructure you are. And I think, you know, in 30 years, you and I are going to be old, and we're going to go back on this podcast. We're going to talk and there's going to be a whole bunch of infrastructure companies that are being created that have accrued a lot of value.
I'm going to say one more thing, which is so—okay, this is a sneak preview for the people listening to this that nobody else has heard before.So, Sarah and I are back at it again—the brilliant Sarah, who did the first piece—and we're doing another study. And the study is if you look at public companies and you look at ones that are app companies versus infrastructure companies, where does the value accrue? And there are way, way more app companies; there's a ton of app companies, but it turns out that infrastructure companies have higher multiples and accrue more value. And that's actually a counter-narrative because people think that the business is the apps, but it just turns out that's where the differentiation is. So, I'm just an infra maximalist. I think you could be an infra person your entire career and it's the place to be. [laugh].Corey: And this is the real value that I see of looking at AWS bills. And our narrative is oh, we come in and we fix the horrifying AWS bill. And the naive pass is, "Oh, you cut the bill and make it lower?" Not always. Our primary focus has been on understanding it because you get a phone-number-looking bill from AWS. Great, you look at it, what's driving the cost? Storage.Okay, great. That doesn't mean anything to the company. They want to know what teams are doing this. What's it going to cost for them to add another thousand monthly active users? What is the increase in cost? How do they wind up identifying their bottlenecks? How do they track and assign portions of their COGS to different aspects of their service? How do they trace the flow of capital for their organization as they're serving their customers?And understanding the bill and knowing what to optimize and what not to becomes an increasingly strategic business concern.Martin: Yeah.Corey: That's the fun part. That's the stuff I don't see that software has a good way of answering, just because there's no way to use an API to gain that kind of business context.
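The breakdown Corey describes, getting from "storage is the cost driver" to "which team owns it," is at its core a grouping of billing line items by an allocation tag. A minimal sketch with invented line items follows; the shape of real billing exports differs, and untagged spend is almost always the interesting bucket:

```python
# Grouping hypothetical billing line items by a team tag, as discussed above.
from collections import defaultdict

line_items = [
    {"service": "S3",  "cost": 42_000.0, "tags": {"team": "data-platform"}},
    {"service": "EC2", "cost": 87_500.0, "tags": {"team": "search"}},
    {"service": "EC2", "cost": 12_300.0, "tags": {}},  # untagged spend
]

def cost_by_team(items):
    """Sum cost per 'team' tag; untagged items land in their own bucket."""
    totals = defaultdict(float)
    for item in items:
        totals[item["tags"].get("team", "untagged")] += item["cost"]
    return dict(totals)

print(cost_by_team(line_items))
# {'data-platform': 42000.0, 'search': 87500.0, 'untagged': 12300.0}
```

The arithmetic is trivial; the hard part, as the conversation notes, is the business context behind the tags, which no API hands you.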
When I started this place, I thought I was going to be building software. It turns out, there's so many conversations that have to happen as a part of this that cannot be replicated by software. I mean, honestly, my biggest competitor for all this stuff is Microsoft Excel because people want to try and do it themselves internally. And sometimes they do a great job, sometimes they don't, but it's about understanding the drivers behind their cost. And I think that is what was often getting lost because the cloud obscures an awful lot of that.Martin: Yeah. I can even just summarize this whole thing pretty quickly, which is, like, I do think that organically, like, cloud cost has become a board-level issue. And I think that the shift that founders and execs should make is to just, like, treat it like a first-class problem upfront. So, what does that mean? Minimally, it means understanding how these things break down—A, to your point—and B, there are a number of tools that actually help with onboarding of this stuff. Like, Vantage is one that I'm a fan of; it just provides some visibility.And then the third one is if you're selling Software as a Service, that's your core product or software, and particularly if it's infrastructure, if you don't actually do the analysis on, like, how this impacts your share price for different cloud costs, if you don't do that analysis, I would say you're fiduciarily negligent, just because the impact would be so high, especially in this market. And so, I think, listen, these three things are pretty straightforward and I think anybody listening to this should consider them if you're running a company, or you're an executive at a company.Corey: Let's be clear, this is also the kind of problem that when you're sitting there trying to come up with an idea for a business that you can put on slide decks and then present to people like you, these sound like the paradise of problems to have.
Like, “Wow, we're successful and our business is so complex and scaled out that we don't know where exactly a lot of these cost drivers are coming from.” It's, “Yeah, that sounds amazing.” Like, I remember those early days, back when all I was able to do and spend time and energy on was just down to the idea of, oh, I'm getting business cards. That's awesome. That means I've made it as a business person. Spoiler: it did not. Having an aggressive Twitter presence, that's what made me as a business person. But then there's this next step and this next step and this next step and this next step, and eventually, you look around and realize just how overwrought everything you've built is and how untangling it just becomes a bit of a challenge and a hell of a mess. Now, the good part is at that point of success, you can bring people in, like a CFO and a finance team, who can do some deep-level analysis to help identify what COGS is—or, in some cases, have some founders explain what COGS is to you—and understand those structures and how you think about that. But it always feels like it's a trailing problem, not an early problem that people focus on. Martin: I'll tell you the reason. The reason is because this is a very new phenomenon that it's part of COGS. It's literally five years new. And so, we're just catching up. Even now, this discussion isn't what it was when we first wrote the post. Like, now people are pretty educated on, like, “Oh yeah, like, this is really an issue. Oh, yeah. It contributes to COGS. Oh, yeah. Like, our stock price gets hit.” Like, it's so funny to watch, like, the industry mature in real-time. And I think, like, going forward, it's just going to be obvious that this is a board-level issue; it's going to be obvious this is, like, a first-class consideration. But I agree with you. It's like, listen, the industry wasn't ready for it because we didn't have public companies. For a lot of public companies, like, this is a real issue. 
I mean, really, we're talking about the last five, seven years. Corey: It really is neat, just in real time watching how you come up with something that sounds borderline heretical, and in a relatively short period of time, it becomes accepted as a large-scale problem, and now it has fallen off the hype train into, “Yeah, this is something to be aware of.” And people's attention spans have already jumped to the next level and next generation of problem. It feels like these cycles used to take way longer, and now everything is so rapid that I almost worry that between the time we're recording this and the time that it publishes in a few weeks, what is going to have happened that makes this conversation irrelevant? I didn't used to have to think like that. Now, I do. Martin: Yeah, yeah, yeah, for sure. Well, just a couple of things. I want to talk about, like, one of the reasons that accelerated this, and then what I think is coming going forward. So, one of the reasons this was accelerated was just the macro downturn. Like, when we wrote the post, you could make the argument that nobody cares about margins because it's all about growth, right? And so, like—and even then, it still saved a bunch of money, but, like, a lot of people were like, “Listen, the only thing that matters is growth.” Now, that's absolutely not the case if you look at public market valuations. I mean, people really care about free cash flow, they really care about profitability, and they really care about margins. And so, it's just really forced the issue. And it also, like, you know, made kind of what we were saying very, very clear. I would say, you know, as far as shifts that are coming, I think one of the biggest shifts is for every back-end developer, there's, like, a hundred front-end developers. It's just crazy. And those front-end developers—Corey: A third of a DevOps engineer. Martin: [laugh]. True. 
I think those front-end developers are getting, like, better tools to build complete apps, right? Like, totally complete apps, right? Like, they've got great JavaScript frameworks that are coming out all the time. And so, you could argue that actually a secular technology change—which is that developers are now rebuilding apps as kind of front-end applications—is going to pull compute away from the clouds anyways, right? Like, if instead of the app being some back-end thing running in AWS, it's instead a front-end thing, you know, running in a browser at the CDN tier, while you're still using the Big Three clouds, it's being used in a very different way. And we may have to think about it again differently. Now, this, again, is a five-year going-forward problem, but I do feel like there are big shifts that are even changing the way that we currently think about cloud now. And we'll see. Corey: And if those providers don't keep up and start matching those paradigms, there's going to be an intermediary shim layer of companies that wind up converting their resources and infrastructure into things that suit this new dynamic, and effectively, they're going to become the next version of, I don't know, Level 3, one of those big underlying infrastructure companies that most people have never heard of or have to think about because they're not doing anything that's perceived as interesting. Martin: Yeah, I agree. And I honestly think this is why Cloudflare and Cloudflare Workers are very interesting. This is why Fly is very interesting. It's a set of companies that are, like, “Hey, listen, workloads are moving to the front-end and, you know, you need compute closer to the user and multi-region is really important, et cetera.” So, even as we speak, we're seeing kind of shifts to the way the cloud is moving, which is just exciting. This is why it's, like, listen, infrastructure is everything. 
And, like, you and I, if we live to be 200, we can do [laugh] great infrastructure work every year. Corey: I'm terrified, on some level, that I'll still be doing the exact same type of thing in 20 years. Martin: [laugh]. Corey: I like solving different problems as we go. I really want to thank you for spending so much time talking to me today. If people want to learn more about what you're up to, slash beg you for other people's money or whatnot, where's the best place for them to find you? Martin: You know, we've got this amazing infrastructure Discord channel. [laugh]. Corey: Really? I did not know that. Martin: I love it. It's, like, the best. Yeah, my favorite thing to do is drink coffee and talk about infrastructure. And, like, I posted this on Twitter and we've got, like, 600 people. And it's just the best thing. So, that's honestly the best way to have these discussions. Maybe can you put, like, the link in, like, the show notes? Corey: Oh, absolutely. It is already there in the show notes. Check the show notes. Feel free to join the infrastructure Discord. I will be there waiting for you. Martin: Yeah, yeah, yeah. That'll be fantastic. Corey: Thank you so much for being so generous with your time. I appreciate it. Martin: This was great. Likewise, Corey. You're always a class act and I really appreciate that about you. Corey: I do my best. Martin Casado, general partner at Andreessen Horowitz. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. 
If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice along with an angry comment telling me that I got it completely wrong and what check you wrote makes you the most interesting. Announcer: The content here is for informational purposes only and should not be taken as legal, business, tax, or investment advice, or be used to evaluate any investment or security, and is not directed at any investors or potential investors in any a16z fund. For more details, please see a16z.com/disclosures. Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started. Announcer: This has been a HumblePod production. Stay humble.
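The cloud-cost analysis Martin calls fiduciary table stakes for a SaaS exec reduces to simple arithmetic. Here is a minimal sketch in Python, with entirely hypothetical figures, showing how the cloud spend inside COGS moves gross margin:

```python
# Illustrative sketch only (all numbers are made up): how cloud spend that
# lands in COGS drags on gross margin, the kind of back-of-the-envelope
# analysis Martin argues every SaaS executive should run.

def gross_margin(revenue: float, cogs: float) -> float:
    """Gross margin as a fraction of revenue."""
    return (revenue - cogs) / revenue

revenue = 100_000_000     # hypothetical $100M annual revenue
other_cogs = 15_000_000   # support, third-party licenses, etc.
cloud_spend = 25_000_000  # annual cloud bill attributable to serving customers

margin_now = gross_margin(revenue, other_cogs + cloud_spend)
# Suppose optimization work cuts the cloud bill by 40%:
margin_optimized = gross_margin(revenue, other_cogs + cloud_spend * 0.6)

print(f"gross margin today:       {margin_now:.0%}")        # 60%
print(f"after 40% cloud savings:  {margin_optimized:.0%}")  # 70%
```

A ten-point swing in gross margin is exactly the kind of number that shows up in a public-market multiple, which is the connection Martin is drawing between the AWS bill and the share price.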
A few weeks ago, Jerod spoke with Liz Rice about the power of eBPF on The Changelog. Today, we have the pleasure of both Liz Rice, Chief Open Source Officer at Isovalent, & Thomas Graf, CTO & co-founder at Isovalent, the creators of Cilium. Around 2014, Facebook achieved a 10x performance improvement by replacing their traditional load balancers with eBPF. In 2017, every single packet that went to Facebook was processed by eBPF. Nowadays, every Android phone is using it. Truth be told, if it's network-related and it matters, eBPF is most likely a part of it.
Today on the Day Two Cloud podcast we go deep on the Cilium service mesh, including a packet walk that takes us from packet ingestion all the way through a Kubernetes cluster. We also talk about how Cilium eBPF differs from other sidecar proxies and the potential performance and observability gains. Strap on your propeller beanie as we try to keep up with guest Thomas Graf, a co-creator of Cilium and CTO of Isovalent.
eBPF is a revolutionary kernel technology that has lit the cloud native world on fire. If you're going to have one person explain the excitement, that person would be Liz Rice. Liz is the COSO at Isovalent, creators of the open source Cilium project and pioneers of eBPF tech. On this episode Liz tells Jerod all about the power of eBPF, where it came from, what kind of new applications it's enabling, and who is building the next generation of networking, security, and observability tools with it.
In episode 30 of The Kubelist Podcast, Marc Campbell and Benjie De Groot speak with Thomas Graf, Co-Founder and CTO of Isovalent. This conversation includes a deep dive on eBPF, the origins of Cilium and the lessons learned while creating it in the open, and insights on kernel development.
The Cilium project - best known as a networking plugin for Kubernetes - just released service mesh functionality. We've invited Liz Rice for a technical conversation around Cilium Service Mesh. Liz Rice is Chief Open Source Officer with eBPF specialists Isovalent, creators of the Cilium cloud native networking, security and observability project. She was Chair of the CNCF's Technical Oversight Committee in 2019-2022, and Co-Chair of KubeCon + CloudNativeCon in 2018. She is also the author of Container Security, published by O'Reilly. Liz Rice on LinkedIn: https://www.linkedin.com/in/lizrice A guided tour of Cilium Service Mesh: https://www.youtube.com/watch?v=e10kDBEsZw4 Guide: Make the most of DevOps and Cloud: https://hubs.li/Q01jd2fZ0 The Future of Kubernetes - a blog post: https://hubs.li/Q01jd0ZD0 Takeaways and Trends in CloudNativeCon 2022 - a blog post: https://hubs.li/Q01jd0-x0
In this episode we speak to Liz Rice, Chief Open Source Officer at Isovalent, the company behind the open source eBPF product Cilium. We discuss why it's such a revolutionary approach to developing low-level kernel applications, how BPF can be used for observability, networking and security, how developers should think about application security, and why all of these technologies are open source. About Liz Rice: Liz Rice is Chief Open Source Officer at eBPF pioneers Isovalent, creators of the Cilium project, which provides cloud native networking, observability and security. Prior to Isovalent she was VP Open Source Engineering with security specialists Aqua Security. She is also Chair of the CNCF's Technical Oversight Committee, has co-chaired KubeCon / CloudNativeCon, and is an Ambassador for Open UK. Other things mentioned: Isovalent, Berkeley Lab, Dave Thaler, Kubernetes, Firecracker, Lambda, M1 MacBook, VS Code. Let us know what you think on Twitter: https://twitter.com/consoledotdev https://twitter.com/davidmytton https://twitter.com/lizrice Or by email: hello@console.dev About Console: Console is the place developers go to find the best tools. Our weekly newsletter picks out the most interesting tools and new releases. We keep track of everything - dev tools, devops, cloud, and APIs - so you don't have to. Sign up for free at: https://console.dev Recorded: 2022-05-05.
In this episode we talk with Tracy P Holmes, technical community advocate at Isovalent and Pachi Parra, developer advocate at Github about getting a conference talk proposal accepted. Get some tips and advice from their own personal experiences and a glimpse at this year's Codeland 2022, since both of them will be speaking at this year's conference. Show Notes (sponsor) DevNews (sponsor) CodeNewbie (sponsor) DataStax (sponsor) Cockroach Labs (DevDiscuss) (sponsor) Swimm (DevDiscuss) (sponsor) Stellar (DevDiscuss) (sponsor) Ali Spittel- Yes, You Should Write That Blog Post OS 101 Notion Obsidian Linux Foundation Mercedes Bernard- How to Write a Great Abstract Proposal
A network plugin is a crucial building block within Kubernetes, because it is what makes it possible to create a cluster-wide network. Since the arrival of the Container Network Interface, or CNI for short, it is very easy to plug in any plugin that implements this interface; and such plugins are legion! One of them, however, stands out from the crowd: Cilium. Indeed, Cilium made the choice from day one to use eBPF to manage the network, where the others most often rely on Netfilter. And while Cilium outpaced the competition by getting the most out of eBPF for scalability, its journey into the heart of eBPF did not stop there. Observability, service mesh capabilities, and security are all aspects of eBPF that can be integrated into a network plugin. To reveal the magic behind eBPF, I have the pleasure of welcoming Robin Hahling, my guest from episode 20 on Hubble, who is a Software Engineer at Isovalent, the company behind Cilium, and Raphaël Pinson, Solutions Architect, also at Isovalent. Together we discuss Cilium and eBPF, of course, but also all the new features that have been brought to Cilium over the last two years. Episode notes: What's new in Cilium 1.11? How Cilium Protects Against Common Network Attacks Isovalent announce Tetragon, an eBPF-based Security Observability & Runtime Enforcement Some news from My Little Team The 50 best B Corp teams to work for in 2022
A conversation with Dan Wendlandt of Isovalent about all things eBPF and Cilium.
Observability is of prime importance in a microservices-oriented paradigm, especially with a Kube-native approach. It can, however, come at the cost of excessive resource consumption and complexity that hurt the overall user experience. Cilium, by Isovalent, is an implementation of the Kubernetes Container Network Interface. On top of networking, Cilium brings an observability layer named Hubble to the various services running on your Kubernetes clusters. The underlying technologies make it very performant and almost transparent to use. It is a solution used by companies such as Google, Netflix, and AWS. To discuss it with us, we welcome Youssef Azrak, network expert and Senior Solutions Architect at Isovalent. How to get started with Cilium? https://docs.cilium.io/ The Isovalent podcast hosted by Liz Rice: https://github.com/isovalent/eCHO
About Liz: Liz Rice is Chief Open Source Officer with cloud native networking and security specialists Isovalent, creators of the Cilium eBPF-based networking project. She is chair of the CNCF's Technical Oversight Committee, and was Co-Chair of KubeCon + CloudNativeCon in 2018. She is also the author of Container Security, published by O'Reilly. She has a wealth of software development, team, and product management experience from working on network protocols and distributed systems, and in digital technology sectors such as VOD, music, and VoIP. When not writing code, or talking about it, Liz loves riding bikes in places with better weather than her native London, and competing in virtual races on Zwift. Links: Isovalent: https://isovalent.com/ Container Security: https://www.amazon.com/Container-Security-Fundamental-Containerized-Applications/dp/1492056707/ Twitter: https://twitter.com/lizrice GitHub: https://github.com/lizrice Cilium and eBPF Slack: http://slack.cilium.io/ CNCF Slack: https://cloud-native.slack.com/join/shared_invite/zt-11yzivnzq-hs12vUAYFZmnqE3r7ILz9A Transcript: Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud. Corey: Today's episode is brought to you in part by our friends at MinIO, the high-performance Kubernetes-native object store that's built for the multi-cloud, creating a consistent data storage layer for your public cloud instances, your private cloud instances, and even your edge instances, depending upon what the heck you're defining those as, which depends probably on where you work. Getting that unified is one of the greatest challenges facing developers and architects today. 
It requires S3 compatibility, enterprise-grade security and resiliency, the speed to run any workload, and the footprint to run anywhere, and that's exactly what MinIO offers. With superb read speeds in excess of 360 gigs and a 100 megabyte binary that doesn't eat all the data you've got on the system, it's exactly what you've been looking for. Check it out today at min.io/download, and see for yourself. That's min.io/download, and be sure to tell them that I sent you. Corey: This episode is sponsored in part by our friends at Sysdig. Sysdig is the solution for securing DevOps. They have a blog post that went up recently about how an insecure AWS Lambda function could be used as a pivot point to get access into your environment. They've also gone deep in-depth with a bunch of other approaches to how DevOps and security are inextricably linked. To learn more, visit sysdig.com and tell them I sent you. That's S-Y-S-D-I-G dot com. My thanks to them for their continued support of this ridiculous nonsense. Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. One of the interesting things about hanging out in the cloud ecosystem as long as I have and as, I guess, closely tied to Amazon as I have been, is that you learn that you never quite are able to pronounce things the way that people pronounce them internally. In-house pronunciations are always a thing. My guest today is Liz Rice, the Chief Open Source Officer at Isovalent, and they're responsible for, among other things, the Cilium open-source project, which is around eBPF, which I can only assume is internally pronounced as ‘Ehbehpf'. Liz, thank you for joining me today and suffering my pronunciation slings and arrows. Liz: I have never heard ‘Ehbehpf' before, but I may have to adopt it. 
That's great. Corey: You also are currently—in a term that is winding down, if I'm not misunderstanding—you were the co-chair of KubeCon and CloudNativeCon at the CNCF, and you are also currently on the technical oversight committee for the foundation. Liz: Yeah, yeah. I'm currently the chair, in fact, of the technical oversight committee. Corey: And now that Amazon has joined, I assumed that they had taken their horrible pronunciation habits, like calling AMIs ‘Ah-mies' and whatnot, and started spreading them throughout the ecosystem with wild abandon. Liz: Are we going to have to start calling CNCF ‘Ka'Nff' or something? Corey: Exactly. They're very frugal, by which I mean they never buy a vowel. So yeah, it tends to be an ongoing challenge. Joking and all the rest aside, let's start, I guess, at the macro view. The CNCF does an awful lot of stuff, where if you look at the CNCF landscape, for example, like, I think some of my jokes on the internet go a bit too far, but you look at this thing and last time I checked, there were something like four or five hundred different players in various spaces. And it's a very useful diagram, don't get me wrong by any stretch of the imagination, but it also is one of those things that is so staggeringly vast that I've got to level with you on this one, given my old, ancient sysadmin roots: “The hell with it. I'm going to run some VMs in a three-tiered architecture just like grandma and grandpa used to do,” and call it good. Not really how the industry has evolved, but it's overwhelming. Liz: But that might be the right solution for your use case so, you know, don't knock it if it works. Corey: Oh, yeah. If it's a terrible architecture and it works, is it really that terrible of an architecture? One wonders. Liz: Yeah, yeah. I mean, I'm definitely not one of those people who thinks, you know, every solution has the same—you know, is solved by the same hammer; you know, all problems are not the same nail. 
So, I am a big fan of a lot of the CNCF projects, but that doesn't mean to say I think those are the only ways to deploy software. You know, there are plenty of things; Lambda is a really great example of something that is super useful and very applicable for lots of applications and for lots of development teams. Not necessarily the right solution for everything. And for other people, they need all the bells and whistles that something like Kubernetes gives them. You know, horses for courses. Corey: It's very easy for me to make fun of just about any company or service or product, but the thing that always makes me set that aside and get down to brass tacks has been, “Okay, great. You can build whatever you want. You can tell whatever glorious marketing narrative you wish to craft, but let's talk to a real customer because once we do that, then if you're solving a problem that someone is having in the wild, okay, now it's no longer just this theoretical exercise and PowerPoint. Now, let's actually figure out how things work when the rubber meets the road.” So, let's start, I guess, with… I'll leave it to you. Isovalent are the creators of the Cilium eBPF-based networking project. Liz: Yeah. Corey: And eBPF is the part of that I think I'm the most familiar with, having heard the term. Would you rather start on the company side or on the eBPF side? Liz: Oh, I don't mind. Let's—why don't we start with eBPF? Yeah. Corey: Cool. So, easy, ridiculous question. I know that it's extremely important because Brendan Gregg periodically gets on stage and tells amazing stories about this; the last time he did stuff like that, I went stumbling down into the rabbit hole of DTrace, and I have never fully regretted doing that, nor completely forgiven him. What is eBPF? Liz: So, it stands for extended Berkeley Packet Filter, and we can pretty much just throw away those words because it's not terribly helpful. What eBPF allows you to do is to run custom programs inside the kernel. 
So, we can trigger these programs to run, maybe because a network packet arrived, or because a particular function within the kernel has been called, or a tracepoint has been hit. There are tons of places you can attach these programs to, or events you can attach programs to. And when that event happens, you can run your custom code. And that can change the behavior of the kernel, which is, you know, great power and great responsibility, but incredibly powerful. So Brendan, for example, has done a ton of really great pioneering work showing how you can attach these eBPF programs to events, use that to collect metrics, and lo and behold, you have amazing visibility into what's happening in your system. And he's built tons of different tools for observing everything from, I don't know, memory use to file opens to—there's just endless, dozens and dozens of tools that Brendan, I think, was probably the first to build. And now there's this new generation of eBPF-based tooling that is kind of taking that legacy, turning it into maybe more, shall we say, user-friendly interfaces, you know, with GUIs, and hooking them up to metrics platforms, and in the case of Cilium, using it for networking and hooking it into Kubernetes identities, and making the information about network flows meaningful in the context of Kubernetes, where things like IP addresses are ephemeral and not very useful for very long; I mean, they just change at any moment. Corey: I guess I'm trying to figure out what part of the stack this winds up applying to because you talk about, at least to my mind, it sounds like a few different levels all at once: You talk about running code inside of the kernel, which is really close to the hardware—it's, oh great, it's adventures in assembly is almost what I'm hearing here—but then you also talk about using this with GUIs, for example, and operating on individual packets to run custom programs. 
When you talk about running custom programs, are we talking things that are a bit closer to, “Oh, modify this one field of that packet and then call it good,” or are you talking, “Now, we launch Microsoft Word.” Liz: Much more the former category. So yeah, let's inspect this packet and maybe change it a bit, or send it to a different—you know, maybe it was going to go to one interface, but we're going to send it to a different interface; maybe we're going to modify that packet; maybe we're going to throw the packet on the floor because there's really great security use cases for inspecting packets and saying, “This is a bad packet, I do not want to see this packet, I'm just going to discard it.” And there's some, what they call ‘Packet of Death' vulnerabilities that have been mitigated in that way. And the real beauty of it is you just load these programs dynamically. So, you can change the kernel on the fly and affect that behavior, just immediately have an effect. If there are processes already running, they get instrumented immediately. So, maybe you run a BPF program to spot when a file is opened. New processes, existing processes, containerized processes, it doesn't matter; they'll all be detected by your program if it's observing file open events. Corey: Is this primarily used from a security perspective? Is it used for—what are the common use cases for something like this? Liz: There's three main buckets, I would say: Networking, observability, and security. And in Cilium, we're kind of involved in some aspects of all those three things, and there are plenty of other projects that are also focusing on one or other of those aspects. Corey: This is where, I guess, the challenge I run into with the whole CNCF landscape is, I think the danger is, when I started down this path that I'm on now, I realized that, “Oh, I have to learn what all the different AWS services do.” This was widely regarded as a mistake. 
They are not Pokémon; I do not need to catch them all. The CNCF landscape applies very similarly in that respect. What is the real-world problem space where eBPF, and/or things like Cilium that leverage eBPF—because eBPF does sound fairly low-level—turn into something that solves a problem people have? In other words, what is the problem that Cilium should be the go-to answer for when someone says, “I have this thing that hurts.” Liz: So, at one level, Cilium is a networking solution. So, it's a Kubernetes CNI. You plug it in to provide connectivity between your applications that are running in pods. Those pods have to talk to each other somehow and Cilium will connect those pods together for you in a very efficient way. One of the really interesting things about eBPF and networking is we can bypass some of the networking stack. So, if we are running in containers, we're running our applications in containers in pods, and those pods usually will have their own networking namespace. And that means they've got their own networking stack. So, a packet that arrives on your machine has to go through the networking stack on that host machine, go across a virtual interface into your pod, and then go through the networking stack in that pod. And that's kind of inefficient. But with eBPF, we can look at the packet the moment it's come into the kernel—in fact, in some cases, if you have the right networking interfaces, you can do it while it's still on the network interface card—so you look at that packet and say, “Well, I know what pod that's destined for, I can just send it straight there.” I don't have to go through the whole networking stack in the kernel because I already know exactly where it's going. And that has some real performance improvements. Corey: That makes sense. 
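Liz's "inspect, modify, redirect, or drop" options map onto the verdict an XDP-attached eBPF program hands back to the kernel. Real eBPF programs are written in restricted C and verified before loading; the sketch below only models that decision logic in Python, assuming a toy minimum-length check (the XDP_* names mirror the real kernel return codes, everything else is illustrative):

```python
from enum import Enum

# The verdict names mirror the kernel's real XDP return codes; the rest
# of this sketch is a toy stand-in for in-kernel packet inspection.
class XdpAction(Enum):
    XDP_ABORTED = 0   # something went wrong; drop and signal an error
    XDP_DROP = 1      # throw the packet on the floor
    XDP_PASS = 2      # hand the packet up the normal networking stack
    XDP_TX = 3        # bounce it back out the interface it arrived on
    XDP_REDIRECT = 4  # steer it to another interface, CPU, or socket

MIN_IPV4_HEADER = 20  # bytes; anything shorter cannot hold an IPv4 header

def inspect(packet: bytes) -> XdpAction:
    """Toy filter: discard runt packets, pass everything else."""
    if len(packet) < MIN_IPV4_HEADER:
        return XdpAction.XDP_DROP
    return XdpAction.XDP_PASS

print(inspect(b"\x00" * 4).name)             # XDP_DROP
print(inspect(b"\x45" + b"\x00" * 19).name)  # XDP_PASS
```

Because the verdict is returned before the packet climbs the host's networking stack, this same hook point is what lets Cilium short-circuit delivery straight to the destination pod, as Liz describes.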
In my explorations—we'll call it—with Kubernetes, it feels like the universe—at least at the time I went looking into it—was, “Step one, here's how to wind up launching Kubernetes to run a blog.” Which is a bit like using a chainsaw to wind up cutting a sandwich. Okay, massively overpowered, but I get the basic idea. Like, “Okay, what's Step Two?” It's like, “Oh, great. Go build Google.”

Liz: [laugh].

Corey: Okay, great. It feels like there are some intermediary steps that have been sort of glossed over here. And at the small scale that I kicked the tires on, things like networking performance never even entered the equation; it was more about getting the thing up and running. But yeah, at scale, when you start seeing huge numbers of containers being orchestrated across a wide variety of hosts, that has serious repercussions and explains an awful lot. Is this the sort of thing that gets leveraged by cloud providers themselves, is it something that gets built mostly in on-prem environments, or is it something that rides in, almost, user-land for most of the use cases that customers are bringing to those environments? I'm sorry, users, not customers. I'm too used to the Amazonian phrasing of everyone as a customer. No, no, they are users in an open-source project.

Liz: [laugh]. Yeah, so if you're using GKE, the GKE Dataplane V2 is using Cilium. Alibaba Cloud uses Cilium. AWS is using Cilium for EKS Anywhere. So, these are really, I think, great signals that it's super scalable.

And it's also not just about the connectivity, but also about being able to see your network flows and debug them. Because, like you say, that day one, your blog is up and running, and day two, you've got some DNS issue that you need to debug, and how are you going to do that?
And because Cilium is working with Kubernetes, it knows about the individual pods, and it's aware of the IP addresses for those pods, and it can map those to, you know, what's the pod, what service is that pod involved with. And we have a component of Cilium called Hubble that gives you the flows, the network flows, between services. So, you know, we've probably all seen diagrams showing Service A talking to Service B, Service C, some external connectivity, and Hubble can show you those flows between services and the outside world, regardless of how the IP addresses may be changing underneath you, aggregating network flows into those services that make sense to a human who's looking at a Kubernetes deployment.

Corey: A running gag that I've had is that one of the drawbacks and appeals of Kubernetes, all at once, is that it lets you cosplay as a cloud provider, even if you don't happen to work for one of them. And there's a bit of truth to it, but let's be serious here: despite what a lot of the cloud providers would wish us to believe via a bunch of marketing, there's a tremendous number of data center environments out there, hybrid environments, and companies that are in those environments are not somehow laggards, or left behind technologically, or struggling to digitally transform. Believe it or not—I know it's not a common narrative—but large companies generally don't employ people who lack critical thinking skills and strategic insight. There's usually a reason that things are the way that they are, and when I don't understand that, my default assumption is that there's context I'm missing. So, I want to preface this with the idea that there is nothing wrong in those environments. But in a purely cloud-native environment—which means that I'm very proud about having no single points of failure as I have everything routing to a single credit card that pays the cloud providers—great.
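The aggregation Hubble does can be reduced to a simple idea: collapse per-IP flows into service-to-service edges, since pod IPs churn but service identity is stable. A minimal sketch, with invented field names and data, purely to show the shape:

```go
package main

import "fmt"

// flow is one observed connection. Pod IPs come and go, but each pod
// carries a stable service identity, which is what a human reading a
// service map actually cares about.
type flow struct {
	srcIP, dstIP           string
	srcService, dstService string
}

// aggregate collapses per-IP flows into service-to-service edge counts,
// the way a "Service A talks to Service B" diagram is built regardless
// of which pod IPs were involved at the time.
func aggregate(flows []flow) map[string]int {
	edges := make(map[string]int)
	for _, f := range flows {
		edges[f.srcService+" -> "+f.dstService]++
	}
	return edges
}

func main() {
	flows := []flow{
		{"10.0.1.5", "10.0.2.9", "frontend", "backend"},
		{"10.0.1.7", "10.0.2.9", "frontend", "backend"}, // new pod IP, same edge
		{"10.0.2.9", "10.0.3.1", "backend", "db"},
	}
	fmt.Println(aggregate(flows))
}
```

Two different frontend pod IPs collapse into the one `frontend -> backend` edge, which is exactly the "regardless of how the IP addresses may be changing underneath you" property.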
What is the story for Cilium if I'm using, effectively, the managed Kubernetes options that Name Any Cloud Provider will provide for me these days? Is it at that point no longer for me, or is it something that instead expresses itself in ways I'm not seeing yet?

Liz: Yeah, so I think, as an open-source project—and it is the only CNI that's at incubation level or beyond, so, you know, it's the CNCF-supported networking solution—you can use it out of the box; you can use it for your tiny blog application if you've decided to run that on Kubernetes. Things start to get much more interesting at scale. I mean, that… continuum between, you know—there are people purely on managed services, there are people who are purely in the cloud, but hybrid cloud is a real thing, and there are plenty of businesses who have good reasons to have some things in their own data centers, some things in the public cloud, things distributed around the world, so they need connectivity between those. And Cilium will solve a lot of those problems for you in the open-source, but also, if you're telco scale and you have things like BGP networks between your data centers, then that's where the paid versions of Cilium, the enterprise versions of Cilium, can help you out. And, as Isovalent, that's our business model: we contribute a lot of resources into the open-source Cilium, and we want that to be the best networking solution for anybody, but if you are an enterprise who wants those extra bells and whistles, and the kind of scale that, you know, a telco, or a massive retailer, or a large media organization—name your vertical—needs, then we have solutions for that as well. And I think one of the really interesting things about the eBPF side of it is that, you know, we're not bound to just Kubernetes, you know?
We run in the kernel, and it just so happens that we have that Kubernetes interface for allocating IP addresses to endpoints that happen to be pods. But—

Corey: So, back to my crappy pile of VMs—because the hell with all this newfangled container nonsense—I can still benefit from something like Cilium?

Liz: Exactly, yeah. And there's plenty of people using it for just load-balancing, which, why not have an eBPF-based high-performance load balancer?

Corey: Hang on, that's taking me a second to work my way through. What is the programming language for eBPF? Is it something custom?

Liz: Right. So, when you load your BPF program into the kernel, it's in the form of eBPF bytecode. There are people who write eBPF bytecode by hand; I am not one of those people.

Corey: There are people who used to be able to write Sendmail configs without running them through the M4 preprocessor, and I don't understand those people either.

Liz: [laugh]. So, our choices are—well, it has to be a language that can be compiled into that bytecode, and at the moment, there are two options: C, and more recently, Rust. I'm much more familiar with writing BPF code in C; it's a slightly limited form of C. Because these BPF programs have to be safe to run, they go through a verification process which checks that you're not going to crash the kernel, that you're not going to end up in some hard loop and basically make your machine completely unresponsive. We also have to know that BPF programs will only access memory that they're supposed to and that they can't mess up other processes.
So, there's this BPF verification step that checks, for example, that you always check that a pointer isn't null before you dereference it. And if you dereference a pointer in your C code without that check, it might compile perfectly, but when you come to load it into the kernel, it gets rejected because you forgot to check that it was non-null first.

Corey: You try and run it, the whole thing segfaults, you see the word ‘fault' there, and, well, I guess blameless just went out the window there.

Liz: [laugh]. Well, this is the thing: You cannot segfault in the kernel, you know, or at least that's a bad [day 00:19:11]. [laugh].

Corey: You say that, but I'm very bad with computers, let's be clear here. There's always a way to misuse things horribly enough.

Liz: It's a challenge. It's pretty easy to segfault if you're writing a kernel module. But maybe we should put that out as a challenge for the listener: try to write something that crashes the kernel from within an eBPF program, because there's a lot of very smart people out there.

Corey: Right now the blood just drained from anyone who's listening in the kernel space or the InfoSec space, I imagine.

Liz: Exactly. Some of my colleagues at Isovalent are thinking, “Oh, no. What's she brought on here?” [laugh].

Corey: What have you done? Please correct me if I'm misunderstanding this. So, eBPF is a very low-level tool that requires certain amounts of braining in order [laugh] to use appropriately. That can be a heavy lift for a lot of us who don't live in those spaces. Cilium distills this down into something that is a lot more usable and understandable for folks, and then beyond that, you wind up with Isovalent, which effectively productizes and packages this into something that becomes a lot closer to turnkey. Is that directionally accurate?

Liz: Yes, I would say that's true.
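The null-check rule the verifier enforces, described earlier in this exchange, boils down to one pattern: prove a pointer is non-nil on every path before dereferencing it. This is a plain Go sketch of that pattern, not real BPF code; in BPF C, the unchecked version compiles fine but is rejected at load time:

```go
package main

import "fmt"

// readChecked mirrors the shape the eBPF verifier insists on: guard the
// pointer before any dereference. The verifier walks every path through
// the program and rejects it if a dereference is reachable while the
// pointer could still be null.
func readChecked(p *int) (int, bool) {
	if p == nil { // the guard the verifier demands
		return 0, false
	}
	return *p, true // provably safe here: p cannot be nil on this path
}

func main() {
	v := 42
	fmt.Println(readChecked(&v)) // value read safely
	fmt.Println(readChecked(nil)) // nil reported, no crash
}
```

In user space the unchecked version would merely segfault the process; in the kernel there is no such safety net, which is why the check is enforced statically before the program is ever allowed to run.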
And there are also some other intermediate steps, like the CLI tools that Brendan Gregg did, where you can—I mean, a CLI is still fairly low-level, but it's not as low-level as writing the eBPF code yourself. And you can be quite in-dep—you know, if you know what things you want to observe in the kernel, you don't necessarily have to know how to write the eBPF code to do it if you've got these fairly low-level tools to do it for you. You're absolutely right that very few people will need to write their own… BPF code to run in the kernel.

Corey: Let's move below the surface level of awareness; the same way that most of us don't need to know how to compile our own kernel in this day and age.

Liz: Exactly.

Corey: A few people very much do, but because of their hard work, the rest of us do not.

Liz: Exactly. And for most of us, we just take the kernel for granted. You know, for most people writing applications, it doesn't really matter—they're just using abstractions that do things like open files for them, or create network connections, or write messages to the screen; you don't need to know exactly how that's accomplished through the kernel. Unless you want to get into the details of how to observe it with eBPF or something like that.

Corey: I'm much happier not knowing some of the details. I did a deep dive once into Linux kernel internals, based on an incredibly well-written but also obnoxiously slash suspiciously thick O'Reilly book, Linux Systems Internals, and it was one of those, like, halfway through, “Can I please be excused? My brain is full.” It's one of those things where I don't use most of it on a day-to-day basis, but it solidified my understanding of what the computer is actually doing in a way that I will always be grateful for.

Liz: Mmm, and there are tens of millions of lines of code in the Linux kernel, so anyone who can internalize any of that is basically a superhero.
[laugh].

Corey: I have nothing but respect for people who can pull that off.

Corey: Couchbase Capella Database-as-a-Service is flexible, full-featured, and fully managed with built-in access via key-value, SQL, and full-text search. Flexible JSON documents align to your applications and workloads. Build faster with blazing-fast in-memory performance and automated replication and scaling while reducing cost. Capella has the best price performance of any fully managed document database. Visit couchbase.com/screaminginthecloud to try Capella today for free and be up and running in three minutes with no credit card required. Couchbase Capella: make your data sing.

Corey: In your day job, quote-unquote—which is sort of a weird thing to say, given that you are working at an open-source company; in fact, you are the Chief Open Source Officer—what you're doing in the community and what you're exploring on the open-source project side of things is all interrelated. I tend to have trouble myself figuring out where my job starts and stops most weeks; I'm sympathetic to it. What inspired you folks to launch a company that is, “Ah, we're going to be in the open-source space,” especially during a time when there's been a lot of pushback, in some respects, about the evolution of open-source and the rise of large cloud providers? Where is open-source a viable strategy or a tactic to get to an outcome that is pleasing for all parties?

Liz: Mmm. So, I wasn't there at the beginning of the Isovalent journey, and Cilium has been around for five or six years now at this point. I very strongly believe in open-source as an effective way of developing technology—good technology—and getting really good feedback and, kind of, optimizing the speed at which you can innovate. But I think it's very important that businesses don't think—if you're giving away your code, you cannot also sell your code; you have to have some other thing that adds value.
Maybe that's some extra code, like in the Isovalent example, the enterprise-related enhancements that we have that aren't part of the open-source distribution.

There's plenty of other ways that people can add value to open-source. They can do training, they can do managed services, there's all sorts of different—support was the classic example. But I think it's extremely important that businesses don't just expect that, “I can write a bunch of open-source code, and somehow magically, through building up a whole load of users, I will find a way to monetize that.”

Corey: A bunch of nerds will build my product for me on nights and weekends. Yeah, that's a bit of an outmoded way of thinking about these things.

Liz: Yeah, exactly. And I think it's not like everybody has the perfect ability to predict the future, and you might start a business—

Corey: And I have a lot of sympathy for companies who originally started with the idea of, “Well, we are the project leads. We know this code the best, therefore we are the best people in the world to run this as a service.” The rise of the hyperscale cloud providers has called that into significant question. And I feel for them because it's difficult to completely pivot your business model when you're already a publicly-traded company. That's a very fraught and challenging thing to do. It means that you're left with a bunch of options, none of them great.

Cilium as a project is not that old, and neither is Isovalent, but it's new enough in the iterative process that you were able to avoid that particular pitfall. Instead, you're looking at some level of making this understandable and useful to humans, almost to the point where it disappears from their level of awareness. There's huge value in something like that.
Do you think that there is a future in which projects, and companies built upon projects, that follow this model are similarly going to have challenges with hyperscale cloud providers, or other emergent threats to the ecosystem—sorry, ‘threat' is an unfair and unkind word here—changes to the ecosystem, as we see the world evolving in ways that most of us did not foresee?

Liz: Yeah, we've certainly seen some examples in the last year or two, I guess, of companies that maybe didn't anticipate—and who necessarily has a crystal ball to anticipate how cloud providers might use their software? And I think in some cases, the cloud providers have not always been the most generous or most community-minded in their approach to how they've done that. But I think for a company like Isovalent, our strong point is talent. It would be extremely rare to find that level of expertise in, you know, what is a pretty specialized area. You know, the people at Isovalent who are working on Cilium are also working on eBPF itself, and that level of expertise is, I think, pretty unrivaled.

So, we're in such a new space with eBPF—we've only in the last year or so got to the point where pretty much everyone is running a kernel that's new enough to use eBPF. Startups do have a kind of agility that I think gives them an advantage, which I hope we'll be able to capitalize on. I think sometimes when businesses get upset about their code being used, they probably could have anticipated it. You know, if it's open-source, people will use your software, and you have to think about that.

Corey: “What do you mean you're using the thing we gave away for free and you're not paying us to use it?”

Liz: Yeah.

Corey: “Uh, did you hear what you just said?” Some of this was predictable, let's be fair.

Liz: Yeah, and I think you really have to, as a responsible business, think about, well, what does happen if they use all the open-source code? You know, is that a problem?
And as far as we're concerned, everybody using Cilium is a fantastic… thing. We fully welcome everyone using Cilium as their data plane, because the vast majority of them would use that open-source code, and that would be great, but there will be people who need those extra features and the expertise that I think we're in a unique position to provide. So, I joined Isovalent just about a year ago, and I did that because I believe in the technology, I believe in the company, I believe in, you know, the foundations that it has in open-source.

It's very much an open-source-first organization, which I love, and that resonates with me and how I think we can be successful. So, you know, I don't have that crystal ball. I hope I'm right; we'll find out. We should do this again in, you know, a couple of years and see how that's panning out. [laugh].

Corey: I'll book out the date now.

Liz: [laugh].

Corey: Looking back at our conversation just now, you talked about open-source, and business strategy, and how that's going to be evolving. We talked about the company, we talked about an incredibly in-depth, technical product that honestly goes significantly beyond my current level of technical awareness. And at no point in any of those aspects of the conversation did you talk about it in a way that I did not understand, nor did you come off in any way as condescending. In fact, you wrote an O'Reilly book on Container Security that's written very much the same way. How did you learn to do that? Because it is, frankly, an incredibly rare skill.

Liz: Oh, thank you. Yeah, I think I have never been a fan of jargon. I've never liked it when people use a complicated acronym, or—really early days in my career, there was a bit of a running joke about how everything was TLAs. And you think, well, I understand why we use an acronym to shorten things, but I don't think we need to assume that everybody knows what everything stands for. Why can't we explain things in simple language?
Why can't we just use ordinary terms?

And I found that really resonates. You know, if I'm doing a presentation or if I'm writing something, I use straightforward language, explaining things, making sure that people understand the, kind of, fundamentals that I'm going to build my explanation on. I just think that—it results in people understanding, and that's my whole point. My goal is that they understand it, not that they've been blown away by some kind of magic. I want them to go away going, “Ah, now I understand how this bit fits with that bit,” or, “how this works.” You know?

Corey: The reason I bring it up is that it's an incredibly undervalued skill, because when people see it, they don't often recognize it for what it is. And when people don't have that skill—which is common—people just write it off as, oh, that person's a bad communicator. Which I think is a little unfair. Being able to explain complex things simply is one of the most valuable yet undervalued skills that I've found in this entire space.

Liz: Yeah, I think people sometimes have this sort of wrong idea that vocabulary and complicated terms are somehow inherently smarter—that if you use complicated words, you sound smarter. And I just don't think that's accessible, and I don't think it's true. And sometimes I find myself listening to someone, and they're using complicated terms or analogies that are really obscure, and I'm thinking, but could you explain that to me in words of one syllable? I don't think you could. I think you're… hiding—not you [laugh]. You know, people—

Corey: Yeah. No, no, that's fair. I'll take the accusation as [unintelligible 00:31:24] as I can get it.

Liz: [laugh]. But I think people hide behind complex words because they don't really understand them sometimes.
And yeah, I would rather people understood what I'm saying.

Corey: To me—I've done it through conference talks—the way I generally learn things is by building something with them. But the way I really learn to understand something is by giving a conference talk on it because, okay, great, I can now explain Git—which was one of my early technical talks—to the folks who built Git. Great. Now, how about I explain it to someone who is not immersed in the space whatsoever? And if I can make it that accessible, great, then I've succeeded. It's a lot harder than it looks.

Liz: Yeah, exactly. And one of the reasons why I enjoy building a talk is because I know I've got a pretty good understanding of this, but by the time I've got this talk nailed, I will know this. I might have forgotten it in six months' time, you know, but [laugh] while I'm giving that talk, I will have a really good understanding of it, because the way I want to put together a talk, I don't want to put anything in a talk that I don't feel I could explain. And that means I have to understand how it works.

Corey: It's funny, this whole ‘don't give talks about things you don't understand' seems like a really nouveau concept, but here we are, we're [working on it 00:32:40].

Liz: I mean, I have committed to doing talks that I don't fully understand, knowing that—you know, with the confidence that I can find out between now and the [crosstalk 00:32:48]—

Corey: I believe that's called a forcing function.

Liz: Yes. [laugh].

Corey: It's one of those very high-risk stories, like, “Either I'm going to learn this in the next three months, or else I am going to have some serious egg on my face.”

Liz: Yeah, exactly, definitely a forcing function. [laugh].

Corey: I really want to thank you for taking so much time to speak with me today. If people want to learn more, where can they find you?

Liz: So, I am online pretty much everywhere as lizrice, and I am on Twitter. I'm on GitHub.
And if you want to come and hang out, I am on the Cilium and eBPF Slack, and also the CNCF Slack. Yeah. So, come say hello.

Corey: There we go. We will put links to all of that in the [show notes 00:33:28]. Thank you so much for your time. I appreciate it.

Liz: Pleasure.

Corey: Liz Rice, Chief Open Source Officer at Isovalent. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice, along with an angry comment containing an eBPF program that on every packet fires off a Lambda function. Yes, it will be extortionately expensive; almost half as much money as a Managed NAT Gateway.

Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.

Announcer: This has been a HumblePod production. Stay humble.
About AB

AB Periasamy is the co-founder and CEO of MinIO, an open-source provider of high-performance object storage software. In addition to this role, AB is an active investor in and advisor to a wide range of technology companies, from H2O.ai and Manetu, where he serves on the board, to advisor or investor roles with Humio, Isovalent, Starburst, Yugabyte, Tetrate, Postman, Storj, Procurify, and Helpshift. Successful exits include Gitter.im (GitLab), Treasure Data (Arm), and Fastor (SMART).

AB co-founded Gluster in 2005 to commoditize scalable storage systems. As CTO, he was the primary architect and strategist for the development of the Gluster file system, a pioneer in software-defined storage. After the company was acquired by Red Hat in 2011, AB joined Red Hat's Office of the CTO. Prior to Gluster, AB was CTO of California Digital Corporation, where his work led to the scaling of commodity cluster computing to supercomputing-class performance. His work there resulted in the development of Lawrence Livermore Laboratory's “Thunder” supercomputer, which at the time was the second fastest in the world. AB holds a Computer Science Engineering degree from Annamalai University, Tamil Nadu, India.

AB is one of the leading proponents and thinkers on the subject of open-source software, articulating the difference between the philosophy and the business model. An active contributor to a number of open-source projects, he is a board member of India's Free Software Foundation.

Links:
MinIO: https://min.io/
Twitter: https://twitter.com/abperiasamy
MinIO Slack channel: https://minio.slack.com/join/shared_invite/zt-11qsphhj7-HpmNOaIh14LHGrmndrhocA
LinkedIn: https://www.linkedin.com/in/abperiasamy/

Transcript

Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn.
This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.

Corey: This episode is sponsored in part by our friends at Sysdig. Sysdig is the solution for securing DevOps. They have a blog post that went up recently about how an insecure AWS Lambda function could be used as a pivot point to get access into your environment. They've also gone in-depth with a bunch of other approaches to how DevOps and security are inextricably linked. To learn more, visit sysdig.com and tell them I sent you. That's S-Y-S-D-I-G dot com. My thanks to them for their continued support of this ridiculous nonsense.

Corey: This episode is sponsored in part by our friends at Rising Cloud, which I hadn't heard of before, but they're doing something vaguely interesting here. They are using AI, which is usually where my eyes glaze over and I lose attention, but they're using it to help developers be more efficient by reducing repetitive tasks. So, the idea being that you can run stateless things without having to worry about scaling, placement, et cetera, and the rest. They claim significant cost savings, and they're able to wind up taking what you're running as it is, in AWS, with no changes, and run it inside of their data centers that span multiple regions. I'm somewhat skeptical, but their customers seem to really like them, so that's one of those areas where I really have a hard time being too snarky about it because when you solve a customer's problem, and they get out there in public and say, “We're solving a problem,” it's very hard to snark about that. Multus Medical, Construx.ai, and Stax have seen significant results by using them, and it's worth exploring. So, if you're looking for a smarter, faster, cheaper alternative to EC2, Lambda, or Batch, consider checking them out.
Visit risingcloud.com/benefits. That's risingcloud.com/benefits, and be sure to tell them that I sent you, because watching people wince when you mention my name is one of the guilty pleasures of listening to this podcast.

Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. I'm joined this week by someone who's doing something a bit off the beaten path when we talk about cloud. I've often said that S3 is sort of a modern wonder of the world. It was the first AWS service brought into general availability. Today's promoted guest is the co-founder and CEO of MinIO, Anand Babu Periasamy, or AB as he often goes, depending upon who's talking to him. Thank you so much for taking the time to speak with me today.

AB: It's wonderful to be here, Corey. Thank you for having me.

Corey: So, I want to start with the obvious thing, where you take a look at what is the cloud, and you can talk about AWS's ridiculous high-level managed services, like Amazon Chime. Great, we all see how that plays out. And those are the higher-level offerings, ideally aimed at problems customers have, but then they also have the baseline building-block services, and it's hard to think of a more baseline building block than an object store. That's something every cloud provider has, regardless of how many scare quotes there are around the word cloud; everyone offers the object store. And your solution is to look at this and say, “Ah, that's a market ripe for disruption. We're going to build, through an open-source community, software that emulates an object store.” I would be sitting here, more or less poking fun at the idea, except for the fact that you're a billion-dollar company now.

AB: Yeah.

Corey: How did you get here?

AB: So, when we started, right, we did not actually think about cloud that way. It wasn't, “Cloud, it's a hot trend, and let's go disrupt it—
it will lead to a lot of opportunity.” Certainly, it's true, it led to the M&A, right, but that's not how we looked at it. It's a bad idea to build startups for M&A.

When we looked at the problem, when we got back into this—my previous background, some may not know, is actually a distributed file system background in the open-source space.

Corey: Yeah, you were one of the co-founders of Gluster—

AB: Yeah.

Corey: —for which I have only begrudgingly forgiven you. But please continue.

AB: [laugh]. And back then we got the idea right, but the timing was wrong. And while the data was beginning to grow at a crazy rate, at the end of the day, GlusterFS still had to look like an FS, it had to look like a file system like NetApp or EMC, and it was hugely limiting what we could do with it. The biggest problem for me was legacy systems. If I have to build a modern system that is compatible with a legacy architecture, I cannot innovate.

And that is where, when Amazon introduced S3—back then, when S3 came, cloud was not big at all, right? When I look at it, the most important message of the cloud was that Amazon basically threw away everything that is legacy. It's not [iSCSI 00:03:21] as a Service; it's not even FTP as a Service, right? They came up with a simple, RESTful API to store your blobs, whether it's a JavaScript, Android, iOS, or [AAML 00:03:30] application, or even a Snowflake-type application.

Corey: Oh, we spent ten years rewriting our apps to speak object store, and then they released EFS, which is NFS in the cloud. It's—

AB: Yeah.

Corey: —I didn't realize I could have just been stubborn and waited, and the whole problem would solve itself. But here we are. You're quite right.

AB: Yeah. And even EFS and EBS are more so legacy stuff can come in and buy some time, but that's not how you should stay on AWS, right? When Amazon did that, for me, that was the opportunity.
I saw that… while the world is going to continue to produce lots and lots of data, if I built a brand around that, I'm not going to go wrong.

The problem is data at scale. And what do I do there? The opportunity I saw was that Amazon solved one of the largest problems for a long time: all the legacy systems, legacy protocols—they convinced the industry to throw them away and then start all over from scratch with the new API. While it's not compatible and it's not a standard, it is ridiculously simple compared to anything else.

No fstabs, no [unintelligible 00:04:27], no [root 00:04:28], nothing, right? That you could access it from any application anywhere was a big deal. When I saw that, I was like, “Thank you, Amazon.” And I also knew Amazon would convince the industry that rewriting their applications was going to be better and faster and cheaper than retrofitting legacy applications.

Corey: I wonder how much of that's retconned, because talking to some of the people involved in the early days, they were not at all convinced they [laugh] would be able to convince the industry to do this.

AB: Actually, if you talk to the analysts and reporters, the IDCs and Gartners of the world, or the enterprise IT, the VMware community, they would say, “Hell no.” But if you talk to the actual application developers, data infrastructure, data architects, the actual consumers of data, for them, it was so obvious. They actually did not know how to write an fstab. With iSCSI and NFS, you can't even access across the internet, and the modern applications, they run across the globe, in JavaScript, and all kinds of apps on the device. From [Snap 00:05:21] to Snowflake, everything today is built on object store. It was more natural for the applications team, but not for the infrastructure team. So, who you asked mattered.

But nevertheless, Amazon convinced the rest of the world, and our bet was that if this is going to be the future, then this is also our opportunity. S3 is going to be limited because it only runs inside AWS.
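The "simple, RESTful API to store your blobs" that AB credits S3 with can be reduced to two verbs over a bucket/key namespace. This is a deliberately tiny in-memory toy, with invented names, just to show why the contract is so much simpler than fstabs and mount points; real S3-compatible servers like MinIO expose this same shape over HTTP:

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

// blobStore is a toy sketch of the object-store contract: put a blob
// under bucket/key, get it back, nothing else. No mounts, no fstabs.
type blobStore struct {
	mu      sync.RWMutex
	objects map[string][]byte // "bucket/key" → blob
}

func newBlobStore() *blobStore {
	return &blobStore{objects: make(map[string][]byte)}
}

// Put stores an object, overwriting any previous version of the key.
func (s *blobStore) Put(bucket, key string, data []byte) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.objects[bucket+"/"+key] = data
}

// Get retrieves an object; a missing key is an error, echoing the
// NoSuchKey error S3-style APIs return.
func (s *blobStore) Get(bucket, key string) ([]byte, error) {
	s.mu.RLock()
	defer s.mu.RUnlock()
	data, ok := s.objects[bucket+"/"+key]
	if !ok {
		return nil, errors.New("NoSuchKey")
	}
	return data, nil
}

func main() {
	s := newBlobStore()
	s.Put("logs", "2022/01/app.log", []byte("hello"))
	data, err := s.Get("logs", "2022/01/app.log")
	fmt.Println(string(data), err)
}
```

The whole interface fits in two methods, which is the point AB is making: any application on any device can speak it, with none of the legacy protocol baggage.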
The bulk of the world's data is produced everywhere, and only a tiny fraction will go to AWS. And where will the rest of the data go? Not SAN, NAS, HDFS, or other blob stores like Azure Blob or GCS; it's not going to stay fragmented. And if we built a better object store—lightweight, faster, simpler, but fully compatible with the S3 API—we could sweep and consolidate the market. And that's what happened.

Corey: And there is a lot of validity to that. We take a look across the industry, when we look at various standards—I mean, one of the big problems with multi-cloud in many respects is the APIs are not quite similar enough. And worse, the failure patterns are very different: I don't just need to know how the load balancer works, I need to know how it breaks so I can detect and plan for that. And then you've got the whole identity problem as well, where you're trying to manage across different frames of reference as you go between providers, and it leads to a bit of a mess. What is it that makes MinIO not just something that has endured since it was created, but something that is clearly thriving?

AB: The real reason, actually, is not the multi-cloud compatibility and all that, right? While today, it is a big deal for the users—because the deployments have grown into 10-plus petabytes, and now the infrastructure team is taking it over and consolidating across the enterprise—now they are asking, for storing the encryption keys, which key management server should I talk to? Look at AWS, Google, or Azure: everyone has their own proprietary API. Outside, they have [YAML2 00:07:18], HashiCorp Vault, and, like, there is no standard here. It is supposed to be a [KMIP 00:07:23] standard, but in reality, it is not. Even between different versions of Vault, there are incompatibilities for us.

That is where—from the key management server to the identity management server, everything that you speak to around it—how do you talk to the different ecosystems?
MinIO actually provides connectors for that; having the large ecosystem support and large community, we are able to address all of it. Once you bring MinIO into your application stack—like you would bring Elasticsearch or MongoDB or anything else, as a container—your application stack is just a Kubernetes YAML file, and you roll it out on any cloud. It becomes easier for them; they're able to go to any cloud they want. But the real reason why it succeeded was not that. They actually wrote their applications as containers on Minikube, then they would push it to a CI/CD environment.

They never wrote code on EC2 or ECS writing objects to S3, and they don't like the idea of [past 00:08:15], where someone is telling you—like you saw, Google App Engine never took off, right? They liked the idea of, here are my building blocks, and I will stitch them together and build my application. We were part of their application development since the early days, and when the application matured, it was hard to remove. It is very much like Microsoft Windows: when it grew, even though the desktop was Microsoft Windows, the server was NetWare, and NetWare lost the game, right?

We got the ecosystem, and it was actually developer productivity and convenience that really helped. The simplicity of MinIO—today, they are arguing that deploying MinIO inside AWS is easier, through their YAML and containers, than going to the AWS Console and figuring out how to do it.

Corey: As you take a look at how customers are adopting this, it's clear that there is some shift in this because I could see the story for something like MinIO making an awful lot of sense in a data center environment because otherwise, it's, "Great. I need to make this app work with my SAN as well as an object store." And that's sort of a non-starter for obvious reasons.
But now you're available through cloud marketplaces directly.

AB: Yeah.

Corey: How are you seeing adoption patterns and interactions from customers changing as the industry continues to evolve?

AB: Yeah, actually, that is how my thinking was when I started. If you are inside AWS, I would myself tell them, why don't you use AWS S3? It made a lot of sense if it's in a colo or your own infrastructure, then there is an object store. It even made a lot of sense if you are deploying on Google Cloud, Azure, Alibaba Cloud, or Oracle Cloud, because you wanted an S3-compatible object store. But inside AWS, why would you do it, if there is AWS S3?

Nowadays, I hear funny arguments, too. They're like, "Oh, I didn't know that I could use S3. Is S3 MinIO compatible?" Because it came along with GitLab or GitHub Enterprise as part of the application stack. They didn't even know that they could actually switch it over.

And otherwise, most of the time, they developed it on MinIO, and now they are too lazy to switch over. That also happens. But the real reason why it became serious for me—I ignored the public cloud commercialization; I encouraged the community adoption. And it grew to more than a million instances across the cloud, small and large, but when they started talking about paying us serious dollars, then I took it seriously. And when I started asking them, why would you guys do it, I got to know the real reason: they want to be detached from the cloud infrastructure provider.

They want to look at cloud as CPU, network, and drive as a service. And running their own enterprise IT was more expensive than adopting public cloud; it was productivity for them, and reducing the infrastructure and people cost was a lot. It made economic sense.

Corey: Oh, people always cost more than the infrastructure itself does.

AB: Exactly right. 70, 80%, like, goes into people, right? And enterprise IT is too slow.
They cannot innovate fast, and all of those problems. But what I found was, for us, while we actually built the community and customers—if you're on AWS and you're running MinIO on EBS, EBS is three times more expensive than S3.

Corey: Or for a single copy of it, too, where if you're trying to go multi-AZ and you have the replication traffic, and not to mention you have to over-provision it, which is a bit of a different story as well. So, it winds up being something on the order of 30 times more expensive, in many cases, to do it right. So, I'm looking at this going, the economics of running this purely by itself in AWS don't make sense to me—long experience teaches me the next question of, "What am I missing?" Not, "That's ridiculous and you're doing it wrong." There's clearly something I'm not getting. What am I missing?

AB: I was telling them that until we made some changes, right—because we saw a couple of things happen. I was initially like, [unintelligible 00:12:00] does not make 30 copies. It makes, like, 1.4x, 1.6x.

But still, the underlying block storage is not only three times more expensive than S3, it's also slow. It's network storage. Trying to put an object store on top of it—another, like, software-defined SAN, like EBS—made no sense to me. For smaller deployments, it's okay, but you should never scale that on EBS. So, it did not make economic sense. I would never take it seriously because it would never help them grow to scale.

But what changed in recent times? Amazon saw that this was not only a problem for MinIO-type players. Every database out there today, every modern database, even the message queues like Kafka, they have all gone scale-out. And they all depend on local block storage, and putting scale-out distributed databases and data processing engines on top of EBS would not scale. So Amazon introduced storage-optimized instances.
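The back-of-envelope version of this cost argument can be written down. The per-GB prices below are illustrative assumptions for the sake of the arithmetic, not current AWS list prices; the ~1.4x-1.6x figure is the erasure-coding overhead AB mentions:

```python
# Rough cost comparison of MinIO-on-EBS vs. native S3, using assumed prices.
S3_PER_GB = 0.023        # assumed $/GB-month for S3 standard (illustrative)
EBS_PER_GB = 0.08        # assumed $/GB-month for EBS, roughly 3x S3 (illustrative)
ERASURE_OVERHEAD = 1.5   # erasure coding stores ~1.4x-1.6x the logical data

def monthly_cost(logical_tb, per_gb, overhead=1.0):
    """Monthly storage cost for logical_tb terabytes at per_gb $/GB-month."""
    return logical_tb * 1024 * per_gb * overhead

s3 = monthly_cost(100, S3_PER_GB)
minio_on_ebs = monthly_cost(100, EBS_PER_GB, ERASURE_OVERHEAD)
print(f"100 TB on S3:        ${s3:,.0f}/month")
print(f"100 TB MinIO on EBS: ${minio_on_ebs:,.0f}/month")
print(f"ratio: {minio_on_ebs / s3:.1f}x")
```

Under these assumed prices the EBS-backed deployment comes out several times more expensive than S3, which is why the conversation only changed once local-drive, storage-optimized instances arrived.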
Essentially, that removed the step of the data infrastructure guy, data engineer, or application developer asking IT, "I want a SuperMicro or Dell server, or even virtual machines." That's too slow, too inefficient.

They can provision these storage machines on demand, and then do it through Kubernetes. These two changes—all the public cloud players have now adopted Kubernetes as the standard, and they have to stick to the Kubernetes API standard. If they are incompatible, they won't get adopted. And storage-optimized means local drives: these are machines, like [I3 EN 00:13:23], with 24 drives, SSDs, and fast network—like, 25-gigabit, 200-gigabit-type network. The availability of these machines—what would typically run any database, HDFS cluster, MinIO, all of them—those machines are now available just like any other EC2 instance.

They are efficient. You can actually put MinIO side by side with S3 and still be price competitive. And Amazon wants to—just like their retail marketplace, they want to compete and be open. They have enabled it. In that sense, Amazon is actually helping us. And it turned out that now I can help customers build multi-petabyte infrastructure on Amazon and still stay efficient, still stay price competitive.

Corey: I would have said for a long time that if you were to ask me to build out the lingua franca of all the different cloud providers into a common API, the S3 API would be one of them. Now, you are building this out, multi-cloud, you're in all three of the major cloud marketplaces, and the way that you do that and do those deployments seems like it is the modern multi-cloud API of Kubernetes. When you first started building this, Kubernetes was very early on. What was the evolution of getting there? Or were you one of the first early-adoption customers in the Kubernetes space?

AB: So, when we started, there was no Kubernetes. But we saw the problem very clearly.
There were containers, and then came Docker Compose and Swarm. Then there was Mesos, Cloud Foundry, you name it, right? There were many solutions, all the way up to even VMware trying to get into that space.

And what did we do? Early on, I couldn't choose. It's not in our hands, right, who is going to be the winner, so we just simply embraced everybody. It was also tiring to implement native connectors to all these different orchestrators—Pivotal Cloud Foundry alone has its own standard, the Open Service Broker, that's only popular inside their system. Go anywhere else, and everybody was incompatible.

And beyond that, there were Chef, Ansible, and Puppet scripts, too. We just simply embraced everybody until the dust settled. When it settled down, clearly the declarative model of Kubernetes became easier. Also, the Kubernetes developers understood the community well. Coming from Borg, I think they understood the right architecture. And it was written in Go, unlike Java, right?

It actually matters; these minute details resonate with the infrastructure community. It took off, and then that helped us immensely. Now, not only is Kubernetes popular, it has become the standard, from VMware to OpenShift to all the public cloud providers—GKS, AKS, EKS, whatever, right—GKE. All of them now are basically Kubernetes standard. It made not only our life easier; it made every other [ISV 00:16:11], every other open-source project—everybody can now finally write one codebase that can be operated portably.

It is a big shift. It is not because we chose; we just watched all this, and we were riding along the way. And because we resonated with the infrastructure community—modern infrastructure is dominated by open-source.
We were also the leading open-source object store, and as the Kubernetes community adopted us, we were naturally embraced by the community.

Corey: Back when AWS first launched with S3 as its first offering, there were a bunch of folks who were super excited, but object stores didn't make a lot of sense to them intrinsically, so they looked into this and said, "Ah, I can build a file system for users on top of S3." And the reaction was, "Holy God, don't do that." And the way that AWS decided to discourage that behavior is a per-request charge, which for most workloads is fine, whatever, but for some it causes a significant burden. With running something like MinIO in a self-hosted way, suddenly that cost doesn't exist in the same way. Does that open the door again to, so now I can use it as a file system again, in which case that just seems like using the local file system, only with extra steps?

AB: Yeah.

Corey: Do you see patterns that are emerging with customers' use of MinIO that you would not see with the quote-unquote, "Provider's" quote-unquote, "Native" object storage option, or do the patterns mostly look the same?

AB: Yeah, if you took an application that ran on file and block and brought it over to object storage, that makes sense. But something that is competing with object store, or a layer below object store—at the end of the day, drives are block devices, you have a block interface, right—trying to bring SAN or NAS on top of object store is actually a step backwards. They completely missed the message Amazon was sending: if you bring a file system interface on top of object store, you've missed the point, because you are now bringing back the legacy things that Amazon intentionally removed from the infrastructure. Trying to bring them back on top doesn't make it any better.
If you are arguing for compatibility with some legacy applications, sure, but a file system written on top of object store will never be better than NetApp, EMC—like EMC Isilon—or anything else. Or even GlusterFS, right?

But if you want a file system—I always tell the community, when they ask us, "Why don't you add an FS option and do a multi-protocol system?"—I tell them that the whole point of S3 is to remove all those legacy APIs. If I added POSIX, then I'd be a mediocre object storage and a terrible file system. I would never do that. But why not write a FUSE file system, right? Like, S3Fs is there.

In fact, initially, for legacy compatibility, we wrote MinFS, and I had to hide it. We actually archived the repository because immediately people started using it. Even simple things like, at the end of the day, can I use Unix [Coreutils 00:19:03] like [cp, ls 00:19:04]—all these tools I'm familiar with? For object storage that's not a file system, S3 [CMD 00:19:08] or the AWS CLI is, like, too bloated. And it's not really a Unix-like feeling.

Then I told them, "I'll give you a BusyBox-like single static binary, and it will give you all the Unix tools that work for the local filesystem as well as object store." That's where the [MC tool 00:19:23] came from; it gives you all the Unix-like programmability, all the core tools, object storage compatible, speaking native object store. But if I have to make object store look like a file system so Unix tools would run, it would not only be inefficient—Unix tools never scaled to this kind of capacity.

So, it would be a bad idea to take a step backwards and bring the legacy stuff back inside. For some very small cases, simple POSIX calls using [ObjectiveFs 00:19:49], S3Fs, and a few others make sense for legacy compatibility reasons, but in general, I tell the community, don't bring file and block.
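Part of why ls-style tooling needs rethinking on object storage is that the namespace is flat: "directories" are emulated by grouping keys on a delimiter, the way S3's prefix/delimiter listing works. A hypothetical sketch of that grouping (not mc's actual implementation):

```python
# Emulate a directory listing over a flat key namespace, S3-style:
# keys sharing a delimiter under the prefix collapse into "common prefixes".
def ls(keys, prefix="", delimiter="/"):
    objects, common_prefixes = [], set()
    for key in keys:
        if not key.startswith(prefix):
            continue
        rest = key[len(prefix):]
        if delimiter in rest:
            # Everything deeper than one level shows up as a pseudo-directory.
            common_prefixes.add(prefix + rest.split(delimiter, 1)[0] + delimiter)
        else:
            objects.append(key)
    return sorted(common_prefixes), sorted(objects)

keys = ["a/b/file1", "a/b/file2", "a/readme", "top.txt"]
print(ls(keys, prefix="a/"))  # (['a/b/'], ['a/readme'])
```

Because every "ls" is really a prefix scan over the whole keyspace, this is cheap for a tool like mc at normal scales but is exactly the kind of operation that classic Unix tools were never designed to do over billions of keys.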
If you want file and block, leave those on virtual machines, leave that infrastructure in a silo, and gradually phase them out.

Corey: This episode is sponsored in part by our friends at Vultr. Spelled V-U-L-T-R because they're all about helping save money, including on things like, you know, vowels. So, what they do is they are a cloud provider that provides surprisingly high performance cloud compute at a price that—while sure they claim it's better than AWS pricing—and when they say that they mean it is less money. Sure, I don't dispute that, but what I find interesting is that it's predictable. They tell you in advance on a monthly basis what it's going to cost. They have a bunch of advanced networking features. They have nineteen global locations and scale things elastically. Not to be confused with openly, because apparently elastic and open can mean the same thing sometimes. They have had over a million users. Deployments take less than sixty seconds across twelve pre-selected operating systems. Or, if you're one of those nutters like me, you can bring your own ISO and install basically any operating system you want. Starting with pricing as low as $2.50 a month for Vultr cloud compute, they have plans for developers and businesses of all sizes, except maybe Amazon, who stubbornly insists on having something to scale all on their own. Try Vultr today for free by visiting vultr.com/screaming, and you'll receive $100 in credit. That's v-u-l-t-r dot com slash screaming.

Corey: So, my big problem, when I look at what S3 has done, is in its name because, of course, naming is hard. It's "Simple Storage Service." The problem I have is with the word simple, because over time, S3 has gotten more and more complex under the hood. It automatically tiers data the way that customers want.
And integrated with things like Athena, you can now query it directly; whenever an object appears, you can automatically fire off Lambda functions, and the rest.

And this is increasingly looking a lot less like a place to just dump my unstructured data, and increasingly a lot like, this is sort of a database, in some respects. Now, understand, my favorite database is Route 53; I have a long and storied history of misusing services as databases. Is this one of those scenarios, or is there some legitimacy to the idea of turning this into a database?

AB: Actually, there is now the S3 Select API: if you're storing unstructured data like CSV, JSON, or Parquet, without even downloading a compressed CSV, you can actually send a SQL query into the system. In MinIO particularly, S3 Select is [CMD 00:21:16]-optimized. We can load, like, every 64k worth of CSV lines into registers and do [CMD] operations. It's the fastest SQL filter out there. Now, with these kinds of capabilities, we are just a little bit away from a database; should we do a database? I would say definitely no.

The very strength of the S3 API is to actually limit all the mutations, right? Particularly if you look at databases, they're dealing with metadata and querying; the biggest value they bring is indexing the metadata. But if I'm dealing with that, then I'm dealing with really small blocks and lots of mutations. The separation is that object storage should be dealing with persistence and not mutations. Mutations are [AWS 00:21:57] problem. Separating the database function from the persistence function is where object storage got it right.

Otherwise, they would make the mistake of doing POSIX-like behavior, and then not only bringing back all those capabilities, but doing IOPS-intensive workloads across HTTP—it wouldn't make sense, right? So, object storage got the API right. But should it now be a database? It definitely should not be a database.
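Conceptually, S3 Select pushes a SQL filter down to the store so that only matching rows cross the wire instead of the whole object. A plain-Python sketch of the filtering idea, not the real SelectObjectContent API and not MinIO's SIMD-optimized implementation:

```python
# Emulate the effect of: SELECT * FROM s3object s WHERE <predicate>
# over a CSV object, filtering server-side so only matching rows return.
import csv
import io

def select_csv(object_bytes: bytes, predicate):
    """Parse a CSV blob and return only the rows the predicate accepts."""
    reader = csv.DictReader(io.StringIO(object_bytes.decode()))
    return [row for row in reader if predicate(row)]

blob = b"name,requests\napi,120\nweb,45\nbatch,300\n"
hot = select_csv(blob, lambda r: int(r["requests"]) > 100)
print([r["name"] for r in hot])  # ['api', 'batch']
```

In the real API the predicate is a SQL string evaluated inside the object store; the payoff is bandwidth, since a query touching a few rows of a multi-gigabyte CSV returns kilobytes, not gigabytes.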
In fact, I actually hate the idea of Amazon yielding to the file system developers and giving a [file three 00:22:29] hierarchical namespace so they can write nice file managers.

That was a terrible idea. A hierarchical namespace that's also sorted now puts a tax on how the metadata is indexed and organized. Amazon should have left the core API very simple and told them to solve these problems outside the object store. Many application developers don't need it. Amazon was trying to satisfy everybody's needs. Saying no to some of these file-system-type, file-manager-type users would have been the right way.

But nevertheless, having added those capabilities, you can now see S3 is no longer simple. And we had to keep that compatibility, and I hate that part. I actually don't mind compatibility, but all the wrong things that Amazon is adding, now I have to add because it's compatible. I kind of hate that, right?

But going all the way to a database would be pushing it to a whole new level. Here is the simple reason why that's a bad idea. The right way to do a database—in fact, the database industry is already going in the right direction. Unstructured data, key-value or graph, different types of data—you cannot possibly solve all that even in a single database. They are trying to be multimodal databases; even they are struggling with it.

You can never be Redis, Cassandra, and, like, SQL all-in-one. They try to say that, but in reality, you will never be better than any one of those focused database solutions out there. Trying to bring that into object store would be a mistake. Instead, let the databases focus on query language implementation and query computation, and leave the persistence to object store.
So, the object store can still focus on storing your database segments, the table segments, but the index stays in the memory of the database.

Even the index can be snapshotted once in a while to object store, but using object store for persistence and the database for query is the right architecture. And almost all the modern databases have now gone that route, from Elasticsearch to [unintelligible 00:24:21] to even Kafka, like, message queues. Even Microsoft SQL Server, Teradata, Vertica, you name it, Splunk—they have all gone the object storage route, too. Snowflake itself is a prime example, BigQuery and all of them.

That's the right way. Databases can never be consolidated. There will be many different kinds of databases. Let them specialize in GraphQL or Graph API, or key-value, or SQL. Let them handle the indexing and the mutations; they cannot handle petabytes of data. That [unintelligible 00:24:51] to object store is how the industry is shaping up, and it is going in the right direction.

Corey: One of the ways I learn the most about various services is by talking to customers. Every time I think I've seen something—this is amazing, this service is something I completely understand—all I have to do is talk to one more customer. And when I was doing a bill analysis project a couple of years ago, I looked into a customer's account and saw a bucket with—okay, that has 280 billion objects in it—and wait, was that billion with a B?

And I asked them, "So, what's going on over there?" And they said, "Well, we built our own columnar database on top of S3. This may not have been the best approach." It's, "I'm going to stop you there.
With no further context, it was not, but please continue."

It's the sort of thing that would never have occurred to me to even try. Do you tend to see similar—I would say they're anti-patterns, except somehow they're made to work—in some of your customer environments, as they are using the service in ways that are very different than the ways encouraged, or even allowed, by the native object store options?

AB: Yeah, when I first started seeing the database-type workloads coming onto MinIO, I was surprised, too. That was exactly my reaction. In fact, they were storing these 256k, sometimes 64k, table segments—because they need to index it, right—and the table segments were anywhere between 64k to 2MB. And when they started writing table segments, it was more often an [IOPS-type 00:26:22] I/O pattern than a throughput-type pattern. Throughput is an easier problem to solve, and MinIO always saturated these 100-gigabyte NVMe-type drives; they were I/O intensive, throughput optimized.

When I started seeing the database workloads, I had to optimize for small-object workloads, too. We actually did all that because eventually I got convinced that the right way to build a database was to leave the persistence out of the database; they made a compelling argument. Historically, I thought of metadata and data separately: data is very big, so bringing it to object store made sense, while metadata should be stored in a database, and that's only the index pages. Take any book: the index pages are only a few. The database can continue to run adjacent to the object store; it's a clean architecture.

But why would you put the database itself on object store? When I saw a transactional database like MySQL changing [InnoDB 00:27:14] to [RocksDB 00:27:15], and making changes at that layer to write the SS tables [unintelligible 00:27:19] to MinIO, I was like, where do you store the memory, the journal? They said, "That will go to Kafka." And I thought that was insane when it started.
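The pattern being described here, immutable table segments persisted to the object store with a small index held in database memory and snapshotted occasionally, can be reduced to a toy sketch. This is a hypothetical illustration with a dict standing in for the bucket, not any real database's storage engine:

```python
# Toy version of the "database on object store" split:
# segments (data) are immutable blobs in the store; the index lives in memory.
import json

object_store = {}   # stand-in for an S3/MinIO bucket: key -> bytes
index = {}          # in-memory database index: row key -> segment object key

def write_segment(seg_id, rows):
    """Persist a batch of rows as one immutable segment object."""
    key = f"segments/{seg_id}.json"
    object_store[key] = json.dumps(rows).encode()  # written once, never mutated
    for row_key in rows:
        index[row_key] = key  # only the small index mutates, in memory

def read(row_key):
    """Look up the segment via the index, then fetch and decode it."""
    rows = json.loads(object_store[index[row_key]].decode())
    return rows[row_key]

def snapshot_index():
    """Occasionally checkpoint the index itself to the object store."""
    object_store["index/snapshot.json"] = json.dumps(index).encode()

write_segment("0001", {"user:1": "alice", "user:2": "bob"})
snapshot_index()
print(read("user:2"))  # bob
```

Real systems add compaction, write-ahead journaling (the Kafka role mentioned above), and caching, but the division of labor is the same: mutations and indexing stay in the database layer, while the object store only ever sees whole, immutable segments.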
But it continued to grow and grow.

Nowadays, I see most of the databases have gone to object store, but their argument is that the databases also saw explosive growth in data, and they couldn't scale the persistence part. That is where they realized that they were still very good at the indexing part, which object storage would never give. There is no API to do sophisticated queries of the data. You cannot peek inside the data; you can just do streaming reads and writes.

And that is where the databases were still necessary. But databases were also growing in data. One thing that triggered this was the use case moving from data generated by people to data generated by machines. Machines means applications, all kinds of devices. Now, it's like going from seven billion people to a trillion devices is how the industry is changing. And this led to lots of machine-generated, semi-structured, structured data at giant scale coming into databases. The databases need to handle scale. There was no other way to solve this problem other than leaving the—[unintelligible 00:28:31] if you look at columnar data, most of it is machine-generated data; where else would you store it? If they tried to build their own object storage embedded into the database, it would make the database immensely complicated. Let them focus on what they are good at: indexing and mutations. Pulling the data table segments, which are immutable, mutating in memory, and then committing them back gives the right mix. That is the fastest path, and we saw it consistently across the board. Now, it is actually the standard.

Corey: So, you started working on this in 2014, and here we are—what is it—eight years later now, and you've just announced a Series B of $100 million on a billion-dollar valuation. So, it turns out this is not just one of those things people are using for test labs; there is significant momentum behind using this.
How did you get there from—because everything you're saying makes an awful lot of sense, but it feels, at least from where I sit, to be a little bit of a niche. It's a bit of an edge case that is not the common case. Obviously, I'm missing something because your investors are not the types of sophisticated investors who see something ridiculous and say, "Yep. That's the thing we're going to go for." They're right more than they're not.

AB: Yeah. The reason for that was they saw what we had set out to do. In fact, if you look at the lead investor, Intel, they watched us grow. They came in at Series A, and they saw, every day, how we operated and grew. They believed in our message.

And it was actually not about object store, right? Object storage was a means for us to get into the market. When we started, our idea was: ten years from now, what will be a big problem? A lot of times, it's hard to see the future, but if you zoom out, it's hidden in plain sight.

These are simple trends. Every major trend pointed to the world producing more data. No one would argue with that. If I solved one important problem that everybody is suffering from, I won't go wrong. And when you solve the problem, it's about building a product with fine craftsmanship, attention to detail, connecting with the user—all of that standard stuff.

But I picked object storage as the problem because the industry was fragmented across many different data stores, and I knew that wouldn't be the case ten years from now. Applications are not going to adopt different APIs across different clouds—S3 to GCS to Azure Blob to HDFS—everything is incompatible. I saw that if I built a data store for persistence, the industry would consolidate around the S3 API. When we started, Amazon S3 looked like the giant; there was only one cloud, and the industry believed in mono-cloud.
Almost everyone was talking to me like AWS would be the world's data center.

I certainly saw that possibility—Amazon is capable of doing it—but my bet was the other way: AWS S3 will be one of many solutions, and if it's all incompatible, it's not going to work; the industry will consolidate. Our bet was, if the world is producing so much data, and you build an object store that is S3 compatible but ends up as the leading data store of the world and owns the application ecosystem, you cannot go wrong. We kept our heads low and focused the first six years on massive adoption, building the ecosystem to a scale where we could say our ecosystem is now equal to or larger than Amazon's; then we are in business. We didn't focus on commercialization; we focused on convincing the industry that this is the right technology for them to use. Once they are convinced, once you solve business problems, making money is not hard because they are already sold. They are in love with the product, so convincing them to pay is not a big deal, because data is such a critical, central part of their business.

We didn't worry about commercialization; we worried about adoption. And once we got the adoption, now customers are coming to us and they're like, "I don't want an open-source license violation. I don't want a data breach or data loss." They are trying to sell to me, and it's an easy relationship game. And it's about long-term partnership with customers.

And so the business started growing, accelerating. That was the reason that now is the time to fill up the gas tank, and investors were quite excited about the commercial traction as well. And all the intangibles, right—how big we grew in the last few years.

Corey: It really is an interesting segment, one that has always been something I've mostly ignored, like, "Oh, you want to run your own? Okay, great." I get it; some people want to cosplay as cloud providers themselves. Awesome.
There's clearly a lot more to it than that, and I'm really interested to see what the future holds for you folks.

AB: Yeah, I'm excited. I think, at the end of the day, if I solve real problems—every organization is moving from being compute-technology-centric to data-centric, and they're all looking at data warehouses, data lakes, and whatever name they give data infrastructure. Data is now the centerpiece. Software is a commodity. That's how they are looking at it. And it is translating to each of these large organizations—actually, even mid-size companies, even startups nowadays, have petabytes of data—and I see huge potential here. The timing is perfect for us.

Corey: I'm really excited to see this continue to grow. And I want to thank you for taking so much time to speak with me today. If people want to learn more, where can they find you?

AB: I'm always in the community, right: Twitter and, like, I think the Slack channel; it's quite easy to reach out to me. LinkedIn. I'm always excited to talk to our users and community.

Corey: And we will, of course, put links to this in the [show notes 00:33:58]. Thank you so much for your time. I really appreciate it.

AB: Again, wonderful to be here, Corey.

Corey: Anand Babu Periasamy, CEO and co-founder of MinIO. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice, along with what starts out as an angry comment but eventually turns into you, in your position on the S3 product team, writing a thank-you note to MinIO for helping validate your market.

Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS.
We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.

Announcer: This has been a HumblePod production. Stay humble.
Today we have a fun episode lined up for you! Over the course of 2021, we've been honored to have some incredibly smart people on the show to share their views and practices in the DevSecCon space. In each episode, they were asked a slightly open-ended question: if you took out your crystal ball and thought about someone sitting in your position, or your type of role, in five years' time, what would be most different about their reality? For this special installment, we've put together some highlights of their brilliant answers! Hear perspectives that cover everything from changes on the data, AI, and ML front to the idea of ownership when it comes to security. We also touch on the increased fragmentation in the DevOps scene that we're going to need to work with, bigger-picture concerns about how regulation might be different in five years, and some final optimistic predictions on ways we could all be in a much better place! We hear some golden nuggets from the likes of Robert Wood from CMS, cybersecurity influencer Ashish Rajan, Liz Rice from eBPF pioneers Isovalent, our very own Simon Maple, who weighs in with his concrete expectations of what will happen, Dev Akhawe, Daniel Bryant, Rinki Sethi, and so many more! So to hear what these top industry professionals have to say about the future, join us today!
Cloud Security News this week, 21 October 2021. It's a month full of conferences, and as promised we are back with our second episode this week to bring you the cloud security highlights from KubeCon. In this episode we will share some of our team's favourites from KubeCon 2021 North America. If you aren't quite familiar with the wonderful world of Kubernetes, there are a few weird and wonderful open-source acronyms in today's episode: TUF refers to The Update Framework, SPIFFE refers to the Secure Production Identity Framework for Everyone, and SPIRE is the SPIFFE Runtime Environment. Now that we are all across the cool Kube words, let's get into the talks. Starting off with the talk from Andrew Martin, co-founder of Control Plane and author of Hacking Kubernetes and Kubernetes Threat Modelling: he spoke about Kubernetes supply chain security, showcased work to build a Kubernetes software factory with Tekton, and deep-dived on signing and verification approaches to securely build software with TUF, SPIFFE, SPIRE, and sigstore. Ian Coldwater from Twilio, Brad Geesaman and Rory McCune from Aqua Security, and Duffie Cooley from Isovalent combined forces to share with the community how they do security research, hacking Kubernetes clusters using a recently discovered Kubernetes CVE (Common Vulnerabilities and Exposures entry). Their talk was called Exploiting a Slightly Peculiar Volume Configuration with SIG-Honk. Matt Jarvis from Snyk shared what to do if your container has a huge number of vulnerabilities, and how to prioritise and remediate them, in his talk My Container Image Has 500 Vulnerabilities, Now What? Talking about containers and vulnerability scanning: if you want to know how vulnerability scanners work, where their blind spots are, and how to implement a practical risk-based approach to remedy the vulnerabilities that really matter to your organisation, check out Pushkar Joglekar's Keeping Up with the CVEs: How to Find a Needle in a Haystack?
If you find yourself asking "How do I access my S3 bucket in AWS from my GCP cluster?", Brandon Lum and Mariusz Sabath of IBM may have the answer for you in their talk Untangling the Multi-Cloud Identity and Access Problem With SPIFFE Tornjak, where they discuss a proposed shift in the perspective of workload identity from being "platform specific" to "organization wide" using SPIFFE/SPIRE and the new SPIFFE Tornjak project. Episode show notes are on the Cloud Security Podcast website. Podcast Twitter: Cloud Security Podcast (@CloudSecPod). Instagram: Cloud Security News. If you want to watch videos of this live-streamed episode and past episodes, check out: - Cloud Security Podcast: - Cloud Security Academy:
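The signing-and-verification theme of Andrew Martin's talk boils down to one idea: a consumer checks an artifact's digest and signature against trusted metadata before using it. Here is a minimal, purely illustrative Python sketch of that idea. It is not the TUF or sigstore API: real systems use asymmetric keys, key delegation, and transparency logs, while this stand-in uses an HMAC with a shared demo key so the mechanics fit in a few lines.

```python
# Illustrative supply-chain verification sketch (NOT the real TUF or
# sigstore API): a producer records a digest plus a signature over it,
# and a consumer refuses artifacts whose digest or signature fails.
import hashlib
import hmac

TRUSTED_KEY = b"demo-key"  # hypothetical stand-in for a real signing key


def sign_artifact(data: bytes) -> dict:
    """Producer side: record the artifact's digest and a signature over it."""
    digest = hashlib.sha256(data).hexdigest()
    sig = hmac.new(TRUSTED_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return {"digest": digest, "signature": sig}


def verify_artifact(data: bytes, metadata: dict) -> bool:
    """Consumer side: recompute the digest, then check the signature."""
    digest = hashlib.sha256(data).hexdigest()
    if digest != metadata["digest"]:
        return False  # artifact bytes were tampered with
    expected = hmac.new(TRUSTED_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, metadata["signature"])


artifact = b"container image layer bytes"
meta = sign_artifact(artifact)
assert verify_artifact(artifact, meta)          # untouched artifact passes
assert not verify_artifact(b"tampered", meta)   # modified artifact fails
```

The design point the talks emphasise is that verification must happen at consumption time, inside the build pipeline, not only at publish time.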
Welcome to another episode of The Secure Developer! During today's conversation, Guy Podjarny, founder of Snyk, speaks with Liz Rice, Chief Open Source Officer at eBPF pioneers Isovalent, where she works on the Cilium project, which provides cloud native networking, observability, and security. They touch on plenty of current and relevant topics, with a focus on eBPF and the CNCF and its role in security. You'll hear all about her role and her journey into the world of cybersecurity, and what it was like to transition into the sometimes intimidating world of security. We touch on why containers are essentially just processes, and Liz gives us an introduction to eBPF, how it benefits security, and the renaissance it is currently experiencing. Liz tells us all about her work at the CNCF and its Technical Oversight Committee, and how it is building much of the foundation for cloud native computing. Join us today to hear all this and more!
Why the Linux kernel received so much mainstream attention this week, some of our favorite open-source projects get great updates, and why we're concerned about Linux Foundation members transferring innovation from Linux to closed source software at an industrial scale.
What's new in Debian 11, and an example of the Linux Foundation funneling free software to their corporate friends. Plus, why Western Digital might be to thank for your next ultimate Linux workstation.
I speak with Neela Jacque about eBPF and what Cilium and Isovalent offer to cloud-native networking and security. I also cover the rise of VSCode, the re-rise of co-working, Telegram v Signal, and more! --- Send in a voice message: https://anchor.fm/theweeklysqueak/message
Cilium is open-source software built to provide improved networking and security controls for Linux systems operating in containerized environments along with technologies like Kubernetes. In a containerized environment, traditional Layer 3 and Layer 4 networking and security controls based on IP addresses and ports, like firewalls, can be difficult to operate at scale because of the volatility of the system. Cilium is built on eBPF, an in-kernel virtual machine that lets sandboxed programs attach directly to code paths in the kernel. In effect, this makes the Linux kernel "programmable" without changing kernel source code or loading modules. Cilium takes advantage of this functionality to insert networking and security functions at the kernel level rather than relying on traditional Layer 3 or Layer 4 controls. This allows Cilium to combine metadata from Layer 3 and Layer 4 with application-layer metadata, such as HTTP method and header values, in order to establish rules and provide visibility based on service, pod, or container identity. Isovalent, co-founded by the creator of Cilium, maintains the Cilium open-source project and also offers Cilium Enterprise, a suite of tools helping organizations adopt Cilium and overcome the hurdles of building secure, stable cloud-native applications. Dan Wendlandt and Thomas Graf are the co-founders of Isovalent. Thomas, the firm's CTO, was the original creator of the Cilium open-source project and spent 15 years working on the Linux kernel prior to founding Isovalent. Dan, Isovalent's CEO, previously worked at VMware and Nicira. They join the show today to talk about why Cilium and Cilium Enterprise are a great choice for organizations looking to build cloud-native applications.
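The identity-based model described above can be made concrete with a small sketch. This is not Cilium's actual API or policy engine; it is a hypothetical Python stand-in showing why matching on workload labels plus L7 metadata (here, the HTTP method and path) is more stable than matching on pod IPs, which churn as pods are rescheduled.

```python
# Illustrative sketch of identity-based policy (NOT Cilium's real API).
# Rules match workload labels and HTTP-level metadata instead of
# volatile pod IP addresses and ports.

def request_allowed(policy, source_labels, http_method, path):
    """Return True if any rule in the policy admits this request."""
    for rule in policy:
        # dict items views are set-like: rule labels must be a subset
        # of the source workload's labels for the rule to apply.
        labels_ok = rule["from_labels"].items() <= source_labels.items()
        method_ok = http_method in rule["methods"]
        path_ok = path.startswith(rule["path_prefix"])
        if labels_ok and method_ok and path_ok:
            return True
    return False


# Hypothetical policy: frontends may only GET under /api/.
policy = [
    {"from_labels": {"app": "frontend"}, "methods": {"GET"}, "path_prefix": "/api/"},
]

# Allowed regardless of which IP the frontend pod happens to have today.
assert request_allowed(policy, {"app": "frontend", "env": "prod"}, "GET", "/api/items")
# A POST from the same workload is denied by this policy.
assert not request_allowed(policy, {"app": "frontend"}, "POST", "/api/items")
```

The point of the sketch is the shape of the rule: identity (labels) plus application-layer metadata, which is exactly the combination the episode credits Cilium with enforcing at the kernel level via eBPF.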
Isovalent is essentially a commercially supported flavor of Cilium, although it's more than that. Isovalent is offering Cilium Enterprise, which adds more capability to the Cilium Community project. Is there enough “more” to make you want to invest in Cilium Enterprise? That will depend on your organizational needs, of course, but the differences are substantial enough to warrant investigation.
Isovalent is essentially a commercially supported flavor of Cilium, although it's more than that. Isovalent is offering Cilium Enterprise, which adds more capability to the Cilium Community project. Is there enough “more” to make you want to invest in Cilium Enterprise? That will depend on your organizational needs, of course, but the differences are substantial enough to warrant investigation.
Isovalent is essentially a commercially supported flavor of Cilium, although it's more than that. Isovalent is offering Cilium Enterprise, which adds more capability to the Cilium Community project. Is there enough “more” to make you want to invest in Cilium Enterprise? That will depend on your organizational needs, of course, but the differences are substantial enough to warrant investigation.
This week's Network Break discusses a serious flaw in some Aruba switches, why Kemp acquired Flowmon, what makes a networking startup worth a $29 million investment, what Extreme Networks is up to with its AWS Outpost competitor, how Cisco is spinning a tough financial quarter, and more tech news.
This week's Network Break discusses a serious flaw in some Aruba switches, why Kemp acquired Flowmon, what makes a networking startup worth a $29 million investment, what Extreme Networks is up to with its AWS Outpost competitor, how Cisco is spinning a tough financial quarter, and more tech news.
This week's Network Break discusses a serious flaw in some Aruba switches, why Kemp acquired Flowmon, what makes a networking startup worth a $29 million investment, what Extreme Networks is up to with its AWS Outpost competitor, how Cisco is spinning a tough financial quarter, and more tech news.
Isovalent, a startup that aims to bring networking into the cloud-native era, today announced that it has raised a $29 million Series A round led by Andreesen Horowitz and Google. In addition, the company today officially launched its Cilium platform (which was in stealth until now) to help enterprises connect, observe and secure their applications. […]
In today's world, computer networks have become as indispensable as the networks that distribute water, gas, and electricity, or as highways and air routes. Without the internet there is no World Wide Web, and without networks there are no distributed applications. Today the network is even treated as a simple commodity. Yet, unlike water and electricity, computer networks have become increasingly difficult to understand, because nowadays they too are virtualized. And while it's true that they evolved alongside our virtual machines, that's even more true at the Kubernetes level, notably with the creation of CNIs, or Container Network Interfaces. To make sense of all these concepts, today I have not one guest but three: Paul Chaignon, Quentin Monnet, and Robin Hahling. Paul, Quentin, and Robin all work for Isovalent and contribute to the open-source Cilium project. With them, we'll find out why Cilium is not a CNI like the others! Support the show (https://www.patreon.com/electromonkeys)