AI is reshaping the fundamental economics of startups—lowering product development costs, compressing GTM cycles, and rewriting the rules of competition. In this episode, Craig McLuckie (Co-Founder & CEO @ Stacklok, co-creator of Kubernetes) unpacks "the epoch of the startup," a moment of massive disruption where fast-moving founders have a unique edge over incumbents. We explore how Craig is navigating this new era, from rethinking cost structure, value capture, and defensibility to leveraging open source, community, and asymmetric advantages as core pillars of Stacklok's strategy. Craig shares lessons from pivotal product shifts, frameworks for identifying moats, and the broader societal implications of AI-driven disruption. Whether you're leading a startup, pivoting in the face of AI, or thinking about your next big move, this conversation offers a strategic playbook for thriving in today's shifting landscape.

How do you see AI reshaping the startup landscape? Join the discussion on our forum and share your insights, questions, and takeaways.

ABOUT CRAIG MCLUCKIE
Craig is the CEO and co-founder of Stacklok, where his team is working to tip AI code generation on its side, from vertical, closed solutions to horizontal, aligned systems. Craig was previously CEO and co-founder of Heptio, which was acquired by VMware in 2018; he has also led product and engineering teams at Google and Microsoft. Craig is a co-creator of Kubernetes, and he bootstrapped and chaired the Cloud Native Computing Foundation.

This episode is brought to you by Side – delivering award-winning QA, localization, player support, and tech services for the world's leading games and technology brands. For over 30 years, Side has helped create unforgettable user experiences—from indies to AAA blockbusters like Silent Hill 2 and Baldur's Gate 3. Learn more about Side's global solutions at side.inc.

SHOW NOTES:
Why this moment is "the epoch of the startup" (2:03)
How AI shifts startup economics: from cost structures to value capture (4:18)
Why incumbents struggle during disruption—and how startups can win (8:17)
The origin story behind Stacklok & lessons from Craig's pivot (11:04)
Frameworks for identifying asymmetric advantages as a founder (14:48)
How to map your unique asymmetric advantages to new opportunities and secure stakeholder buy-in (16:34)
Rethinking defensibility & value capture in the AI era (16:29)
How Craig applied cost, GTM & product perspectives to strategic pivots @ Stacklok (18:07)
Building investment theses: Aligning cultural strengths & asymmetric advantages with evolving opportunities (20:05)
Determining your startup's investment themes (22:53)
Structuring experiments & validating opportunities (24:15)
Defensibility & building community-driven moats in early ideation phases (26:54)
Signals of early community-product alignment (31:24)
Conversation frameworks to assess asymmetric advantages (32:22)
Societal implications of AI disruption & the "startup epoch" (35:14)
Rapid fire questions (38:12)

This episode wouldn't have been possible without the help of our incredible production team:
Patrick Gallagher - Producer & Co-Host
Jerry Li - Co-Host
Noah Olberding - Associate Producer, Audio & Video Editor https://www.linkedin.com/in/noah-olberding/
Dan Overheim - Audio Engineer, Dan's also an avid 3D printer - https://www.bnd3d.com/
Ellie Coggins Angus - Copywriter, Check out her other work at https://elliecoggins.com/about/
Ever heard of Kubernetes? Envoy? Prometheus? You probably have. But what you may not know is that these projects are managed and sustained by the Cloud Native Computing Foundation, aka the CNCF. Cloud native computing allows IT and software to move faster, and open source projects are essential to furthering these innovations and keeping them accessible for everyone. The CNCF is dedicated to fostering and sustaining an ecosystem of open source, vendor-neutral projects. In fact, Spotify donated Backstage to the CNCF in September 2020, and it moved to the incubating maturity level in March 2022. To celebrate the CNCF's 10th year, host and principal engineer Dave Zolotusky speaks with Chris Aniszczyk, Chief Technology Officer at the CNCF, to better understand what an open source foundation is, what it does, and why it matters so much to the developer community. Plus, we'll get a sneak peek at KubeCon + CloudNativeCon Europe.

Learn more about Chris Aniszczyk, the CNCF, and open source:
NerdOut@Spotify, Ep. 02: Open Issues
NerdOut@Spotify, Ep. 11: Open Source Work Is Work
Cloud Native Computing Foundation blog
Chris Aniszczyk on Twitter
Chris's Open Source Velocity Reports
Certified Backstage Associate certification and other certifications
Register for KubeCon + CloudNativeCon London

Read what else we're nerding out about on the Spotify Engineering Blog: engineering.atspotify.com
You should follow us on Twitter @SpotifyEng, LinkedIn, and YouTube!
Red Hat donates a bunch of container tools to the Cloud Native Computing Foundation, and we are broadly positive about it. Plus, some of our highlights from the recent KubeCon + CloudNativeCon North America 2024 conferences.

News/discussion:
Red Hat to Contribute Comprehensive Container Tools Collection to Cloud Native Computing Foundation
KubeCon + CloudNativeCon North America 2024

(Hybrid Cloud Show – Episode 20)
We welcome an incredible guest: an official ambassador of the Cloud Native Computing Foundation (CNCF)! We get to know her journey, from her first steps in technology to becoming a reference in the cloud native community. Discover what the CNCF is, its contribution to tools like Kubernetes and Prometheus, and how these projects are shaping the future of technology. We also discuss the importance of participating in communities: how they help boost careers, create valuable connections, and keep you up to date with the latest market trends.

Full editing by Rádiofobia Podcast e Multimídia: https://radiofobia.com.br/
Follow us on Twitter and Instagram: @luizalabs @cabecadelab
Questions, "cabeçadas," and suggestions: email cabecadelab@luizalabs.com or send a DM on Instagram
Participants:
YOHAN RODRIGUES | https://www.linkedin.com/in/yohan-rodrigues/
NATÁLIA GRANATO | https://www.linkedin.com/in/nataliagranato | https://www.nataliagranato.xyz | https://github.com/nataliagranato
Jetstack's cert-manager, a leading open-source project in Kubernetes certificate management, began as a job interview challenge. Co-founder Matt Barker recalls asking a prospective engineer to automate Let's Encrypt within Kubernetes. By Monday, the candidate had created kube-lego, which evolved into cert-manager, now downloaded over 500 million times monthly.

Cert-manager's journey to CNCF graduation, achieved in September, began with its donation to the foundation four years ago. Relaunched as cert-manager, the project grew under engineer James Munnelly, becoming the de facto standard for certificate lifecycle management. The thriving community and ecosystem around cert-manager highlighted its suitability for CNCF stewardship. However, maintainers, including Ashley Davis, noted challenges in navigating differing opinions within its vast user base.

With graduation achieved, cert-manager's roadmap includes sub-projects like trust-manager, addressing TLS trust bundle management and Istio integration. Barker aims to streamline enterprise-scale deployments and educate security teams on cert-manager's impact. Cert-manager has become integral to cloud-native workflows, promising to simplify hybrid, multicloud, and edge deployments.

Learn more from The New Stack about cert-manager:
Jetstack's cert-manager Joins the CNCF Sandbox of Cloud Native Technologies
Jetstack Secure Promises to Ease Kubernetes TLS Security

Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
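For readers new to cert-manager: it reconciles declarative Certificate resources into issued, auto-renewed TLS secrets. Below is a minimal sketch of such a resource, rendered from Go for concreteness; the domain, secret name, and "letsencrypt-prod" issuer are hypothetical placeholders, not details from the episode, though the field names follow the well-known cert-manager.io/v1 API.

```go
package main

import (
	"fmt"

	"sigs.k8s.io/yaml"
)

func main() {
	// A cert-manager Certificate resource: cert-manager watches for these
	// and keeps a TLS secret issued and renewed inside the cluster.
	// The issuer, namespace, and DNS names below are illustrative only.
	cert := map[string]interface{}{
		"apiVersion": "cert-manager.io/v1",
		"kind":       "Certificate",
		"metadata": map[string]interface{}{
			"name":      "example-com",
			"namespace": "default",
		},
		"spec": map[string]interface{}{
			"secretName": "example-com-tls",       // where the issued key pair is stored
			"dnsNames":   []string{"example.com"}, // names to request on the certificate
			"issuerRef": map[string]interface{}{ // which (Cluster)Issuer signs the request
				"name": "letsencrypt-prod",
				"kind": "ClusterIssuer",
			},
		},
	}

	out, err := yaml.Marshal(cert)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out)) // manifest ready to be applied with kubectl
}
```

In practice this object is usually written directly as YAML; the point of the sketch is simply how little declaration cert-manager needs before it takes over the whole issue-and-renew lifecycle.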
Catch up on everything you missed at KubeCon North America 2024! Join us for a special recap that brings you closer to the action. This is a special episode in collaboration with the Cloud Native Computing Foundation (CNCF), the foundation behind KubeCon + CloudNativeCon and the cloud-native projects. Dotan Horovits, our host and a CNCF Ambassador, is joined by an all-star panel of cloud-native experts—CNCF Ambassadors Viktor Farcic and Max Körbächer—each bringing their unique insights and takeaways from the conference. Together, they unpack the major project announcements and key themes from this year's event: the standout talks, co-located events, maintainer meetings, and those memorable hallway conversations. Get insights from the experts who know the cloud-native space inside out.

Viktor Farcic is a lead rapscallion at Upbound and a published author. He is a host of the YouTube channel DevOps Toolkit and a co-host of DevOps Paradox.

Max is Co-Founder at Liquid Reply. He is Co-Chair of the CNCF Environmental Sustainability Technical Advisory Group and served three years on the Kubernetes release team. He runs the Munich Kubernetes Meetup as well as the Munich and Ukraine Kubernetes Community Days.

Dotan Horovits is a DevOps specialist with a special focus on observability solutions and related open source projects such as OpenTelemetry, Jaeger, Prometheus, and OpenSearch. He runs the OpenObservability Talks podcast, now in its 5th year.

Don't miss this expert-led KubeCon recap, in collaboration with the Cloud Native Computing Foundation's official channel! The episode was live-streamed on 19 November 2024 in collaboration with the Cloud Native Computing Foundation, and the video is available at https://www.youtube.com/watch?v=1TrPev5IzB8

You can read the recap post: https://medium.com/@horovits/1362959030c1

OpenObservability Talks episodes are released monthly, on the last Thursday of each month, and are available for listening on your favorite podcast app and on YouTube. We live-stream the episodes on Twitch and YouTube Live - tune in to see us live, and chime in with your comments and questions on the live chat.
https://www.youtube.com/@openobservabilitytalks
https://www.twitch.tv/openobservability

Show Notes:
00:00 - episode and speaker intro
02:45 - KubeCon Salt Lake City stats and trends
05:26 - The cloud-native stack is maturing up
08:12 - KubeCon's role in the cloud-native space
11:23 - Platform Engineering trend
14:07 - Open specifications and Kubernetes API
18:44 - Flatcar joins the CNCF with container focused OS
24:54 - wasmCloud moves to CNCF incubation and WASMCon highlights
31:49 - CNCF Ambassador program and recent Community Awards
35:24 - KubeCon event plan and expansion, and local KCDs
43:34 - Environmental Sustainability TAG
47:46 - Dapr and cert-manager reached CNCF graduation
51:11 - Cloud Native Reference Architectures
54:39 - observability updates for Jaeger, Prometheus and more
58:53 - episode outro

Resources:
CNCF community awards: https://www.cncf.io/announcements/2024/11/14/cloud-native-computing-foundation-announces-the-2024-community-awards-winners/
Dapr graduation: https://www.cncf.io/announcements/2024/11/12/cloud-native-computing-foundation-announces-dapr-graduation/
wasmCloud moves to incubation: https://www.cncf.io/blog/2024/11/12/cncf-welcomes-wasmcloud-to-the-cncf-incubator/
More on wasmCloud: https://medium.com/p/02a5025c6115
OpenTelemetry expands into CI/CD observability: https://www.linkedin.com/feed/update/urn:li:share:7259200802689273856
Jaeger v2 unveiled: https://medium.com/p/be612dbee774
Prometheus 3.0 unveiled: https://medium.com/p/1c5edca32c87
Flatcar joins the CNCF: https://www.linkedin.com/feed/update/urn:li:share:7257278073824288768/
OpenCost matured into incubation: https://www.linkedin.com/feed/update/urn:li:share:7257826394179522562
New Cloud Native Reference Architecture hub: https://architecture.cncf.io/
CNCF upcoming events: https://www.cncf.io/events/
Kubernetes Community Days events around the world: https://www.cncf.io/kcds/

Socials:
Twitter: https://twitter.com/OpenObserv
YouTube: https://www.youtube.com/@openobservabilitytalks

Dotan Horovits
Twitter: @horovits
LinkedIn: www.linkedin.com/in/horovits
Mastodon: @horovits@fosstodon
BlueSky: @horovits.bsky.social

Viktor Farcic
Twitter: https://twitter.com/vfarcic
LinkedIn: https://www.linkedin.com/in/viktorfarcic
BlueSky: https://bsky.app/profile/vfarcic.bsky.social

Max Körbächer
Twitter: https://twitter.com/mkoerbi
LinkedIn: https://www.linkedin.com/in/maxkoerbaecher
BlueSky: https://bsky.app/profile/mkoerbi.bsky.social
Mastodon: https://fosstodon.org/@mkorbi@mastodon.social
Argo is an open-source suite of tools to enhance continuous delivery and workflow orchestration in Kubernetes environments. The project had its start at Applatix and was accepted to the Cloud Native Computing Foundation in 2020. Michael Crenshaw and Zach Aller are both lead maintainers for Argo. They join the show with Lee Atchison to talk about Argo and Kubernetes. This episode appeared first on Software Engineering Daily.
Fighting Patent Trolls that threaten Open Source innovation - Featuring Joanna Lee and Sean Ambwani

In this episode of Hashtag Trending's weekend edition, host Jim Love dives into the world of patent trolls and their impact on innovation. Featuring guests Joanna Lee from the Linux Foundation and Cloud Native Computing Foundation, and Sean Ambwani, co-founder of Unified Patents. They discuss the challenges of defending against non-practicing entities (patent trolls), the importance of community and collective effort in countering these threats, and the role of Unified Patents' open source zone in protecting open source technologies. Learn how developers and companies can join the fight, contribute crowd-sourced prior art, and benefit from participating in Unified Patents programs. Don't miss this engaging discussion on protecting innovation and the future of open source.

00:00 Introduction and Host Welcome
00:09 The Challenges of Reviving the IT World
01:27 Legal Trolling and Its Impact
01:56 Introducing the Guests: Joanna Lee and Sean Ambwani
04:35 Understanding Patent Trolls
12:45 The Role of Prior Art in Invalidating Patents
17:09 Community Efforts Against Patent Trolls
29:11 Call to Action and Conclusion
What truly defines "cloud native"? Join Tim, Alex, Chris, and special guest Will Collins as they challenge conventional wisdom and dissect the evolving concept of cloud-native infrastructure. This episode kicks off by exploring the Cloud Native Computing Foundation's (CNCF) definition, examining whether it serves as a prescriptive guide or merely a loose framework. We touch on the array of technologies and practices it encompasses—such as containers, service meshes, microservices, and serverless architectures—and discuss hallmark characteristics like loose coupling that are essential in cloud-native systems. Gain insights into how these principles are being applied beyond cloud networking.

Dive into the complexities of what it means to be "cloud native" by exploring the nuances of using cloud provider constructs versus third-party tools. We debate the pros and cons of these approaches, examining the trade-offs between optimal performance, simplicity, and the need for compliance and scalability. The conversation also highlights the flexibility and ambiguity in the CNCF definition, which has led to varied interpretations and marketing claims, making it clear that while cloud-native solutions offer operational simplicity, enterprises often juggle these with third-party integrations to meet their diverse and complex needs.

Lastly, we tackle the pressing issue of misleading marketing tactics in the tech industry. From SASE to SD-WAN, we expose how vendors often inflate minimal features to fit trendy labels, creating a confusing environment for decision-makers. We discuss the critical role of trust in long sales cycles and ponder whether stricter standards or certifications, akin to those from IEEE, could help mitigate these issues. The episode wraps up with a look at the challenges of integrating legacy systems with cloud-native services and the ambiguity surrounding buzzwords, emphasizing the need for clearer definitions to benefit customers rather than just cloud providers. Don't miss this episode if you're seeking a deeper understanding of cloud-native technologies and the current landscape.

Check out the Fortnightly Cloud Networking News
Visit our website and subscribe: https://www.cables2clouds.com/
Follow us on Twitter: https://twitter.com/cables2clouds
Follow us on YouTube: https://www.youtube.com/@cables2clouds/
Follow us on TikTok: https://www.tiktok.com/@cables2clouds
Merch Store: https://store.cables2clouds.com/
Join the Discord Study group: https://artofneteng.com/iaatj
Art of Network Engineering (AONE): https://artofnetworkengineering.com
Thank you to the folks at Sustain (https://sustainoss.org/) for providing the hosting account for CHAOSSCast!

CHAOSScast – Episode 91

In this episode of CHAOSScast, host Matt Germonprez is joined by Red Hat's Senior Data Scientist Cali Dolfi and Community Architect Josh Berkus to discuss their experiences in measuring and maintaining open source community health. They delve into their day-to-day roles, challenges, and key projects like Project Aspen, the importance of contextual metrics, and the impact of generative AI on their work. They also emphasize the importance of goal-oriented metrics and establishing repeatable processes in OSPOs. Press download to hear much more!

[00:00:40] Cali and Josh share their backgrounds.
[00:02:02] Cali talks about her work as a data scientist at Red Hat, focusing on community open source metrics, and mentions her recent projects, including Project Aspen, and her role in developing platforms for data visualization and metrics.
[00:04:34] Josh discusses his day-to-day responsibilities, which include stewarding Red Hat's involvement in cloud native projects and committee work with the Cloud Native Computing Foundation.
[00:06:17] The discussion shifts towards the health of collections of projects or ecosystems, and Cali and Josh share their thoughts on how they approach ecosystem health, particularly within the cloud native space. Josh focuses on Kubernetes and its connection to various projects.
[00:09:17] Matt questions whether Red Hat often plays a stabilizing role within these ecosystems, especially in times of crisis or instability.
[00:10:29] Cali discusses current hot topics in open source community health at Red Hat, focusing on SBOM (Software Bill of Materials) analysis and its implications for security and maintenance within the tech industry. They discuss the importance of understanding vulnerabilities within open source projects and the role of maintainers in mitigating these vulnerabilities.
[00:14:51] Matt asks about identifying vulnerabilities in upstream projects and notes the challenges of visibility due to numerous projects. Cali explains their approach of analyzing the entire codebase, using visualizations on the 8Knot dashboard to monitor active maintainers in different project areas.
[00:16:43] Josh discusses mainstream tooling focused on known vulnerabilities and emphasizes the need to predict future vulnerabilities.
[00:19:16] Matt inquires about handling the variability and contextual specificity of metrics across numerous projects. Cali discusses the importance of contextual understanding in interpreting data and metrics, emphasizing the need for community involvement to enrich the interpretation. Josh argues that improving data collection methods to incorporate contextual knowledge is crucial, aiming to shift some analytical responsibilities from humans to algorithms.
[00:24:19] A discussion starts on the role of generative AI in current tech, prompting Cali to reflect on the impact of AI hype cycles on resource allocation within the industry. Josh acknowledges that while some open source machine learning tools have benefited from increased resources due to the AI wave, the introduction of generative AI in community projects has often been problematic.
[00:30:03] The conversation shifts back to the challenge of AI-generated contributions to open source projects. Josh and Matt discuss the potential need for Red Hat's Open Source Program Office (OSPO) to adapt its analytics and policies to manage the influx of such contributions.
[00:31:35] We close with Cali offering advice to new OSPOs on setting up robust data analysis infrastructures from the start, and Josh reinforcing the need for goal-oriented metrics and processes, advising OSPOs to design operations that are sustainable and scalable.

Value Adds (Picks) of the week:
[00:34:52] Matt's pick is being a wildflower gardener.
[00:35:24] Josh's pick is being a vegetable gardener.
[00:36:20] Cali's pick is the Big Brother show being back on.

Panelist: Matt Germonprez
Guests: Josh Berkus, Cali Dolfi

Links:
CHAOSS (https://chaoss.community/)
CHAOSS Project X/Twitter (https://twitter.com/chaossproj?lang=en)
CHAOSScast Podcast (https://podcast.chaoss.community/)
podcast@chaoss.community (mailto:podcast@chaoss.community)
Matt Germonprez X/Twitter (https://twitter.com/germ)
Josh Berkus Website (https://berkus.org/)
Josh Berkus Mastodon (https://m6n.io/@fuzzychef)
Cali Dolfi LinkedIn (https://www.linkedin.com/in/calidolfi/)
Cali Dolfi - Red Hat Research Quarterly (https://research.redhat.com/blog/article-author/cali-dolfi/)
Red Hat (https://www.redhat.com/en)
Project Aspen - GitHub (https://github.com/oss-aspen)
US Government Proposes SBOM Rules for Contractors (https://www.infosecurity-magazine.com/news/us-government-proposes-sbom-rules/)

Special Guests: Cali Dolfi and Josh Berkus.
Taylor Dolezal from the Cloud Native Computing Foundation discusses his role as Head of Ecosystem, working closely with end users implementing CNCF projects. He shares his open source origin story, tracing back to high school programming experiences. We touch on community dynamics, experiences with project forks, and the evolving landscape of AI and its intersection with open source. We also discuss the importance of sustainability in open source communities and the critical role of vendor neutrality.

00:00 Introduction
01:45 Open Source Origin Story
11:04 Project Forks and Community Dynamics
17:20 HashiCorp and OpenTofu: A Fork in the Road
19:46 Navigating the AI Frontier
23:28 The Challenges of AI Standardization
26:17 The Importance of Vendor Neutrality
28:02 Balancing Priorities in Open Source
29:51 Sustaining Open Source Communities

Guest: Taylor Dolezal navigates the cloud native universe with a knack for puns and a keen eye for psychology. Living in the heart of LA, he blends tech innovation with mental insights, one punny cloud at a time. Avid reader, thinker, and cloud whisperer.
In this episode of BragTalks, host Heather VanCura interviews Patrick Chanezon about participating in standards bodies and open source projects and communities. Patrick shares his experiences and the impact this participation has had on his career. Listen to hear how he approached getting involved, and even some of the mistakes made along the way. Season 7 is about sharing the experiences of technical professionals and building on the interviews from the recently published book "Developer Career Masterplan". This episode is a story that links to Chapters 11 and 14 of the book. Hope you enjoy our new look and Season 7 of BragTalks!

Biography: Patrick Chanezon manages the Cloud Developer Advocacy team in Developer Relations at Microsoft, helping developers achieve more with AI on the Microsoft Cloud. Previously, at Docker Inc., he helped build Docker, the world's leading software container platform, for developers and sysadmins. He helped establish open source and standards organizations such as the Open Container Initiative, the Cloud Native Computing Foundation, and the Green Software Foundation. A software developer and storyteller, he spent 8 years building platforms at Netscape and Sun, then 19 years evangelizing platforms at Google, VMware, and Microsoft. His main professional interest is in building and kickstarting the network effect for these wondrous two-sided markets called platforms. He has worked on platforms for AI, Cloud, Distributed Systems, Web, Social, Commerce, Ads, and Portals.
Time to explore the next frontier in cloud-native evolution: WebAssembly (WASM). Moving beyond containers and Kubernetes, WASM carries the promise of revolutionizing the cloud landscape with unparalleled performance, portability, and security. Can it actually deliver on this promise? We discuss this and more in this episode. We delve into how WASM is transforming the way we build and run cloud-native applications, enabling a more efficient, scalable, and flexible infrastructure. We also get the latest insights from the Cloud Native Computing Foundation's work in the domain, the wasmCloud open source project and the tool landscape, along with the work of the WASM working group and standardization efforts with the Bytecode Alliance.

This episode's guest is Taylor Thomas, Engineering Director working on WebAssembly platforms at Cosmonic. He serves as a co-chair for the CNCF's WASM working group and as a CNCF Ambassador. He actively participates in the open source community and is one of the creators of Krustlet and Bindle. His work at Intel, Nike, and Microsoft spanned various containers and Kubernetes platforms as well as WebAssembly platforms.

The episode was live-streamed on 18 July 2024 and the video is available at https://www.youtube.com/watch?v=t2xIoVNwtKM

OpenObservability Talks episodes are released monthly, on the last Thursday of each month, and are available for listening on your favorite podcast app and on YouTube. We live-stream the episodes on Twitch and YouTube Live - tune in to see us live, and chime in with your comments and questions on the live chat.
https://www.youtube.com/@openobservabilitytalks
https://www.twitch.tv/openobservability

Show Notes:
00:00 - Show, episode and guest intro
04:50 - Celebrating a decade of Kubernetes and the power of open source communities
07:18 - What is WebAssembly (WASM)
11:29 - WASM support among programming languages
15:24 - IDE, debuggers and developer experience using WASM
18:48 - WASM support for browser and frontend (DOM manipulation etc.)
21:13 - Standardization of WASM in operating systems
23:40 - WASM component model
29:43 - WASM working groups in the CNCF and Bytecode Alliance
31:36 - WASM ecosystem
36:57 - Which workloads WASM fits best
40:01 - What's wasmCloud
44:18 - wasmCloud benefits for Platform Engineering, IoT and Edge Computing
47:22 - WASM compatibility with Kubernetes
49:54 - Observability in wasmCloud, OpenTelemetry support, and WASI-Observe
52:23 - Who's behind wasmCloud
56:21 - wasmCloud roadmap and community forum
59:07 - CNCF 2024 mid-year survey of top open source projects velocity
1:00:05 - OpenSearch project has just turned 3

Resources:
https://webassembly.org/
W3C WebAssembly (WASM) standard: https://www.w3.org/TR/wasm-core-2/
W3C WebAssembly community group: https://www.w3.org/groups/wg/wasm/
Bytecode Alliance: https://bytecodealliance.org/
CNCF's WASM working group: https://tag-runtime.cncf.io/wgs/wasm/
WebAssembly System Interface (WASI) specification: https://wasi.dev/
WASI-Observe observability API specification: https://github.com/WebAssembly/wasi-observe
wasmCloud: https://wasmcloud.com/
wasmCloud 1.0: https://wasmcloud.com/blog/wasmcloud-1-brings-components-to-enterprise
wasmCloud roadmap: https://wasmcloud.com/docs/roadmap/q2

Socials:
Twitter: https://twitter.com/OpenObserv
YouTube: https://www.youtube.com/@openobservabilitytalks

Dotan Horovits
Twitter: @horovits
LinkedIn: in/horovits
Mastodon: @horovits@fosstodon

Taylor Thomas
LinkedIn: https://www.linkedin.com/in/oftaylor/
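Tying into the episode's discussion of WASM support among programming languages, here is a minimal sketch of compiling an ordinary Go program to WebAssembly via the WASI target that Go supports natively since version 1.21; the file names and the choice of wasmtime as the runtime are illustrative, not from the episode.

```go
// hello.go - a plain Go program that can be compiled to WebAssembly.
//
// Build for the WASI preview 1 target (built into Go since 1.21):
//   GOOS=wasip1 GOARCH=wasm go build -o hello.wasm hello.go
//
// The resulting module can then be run by any WASI runtime, for example:
//   wasmtime hello.wasm
package main

import "fmt"

func main() {
	fmt.Println("hello from WebAssembly")
}
```

The same source compiles unchanged for a native target, which is part of WASM's portability pitch: the module, not the toolchain, is what moves between environments.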
In this special episode, Enrico, Volkmar, and Moritz celebrate the 10th anniversary of Kubernetes and look back on a decade of cloud-native innovation and transformation. In June 2014, the first commit in the Kubernetes repository marked the start of the journey of the revolutionary container orchestration platform that now plays a fundamental role in modern infrastructure. We also reflect on our own first steps and experiences with Kubernetes and revisit the milestones of its development. An essential enabler of Kubernetes' success and evolution is, of course, the community around the Cloud Native Computing Foundation and beyond. Alongside Kubernetes, numerous projects in the cloud-native ecosystem are driven forward by this community, providing many new solutions for Kubernetes integrations. We have each picked our top five projects and present them to you in this episode. What were your first experiences with Kubernetes? How do you view the development of the past ten years? Let us know and write to us at podcast@sva.de.
Over a decade ago, Marc Andreessen said that software was eating the world, and today we could easily say that open source is eating the software world. It's widely adopted and deployed and has become a key element of the IT landscape. The last year has seen some players shifting their approaches to the market, but have these moves affected the fundamental values of the open source community? Dr. Thomas Di Giacomo, Chief Technical and Product Officer at SUSE and Cloud Native Computing Foundation board member, joins host Eric Hanselman at the SUSECON conference to talk about some of the dynamics of the open source market. Enterprises are leveraging open source to reduce risk in experimentation and tapping into the innovations that are coming out of the open source community. It's not always a simple path, but it's one that more are finding attractive, particularly in light of upheavals in the proprietary markets.

Guest: Dr. Thomas Di Giacomo
Credits:
Host/Author: Eric Hanselman
Producer/Editor: Kyle Cangialosi, Donovan Menard
Published with assistance from: Sophie Carr
Most message systems have an opinion on the right way to do inter-systems communication. Whether it's actors, queues, message logs or just plain ol' request response, nearly every tool has decided on The Right Way to do messaging, and it optimises heavily for that specific approach. But NATS is absolutely running against that trend. In this week's episode, Jeremy Saenz joins us to talk about NATS, the Cloud Native Computing Foundation's configurable message-passing and data-transfer system. The promise is a tool that can happily behave like a queue for one channel, a log for another and a request/response protocol for the third, all with a few client flags. But how does that work? What's it doing under the hood, what features does it offer, and what do we lose in return for that flexibility? Jeremy has all the answers as we ask, what is NATS really?

NATS on GitHub: https://github.com/nats-io/nats-server
NATS Homepage: https://nats.io/
Getting Started with NATS: https://youtu.be/hjXIUPZ7ArM
Developer Voices Episode on Benthos: https://youtu.be/labzg-YfYKw
CNCF: https://www.cncf.io/
The Ballerina Language: https://ballerina.io/
Kris on Mastodon: http://mastodon.social/@krisajenkins
Kris on LinkedIn: https://www.linkedin.com/in/krisjenkins/
Kris on Twitter: https://twitter.com/krisajenkins
Support Developer Voices via Patreon: https://patreon.com/DeveloperVoices
Support Developer Voices via YouTube: https://www.youtube.com/@developervoices/join
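To make the "one tool, several messaging styles" idea concrete, here is a minimal sketch using the official NATS Go client (github.com/nats-io/nats.go). The subject names and the local server URL are illustrative placeholders; the persistent, log-like mode discussed in the episode is provided separately by NATS JetStream, which the same client also exposes.

```go
package main

import (
	"fmt"
	"time"

	"github.com/nats-io/nats.go"
)

func main() {
	// One connection, three messaging styles.
	nc, err := nats.Connect(nats.DefaultURL) // nats://127.0.0.1:4222
	if err != nil {
		panic(err)
	}
	defer nc.Drain()

	// 1. Fan-out pub/sub: every subscriber on the subject gets a copy.
	nc.Subscribe("events", func(m *nats.Msg) {
		fmt.Printf("event: %s\n", m.Data)
	})

	// 2. Queue group: messages are load-balanced across members of the
	//    "workers" group, giving queue-like work distribution.
	nc.QueueSubscribe("jobs", "workers", func(m *nats.Msg) {
		fmt.Printf("job: %s\n", m.Data)
	})

	// 3. Request/reply: a responder answers, and Request blocks for it.
	nc.Subscribe("greet", func(m *nats.Msg) {
		m.Respond([]byte("hello!"))
	})

	nc.Publish("events", []byte("something happened"))
	nc.Publish("jobs", []byte("resize-image-42"))

	reply, err := nc.Request("greet", []byte("hi"), time.Second)
	if err != nil {
		panic(err)
	}
	fmt.Printf("reply: %s\n", reply.Data)

	time.Sleep(100 * time.Millisecond) // let async handlers fire (demo only)
}
```

The point of the sketch is that the delivery semantics are chosen per subscription, not baked into the broker, which is the design choice the episode digs into.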
Curious about how OCI Container Engine for Kubernetes (OKE) can transform the way your development team builds, deploys, and manages cloud-native applications? Listen to hosts Lois Houston and Nikita Abraham explore OKE's key features and benefits with senior OCI instructor Mahendra Mehra. Mahendra breaks down complex concepts into digestible bits, making it easy for you to understand the magic behind OKE.

OCI Container Engine for Kubernetes Specialist: https://mylearn.oracle.com/ou/course/oci-container-engine-for-kubernetes-specialist/134971/210836
Oracle University Learning Community: https://education.oracle.com/ou-community
LinkedIn: https://www.linkedin.com/showcase/oracle-university/
X (formerly Twitter): https://twitter.com/Oracle_Edu

Special thanks to Arijit Ghosh, David Wright, Radhika Banka, and the OU Studio Team for helping us create this episode.

--------------------------------------------------------

Episode Transcript:

00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started!

00:25 Nikita: Hello and welcome to the Oracle University Podcast. I'm Nikita Abraham, Principal Technical Editor with Oracle University, and with me is Lois Houston, Director of Innovation Programs.

Lois: Hi there! If you've been listening to us these last few weeks, you'll know we've been discussing containerization, the Oracle Cloud Infrastructure Registry, and the basics of Kubernetes. Today, we'll dive into the world of OCI Container Engine for Kubernetes, also referred to as OKE.

Nikita: We're joined by Mahendra Mehra, a senior OCI instructor with Oracle University, who will take us through the key features and benefits of OKE and also talk about working with managed nodes. Hi Mahendra! Thanks for joining us today.

01:09 Lois: So, Mahendra, what is OKE exactly?

Mahendra: Oracle Cloud Infrastructure Container Engine for Kubernetes is a fully managed, scalable, and highly available service that empowers you to effortlessly deploy your containerized applications to the cloud. But that's just the beginning. OKE can transform the way you and your development team build, deploy, and manage cloud native applications.

01:36 Nikita: What would you say are some of its most defining features?

Mahendra: One of the defining features of OKE is the flexibility it offers. You can specify whether you want to run your applications on virtual nodes or opt for managed nodes. Regardless of your choice, Container Engine for Kubernetes will efficiently provision them within your existing OCI tenancy on Oracle Cloud Infrastructure. Creating an OKE cluster is a breeze, and you have a couple of fantastic tools at your disposal: the console and the REST API. These make it super easy to get started. OKE relies on Kubernetes, which is an open-source system that simplifies the deployment, scaling, and management of containerized applications across clusters of hosts. Kubernetes is an incredible system that groups containers into logical units known as pods, and these pods make managing and discovering your applications very simple. Not to mention, Container Engine for Kubernetes uses Kubernetes versions that are certified as conformant by the Cloud Native Computing Foundation, also abbreviated as CNCF. And here's the icing on the cake: Container Engine for Kubernetes is ISO-compliant, complying with the ISO/IEC standards 27001, 27017, and 27018.
That's your guarantee of a secure and reliable platform.

03:08 Lois: That's great. But how do you access all this power?

Mahendra: You can define and create your Kubernetes cluster using the intuitive console and the robust REST API. Once your clusters are up and running, you can manage them using the Kubernetes command line, also known as kubectl, the user-friendly Kubernetes dashboard, and the powerful Kubernetes API.

03:32 Nikita: I love the idea of an intuitive console and being able to manage everything from a centralized place.

Lois: Yeah, that's fantastic! Mahendra, can you talk us through the magic that happens behind the scenes? What's Oracle's role in all this?

Mahendra: All the master nodes or control plane nodes are managed by Oracle. This includes components like etcd, the API server, and the controller manager, among others. To ensure reliability, we make sure multiple copies of these master components are distributed across different availability domains. And we don't stop there. We also manage the Kubernetes dashboard and even handle the self-healing mechanism of both the cluster and the worker nodes. All of these are meticulously created and managed within your Oracle tenancy.

04:19 Lois: And what happens at the user's end? What is their responsibility?

Mahendra: At your end, you have the power to manage your worker nodes. Using different compute shapes, you can create and control them in your own user tenancy. So, as you can see, it's a perfect blend of Oracle's expertise and your control.

04:38 Nikita: So, in your opinion, why should users consider OKE their go-to solution for all things Kubernetes?

Mahendra: Imagine a world where building and maintaining Kubernetes environments, be it master nodes or worker nodes, is no longer complex, costly, or even time-consuming. OKE is here to make your life easier by seamlessly integrating Kubernetes with various container lifecycle management products, which includes container registries, CI/CD frameworks, networking solutions, storage options, and top-notch security features. And speaking of security, OKE gives you the tools you need to manage and control team access to production clusters, ensuring granular access to Kubernetes clusters in a straightforward process. It empowers developers to deploy containers quickly, provides DevOps teams with visibility and control for seamless Kubernetes management, and brings together Kubernetes container orchestration with Oracle's advanced cloud infrastructure. This results in robust control, top-tier security, IAM, and consistent performance.

05:50 Nikita: OK…a lot of benefits! Mahendra, I know there have been ongoing enhancements to the OKE service. So, when creating a new cluster with Container Engine for Kubernetes, what is the cluster type we should specify?

Mahendra: The first type is the basic clusters. Basic clusters support all the core functionality provided by Kubernetes and Container Engine for Kubernetes. Basic clusters come with a service-level objective, but not a financially backed service level agreement. This means that Oracle guarantees a certain level of availability for the basic cluster, but there is no monetary compensation if that level is not met. On the other hand, we have the enhanced clusters. Enhanced clusters support all available features, including features not supported by basic clusters.

06:38 Lois: OK. So, can you tell us more about the features supported by enhanced clusters?
Mahendra: As we move towards a more digitized world, the demand for infrastructure continues to rise. However, with virtual nodes, managing the infrastructure of your cluster becomes much simpler. The burden of manually scaling, upgrading, or troubleshooting worker nodes is removed, giving you more time to focus on your applications rather than the underlying infrastructure. Virtual nodes provide a great solution for managing large clusters with a high number of nodes that require frequent updates or scaling. With this feature, you can easily simplify the management of your cluster and focus on what really matters, that is, your applications. Managing cluster add-ons can be a daunting task. But with enhanced clusters, you can now deploy and configure them in a more granular way. This means that you can manage both essential add-ons like CoreDNS and kube-proxy as well as a growing portfolio of optional add-ons like the Kubernetes Dashboard. With enhanced clusters, you have complete control over the add-ons you install or disable, the ability to select specific add-on versions, and the option to opt in or out of automatic updates by Oracle. You can also manage add-on-specific customizations to tailor your cluster to meet the needs of your application.

08:05 Lois: Do users need to worry about deploying add-ons themselves?

Mahendra: Oracle manages the lifecycle of add-ons so that you don't have to worry about deploying them yourself. This level of control over add-ons gives you the flexibility to customize your cluster to meet the unique needs of your applications, making managing your cluster a breeze.

08:25 Lois: What about scaling?

Mahendra: Scaling your clusters to meet the demands of your workload can be a challenging task. However, with enhanced clusters, you can now provision more worker nodes in a single cluster, allowing you to deploy larger workloads on the same cluster, which can lead to better resource utilization and lower operational overhead. Having fewer, larger environments to secure, monitor, upgrade, and manage is generally more efficient and can help you save on cost. Remember, there are limits to the number of worker nodes supported on an enhanced cluster, so you should review the Container Engine for Kubernetes limits documentation and consider the additional considerations when defining enhanced clusters with a large number of managed nodes.

09:09 Nikita: Ensuring the security of my cluster would be of utmost importance to me, right? How would I do that with enhanced clusters?

Mahendra: With enhanced clusters, you can now strengthen cluster security through the use of workload identity. Workload identity enables you to define OCI IAM policies that authorize specific pods to make OCI API calls and access OCI resources. By scoping the policies to the Kubernetes service account associated with application pods, you can allow the applications running inside those pods to directly access the API based on the permissions provided by the policies.

09:48 Nikita: Mahendra, what type of uptime and server availability benefits do enhanced clusters provide?

Mahendra: You can now rely on a financially backed service level agreement tied to Kubernetes API server uptime and availability. This means that you can expect a certain level of uptime and availability for your Kubernetes API server, and if it degrades below the stated SLA, you'll receive compensation. This provides an extra level of assurance and helps ensure that your cluster is highly available and performant.
10:20 Lois: Mahendra, do you have any tips for us to remember when creating basic and enhanced clusters?

Mahendra: When using the console to create a cluster, a new cluster is created as an enhanced cluster by default unless you explicitly choose to create a basic cluster. If you don't select any enhanced features during cluster creation, you have the option to create the new cluster as a basic cluster. When using the CLI or API to create a cluster, you can specify whether to create a basic cluster or an enhanced cluster. If you don't explicitly specify the type of cluster to create, a new cluster is created as a basic cluster by default. Creating a new cluster as an enhanced cluster enables you to easily add enhanced features later, even if you didn't select any enhanced features initially. If you do choose to create a new cluster as a basic cluster, you can still choose to upgrade the basic cluster to an enhanced cluster later on. However, you cannot downgrade an enhanced cluster to a basic cluster. These points are really important while you consider the selection of a basic cluster or an enhanced cluster for your usage.

11:34 Do you want to stay ahead of the curve in the ever-evolving AI landscape? Look no further than our brand-new OCI Generative AI Professional course and certification. For a limited time only, we're offering both the course and certification for free! So, don't miss out on this exclusive opportunity to get certified on Generative AI at no cost. Act fast because this offer is valid only until July 31, 2024. Visit https://education.oracle.com/genai to get started. That's https://education.oracle.com/genai.

12:13 Nikita: Welcome back! I want to move on to serverless Kubernetes with virtual nodes. But I think before we do that, we first need to have a basic understanding of what managed nodes are.

Mahendra: Managed nodes run on compute instances within your tenancy and are at least partly managed by you. In the context of Kubernetes, a node is a compute host that can be either a virtual machine or a bare metal host. As you are responsible for managing managed nodes, you have the flexibility to configure them to meet your specific requirements. You are responsible for upgrading Kubernetes on managed nodes and for managing cluster capacity. Nodes are responsible for running a collection of pods or containers, and they comprise two system components: the kubelet, which is the host brain, and the container runtime, such as CRI-O or containerd.

13:07 Nikita: OK… so what are virtual nodes, then?

Mahendra: Virtual nodes are fully managed and highly available nodes that look and act like real nodes to Kubernetes. They are built using the open source CNCF Virtual Kubelet project, which provides the translation layer between OCI and Kubernetes.

13:25 Lois: So, what makes Oracle's managed virtual Kubernetes product different?

Mahendra: OCI is the first major cloud provider to offer a fully managed virtual kubelet product that provides a serverless Kubernetes experience through virtual nodes. Virtual nodes are configured by customers and are located within a single availability and fault domain within OCI. Virtual nodes have two main components: pod management and container instance management. Virtual nodes delegate all the responsibility of managing the lifecycle of pods to the virtual kubelet, while on a managed node, the kubelet is responsible for managing all the lifecycle state.
The key distinction of virtual nodes is that they support up to 1,000 pods per virtual node, with the expectation of supporting more in the future.

14:15 Nikita: What are the other benefits of virtual nodes?

Mahendra: Virtual nodes offer a fully managed experience where customers don't have to worry about managing the underlying infrastructure of their containerized applications. Virtual nodes simplify scaling patterns for customers. Customers can scale their containerized application up or down quickly without worrying about the underlying infrastructure, and they can focus solely on their applications. With virtual nodes, customers only pay for the resources that their containerized application uses. This allows customers to optimize their costs and ensures that they are not paying for any unused resources. Virtual nodes can support over 10 times the number of pods that a normal node can. This means that customers can run more containerized applications on virtual nodes, which reduces operational burden and makes it easier to scale applications. Customers can leverage container instances in this serverless offering from OCI to take advantage of many OCI functionalities natively. These functionalities include strong isolation and ultimate elasticity with respect to compute capacity.

15:26 Lois: When creating a cluster using Container Engine for Kubernetes, we have the flexibility to customize the worker nodes within the cluster, right? Could you tell us more about this customization?

Mahendra: This customization includes specifying two key elements. Firstly, you can select the operating system image to be used for worker nodes. This image serves as a template for the worker node's virtual hard drive and determines the operating system and other software installed. Secondly, you can choose the shape for your worker nodes. The shape defines the number of CPUs and the amount of memory allocated to each instance, ensuring it meets your specific requirements. This customization empowers you to tailor your OKE cluster to your exact needs. It is important to note that you can define and create OKE clusters using both the console and the REST API. This level of control is especially valuable for your development team when building, deploying, and managing cloud native applications. You have the option to specify whether applications should run on virtual nodes or managed nodes, and Container Engine for Kubernetes efficiently provisions them on Oracle Cloud Infrastructure within your existing OCI tenancy. This flexibility ensures that you can adapt your OKE cluster to suit the specific requirements of your projects and workloads.

16:56 Lois: Thank you so much, Mahendra, for giving us your time today. For more on the topics we discussed, visit mylearn.oracle.com and look for the OCI Container Engine for Kubernetes Specialist course. Join us next week as we dive deeper into working with OKE virtual nodes. Until then, this is Lois Houston…

Nikita: And Nikita Abraham, signing off!

17:18 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
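As a companion to the episode's point that OKE clusters are managed with kubectl and the Kubernetes API: here is a minimal sketch using the standard Kubernetes Go client (client-go) to list a cluster's worker nodes. It assumes a kubeconfig already generated for your cluster and is generic client-go usage that works against any conformant cluster, OKE included; it is not an OKE-specific API.

```go
package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load credentials from the default kubeconfig location
	// (for OKE, typically generated with the OCI CLI).
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}

	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// List the worker nodes: the same data "kubectl get nodes" shows.
	nodes, err := clientset.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		fmt.Printf("%s\t%s\n", n.Name, n.Status.NodeInfo.KubeletVersion)
	}
}
```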
On this episode of TWiGS, host Anne Currie is joined by Navveen Balani of Accenture, a fellow GSF member. The conversation navigates the landscapes of, and intersections between, GreenOps, DevOps, and FinOps, as well as the vital role of Infrastructure as Code in marrying financial and ecological efficiencies in cloud operations. Lastly, they tackle the intersection of cybersecurity and AI development, emphasizing the need for green software principles to fortify AI systems while minimizing energy use.
In this conversation, Johan van Amersfoort interviews William Rizzo about the Cloud Native Computing Foundation (CNCF) and its impact on cloud computing and modern infrastructure. They discuss the role of the CNCF in making cloud native technology ubiquitous and the support it provides to cloud native projects. They also explore the advantages of using CNCF projects for IT admins and platform engineers, including the ability to build platforms with integrated and efficient tools. They touch on future trends and developments within the CNCF, such as platform engineering and AI/ML workloads, and how IT professionals can prepare for these changes. William also shares his personal project on platform engineering and the importance of calculating the return on investment for platform engineering practices.

Takeaways:
The CNCF aims to make cloud native technology ubiquitous by providing support and resources to cloud native projects.
Using CNCF projects allows IT admins and platform engineers to build integrated and efficient platforms.
Platform engineering and AI/ML workloads are future trends within the CNCF.
Calculating the return on investment for platform engineering practices is important.
William Rizzo is working on a project to simplify the calculation of return on investment for platform engineering practices.

Chapters:
00:00 - Introduction and William Rizzo's IT Career
03:09 - Overview of the Cloud Native Computing Foundation
06:39 - Other Core Projects and Technologies Hosted by the CNCF
10:17 - Impact of CNCF Projects and Principles on Cloud Computing
13:31 - Advantages of Using CNCF Projects for IT Admins and Platform Engineers
14:45 - Interoperability and Standardization across Cloud Platforms
23:45 - Resources for Learning about Cloud Native Technologies
28:17 - Future Trends and Developments within the CNCF
38:49 - William Rizzo's Personal Project on Platform Engineering
46:04 - Calculating the Return on Investment for Platform Engineering Practices
49:28 - Closing Remarks

More about William: https://www.linkedin.com/in/william-rizzo/
The link to the CNCF website: https://www.cncf.io/
Follow us on X for updates and news about upcoming episodes: https://x.com/UnexploredPod
Last but not least, make sure to hit that subscribe button and share the episode with your friends and colleagues!

Disclaimer: The thoughts and opinions shared in this podcast are our own/guest(s), and not necessarily those of Broadcom or VMware by Broadcom.
When Weaveworks, known for pioneering "GitOps," shut down, concerns arose about the future of Flux, a critical open-source project. However, in this episode of The New Makers podcast, recorded at Open Source Summit in Paris, Puja Abbassi, Giant Swarm's VP of Product, reassured Alex Williams, founder and publisher of The New Stack, that Flux's maintenance is secure. Giant companies like Microsoft Azure and GitLab have pledged support. Giant Swarm, an avid Flux user, also contributes to its development, ensuring its vitality alongside related projects like infrastructure code plugins and UI improvements. Abbassi highlighted the importance of considering a project's sustainability and integration capabilities when choosing open-source tools. He noted Argo CD's advantage in UI, emphasizing that projects like Flux must evolve to meet user expectations and avoid being overshadowed. This underscores the crucial role of community support, diversity, and compatibility within the Cloud Native Computing Foundation's ecosystem for long-term tool adoption.

Learn more from The New Stack about Flux:
End of an Era: Weaveworks Closes Shop Amid Cloud Native Turbulence
Why Flux Isn't Dying after Weaveworks

Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
The Kubernetes community primarily focuses on improving the development and operations experience for applications and infrastructure, emphasizing DevOps and developer-centric approaches. In contrast, the data science community historically moved at a slower pace. However, with the emergence of the AI engineer persona, the pace of advancement in data science has accelerated significantly. Alex Williams, founder and publisher of The New Stack, co-hosted a discussion with Sanjeev Mohan, an independent analyst, which highlighted the challenges faced by data-related tasks on Kubernetes due to the stateful nature of data. Unlike stateless applications, restarting a database node after a failure may lead to inconsistent states and data loss. This discrepancy in pace and needs between developers and data scientists led Kubernetes and the Cloud Native Computing Foundation to initially overlook data science. Nevertheless, Mohan noted that the pace of data engineers has increased as they explore new AI applications and workloads. Kubernetes now plays a crucial role in supporting these advancements by helping manage resources efficiently, especially considering the high cost of training large language models (LLMs) and using GPUs for AI workloads. Mohan also discussed the evolving landscape of AI frameworks and the importance of aligning business use cases with AI strategies.

Learn more from The New Stack about data development and DevOps:
AI Will Drive Streaming Data Use — But Not Yet, Report Says: https://thenewstack.io/ai-will-drive-streaming-data-adoption-says-redpanda-survey/
The Paradigm Shift from Model-Centric to Data-Centric AI: https://thenewstack.io/the-paradigm-shift-from-model-centric-to-data-centric-ai/
AI Development Needs to Focus More on Data, Less on Models: https://thenewstack.io/ai-development-needs-to-focus-more-on-data-less-on-models/

Join our community of newsletter subscribers to stay on top of the news and at the top of your game: https://thenewstack.io/newsletter/
The spring edition of the Cloud Native Computing Foundation's KubeCon/CloudNativeCon is about to get underway, and Jean Atelsek and William Fellows return to discuss what will be playing out in the sessions and on the exhibit floor with host Eric Hanselman. The next stage of licensing pullback from a number of open source providers is on display, as Open Core revenue models are realigned. FinOps efforts try to tame operational costs, and the evolution of platform engineering approaches continues.
Madhav Jivrajani is an engineer at VMware, a tech lead in SIG Contributor Experience, and a GitHub Admin for the Kubernetes project. He also contributes to the storage layer of Kubernetes, focusing on reliability and scalability. In this episode we talked with Madhav about a recent post on social media about a very interesting stale-reads issue in Kubernetes, and what the community is doing about it.

Do you have something cool to share? Some questions? Let us know:
- web: kubernetespodcast.com
- mail: kubernetespodcast@google.com
- twitter: @kubernetespod

Chatter of the week:
Mofi Rahman co-hosts this episode with Kaslin (Twitter/X, LinkedIn)
Kubernetes Podcast episode 211

News of the week:
Google announced a new partnership with Hugging Face
Red Hat self-managed offering of Ansible Automation Platform on Microsoft Azure
The schedule for KubeCon CloudNativeCon EU 2024 is out
CNCF Ambassador applications are open
The CNCF Hackathon at KubeCon CloudNativeCon EU 2024 CFP is open now
The annual Cloud Native Computing Foundation report for 2023
CNCF's certification expiration period will change to 24 months starting April 1st, 2024
Sysdig 2024 Cloud Native Security and Usage Report

Links from the interview:
Madhav Jivrajani (Twitter/X, LinkedIn)
Priyanka Saggu Interview
Stale reads Twitter/X thread by Madhav
"Kubernetes is vulnerable to stale reads, violating critical pod safety guarantees" - GitHub Issue tracking the stale reads CAP Theorem issue
CMU Wasm Research Center
"A CAP tradeoff in the wild" blog by Lindsey Kuper
"Reasoning about modern datacenter infrastructures using partial histories" research paper
The Kubernetes Storage Layer: Peeling the Onion Minus the Tears - Madhav Jivrajani, VMware
KEP-3157: allow informers for getting a stream of data instead of chunking
KEP 2340: Consistent Reads from Cache
Journey Through Time: Understanding Etcd Revisions and Resource Versions in Kubernetes - Priyanka Saggu, KubeCon NA 2023
Kubernetes API Resource Versions documentation
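Background for the stale-reads discussion: Kubernetes clients commonly watch cluster state through client-go informers, whose local caches are fed by a watch stream and can briefly trail etcd, which is where stale reads enter the picture. Below is a minimal sketch of a pod informer; the 30-second resync period is an arbitrary illustrative choice, and this is generic client-go usage rather than anything specific to the episode's fixes.

```go
package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(config)

	// Informers maintain a local, eventually consistent cache of cluster
	// state. Reads served from this cache can lag the authoritative state
	// in etcd, which is the crux of the stale-reads conversation.
	factory := informers.NewSharedInformerFactory(clientset, 30*time.Second)
	podInformer := factory.Core().V1().Pods().Informer()

	podInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			pod := obj.(*corev1.Pod)
			fmt.Printf("pod added: %s/%s\n", pod.Namespace, pod.Name)
		},
	})

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)
	factory.WaitForCacheSync(stop)
	select {} // keep watching
}
```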
Whitney Lee and Viktor Farcic discuss their unique approach to educating others about the Cloud Native Computing Foundation's (CNCF) landscape through their interactive presentations and YouTube project 'You Choose.' The pair explain how they incorporate live audience voting to determine the 'chosen' technologies implemented in their ongoing demos. Their fun approach helps newcomers in the field make informed decisions about which tools to use, and to understand how these various tools can integrate with each other. They talk about their previous talks and their excitement for possible future events where they'll continue their interactive sessions. 00:00 Introduction and Meeting the Guests 00:33 Discussing the Concept of Rejekts Conference 01:45 The Popularity and Impact of Rejekts 02:46 The Experience of Attending KubeCon 03:59 Getting to Know the Guests Outside of KubeCon 06:38 The Idea Behind a 'Choose Your Own Adventure' 09:36 The Origin and Format of the 'You Choose' Streaming Show 13:59 The Excitement of Live Voting 15:50 The Thrill of Live Demos 17:32 The Future: Security Talk 20:13 The Overwhelming Cloud Native Landscape 21:29 The Upcoming YouTube Series 23:12 The Aftermath of KubeCon Resources: Cloud Native Rejekts You Choose series Guests: Whitney Lee is a lovable goofball who enjoys understanding and using tools in the cloud native landscape. Creative and driven, Whitney recently pivoted from an art-related career to one in tech. She is active in the open source community, especially around CNCF projects focused on developer productivity. You can catch her lightboard streaming show ⚡️ Enlightning on Tanzu.TV. And not only does she rock at tech - she has literally toured with the band Mutual Benefit on keyboards and vocals. Viktor Farcic is lead rapscallion at Upbound, a member of the Google Developer Experts, CDF Ambassadors, and GitHub Stars groups, and a published author. He is a host of the YouTube channel DevOps Toolkit and a co-host of DevOps Paradox.
Join TWiGS host Chris Adams in talking to Kristina Devochko, tech lead of the Environmental Sustainability TAG at the Cloud Native Computing Foundation. Kristina shares her journey from economics to tech sustainability, and eventually joining this Technical Advisory Group. Further, they discuss the mission and projects of the group, as well as how anyone interested and willing is able to contribute. She elaborates on her experience of diving into this new field with no prior knowledge and acts as a reminder that no matter how scary it seems, you can do it too.
Emily Fox joins us to discuss her role as Security Lead in Emerging Technologies at Red Hat and her involvement in the open source community as Chair of the Cloud Native Computing Foundation's Technical Oversight Committee. She discusses her team's research on refining Sigstore and remote attestation, and her career journey from Creative Director at an entertainment company to Developer Security Lead at the National Security Agency. The conversation further touches on the need for better diversity and accessibility, and the imperative of a supportive community within the open source ecosystem. Lastly, she shares her perspectives on developer experience, its challenges, and the need for empathy and kindness as we navigate post-pandemic life. 00:00 Introduction and Guest Background 00:24 Role and Responsibilities at Red Hat 01:46 Involvement in Open Source and Cloud Native Computing Foundation 03:07 Journey from Creative Director to Tech Ecosystem 06:09 Challenges in Open Source Project Security 08:03 Improving Security Practices in Software Development 09:22 Expanding Security Expertise in Developers 11:23 Security in AI and Machine Learning 15:24 Importance of Diversity and Inclusion in Tech 18:40 Improving Developer Experience in Open Source 21:00 Closing Thoughts and Parting Words Guest: Emily Fox is a DevOps enthusiast, security unicorn, and advocate for Women in Technology. She promotes the cross-pollination of development and security practices. She has worked in security for over 13 years to drive a cultural change where security is unobtrusive, natural, and accessible to everyone. Her technical interests include containerization, least privilege, automation, and promoting women in technology. She holds a BS in Information Systems and an MS in Cybersecurity. Serving as chair of the Cloud Native Computing Foundation's (CNCF) Technical Oversight Committee (TOC) and co-chair for KubeCon+CloudNativeCon China 2021, Europe 2022, North America 2022, Europe 2023, and CloudNativeSecurityCon 2023, she is involved in a variety of open source communities and activities.
Platform engineering, internal developer platforms, and the product mindset: 2023 has been called "the year of efficiency." Many companies are taking a close look at how the work of their own software development teams can be made more efficient. Infrastructure, cloud, build pipelines, deployment, and the like are often at the center of the question "What can we optimize so that we move faster?" It usually doesn't take long before the buzzwords "internal developer platforms," "developer experience," and "platform engineering" come up. But what is platform engineering actually about? What is an internal developer platform? That is exactly what we discuss with our guest, Puja Abbassi. We clarify what all of this is, which problems it is actually meant to solve, what a successful platform looks like, what the classic pitfalls are, at what point the whole effort starts to pay off, and much more. Bonus: What is FinOps? Quick feedback on the episode:
KubeVirt, a relatively new capability within Kubernetes, signifies a shift in the virtualization landscape, allowing operations teams to run KVM virtual machines nested in containers behind the Kubernetes API. This integration means that the Kubernetes API now encompasses the concept of virtual machines, enabling VM-based workloads to operate seamlessly within a cluster behind the API. This development addresses the challenge of transitioning traditional virtualized environments into cloud-native settings, where certain applications may resist containerization or require substantial investments for adaptation. The emerging era of virtualization simplifies the execution of virtual machines without concern for the underlying infrastructure, presenting various opportunities and use cases. Noteworthy advantages include simplified migration of legacy applications without the need for containerization, thereby reducing associated costs. KubeVirt 1.1, discussed at KubeCon in Chicago by Red Hat's Vladik Romanovsky and Nvidia's Ryan Hallisey, introduces features like memory hotplug and vCPU hotplug, emphasizing the stability of KubeVirt. The platform's stability now allows for the implementation of features that were previously constrained. Learn more from The New Stack about KubeVirt and the Cloud Native Computing Foundation:
The Future of VMs on Kubernetes: Building on KubeVirt
A Platform for Kubernetes
Scaling Open Source Community by Getting Closer to Users
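To make "virtual machines behind the Kubernetes API" concrete, here is a hedged Go sketch that creates a KubeVirt VirtualMachine through the standard Kubernetes dynamic client. The manifest shape follows KubeVirt's published kubevirt.io/v1 API, but the VM name, namespace, memory request, and demo disk image are illustrative assumptions, not details from the episode.

```go
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := dynamic.NewForConfigOrDie(cfg)

	// KubeVirt extends the Kubernetes API with a VirtualMachine custom resource;
	// the dynamic client addresses it by group/version/resource like any object.
	gvr := schema.GroupVersionResource{Group: "kubevirt.io", Version: "v1", Resource: "virtualmachines"}

	vm := &unstructured.Unstructured{Object: map[string]interface{}{
		"apiVersion": "kubevirt.io/v1",
		"kind":       "VirtualMachine",
		"metadata":   map[string]interface{}{"name": "demo-vm"}, // illustrative name
		"spec": map[string]interface{}{
			"running": true,
			"template": map[string]interface{}{
				"spec": map[string]interface{}{
					"domain": map[string]interface{}{
						"devices": map[string]interface{}{
							"disks": []interface{}{map[string]interface{}{
								"name": "rootdisk",
								"disk": map[string]interface{}{"bus": "virtio"},
							}},
						},
						"resources": map[string]interface{}{
							"requests": map[string]interface{}{"memory": "128Mi"},
						},
					},
					"volumes": []interface{}{map[string]interface{}{
						"name": "rootdisk",
						"containerDisk": map[string]interface{}{
							"image": "quay.io/kubevirt/cirros-container-disk-demo",
						},
					}},
				},
			},
		},
	}}

	// Creating the object through the Kubernetes API is all it takes: the
	// KubeVirt controllers notice it and schedule a KVM guest inside a pod.
	if _, err := client.Resource(gvr).Namespace("default").Create(
		context.TODO(), vm, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```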
In this episode, we talk about the role of the CNCF in shaping the future of generative AI, whether that's hardware optimization, infrastructure scalability, or innovative application development. Our guest is Jorge Castro, developer advocate at the Cloud Native Computing Foundation. Listeners can also tune in on demand to some of the great talks that took place at AI.dev on Dec. 12-13 here!
Boeing, with around 6,000 engineers, is emphasizing open source engagement by focusing on three main themes, according to Damani Corbin, who heads Boeing's Open Source office. He joined our host, Alex Williams, for a discussion at KubeCon+CloudNativeCon in Chicago. The first priority Corbin talks about is simplifying the consumption of open source software for developers. Second, Boeing aims to facilitate developer contributions to open source projects, fostering involvement in communities like the Cloud Native Computing Foundation and the Linux Foundation. The third theme involves identifying opportunities for "inner sourcing" to share internally developed solutions across different groups. Boeing is actively working to break down barriers and encourage code reuse across the organization, promoting participation in open source initiatives. Corbin highlights the importance of separating business-critical components from those that can be shared with the community, prioritizing security and extending efforts to enhance open source security practices. The organization is consolidating its open source strategy by collaborating with legal and information security teams. Corbin emphasizes the goal of making open source involvement accessible and attractive, with a phased approach to encourage meaningful contributions and ultimately enable the compensation of engineers for open source work in the future. Learn more from The New Stack about Boeing and CNCF open source projects:
How Boeing Uses Cloud Native
How Open Source Has Turned the Tables on Enterprise Software
Scaling Open Source Community by Getting Closer to Users
Mercedes-Benz: 4 Reasons to Sponsor Open Source Projects
At KubeCon + CloudNativeCon North America 2022, Amazon Web Services (AWS) revealed plans to mirror Kubernetes assets hosted on Google Cloud, addressing the Cloud Native Computing Foundation's (CNCF) egress costs. A year later, the project, led by AWS's Davanum Srinivas, redirects image requests to the nearest cloud provider, reducing egress costs for users. AWS's Todd Neal and Jonathan Innis discussed this on The New Stack Makers podcast, recorded at KubeCon North America 2023. Neal explained the registry's functionality, allowing users to pull images directly from the respective cloud provider, avoiding egress costs. The discussion also highlighted AWS's recent open source contributions, including beta features in kubectl, the prerelease of containerd 2.0, and Microsoft's support for Karpenter on Azure. Karpenter, an AWS-developed Kubernetes cluster autoscaler, simplifies node group configuration, dynamically selecting instance types and availability zones based on running pods (see the sketch below). The AWS team encouraged developers to contribute to Kubernetes ecosystem projects and join the sig-node CI subproject to enhance kubelet reliability. The conversation in this episode emphasized the benefits of open development for rapid feedback and community collaboration. Learn more from The New Stack about AWS and Open Source:
Powertools for AWS Lambda Grows with Help of Volunteers
Amazon Web Services Open Sources a KVM-Based Fuzzing Framework
AWS: Why We Support Sustainable Open Source
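As a rough illustration of what Karpenter replaces node groups with, the sketch below prints a minimal NodePool manifest. The karpenter.sh/v1beta1 group and the well-known requirement keys follow Karpenter's documentation from around this period, but the pool name, zones, and allowed values are illustrative, and the exact schema varies between Karpenter releases.

```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// A minimal Karpenter NodePool: the requirements describe what Karpenter is
	// allowed to provision; it then picks concrete instance types and zones per
	// pending pod instead of forcing operators to pre-size node groups.
	nodePool := map[string]interface{}{
		"apiVersion": "karpenter.sh/v1beta1", // group/version is an assumption for this era
		"kind":       "NodePool",
		"metadata":   map[string]interface{}{"name": "general-purpose"}, // illustrative name
		"spec": map[string]interface{}{
			"template": map[string]interface{}{
				"spec": map[string]interface{}{
					"requirements": []interface{}{
						map[string]interface{}{
							"key":      "karpenter.sh/capacity-type",
							"operator": "In",
							"values":   []string{"spot", "on-demand"},
						},
						map[string]interface{}{
							"key":      "topology.kubernetes.io/zone",
							"operator": "In",
							"values":   []string{"us-east-1a", "us-east-1b"}, // illustrative zones
						},
					},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(nodePool, "", "  ")
	fmt.Println(string(out))
}
```

Rather than pre-sizing node groups, Karpenter satisfies each pending pod by picking any instance type, zone, and capacity type these requirements allow.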
This week, we recap Matt's experience at KubeCon Chicago, provide some hot takes on OpenAI's impending App Store, and delve into Apple's claim that 8 GB is all you need. Watch the YouTube Live Recording of Episode (https://www.youtube.com/watch?v=BK4tldNTIOk) 440 (https://www.youtube.com/watch?v=BK4tldNTIOk) Runner-up Titles Keep on keeping on They're not sandbox projects they're litterbox projects It was USB thing If you spent all week in the OpenCost kiosk, this is the report The platter days Wait a second, I'm a pro Microsoft Benchmark Home Edition Vanity Metrics are for Vanity Rub some A.I. on it Rubbing A.I. on all of it Of course this is the way you're going to do it Rundown Apple insists 8GB unified memory equals 16GB regular RAM (https://appleinsider.com/articles/23/11/08/apple-insists-8gb-unified-memory-equals-16gb-regular-ram) CNCF October 2023: where we are with velocity of CNCF, LF, and top 30 open source projects | Cloud Native Computing Foundation (https://www.cncf.io/blog/2023/10/27/october-2023-where-we-are-with-velocity-of-cncf-lf-and-top-30-open-source-projects/) AKS Cost Analysis: an Azure-native cost visibility experience built on the OpenCost project (https://techcommunity.microsoft.com/t5/apps-on-azure-blog/aks-cost-analysis-an-azure-native-cost-visibility-experience/ba-p/3973401) Buoyant and SUSE Expand Partnership to Provide Secure Edge Computing Deployments (https://www.prweb.com/releases/buoyant-and-suse-expand-partnership-to-provide-secure-edge-computing-deployments-301978286.html) OpenAI OpenAI is letting anyone create their own version of ChatGPT (https://www.theverge.com/2023/11/6/23948957/openai-chatgpt-gpt-custom-developer-platform) All the news from OpenAI's first developer conference (https://www.theverge.com/2023/11/6/23948619/openai-chatgpt-devday-developer-conference-news) ChatCSV (https://x.com/SteveMoraco/status/1721683288576737612?s=20) How OpenAI is building a path toward AI agents (https://www.platformer.news/p/how-openai-is-building-a-path-toward?utm_source=post-email-title&publication_id=7976&post_id=138646378&utm_campaign=email-post-title&isFreemail=true&r=2l9&utm_medium=email) OpenAI debuts GPT-4 Turbo and fine-tuning program for GPT-4 | TechCrunch (https://techcrunch.com/2023/11/06/openai-launches-gpt-4-turbo-and-launches-fine-tuning-program-for-gpt-4/) CIQ, Oracle, and SUSE unite behind OpenELA to take on Red Hat Enterprise Linux (https://www.zdnet.com/article/ciq-oracle-and-suse-unite-behind-openela-to-take-on-red-hat-enterprise-linux/) Matt Ray's Keyboard Quest Andrew says gets switch tester (https://www.thockking.com/collections/switch-tester/products/custom-keyboard-switch-tester-fidget-toy) Relevant to your Interests Despite having just 5.8% sales, over 38% of bug reports come from the Linux community (https://www.reddit.com/r/gamedev/comments/qeqn3b/despite_having_just_58_sales_over_38_of_bug/) IBM to scrap 401(k) matching, offer alternative benefit (https://www.theregister.com/2023/11/02/ibm_401k_changes/) AWS to Azure services comparison - Azure Architecture Center (https://learn.microsoft.com/en-us/azure/architecture/aws-professional/services) PagerDuty To Acquire Jeli, Bolstering its End-to-End, Automated Incident Management Solution for the Enterprise (https://www.pagerduty.com/newsroom/pagerduty-to-acquire-jeli/) Verdict reached in Sam Bankman-Fried fraud trial (https://www.cnn.com/2023/11/02/business/ftx-sbf-fraud-trial-verdict/index.html) Sam Bankman-Fried found guilty of fraud 
(https://www.theverge.com/policy/2023/11/2/23943236/sam-bankman-fried-trial-sbf-fraud-guilty) The GPU Math", AI's impact on Cloud Rev and Capex (https://x.com/fredaduan/status/1720239195699269903?s=46&t=zgzybiDdIcGuQ_7WuoOX0A) Developer Productivity Engineering at Netflix (https://thenewstack.io/developer-productivity-engineering-at-netflix/) Europe is in Decline: A Concerning Future Ahead (https://x.com/sabben/status/1709105432193650726?s=46&t=zgzybiDdIcGuQ_7WuoOX0A) Data observability platform Kloudfuse launches out of stealth with $23M (https://techcrunch.com/2023/11/06/data-observability-platform-kloudfuse-launches-out-of-stealth-with-23m/) Elon Musk debuts 'Grok' AI bot to rival ChatGPT, others (https://www.cnbc.com/2023/11/05/elon-musk-debuts-grok-ai-bot-to-rival-chatgpt-others-.html) Post Mortem on Cloudflare Control Plane and Analytics Outage (https://blog.cloudflare.com/post-mortem-on-cloudflare-control-plane-and-analytics-outage/) Datadog stock surges 30% after cloud company beats estimates, revises guidance up (https://www.cnbc.com/2023/11/07/datadog-stock-surges-after-earnings-strong-guidance.html) Elevating Cloud-Native Innovation: Craig Box joins the Solo.io Team! (https://www.solo.io/blog/cloud-native-innovation-craig-box-solo/) Mozilla will move Firefox development from Mercurial to Microsoft's GitHub • DEVCLASS (https://devclass.com/2023/11/07/mozilla-will-move-firefox-development-from-mercurial-to-microsofts-github/?td=rt-3a) Understanding Open Source Adoption: Insights from the 9th State of the Software Supply Chain Report. (https://www.sonatype.com/state-of-the-software-supply-chain/Introduction) New Report Shows Disconnect Between Developers and Security Teams on Software Supply Chain Security Priorities and Responsibilities (https://www.chainguard.dev/unchained/new-report-shows-disconnect-between-developers-and-security-teams-on-software-supply-chain-security-priorities-and-responsibilities) Nvidia announces January event after rumors of an RTX 4080 Super launch (https://www.theverge.com/2023/11/9/23953641/nvidia-ces-2024-event-rtx-4070-4080-super-rumors) Former Apple designers launch $700 Humane AI Pin as smartphone replacement (https://www.cnbc.com/2023/11/09/former-apple-designers-at-humane-launch-hands-free-ai-powered-pin.html) Big Blue Can Still Catch The AI Wave If It Hurries - The Next Platform (https://www.nextplatform.com/2023/11/06/big-blue-can-still-catch-the-ai-wave-if-it-hurries/) Nonsense Mint is shutting down, and it's pushing users toward Credit Karma (https://www.theverge.com/2023/11/2/23943254/mint-intuit-shutting-down-credit-karma) Jeff Bezos Says He Is Leaving Seattle for Miami (https://www.nytimes.com/2023/11/03/business/jeff-bezos-amazon-miami-seattle.html) Listener Feedback Software Defined Talk now available on YouTube Music (https://music.youtube.com/playlist?list=PLk19Plf_pEnSdwXf_fSSBSZ2gH9v8Hg0l) Conferences Jan 29, 2024 to Feb 1, 2024 That Conference Texas (https://that.us/events/tx/2024/schedule/) If you want your conference mentioned, let's talk media sponsorships. SDT news & hype Join us in Slack (http://www.softwaredefinedtalk.com/slack). Get a SDT Sticker! Send your postal address to stickers@softwaredefinedtalk.com (mailto:stickers@softwaredefinedtalk.com) and we will send you free laptop stickers! 
Follow us: Twitch (https://www.twitch.tv/sdtpodcast), Twitter (https://twitter.com/softwaredeftalk), Instagram (https://www.instagram.com/softwaredefinedtalk/), Mastodon (https://hachyderm.io/@softwaredefinedtalk), BlueSky (https://bsky.app/profile/softwaredefinedtalk.com), LinkedIn (https://www.linkedin.com/company/software-defined-talk/), TikTok (https://www.tiktok.com/@softwaredefinedtalk), Threads (https://www.threads.net/@softwaredefinedtalk) and YouTube (https://www.youtube.com/channel/UCi3OJPV6h9tp-hbsGBLGsDQ/featured). Use the code SDT to get $20 off Coté's book, Digital WTF (https://leanpub.com/digitalwtf/c/sdt), so $5 total. Become a sponsor of Software Defined Talk (https://www.softwaredefinedtalk.com/ads)! Recommendations Brandon: Same as Ever (https://www.audible.com/pd/Same-as-Ever-Audiobook/B0C1HS5WG1#:~:text=Same%20as%20Ever%20reverses%20the,and%20living%20your%20best%20life.) Morgan Housel | Acquired Podcast (https://www.acquired.fm/episodes/morgan-housel) The Morgan Housel Podcast: My New Book, Same As Ever: A Guide to What Never Changes (https://podcasts.apple.com/us/podcast/my-new-book-same-as-ever-a-guide-to-what-never-changes/id1675310669?i=1000633970682) Matt: DisplayLink (https://www.synaptics.com/products/displaylink-graphics) for multiple external monitors on M1 Macs (Asahi Linux discussion (https://forums.macrumors.com/threads/asahi-creator-1-monitor-support-is-because-of-hardware-limitation.2351766/)) Photo Credits Header (https://unsplash.com/photos/a-close-up-of-a-computer-motherboard-y4_xZ3cs96w)
KubeCon 2023 is set to feature three hot topics, according to Taylor Dolezal from the Cloud Native Computing Foundation. Firstly, GenAI and Large Language Models (LLMs) are taking the spotlight, particularly regarding their security and integration with legacy infrastructure. Platform engineering is also on the rise, with over 25 sessions at KubeCon Chicago focusing on its definition and how it benefits internal product teams by fostering a culture of product proliferation. Lastly, WebAssembly is emerging as a significant topic, with a dedicated day during the conference week. It is maturing and finding its place, potentially complementing containers, especially in edge computing scenarios. Wasm allows for efficient data processing before data reaches the cloud, adding depth to architectural possibilities. Overall, these three trends are expected to dominate discussions and presentations at KubeCon NA 2023, offering insights into the future of cloud-native technology. See what came out of the last KubeCon event in Amsterdam earlier this year:
AI Talk at KubeCon
Don't Force Containers and Disrupt Workflows
A Boring Kubernetes Release
IEEE Spectrum's Stephen Cass talks with Arun Gupta, vice president and general manager of Open Ecosystem Initiatives at Intel and chair of the Cloud Native Computing Foundation, about Intel's contributions to open source software projects and efforts to make open source greener and more secure.
Welcome to episode 224 of The CloudPod Podcast - where the forecast is always cloudy! This week, your hosts Justin, Jonathan, and Ryan discuss some major changes to Terraform, including the switch from an open source license to the Business Source License (BSL). Additionally, we cover updates to Amazon S3, goodies from Storage Day, and Google Gemini vs. OpenAI. Titles we almost went with this week: None! This week's title was ✨chef's kiss✨ A big thanks to this week's sponsor: Foghorn Consulting provides top-notch cloud and DevOps engineers to the world's most innovative companies. Initiatives stalled because you have trouble hiring? Foghorn can be burning down your DevOps and Cloud backlogs as soon as next week.
Jorge Castro of the Cloud Native Computing Foundation joins us to geek out on taking the desktop cloud native with immutable Linux, talk open source community sustainability, and have a lot of fun along the way. Episode Transcript Resources: Universal Blue The Cloud Native Linux Desktop Model (video) Architecture Of The Immutable uBlue Linux (video) The Cloud Native Landscape Guest: Jorge O. Castro is a community manager, specializing in Open Source. He's basically a cat herder – a combination of engineering, developer relations, and user advocacy. Jorge graduated with a degree in Telecommunications from Michigan State University and rode with the 11th Armored Cavalry Regiment for four years. He first entered the technology field at SAIC and then moved to system administration at the School of Engineering and Computer Science at Oakland University in Rochester Hills, Michigan. Jorge then joined Canonical to work on Ubuntu for about 10 years before moving to Heptio to work on Kubernetes. Heptio was then acquired by VMware in December 2018. He's currently at the CNCF working on developer relations. Guest Host: Chris Norman. An avid promoter of open source ecosystems, Chris writes documentation and presents at open source events, helping developers better understand Intel's contributions to operating systems, languages, and runtimes. He also moderates the Clear Linux community forum.
What did software engineers at KubeCon say about how AI is coming up in their work? That's a question we posed to Taylor Dolezal, head of ecosystem for the Cloud Native Computing Foundation, at KubeCon in Amsterdam. Dolezal said AI did come up in conversation. "I think that when it's come to this, typically with KubeCons, and other CNCF and LF events, there's always been one or two topics that have bubbled to the top," Dolezal said. At its core, AI surfaces a data issue for users that correlates to data sharing issues, said Dolezal in this latest episode of The New Stack Makers. Read more about AI and Kubernetes on The New Stack:
3 Important AI/ML Tools You Can Deploy on Kubernetes
Flyte: An Open Source Orchestrator for ML/AI Workflows
Overcoming the Kubernetes Skills Gap with ChatGPT Assistance
Kubernetes release 1.27 is boring, says Xander Grzywinski, a senior product manager at Microsoft. It's a stable release, Grzywinski said on this episode of The New Stack Makers from KubeCon Europe in Amsterdam. "It's reached a level of stability at this point," said Grzywinski. "The core feature set has become more fleshed out and fully realized." The release has 60 total features, Grzywinski said. The features in 1.27 are solid refinements of features that have been around for a while. It's helping Kubernetes be as stable as it can be. Examples? It has a better developer experience, Grzywinski said. Storage primitives and APIs are more stable.
Hoi, Europe and beyond! Once again it is time for cloud native enthusiasts and professionals to converge and discuss cloud native computing in all its efficiency and complexity. The Cloud Native Computing Foundation's KubeCon+CloudNativeCon 2023 is being held later this month in Amsterdam, April 18-21, at the RAI Convention Centre. In this latest edition of The New Stack podcast, we spoke with two of the event's co-chairs who helped define this year's themes for the show, which is expected to draw over 9,000 attendees: Aparna Subramanian, Shopify's Director of Production Engineering for Infrastructure, and Frederick Kautz, Cloud Native Infra and Security Enterprise Architect.
Joining the podcast this week is Mishi Choudhary, SVP and General Counsel at Virtru. Mishi shares with us some legal perspective on the privacy discussion including freedom of thought, the right to be forgotten, end-to-end encryption for protecting user data, finding a middle ground between meeting customer privacy demands and complying with legal requirements, getting to a federal privacy regulation, and so much more! You won't want to miss what is a truly spirited and candid conversation – in two parts! Mishi Choudhary SVP and General Counsel, Virtru A technology lawyer with over 17 years of legal experience, Mishi has served as a legal representative for many of the world's most prominent free and open source software developers and distributors, including the Free Software Foundation, Cloud Native Computing Foundation, Linux Foundation, Debian, the Apache Software Foundation, and OpenSSL. At Virtru, she leads all legal and compliance activities, builds internal processes to continue to accelerate growth, helps shape Virtru and open source strategy, and activates global business development efforts. For links and resources discussed in this episode, please visit our show notes at https://www.forcepoint.com/govpodcast/e224
Joining the podcast this week is Mishi Choudhary, SVP and General Counsel at Virtru. Mishi shares with us some legal perspective on the privacy discussion including freedom of thought, the right to be forgotten, end-to-end encryption for protecting user data, finding a middle ground between meeting customer privacy demands and complying with legal requirements, getting to a federal privacy regulation, and so much more! You won't want to miss what is a truly spirited and candid conversation – in two parts! Mishi Choudhary, SVP and General Counsel, Virtru A technology lawyer with over 17 years of legal experience, Mishi has served as a legal representative for many of the world's most prominent free and open source software developers and distributors, including the Free Software Foundation, Cloud Native Computing Foundation, Linux Foundation, Debian, the Apache Software Foundation, and OpenSSL. At Virtru, she leads all legal and compliance activities, builds internal processes to continue to accelerate growth, helps shape Virtru and open source strategy, and activates global business development efforts. For links and resources discussed in this episode, please visit our show notes at https://www.forcepoint.com/govpodcast/e223
How can you use OpenTelemetry to gain insight into your Apache Kafka® event systems? Roman Kolesnev, Staff Customer Innovation Engineer at Confluent, is a member of the Customer Solutions & Innovation Division Labs team working to build business-critical OpenTelemetry applications so companies can see what's happening inside their data pipelines. In this episode, Roman joins Kris to discuss tracing and monitoring in distributed systems using OpenTelemetry. He talks about how monitoring each step of the process individually is critical to discovering potential delays or bottlenecks before they happen, including keeping track of timestamps, latency information, exceptions, and other data points that could help with troubleshooting. Tracing each request and its journey to completion in Kafka gives companies access to invaluable data that provides insight into system performance and reliability. Furthermore, using this data allows engineers to quickly identify errors or anticipate potential issues before they become significant problems. With greater visibility comes better control over application health - all made possible by OpenTelemetry's unified APIs and services. As described on the OpenTelemetry.io website, "OpenTelemetry is a Cloud Native Computing Foundation incubating project. Formed through a merger of the OpenTracing and OpenCensus projects." It provides a vendor-agnostic way for developers to instrument their applications across different platforms and programming languages while adhering to standard semantic conventions, so traces can be streamed to any compatible backend that follows the same specs. By leveraging OpenTelemetry, organizations can ensure their applications and systems are observable and perform optimally. It is quickly becoming an essential tool for large-scale organizations that need to efficiently process massive amounts of real-time data: its unified APIs, robust instrumentation, and broad ecosystem of compatible analysis and monitoring tools make it a natural observability layer for stream processing. Roman explains that the OpenTelemetry APIs for Kafka are still in development and not yet available as open source. The code is complete and tested but has never run in production. But if you want to learn more about the nuts and bolts, he invites you to connect with him on the Confluent Community Slack channel. You can also check out "Monitoring Kafka without instrumentation with eBPF - Antón Rodríguez" to learn more about a similar approach for domain monitoring. EPISODE LINKS
OpenTelemetry java instrumentation
OpenTelemetry collector
Distributed Tracing for Kafka with OpenTelemetry
Monitoring Kafka without instrumentation with eBPF
Kris Jenkins' Twitter
Watch the video
Join the Confluent Community
Learn more with Kafka tutorials, resources, and guides at Confluent Developer
Live demo: Intro to Event-Driven Microservices with Confluent
Use PODCAST100 to get $100 of free Confluent Cloud usage (details)
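As a rough sketch of the instrumentation pattern discussed here (not Confluent's in-development API), the Go example below shows the core mechanic of tracing a Kafka hop with OpenTelemetry: start a producer span, then inject the trace context into the message headers so a consumer can continue the same trace. The carrier type, function names, and span name are illustrative; only the go.opentelemetry.io/otel calls are real APIs.

```go
package main

import (
	"context"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/propagation"
	"go.opentelemetry.io/otel/trace"
)

// headerCarrier adapts Kafka-style string headers to OpenTelemetry's
// TextMapCarrier interface so the trace context can ride inside the message.
type headerCarrier map[string]string

func (c headerCarrier) Get(key string) string { return c[key] }
func (c headerCarrier) Set(key, value string) { c[key] = value }
func (c headerCarrier) Keys() []string {
	keys := make([]string, 0, len(c))
	for k := range c {
		keys = append(keys, k)
	}
	return keys
}

// producePayload starts a producer span and injects its context into headers
// that would be attached to the outgoing Kafka record; the consumer side would
// extract the same headers to continue the trace across the hop.
func producePayload(ctx context.Context, tracer trace.Tracer) map[string]string {
	ctx, span := tracer.Start(ctx, "orders publish", // span name is illustrative
		trace.WithSpanKind(trace.SpanKindProducer))
	defer span.End()

	headers := headerCarrier{}
	otel.GetTextMapPropagator().Inject(ctx, headers)
	// ... hand the payload plus headers to your Kafka client here ...
	return headers
}

func main() {
	// A real setup would also install a TracerProvider with an exporter;
	// that wiring is omitted here, so the tracer below is a no-op.
	otel.SetTextMapPropagator(propagation.TraceContext{})
	_ = producePayload(context.Background(), otel.Tracer("example/kafka"))
}
```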
At a SaaS company like Intuit that has hundreds of services spread out across multiple products, maintaining development velocity at scale means baking some of the features that every service needs into the architecture of their systems. That's where a service mesh comes in. It automatically adds features like observability, traffic management, and security to every service in the network without adding any code. In this sponsored episode of the podcast, we talk with Anil Attuluri, principal software engineer, and Yasen Simeonov, senior product manager, both of Intuit, about how their engineering organization uses a service mesh to solve problems, letting their engineers stay focused on writing business logic. Along the way, we discuss how the service mesh keeps all the financial data secure, how it moves network traffic to where it needs to go, and the open source software they've written on top of the mesh. Episode notes: For those looking to get the same service mesh capabilities as Intuit, check out Istio, a Cloud Native Computing Foundation project. In order to provide a better security posture for their products, each business case operates on a discrete network. But much of the Istio service mesh needs to discover services across all products. Enter Admiral, their open-sourced solution. When Intuit deploys a new service version, they can progressively scale the amount of traffic that hits it instead of the old version using Argo Rollouts (a minimal sketch follows this entry). It's better to find a bug in production on 1% of requests than 100%. If you want to learn more about what Intuit engineering is doing, check out their blog. Congrats to Great Question badge winner, HelpMeStackOverflowMyOnlyHope, for asking "Detect whether input element is focused within ReactJS."
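For a sense of what "progressively scale the amount of traffic" looks like in practice, here is a hedged Go sketch that prints a minimal Argo Rollouts canary strategy. The step shapes (setWeight, pause) follow Argo Rollouts' documented argoproj.io/v1alpha1 API; the rollout name, weights, and pause durations are illustrative and are not taken from Intuit's setup.

```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// A minimal Argo Rollouts canary strategy: send 1% of traffic to the new
	// version, wait, then promote in stages. The controller shifts traffic at
	// each step, e.g. through a service mesh such as Istio.
	rollout := map[string]interface{}{
		"apiVersion": "argoproj.io/v1alpha1",
		"kind":       "Rollout",
		"metadata":   map[string]interface{}{"name": "payments"}, // illustrative name
		"spec": map[string]interface{}{
			"strategy": map[string]interface{}{
				"canary": map[string]interface{}{
					"steps": []interface{}{
						map[string]interface{}{"setWeight": 1},
						map[string]interface{}{"pause": map[string]interface{}{"duration": "10m"}},
						map[string]interface{}{"setWeight": 25},
						map[string]interface{}{"pause": map[string]interface{}{"duration": "10m"}},
						map[string]interface{}{"setWeight": 100},
					},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(rollout, "", "  ")
	fmt.Println(string(out))
}
```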
jE is joined by Jahed Momand, co-founder of Cerulean Ventures, for a deep-dive into his experiences, learnings, and insights that sit at the intersection of Climate & Web3. Topics covered include:
Wayfair describes itself as "the destination for all things home: helping everyone, anywhere create their feeling of home." It provides an online platform to acquire home furniture, outdoor decor, and other furnishings. It also supports its suppliers so they can use the platform to sell their home goods, explained Natali Vlatko, global lead of the open source program office (OSPO) and senior software engineering manager at Wayfair, who was the featured guest in Detroit during KubeCon + CloudNativeCon North America 2022. "It takes a lot of technical work behind the scenes to kind of get that going," Vlatko said. This is especially true as Wayfair scales its operations worldwide. The infrastructure must be highly distributed, relying on containerization, microservices, Kubernetes, and especially open source to get the job done. "We have technologists throughout the world, in North America and throughout Europe as well," Vlatko said. "And we want to make sure that we are utilizing cloud native and open source, not just as technologies that fuel our business, but also as the ways that are great for us to work in now." Open source has served as a "great avenue" for creating and offering technical services, and to accomplish that, Vlatko amassed the requisite talent, she said. She built a small team of engineers to focus on platform work, advocacy, community management, and, internally, compliance with licenses. About five years ago, when Vlatko joined Wayfair, the company had yet to go "full tilt into going all cloud native," Vlatko said. Wayfair had a hybrid mix of on-premises and cloud infrastructure. After decoupling from a monolith into a microservices architecture, "that journey really began where we understood the really great benefits of microservices and got to a point where we thought, 'okay, this hybrid model for us actually would benefit our microservices being fully in the cloud,'" Vlatko said. In late 2020, Wayfair made the decision to "get out of the data centers" and shift operations to the cloud, which was completed in October, Vlatko said. The company culture is such that engineers have room to experiment without major fear of failure by doing a lot of development work in a sandbox environment. "We've been able to create production environments that are close to our production environments so that experimentation in sandboxes can occur. Folks can learn as they go without actually fearing failure or fearing a mistake," Vlatko said. "So, I think experimentation is a really important aspect of our own learning and growth for cloud native. Also, coming to great events like KubeCon + CloudNativeCon and other events [has been helpful]. We're hearing from other companies who've done the same journey and process and are learning from the use cases."
In this latest podcast from The New Stack, we spoke with Ricardo Torres, who is the chief engineer of open source and cloud native for aerospace giant Boeing. Torres also joined the Cloud Native Computing Foundation in May to serve as a board member. In this interview, recorded at KubeCon+CloudNativeCon last month, Torres speaks about Boeing's use of open source software, as well as its adoption of cloud native technologies. While we may think of Boeing as an airplane manufacturer, it would be more accurate to think of the company as a large-scale system integrator, one that uses a lot of software. So, like other large-scale companies, Boeing sees a distinct advantage in maintaining good relations with the open source community. "Being able to leverage the best technologists out there in the rest of the world is of great value to us strategically," Torres said. This strategy allows Boeing to "differentiate on what we do as our core business rather than having to reinvent the wheel all the time on all of the technology." Like many other large companies, Boeing has created an open source office to better work with the open source community. Although Boeing is primarily a consumer of open source software, it still wants to work with the community. "We want to make sure that we have a strategy around how we contribute back to the open source community, and then leverage those learnings for inner sourcing," he said. Boeing also manages how it uses open source internally, keeping tight controls on the supply chain of open source software it uses. "As part of the software engineering organization, we partner with our internal IT organization to look at our internet traffic and assure nobody's going out and downloading directly from an untrusted repository or registry. Instead, we host approved sources internally." It's not surprising that Boeing, which deals with a lot of government agencies, embraces the practice of using software bills of materials (SBOMs), which provide a full listing of what components are being used in a software system. In fact, the company has been working to extend the comprehensiveness of SBOMs, according to Torres. "I think one of the interesting things now is the automation," he said of SBOMs. "And so we're always looking to beef up the heuristics because a lot of the tools are relatively naive, in that they trust that the dependencies that are specified are actually representative of everything that's delivered. And that's not good enough for a company like Boeing. We have to be absolutely certain that what's there is exactly what we expected to be there."
Cloud Native Computing
While Boeing builds many systems that reside in private data centers, the company is also increasingly relying on the cloud as well. Earlier this year, Boeing signed agreements with the three largest cloud service providers (CSPs): Amazon Web Services, Microsoft Azure, and Google Cloud Platform. "A lot of our cloud presence is about our development environments. And so, you know, we have cloud-based software factories that are using a number of CNCF and CNCF-adjacent technologies to enable our developers to move fast," Torres said.
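To ground the SBOM discussion, here is a hedged Go sketch that reads a CycloneDX-style JSON SBOM and lists the components it declares - the inventory that tooling like Boeing's would then cross-check against what actually ships. The struct models only a small subset of the real CycloneDX schema, and the file name is an assumption for the example.

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// sbom captures a minimal subset of a CycloneDX JSON document: just enough to
// enumerate declared components. Real SBOMs carry far more metadata (hashes,
// licenses, dependency graphs) that verification tooling would also check.
type sbom struct {
	BOMFormat  string `json:"bomFormat"`
	Components []struct {
		Name    string `json:"name"`
		Version string `json:"version"`
		PURL    string `json:"purl"`
	} `json:"components"`
}

func main() {
	raw, err := os.ReadFile("sbom.json") // path is illustrative
	if err != nil {
		panic(err)
	}
	var doc sbom
	if err := json.Unmarshal(raw, &doc); err != nil {
		panic(err)
	}
	// Print the declared inventory; a supply-chain check would compare this
	// list against the artifacts actually present in the delivered build.
	for _, c := range doc.Components {
		fmt.Printf("%s %s (%s)\n", c.Name, c.Version, c.PURL)
	}
}
```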
Thomas Stringer is a Software Engineering Lead on the Open Service Mesh team at Microsoft. He gives us insights into the OSM add-on for AKS and explains why it makes applications on AKS so much more secure. Media file: https://azpodcast.blob.core.windows.net/episodes/Episode422.mp3 YouTube: https://youtu.be/DICsJmFSGCs Resources: https://openservicemesh.io/ Cloud Native Computing Foundation (cncf.io) Other updates: General availability: App Service - Networking capabilities added to Basic pricing tier | Azure updates | Microsoft Azure Public preview: App Service - Configure networking in Azure Portal during app creation | Azure updates | Microsoft Azure
About Casey
Casey spends his days leveraging AWS to help organizations improve the speed at which they deliver software. With a background in software development, he has spent the past 20 years architecting, building, and supporting software systems for organizations ranging from startups to Fortune 500 enterprises.
Links Referenced: “17 Ways to Run Containers in AWS”: https://www.lastweekinaws.com/blog/the-17-ways-to-run-containers-on-aws/ “17 More Ways to Run Containers on AWS”: https://www.lastweekinaws.com/blog/17-more-ways-to-run-containers-on-aws/ kubernetestheeasyway.com: https://kubernetestheeasyway.com snark.cloud/quinntainers: https://snark.cloud/quinntainers ECS Chargeback: https://github.com/gaggle-net/ecs-chargeback twitter.com/nektos: https://twitter.com/nektos
Transcript
Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.
Corey: This episode is sponsored by our friends at Revelo. Revelo is the Spanish word of the day, and it's spelled R-E-V-E-L-O. It means “I reveal.” Now, have you tried to hire an engineer lately? I assure you it is significantly harder than it sounds. One of the things that Revelo has recognized is something I've been talking about for a while, specifically that while talent is evenly distributed, opportunity is absolutely not. They're exposing a new talent pool to, basically, those of us without a presence in Latin America via their platform. It's the largest tech talent marketplace in Latin America with over a million engineers in their network, which includes—but isn't limited to—talent in Mexico, Costa Rica, Brazil, and Argentina. Now, not only do they wind up screening all of their talent on English ability, as well as, you know, their engineering skills, but they go significantly beyond that. Some of the folks on their platform are hands down the most talented engineers that I've ever spoken to. Let's also not forget that Latin America has high time zone overlap with what we have here in the United States, so you can hire full-time remote engineers who share most of the workday as your team. It's an end-to-end talent service, so you can find and hire engineers in Central and South America without having to worry about, frankly, the colossal pain of cross-border payroll and benefits and compliance because Revelo handles all of it. If you're hiring engineers, check out revelo.io/screaming to get 20% off your first three months. That's R-E-V-E-L-O dot I-O slash screaming.
Corey: Couchbase Capella Database-as-a-Service is flexible, full-featured, and fully managed with built-in access via key-value, SQL, and full-text search. Flexible JSON documents aligned to your applications and workloads. Build faster with blazing-fast in-memory performance and automated replication and scaling while reducing cost. Capella has the best price performance of any fully managed document database. Visit couchbase.com/screaminginthecloud to try Capella today for free and be up and running in three minutes with no credit card required. Couchbase Capella: make your data sing.
Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. My guest today is someone that I had the pleasure of meeting at re:Invent last year, but we'll get to that story in a minute.
Casey Lee is the CTO with a company called Gaggle, which is—as they frame it—saving lives. Now, that seems to be a relatively common position that an awful lot of different tech companies take. “We're saving lives here.” It's, “You show banner ads and some of them are attack platforms for JavaScript malware. Let's be serious here.” Casey, thank you for joining me, and what makes the statement that Gaggle saves lives not patently ridiculous?
Casey: Sure. Thanks, Corey. Thanks for having me on the show. So Gaggle, we're an ed-tech company. We sell software to school districts, and school districts use our software to help protect their students while the students use the school-issued Google or Microsoft accounts.
So, we're looking for signs of bullying, harassment, self-harm, and potentially suicide from K-12 students while they're using these platforms. They will take the thoughts, concerns, emotions they're struggling with and write them in their school-issued accounts. We detect that and then we notify the school districts, and they get the students the help they need before they can do any permanent damage to themselves. We protect about 6 million students throughout the US. We ingest a lot of content.
Last school year, over 6 billion files, and about an equal number of emails ingested. We're looking for concerning content and then we have humans review the stuff that our machine learning algorithms detect and flag. About 40 million items had to go in front of humans last year, which resulted in about 20,000 of what we call PSSes. These are Possible Student Situations where students are talking about harming themselves or harming others. And that resulted in what we like to track as lives saved. 1,400 incidents last school year where a student was dealing with suicide ideation, they were planning to take their own lives. We detect that and get them help within minutes before they can act on that. That's what Gaggle has been doing. We're using tech, solving tech problems, and also saving lives as we do it.
Corey: It's easy to lob a criticism at some of the things you're alluding to, the idea of oh, you're using machine learning on student data for young kids, yadda, yadda, yadda. Look at the outcome, look at the privacy controls you have in place, and look at the outcomes you're driving to. Now, I don't necessarily trust the number of school administrations not to become heavy-handed and overbearing with it, but let's be clear, that's not the intent. That is not what the success stories you have alluded to show. I've got to say I'm a fan, so thanks for doing what you're doing. I don't say that very often to people who work in tech companies.
Casey: Cool. Thanks, Corey.
Corey: But let's rewind a bit because you and I had passed like ships in the night on Twitter for a while, but last year at re:Invent something odd happened. First, my business partner procrastinated at getting his ticket—that's not the odd part; he does that a lot—but then suddenly ticket sales slammed shut and none were to be had anywhere. You reached out with a, “Hey, I have a spare ticket because someone can't go. Let me get it to you.” And I said, “Terrific. Let me pay you for the ticket and take you to dinner.”
You said, “Yes on the dinner, but I'd rather you just look at my AWS bill and don't worry about the cost of the ticket.” “All right,” said I. I know a deal when I see one. We grabbed dinner at the Venetian. I said, “Bust out your laptop.” And you said, “Oh, I was kidding.” And I said, “Great. I wasn't.
Bust it out.”
And you went from laughing to taking notes in about the usual time that happens when I start looking at these things. But how was your recollection of that? I always tend to romanticize some of these things. Like, “And then everyone in the restaurant just turned, stopped, and clapped the entire time.” Maybe that part didn't happen.
Casey: Everything was right up until the clapping part. That was a really cool experience. I appreciate you walking through that with me. Yeah, we've got lots of opportunity to save on our AWS bill here at Gaggle, and in that little bit of time that we had together, I think I walked away with no more than a dozen ideas for where to shave some costs. The most obvious one, the first thing that you keyed in on, is we had RIs coming due that weren't really well-optimized and you steered me towards savings plans. We put that in place and we were able to apply those savings plans not just to our EC2 instances but also to our serverless spend as well.
So, that was a very worthwhile and cost-effective dinner for us. The thing that was most surprising though, Corey, was your approach. Your approach to how to review our bill was not what I thought at all.
Corey: Well, what did you expect my approach was going to be? Because this always is of interest to me. Like, do you expect me to, like, whip a portable machine learning rig out of my backpack full of GPUs or something?
Casey: I didn't know if you had, like, some secret tool you were going to hit, or if nothing else, I thought you were going to go for the Cost Explorer. I spend a lot of time in Cost Explorer, that's my go-to tool, and you wanted nothing to do with Cost Exp—I think I was actually pulling up Cost Explorer for you and you said, “I'm not interested. Take me to the bills.” So, we went right to the billing dashboard, you started opening up the invoices, and I thought to myself, “I don't remember the last time I looked at an AWS invoice.” I just, it's noise; it's not something that I pay attention to.
And I learned something, that you get a real quick view of both the cost and the usage. And that's what you were keyed in on, right? And you were looking at things relative to each other. “Okay, I have no idea about Gaggle or what they do, but normally, for a company that's spending x amount of dollars in EC2, why is your data transfer cost the way it is? Is that high or low?” So, you're looking for kind of relative numbers, but it was really cool watching you slice and dice that bill through the dashboard there.
It also is one of the easiest things to wind up having someone throw into a PDF and email my way if I'm not doing it in a restaurant with, you know, people clapping standing around.Casey: [laugh]. Right.Corey: I also want to highlight that you've been using AWS for a long time. You're a Container Hero; you are not bad at understanding the nuances and depths of AWS, so I take praise from you around this stuff as valuing it very highly. This stuff is not intuitive, it is deeply nuanced, and you have a business outcome you are working towards that invariably is not oriented day in day out around, “How do I get these services for less money than I'm currently paying?” But that is how I see the world and I tend to live in a very different space just based on the nature of what I do. It's sort of a case study and the advantage of specialization. But I know remarkably little about containers, which is how we wound up reconnecting about a week or so before we did this recording.Casey: Yeah. I saw your tweet; you were trying to run some workload—container workload—and I could hear the frustration on the other end of Twitter when you were shaking your fist at—Corey: I should not tweet angrily, and I did in this case. And, eh, every time I do I regret it. But it played well with the people, so that does help. I believe my exact comment was, “‘me: I've got this container. Run it, please.' ‘Google Cloud: Run. You got it, boss.' AWS has 17 ways to run containers and they all suck.”And that's painting with an overly broad brush, let's be clear, but that was at the tail end of two or three days of work trying to solve a very specific, very common, business problem, that I was just beating my head off of a wall again and again and again. And it took less than half an hour from start to finish with Google Cloud Run and I didn't have to think about it anymore. And it's one of those moments where you look at this and realize that the future is here, we just don't see it in certain ways. And you took exception to this. So please, let's dive in because 280 characters of text after half a bottle of wine is not the best context to have a nuanced discussion that leaves friendships intact the following morning.Casey: Nice. Well, I just want to make sure I understand the use case first because I was trying to read between the lines on what you needed, but let me take a guess. My guess is you got your source code in GitHub, you have a Docker file, and you want to be able to take that repo from GitHub and just have it continuously deployed somewhere in Run. And you don't want to have headaches with it; you just want to push more changes up to GitHub, Docker Build runs and updates some service somewhere. Am I right so far?Corey: Ish, but think a little further up the stack. It was in service of this show. So, this show, as people who are listening to this are probably aware by this point, periodically has sponsors, which we love: We thank them for participating in the ongoing support of this show, which empowers conversations like this. Sometimes a sponsor will come to us with, “Oh, and here's the URL we want to give people.” And it's, “First, you misspelled your company name from the common English word; there are three sublevels within the domain, and then you have a complex UTM tagging tracking co—yeah, you realize people are driving to work when they're listening to this?”So, I've built a while back a link shortener, snark.cloud because is it the shortest thing in the world? 
Not really, but it's easily understandable when I say that, and people hear it for what it is. And that's been running for a long time as an S3 bucket full of redirects, behind CloudFront. So, I wind up adding a zero-byte object with a redirect parameter on it, and it just works.
Now, the challenge that I have here as a business is that I am increasingly prolific these days. So, anything that I am not directly required to be doing, I probably shouldn't necessarily be the one to do it. And care and feeding of those redirect links is a prime example of this. So, I went hunting, and the things that I was looking for were, obviously, do the redirect. Now, if you pull up GitHub, there are hundreds of solutions here.
There are AWS blog posts. One that I really liked and almost got working was Eric Johnson's three-part blog post on how to do it serverlessly, with API Gateway and DynamoDB, no Lambdas required. I really liked aspects of what that was, but it was complex, I kept smacking into weird challenges as I went, and front end is just baffling to me. Because I needed a front-end app for people to be able to use here; I need to be able to secure that because it turns out that if you just have a, anyone who stumbles across the URL can redirect things to other places, well, you've just empowered a whole bunch of spam email, and you're going to find that service abused, and everyone starts blocking it, and then you have trouble. Nothing lasts the first encounter with jerks.
And I was getting more and more frustrated, and then I found something by a Twitter engineer on GitHub, with a few creative search terms, who used to work at Google Cloud. And what it uses as a client is it doesn't build any kind of custom web app. Instead, as a database, it uses not S3 objects, not Route 53—the ideal database—but a Google sheet, which sounds ridiculous, but every business user here knows how to use that.
Casey: Sure.
Corey: And it looks for the two columns. The first one is the slug after the snark.cloud, and the second is the long URL. And it has a TTL of five seconds on cache, so make a change to that spreadsheet, five seconds later, it's live. Everyone gets it, I don't have to build anything new, I just put it somewhere the relevant people can access it, I gave them a tutorial and a giant warning on it, and everyone gets that. And it just works well. It was, “Click here to deploy. Follow the steps.”
And the documentation was a little, eh, okay, I had to undo it once and redo it again. Getting the domain registered was getting—ported over took a bit of time, and there were some weird SSL errors as the certificates were set up, but once all of that was done, it just worked. And I tested the heck out of it, and cold starts are relatively low, and the entire thing fits within the free tier. And it is reminiscent of the magic that I first saw when I started working with some of the cloud providers' services, years ago. It's been a long time since I had that level of delight with something, especially after three days of frustration. It's one of the, “This is a great service. Why are people not shouting about this from the rooftops?” That was my perspective. And I put it out on Twitter and oh, Lord, did I get comments. What was your take on it?
Casey: Well, so my take was, when you're evaluating a platform to use for running your applications, how fast it can get you to Hello World is not necessarily the best way to go. I just assumed you're wrong.
I assumed, of the 17 ways AWS has to run containers, Corey just doesn't understand. And so I went after it. And I said, “Okay, let me see if I can find a way that solves his use case, as I understand it, through a quick tweet.”
And so I tried App Runner; I saw that App Runner does not meet your needs because you have to somehow get your Docker image pushed up to a repo. App Runner can take an image that's already been pushed up and deployed for you or it can build from source, but neither of those were the way I understood your use case.
Corey: Having used App Runner before via the Copilot CLI, it is the closest, as best I can tell, to achieving what I want. But also let's be clear that I don't believe there's a free tier; there needs to be a load balancer in front of it, so you're starting with 15 bucks a month for this thing. Which is not the end of the world. Had I known at the beginning that all of this was going to be there, I would have just signed up for a bit.ly account and called it good. But here we are.
Casey: Yeah. I tried Copilot. Copilot is a great developer experience, but it also is just pulling together tons of—I mean, just trying to do a Copilot service deploy, VPCs are being created and tons of IAM roles are being created, code pipelines, there's just so much going on. I was like 20 minutes into it, and I said, “Yeah, this is not fitting the bill for what Corey was looking for.” Plus, it doesn't solve your use case the way I understood it, which is you don't want to worry about builds, you just want to push code and have new Docker images get built for you.
Corey: Well, honestly, let's be clear here, once it's up and running, I don't want to ever have to touch the silly thing again.
Casey: Right.
Corey: And so far that's been the case, after I forked the repo and made a couple of changes to it that I wanted to see. One of them was to render the entire thing case-insensitive because I get that one wrong a lot, and the other is I wanted to change the permanent 301 redirect to a temporary 302 redirect because occasionally, sponsors will want to change where it goes in the fullness of time. And that is just fine, but I want to be able to support that and not have to deal with old cached data. So, getting that up and running was a bit of a challenge. But the way that it worked was following the instructions in the GitHub repo.
The developer environment that spun up in Google's Cloud Shell was just spectacular. It prompted me for a few things and it told me step by step what to do. This is the sort of thing I could have given a basically non-technical user, and they would have had success with it.
Casey: So, I tried it as well. I said, “Well, okay, if I'm going to respond to Corey here and challenge him on this, I need to try Cloud Run.” I had no experience with Cloud Run. I had a small example repo that loosely mapped what I understood you were trying to do. Within five minutes, I had Cloud Run working.
And I was surprised anytime I pushed a new change, within 45 seconds the change was built and deployed. So, here's my conclusion, Corey. Google Cloud Run is great for your use case, and AWS doesn't have the perfect answer. But here's my challenge to you.
Casey: I think that you just proved why there are 17 different ways to run containers on AWS: it's because there are that many different types of users with different needs, and you just happen to be number 18, the one that hasn't gotten the right attention yet from AWS.

Corey: Well, let's be clear: my gag about 17 ways to run containers on AWS was largely a joke, and it went around the internet three times. So, I wrote a list of them in the blog post “17 Ways to Run Containers on AWS” and people liked it. And then a few months later, I wrote “17 More Ways to Run Containers on AWS,” listing 17 additional services that all run containers.

And my favorite email that I think I've ever received in feedback was from a salty AWS employee, saying that one of them didn't really count because of some esoteric reason. And it turns out that when I'm trying to make the point that you have a sarcastic number of ways to run containers, pointing out that, well, one of them isn't quite valid doesn't really shatter the argument, let's be very clear here. So, I appreciate the feedback, I always do. And it's partially snark, but there is an element of truth to it, in that customers don't want to run containers, by and large. That is what they do in service of a business goal.

They want their application to run, which in turn serves the business goal, which continues to abstract out into, “Remain a going concern via the current position the company stakes out.” In your case, it is saving lives; in my case, it is fixing horrifying AWS bills and making fun of Amazon at the same time; and in most other places, there are somewhat more prosaic answers to that. But containers are simply an implementation detail, to some extent—to my way of thinking—of getting to that point. An important one [unintelligible 00:18:20], let's be clear; I was very anti-container for a long time. I wrote a talk, “Heresy in the Church of Docker,” that then was accepted at ContainerCon. It's like, “Oh, boy, I'm not going to leave here alive.”

And the honest answer is, many years later, Kubernetes solves almost all of the criticisms that I had, with the downside of, well, first you have to learn Kubernetes, and that continues to be mind-bogglingly complex from where I sit. There's a reason that I've registered kubernetestheeasyway.com and repointed it to ECS, Amazon's container service that doesn't require you to cosplay as a cloud provider yourself. But even ECS has a number of challenges to it, I want to be very clear here. There are no silver bullets in this.

And you're completely correct in that I have a large, complex environment, and the application is nuanced, and I'm willing to invest a few weeks in setting up the baseline underlying infrastructure on AWS with some of these services—ideally not all of them at once, because that's something a lunatic would do—but getting them up and running. The other side of it, though, is that if I am trying to evaluate a cloud provider's handling of containers and how this stuff works, the reason that everyone starts with a Hello World-style example is that it delivers, ideally, the shortest mean time to dopamine. There's a reason that Hello World doesn't have 18 different dependencies across a bunch of different databases and message queues and all the other complicated parts of running a modern application: because you just want to see how it works out of the gate.
And if getting that baseline empty container that just returns the string ‘Hello World' is that complicated and requires that much work, my takeaway is not that this user experience is going to get better once I make the application itself more complicated. So, I find that off-putting.

My approach has always been: find something that I can get the easy, minimum viable thing up and running on, and then, as I expand, know that you'll be there to catch me as my needs intensify and become ever more complex. But if I can't get the baseline thing up and running, I'm unlikely to be super enthused about continuing to beat my head against the wall like, “Well, I'll just make it more complex. That'll solve the problem.” Because it often does not. That's my position.

Casey: Yeah, I agree that the dopamine hit is valuable in getting people attached to, and wanting to invest in, whatever tech stack they're using. The challenge is the second part of that: will it grow with me, scale with me, and support the complex edge cases that I have? And the problem I've seen is a lot of organizations will start with something that's very easy to get started with, quickly outgrow it, and then come up with all sorts of weird Rube Goldberg-type solutions, because they jumped all in before seeing—I've got kind of an example of that.

I'm happy to announce that there are now 18 ways to run containers on AWS. Because for your use case, in the spirit of AWS customer obsession, I hear you, and I've created an open-source project that I want to share, called Quinntainers—

Corey: Oh, no.

Casey: —and it solves—yes. Quinntainers is live and is ready for the world. So, now we've got 18 ways to run containers. And if you have Corey's use case of, “Hey, here's my container. Run it for me,” now we've got a single command that you can run to get things going for you. I can share a link for you and you can check it out. This is a [unintelligible 00:21:38]—

Corey: Oh, we're putting that in the [show notes 00:21:37], for sure. In fact, if you go to snark.cloud/quinntainers, you'll find it.

Casey: You'll find it. There you go. The idea here was this: there was a real use case that you had, and I looked at it—AWS does not have an out-of-the-box simple solution for you. I agree with that. And Google Cloud Run does. Well then, the answer from AWS would have been, “Here, we need to make that solution.” And so that's what this was: a way to demonstrate that it is a solvable problem. AWS has all the right primitives; that use case just hadn't been covered.

So, how does Quinntainers work? Real straightforward: it's a command-line—it's an npm tool. You just run npx quinntainer; it sets up a GitHub Actions role in your AWS account, it then creates a GitHub Actions workflow in your repo, and then uses the Quinntainer GitHub action—a reusable action—that creates the image for you every time you push to the branch, pushes it up to ECR, and then automatically pushes that new version of the image up to App Runner for you. So, now it's using App Runner under the covers, but it's providing that nice developer experience that you are getting out of Cloud Run. Look, is Quinntainer really the right way to go with running containers? No, I'm not making that point at all. But the point is, it is a—

Corey: It might very well be.

Casey: Well, if you want to show a good Hello World experience, Quinntainer's the best, because within 30 seconds, your app is now set up to continuously deliver containers into AWS for your very specific use case.
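The tail end of the pipeline Casey describes—a new image lands in ECR, then App Runner rolls it out—can be pictured with a short boto3 sketch. To be clear, this is a hypothetical standalone rendering of that one step, not Quinntainers' actual code (which runs inside a GitHub Actions workflow); the service ARN is a placeholder.

import boto3

apprunner = boto3.client("apprunner")

# With the App Runner service configured to track an ECR image tag, kicking
# off a deployment pulls the newest image for that tag and rolls it out.
apprunner.start_deployment(
    ServiceArn="arn:aws:apprunner:us-east-1:123456789012:service/example-svc/abc123"
)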
Casey: The problem is, it's not going to grow for you. I mean, it was something I did over the weekend just for fun; it's not something that would ever be worthy of hitching a real production workload up to. So, the point there is: you can build frameworks and tools that are very good at getting that initial dopamine hit, but then aren't necessarily going to be there for you as you mature and get more complex.

Corey: And yet, I've tilted a couple of times at the windmill of integrating GitHub Actions in anything remotely resembling a programmatic way with AWS services, as far as instance roles go. Are you using permanent credentials for this as stored secrets, or are you doing the OIDC handoff?

Casey: OIDC. So, what happens is the tool creates the IAM role for you with the trust policy on GitHub's OIDC provider, sets all that up for you in your account, locks it down so that just your repo and your main branch are able to assume the role, and the role is set up just to allow deployments to App Runner and the ECR repository. And then that's it. At that point, it's out of your way. And you just git push, and a couple minutes later, your updates are running in App Runner for you.

Corey: This episode is sponsored in part by our friends at Vultr. Optimized cloud compute plans have landed at Vultr to deliver lightning-fast processing power, courtesy of third-gen AMD EPYC processors, without the I/O or hardware limitations of a traditional multi-tenant cloud server. Starting at just 28 bucks a month, users can deploy general-purpose, CPU-, memory-, or storage-optimized cloud instances in more than 20 locations across five continents. Without looking, I know that once again, Antarctica has gotten the short end of the stick. Launch your Vultr optimized compute instance in 60 seconds or less on your choice of included operating systems, or bring your own. It's time to ditch convoluted and unpredictable giant tech company billing practices and say goodbye to noisy neighbors and egregious egress forever. Vultr delivers the power of the cloud with none of the bloat. "Screaming in the Cloud" listeners can try Vultr for free today with $150 in credit when they visit getvultr.com/screaming. That's G-E-T-V-U-L-T-R dot com, slash screaming. My thanks to them for sponsoring this ridiculous podcast.

Corey: Don't undersell what you've just built. Is this what I would use for a large-scale production deployment? Obviously not. But it has streamlined, and made incredibly accessible, things that have previously been very complex for folks to get up and running. One of the more disturbing themes behind some of the feedback I got was, at one point someone said, “Well, have you tried running a Docker container on Lambda? Because now it supports containers as a packaging format.” And I said no, because I spent a few weeks getting Lambda up and running back when it first came out, and I've basically been copying and pasting what I got working ever since, the way most of us do. And the response was, “Oh, that explains a lot.” With the implication being that I'm just a fool.

Maybe, but let's be clear: I am never the only person in the room who doesn't know how to do something; I'm just loud about what I don't know. And the failure mode of a bad user experience is that a customer feels dumb. And that's not okay, because this stuff is complicated, and when a user has a bad time, it's a bug. I learned that in 2012 from Jordan Sissel, the creator of Logstash.
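The OIDC setup Casey describes above follows a well-documented pattern, and it's worth sketching. Here is roughly what that trust relationship looks like via boto3—a sketch of the pattern, not Quinntainers' actual code; the account ID, repo, and role name are hypothetical placeholders.

import json
import boto3

iam = boto3.client("iam")

ACCOUNT_ID = "123456789012"          # hypothetical
REPO = "example-org/example-repo"    # hypothetical

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {
            # GitHub's OIDC identity provider, registered in the AWS account.
            "Federated": f"arn:aws:iam::{ACCOUNT_ID}:oidc-provider/token.actions.githubusercontent.com"
        },
        "Action": "sts:AssumeRoleWithWebIdentity",
        "Condition": {
            "StringEquals": {
                # Only workflows on this repo's main branch may assume the role.
                "token.actions.githubusercontent.com:sub": f"repo:{REPO}:ref:refs/heads/main",
                "token.actions.githubusercontent.com:aud": "sts.amazonaws.com",
            }
        },
    }],
}

iam.create_role(
    RoleName="github-deploy-example",  # hypothetical
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)
# A permissions policy scoped to ECR pushes and App Runner deployments would
# then be attached to this role; that part is omitted here.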
Corey: Jordan's been an inspiration to me for the last ten years, and that's something I try to live by: if a user has a bad time, something needs to get fixed. Maybe it's the tool itself, maybe it's the documentation, maybe the GitHub repo's readme needs to be restructured in a way that makes it accessible. Because I am not a trailblazer in most things, nor do I intend to be. I'm not the world's best engineer, by a landslide. Just look at my code and you'd question whether I'm an engineer at all. But if it's bad and it works, how bad is it? That's sort of the other side of it.

So, my problem is that there need to be a couple of things. Ignore for a second the aspect of making it the right answer to get something out the door. The fact is, I want to take this container and just run it, and you and I both reach for App Runner as the default AWS service that does this, because I've been swimming in the AWS waters a while and you're a frickin' AWS Container Hero, where it is expected that you know what most of these things do. For someone who shows up on the containers webpage—which, by the way, lists I believe 15 ways to run containers on mobile and 19 ways to run containers on non-mobile, which is just fascinating in its own right—it's overwhelming, it's confusing, and it doesn't make it abundantly clear what the golden path is. First, get it up and working, get it running, then you can add nuance and flavor and the rest. And I think that's something that's gotten overlooked in our mad rush to pretend that we're all Google engineers, circa 2012.

Casey: Mmm. I think people get stressed out when they try to run containers in AWS because they think, “What is that golden path?” You said golden path, and my advice to people is: there is no golden path. And the great thing about AWS is they do continue to invest in the solutions they come up with. I'm still bitter about Google Reader.

Corey: As am I.

Casey: Yeah. I put so much time into getting my perfect set of RSS feeds, and then I had to find somewhere else to—with AWS, the different offerings that are available for running containers, those are there intentionally; it's not by accident. They're there to solve specific problems, so the trick is finding what works best for you, and don't feel like one is better than another or is going to get more attention than the others. They each have different use cases.

And I approach it this way. I've seen a couple of different people do some great flowcharts—I think Forrest did one, Vlad did one—on ways to make the decision on how to run your containers. And I break it down to three questions. I ask people, first of all: where are you going to run these workloads? If someone says, “It has to be in the data center,” okay, cool, then ECS Anywhere or EKS Anywhere, and we'll figure out if Kubernetes is needed.

If they have specific requirements—so if they say, “No, we can run in the cloud, but we need privileged mode for containers,” or, “We need EBS volumes,” or, “We want really small container sizes,” like less than a quarter vCPU or less than half a gig of RAM—or if you have custom log requirements, Fargate is not going to work for you, so you're going to run on EC2. Otherwise, run it on Fargate. But that's the first question: figure out where you're going to run your containers. That leads to the second question: what's your control plane? But those are different—sort of related, but different—questions. And I only see six options there.
That's App Runner for your control plane, Lightsail for your control plane, ROSA if you're invested in OpenShift already, EKS if you either have momentum in Kubernetes or have a bunch of engineers with a bunch of Kubernetes experience—if you don't have either, don't choose it—or ECS. The last option is Elastic Beanstalk, but let's leave that as a—if you're not currently invested in Elastic Beanstalk, don't start today. But I look at those as: okay, first question, where am I going to run my containers? Second question, what do I want to use for my control plane? And there are different pros and cons to each of those.

And then the third question: how do I want to manage them? What tools do I want to use for managing deployment? All those other tools like Copilot or App2Container or Proton—those aren't my control plane; those aren't where I run my containers; that's how I manage, deploy, and orchestrate all the different containers. So, I look at it as those three questions. But I don't know, what do you think of that, Corey?

Corey: I think you're onto something. I think that is a terrific way of exploring that question. I would argue that setting up a framework like that—that one, or one very similar—is what the AWS containers page should be, just coming from the perspective of what the neophyte customer experience is. On some level, you almost need a slider of “choose your level of experience,” ranging from, “What's a container?” to, “I named my kid Kubernetes because I make terrible life decisions,” and anywhere in between.

Casey: Sure. Yeah, well, and I think that really dictates the control plane level. So, for example, Lightsail: where does Lightsail fit? To me, the value of Lightsail is the simplicity. I'm looking at monthly pricing: seven bucks a month for a container. I don't know how [unintelligible 00:30:23] works, but I can think in terms of monthly pricing. And it's tailored towards a console user, someone who just wants to click in and point to an image. That's a very specific user; there are thousands of customers that are very happy with that experience, and they use it. App Runner presents that scale-to-zero. That's one of the big selling points I see with App Runner. Likewise with Google Cloud Run: I've got that scale-to-zero. I can't do that with ECS, or EKS, or any of the other platforms. So, if you've got something that has a ton of idle time, I'd really be looking at those. And I would argue—I think I did the math—Google Cloud Run is about 30% more expensive than App Runner.

Corey: Yeah, if you disregard the free tier, I think—having it running persistently at all times throughout the month, to drop out the cold starts, would cost something like forty-some-odd bucks a month or something like that. Don't quote me on it. Again, and to be clear, I wound up doing this very congratulatory and complimentary tweet about them on, I think it was, Thursday, and then they immediately, apparently, took one look at this and said, “Holy shit. Corey's saying nice things about us. What do we do? What do we do?” Panic. And the next morning, they raised prices on a bunch of cloud offerings. Whew, that'll fix it. Like—

Casey: [laugh].

Corey: Did you miss the direction you were going in here? No, that's the exact opposite of what you should be doing. But here we are.
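Casey's three questions lend themselves to pseudocode. The sketch below is one condensed reading of the framework he lays out above—a simplification for illustration, not an official AWS decision tree; the function names are made up, and the size thresholds are the ones he cites.

def where_to_run(on_prem: bool, needs_privileged: bool, needs_ebs: bool,
                 vcpu: float, memory_gb: float, custom_logging: bool) -> str:
    """Question 1: where do the containers run?"""
    if on_prem:
        return "ECS Anywhere / EKS Anywhere"
    if (needs_privileged or needs_ebs or custom_logging
            or vcpu < 0.25 or memory_gb < 0.5):
        return "EC2"          # requirements Fargate can't satisfy
    return "Fargate"

def control_plane(wants_scale_to_zero: bool, wants_simple_monthly_pricing: bool,
                  invested_in_openshift: bool, kubernetes_momentum: bool) -> str:
    """Question 2: what's the control plane? (six options, per Casey)"""
    if wants_scale_to_zero:
        return "App Runner"   # the scale-to-zero option he highlights
    if wants_simple_monthly_pricing:
        return "Lightsail"
    if invested_in_openshift:
        return "ROSA"
    if kubernetes_momentum:
        return "EKS"          # only with momentum or in-house experience
    return "ECS"              # (Elastic Beanstalk only if already invested)

# Question 3—Copilot, App2Container, Proton—is about how you manage and deploy
# onto whatever the first two answers produce, not about where things run.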
Corey: Interestingly enough, to tie our two conversation threads together: when I look at an AWS bill, unless you're using Fargate, I can't tell whether you're using Kubernetes or not, because EKS is a small charge in almost every case—for the control plane, or Fargate under it. Everything else just manifests as EC2 spend, from the perspective of the cloud provider. If you're running a Kubernetes cluster, it is a single-tenant application that can have some very funky behaviors, like cross-AZ chatter back and forth, because there's no internal mechanism to say, “Talk to the free thing rather than the two-cents-a-gigabyte thing.” It winds up spinning up and down in a bunch of different ways, and the behavior patterns, because of how placement works, are not necessarily deterministic, depending upon workload. And that becomes something people find odd when it's, “Okay, you've looked at our bill for a week. What can you say?” “Well, first question: are you running Kubernetes at all?” And they're like, “Who invited these clowns?”

Understand, we're not prying into your workloads, for a variety of excellent legal and contractual reasons here. We are looking at how they behave, and for specific workloads, once we have a conversation with the engineering team, yeah, we're going to dive in. But it is not at all intuitive, from the outside, to make any determination of whether you're running containers, or whether you're running VMs that you just haven't done anything with in 20 years, or what exactly is going on. And that's just an artifact of the billing system.

Casey: We ran into this challenge at Gaggle. We don't use EKS, we use ECS, but we have some shared clusters, lots of EC2 spend, and it's hard to figure out which teams are creating the services that are running that up. We actually ended up creating a tool—we open-sourced it—called ECS Chargeback. What it does is look at the CPU and memory reservations for each task definition, prorate the overall charge of the ECS cluster, and then create metrics in Datadog to give us a breakdown of cost per ECS service. And it also measures what we like to refer to as waste, right? Because if you're reserving four gigs of memory but your utilization never goes over two gigs, we're paying for that reservation while you're underutilizing it. So, we're able to also show which services have the highest degree of waste, not just utilization, so it helps us go after it. But this is a hard problem. I'd be curious: how do you approach these shared ECS resources and slicing and dicing those bills?

Corey: Everyone has a different approach, too. There is no unifiable, correct answer. A previous show guest, Peter Hamilton, over at Remind had done something very similar and open-sourced a bunch of these things. Understanding what your spend is, is important here, and it comes down to getting at the actual business concern, because in some cases, effectively, dead reckoning is enough. You take a look at the cluster that is really hard to attribute because it's a shared service. Great: it is 5% of your bill. First pass, why don't we just agree that it is a third for Service A, two-thirds for Service B, and we'll call it mostly good at that point? That can be enough in a lot of cases.

With scale, [laugh] you're just sort of hand-waving over many millions of dollars a year there. How about we get into some more depth? And then you start instrumenting and reporting to something—be it CloudWatch, be it Datadog, be it something else—and understanding what the use case is. In some cases, customers have broken apart shared clusters for that specific reason. I don't think that's necessarily the best approach from an engineering perspective, but again, this is not purely an engineering decision.
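The proration-plus-waste idea Casey describes is simple enough to sketch. This is a rough illustration of the approach, not Gaggle's actual ECS Chargeback code—it prorates on memory reservation only, for brevity (the real tool considers CPU as well and ships metrics to Datadog), and all the names and numbers are hypothetical.

from dataclasses import dataclass

@dataclass
class Service:
    name: str
    reserved_memory_gb: float   # task definition reservation x running tasks
    peak_memory_gb: float       # observed peak utilization

def chargeback(cluster_monthly_cost: float, services: list[Service]) -> None:
    # Prorate the shared cluster's cost by each service's share of the total
    # reservation, and report waste as the unused fraction of that reservation.
    total_reserved = sum(s.reserved_memory_gb for s in services)
    for s in services:
        share = s.reserved_memory_gb / total_reserved
        cost = cluster_monthly_cost * share
        waste = max(0.0, s.reserved_memory_gb - s.peak_memory_gb) / s.reserved_memory_gb
        print(f"{s.name}: ${cost:,.2f}/mo ({share:.0%} of cluster), waste {waste:.0%}")

chargeback(10_000.00, [
    Service("api",    reserved_memory_gb=4.0, peak_memory_gb=2.0),  # 50% waste
    Service("worker", reserved_memory_gb=2.0, peak_memory_gb=1.8),
])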
Corey: It comes down to serving the business need. And if you're taking partial credits on that cluster—for an R&D tax credit, for example—you want that position to be extraordinarily defensible, and spending a few extra dollars to ensure that it is, is the right business decision. I mean, again, we're pure advisory; we advise customers on what we would do in their position, but people often mistake that to mean we're going to go for the lowest possible price—bad idea—or that we're going to wind up doing this from a purely engineering-centric point of view.

It's: be aware that in almost every case, with some very notable weird exceptions, the AWS bill costs significantly less than the payroll expense of the people working on the AWS environment in various ways. People are more expensive. So, if the idea is, well, you can save a whole bunch of engineering effort by spending a bit more on your cloud—yeah, let's go ahead and do that.

Casey: Yeah, good point.

Corey: The real mark of someone who's senior enough is that their answer to almost any question is, “It depends.” And I feel I've fallen into that trap as well. Much as I'd love to sit here and say, “Oh, it's really simple. You do X, Y, and Z,” yeah… honestly, my answer, the simple answer, is: I think that we orchestrate a cyber-bullying campaign against AWS through the AWS wishlist hashtag; we get people to harass their account managers with repeated requests for, “Hey, could you go ahead and [dip 00:36:19] that thing in—give that a plus-one for me in whatever internal system you're using?” Just because this is a problem we're seeing more and more, and given that it's an unbounded growth problem, we're going to see it more and more for the foreseeable future. So, I wish I had a better answer for you, but yeah. “That stuff's super hard” is honest, but it's also not the most useful answer for most of us.

Casey: I'd love feedback from you or anyone on your team on that tool we created. I can share a link after the fact. ECS Chargeback is what we call it.

Corey: Excellent. I will follow up with you separately on that. That is always worth diving into; I'm curious to see new and exciting approaches to this. Just be aware that we have an obnoxious talent sometimes for seeing these things and—“Well, what about…”—asking about some weird corner edge case that either invalidates the entire thing, or leaves you going, “Who on earth would ever have a problem like that?” And the answer is always, “The next customer.”

Casey: Yeah.

Corey: Even for a bounded problem space like the AWS bill, every time I think I've seen it all, I just have to talk to one more customer.

Casey: Mmm. Cool.

Corey: In fact, the way that we approached your teardown in the restaurant is how we launched our first-pass approach, because the value in something like that is different from the value of a six-to-eight-week-long deep-dive engagement into every nook and cranny. And—

Casey: Yeah, for sure.
It was valuable to us.

Corey: Yeah. Having someone come in to just spend a day with your team, diving into it up one side and down the other, seems like a weird thing—like, “How much good could you possibly do in a day?” And the answer, in some cases, is a lot. With Honeycomb, in a couple of days of something like this, we wound up blowing 10% off their entire operating budget for the company; it led to an increased valuation, and Liz Fong-Jones has said—on multiple occasions—that the company would not be what it is without our efforts on their bill, which is just incredibly gratifying to hear. It's easy to get lost in the idea of, well, it's the AWS bill; it's just making big companies spend a little bit less with another big company. And that's not exactly, you know, saving the lives of K-through-12 students here.

Casey: It's opening up opportunities.

Corey: Yeah. It's about optimizing for the win for everyone, because now AWS gets a lot more money from Honeycomb than they would if Honeycomb had not continued on their trajectory. You can charge customers a lot right now, or you can charge them a little bit over time and grow with them in a partnership context. I've always opted for the second model rather than the first.

Casey: Right on.

Corey: But here we are. I want to thank you for taking so much time out of, well, several days now, to argue with me on Twitter, which is always appreciated, particularly when it's, you know, constructive—thanks for that—

Casey: Yeah.

Corey: —for helping me get my business partner to re:Invent—although then he got me that horrible 1,000-piece puzzle of the Cloud Native Computing Foundation landscape, and now I don't ever want to see him again; so, you know, that happens—and of course, for spending the time to write Quinntainers, which is going to be at snark.cloud/quinntainers as soon as we're done with this recording. Then I'm going to kick the tires and send some pull requests.

Casey: Right on. Yeah, thanks for having me. I appreciate you starting the conversation. I would just conclude with: I think that, yes, there are a lot of ways to run containers in AWS; don't let it stress you out. They're there with intention; they're there by design. Understand them.

I would also encourage people to go a little deeper, especially if you've got a significantly large workload. You've got to get your hands dirty. As a matter of fact, there's a hands-on lab that a company called Liatrio does. They call it their Night Lab; it's a free, one-day, hands-on lab where you run legacy monolithic Java applications on Kubernetes. It gives you first-hand experience and goes all the way up into observability and doing things like canary deployments. It's a great, great lab. But you've got to do something like that to really get your hands dirty and understand how these things work.

So, don't sweat it; there's not one right way. There's a way that will probably work best for each user. Just take the time to understand the options, and make sure you're applying the one that's going to give you the most runway for your workload.

Corey: I will definitely dig into that myself. But I think you're right; I think you've nailed a point that is, again, a nuanced one and challenging to put in a rage tweet. The services don't exist in a vacuum. They're not there because, despite the joke, someone wants to get promoted.
It's because there are customer needs driving them, and each one is another way of meeting those needs. I think there could be better guidance, but I also understand that there are a lot of nuanced perspectives here, and that… hell is someone else's workflow—

Casey: [laugh].

Corey: —and there's always value in broadening your perspective a bit on those things. If people want to learn more about you and how you see the world, where's the best place to find you?

Casey: Probably on Twitter: twitter.com/nektos, N-E-K-T-O-S.

Corey: That might be the first time Twitter has been described as the best place for anything. But—

Casey: [laugh].

Corey: Thank you once again for your time. It is always appreciated.

Casey: Thanks, Corey.

Corey: Casey Lee, CTO at Gaggle and AWS Container Hero—and, apparently, writer of code in anger to invalidate my points, which is always appreciated. Please do more of that, folks. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, or in the YouTube comments, which are always a great place to go reading. Whereas if you've hated this podcast, please leave a five-star review in the usual places, along with an angry comment telling me that I'm completely wrong, and then launch your own open-source tool to point out exactly what I've gotten wrong this time.

Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business, and we get to the point. Visit duckbillgroup.com to get started.

Announcer: This has been a HumblePod production. Stay humble.