Software to manage containers on a server cluster
Cloud cost is no longer just a finance team issue. It is now a business-wide concern, and treating it as a budgeting exercise is holding companies back. In this episode, I'm joined by John Purcell, Chief Product Officer at DoiT, to unpack how organisations can rethink cloud financial management through a risk-first lens. With the FinOps X conference just around the corner, John also gives us a preview of the themes likely to dominate the agenda this year, from Kubernetes complexity to the role of AI in governance. DoiT is not just another optimisation tool. Their Cloud Intelligence platform helps companies align performance, reliability, and security with financial strategy. John explains how cost spikes are not just common, they are almost guaranteed. He shares a real example of a ninety-thousand dollar AWS bill caused by a minor configuration error and what could have prevented it. We talk about the shift from reactive cost reviews to proactive financial defence. Think of it like cybersecurity but applied to your cloud budget. That mindset change is something FinOps teams need to embrace quickly. It is not just about watching what is spent, but understanding intent and outcomes across the business. John also introduces the concept of the FinOps fabric, a combination of technology, processes, and culture that helps teams align on goals and mitigate risks. And while AI and automation are transforming how teams interact with cloud platforms, they are still responsible for driving most cloud spend today. We dig into what needs to change before AI can truly become an optimisation asset rather than another cost driver. Whether you're gearing up for FinOps X or trying to get a better grip on cloud cost management, this episode offers practical insight into the tools, strategies, and cultural shifts that can help your team stay ahead. To learn more, visit doit.com or connect with the team at booth G10 if you are attending the FinOps X event in San Diego.
Guests are Nick Eberts and Jon Li. Nick is a Product Manager at Google working on Fleets and Multi-Cluster and Jon is a Software Engineer at Google working on AI Inference on Kubernetes. We discussed the newly announced Multi Cluster Orchestrator (MCO) and the challenges of running multiple clusters. Do you have something cool to share? Some questions? Let us know: - web: kubernetespodcast.com - mail: kubernetespodcast@google.com - twitter: @kubernetespod - bluesky: @kubernetespodcast.com News of the week Etcd has released version 3.6.0 Kubernetes 1.33 is now available in the Rapid channel in GKE Kyverno 1.14.0 was released Links from the interview Nick Eberts on LinkedIn Jon Li on LinkedIn MCO Blog MCO Repo Cluster Inventory API ClusterProfile API Gemma 3 vLLM Sample (deploy on Google Cloud using Terraform and Argo CD) Hello World Sample (deploy on Google Cloud using Terraform and Argo CD) Gateway API Inference Extension
Deploying cloud-centric technologies such as Kubernetes in edge environments poses challenges, especially for mission-critical defense systems. In this podcast from the Carnegie Mellon University Software Engineering Institute (SEI), Patrick Earl, Doug Reynolds, and Jeffrey Hamed, all DevOps engineers in the SEI's Software Solutions Division, sit down with senior researcher Jose Morales to discuss a recent case study involving the deployment of a hypervisor onto edge devices in a resource-constrained environment.
Fresh off Red Hat Summit, Chris is eyeing an exit from NixOS. What's luring him back to the mainstream? Our highlights, and the signal from the noise from open source's biggest event of the year. Sponsored By: Tailscale: Tailscale is programmable networking software that is private and secure by default - get it free on up to 100 devices! 1Password Extended Access Management: 1Password Extended Access Management is a device trust solution for companies with Okta, and they ensure that if a device isn't trusted and secure, it can't log into your cloud apps. Support LINUX Unplugged
At KubeCon EU 2025 in London, Nirmal and I discussed the important (and not-so-important) things you might have missed. There's also a video version of this show on YouTube. Creators & Guests: Cristi Cotovan - Editor; Beth Fisher - Producer; Bret Fisher - Host; Nirmal Mehta - Host. (00:00) - DDT Audio Podcast Edited (00:04) - Intro (01:24) - KubeCon 2025 EU Overview (03:24) - Platform Engineering and AI Trends (07:03) - AI and Machine Learning in Kubernetes (15:38) - Project Pavilions at KubeCon (17:05) - FinOps and Cost Optimization (20:39) - HAProxy and AI Gateways (24:00) - Proxy Intelligence and Network Layer Optimization (26:52) - Developer Experience and Organizational Challenges (29:23) - Platform Engineering and Cognitive Load (35:54) - End of Life for CNCF Projects. You can also support my free material by subscribing to my YouTube channel and my weekly newsletter at bret.news! Grab the best coupons for my Docker and Kubernetes courses. Join my cloud native DevOps community on Discord. Grab some merch at Bret's Loot Box. Homepage: bretfisher.com
In this episode, we dissect industry-shaping stories, debating CoreWeave's $35 billion IPO, AWS Transform: AI for legacy app modernisation, the exciting intersection of quantum computing and AI, and how much Nvidia is investing in the market - are they becoming the new Microsoft, Apple or Google? Whether you're deep in tech, cloud services, AI innovation, or market dynamics, this episode delivers sharp analysis, insightful predictions, and essential context to stay ahead in a rapidly evolving technological landscape. Hosts: https://www.linkedin.com/in/jonathanshanks/ https://www.linkedin.com/in/lewismarshall/
In this episode, Bill Kennedy interviews Ryan Ryke, founder of CloudLife Consulting, focusing on AWS and cloud computing. They discuss the challenges of understanding AWS billing, the importance of managing cloud storage, and the benefits of using services like Cloud Run and Fargate. The conversation also touches on the evolution of engineering perspectives on complexity, the shift towards simpler infrastructure solutions, and personal experiences with technology. 00:00 Introduction 00:30 What is Ryan Doing Today? 9:00 Cloud Run Experience 13:00 Handling Complexity 21:00 Running Local LLMs 25:30 First Memory of a Computer 33:20 Entering University 36:30 Relevant Education 42:00 Early Industry 53:00 Trading Stocks 1:05:00 Discovering AWS 1:10:00 Starting a Business 1:16:00 Maintaining Steady Clients 1:22:00 Contact Info. Connect with Ryan: LinkedIn: https://www.linkedin.com/in/ryanryke X: https://x.com/itsacloudlife99 Email: ryan@cloudlife.io Mentioned in this Episode: CloudLife Consulting: https://www.cloudlife.io/ Fargate: https://aws.amazon.com/fargate/ Want more from Ardan Labs? You can learn Go, Kubernetes, Docker & more through our video training, live events, or through our blog! Online Courses: https://ardanlabs.com/education/ Live Events: https://www.ardanlabs.com/live-training-events/ Blog: https://www.ardanlabs.com/blog Github: https://github.com/ardanlabs
AI is reshaping the fundamental economics of startups—lowering product development costs, compressing GTM cycles, and rewriting the rules of competition. In this episode, Craig McLuckie (Co-Founder & CEO @ Stacklok, co-creator of Kubernetes) unpacks "the epoch of the startup," a moment of massive disruption where fast-moving founders have a unique edge over incumbents. We explore how Craig is navigating this new era, from rethinking cost structure, value capture, and defensibility to leveraging open-source, community, and asymmetric advantages as core pillars of Stacklok's strategy. Craig shares lessons from pivotal product shifts, frameworks for identifying moats, and the broader societal implications of AI-driven disruption. Whether you're leading a startup, pivoting in the face of AI, or thinking about your next big move, this conversation offers a strategic playbook for thriving in today's shifting landscape. How do you see AI reshaping the startup landscape? Join the discussion on our forum and share your insights, questions, and takeaways. ABOUT CRAIG MCLUCKIE: Craig is the CEO and co-founder of Stacklok, where his team is working to tip AI code generation on its side, from vertical, closed solutions to horizontal, aligned systems. Craig was previously CEO and co-founder of Heptio, which was acquired by VMware in 2018; he has also led product and engineering teams at Google and Microsoft. Craig is a co-creator of Kubernetes and he bootstrapped and chaired the Cloud Native Computing Foundation. This episode is brought to you by Side – delivering award-winning QA, localization, player support, and tech services for the world's leading games and technology brands. For over 30 years, Side has helped create unforgettable user experiences—from indies to AAA blockbusters like Silent Hill 2 and Baldur's Gate 3. Learn more about Side's global solutions at side.inc. SHOW NOTES: Why this moment is "the epoch of the startup" (2:03) How AI shifts startup economics: from cost structures to value capture (4:18) Why incumbents struggle during disruption—and how startups can win (8:17) The origin story behind Stacklok & lessons from Craig's pivot (11:04) Frameworks for identifying asymmetric advantages as a founder (14:48) How to map your unique asymmetric advantages to new opportunities and secure stakeholder buy-in (16:34) Rethinking defensibility & value capture in the AI era (16:29) How Craig applied cost, GTM & product perspectives to strategic pivots @ Stacklok (18:07) Building investment theses: Aligning cultural strengths & asymmetric advantages with evolving opportunities (20:05) Determining your startup's investment themes (22:53) Structuring experiments & validating opportunities (24:15) Defensibility & building community-driven moats in early ideation phases (26:54) Signals of early community-product alignment (31:24) Conversation frameworks to assess asymmetric advantages (32:22) Societal implications of AI disruption & the "startup epoch" (35:14) Rapid fire questions (38:12) This episode wouldn't have been possible without the help of our incredible production team: Patrick Gallagher - Producer & Co-Host; Jerry Li - Co-Host; Noah Olberding - Associate Producer, Audio & Video Editor https://www.linkedin.com/in/noah-olberding/; Dan Overheim - Audio Engineer, Dan's also an avid 3D printer - https://www.bnd3d.com/; Ellie Coggins Angus - Copywriter, check out her other work at https://elliecoggins.com/about/
Max Körbächer, Managing Partner at Liquid Reply, discusses the coming of age of the Kubernetes ecosystem and how and when an organisation should use it to build its platform. Also, he touches on how to measure its success and how WebAssembly and Kubernetes can play together to obtain the most effective usage of your infrastructure. Read a transcript of this interview: https://bit.ly/3RK7DuP Subscribe to the Software Architects' Newsletter for your monthly guide to the essential news and experience from industry peers on emerging patterns and technologies: https://www.infoq.com/software-architects-newsletter Upcoming Events: InfoQ Dev Summit Boston (June 9-10, 2025) Actionable insights on today's critical dev priorities. devsummit.infoq.com/conference/boston2025 InfoQ Dev Summit Munich (October 15-16, 2025) Essential insights on critical software development priorities. https://devsummit.infoq.com/conference/munich2025 QCon San Francisco 2025 (November 17-21, 2025) Get practical inspiration and best practices on emerging software trends directly from senior software developers at early adopter companies. https://qconsf.com/ QCon AI New York 2025 (December 16-17, 2025) https://ai.qconferences.com/ The InfoQ Podcasts: Weekly inspiration to drive innovation and build great teams from senior software leaders. Listen to all our podcasts and read interview transcripts: - The InfoQ Podcast https://www.infoq.com/podcasts/ - Engineering Culture Podcast by InfoQ https://www.infoq.com/podcasts/#engineering_culture - Generally AI: https://www.infoq.com/generally-ai-podcast/ Follow InfoQ: - Mastodon: https://techhub.social/@infoq - Twitter: twitter.com/InfoQ - LinkedIn: www.linkedin.com/company/infoq - Facebook: bit.ly/2jmlyG8 - Instagram: @infoqdotcom - Youtube: www.youtube.com/infoq Write for InfoQ: Learn and share the changes and innovations in professional software development. - Join a community of experts. - Increase your visibility. - Grow your career. https://www.infoq.com/write-for-infoq
Sean tells us about bootable containers and asks for our opinions on how he plans to use them with Kubernetes. He mentions Talos Linux. Send your questions and feedback to show@hybridcloudshow.com Insta360 X5 Camera: To get a free invisible selfie stick worth US$24.99 with your purchase, go to...
In this episode, we're bringing you a curated selection of conversations from the KubeCon EU 2025 showfloor. We'll be diving into the rise of platform engineering, exploring some cutting-edge technologies, getting updates on core Kubernetes components, and hearing some truly unique user stories, like using Kubernetes on a dairy farm! Do you have something cool to share? Some questions? Let us know: - web: kubernetespodcast.com - mail: kubernetespodcast@google.com - twitter: @kubernetespod - bluesky: @kubernetespodcast.com News of the week CNCF Blog - Announcing the Automated Governance Maturity Model Kubernetes Blog CNCF Blog - Understanding Kubernetes Gateway API: A Modern Approach to Traffic Management Open Observability Summit Links from the interview NAIS at NAV, with Hans Kristian Flaatten and Audun Fauchald Strand Audun Fauchald Strand Hans Kristian Flaatten NAV (Norwegian Labor and Welfare Administration) Kubernetes Podcast 216: NAIS, with Johnny Horvi and Frode Sundby NAIS KubeCon EU 2025 Keynote: Adventures of Building a Platform as a Service for the Government - Hans Kristian Flaatten, Lead Platform Engineer, NAV & Audun Fauchald Strand, Principal Software Engineer, NAV GKE release notes Platform Engineering, with Max Körbächer and Andreas (Andi) Grabner Max Körbächer Andreas (Andi) Grabner Book: “Platform Engineering for Architects: Crafting modern platforms as a product” by Max Körbächer, Andreas Grabner, and Hilliary Lipsig Cloud Native Summit Munich Kubernetes at LinkedIn, with Ahmet Alp Balkan and Ronak Nathani Ahmet Alp Balkan Ronak Nathani Kubernetes Podcast 249: Kubernetes at LinkedIn, with Ahmet Alp Balkan and Ronak Nathani Ahmet's Blog Introducing Multi-Cluster Orchestrator: Scale your Kubernetes workloads across regions LLMs on Kubernetes, with Mofi and Abdel KubeCon EU 2025 talk: Yes You Can Run LLMs on Kubernetes - Abdel Sghiouar & Mofi Rahman, Google Cloud About the Gateway API Gateway API Inference Extension Deploy GKE Inference Gateway SIG etcd with Ivan Valdes Ivan Valdes etcd.io SIG etcd on GitHub Open Source Kubernetes, with Jago Macleod Jago Macleod Google Open Source: Kubernetes Schedmd Slurm Ray Run:ai from Nvidia Medium blog: “Deploy Slurm on GKE” by Abdel Sghiouar AI-Hypercomputer, xpk XPK (Accelerated Processing Kit, pronounced x-p-k) is a command line interface that simplifies cluster creation and workload execution on Google Kubernetes Engine (GKE). XPK generates preconfigured, training-optimized clusters and allows easy workload scheduling without any Kubernetes expertise. Cursor AI Editor Dairy Farm Automation & Banking with Kubernetes, with Clément Nussbaumer Clément Nussbaumer Talos Linux Cluster-api Cluster API is a Kubernetes subproject focused on providing declarative APIs and tooling to simplify provisioning, upgrading, and operating multiple Kubernetes clusters. KubeCon EU 2025 Talk: “Day-2'000 - Migration From Kubeadm+Ansible To ClusterAPI+Talos: A Swiss Bank's Journey” - Clément Nussbaumer, PostFinance Kubeadm Kubeadm is a tool built to provide kubeadm init and kubeadm join as best-practice "fast paths" for creating Kubernetes clusters. Being a First-Time KubeCon Attendee, with Nick Taylor Kubernetes The Hard Way K3s - “The certified Kubernetes distribution built for IoT & Edge computing” Kubernetes Ingress Controllers Kubernetes Up and Running Kubernetes Docs KubeCon EU 2025 Sponsored Keynote: The Science of Winning: Oracle Red Bull Racing's Formula with Open Source, Kubernetes and AI - Sudha Raghavan, SVP of OCI Developer Platform, Oracle
In this episode, Danielle Tal and Thilo Fromm join us to discuss Flatcar Linux. They introduce Flatcar as a Linux operating system designed specifically for containers and Kubernetes workloads, highlighting its automation, self-healing capabilities, and security features. They emphasize how Flatcar simplifies operations for startups and large companies alike by automating OS provisioning and maintenance. We discussed contributor engagement and the project's involvement with the CNCF. They also share intriguing use cases, like a Kubernetes cluster running on a tractor fleet, and stress the importance of community contributions, not just in code but in evangelism and documentation. 00:00 Introduction 01:05 What is Flatcar? 02:01 Flatcar's Automation and Self-Healing Capabilities 04:10 User Experience and Testing 05:06 Ideal Users and Use Cases 10:36 Community and Contributions 13:38 Getting Started with Contributions 16:59 Impact and Future Directions 19:58 Conclusion and Final Thoughts Guests: Danielle Tal is a Program Manager at Microsoft and an integral part of the team responsible for maintaining Flatcar Container Linux. The team contributes to Linux OS distributions and Linux Security within Azure and other upstream projects. With a background in supporting diverse enterprise cloud applications as a support engineer, Danielle has transitioned into a management role, overseeing Docker EMEA support before joining the Flatcar team. Thilo Fromm is an engineering manager and works on Community Linux distributions and Linux Security at Azure. Thilo's team helps maintain Flatcar Container Linux. He has given talks at FOSDEM, FrOSCon, KubeCon, Open Source Summit, Cloud-Native Rejekts, and various meetups like Kubernetes Community Days. Thilo started his career in embedded systems with hardware design and roll-your-own/from-scratch embedded Linux, kernel- and plumbing-level development, and later virtualisation. After working for various cloud providers in engineering and management positions, he went full cloud native in 2019. Nowadays Thilo works on operating systems for cloud-native environments with a special focus on Flatcar Container Linux.
In episode #168 of Kubicast, we welcome Natália Granato, a platform engineering specialist, for a sharp conversation about real-world DevOps, focused on troubleshooting, production incidents, and good practices born out of chaos rather than out of a whitepaper. Natália shared stories every infrastructure team will recognize: the ghost bug that only shows up in production, the flood of useless alerts hiding the real problem, and the lessons that only come after going through the fire. Sometimes literally. More than talking about tools, the conversation dives into the culture behind a good incident-resolution process, where communication and trust within the team matter as much as any script or dashboard. In between jokes, we discussed: how to build an honest post-mortem culture without witch hunts; the impact of a poorly calibrated observability stack (spoiler: noise is the enemy); when it is not a DNS problem, but you still think it is; tools that help (and those that get in the way); and the pressure of keeping critical environments running without losing your sanity. If you have ever been through a production incident and thought "this can't be happening only to me", this episode is for you. And if you haven't yet, listen so you are better prepared, because it will happen. Also watch Kubicast on YouTube, and share it with that colleague who keeps saying "ship it to production and we'll see"; maybe they really will see. Kubicast is produced by Getup, a company specializing in Kubernetes and open source projects for Kubernetes. Podcast episodes are available on the main digital audio platforms and at YouTube.com/@getupcloud.
In this conversation, Infant Mystica shares her journey of building a professional network through social media, the innovative language translation technology she works on, and the dynamics of remote work across different time zones. She discusses her aspirations for career growth, the importance of attending tech conferences, and her experiences as a software developer in the evolving tech landscape. 00:00 Introduction 00:30 What is Mystica Doing Today? 03:10 First Experiences with a Computer 05:30 Highschool Interests / Classes 13:00 Entering University 15:00 Interest in Computer Science 22:00 College Living 27:00 COVID-19 and Remote Learning 34:00 Entering Industry / Networking 41:30 First Job at Translate Live 50:00 Working Hours and Time Zones 55:00 Traveling and Exploration. Connect with Mystica: LinkedIn: https://www.linkedin.com/in/infant-mystica-810776195/ X: https://x.com/mysticainf Mystica's Site: https://mystica.me Mentioned in this Episode: TranslateLive: https://www.translatelive.com/ Want more from Ardan Labs? You can learn Go, Kubernetes, Docker & more through our video training, live events, or through our blog! Online Courses: https://ardanlabs.com/education/ Live Events: https://www.ardanlabs.com/live-training-events/ Blog: https://www.ardanlabs.com/blog Github: https://github.com/ardanlabs
Cloud Posse holds LIVE "Office Hours" every Wednesday to answer questions on all things related to AWS, DevOps, Terraform, Kubernetes, and CI/CD. Register at https://cloudposse.com/office-hours. Support the show.
“Before starting a new AI project, it is really worthwhile defining the business priority first,” asserts Joanna Hodgson, the UK and Ireland regional leader at Red Hat. “What specific problem are you trying to solve with AI? Do we need a general purpose AI application or would a more focused model be better? How will we manage security, compliance and governance of that model? This process can help to reveal where AI adoption makes sense and where it doesn't," she added. In this episode of the Tech Transformed podcast, host Shubhangi Dua, podcast producer at EM360Tech, speaks with Hodgson, a seasoned business and technical leader with over 25 years of experience at IBM and Red Hat. They talk about the challenges of scaling AI projects, the importance of open source in compliance with GDPR, and the geopolitical aspects of AI innovation. They also discuss the role of small language models (SLMs) in enterprise applications and the collaboration between IBM and Red Hat in advancing AI technology. Joanna emphasises the need for a strategic approach to AI and the importance of data quality for sustainable business practices. While large language models (LLMs) dominate headlines, SLMs offer a cost-effective and efficient alternative for specific tasks. The podcast answers key questions, like ‘how do businesses balance ethical considerations, moral obligations, and even patriotism with the drive for AI advancement?' Hodgson shares her perspective on how open source can facilitate this balance, ensuring AI works for everyone, not just those with the deepest pockets. Hodgson also shares her vision of the future of AI: interconnected small AI models, agentic AI, and a world where AI frees up teams to create personal connections and exceptional customer experiences. Takeaways: Curiosity is a strength in technology. AI is becoming embedded in existing applications. Regulatory compliance is crucial for AI systems. Open source can enhance trust and transparency. Small language models are efficient for specific tasks. AI should free teams to create personal connections. A strategic AI platform is essential for businesses. Data quality is key for sustainable business success. Collaboration in open source accelerates innovation. AI can be used for both good and bad outcomes. Chapters: 00:00 Introduction to the Tech Transformed Podcast 01:35 Pivotal Moments in Joanna's Career 05:12 Challenges in Scaling AI Projects 09:15 Open Source and GDPR Compliance 13:11 Regulatory Compliance and Data Security 17:30 Geopolitical Aspects of AI Innovation 22:31 Collaboration Between IBM and Red Hat 23:58 Understanding Small Language Models 29:54 Future Trends in AI and Sustainability About Red Hat: Red Hat is a leading provider of enterprise open source solutions, using a community-powered approach to deliver high-performing Linux, hybrid cloud, edge, and Kubernetes technologies. The company is known for Enterprise Linux. They offer a wide range of hybrid cloud platforms and open source...
A big episode covering a wide range of topics: Java, Scala, Micronaut, NodeJS, AI and developer skills, sampling in LLMs, DTOs, vibe coding, the changes at Broadcom and Red Hat, and several news items about open source licenses. Recorded on May 7, 2025. Download the episode LesCastCodeurs-Episode-325.mp3 or watch the video on YouTube. News Languages For JavaOne and the launch of Java 24, Oracle has launched a new site with video resources for learning the language: https://learn.java/ The site is aimed mostly at beginners and teachers; it also covers the syntax, including more recent additions such as records and pattern matching; it is not the trendiest site in the world. Martin Odersky shares a long article on the state of the Scala ecosystem and the evolution of the language: https://www.scala-lang.org/blog/2025/03/24/evolving-scala.html Stability and the need to evolve: Scala holds its position (around 14th worldwide) with solid technical foundations, but must evolve in the face of competition to stay relevant. Priority areas: the evolution focuses on improving the security/usability pair, polishing the language (removing the "rough edges"), and simplifying things for beginners. Continuous innovation: freezing features is ruled out; innovation is key to Scala's value. The language must remain general-purpose and not tie itself to a specific framework. Challenges and progress: tooling (IDEs, build tools such as sbt, scala-cli, Mill) and the learnability of the ecosystem are areas of attention, with improvements under way (an educational partnership, simple platforms). Even faster strings! https://inside.java/2025/05/01/strings-just-got-faster/ In JDK 25, the performance of String::hashCode has been improved to be mostly constant foldable. This means that if strings are used as keys in a static, immutable Map, significant performance gains are likely. The improvement relies on the internal @Stable annotation applied to the private String.hash field. This annotation lets the virtual machine read the hash value once and treat it as a constant if it is not the default value (zero). As a result, the String::hashCode call can be replaced by the known hash value, optimizing lookups in immutable Maps. One edge case is when the string's hash code is zero, in which case the optimization does not apply (for example, for the empty string ""). Although the @Stable annotation is internal to the JDK, a new JEP (JEP 502: Stable Values (Preview)) is being developed to let users benefit indirectly from similar capabilities. AtomicHash, a Java implementation of a HashMap that is thread-safe, atomic, and non-blocking: https://github.com/arxila/atomichash It is implemented as an immutable version of a concurrent hash trie. Libraries Micronaut 4.8.0 released https://micronaut.io/2025/04/01/micronaut-framework-4-8-0-released/ BOM (Bill of Materials) update: version 4.8.0 updates the Micronaut platform BOM. Micronaut Core improvements: integration of Micronaut SourceGen for the internal generation of metadata and bytecode expressions. Numerous improvements in Micronaut SourceGen.
Dependency injection tracing added to make debugging easier at startup and at bean creation time. New definitionType member in the @Client annotation to make it easier to share interfaces between client and server. Support for merging in Bean Mappers via the @Mapping annotation. New liveness probe detecting deadlocked threads via ThreadMXBean. Improved Kubernetes integration: the Kubernetes Java client is updated to version 22.0.1. Addition of the Micronaut Kubernetes Client OpenAPI module, offering an alternative to the official client with fewer dependencies, unified configuration, filter support, and Native Image compatibility. Introduction of a new server runtime based on Java's built-in HTTP server, making it possible to build applications without external server dependencies. A module added to Micronaut Micrometer to instrument data sources (traces and metrics). Addition of the condition member in the @MetricOptions annotation to control whether metrics are enabled via an expression. Support for Consul watches in Micronaut Discovery Client to detect distributed configuration changes. Ability to generate source code from a JSON schema via the build plugins (Gradle and Maven). Web Node v24.0.0 becomes the Current release: https://nodejs.org/en/blog/release/v24.0.0 V8 engine updated to version 13.6: new JavaScript features such as Float16Array, explicit resource management (using), RegExp.escape, WebAssembly Memory64, and Error.isError. npm 11 included: improvements in performance, security, and compatibility with modern JavaScript packages. Compiler change on Windows: MSVC is dropped in favor of ClangCL for compiling Node.js on Windows. AsyncLocalStorage now uses AsyncContextFrame by default, offering more efficient handling of asynchronous context. URLPattern is available globally: no need to import this API explicitly anymore to match URLs. Permission model improvements: the experimental --experimental-permission flag becomes --permission, signaling greater stability for this feature. Test runner improvements: subtests are now awaited automatically, simplifying test writing and reducing errors related to unhandled promises. Undici 7 integration: improved HTTP client capabilities with better performance and broader support for modern HTTP features. Deprecations and removals: url.parse() deprecated in favor of the WHATWG URL API. tls.createSecurePair removed. SlowBuffer deprecated. Instantiating REPL without new deprecated. Using Zlib classes without new deprecated. Passing args to spawn and execFile in child_process deprecated. Node.js 24 is currently the "Current" release and will become an LTS release in October 2025. It is recommended to test this version to assess its impact on your applications. Data and Artificial Intelligence Learning to code remains crucial, and AI is there to help: https://kyrylo.org/software/2025/03/27/learn-to-code-ignore-ai-then-use-ai-to-code-even-better.html Learning to code remains essential despite AI. AI can assist with programming. A solid foundation is crucial to understand and control the code. It keeps you from becoming dependent on AI.
It reduces the risk of being replaced by AI tools that are accessible to everyone. AI is a tool, not a substitute for mastering the fundamentals. A great article from Anthropic that tries to understand how the "thinking" of LLMs works: https://www.anthropic.com/research/tracing-thoughts-language-model Black-box effect: the internal strategies of AIs (Claude) are opaque to developers and users. Goal: understand the internal "reasoning" to verify capabilities and intentions. Method: inspired by neuroscience, building an "AI microscope" (looking at which neural circuits activate). Technique: identifying concepts ("features") and internal "circuits". Multilingualism: evidence of a conceptual "language of thought" shared across all languages before translating into a particular one. Planning: the ability to anticipate (e.g., rhymes in poetry), not just word-by-word (token-by-token) generation. Unfaithful reasoning: it can fabricate plausible arguments ("bullshitting") for a given conclusion. Multi-step logic: it combines distinct facts rather than merely memorizing. Hallucinations: refusal by default; it answers if "knowledge" is activated, otherwise there is a risk of hallucination when that goes wrong. "Jailbreaks": tension between grammatical coherence (which pushes it to keep going) and safety (which should make it refuse). Takeaway: the methods are limited but promising for AI transparency and reliability. The "S" in MCP stands for Security (or not!) https://elenacross7.medium.com/%EF%B8%8F-the-s-in-mcp-stands-for-security-91407b33ed6b The MCP specification, which lets LLMs access various tools and functions, may have been adopted a bit too quickly, before it was ready on the security front. The article lists four possible types of attack: command injection vulnerability, tool poisoning attack, silent redefinition of a tool, and cross-server tool shadowing. For now, MCP is not secure: no authentication standard, no context encryption, no tool integrity verification. Based on the InvariantLabs article: https://invariantlabs.ai/blog/mcp-security-notification-tool-poisoning-attacks Infinispan 15.2 released - pre rolling upgrades 16.0 https://infinispan.org/blog/2025/03/27/infinispan-15-2 Redis JSON support + Lua scripts. JVM metrics can be disabled. New console (PatternFly 6). Improved docs (metrics + logs). JDK 17 minimum, JDK 24 supported. End of the native server (performance). Guillaume shows how to develop an MCP HTTP Server-Sent Events server with the Java reference implementation and LangChain4j: https://glaforge.dev/posts/2025/04/04/mcp-client-and-server-with-java-mcp-sdk-and-langchain4j/ Developed in Java, with the reference implementation that is also the basis of the Spring Boot implementation (but independent of Spring). The MCP server is exposed as a servlet in Jetty. The MCP client, for its part, is built with the LangChain4j MCP module; it is semi-independent of Spring in the sense that it depends on Reactor and its interfaces. There is a conversation on Anthropic's GitHub to find a solution, but it does not look simple. The fallacies behind the quote "AI won't replace you, but humans using AI will" https://platforms.substack.com/cp/161356485 The fallacy of automation vs.
l'augmentation : Elle se concentre sur l'amélioration des tâches existantes avec l'IA au lieu de considérer le changement de la valeur de ces tâches dans un nouveau système. La fallacie des gains de productivité : L'augmentation de la productivité ne se traduit pas toujours par plus de valeur pour les travailleurs, car la valeur créée peut être capturée ailleurs dans le système. La fallacie des emplois statiques : Les emplois sont des constructions organisationnelles qui peuvent être redéfinies par l'IA, rendant les rôles traditionnels obsolètes. La fallacie de la compétition “moi vs. quelqu'un utilisant l'IA” : La concurrence évolue lorsque l'IA modifie les contraintes fondamentales d'un secteur, rendant les compétences existantes moins pertinentes. La fallacie de la continuité du flux de travail : L'IA peut entraîner une réimagination complète des flux de travail, éliminant le besoin de certaines compétences. La fallacie des outils neutres : Les outils d'IA ne sont pas neutres et peuvent redistribuer le pouvoir organisationnel en changeant la façon dont les décisions sont prises et exécutées. La fallacie du salaire stable : Le maintien d'un emploi ne garantit pas un salaire stable, car la valeur du travail peut diminuer avec l'augmentation des capacités de l'IA. La fallacie de l'entreprise stable : L'intégration de l'IA nécessite une restructuration de l'entreprise et ne se fait pas dans un vide organisationnel. Comprendre le “sampling” dans les LLMs https://rentry.co/samplers Explique pourquoi les LLMs utilisent des tokens Les différentes méthodes de “sampling” : càd de choix de tokens Les hyperparamètres comme la température, top-p, et leur influence réciproque Les algorithmes de tokenisation comme Byte Pair Encoding et SentencePiece. Un de moins … OpenAI va racheter Windsurf pour 3 milliards de dollars. https://www.bloomberg.com/news/articles/2025-05-06/openai-reaches-agreement-to-buy-startup-windsurf-for-3-billion l'accord n'est pas encore finalisé Windsurf était valorisé à 1,25 milliards l'an dernier et OpenAI a levé 40 milliards dernièrement portant sa valeur à 300 milliards Le but pour OpenAI est de rentrer dans le monde des assistants de code pour lesquels ils sont aujourd'hui absent Docker desktop se met à l'IA… ? Une nouvelle fonctionnalité dans docker desktop 4.4 sur macos: Docker Model Runner https://dev.to/docker/run-genai-models-locally-with-docker-model-runner-5elb Permet de faire tourner des modèles nativement en local ( https://docs.docker.com/model-runner/ ) mais aussi des serveurs MCP ( https://docs.docker.com/ai/mcp-catalog-and-toolkit/ ) Outillage Jetbrains défend la suppression des commentaires négatifs sur son assistant IA https://devclass.com/2025/04/30/jetbrains-defends-removal-of-negative-reviews-for-unpopular-ai-assistant/?td=rt-3a L'IA Assistant de JetBrains, lancée en juillet 2023, a été téléchargée plus de 22 millions de fois mais n'est notée que 2,3 sur 5. Des utilisateurs ont remarqué que certaines critiques négatives étaient supprimées, ce qui a provoqué une réaction négative sur les réseaux sociaux. Un employé de JetBrains a expliqué que les critiques ont été supprimées soit parce qu'elles mentionnaient des problèmes déjà résolus, soit parce qu'elles violaient leur politique concernant les “grossièretés, etc.” L'entreprise a reconnu qu'elle aurait pu mieux gérer la situation, un représentant déclarant : “Supprimer plusieurs critiques d'un coup sans préavis semblait suspect. 
We should at least have published a notice and provided more details to the authors." Among the AI Assistant problems reported by users are: limited support for third-party model providers, noticeable latency, frequent slowdowns, core features locked to JetBrains' cloud services, an inconsistent user experience, and insufficient documentation. A common complaint is that the AI Assistant installs itself without permission. One Reddit user described it as an "annoying plugin that self-heals/reinstalls itself like a phoenix". JetBrains recently introduced a free tier and a new AI agent called Junie, meant to work alongside the AI Assistant, probably in response to competition between vendors. But it is more expensive to run. The company has committed to exploring new approaches to handling major updates differently and is considering per-version reviews or marking reviews as "Resolved" with links to the corresponding issues instead of deleting them. Unlike competitors such as Microsoft, AWS, or Google, JetBrains only sells development tools and services and does not have a separate cloud business to fall back on. Make your README images and Markdown files work with GitHub's dark mode: https://github.blog/developer-skills/github/how-to-make-your-images-in-markdown-on-github-adjust-for-dark-mode-and-light-mode/ Only a few lines of plain HTML are needed to do it. Architecture So, DTOs: good or bad? https://codeopinion.com/dtos-mapping-the-good-the-bad-and-the-excessive/ What DTOs are for: DTOs are used to transfer data between the different layers of an application, often mapping data between different representations (for example, between the database and the user interface). Frequent overuse: the article points out that DTOs are often used excessively, in particular to build HTTP APIs that merely mirror the database entities, missing the opportunity to compose richer data. Real value: the real value of DTOs lies in managing coupling between layers and composing data from multiple sources into shapes optimized for specific use cases. Decoupling: it is suggested to use DTOs to decouple internal data models from external contracts (such as APIs), which allows independent evolution and versioning. Example with CQRS: within CQRS (Command Query Responsibility Segregation), query responses act as DTOs specifically tailored to the needs of the user interface and can include data from various sources (a minimal sketch of this idea follows below). Protecting internal data: DTOs help distinguish and protect internal (private) data models from external (public) changes. Avoiding excess: the author warns against excessive mapping layers (mapping one DTO to another DTO) that add no value. Targeted creation: it is advisable to create DTOs only when they solve concrete problems, such as managing coupling or making data composition easier.
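To make the decoupling idea above concrete, here is a minimal, hypothetical Java sketch; the names (OrderEntity, OrderSummaryDto, the from method) are invented for illustration and do not come from the article:

```java
import java.time.LocalDate;
import java.util.List;

// Internal (private) persistence model: free to evolve with the database schema.
class OrderEntity {
    long id;
    long customerId;
    List<Long> lineItemIds;
    LocalDate createdAt;
    String internalAuditNote; // data that should never leave the service boundary
}

// External (public) contract returned by an HTTP API or a CQRS query handler.
// It exposes only what this read use case needs, composed from several sources.
record OrderSummaryDto(long orderId, String customerName, int itemCount, LocalDate createdAt) {

    // Mapping lives at the boundary, so the entity and the contract can evolve independently.
    static OrderSummaryDto from(OrderEntity order, String customerName) {
        return new OrderSummaryDto(
                order.id,
                customerName,              // composed from another source, e.g. a customer lookup
                order.lineItemIds.size(),
                order.createdAt);
    }
}
```

The point of the sketch is simply that the query response is shaped for one specific read use case and assembled at the boundary, rather than mirroring the persistence entity one-to-one.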
Methodologies Even Guillaume is getting into "vibe coding" https://glaforge.dev/posts/2025/05/02/vibe-coding-an-mcp-server-with-micronaut-and-gemini/ According to Andrej Karpathy, it means POC-ing a prototype, a throwaway weekend app: https://x.com/karpathy/status/1886192184808149383 But Simon Willison objects that some people confuse coding with AI assistance with vibe coding: https://simonwillison.net/2025/May/1/not-vibe-coding/ Here Guillaume had fun building an MCP server with Micronaut, using Gemini, Google's AI. Unlike Quarkus or Spring Boot, Micronaut does not yet have a module or specific support to make building MCP servers easier. Security A 10/10 security flaw in Tomcat https://www.it-connect.fr/apache-tomcat-cette-faille-activement-exploitee-seulement-30-heures-apres-sa-divulgation-patchez/ A critical security flaw (CVE-2025-24813) affects Apache Tomcat, allowing remote code execution. The vulnerability was being actively exploited only 30 hours after its disclosure on March 10, 2025. The attack requires no authentication and is particularly simple to execute. It uses a PUT request with a base64-encoded serialized Java payload, followed by a GET request. The base64 encoding bypasses most security filters. Vulnerable servers use file-based session storage (a widespread configuration). Affected versions are: 11.0.0-M1 to 11.0.2, 10.1.0-M1 to 10.1.34, and 9.0.0.M1 to 9.0.98. Recommended updates are: 11.0.3+, 10.1.35+, and 9.0.99+. Experts expect more sophisticated attacks in the next exploitation phases (config or JSP uploads). Hardening an SSH server https://ittavern.com/ssh-server-hardening/ An article listing the key configuration settings for securing an SSH server, for example disabling password authentication, changing the port, disabling root login, and forcing SSH protocol 2, plus some I did not know, such as MaxStartups, which limits the number of concurrent unauthenticated connections. Port knocking is a useful technique but requires a client-side approach that is aware of the protocol. Oracle admits its customers' IAM identities leaked https://www.theregister.com/2025/04/08/oracle_cloud_compromised/ Oracle confirmed to some customers that its public cloud was compromised, after previously denying any intrusion. A hacker claimed to have breached two Oracle authentication servers and stolen about six million records, including private security keys, encrypted credentials, and LDAP entries. The exploited flaw is believed to be the CVE-2021-35587 vulnerability in Oracle Access Manager, which Oracle had not patched on its own systems. The hacker created a text file in early March on login.us2.oraclecloud.com containing their email address to prove they had access. According to Oracle, an old server containing eight-year-old data was compromised, but one customer says login data as recent as 2024 was stolen. Oracle is facing a lawsuit in Texas over this data breach. The intrusion is distinct from another attack against Oracle Health, on which the company declines to comment. Oracle could face penalties under the European GDPR, which requires affected parties to be notified within 72 hours of discovering a data leak.
Oracle's behavior of denying and then quietly admitting the intrusion is unusual in 2025 and could lead to further class-action lawsuits. A very popular GitHub Action compromised https://www.stepsecurity.io/blog/harden-runner-detection-tj-actions-changed-files-action-is-compromised Compromise of the tj-actions/changed-files action: in March 2025, a widely used GitHub Action (tj-actions/changed-files) was compromised. Modified versions of the action exposed CI/CD secrets in build logs. Attack method: a compromised PAT made it possible to redirect several version tags to a commit containing malicious code. Details of the malicious code: the injected code executed a base64-encoded Node.js function, which downloaded a Python script. That script scanned the GitHub runner's memory looking for secrets (tokens, keys…) and exposed them in the logs. In some cases the data was also sent out via a network request. Exposure window: the compromised versions were active between March 12 and 15, 2025. Any repository, especially public ones, that used the action during that period should be considered potentially exposed. Detection: the malicious activity was spotted by analyzing unusual behavior during workflow runs, such as unexpected network connections. Response: GitHub removed the compromised action, which was then cleaned up. Potential impact: all secrets appearing in the logs should be considered compromised, even in private repositories, and rotated without delay. Law, society and organization Y Combinator startups are showing the strongest growth in the fund's history https://www.cnbc.com/2025/03/15/y-combinator-startups-are-fastest-growing-in-fund-history-because-of-ai.html Early-stage companies in Silicon Valley are seeing significant growth thanks to artificial intelligence. Y Combinator CEO Garry Tan says the latest cohort of startups as a whole grew 10% week over week for nine months. AI lets developers automate repetitive tasks and generate code with large language models. For about 25% of the current YC startups, 95% of their code was written by AI. This revolution lets companies grow with fewer staff, some reaching 10 million dollars in revenue with fewer than 10 employees. The "growth at all costs" mindset has been replaced by a renewed interest in profitability. About 80% of the companies presented at demo day were AI-focused, with a few startups in robotics and semiconductors. Y Combinator invests 500,000 dollars in startups in exchange for equity, followed by a three-month program. Red Hat middleware (formerly JBoss) joins IBM https://markclittle.blogspot.com/2025/03/red-hat-middleware-moving-to-ibm.html Red Hat's Middleware activities (including JBoss, Quarkus, etc.) will be transferred to IBM, into the unit dedicated to data security, IAM, and runtimes. The change stems from a strategic decision by Red Hat to focus more on hybrid cloud and artificial intelligence. Mark Little explains that the transfer had become inevitable, with Red Hat having reduced its investment in Middleware in recent years.
L'intégration vise à renforcer l'innovation autour de Java en réunissant les efforts de Red Hat et IBM sur ce sujet. Les produits Middleware resteront open source et les clients continueront à bénéficier du support habituel sans changement. Mark Little affirme que des projets comme Quarkus continueront à être soutenus et que cette évolution est bénéfique pour la communauté Java. Un an de commonhaus https://www.commonhaus.org/activity/253.html un an, démarré sur les communautés qu'ils connaissaient bien maintenant 14 projets et put en accepter plus confiance, gouvernance legère et proteger le futur des projets automatisation de l'administratif, stabiilité sans complexité, les developpeurs au centre du processus de décision ils ont besoins de members et supporters (financiers) ils veulent accueillir des projets au delà de ceux du cercles des Java Champions Spring Cloud Data Flow devient un produit commercial et ne sera plus maintenu en open source https://spring.io/blog/2025/04/21/spring-cloud-data-flow-commercial Peut-être sous l'influence de Broadcom, Spring se met à mettre en mode propriétaire des composants du portefeuille Spring ils disent que peu de gens l'utilisaent en mode OSS et la majorité venait d'un usage dans la plateforme Tanzu Maintenir en open source le coutent du temps qu'ils son't pas sur ces projets. La CNCF protège le projet NATS, dans la fondation depuis 2018, vu que la société Synadia qui y contribue souhaitait reprendre le contrôle du projet https://www.cncf.io/blog/2025/04/24/protecting-nats-and-the-integrity-of-open-source-cncfs-commitment-to-the-community/ CNCF : Protège projets OS, gouvernance neutre. Synadia vs CNCF : Veut retirer NATS, licence non-OS (BUSL). CNCF : Accuse Synadia de “claw back” (reprise illégitime). Revendications Synadia : Domaine nats.io, orga GitHub. Marque NATS : Synadia n'a pas transféré (promesse rompue malgré aide CNCF). Contestation Synadia : Juge règles CNCF “trop vagues”. Vote interne : Mainteneurs Synadia votent sortie CNCF (sans communauté). Support CNCF : Investissement majeur ($ audits, légal), succès communautaire (>700 orgs). Avenir NATS (CNCF) : Maintien sous Apache 2.0, gouvernance ouverte. Actions CNCF : Health check, appel mainteneurs, annulation marque Synadia, rejet demandes. Mais finalement il semble y avoir un bon dénouement : https://www.cncf.io/announcements/2025/05/01/cncf-and-synadia-align-on-securing-the-future-of-the-nats-io-project/ Accord pour l'avenir de NATS.io : La Cloud Native Computing Foundation (CNCF) et Synadia ont conclu un accord pour sécuriser le futur du projet NATS.io. Transfert des marques NATS : Synadia va céder ses deux enregistrements de marque NATS à la Linux Foundation afin de renforcer la gouvernance ouverte du projet. Maintien au sein de la CNCF : L'infrastructure et les actifs du projet NATS resteront sous l'égide de la CNCF, garantissant ainsi sa stabilité à long terme et son développement en open source sous licence Apache-2.0. Reconnaissance et engagement : La Linux Foundation, par la voix de Todd Moore, reconnaît les contributions de Synadia et son soutien continu. Derek Collison, PDG de Synadia, réaffirme l'engagement de son entreprise envers NATS et la collaboration avec la Linux Foundation et la CNCF. Adoption et soutien communautaire : NATS est largement adopté et considéré comme une infrastructure critique. Il bénéficie d'un fort soutien de la communauté pour sa nature open source et l'implication continue de Synadia. 
Finally, Redis returns to an OSI open source license, with the AGPL https://foojay.io/today/redis-is-now-available-under-the-agplv3-open-source-license/ Redis moves to the AGPLv3 open source license to counter exploitation by cloud providers who do not contribute back. The earlier move to the SSPL license had hurt the relationship with the open source community. Salvatore Sanfilippo (antirez) has returned to Redis. Redis 8 adopts the AGPL license, integrates the Redis Stack features (JSON, Time Series, etc.), and introduces "vector sets" (the vector computation support developed by Salvatore). These changes aim to strengthen Redis as a platform developers love, in line with Salvatore's original vision. Conferences The list of conferences comes from Developers Conferences Agenda/List by Aurélie Vache and contributors: May 6-7, 2025: GOSIM AI Paris - Paris (France) May 7-9, 2025: Devoxx UK - London (UK) May 15, 2025: Cloud Toulouse - Toulouse (France) May 16, 2025: AFUP Day 2025 Lille - Lille (France) May 16, 2025: AFUP Day 2025 Lyon - Lyon (France) May 16, 2025: AFUP Day 2025 Poitiers - Poitiers (France) May 22-23, 2025: Flupa UX Days 2025 - Paris (France) May 24, 2025: Polycloud - Montpellier (France) May 24, 2025: NG Baguette Conf 2025 - Nantes (France) June 3, 2025: TechReady - Nantes (France) June 5-6, 2025: AlpesCraft - Grenoble (France) June 5-6, 2025: Devquest 2025 - Niort (France) June 10-11, 2025: Modern Workplace Conference Paris 2025 - Paris (France) June 11-13, 2025: Devoxx Poland - Krakow (Poland) June 12, 2025: Positive Design Days - Strasbourg (France) June 12-13, 2025: Agile Tour Toulouse - Toulouse (France) June 12-13, 2025: DevLille - Lille (France) June 13, 2025: Tech F'Est 2025 - Nancy (France) June 17, 2025: Mobilis In Mobile - Nantes (France) June 19-21, 2025: Drupal Barcamp Perpignan 2025 - Perpignan (France) June 24, 2025: WAX 2025 - Aix-en-Provence (France) June 25-26, 2025: Agi'Lille 2025 - Lille (France) June 25-27, 2025: BreizhCamp 2025 - Rennes (France) June 26-27, 2025: Sunny Tech - Montpellier (France) July 1-4, 2025: Open edX Conference - 2025 - Palaiseau (France) July 7-9, 2025: Riviera DEV 2025 - Sophia Antipolis (France) September 5, 2025: JUG Summer Camp 2025 - La Rochelle (France) September 12, 2025: Agile Pays Basque 2025 - Bidart (France) September 18-19, 2025: API Platform Conference - Lille (France) & Online September 23, 2025: OWASP AppSec France 2025 - Paris (France) September 25-26, 2025: Paris Web 2025 - Paris (France) October 2-3, 2025: Volcamp - Clermont-Ferrand (France) October 3, 2025: DevFest Perros-Guirec 2025 - Perros-Guirec (France) October 6-10, 2025: Devoxx Belgium - Antwerp (Belgium) October 7, 2025: BSides Mulhouse - Mulhouse (France) October 9-10, 2025: Forum PHP 2025 - Marne-la-Vallée (France) October 9-10, 2025: EuroRust 2025 - Paris (France) October 16, 2025: PlatformCon25 Live Day Paris - Paris (France) October 16-17, 2025: DevFest Nantes - Nantes (France) October 30-31, 2025: Agile Tour Bordeaux 2025 - Bordeaux (France) October 30-31, 2025: Agile Tour Nantais 2025 - Nantes (France) October 30 - November 2, 2025: PyConFR 2025 - Lyon (France) November 4-7, 2025: NewCrafts 2025 - Paris (France) November 6, 2025: dotAI 2025 - Paris (France) November 7, 2025: BDX I/O - Bordeaux (France) November 12-14, 2025: Devoxx Morocco - Marrakech (Morocco) November 13, 2025: DevFest Toulouse - Toulouse (France) November 15-16, 2025: Capitole du Libre - Toulouse (France) November 20, 2025:
OVHcloud Summit - Paris (France) November 21, 2025: DevFest Paris 2025 - Paris (France) November 27, 2025: Devfest Strasbourg 2025 - Strasbourg (France) November 28, 2025: DevFest Lyon - Lyon (France) December 5, 2025: DevFest Dijon 2025 - Dijon (France) December 10-11, 2025: Devops REX - Paris (France) December 10-11, 2025: Open Source Experience - Paris (France) January 28-31, 2026: SnowCamp 2026 - Grenoble (France) February 2-6, 2026: Web Days Convention - Aix-en-Provence (France) April 23-25, 2026: Devoxx Greece - Athens (Greece) June 17, 2026: Devoxx Poland - Krakow (Poland) Contact us To react to this episode, come discuss it in the Google group https://groups.google.com/group/lescastcodeurs Contact us via X/Twitter https://twitter.com/lescastcodeurs or Bluesky https://bsky.app/profile/lescastcodeurs.com Submit a crowdcast or a crowdquestion. Support Les Cast Codeurs on Patreon https://www.patreon.com/LesCastCodeurs All episodes and all the info at https://lescastcodeurs.com/
In this episode, Michael Lieberman, Co-founder and CTO of Kusari, walks us through the intersection of open source software and security. We discuss Mike's extensive involvement in OpenSSF projects like SLSA and GUAC, which provide essential frameworks for securing the software development life cycle (SDLC) and managing software supply chains. He explains how these tools help verify software provenance and manage vulnerabilities. Additionally, we explore regulatory concerns such as the Cyber Resilience Act (CRA) and the vital role of the recently released Open SSF Security Baseline (OSPS Baseline) in helping organizations comply with such regulations. Mike also shares insights into the evolution of open source security practices, the importance of reducing complexity for developers, and the potential benefits of orchestrating security similarly to Kubernetes. We conclude with a look at upcoming projects and current pilots aiming to simplify and enhance open source security. 00:00 Introduction and Guest Welcome 00:19 Mike's Background and Role in Open Source 01:35 Exploring SLSA and GUAC Projects 04:57 Cyber Resiliency Act Overview 06:54 OpenSSF Security Baseline 11:29 Encouraging Community Involvement 18:39 Final Thoughts Resources: OpenSSF's OSPS Baseline GUAC SLSA KubeCon Keynote: Cutting Through the Fog: Clarifying CRA Compliance in C... Eddie Knight & Michael Lieberman Guest: Michael Lieberman is co-founder and CTO of Kusari where he helps build transparency and security in the software supply chain. Michael is an active member of the open-source community, co-creating the GUAC and FRSCA projects and co-leading the CNCF's Secure Software Factory Reference Architecture whitepaper. He is an elected member of the OpenSSF Governing Board and Technical Advisory Council along with CNCF TAG Security Lead and an SLSA steering committee member.
At Google Cloud Next, Bobby Allen, Group Product Manager for Google Kubernetes Engine (GKE), emphasized GKE's foundational role in supporting AI platforms. While AI dominates current tech conversations, Allen highlighted that cloud-native infrastructure like Kubernetes is what enables AI workloads to function efficiently. GKE powers key Google services like Vertex AI and is trusted by organizations including DeepMind, gaming companies, and healthcare providers for AI model training and inference. Allen explained that GKE offers scalability, elasticity, and support for AI-specific hardware like GPUs and TPUs, making it ideal for modern workloads. He noted that Kubernetes was built with capabilities—like high availability and secure orchestration—that are now essential for AI deployment. Looking forward, GKE aims to evolve into a model router, allowing developers to access the right AI model based on function, not vendor, streamlining the development experience. Allen described GKE as offering maximum control with minimal technical debt, future-proofed by Google's continued investment in open source and scalable architecture. Learn more from The New Stack about the latest insights with Google Cloud: Google Kubernetes Engine Customized for Faster AI Work; KubeCon Europe: How Google Will Evolve Kubernetes in the AI Era; Apache Ray Finds a Home on the Google Kubernetes Engine. Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
A new episode of Kubicast is out! This time the conversation was with Gui Santos, founder of the Platform Engineering Brazil community, about how to build platforms that actually deliver value to developers and the business, without getting lost in the hype. And it is not just another chat on the topic: Gui brought a practical, direct perspective full of real-world lessons, with no filler and no empty hype. What came up in this episode: the difference between assembling a platform and delivering real value; how to know whether your platform engineering is actually working; the relationship between DevX, product, and organizational culture; the risks of packaging tools without a purpose; and, of course, where Rock fits into all of this.
In this 30‑minute episode, Jon and Lewis unpick the coordinated ransomware wave that struck Britain's high‑street giants. They trace the attack chain that emptied Co‑op shelves, froze M&S online orders and attempted, but failed, to extort Harrods. Lewis takes a look at Amazon's latest generative‑AI arsenal: Amazon Q's new developer‑first agents, the multimodal Nova Premier family running on Bedrock, and AWS's landmark decision to let any SaaS vendor list in Marketplace regardless of where the software runs, a direct play to become the app store for the whole cloud economy. Finally, they ask whether enterprises can really keep their data out of Google's AI engines. Hosts: https://www.linkedin.com/in/jonathanshanks/ https://www.linkedin.com/in/lewismarshall/
The relentless race for cloud dominance continues as AWS reports 16.9% growth year-over-year—a number that would thrill most companies but falls short of expectations for the cloud giant. Tim and Chris dig into what's behind these numbers and why Microsoft and Google continue gaining ground with stronger-than-anticipated growth rates. Microsoft's massive 53% increase in capital spending signals their aggressive push into AI infrastructure, raising questions about whether AWS can maintain its leadership position. Cybersecurity threats are evolving in concerning ways according to Threat Labs' 2025 Phishing Report. While overall phishing attempts have declined, attacks have become more targeted and sophisticated. The hosts explore disturbing trends including the rise of cryptocurrency scams with fake wallets and an increase in job-related phishing that exploits today's challenging employment market. Most alarming is how threat actors are capitalizing on AI hype, creating fraudulent agent websites that mimic legitimate platforms to steal credentials. This exposes a critical vulnerability in emerging technologies like Multi-agent Collaboration Protocol systems that currently lack robust security frameworks. The conversation shifts to more positive developments with the Kubernetes 1.33 "Octarine" release, which brings sidecar containers to stable status—a significant improvement for managing service mesh implementations. The hosts break down how this and other updates like in-place resource resizing make Kubernetes more flexible and easier to manage for enterprise deployments. The episode closes with a somber discussion of Intel's announcement of potential layoffs affecting up to 20% of its workforce following substantial quarterly losses. Tim and Chris challenge the new CEO's assertion that requiring more office days will make the company "lean, fast and agile," questioning whether return-to-office mandates have more to do with real estate investments than actual productivity improvements. What tech news matters most to you? We'd love your feedback on our news format and what topics you'd like us to cover in future episodes! Purchase Chris and Tim's new book on AWS Cloud Networking: https://www.amazon.com/Certified-Advanced-Networking-Certification-certification/dp/1835080839/ Check out the Fortnightly Cloud Networking News https://docs.google.com/document/d/1fkBWCGwXDUX9OfZ9_MvSVup8tJJzJeqrauaE6VPT2b0/ Visit our website and subscribe: https://www.cables2clouds.com/ Follow us on BlueSky: https://bsky.app/profile/cables2clouds.com Follow us on YouTube: https://www.youtube.com/@cables2clouds/ Follow us on TikTok: https://www.tiktok.com/@cables2clouds Merch Store: https://store.cables2clouds.com/ Join the Discord Study group: https://artofneteng.com/iaatj
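To make the sidecar change concrete, here is a minimal sketch of the native sidecar pattern the hosts discuss: an init container marked with restartPolicy "Always" runs alongside the main container for the Pod's whole lifetime. It assumes the official kubernetes Python client, a kubeconfig pointing at a 1.29+ cluster, and made-up container names and images.

```python
# Minimal sketch of a native sidecar (stable in Kubernetes 1.33): an init container
# with restartPolicy "Always" keeps running next to the app container.
# Assumes the official `kubernetes` Python client; names and images are illustrative only.
from kubernetes import client, config

config.load_kube_config()

pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "app-with-sidecar"},
    "spec": {
        "volumes": [{"name": "logs", "emptyDir": {}}],
        "initContainers": [{
            "name": "log-shipper",                      # hypothetical sidecar
            "image": "busybox:1.36",
            "restartPolicy": "Always",                  # this is what makes it a sidecar
            "command": ["sh", "-c", "tail -F /var/log/app/app.log"],
            "volumeMounts": [{"name": "logs", "mountPath": "/var/log/app"}],
        }],
        "containers": [{
            "name": "app",
            "image": "nginx:1.27",
            "volumeMounts": [{"name": "logs", "mountPath": "/var/log/app"}],
        }],
    },
}

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```

Unlike the old workaround of listing the helper under containers, the kubelet starts this sidecar before the app and stops it after, which is what makes the pattern workable for service mesh proxies and log shippers.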
Kenneth Stott, Field CTO at Hasura, joins the Ardan Labs podcast to recount his journey from a BYU student to innovating computer audits at Deloitte and navigating the high-pressure world of Wall Street. He explains how early hands-on experiences with computers and his organizational behavior classes ignited an entrepreneurial mindset. Throughout the conversation, Stott underscores the critical role of relationships, cultural awareness, and the art of storytelling in advancing one's career. He wraps up by highlighting his current work at Hasura, where he's driving breakthroughs in data management and leveraging AI to empower smarter business decisions.00:00 Introduction00:40 What is Kenneth Doing Today?06:05 First Memory of a Computer11:00 Early Entrepreneurship / Interests19:00 Entering University33:00 First Job After Graduation41:00 Moving to New York47:00 Working on Wall Street52:00 Entering C-Suite Jobs1:06:00 Desired Characteristics1:09:20 Roles before Retirement1:12:00 Working at Hasura1:15:00 PromptQL1:17:00 Contact InfoConnect with Kenneth: Linkedin: https://www.linkedin.com/in/kenstott/Mentioned in this Episode:Hasura: https://hasura.io/Want more from Ardan Labs? You can learn Go, Kubernetes, Docker & more through our video training, live events, or through our blog!Online Courses : https://ardanlabs.com/education/ Live Events : https://www.ardanlabs.com/live-training-events/ Blog : https://www.ardanlabs.com/blog Github : https://github.com/ardanlabs
Cloud Posse holds LIVE "Office Hours" every Wednesday to answer questions on all things related to AWS, DevOps, Terraform, Kubernetes, CI/CD. Register at https://cloudposse.com/office-hoursSupport the show
SANS Internet Stormcenter Daily Network/Cyber Security and Information Security Stormcast
Mirai Now Exploits Samsung MagicINFO CMS CVE-2024-7399 The Mirai botnet added a new vulnerability to its arsenal. This vulnerability, a file upload and remote code execution vulnerability in Samsung's MagicInfo 9 CMS, was patched last August but attracted new attention last week after being mostly ignored so far. https://isc.sans.edu/diary/Mirai+Now+Exploits+Samsung+MagicINFO+CMS+CVE20247399/31920 New Kali Linux Signing Key The Kali Linux maintainers lost access to the secret key used to sign packages. Users must install a new key that will be used going forward. https://www.kali.org/blog/new-kali-archive-signing-key/ The Risk of Default Configuration: How Out-of-the-Box Helm Charts Can Breach Your Cluster Many out-of-the-box Helm charts for Kubernetes applications deploy vulnerable configurations with exposed ports and no authentication. https://techcommunity.microsoft.com/blog/microsoftdefendercloudblog/the-risk-of-default-configuration-how-out-of-the-box-helm-charts-can-breach-your/4409560
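The Helm finding is easy to check for in your own cluster. The sketch below assumes the official kubernetes Python client and read access to the cluster; it only lists Services that a default chart may have exposed outside the cluster, and cannot tell whether the application behind them enforces authentication.

```python
# Rough audit sketch: flag Services exposed outside the cluster (NodePort / LoadBalancer),
# the kind of exposure a default Helm values file can create silently.
# Assumes the official `kubernetes` Python client; this flags exposure only, it does not
# check whether the workload behind the Service requires authentication.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

for svc in v1.list_service_for_all_namespaces().items:
    if svc.spec.type in ("NodePort", "LoadBalancer"):
        ports = ", ".join(str(p.port) for p in (svc.spec.ports or []))
        print(f"{svc.metadata.namespace}/{svc.metadata.name}: {svc.spec.type} ports {ports}")
```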
Anurag Goel grew up in New Delhi, but moved to Boston after college for his first job. He worked at Stripe, as the 8th employee, before eventually moving on and launching his current venture. Outside of tech, he is married, living in San Francisco. He likes to read science fiction, especially prior to bedtime. He also enjoys eating Thai food on the regular, though he mentioned he could eat pizza every day. After leaving Stripe, Anurag decided to work on an ambitious problem, and he started doing this by building a bunch of stuff in many different domains. After noticing a common problem in building out Kubernetes, he decided to start a new business to abstract these problems, and allow builders to focus on the differentiating factors to their solutions. This is the creation story of Render. Sponsors: Mailtrap, Speakeasy, QA Wolf, SnapTrade. Links: https://render.com/ https://www.linkedin.com/in/anuragoel/ Our Sponsors: * Check out Vanta: https://vanta.com/CODESTORY Support this podcast at — https://redcircle.com/code-story/donations Advertising Inquiries: https://redcircle.com/brands Privacy & Opt-Out: https://redcircle.com/privacy
Kubernetes revolutionized the way software is built, deployed, and managed, offering engineers unprecedented agility and portability. But as Edera co-founder and CEO Emily Long shares, the speed and flexibility of containerization came with overlooked tradeoffs—especially in security. What started as a developer-driven movement to accelerate software delivery has now left security and infrastructure teams scrambling to contain risks that were never part of Kubernetes' original design. Emily outlines a critical flaw: Kubernetes wasn't built for multi-tenancy. As a result, shared kernels across workloads—whether across customers or internal environments—introduce lateral movement risks. In her words, "A container isn't real—it's just a set of processes." And when containers share a kernel, a single exploit can become a system-wide threat. Edera addresses this gap by rethinking how containers are run—not rebuilt. Drawing from hypervisor tech like Xen and modernizing it with memory-safe Rust, Edera creates isolated "zones" for containers that enforce true separation without the overhead and complexity of traditional virtual machines. This isolation doesn't disrupt developer workflows, integrates easily at the infrastructure layer, and doesn't require retraining or restructuring CI/CD pipelines. It's secure by design, without compromising performance or portability. The impact is significant. Infrastructure teams gain the ability to enforce security policies without sacrificing cost efficiency. Developers keep their flow. And security professionals get something rare in today's ecosystem: true prevention. Instead of chasing billions of alerts and layering multiple observability tools in hopes of finding the needle in the haystack, teams using Edera can reduce the noise and gain context that actually matters. Emily also touches on the future—including the role of AI and "vibe coding," and why true infrastructure-level security is essential as code generation becomes more automated and complex. With GPU security on their radar and a hardware-agnostic architecture, Edera is preparing not just for today's container sprawl, but tomorrow's AI-powered compute environments. This is more than a product pitch—it's a reframing of how we define and implement security at the container level. The full conversation reveals what's possible when performance, portability, and protection are no longer at odds. Learn more about Edera: https://itspm.ag/edera-434868 Note: This story contains promotional content. Learn more. Guest: Emily Long, Founder and CEO, Edera | https://www.linkedin.com/in/emily-long-7a194b4/ Resources: Learn more and catch more stories from Edera: https://www.itspmagazine.com/directory/edera Learn more and catch more stories from RSA Conference 2025 coverage: https://www.itspmagazine.com/rsac25 Keywords: sean martin, emily long, containers, kubernetes, hypervisor, multi-tenancy, devsecops, infrastructure, virtualization, cybersecurity, brand story, brand marketing, marketing podcast, brand story podcast Catch all of our event coverage: https://www.itspmagazine.com/technology-and-cybersecurity-conference-coverage Want to tell your Brand Story Briefing as part of our event coverage? Learn More
Without this, developers waste time managing infrastructure instead of focusing on code. VMware addresses this with VCF, a pre-integrated Kubernetes solution that includes components like Harbor, Velero, and Istio, all managed by VMware. While some worry about added complexity from abstraction, Turner dismissed concerns about virtualization overhead, pointing to benchmarks showing 98.3% of bare metal performance for virtualized AI workloads. He emphasized that AI is driving nearly half of Kubernetes deployments, prompting VMware's partnership with Nvidia to support GPU virtualization. Turner also highlighted VMware's open source leadership, contributing to major projects and ensuring Kubernetes remains cloud-independent and standards-based. VMware aims to simplify Kubernetes and AI workload management while staying committed to the open ecosystem. Learn more from The New Stack about the latest insights with VMware: Has VMware Finally Caught Up With Kubernetes? VMware's Golden Path. Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
Have you seen this latest AI-powered test automation platform? What is Agent2Agent, and how does it relate to MCP? How does SEGA use automation testing? Find out in this episode of the Test Guild New Shows for the week of May 4. So, grab your favorite cup of coffee or tea, and let's do this. 0:18 ZAPTEST AI https://testguild.me/ZAPTESTNEWS 0:57 Autify 3-Month Free https://testguild.me/a5n62h 2:14 mcp-axe https://testguild.me/5x9jvz 3:21 SEGA automation https://testguild.me/bbvqg2 4:47 Google A2A https://testguild.me/naktl2 6:37 Kubernetes Performance https://testguild.me/p5lq05 7:40 MCP Security Testing https://testguild.me/8amonh 8:08 BrightStar https://testguild.me/4jxlag
On this episode of The Defense Unicorns Podcast, host Rebecca Lively chats with Case Wylie, Software Engineering Lead, about building security-minded software that keeps up with developer velocity. From his early days at Red Hat to architecting open-source tools at Defense Unicorns, Case shares how Pepr—a TypeScript-based operator framework—is redefining how Kubernetes clusters are secured and managed in airgapped environments. It's not just about enforcing policy; it's about enabling developers to move faster, safer, and smarter.Through real-world metaphors (ever been to a nightclub with strict bouncers?), Case breaks down the roles of admission controllers, operator frameworks, and how Pepr works seamlessly with GitOps without adding friction. He explains why Pepr isn't just a tool, but part of a broader movement to standardize security postures, reduce configuration drift, and empower app teams to focus on delivering real value. With a human-first API and open-source DNA, Pepr is built to be accessible to all, not just Kubernetes power users.If you're curious about what it takes to scale secure software in complex, mission-critical environments—or just want a fresh, practical take on DevSecOps—this episode delivers. Case also shares his philosophy on open-source collaboration and what it means to build tools that truly stand the test of scale and scrutiny.Key Quote:“Pepr will always be open source and the reason why it's open source is because frankly, open source software, when your software is open source, you expose the application or the software or the platform, whatever it is to exponentially more eyes and more eyes over time and then more people start adopting it and using it and saying like, ‘Hey, you know what? I do have this simple thing I always have to do in my cluster. Maybe I try Pepr for that.' Right? And then they do it with a simple task, and then they say, ‘Hey, you know what? It would be great if Pepr could do this thing. And they put in a feature request. Then we develop that feature request, or they develop it, and they submit a PR to Pepr. And now Pepr as a whole is better because now you're using it. I'm using it. They're using it. The more people that use it, the better.”Case WylieTime Stamps:(02:44) Introduction to UDS and Pepr(05:59) The Importance of Air-Gapped Environments(11:40) Understanding Kubernetes Admission Control(16:05) Comparing Pepr with Other Tools(22:00) Why Pepr Uses TypeScript(34:03) The Benefits of Open Source for Pepr(43:31) Lightning RoundLinks:Connect with Casey WylieConnect with Rebecca LivelyLearn More About Defense Unicorns
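For readers new to the bouncer metaphor, the sketch below shows the bare AdmissionReview request/response contract that any validating admission webhook, Pepr included, is built on. It is plain illustrative Python, not Pepr itself (which is a TypeScript framework), and the privileged-container rule it enforces is an arbitrary example.

```python
# Illustrative only: the "bouncer" contract behind validating admission control.
# A webhook receives an AdmissionReview, inspects the object, and answers allowed/denied.
# This is not Pepr (a TypeScript framework); the rule here is just an example policy.
def review(admission_review: dict) -> dict:
    request = admission_review["request"]
    pod = request["object"]
    privileged = any(
        (c.get("securityContext") or {}).get("privileged", False)
        for c in pod.get("spec", {}).get("containers", [])
    )
    response = {"uid": request["uid"], "allowed": not privileged}
    if privileged:
        response["status"] = {"message": "privileged containers are not allowed"}
    return {"apiVersion": "admission.k8s.io/v1", "kind": "AdmissionReview", "response": response}


# A pod asking for a privileged container gets turned away at the door.
sample = {"request": {"uid": "abc-123", "object": {
    "spec": {"containers": [{"name": "app", "securityContext": {"privileged": True}}]}}}}
print(review(sample)["response"]["allowed"])  # False
```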
Aaron and Shane both recently had a bad experience when buying hard drives, the hardware we picked for our homelabs, why gigabit LANs aren't quite cutting it anymore, an update on Shane's janky Kubernetes setup, and more. Send your questions and feedback to show@hybridcloudshow.com Insta360 X5 Camera To... Read More
Tim Irnich from SUSE shares his work with Edge computing, focusing on the telecommunications industry. He highlights the importance of standardization and interoperability across the industry, specifically focusing on the widespread adoption of Linux and Kubernetes. Tim also elaborates on Project Sylva, an initiative under the Linux Foundation, aimed at creating a standardized stack for the European telco operators. We also discuss challenges and opportunities presented by the vast array of open source projects within the CNCF landscape and the potential for AI to enhance network efficiency and reliability. The episode provides a comprehensive look into the collaborative efforts and technological advancements shaping the telecom sector. 00:00 Welcome 01:14 Open Source Adoption in the Telco Industry 02:14 Challenges and Standardization in Telco Networks 04:35 Curating Reliable Stacks for Telco 06:11 Project Silva: An Open Source Initiative 18:55 AI in the Telecom Industry 22:11 Conclusion and Final Thoughts Tim Irnich is the product manager for SUSE Edge for Telco, an open source based horizontal telco cloud solution. He is also a member of the Board of Directors at the LF Europe Sylva Project. Tim has been active in telco related open source communities such as LF Networking, OPNFV, OpenDaylight, OpenStack/OpenInfra for over a decade and held positions on several committees including the LFN TAC, TSC and Board of Directors in OPNFV and OpenDaylight. Before joining SUSE in 2018, Tim worked at Ericsson, where he ran the open source and ecosystem program for Ericsson's cloud business unit and helped found Ericsson's open source development arm that is today known as Ericsson Software Technologies.
The crush of traffic going into training and running AI has quickly turned into a major cost and resource headache for organizations. Cast AI, a startup building tools to ease and optimize AI and other workloads with automation, is raising a major round of funding on the back of its strong growth. Learn more about your ad choices. Visit podcastchoices.com/adchoices
In this 30-minute episode, Jon and Lewis delve into the root causes of the recent blackouts across Spain, France and Portugal, scrutinising grid-failure reports and probing whether a sophisticated cyber-attack might have been to blame. They then analyse Meta's newly unveiled AI alongside the arrival of Qwen 3 and Google's AgentSpace update, and how Jon progressed with integrating it into Google Drive to streamline internal teams. Next, they unpack NASA's groundbreaking study showing how three massive dams have redistributed Earth's water mass, subtly altering the planet's shape and extending our days. Hosts: https://www.linkedin.com/in/jonathanshanks/ https://www.linkedin.com/in/lewismarshall/
At Arm, open source is the default approach, with proprietary software requiring justification, says Andrew Wafaa, fellow and senior director of software communities. Speaking at KubeCon + CloudNativeCon Europe, Wafaa emphasized Arm's decade-long commitment to open source, highlighting its investment in key projects like the Linux kernel, GCC, and LLVM. This investment is strategic, ensuring strong support for Arm's architecture through vital tools and system software. Wafaa also challenged the hype around GPUs in AI, asserting that CPUs—especially those enhanced with Arm's Scalable Matrix Extension (SME2) and Scalable Vector Extension (SVE2)—are often more suitable for inference workloads. CPUs offer greater flexibility, and Arm's innovations aim to reduce dependency on expensive GPU fleets. On the AI framework front, Wafaa pointed to PyTorch as the emerging hub, likening its ecosystem-building potential to Kubernetes. As a PyTorch Foundation board member, he sees PyTorch becoming the central open source platform in AI development, with broad community and industry backing. Learn more from The New Stack about the latest insights about Arm: Edge Wars Heat Up as Arm Aims to Outflank Intel, Qualcomm; Arm: See a Demo About Migrating a x86-Based App to ARM64. Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
Mark Freydl (CEO/Founder @codiacio) talks about the challenges of blending IaaC, DevOps and Platform Engineering to drive efficient software development lifecycles.SHOW: 919SHOW TRANSCRIPT: The Cloudcast #919 TranscriptSHOW VIDEO: https://youtube.com/@TheCloudcastNET CLOUD NEWS OF THE WEEK: http://bit.ly/cloudcast-cnotwNEW TO CLOUD? CHECK OUT OUR OTHER PODCAST: "CLOUDCAST BASICS"SHOW NOTES:Codiac websiteCodiac at Tech Field DayTopic 1 - Welcome to the show, Mark. Give everyone a quick introduction.Topic 2 - Before we dig into the tech, let's start with the problem. What problem were you seeing in software development that wasn't being solved with IaaC, Platform Engineering, DevOps, etc.? Where does SDLC fit into this? Topic 3 - Have microservices helped or hurt? We hear all the time about the loose coupling of microservices and benefits towards production, but doesn't that also make it harder to develop? How do you recreate an environment where you are dependent on a bunch of microservices in a development pipeline?Topic 4 - I get the feeling this is all about removing friction. But where and how? I see Kubernetes as a blessing and curse many times. It's an awesome application platform, as long as you aren't the one that has to do the care and feeding on it. Thoughts?Topic 5 - The goal here I believe is a closed loop system that is beneficial for developers and SRE's, but how do you balance closed loops vs. extensibility and abstraction of different platforms to the systems that are truly write once.Topic 6 - How does the culture and relationships in the org have to change to meet the changes in the tech?FEEDBACK?Email: show at the cloudcast dot netBluesky: @cloudcastpod.bsky.socialTwitter/X: @cloudcastpodInstagram: @cloudcastpodTikTok: @cloudcastpod
In this conversation, Bill Kennedy and Austin Vance discuss the critical role of relationships in business, the balance between AI and human interaction, and the importance of cultivating client trust. They explore how AI can enhance efficiency without replacing the human touch in consulting. The discussion also delves into pricing strategies in a changing economic landscape, the significance of legacy code, and the need for developers to maintain empathy towards existing systems. Ultimately, they emphasize the importance of creating awareness and demonstrating expertise in the market.00:00 Introduction00:30 What is Austin Doing Today?02:10 First Experiences with a Computer06:00 Highschool Interests9:30 Early Laptops16:20 Entering College23:30 The Journey to Computer Science 26:30 First Jobs in Tech32:00 Starting Focused 35:00 Agency vs Products and Reputation44:30 Moving on From Pivotal52:48 Balancing AI and Human Interaction01:08:30 Future Market Concerns01:20:30 Controlling Chaos1:28:00 Contact InfoConnect with Austin: Linkedin: https://www.linkedin.com/in/austinbv/Mentioned in this Episode:Focused: https://focused.io/Want more from Ardan Labs? You can learn Go, Kubernetes, Docker & more through our video training, live events, or through our blog!Online Courses : https://ardanlabs.com/education/ Live Events : https://www.ardanlabs.com/live-training-events/ Blog : https://www.ardanlabs.com/blog Github : https://github.com/ardanlabs
Intel is still looking to smooth out their operations under new CEO Lip-Bu Tan. The announcement came last week that Intel is going to be cutting more positions. Initial reports rumored that it could be as many as 20,000 employees, but CFO David Zinsner refuted that number last Friday. Instead, each department will assess their structure and decide whether to cut positions or offer voluntary packages. Zinsner cautioned that not all buyout offers would be accepted. Additionally, early retirements would not be offered, as previously these had led to other issues in the long term. This and more on the Tech Field Day News Rundown. Time Stamps: 0:00 - Welcome to the Tech Field Day News Rundown 1:36 - Native Sidecars Land in Kubernetes 1.33 5:39 - Q-CTRL Reveals Quantum Navigation System 10:24 - TSMC Launches A14, Targets AI Speed 14:06 - Factory Settings added to Commvault's Cleanroom Recovery 17:46 - Safeguarding NATS and Open Source 23:13 - HPE Expands Cybersecurity Across Platforms 26:54 - Another Round of Layoffs for Intel as they Streamline 33:44 - The Weeks Ahead 36:17 - Thanks for Watching Hosts: Tom Hollingsworth: https://www.linkedin.com/in/networkingnerd/ Stephen Foskett: https://www.linkedin.com/in/sfoskett/ Alastair Cooke: https://www.linkedin.com/in/alastaircooke/ Follow Tech Field Day: Website: https://techfieldday.com/ LinkedIn: https://www.linkedin.com/company/tech-field-day/ X/Twitter: https://x.com/TechFieldDay Bluesky: https://bsky.app/profile/techfieldday.com
We all know that virtual machines have dominated IT infrastructure for 25 years... but is that era really at an end? In Episode 98 of Great Things with Great Tech, Lucas Gentele, CEO of Loft Labs, shares how Loft is reshaping infrastructure by eliminating the need for virtual machines. Starting from early entrepreneurial roots and open-source projects like DevSpace, Lucas explains how Loft Labs tackled Kubernetes inefficiencies by building multi-tenant solutions like vCluster and vNode. Their innovations enable Kubernetes clusters to run more securely, efficiently, and cost-effectively without traditional VM overhead. As companies seek alternatives to legacy virtualization, Loft Labs' mission is to bring lightweight, scalable, and open-source virtual clusters into more environments, making Kubernetes accessible for both development and production workloads. Key Takeaways: The journey from DevSpace to founding Loft Labs and the rise of Kubernetes multi-tenancy. Why the traditional VM-based model is becoming obsolete and what's replacing it. How Loft's vCluster and vNode are revolutionizing Kubernetes cluster management. The impact of Broadcom's VMware acquisition on cloud infrastructure strategies. The future of cloud-native infrastructure: virtual clusters, no VMs, better scalability. Links & Resources: Loft Labs: https://www.loft.sh Lukas Gentele on LinkedIn: https://www.linkedin.com/in/gentele Loft Labs on LinkedIn: https://www.linkedin.com/company/loftlabs ☑️ Support the Channel: https://ko-fi.com/gtwgt ☑️ Be on #GTwGT: Contact via Twitter @GTwGTPodcast or visit https://www.gtwgt.com ☑️ Subscribe to YouTube: https://www.youtube.com/@GTwGTPodcast?sub_confirmation=1 Check out the full episode on our platforms: Spotify: https://open.spotify.com/episode/2l9aZpvwhWcdmL0lErpUHC?si=x3YOQw_4Sp-vtdjyroMk3Q Apple Podcasts: https://podcasts.apple.com/us/podcast/darknet-diaries-with-jack-rhysider-episode-83/id1519439787?i=1000654665731 Follow Us: Website: https://gtwgt.com Twitter: https://twitter.com/GTwGTPodcast Instagram: https://instagram.com/GTwGTPodcast ☑️ Music: https://www.bensound.com
In today's uncertain economy, businesses are tightening costs, including for Kubernetes (K8s) operations, which are notoriously difficult to optimize. Yodar Shafrir, co-founder and CEO of ScaleOps, explained at KubeCon + CloudNativeCon Europe that dynamic, cloud-native applications have constantly shifting loads, making resource allocation complex. Engineers must provision enough resources to handle spikes without overspending, but in large production clusters with thousands of applications, manual optimization often fails. This leads to 70–80% resource waste and performance issues. Developers typically prioritize application performance over operational cost, and AI workloads further strain resources. Existing optimization tools offer static recommendations that quickly become outdated due to the dynamic nature of workloads, risking downtime. Shafrir emphasized that real-time, fully automated solutions like ScaleOps' platform are crucial. By dynamically adjusting container-level resources based on real-time consumption and business metrics, ScaleOps improves application reliability and eliminates waste. Their approach shifts Kubernetes management from static to dynamic resource allocation. Listen to the full episode for more insights and ScaleOps' roadmap. Learn more from The New Stack about the latest in scaling Kubernetes and managing operational costs: ScaleOps Adds Predictive Horizontal Scaling, Smart Placement; ScaleOps Dynamically Right-Sizes Containers at Runtime. Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
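As a rough illustration of the right-sizing idea, not ScaleOps' actual algorithm, the sketch below derives a CPU request from recent usage samples and patches a Deployment with it. It assumes the official kubernetes Python client; the deployment name, container name, and sample values are hypothetical.

```python
# Sketch of container right-sizing: pick a CPU request just above observed usage and
# apply it, instead of trusting a static, hand-set value. Not ScaleOps' algorithm;
# the deployment/container names and the usage samples are made up.
from kubernetes import client, config

def recommend_cpu_millicores(samples_m, headroom=1.3):
    """Request slightly above the ~95th percentile of observed usage (in millicores)."""
    ordered = sorted(samples_m)
    p95 = ordered[int(0.95 * (len(ordered) - 1))]
    return int(p95 * headroom)

config.load_kube_config()
new_request = recommend_cpu_millicores([120, 180, 150, 90, 210])  # example usage samples

patch = {"spec": {"template": {"spec": {"containers": [
    {"name": "web", "resources": {"requests": {"cpu": f"{new_request}m"}}}
]}}}}
client.AppsV1Api().patch_namespaced_deployment(name="web", namespace="default", body=patch)
```

A real system would re-evaluate this continuously and also handle memory, limits, and scheduling constraints, which is exactly the part that is hard to do by hand at the scale described in the episode.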
In this fact-busting 30-minute episode, Jon and Lewis explore Google's AI AgentSpace technology and how it could let business teams run complete workflows without ever leaving the agent environment. They then test the marketing claims behind OpenAI's latest model features, before dissecting Oracle's cloud data breach. Finally, they map out the potential impact of the proposed Clean Cloud Act on the energy footprint. Hosts: https://www.linkedin.com/in/jonathanshanks/ https://www.linkedin.com/in/lewismarshall/
Nina Polshakova is a software engineer at Solo.io, where she's worked on Istio and API Gateway projects. She's been part of the Kubernetes release team since v1.27 and is currently serving as the Release Lead for v1.33. Do you have something cool to share? Some questions? Let us know: - web: kubernetespodcast.com - mail: kubernetespodcast@google.com - twitter: @kubernetespod - bluesky: @kubernetespodcast.com News of the week 229 new things Google announced at Next 25 MCO: Multi-Cluster Orchestrator Golden Kubestronaut Cloud Native Platform Engineering Associate The kube-scheduler-simulator K0s and k0smotron are now CNCF Sandbox projects Links from the interview Nina Polshakova Kubernetes Deprecation Policy Kubernetes Dev Google Group solo.io Istio API Gateway (General concept, linking to K8s Gateway API) Kubernetes Release Team GitHub Istio revisions Working in Public by Nadia Eghbal (Link to publisher's site about the book) Kubernetes Maintainers Read Mean Comments (KubeCon EU 2024) Kubernetes 1.33 release blog (Link to release announcement blog) Kubernetes Enhancement Proposals (KEPs) Sidecar Containers Multiple Service CIDR support (KEP link) Dynamic Resource Allocation (DRA) DRA support for partitioned devices (KEP link) DRA device taints and tolerations (KEP link) DRA: Prioritized Alternatives in Device Requests (KEP link) Kubernetes 1.33 sneak peak (Link to pre-release highlights) EndpointSlices API Kubernetes Gateway API node.status.nodeInfo.kubeProxyVersion is a lie (issue) KEP-4004: Deprecate the kubeProxyVersion field of v1.Node #4005 (KEP link) Kubelet Removal: Host network support for Windows pods (KEP link) Containerd SIG Windows HostProcess Containers (Windows) Removal: KEP-5040: Disable git_repo volume driver (KEP link) User Namespaces (Beta, Enabled by Default) CRI-O Runc In-place Resource Resize for Pods (Link to the alpha announcement, but now beta) Vertical Pod Autoscaler (VPA) KEP-5080: Ordered Namespace Deletion PyTorch Linkerd Terry Pratchett's Discworld series Tiffany Aching series Guards! Guards! Going Postal Kubernetes Slack New Contributor Orientation
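Of the 1.33 changes listed above, user namespaces are among the easiest to try out: a Pod opts in with a single field. The sketch below assumes the official kubernetes Python client and a cluster and container runtime with the feature enabled; the Pod name and image are placeholders.

```python
# Minimal sketch of opting a Pod into user namespaces (beta, enabled by default in 1.33):
# with hostUsers set to False, in-container root maps to an unprivileged UID on the host.
# Assumes the official `kubernetes` Python client and a runtime that supports the feature.
from kubernetes import client, config

config.load_kube_config()

pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "userns-demo"},
    "spec": {
        "hostUsers": False,  # run this Pod in its own user namespace
        "containers": [{"name": "app", "image": "nginx:1.27"}],
    },
}

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```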
Join Ashish Rajan in this episodeas he dives deep into the evolving world of cloud security with Sergej Epp, formerly of Deutsche Bank and Palo Alto Networks, now with Sysdig.Discover why traditional security approaches fall short in today's dynamic cloud-native environments, where workloads resemble swarms of drones rather than predictable trains. Sergej explains the critical shift from basic posture management (CSPM/CNAPP) towards runtime security, emphasizing the need for an "assume breach" mindset.Learn about the staggering reality that over 60% of containers now live for less than a minute and the immense challenges this poses for detection, incident response, and forensics.This episode covers:The evolution from traditional security to cloud-native and runtime security.Why CNAPP/CSPM is like a map, but runtime security is the essential radar.The complexities of modern incident response with ephemeral workloads.Key strategies for Security Operations Centers (SOC) adapting to the cloud.The importance of visibility, data collection, and tools for hybrid and even air-gapped environments.How AI is starting to aid security operations and forensics.Guest Socials: Sergej LinkedinPodcast Twitter - @CloudSecPod If you want to watch videos of this LIVE STREAMED episode and past episodes - Check out our other Cloud Security Social Channels:-Cloud Security Podcast- Youtube- Cloud Security Newsletter - Cloud Security BootCampIf you are interested in AI Cybersecurity, you can check out our sister podcast - AI Cybersecurity PodcastQuestions asked:(00:00) Introduction: Cloud Security & The One-Minute Container Problem(01:31) Meet Sergej Epp: 20+ Years in Cybersecurity (Deutsche Bank, Palo Alto, Sysdig)(02:44) What is Cloud Native Today? From Train Stations to Airports with Drones(05:34) Runtime Security Explained: Why It's Crucial Now(11:05) The Evolution of Cloud Security: Beyond Basic Posture Management(13:49) Incident Response Evolution: Tackling One-Minute Containers(18:34) Who Needs Runtime Security? Platform Engineers, SOC Teams & More(21:01) Runtime Security as a Platform: Beyond Detection to Prevention & Insights(24:45) Cloud Security Program Maturity: From On-Prem to Cloud Native SOC(29:20) AI in SOC Operations: Speeding Up Forensics & Context
Working with multiple Infrastructure as Code (IAC) tools can be painful. Spacelift provides a platform that operates on top of disparate IaC tools, including Ansible, Kubernetes, Pulumi, Terraform, and OpenTofu. Spacelift helps build the automation with and between these tools, creating graphs of graphs that make your dependencies just work. On today's show, we talk... Read more »
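To picture the "graphs of graphs" idea, the sketch below orders a handful of hypothetical stacks so that each one runs only after the stacks it depends on. This is the plain dependency-ordering concept in standard-library Python, not Spacelift's engine or API; the stack names and layout are invented.

```python
# The core idea behind dependency-aware IaC runs ("graphs of graphs"): each stack declares
# what it depends on, and runs are ordered so upstream stacks always apply first.
# Stack names and the layout are made-up examples; this is not Spacelift's engine or API.
from graphlib import TopologicalSorter

# stack -> the stacks it depends on
stacks = {
    "network-terraform": set(),
    "cluster-terraform": {"network-terraform"},
    "cluster-addons-kubernetes": {"cluster-terraform"},
    "app-config-ansible": {"cluster-terraform"},
    "app-release-kubernetes": {"cluster-addons-kubernetes", "app-config-ansible"},
}

print("run order:", list(TopologicalSorter(stacks).static_order()))
```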
Cloud native patterns and open source developments were on display at the KubeCon + CloudNativeCon Europe conference. The biannual gathering was showing how the container ecosystem continues to mature and analysts Jean Atelsek and William Fellows join host Eric Hanselman to explore their insights. The Cloud Native Computing Foundation (CNCF), part of the Linux Foundation, continues to expand the event and advance the maturity of the open source projects that are part of its purview. Day 2 operations have been gaining focus and the pre-conference FinOps X event was an indication of the emphasis on operational controls as it digs into infrastructure cost management. The opening “Day 0” events at KubeCon, which have been the forum for specialized project meetings, have become a key part of the conference, with over 6,000 attendees, almost half of the reported 13,000 total. The Kubernetes container management project is now over ten years old and one of the other signs of technology evolution was the integration of the OpenInfra Foundation, which managed the OpenStack project and other infrastructure elements, into the Linux Foundation. Open source projects are gaining wider adoption and one of the messages from projects and vendors at KubeCon, was the hope that it could offer alternatives to enterprise infrastructure stalwart, VMware. The CNCF is expanding its investments in improving security across the projects under its umbrella. There was also continued development of platform engineering initiatives. Bounding the expanding world of open source projects to create consistent development and operational tool chains for enterprise is one more sign of maturity in the container world. More S&P Global Content: AWS, Microsoft Azure and Google Cloud enter the FinOps vortex For S&P Global subscribers: Kubernetes meets the AI moment in Europe with technology, security, investment Data management, GenAI, hybrid cloud are top Kubernetes workloads – Highlights from VotE: DevOps Kubernetes ecosystem tackles new technical and market challenges Kubernetes, serverless adoption evolve with cloud-native maturity – Highlights from VotE: DevOps Credits: Host/Author: Eric Hanselman Guests: Jean Atelsek, William Fellows Producer/Editor: Adam Kovalsky Published With Assistance From: Sophie Carr, Feranmi Adeoshun, Kyra Smith
Mark Cusack, CTO of Yellowbrick Data, unpacks the evolving data protection landscape. We explore the shift from cloud-first to cloud-optimized strategies, the power of private LLMs, the resurgence of data sovereignty, and how hybrid cloud is shaping enterprise resilience. Mark breaks down the critical role of Kubernetes, data governance, and the growing need for secure, scalable analytics. Plus, tips for job seekers navigating the modern tech terrain.
Modern cloud-native systems are highly dynamic and distributed, which makes it difficult to monitor cloud infrastructure using traditional tools designed for static environments. This has motivated the development and widespread adoption of dedicated observability platforms. Prometheus is an open-source observability tool designed for cloud-native environments. Its strong integration with Kubernetes and pull-based data collection model make it a natural fit for monitoring these dynamic systems. The post Prometheus and Open-Source Observability with Eric Schabell appeared first on Software Engineering Daily.
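The pull model mentioned above is simple to see in code: the application only exposes an HTTP metrics endpoint, and Prometheus scrapes it on its own schedule. Here is a minimal sketch using the official prometheus_client library; the port and metric names are arbitrary.

```python
# Minimal example of Prometheus' pull model: the app exposes /metrics over HTTP and
# Prometheus scrapes it on its own schedule; the app never pushes anything.
# Uses the official `prometheus_client` library; port and metric names are arbitrary.
import random
import time

from prometheus_client import Counter, Gauge, start_http_server

REQUESTS = Counter("app_requests_total", "Total requests handled")
IN_FLIGHT = Gauge("app_in_flight_requests", "Requests currently being handled")

if __name__ == "__main__":
    start_http_server(8000)  # Prometheus scrapes http://<host>:8000/metrics
    while True:
        with IN_FLIGHT.track_inprogress():
            REQUESTS.inc()
            time.sleep(random.random())  # stand-in for real work
```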
What if managing databases on Kubernetes didn't require a team of specialists or endless configuration? In this episode of Tech Talks Daily, I sit down with Tamal Saha, founder and CEO of AppsCode, to explore how his team is building a more intuitive, scalable, and enterprise-ready approach to Kubernetes-native data management. Recorded during the IT Press Tour in London, this conversation traces Tamal's journey from Bangladesh to Google, and ultimately to launching AppsCode in 2016. He shares how early experiences with Google's internal systems helped shape his vision for a cloud-native data platform built for modern application environments. What began as an open-source passion project has evolved into a comprehensive enterprise suite that includes solutions like KubeDB, Stash, Voyager, and KubeVault. We discuss the operational realities of managing databases in Kubernetes—from simplifying provisioning and backups to solving problems around TLS management, multi-tenancy, and even secret rotation. Tamal outlines the benefits of a Kubernetes-native architecture for platform engineers, DevOps teams, and developers who want to deploy faster, automate more, and keep full control over their environments. With real-world insights from enterprise deployments, including large-scale telcos modernizing their infrastructure, Tamal explains how AppsCode is helping organizations move from ticket-based database provisioning to true self-service. He also looks ahead to what's coming next, including support for vector databases, AI-powered provisioning interfaces, and deeper integrations with open telemetry and observability tools. Whether you're running stateful workloads in Kubernetes today or exploring how to modernize your data layer, this episode is packed with insight into building platforms that work with developers, not against them. So, is it time to rethink your approach to data in the cloud-native era? Let me know what you think.