Aja Hammerly, director of developer relations at Google, sees AI as the always-available coding partner developers have long wished for—especially in those late-night bursts of inspiration. In a conversation with Alex Williams at Google Cloud Next, she described AI-assisted coding as akin to having a virtual pair programmer who can fill in gaps and offer real-time support. Hammerly urges developers to start their AI journey with tools that assist in code writing and explanation before moving into more complex AI agents. She distinguishes two types of DevEx AI: using AI to build apps and using it to eliminate developer toil. For Hammerly, this includes letting AI handle frontend work while she focuses on backend logic. The newly launched Firebase Studio exemplifies this dual approach, offering an AI-enhanced IDE with flexible tools like prototyping, code completion, and automation. Her advice? Developers should explore how AI fits into their unique workflow—because development, at its core, is deeply personal and individual.
Learn more from The New Stack about the latest AI insights with Google Cloud:
Google AI Coding Tool Now Free, With 90x Copilot's Output
Gemini 2.5 Pro: Google's Coding Genius Gets an Upgrade
Q&A: How Google Itself Uses Its Gemini Large Language Model
At Google Cloud Next '25, the company introduced Ironwood, its most advanced custom Tensor Processing Unit (TPU) to date. With 9,216 chips per pod delivering 42.5 exaflops of compute power, Ironwood doubles the performance per watt compared to its predecessor. Senior product manager Chelsie Czop explained that designing TPUs involves balancing power, thermal constraints, and interconnectivity. Google's long-term investment in liquid cooling, now in its fourth generation, plays a key role in managing the heat generated by these powerful chips. Czop highlighted the incremental design improvements made visible through changes in the data center setup, such as liquid cooling pipe placements. Customers often ask whether to use TPUs or GPUs, but the answer depends on their specific workloads and infrastructure. Some, like Moloco, have seen a 10x performance boost by moving directly from CPUs to TPUs. However, many still use both TPUs and GPUs. As models evolve faster than hardware, Google relies on collaborations with teams like DeepMind to anticipate future needs.
Learn more from The New Stack about the latest AI infrastructure insights from Google Cloud:
Google Cloud Therapist on Bringing AI to Cloud Native Infrastructure
A2A, MCP, Kafka and Flink: The New Stack for AI Agents
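For a rough sense of scale, the pod-level figures quoted above imply the peak compute per chip. A back-of-the-envelope calculation, using only the numbers from the episode, looks like this:

```python
# Back-of-the-envelope: peak compute per Ironwood chip, derived from the pod-level figures above.
pod_exaflops = 42.5        # 42.5 exaflops per pod
chips_per_pod = 9_216      # chips per pod

per_chip_petaflops = pod_exaflops * 1_000 / chips_per_pod
print(f"~{per_chip_petaflops:.1f} petaflops peak per chip")  # ≈ 4.6 petaflops
```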
At Google Cloud Next, Bobby Allen, Group Product Manager for Google Kubernetes Engine (GKE), emphasized GKE's foundational role in supporting AI platforms. While AI dominates current tech conversations, Allen highlighted that cloud-native infrastructure like Kubernetes is what enables AI workloads to function efficiently. GKE powers key Google services like Vertex AI and is trusted by organizations including DeepMind, gaming companies, and healthcare providers for AI model training and inference. Allen explained that GKE offers scalability, elasticity, and support for AI-specific hardware like GPUs and TPUs, making it ideal for modern workloads. He noted that Kubernetes was built with capabilities—like high availability and secure orchestration—that are now essential for AI deployment. Looking forward, GKE aims to evolve into a model router, allowing developers to access the right AI model based on function, not vendor, streamlining the development experience. Allen described GKE as offering maximum control with minimal technical debt, future-proofed by Google's continued investment in open source and scalable architecture.
Learn more from The New Stack about the latest insights with Google Cloud:
Google Kubernetes Engine Customized for Faster AI Work
KubeCon Europe: How Google Will Evolve Kubernetes in the AI Era
Apache Ray Finds a Home on the Google Kubernetes Engine
Without this, developers waste time managing infrastructure instead of focusing on code. VMware addresses this with VCF, a pre-integrated Kubernetes solution that includes components like Harbor, Velero, and Istio, all managed by VMware. While some worry about added complexity from abstraction, Turner dismissed concerns about virtualization overhead, pointing to benchmarks showing 98.3% of bare metal performance for virtualized AI workloads. He emphasized that AI is driving nearly half of Kubernetes deployments, prompting VMware's partnership with Nvidia to support GPU virtualization. Turner also highlighted VMware's open source leadership, contributing to major projects and ensuring Kubernetes remains cloud-independent and standards-based. VMware aims to simplify Kubernetes and AI workload management while staying committed to the open ecosystem.
Learn more from The New Stack about the latest insights with VMware:
Has VMware Finally Caught Up With Kubernetes?
VMware's Golden Path
Prequel is launching a new developer-focused service aimed at democratizing software error detection—an area typically dominated by large cloud providers. Co-founded by Lyndon Brown and Tony Meehan, both former NSA engineers, Prequel introduces a community-driven observability approach centered on Common Reliability Enumerations (CREs). CREs categorize recurring production issues, helping engineers detect, understand, and communicate problems without reinventing solutions or working in isolation. Their open-source tools, cre and prereq, allow teams to build and share detectors that catch bugs and anti-patterns in real time—without exposing sensitive data, thanks to edge processing using WebAssembly. The urgency behind Prequel's mission stems from the rapid pace of AI-driven development, increased third-party code usage, and rising infrastructure costs. Traditional observability tools may surface symptoms, but Prequel aims to provide precise problem definitions and actionable insights. While observability giants like Datadog and Splunk dominate the market, Brown and Meehan argue that engineers still feel overwhelmed by data and underpowered in diagnostics—something they believe CREs can finally change.
Learn more from The New Stack about the latest observability insights:
Why Consolidating Observability Tools Is a Smart Move
Building an Observability Culture: Getting Everyone Onboard
At Arm, open source is the default approach, with proprietary software requiring justification, says Andrew Wafaa, fellow and senior director of software communities. Speaking at KubeCon + CloudNativeCon Europe, Wafaa emphasized Arm's decade-long commitment to open source, highlighting its investment in key projects like the Linux kernel, GCC, and LLVM. This investment is strategic, ensuring strong support for Arm's architecture through vital tools and system software. Wafaa also challenged the hype around GPUs in AI, asserting that CPUs—especially those enhanced with Arm's Scalable Matrix Extension (SME2) and Scalable Vector Extension (SVE2)—are often more suitable for inference workloads. CPUs offer greater flexibility, and Arm's innovations aim to reduce dependency on expensive GPU fleets. On the AI framework front, Wafaa pointed to PyTorch as the emerging hub, likening its ecosystem-building potential to Kubernetes. As a PyTorch Foundation board member, he sees PyTorch becoming the central open source platform in AI development, with broad community and industry backing.
Learn more from The New Stack about the latest insights about Arm:
Edge Wars Heat Up as Arm Aims to Outflank Intel, Qualcomm
Arm: See a Demo About Migrating a x86-Based App to ARM64
In today's uncertain economy, businesses are tightening costs, including for Kubernetes (K8s) operations, which are notoriously difficult to optimize. Yodar Shafrir, co-founder and CEO of ScaleOps, explained at KubeCon + CloudNativeCon Europe that dynamic, cloud-native applications have constantly shifting loads, making resource allocation complex. Engineers must provision enough resources to handle spikes without overspending, but in large production clusters with thousands of applications, manual optimization often fails. This leads to 70–80% resource waste and performance issues. Developers typically prioritize application performance over operational cost, and AI workloads further strain resources. Existing optimization tools offer static recommendations that quickly become outdated due to the dynamic nature of workloads, risking downtime. Shafrir emphasized that real-time, fully automated solutions like ScaleOps' platform are crucial. By dynamically adjusting container-level resources based on real-time consumption and business metrics, ScaleOps improves application reliability and eliminates waste. Their approach shifts Kubernetes management from static to dynamic resource allocation. Listen to the full episode for more insights and ScaleOps' roadmap.
Learn more from The New Stack about the latest in scaling Kubernetes and managing operational costs:
ScaleOps Adds Predictive Horizontal Scaling, Smart Placement
ScaleOps Dynamically Right-Sizes Containers at Runtime
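To make the "container-level resources" point concrete, here is a minimal sketch of the kind of right-sizing adjustment a platform like ScaleOps automates continuously. It is not ScaleOps' API; it uses the standard Kubernetes Python client, and the deployment name, namespace, and resource values are hypothetical.

```python
from kubernetes import client, config

# Load local kubeconfig and talk to the cluster's apps API.
config.load_kube_config()
apps = client.AppsV1Api()

# Hypothetical right-sizing decision for one container of one deployment.
patch = {
    "spec": {
        "template": {
            "spec": {
                "containers": [{
                    "name": "checkout",
                    "resources": {
                        "requests": {"cpu": "250m", "memory": "256Mi"},
                        "limits": {"cpu": "1", "memory": "512Mi"},
                    },
                }]
            }
        }
    }
}

# Doing this by hand for thousands of workloads, and keeping it current as load
# shifts, is exactly where static recommendations and manual tuning break down.
apps.patch_namespaced_deployment(name="checkout", namespace="shop", body=patch)
```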
Heroku has been undergoing a major transformation, re-platforming its entire Platform as a Service (PaaS) offering over the past year and a half. This ambitious effort, dubbed “Fir,” will soon reach general availability. According to Betty Junod, CMO and SVP at Heroku (owned by Salesforce), the overhaul includes a shift to Kubernetes and OCI standards, reinforcing Heroku's commitment to open source. The platform now features Heroku Cloud Native Buildpacks, which let developers create container images without Dockerfiles. Originally built on Ruby on Rails and predating Docker and AWS, Heroku now supports eight programming languages. The company has also deepened its open source engagement by becoming a platinum member of the Cloud Native Computing Foundation (CNCF), contributing to projects like OpenTelemetry. Additionally, Heroku has open sourced its Twelve-Factor Apps methodology, inviting the community to help modernize it to address evolving needs such as secrets management and workload identity. This signals a broader effort to align Heroku's future with the cloud native ecosystem.
Learn more from The New Stack about Heroku's approach to Platform-as-a-Service:
Return to PaaS: Building the Platform of Our Dreams
Heroku Moved Twelve-Factor Apps to Open Source. What's Next?
How Heroku Is Positioned To Help Ops Engineers in the GenAI Era
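As a reminder of what the Twelve-Factor methodology actually prescribes, here is a minimal sketch of factor III (store config in the environment), the kind of principle the community is now being invited to modernize. The variable names are illustrative; on a platform like Heroku they would be set per app rather than baked into the image.

```python
import os

# Twelve-Factor, factor III: configuration lives in the environment,
# not in the code or the container image.
DATABASE_URL = os.environ["DATABASE_URL"]            # fails fast if not configured
LOG_LEVEL = os.environ.get("LOG_LEVEL", "info")      # sensible default for optional settings

def connection_info() -> dict:
    """Return the runtime configuration this process was deployed with."""
    return {"database_url": DATABASE_URL, "log_level": LOG_LEVEL}
```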
In this episode of The New Stack Makers, recorded at KubeCon + CloudNativeCon Europe, Alex Williams speaks with Ville Aikas, Chainguard founder and early Kubernetes contributor. They reflect on the evolution of container security, particularly how early assumptions—like trusting that users would validate container images—proved problematic. Aikas recalls the lack of secure defaults, such as allowing containers to run as root, stemming from the team's internal Google perspective, which led to unrealistic expectations about external security practices. The Kubernetes community has since made strides with governance policies, secure defaults, and standard practices like avoiding long-lived credentials and supporting federated authentication. Aikas founded Chainguard to address the need for trusted, minimal, and verifiable container images—offering zero-CVE images, transparent toolchains, and full SBOMs. This security-first philosophy now extends to virtual machines and Java dependencies via Chainguard Libraries. The discussion also highlights the rising concerns around AI/ML security in Kubernetes, including complex model dependencies, GPU integrations, and potential attack vectors—prompting Chainguard's move toward locked-down AI images.
Learn more from The New Stack about container security and AI:
Chainguard Takes Aim At Vulnerable Java Libraries
Clean Container Images: A Supply Chain Security Revolution
Revolutionizing Offensive Security: A New Era With Agentic AI
In a candid episode of The New Stack Makers, Kubernetes pioneer Kelsey Hightower and AWS's Eswar Bala explored the evolving relationship between enterprise cloud providers and open source software at KubeCon + CloudNativeCon London. Hightower highlighted open source's origins as a grassroots movement challenging big vendors, and shared how it gave people—especially those without traditional tech credentials—a way into the industry. Recalling his own journey, Hightower emphasized that open source empowered individuals through contribution over credentials. Bala traced the early development of Kubernetes and his own transition from building container orchestration systems to launching AWS's Elastic Kubernetes Service (EKS), driven by growing customer demand. The discussion, recorded at KubeCon + CloudNativeCon Europe, touched on how open source is now central to enterprise cloud strategies, with AWS not only contributing but creating projects like Karpenter, Cedar, and Kro. Both speakers agreed that open source's collaborative model—where companies build in public and customers drive innovation—has reshaped the cloud ecosystem, turning former tensions into partnerships built on community-driven progress.
Learn more from The New Stack about the relationship between enterprise cloud providers and open source software:
The Metamorphosis of Open Source: An Industry in Transition
The Complex Relationship Between Cloud Providers and Open Source
How Open Source Has Turned the Tables on Enterprise Software
In a rare show of collaboration, Google, Amazon, and Microsoft have joined forces on Kro — the Kubernetes Resource Orchestrator — an open source, cloud-agnostic tool designed to simplify custom resource orchestration in Kubernetes. Announced during KubeCon + CloudNativeCon Europe, Kro was born from strong customer demand for a Kubernetes-native solution that works across cloud providers without vendor lock-in. Nic Slattery, Product Manager at Google, and Jesse Butler, Principal Product Manager at AWS, shared with The New Stack that, unlike many enterprise products, Kro didn't stem from top-down strategy but from consistent customer "pull" experienced by all three companies. It aims to reduce complexity by allowing platform teams to offer simplified interfaces to developers, enabling resource requests without needing deep service-specific knowledge. Kro also represents a unique cross-company collaboration, driven by a shared mission and open source values. Though still in its alpha stage, the project has already attracted 57 contributors in just seven months. The team is now focused on refining core features and preparing for a production-ready release — all while maintaining a narrowly scoped, community-first approach.
Learn more from The New Stack about Kro:
One Mighty kro; One Giant Leap for Kubernetes Resource Orchestration
Kubernetes Gets a New Resource Orchestrator in the Form of Kro
Orchestrate Cloud Native Workloads With Kro and Kubernetes
OpenSearch has evolved significantly since its 2021 launch, recently reaching a major milestone with its move to the Linux Foundation. This shift from company-led to foundation-based governance has accelerated community contributions and enterprise adoption, as discussed by NetApp's Amanda Katona in a New Stack Makers episode recorded at KubeCon + CloudNativeCon Europe. NetApp, an early adopter of OpenSearch following Elasticsearch's licensing change, now offers managed services on the platform and contributes actively to its development. Katona emphasized how neutral governance under the Linux Foundation has lowered barriers to enterprise contribution, noting a 56% increase in downloads since the transition and growing interest from developers. OpenSearch 3.0, featuring a Lucene 10 upgrade, promises faster search capabilities—especially relevant as data volumes surge. NetApp's ongoing investments include work on machine learning plugins and developer training resources. Katona sees the Linux Foundation's involvement as key to OpenSearch's long-term success, offering vendor-neutral governance and reassuring users seeking openness, performance, and scalability in data search and analytics.
Learn more from The New Stack about OpenSearch:
Report: OpenSearch Bests ElasticSearch at Vector Modeling
AWS Transfers OpenSearch to the Linux Foundation
OpenSearch: How the Project Went From Fork to Foundation
AI applications are evolving beyond chatbots into more complex and transformative solutions, according to Marco Palladino, CTO and co-founder of Kong. In a recent episode of The New Stack Makers, he discussed the rise of AI agents, which act as "virtual employees" to enhance organizational efficiency. For instance, AI can now function as a product manager for APIs—analyzing documentation, detecting inaccuracies, and making corrections. However, reliance on AI agents brings security risks, such as data leakage and governance challenges. Organizations need observability and safeguards, but developers often resist implementing these requirements manually. As GenAI adoption matures, teams seek ways to accelerate development without rebuilding security measures repeatedly. To address these challenges, Kong introduced AI Gateway, an open-source plugin for its API Gateway. AI Gateway supports multiple AI models across providers like AWS, Microsoft, and Google, offering developers a universal API to integrate AI securely and efficiently. It also features automated retrieval-augmented generation (RAG) pipelines to minimize hallucinations. Palladino emphasized the need for consistent security in AI infrastructure, ensuring developers can focus on innovation while leveraging built-in protections.
Learn more from The New Stack about Kong's AI Gateway:
Kong: New ‘AI-Infused' Features for API Management, Dev Tools
From Zero to a Terraform Provider for Kong in 120 Hours
Platform engineering was meant to ease the burdens of Devs and Ops by reducing cognitive load and repetitive tasks. However, building internal developer platforms (IDPs) has proven challenging. Despite this, Gartner predicts that by 2026, 80% of software engineering organizations will have a platform team. In a recent New Stack Makers episode, Mallory Haigh of Humanitec and Nathen Harvey of Google discussed the current state and future of platform engineering. Haigh emphasized that many organizations rush to build IDPs without understanding why they need them, leading to ineffective implementations. She noted that platform engineering is 10% technical and 90% cultural change, requiring deep introspection and strategic planning. AI-driven automation, particularly agentic AI, is expected to shape platform engineering's future. Haigh highlighted how AI can enhance platform orchestration and optimize GPU resource management. Harvey compared platform engineering to generative AI—both aim to reduce toil and improve efficiency. As AI adoption grows, platform teams must ensure their infrastructure supports these advancements.
Learn more from The New Stack about platform engineering:
Platform Engineering on the Brink: Breakthrough or Bust?
Platform Engineers Must Have Strong Opinions
The Missing Piece in Platform Engineering: Recognizing Producers
AI agents are set to transform software development, but software itself isn't going anywhere—despite the dramatic predictions. On this episode of The New Stack Makers, Mark Hinkle, CEO and Founder of Peripety Labs, discusses how AI agents relate to serverless technologies, infrastructure-as-code (IaC), and configuration management. Hinkle envisions AI agents as “dumb robots” handling tasks like querying APIs and exchanging data, while the real intelligence remains in large language models (LLMs). These agents, likely implemented as serverless functions in Python or JavaScript, will automate software development processes dynamically. LLMs, leveraging vast amounts of open-source code, will enable AI agents to generate bespoke, task-specific tools on the fly—unlike traditional cloud tools from HashiCorp or configuration management tools like Chef and Puppet. As AI-generated tooling becomes more prevalent, managing and optimizing these agents will require strong observability and evaluation practices. According to Hinkle, this shift marks the future of software, where AI agents dynamically create, call, and manage tools for CI/CD, monitoring, and beyond. Check out the full episode for more insights.
Learn more from The New Stack about emerging trends in AI agents:
Lessons From Kubernetes and the Cloud Should Steer the AI Revolution
AI Agents: Why Workflows Are the LLM Use Case to Watch
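A minimal sketch of the "dumb robot" idea described above: a serverless-style handler that only fetches data from an API and hands it back, leaving any reasoning to the LLM that called it. The event shape, endpoint, and handler signature are illustrative assumptions, not a specific platform's API.

```python
import json
import urllib.request

def handler(event, context=None):
    """Fetch data from an API and return it unchanged.
    The decision about what to do with the result lives in the LLM, not here."""
    url = event["url"]  # e.g. a deployment-status or metrics endpoint
    with urllib.request.urlopen(url, timeout=10) as resp:
        payload = json.load(resp)
    return {"status": resp.status, "data": payload}

# Example invocation, roughly as a serverless platform might deliver it:
# handler({"url": "https://api.example.com/v1/deployments/123/status"})
```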
The transition from SaaS to Services as Software with AI agents is underway, necessitating new orchestration methods similar to Kubernetes for containers. AI agents will require resource allocation, workflow management, and scalable infrastructure as they evolve. Two key trends are driving this shift: Data Evolution – from spreadsheets to AI agents, data has progressed through relational databases, big data, predictive analytics, and generative AI; and Computing Evolution – starting from mainframes, the journey has moved through desktops, client servers, web/mobile, SaaS, and now agentic workflows. Janakiram MSV, an analyst, notes on this episode of The New Stack Makers that SaaS depends on data—without it, platforms like Salesforce and SAP lack value. As data becomes more actionable and compute more agentic, a new paradigm emerges: Services as Software. AI agents will automate tasks previously requiring human intervention, like emails and sales follow-ups. However, orchestrating them will be complex, akin to Kubernetes managing containers. Unlike deterministic containers, AI agents depend on dynamic, trained data, posing new enterprise challenges in memory management and infrastructure.
Learn more from The New Stack about the evolution to AI agents:
How AI Agents Are Starting To Automate the Enterprise
Can You Trust AI To Be Your Data Analyst?
Agentic AI is the New Web App, and Your AI Strategy Must Evolve
Amazon Q Developer is streamlining the software development lifecycle by integrating AI-powered tools into AWS. In an interview at AWS in Seattle, Srini Iragavarapu, director of generative AI Applications and Developer Experiences at AWS, discussed how Amazon Q Developer enhances the developer experience. Initially focused on inline code completions, Amazon Q Developer evolved by incorporating generative AI models like Amazon Nova and Anthropic models, improving recommendations and accelerating development. British Telecom reported a 37% acceptance rate for AI-generated code. Beyond code completion, Amazon Q Developer enables developers to interact with Q for code reviews, test generation, and migrations. AWS also developed agentic frameworks to automate undifferentiated tasks, such as upgrading Java versions. Iragavarapu noted that internally, AWS used Q Developer to migrate 30,000 production applications, saving $260 million annually. The platform offers code generation, testing suites, RAG capabilities, and access to AWS custom chips, further flattening the SDLC by automating routine work. Listen to The New Stack Makers for the full discussion.
Learn more from The New Stack about Amazon Q Developer:
Amazon Q Developer Now Handles Your Entire Code Pipeline
Amazon Q Apps: AI-Powered Development for All
Amazon Revamps Developer AI With Code Conversion, Security
Maya Kaczorowski noticed that AI identity and AI agent identity concerns were emerging from outside the security industry, rather than from CISOs and security leaders. She concluded that OAuth, the open standard for authorization, already serves the purpose of granting access without exposing passwords. Kaczorowski, a respected technologist and founder of Oblique, a startup focused on self-serve access controls, recently wrote about OAuth and AI agents and shared her insights on this episode of The New Stack Makers. She noted that developers see AI agents as extensions of themselves, granting them limited access to data and capabilities—precisely what OAuth is designed to handle. The challenges with AI agent identity are vast, involving different approaches to authentication, such as those explored by companies like AuthZed. While existing authorization models like RBAC or ABAC may still apply, the real challenge lies in scale. The exponential growth of AI-related entities—from users to LLMs—could mean even small organizations manage hundreds of thousands of agents. Future solutions must accommodate this massive scale efficiently. For the full discussion, check out The New Stack Makers interview with Kaczorowski.
Learn more from The New Stack about OAuth requirements for AI Agents:
OAuth 2.0: A Standard in Name Only?
AI Agents Are Redefining the Future of Identity and Access Management
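To ground the "limited access without exposing passwords" point, here is a minimal sketch of an agent obtaining a scoped token via the standard OAuth 2.0 client-credentials grant. The token endpoint, credentials, and scope names are hypothetical; real values come from whatever identity provider issues the agent's credentials.

```python
import requests

# Hypothetical identity provider endpoint.
TOKEN_URL = "https://auth.example.com/oauth/token"

def get_scoped_token(client_id: str, client_secret: str, scope: str) -> str:
    """Client-credentials grant: the agent receives a token limited to `scope`,
    and never sees the user's password."""
    resp = requests.post(
        TOKEN_URL,
        data={
            "grant_type": "client_credentials",
            "client_id": client_id,
            "client_secret": client_secret,
            "scope": scope,
        },
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]

# Example: an agent that may only read calendars, nothing else.
# token = get_scoped_token("agent-42", "s3cr3t", "calendar.read")
```

The scale problem Kaczorowski raises shows up here too: issuing, rotating, and auditing one of these credentials per agent is trivial for ten agents and very hard for hundreds of thousands.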
The rise of the World Wide Web enabled developers to build tools and platforms on top of it. Similarly, the advent of large language models (LLMs) allows for creating new AI-driven tools, such as autonomous agents that interact with LLMs, execute tasks, and make decisions. However, verifying these decisions is crucial, and critical reasoning may be a solution, according to Yam Marcovitz, tech lead at Parlant.io and CEO of emcie.co. Marcovitz likens LLM development to the evolution of programming languages, from punch cards to modern languages like Python. Early LLMs started with small transformer models, leading to systems like BERT and GPT-3. Now, instead of mere text auto-completion, models are evolving to enable better reasoning and complex instructions. Parlant uses "attentive reasoning queries (ARQs)" to maintain consistency in AI responses, ensuring near-perfect accuracy. Their approach balances structure and flexibility, preventing models from operating entirely autonomously. Ultimately, Marcovitz argues that subjectivity in human interpretation extends to LLMs, making perfect objectivity unrealistic.
Learn more from The New Stack about the evolution of LLMs:
AI Alignment in Practice: What It Means and How to Get It
Agentic AI: The Next Frontier of AI Power
Make the Most of AI Agents: Tips and Tricks for Developers
Adam Jacob, CEO of System Initiative, discusses a shift in infrastructure automation—moving from writing code to building models that enable rapid simulations and collaboration. In The New Stack Makers, he compares this approach to Formula One racing, where teams use high-fidelity models to simulate race conditions, optimizing performance before hitting the track. System Initiative applies this concept to enterprise automation, creating a model that understands how infrastructure components interact. This enables fast, multiplayer feedback loops, simplifying complex tasks while enhancing collaboration. Engineers can extend the system by writing small, reactive JavaScript functions that automate processes, such as transforming existing architectures into new ones. The platform visually represents these transformations, making automation more intuitive and efficient. By leveraging models instead of traditional code-based infrastructure management, System Initiative enhances agility, reduces complexity, and improves DevOps collaboration. To explore how this ties into the concept of the digital twin, listen to the full New Stack Makers episode.
Learn more from The New Stack about System Initiative:
Beyond Infrastructure as Code: System Initiative Goes Live
How System Initiative Treats AWS Components as Digital Twins
System Initiative Code Now Open Source
Morgan McLean, co-founder of OpenTelemetry and senior director of product management at Splunk, has long tackled the challenges of observability in large-scale systems. In a conversation with Alex Williams on The New Stack Makers, McLean reflected on his early frustrations debugging high-scale services and the need for better observability tools. OpenTelemetry, formed in 2019 from OpenTracing and OpenCensus, has since become a key part of modern observability strategies. As a Cloud Native Computing Foundation (CNCF) incubating project, it's the second most active open source project after Kubernetes, with over 1,200 developers contributing monthly. McLean highlighted OpenTelemetry's role in solving scaling challenges, particularly in Kubernetes environments, by standardizing distributed tracing, application metrics, and data extraction. Looking ahead, profiling is set to become the fourth major observability signal alongside logs, tracing, and metrics, with general availability expected in 2025. McLean emphasized ongoing improvements, including automation and ease of adoption, predicting even faster OpenTelemetry adoption as friction points are resolved.
Learn more from The New Stack about the latest trends in OpenTelemetry:
What Is OpenTelemetry? The Ultimate Guide
Observability in 2025: OpenTelemetry and AI to Fill In Gaps
Honeycomb.io's Austin Parker: OpenTelemetry In-Depth
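For readers who have not touched OpenTelemetry, the standardization McLean describes looks like this in practice: one vendor-neutral API for creating spans, with the exporter swapped out per backend. This is a minimal sketch using the Python SDK; a real service would export to an OTLP endpoint rather than the console, and the service and attribute names here are made up.

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

# Minimal setup: spans are batched and printed to the console.
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("checkout-service")

with tracer.start_as_current_span("charge-card") as span:
    span.set_attribute("order.id", "1234")
    # ... call the payment provider here; child spans nest automatically ...
```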
Observability is expensive because traditional tools weren't designed for the complexity and scale of modern cloud-native systems, explains Christine Yen, CEO of Honeycomb.io. Logging tools, while flexible, were optimized for manual, human-scale data reading. This approach struggles with the massive scale of today's software, making logging slow and resource-intensive. Monitoring tools, with their dashboards and metrics, prioritized speed over flexibility, which doesn't align with the dynamic nature of containerized microservices. Similarly, traditional APM tools relied on “magical” setups tailored for consistent application environments like Rails, but they falter in modern polyglot infrastructures with diverse frameworks. Additionally, observability costs are rising due to evolving demands from DevOps, platform engineering, and site reliability engineering (SRE). Practices like service-level objectives (SLOs) emphasize end-user experience, pushing teams to track meaningful metrics. However, outdated observability tools often hinder this, forcing teams to cut back on crucial data. Yen highlights the potential of AI and innovations like OpenTelemetry to address these challenges.
Learn more from The New Stack about the latest trends in observability:
Honeycomb.io's Austin Parker: OpenTelemetry In-Depth
Observability in 2025: OpenTelemetry and AI to Fill In Gaps
Observability and AI: New Connections at KubeCon
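The SLO practice Yen mentions reduces to simple arithmetic: the objective implies an error budget that teams spend deliberately. A quick worked example with a hypothetical 99.9% target over a 30-day window:

```python
# Hypothetical SLO: 99.9% availability over a rolling 30-day window.
slo = 0.999
window_minutes = 30 * 24 * 60                    # 43,200 minutes in the window
error_budget_minutes = window_minutes * (1 - slo)

print(round(error_budget_minutes, 1))            # 43.2 minutes of allowed unavailability
```

The point is that tracking whether those 43 minutes are being spent on real user-facing failures requires exactly the kind of fine-grained, end-user-oriented data that older tooling makes expensive to keep.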
Generative AI is a data-driven story with significant infrastructure and operational implications, particularly around the rising demand for GPUs, which are better suited for AI workloads than CPUs. In an episode of The New Stack Makers recorded at KubeCon + CloudNativeCon North America, Sudha Raghavan, SVP for Developer Platform at Oracle Cloud Infrastructure, discussed how AI's rapid adoption has reshaped infrastructure needs. The release of ChatGPT triggered a surge in GPU demand, with organizations requiring GPUs for tasks ranging from testing workloads to training large language models across massive GPU clusters. These workloads run continuously at peak power, posing challenges such as high hardware failure rates and energy consumption. Oracle is addressing these issues by building GPU superclusters and enhancing Kubernetes functionality. Tools like Oracle's Node Manager simplify interactions between Kubernetes and GPUs, providing tailored observability while maintaining Kubernetes' user-friendly experience. Raghavan emphasized the importance of stateful job management and infrastructure innovations to meet the demands of modern AI workloads.
Learn more from The New Stack about how Oracle is addressing the GPU demand for AI workloads with its GPU superclusters and enhanced Kubernetes functionality:
Oracle Code Assist, Java-Optimized, Now in Beta
Oracle's Code Assist: Fashionably Late to the GenAI Party
Oracle Unveils Java 23: Simplicity Meets Enterprise Power
The hardware industry is surging, driven by AI's demanding workloads, with Arm—a 35-year-old pioneer in processor IP—playing a pivotal role. In an episode of The New Stack Makers recorded at KubeCon + CloudNativeCon North America, Pranay Bakre, principal solutions engineer at Arm, discussed how Arm is helping organizations migrate and run applications on its technology. Bakre highlighted Arm's partnership with hyperscalers like AWS, Google, Microsoft, and Oracle, showcasing processors such as AWS Graviton and Google Axion, built on Arm's power-efficient, cost-effective Neoverse IP. This design ethos has spurred wide adoption, with 90-95% of CNCF projects supporting native Arm binaries. Attendees at Arm's booth frequently inquired about its plans to support AI workloads. Bakre noted the performance advantages of Arm-based infrastructure, delivering up to 60% workload improvements over legacy architectures. The episode also features a demo on migrating x86 applications to ARM64 in both cloud and containerized environments, emphasizing Arm's readiness for the AI era.
Learn more from The New Stack about Arm:
Arm Eyes AI with Its Latest Neoverse Cores and Subsystem
Big Three in Cloud Prompts ARM to Rethink Software
Heroku has open-sourced its Twelve-Factor App methodology, initially created in 2011 to help developers build portable, resilient cloud applications. Heroku CTO Gail Frederick announced this shift at KubeCon + CloudNativeCon North America, explaining the move aims to involve the community in modernizing the framework. While the methodology inspired a generation of cloud developers, certain factors are now outdated, such as the focus on logs as event streams. Frederick highlighted the need for updates to address current practices like telemetry and metrics visualization, reflecting the rise of OpenTelemetry. The updated Twelve-Factor methodology will expand to accommodate modern cloud-native realities, such as deploying interconnected systems of apps with diverse backing services. Planned enhancements include supporting documents, reference architectures, and code examples illustrating the principles in action. Success will be measured by its applicability to use cases involving edge computing, IoT, serverless, and distributed systems. Heroku views this open-source effort as an opportunity to redefine best practices for the next era of cloud development.
Learn more from The New Stack about Heroku:
How Heroku Is Positioned To Help Ops Engineers in the GenAI Era
The Data Stack Journey: Lessons from Architecting Stacks at Heroku and Mattermost
Falco, an open-source runtime observability and security tool, was created by Sysdig founder Loris Degioanni to collect real-time system events directly from the kernel. Leveraging eBPF technology for improved safety and performance, Falco gathers data like pod names and namespaces, correlating them with customizable rules. Unlike static analysis tools, it operates in real-time, monitoring events as they occur. In this episode of The New Stack Makers, TNS Editor-in-Chief Heather Joslyn spoke with Thomas Labarussias, Senior Developer Advocate at Sysdig; Leonardo Grasso, Open Source Tech Lead Manager at Sysdig; and Luca Guerra, Sr. Open Source Engineer at Sysdig, to get the latest update on Falco. Graduating from the Cloud Native Computing Foundation (CNCF) in February 2024 after entering its sandbox six years prior, Falco's maintainers have focused on technical maturity and broad usability. This includes simplifying installations across diverse environments, thanks in part to advancements from the Linux Foundation. Looking ahead, the team is enhancing core functionalities, including more customizable rules and alert formats. A key innovation is Falco Talon, introduced in September 2023, which provides a no-code response engine to link alerts with real-time remediation actions. Talon addresses a longstanding gap in automating responses within the Falco ecosystem, advancing its capabilities for runtime security.
Learn more from The New Stack about Falco:
Falco Is a CNCF Graduate. Now What?
Falco Plugins Bring New Data Sources to Real-Time Security
eBPF Tools: An Overview of Falco, Inspektor Gadget, Hubble and Cilium
Jetstack's cert-manager, a leading open-source project in Kubernetes certificate management, began as a job interview challenge. Co-founder Matt Barker recalls asking a prospective engineer to automate Let's Encrypt within Kubernetes. By Monday, the candidate had created kube-lego, which evolved into cert-manager, now downloaded over 500 million times monthly. Cert-manager's journey to CNCF graduation, achieved in September, began with its donation to the foundation four years ago. Relaunched as cert-manager, the project grew under engineer James Munnelly, becoming the de facto standard for certificate lifecycle management. The thriving community and ecosystem around cert-manager highlighted its suitability for CNCF stewardship. However, maintainers, including Ashley Davis, noted challenges in navigating differing opinions within its vast user base. With graduation achieved, cert-manager's roadmap includes sub-projects like trust-manager, addressing TLS trust bundle management and Istio integration. Barker aims to streamline enterprise-scale deployments and educate security teams on cert-manager's impact. Cert-manager has become integral to cloud-native workflows, promising to simplify hybrid, multicloud, and edge deployments.
Learn more from The New Stack about cert-manager:
Jetstack's cert-manager Joins the CNCF Sandbox of Cloud Native Technologies
Jetstack Secure Promises to Ease Kubernetes TLS Security
The tech industry faces a paradox: despite high demand for skills, many developers and engineers are unemployed. At KubeCon + CloudNativeCon North America in Salt Lake City, Utah, Andela and the Cloud Native Computing Foundation (CNCF) announced an initiative to train 20,000 technologists in cloud native computing over the next decade. oss O'neill, Senior Program Manager at Andela, and Chris Aniszczyk, CNCF's CTO, highlighted the lack of Kubernetes-certified professionals in regions like Africa and emphasized the need for global inclusivity to make cloud native technology ubiquitous. Andela, operating in over 135 countries and founded in Nigeria, views this program as a continuation of its mission to upskill African talent, aligning with its partnerships with tech giants like Google, AWS, and Nvidia. This initiative also addresses the increasing employer demand for Kubernetes and modern cloud skills, reflecting a broader skills mismatch in the tech workforce. Aniszczyk noted that companies urgently seek expertise in cloud native infrastructure, observability, and platform engineering. The partnership aims to bridge these gaps, offering opportunities to meet evolving global tech needs.
Learn more from The New Stack about developer talent, skills and needs:
Top Developer Skills for AI and Cloud Jobs
5 Software Development Skills AI Will Render Obsolete
Cloud Native Skill Gaps are Killing Your Gains
When open source projects shift to proprietary licensing, forks and new communities often emerge. Such was the case with MapLibre, born from Mapbox's 2020 decision to make its map rendering engine proprietary. In conjunction with All Things Open 2024, Seth Fitzsimmons, a principal engineer at AWS, and Tarus Balog, principal technical strategist for open source at AWS, shared that this engine, popular for its WebGL-powered vector maps and dynamic customization features, was essential for organizations like BMW, The New York Times, and Instacart. However, Mapbox's move disappointed its open-source user base by tying the upgraded Mapbox GL JS library to proprietary products. In response, three users forked the engine to create MapLibre, committing to modernizing and preserving its open-source ethos. Despite challenges—forking often struggles to sustain momentum—MapLibre has thrived, supported by contributors and corporate sponsors like AWS, Meta, and Microsoft. Notably, a community member transitioned the project from JavaScript to TypeScript over nine months, showcasing the dedication of unpaid contributors. Thanks to financial backing, MapLibre now employs maintainers, enabling it to reciprocate community efforts while fostering equality among participants. The project illustrates the resilience of open-source communities when proprietary shifts occur.
Learn more from The New Stack about forking open source projects:
Why Do Open Source Projects Fork?
OpenSearch: How the Project Went From Fork to Foundation
At All Things Open in October, Anandhi Bumstead, AWS's director of software engineering, highlighted OpenSearch's journey and the advantages of the Linux Foundation's stewardship. OpenSearch, an open source data ingestion and analytics engine, was transferred by Amazon Web Services (AWS) to the Linux Foundation in September 2024, seeking neutral governance and broader community collaboration. Originally forked from Elasticsearch after a licensing change in 2021, OpenSearch has evolved into a versatile platform likened to a “Swiss Army knife” for its broad use cases, including observability, log and security analytics, alert detection, and semantic and hybrid search, particularly in generative AI applications. Despite criticism over slower indexing speeds compared to Elasticsearch, significant performance improvements have been made. The latest release, OpenSearch 2.17, delivers 6.5x faster query performance and a 25% indexing improvement due to segment replication. Future efforts aim to enhance indexing, search, storage, and vector capabilities while optimizing costs and efficiency. Contributions are welcomed via opensearch.org.
Learn more from The New Stack about deploying applications on OpenSearch:
AWS Transfers OpenSearch to the Linux Foundation
From Flashpoint to Foundation: OpenSearch's Path Clears
Semantic Search with Amazon OpenSearch Serverless and Titan
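For a flavor of the log-analytics use case mentioned above, here is a minimal sketch of a query against an OpenSearch cluster using the official opensearch-py client. The host, index name, and fields are illustrative assumptions, and a production setup would add authentication and TLS.

```python
from opensearchpy import OpenSearch

# Connect to a local, unauthenticated cluster (illustration only).
client = OpenSearch(hosts=[{"host": "localhost", "port": 9200}])

# Find recent log lines mentioning timeouts.
resp = client.search(
    index="app-logs",
    body={
        "query": {"match": {"message": "timeout"}},
        "size": 5,
    },
)

for hit in resp["hits"]["hits"]:
    print(hit["_source"].get("message"))
```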
Is Apache Spark too costly? Amazon Principal Engineer Patrick Ames tackled this question during an interview with The New Stack Makers, sharing insights into transitioning from Spark to Ray for managing large-scale data. Ames, described as a "go-to" engineer for exabyte-scale projects, emphasized a goal-driven approach to solving complex engineering problems, from simplifying daily chores to optimizing software solutions. Initially, Spark was chosen at Amazon for its simplicity and open-source flexibility, allowing efficient merging of data with minimal SQL code. The team leveraged Spark in a decoupled architecture over S3 storage, scaling it to handle thousands of jobs daily. However, as data volumes grew to hundreds of terabytes and beyond, Spark's limitations became apparent. Long processing times and high costs prompted a search for alternatives. Enter Ray—a unified framework designed for scaling AI and Python applications. After experimentation, Ames and his team noted significant efficiency improvements, driving the shift from Spark to Ray to meet scalability and cost-efficiency needs.
Learn more from The New Stack about Apache Spark and Ray:
Amazon to Save Millions Moving From Apache Spark to Ray
How Ray, a Distributed AI Framework, Helps Power ChatGPT
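For readers unfamiliar with Ray, the core programming model is small: decorate a Python function and it becomes a distributed task. This is a minimal, self-contained sketch of fanning work out across shards, not a reproduction of Amazon's pipeline, and the per-shard work shown is a stand-in for the real merge/compaction logic.

```python
import ray

ray.init()  # starts a local Ray runtime; Amazon-scale jobs run on large clusters instead

@ray.remote
def transform(shard):
    # Stand-in for real per-shard processing (merging, compaction, etc.).
    return sum(shard)

# Split the input into shards and process them in parallel.
shards = [list(range(i, i + 1_000)) for i in range(0, 10_000, 1_000)]
futures = [transform.remote(s) for s in shards]
print(sum(ray.get(futures)))
```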
In this episode of The New Stack Makers, the focus is Codiac, which aims to simplify app deployment on Kubernetes by offering a unified interface that minimizes complexity. Traditionally, Kubernetes is powerful but challenging for teams due to its intricate configurations and extensive manual coding. Co-founded by Ben Ghazi and Mark Freydl, Codiac provides engineers with infrastructure on demand, container management, and advanced software development life cycle (SDLC) tools, making Kubernetes more accessible. Codiac's interface streamlines continuous integration and deployment (CI/CD), reducing deployment steps to a single line of code within CI/CD pipelines. Developers can easily deploy, manage containers, and configure applications without mastering Kubernetes' esoteric syntax. Codiac also offers features like "cabinets" to organize assets across multi-cloud environments and enables repeatable processes through snapshots, making cluster management smoother. For experienced engineers, Codiac alleviates the burden of manually managing YAML files and configuring multiple services. With ephemeral clusters and repeatable snapshots, Codiac supports scalable, reproducible development workflows, giving engineers a practical way to manage applications and infrastructure seamlessly across complex Kubernetes environments.
Learn more from The New Stack about deploying applications on Kubernetes:
Kubernetes Needs to Take a Lesson from Portainer on Ease-of-Use
Three Common Kubernetes Challenges and How to Solve Them
Valkey, an open-source fork of Redis launched in March, introduced its multithreaded Version 8.0 in September, now available through AWS ElastiCache. At All Things Open 2024 in Raleigh, AWS's Kyle Davis explains that Valkey was developed after Redis changed to a restrictive license, drawing contributors from companies like AWS, Google, Alibaba, and Oracle. Notably, some contributors emerged independently, including a significant contributor from Vietnam. Version 8.0 differentiates itself from Redis by leveraging multithreaded CPUs, addressing the efficiency of I/O operations in modern hardware. Additionally, data structure refinements were made to improve memory efficiency by up to 20%, particularly benefiting large-key databases. Looking ahead, Valkey plans two annual updates, with the next release expected in 2025. New modules are anticipated, including a JSON module for efficient data manipulation and a Bloom filter for probabilistic data presence checks. Version 9.0 may bring substantial changes to clustering, updating it to better leverage modern technologies. The Valkey project aims to continue evolving its capabilities to meet the demands of advanced data storage needs.
Learn more from The New Stack about Valkey:
Valkey Is a Different Kind of Fork
AWS Adds Support, Drops Prices, for Redis-Forked Valkey
Valkey: A Redis Fork With a Future
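Because Valkey is a fork that keeps the Redis wire protocol, existing clients generally connect unchanged. A minimal sketch, assuming a Valkey server on the default local port and the widely used redis-py client:

```python
import redis  # Valkey speaks the Redis protocol, so redis-py works as a client

# Connect to a local Valkey (or Redis) server.
r = redis.Redis(host="localhost", port=6379, decode_responses=True)

r.set("session:42", "active", ex=300)  # store a key with a 5-minute TTL
print(r.get("session:42"))             # -> "active"
```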
Deb Nicholson, executive director of the Python Software Foundation, attributes Python's popularity to its minimal syntactical complexity, which appeals to beginners and seasoned developers alike. Python allows flexibility for those exploring coding without a specific focus, unlike purpose-built languages. Since her leadership began in 2022, Nicholson has overseen the foundation's role in managing Python's fiscal and operational needs, including the package index that hosts over half a million add-ons. This open ecosystem enables contributions from large corporations and individual developers while demanding vigilant security measures. Nicholson envisions Python's future advancements, particularly in improving multi-threading and expanding usage in mobile development. She acknowledges Python's critical role in AI and data science but remains cautious about AI's pervasive application, likening it to a temporary trend. On open source in the enterprise, Nicholson critiques companies profiting from open-source tools while adopting restrictive licenses. Instead, she admires models like Red Hat's, which leverage open source sustainably without compromising accessibility or innovation.
Learn more from The New Stack about Python:
Python 3.13: Blazing New Trails in Performance and Scale
The Top 5 Python Packages and What They Do
Python Mulls a Change in Version Numbering
Platform engineering will be a key focus at KubeCon this year, with a special emphasis on AI platforms. Priyanka Sharma, executive director of the Cloud Native Computing Foundation (CNCF), highlighted the convergence of platform engineering and AI during an interview on The New Stack Makers with Adobe's Joseph Sandoval. KubeCon will feature talks from experts like Chen Goldberg of CoreWeave and Aparna Sinha of CapitalOne, showcasing how AI workloads will transform platform operations. Sandoval emphasized the growing maturity of platform engineering over the past two to three years, now centered on addressing user needs. He also discussed Adobe's collaboration on CNOE, an open-source initiative for internal developer platforms. The intersection of platform engineering, Kubernetes, cloud-native technologies, and AI raises questions about scaling infrastructure management with AI, potentially improving efficiency and reducing toil for roles like SRE and DevOps. Sharma noted that reference architectures, long requested by the CNCF community, will be highlighted at the event, guiding users without dictating solutions.
Learn more from The New Stack about Kubernetes:
Cloud Native Networking as Kubernetes Starts Its Second Decade
Primer: How Kubernetes Came to Be, What It Is, and Why You Should Care
How Cloud Foundry Has Evolved With Kubernetes
Rohit Choudhary, co-founder and CEO of Acceldata, placed an early bet on data observability, which has proven prescient. In a New Stack Makers podcast episode, Choudhary discussed three key insights that shaped his vision: first, the exponential growth of data in enterprises, further amplified by generative AI and large language models; second, the rise of a multicloud and multitechnology environment, with a majority of companies adopting hybrid or multiple cloud strategies; third, a shortage of engineering talent to manage increasingly complex data systems. As data becomes more essential across industries, challenges in data observability have intensified. Choudhary highlights the complexity of tracking where data is produced, used, and its compliance requirements, especially with the surge in unstructured data. He emphasized that data's operational role in business decisions, marketing, and operations heightens the need for better traceability. Moving forward, traceability and the ability to manage the growing volume of alerts will become areas of hyper-focus for enterprises.
Learn more from The New Stack about data observability:
What Is Data Observability and Why Does It Matter?
The Looming Crisis in the Observability Market
The Growth of Observability Data Is Out of Control!
Rust has maintained its place among the top 15 programming languages and has been the most admired language for nine consecutive years. In a New Stack Makers podcast, Joel Marcey, director of technology at the Rust Foundation, discussed the language's growing importance, including initiatives to improve its security, performance, and adoption in various domains. While Rust is widely used in systems and backend programming, it's also gaining traction in embedded systems, safety-critical applications, game development, and even the Linux kernel. Marcey highlighted Rust's strengths as a safe and fast systems language, noting its use on the web through WebAssembly (Wasm), though adoption there is still early. He also addressed Rust vs. Go, explaining that Rust excels in performance-critical applications. Marcey discussed recent updates, such as Rust 1.81, and project goals for 2024, which include a new edition and async improvements. He also touched on government interest in Rust, including DARPA's initiative to convert C code to Rust, and the Rust Security Initiative, aimed at maintaining the language's strong security reputation.
Learn more from The New Stack about Rust:
Could Rust be the Future of JavaScript Infrastructure?
Rust Growing Fastest, But JavaScript Reigns Supreme
Rust vs. Zig in Reality: A (Somewhat) Friendly Debate
In a New Stack Makers episode, Ashley Williams, founder and CEO of axo, highlights how the software world depends on open-source code, which is largely maintained by unpaid volunteers. She likens this to a CVS relying on volunteer-run shipping companies, pointing out how unsettling that might be for customers. The conversation focuses on open-source maintainers' reluctance to be seen as "suppliers" of software, an idea explored in a 2022 blog post by Thomas Depierre. Many maintainers reject the label, as there is no contractual obligation to support the software they provide. Williams critiques the industry's response to this, noting that instead of involving maintainers in software supply chain security, companies have relied on third-party vendors. However, these vendors have no relationship with the maintainers, leading to increased vulnerabilities. Williams advocates for better engagement with maintainers, especially at build time, to improve security. She also reflects on the growing pressures on maintainers and the underappreciation of release teams.
Learn more from The New Stack about the open source software supply chain:
2023: The Year Open Source Security Supply Chain Grew Up
Fortifying the Software Supply Chain
The Challenges of Securing the Open Source Supply Chain
In this New Stack Makers podcast, Xun Wang, CTO of Bloomreach, brings insights from his time at Nvidia, particularly lessons from its founder, Jensen Huang, to his current role in e-commerce personalization. Wang emphasizes structuring organizations to reflect the architecture of the products they build, applying a hands-on, detail-oriented approach that encourages deep understanding of engineering challenges. He credits Huang for teaching him the importance of focusing on fundamental architecture rather than relying on iterative testing alone. Wang highlights the impact of generative AI (GenAI) on Bloomreach, explaining how AI-driven search is essential to understanding human language and user intent. As GenAI reshapes application development, Wang stresses the need for engineers to adopt new skills in AI manipulation, while still maintaining traditional coding expertise. He advocates for continuous learning, acknowledging the challenge of staying updated in a rapidly evolving field. Wang, himself, reads extensively to keep pace with innovations, underscoring the importance of staying curious and adaptable in today's tech landscape.
Learn more from The New Stack about Entrepreneurship for Engineers:
How to Grow into Leadership
Engineering Leaders: Switch to Wartime Management Now
How Teleport's Leader Transitioned from Engineer to CEO
Code reviews can be highly beneficial but tricky to execute well due to the human factors involved, says Adrienne Braganza Tacke, author of Looks Good to Me: Actionable Advice for Constructive Code Review. In a recent conversation with The New Stack, Tacke identified three challenges teams must address for successful code reviews: ambiguity, subjectivity, and ego. Ambiguity arises when the goals or expectations for the code are unclear, leading to miscommunication and rework. Tacke emphasizes the need for clarity and explicit communication throughout the review process. Subjectivity, the second challenge, can derail reviews when personal preferences overshadow objective evaluation. Reviewers should justify their suggestions based on technical merit rather than opinion. Finally, ego can get in the way, with developers feeling attached to their code. Both reviewers and submitters must check their egos to foster a constructive dialogue. Tacke encourages programmers to first review their own work, as self-checks can enhance the quality of the code before it reaches the reviewer. Ultimately, code reviews can improve code quality, mentor developers, and strengthen team knowledge. Learn more from The New Stack about code reviews: The Anatomy of Slow Code Reviews One Company Rethinks Diff to Cut Code Review Times How Good Is Your Code Review Process? Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
In this New Stack Makers episode, Adi Polak, director of advocacy and developer experience engineering at Confluent, discusses the operational and analytical estates in data infrastructure. The operational estate focuses on fast, low-latency event-driven applications, while the analytical estate handles long-running data crunching tasks. Challenges arise from schema evolution, where upstream operational changes break downstream analytics and create complexity for developers. Apache Iceberg and Flink help mitigate these issues. Iceberg, a table format developed at Netflix, optimizes querying by managing file relationships within a data lake, reducing processing time and errors. It has been widely adopted by major companies like Airbnb and LinkedIn. Apache Flink, a versatile data processing framework, is driving two key trends: shifting some batch processing tasks into stream processing and transitioning microservices into Flink streaming applications. This approach enhances system reliability, lowers latency, and meets customer demands for real-time data, like instant flight status updates (a minimal sketch of this batch-to-streaming shift follows below). Together, Iceberg and Flink streamline data infrastructure, addressing developer pain points and improving efficiency. Learn more from The New Stack about Apache Iceberg and Flink: Unfreeze Apache Iceberg to Thaw Your Data Lakehouse Apache Flink: 2023 Retrospective and Glimpse into the Future 4 Reasons Why Developers Should Use Apache Flink Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
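To make that batch-to-streaming shift concrete, here is a minimal, framework-free Python sketch (this is not Flink code; the flight-status events and the in-memory event list are invented purely for illustration). The batch function waits for the whole dataset before producing a result, while the streaming function emits an updated view as each event arrives.

```python
from typing import Dict, Iterator

# Invented flight-status events; in a real pipeline these would arrive on a
# Kafka topic and be consumed by a Flink job rather than sit in a list.
EVENTS = [
    {"flight": "BA117", "status": "BOARDING", "ts": 1},
    {"flight": "BA117", "status": "DEPARTED", "ts": 2},
    {"flight": "LH400", "status": "DELAYED", "ts": 3},
]


def batch_job(events) -> Dict[str, str]:
    """Analytical-estate style: wait for the whole dataset, then compute."""
    latest = {}
    for event in sorted(events, key=lambda e: e["ts"]):
        latest[event["flight"]] = event["status"]
    return latest  # only available once the batch run completes


def stream_job(events) -> Iterator[Dict[str, str]]:
    """Operational-estate style: emit an updated view as each event arrives."""
    latest = {}
    for event in events:  # in practice this would be an unbounded stream
        latest[event["flight"]] = event["status"]
        yield dict(latest)  # downstream consumers see every update immediately


if __name__ == "__main__":
    print("batch result:   ", batch_job(EVENTS))
    for view in stream_job(EVENTS):
        print("streaming view: ", view)
```

In a real deployment, a Flink job consuming from Kafka would add what this toy version omits: managed state, fault tolerance, and exactly-once processing guarantees.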
Bob Wise, CEO of Heroku, discussed the impact of generative AI (GenAI) coding tools on software development in a recent episode of The New Stack Makers. He compared the rise of these tools to adding an "infinite number of interns" to development teams, noting that while they accelerate code writing, they don't yet simplify testing, deployment, or production operations. Wise likened this to the early days of Kubernetes, which focused on improving operations rather than the frontend experience. He emphasized that Kubernetes' success was due to its focus on easing the operational burden, something current GenAI tools have yet to achieve. Heroku, acquired by Salesforce in 2010, is positioned to benefit from these changes by helping teams transition to more automated systems. Wise highlighted Heroku's strategic bet on Postgres, a database technology that's gaining traction, especially for GenAI workloads. He also discussed Heroku's ongoing migration to Kubernetes, aligning with industry standards to enhance its platform. Learn more from The New Stack about Heroku: The Data Stack Journey: Lessons from Architecting Stacks at Heroku and Mattermost Kubernetes and the Next Generation of PaaS Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
After the XZ Utils backdoor vulnerability was uncovered in March, the OpenJS Foundation saw a surge in inquiries from potential open source JavaScript contributors. Robin Ginn, executive director of the foundation, noted that volunteer-led JavaScript communities often face challenges in managing these contributions. The discovery that a single contributor, "Jia Tan," planted the backdoor heightened vigilance, especially when new contributors requested admin privileges. Ginn emphasized that trust is not synonymous with security, especially in open source projects where maintainers must be vigilant about who can access their repositories. The XZ vulnerability highlighted broader concerns about the security of open source software, particularly in projects with only a single maintainer. Despite receiving a significant grant from Germany's Sovereign Tech Fund, the foundation remains under-resourced, with just two full-time staffers supporting 35 projects. Ginn urged companies that rely on open source software to invest in it by hiring maintainers, ensuring these critical projects are properly supported. Learn more from The New Stack about open source vulnerability: Linux xz Backdoor Damage Could Be Greater Than Feared Unzipping the XZ Backdoor and Its Lessons for Open Source Linux xz and the Great Flaws in Open Source Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
Paige Bailey, who began coding at age 9 in rural Texas, now leads the GenAI developer experience at Google. In a conversation with Chris Pirillo on The New Stack Makers, Bailey reflected on the evolving role of software development in the era of generative AI. While she once urged her nieces and nephews to pursue computer science degrees, Bailey now believes that critical thinking and problem-solving may be more crucial for future tech careers. She emphasized that generative AI is democratizing software development, making it more accessible and enabling developers to focus on creative tasks rather than the minutiae of coding. Bailey's experience at Google highlights this shift, as she now acts more as a reviewer and overseer of AI-generated code. She sees GenAI not as a replacement for developers but as a tool to accelerate their creativity and tackle longstanding backlogs. Bailey believes the key is ensuring everyone understands how to effectively apply generative AI to their work. Learn more from The New Stack about the future of development: 7 Ways to Future Proof Your Developer Job in the Age of AI The Future of Developer Careers 4 Forecasts for the Future of Developer Relations Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
Anne Currie, a leading expert in sustainable tech and part of the Green Software Foundation, discusses practical steps for building resilient, sustainable software in an episode of The New Stack Makers. With 30 years of experience, Currie co-authored Building Green Software, emphasizing the tech industry's role in the energy transition. She highlights the complexity of adapting technology to renewable energy, involving extensive research and debunking misinformation. Currie discusses the importance of energy proportionality—the idea that increased utilization improves a computer's energy efficiency—and how this concept aligns with modern DevOps practices that reduce carbon emissions while enhancing speed, cost efficiency, and security (a small worked example of energy proportionality appears below). Currie also emphasizes architecting systems to operate on renewable power and draws parallels between managing variable grid power and internet bandwidth. Using examples like video conferencing, she illustrates how software can adapt to fluctuating resources. The episode also touches on potential pitfalls like greenwashing and the challenges in accurately naming concepts like energy proportionality. Learn more from The New Stack about sustainability: Sustainability: How Did Amazon, Azure, Google Perform in 2023? Sustainability Focus: Cloud Efficiency, Not Carbon Emissions Developers Should Press Cloud Providers on Sustainability Join our community of newsletter subscribers to stay on top of the news and at the top of your game. https://thenewstack.io/newsletter/
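To illustrate energy proportionality with rough, invented numbers (these figures are not taken from the book or the episode): a server draws a large fraction of its peak power even when idle, and because that idle draw is paid regardless of load, the energy cost per request falls as utilization rises.

```python
# Invented numbers, purely for illustration: a server drawing 100 W when idle
# and 200 W at full load, able to serve 1,000 requests/second at 100% load.
IDLE_W, PEAK_W, PEAK_RPS = 100.0, 200.0, 1000.0


def joules_per_request(utilization: float) -> float:
    """Energy per request, assuming power scales linearly from idle to peak."""
    power_w = IDLE_W + (PEAK_W - IDLE_W) * utilization  # watts = joules/second
    requests_per_second = PEAK_RPS * utilization
    return power_w / requests_per_second


for u in (0.2, 0.5, 0.9):
    print(f"{u:.0%} utilization -> {joules_per_request(u):.2f} J per request")
# 20% -> 0.60, 50% -> 0.30, 90% -> 0.21: busier machines waste less energy per request
```

This is the same reasoning behind packing workloads onto fewer, busier machines: higher utilization amortizes the idle draw, which tends to cut both energy use and cost.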
In an era marked by complexity, the golden path is essential for software architects, asserts James Watters, senior director of R&D at VMware Tanzu, Broadcom. This approach, emphasizing fewer application patterns, simplifies life for security personnel, developers, and infrastructure teams. VMware defines the golden path as streamlining software development, crucial in today's economic climate. Watters highlights this in the Broadcom report: State of Cloud Native App Platforms 2024, noting that 55% of organizations favor this method for its consistency and security. Watters, a pioneer in platform as a service since 2009, helped establish Cloud Foundry and now drives VMware Tanzu. Tanzu's golden operations offer standardized, consistent processes across platforms, crucial for efficiency and security. Watters advocates for minimal DIY in favor of operational consistency, providing commands for building, deploying, and scaling applications. Tanzu's focus is on integrating AI to enhance user interfaces and data access, impacting platform engineering significantly in the coming years. This integration aims to offer a better developer experience while maintaining security and efficiency. Learn more from The New Stack about golden paths: Golden Paths Start with a Shift Left Platform Engineering Not Working Out? You're Doing It Wrong. How to Pave Golden Paths That Actually Go Somewhere Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
Maintaining and ensuring the success of a microservice-based system can be challenging. Sarah Wells, a seasoned tech consultant with over 20 years of experience, offers valuable insights in her book "Enabling Microservices Success" and a discussion on The New Stack Makers podcast. Drawing from her tenure at the Financial Times (FT), Wells illustrates how transitioning to microservices and adopting DevOps and SRE practices enabled FT to accelerate software releases from 12 annually to over 20,000. This transformation required merging IT organizations, investing in automation, and fostering team autonomy. Wells emphasizes that successful microservices adoption depends not only on developer expertise but also on organizational structures. She highlights the importance of continuous delivery and proactive communication, especially during critical periods like major news events. Additionally, she discusses the evolving roles of senior engineers and the need for flexibility in defining architectural responsibilities. Wells advocates for "engineering enablement" over "platform teams" to better support effective service management and evolution. Learn more from The New Stack about enabling successful outcomes of microservices: What Is Microservices Architecture? 4 Strategies for Migrating Monolithic Apps to Microservices Continuous Improvement Metrics for Scaling Engineering Teams Join our community of newsletter subscribers to stay on top of the news and at the top of your game. https://thenewstack.io/newsletter/
Show Notes: In this episode we speak with Charles Humble, climate polymath and author, about why we as software engineers should think about the energy consumption of our workloads. We also talk about how nascent legislation and standards are turning externalities into real costs, forcing innovation that gives us more control over software energy usage. Charles has encyclopedic knowledge of the history of the industry and is a former CTO himself, so his judgments are practical, realistic, and applicable. It's easy to become cynical when you learn that our industry uses about 300 TWh per year, roughly the annual consumption of Brazil. What I admired about Charles is that despite his knowledge, he was optimistic. He used the comparison to acid rain to bring home that "we can fix this problem," and went on to tell us how we can do the same for the tech industry. Listen to learn that there may actually be useful things you can do, now and in the future, to shape your system's energy usage and the energy policy of the large hyperscalers (a small carbon-aware scheduling sketch follows below).
Our Guest, Charles Humble: Charles Humble is a freelance consultant, author and podcaster. A former software engineer, architect, and CTO, he has worked as a senior leader and executive of both technology and content groups. He was InfoQ's editor-in-chief from 2014-2020, and was chief editor for Container Solutions from 2020-2023. He writes regularly for The New Stack and other publications, is a highly experienced content strategist, and has spoken at multiple international conferences including Devoxx, GOTO, WTF is SRE and QCon. His primary areas of interest are how we build software better, including sustainability and ethics, cloud computing, remote working, diversity and inclusion, and inspiring the next generation of developers. Charles is also a keyboard player, and half of the ambient techno band Twofish. You can find him on LinkedIn at https://www.linkedin.com/in/charleshumble/
Links to Charles' Mentions: The BBC Radio 4 programme "Costing The Earth": https://www.bbc.co.uk/programmes/b006r4wn WattTime, information and APIs for optimizing energy usage: https://watttime.org/ Green Software Foundation: We are building a trusted ecosystem of people, standards, tooling and best practices for green software: https://greensoftware.foundation/. They make an API that is a wrapper around WattTime and other carbon intensity data for real-time optimization: https://sci-guide.greensoftware.foundation/I/APIBased/ Holly Cummins, senior principal software engineer on Quarkus at Red Hat: https://www.linkedin.com/in/holly-k-cummins/ Amazon is making sustainability datasets more readily available: https://aws.amazon.com/blogs/publicsector/aws-announces-simpler-access-sustainability-data-launches-hackathon-accelerate-innovation-sustainability/
Books: "Building Green Software" by Anne Currie, Sarah Hsu, Sara Bergman
Your Hosts: Mansi Shah - Joshua Marker ClimateStack website - https://climatestack.podcastpage.io/
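As a concrete, heavily simplified example of shaping a workload's energy footprint, here is a hedged Python sketch of carbon-aware scheduling: deferring a flexible batch job until grid carbon intensity drops. The get_carbon_intensity function is a hypothetical stand-in, not the real WattTime or Green Software Foundation API, and the threshold and region name are invented for illustration.

```python
import time
from typing import Callable

CARBON_THRESHOLD_G_PER_KWH = 200.0  # arbitrary illustrative threshold


def get_carbon_intensity(region: str) -> float:
    """Hypothetical stand-in for a real carbon-intensity lookup (for example
    via WattTime or the Green Software Foundation's API); returns gCO2e/kWh."""
    return 180.0  # fixed value so the sketch runs without network access


def run_when_grid_is_clean(region: str, job: Callable[[], None],
                           max_wait_s: int = 3600) -> None:
    """Defer a flexible batch job until carbon intensity drops below the
    threshold, or until the maximum wait time elapses."""
    waited = 0
    while get_carbon_intensity(region) > CARBON_THRESHOLD_G_PER_KWH and waited < max_wait_s:
        time.sleep(300)  # check again in five minutes
        waited += 300
    job()


if __name__ == "__main__":
    run_when_grid_is_clean("eu-west", lambda: print("running the nightly report"))
```

In practice you would swap the stand-in for a real carbon-intensity lookup and add guardrails so latency-sensitive work is never delayed.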
Emily Freeman believes the greatest challenges we face aren't technical, but human. She is a bestselling author of two books, DevOps for Dummies and 97 Things Every Cloud Engineer Should Know, and a prolific speaker, traveling across the globe to educate executives and engineers on the best approaches to AI, DevOps, and cloud engineering. Emily has been studying, leading, and shaping the developer journey for the last 15 years. She has led developer relations and product marketing at AWS, Microsoft, and cutting-edge startups. Her mission is to transform technology organizations by creating company cultures in which diverse, collaborative teams can thrive. Emily's work has been featured in outlets such as Bloomberg, SiliconANGLE, and The New Stack. She is widely recognized as a thoughtful, entertaining, and professional keynote speaker. Emily is best known for her creative approach to identifying and solving the human challenges of software engineering. It is rare in the technology industry to find individuals equally adept with code and words, but her career has been defined by precisely that combination. You can find Emily on the following sites: Twitter Website Mastodon Here are some links provided by Emily: Giving a Tech Talk DevOps for Dummies PLEASE SUBSCRIBE TO THE PODCAST Spotify Apple Podcasts YouTube Music Amazon Music RSS Feed You can check out more episodes of Coffee and Open Source on https://www.coffeeandopensource.com Coffee and Open Source is hosted by Isaac Levin --- Support this podcast: https://podcasters.spotify.com/pod/show/coffeandopensource/support
In this episode, Sasha sits down with Riya Grover, CEO and cofounder of Sequence, to discuss the evolving landscape of B2B revenue models and pricing strategies. They dive deep into the challenges modern software companies face in managing complex hybrid pricing models and the increasing adoption of usage-based and hybrid pricing. Riya also shares Sequence's approach to unifying the quote-to-cash process and bridging the gap between sales and finance teams, discusses the limitations of traditional revenue management tools, and much more.