Software Engineering Radio - The Podcast for Professional Software Developers
Brian Demers, Developer Advocate at Gradle, speaks with host Giovanni Asproni about the importance of having observability in the toolchain. Information about build times, compiler warnings, test executions, and the other systems used to build production code can help to reduce defects, increase productivity, and improve the developer experience. During the conversation they touch upon what is possible with today's tools; the impact on productivity and developer experience; and the impact, both in terms of risks and opportunities, introduced by the use of artificial intelligence. Brought to you by IEEE Computer Society and IEEE Software magazine.
LLMs are reshaping the future of data and AI, and ignoring them might just be career malpractice. Yoni Michael and Kostas Pardalis unpack what's breaking, what's emerging, and why inference is becoming the new heartbeat of the data pipeline.

Bios:
Kostas Pardalis: Kostas is an engineer-turned-entrepreneur with a passion for building products and companies in the data space. He's currently the co-founder of Typedef. Before that, he worked closely with the creators of Trino at Starburst Data on some exciting projects. Earlier in his career, he was part of the leadership team at Rudderstack, helping the company grow from zero to a successful Series B in under two years. He also founded Blendo in 2014, one of the first cloud-based ELT solutions.
Yoni Michael: Yoni is the co-founder of Typedef, a serverless data platform purpose-built to help teams process unstructured text and run LLM inference pipelines at scale. With a deep background in data infrastructure, Yoni has spent over a decade building systems at the intersection of data and AI, including leading infrastructure at Tecton and engineering teams at Salesforce. Yoni is passionate about rethinking how teams extract insight from massive troves of text, transcripts, and documents, and believes the future of analytics depends on bridging traditional data pipelines with modern AI workflows. At Typedef, he's working to make that future accessible to every team, without the complexity of managing infrastructure.

Related links:
Website: https://www.typedef.ai
https://techontherocks.show
https://www.cpard.xyz

Connect with us:
Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
MLOps swag/merch: https://shop.mlops.community/
Connect with Demetrios on LinkedIn: /dpbrinkm
Connect with Kostas on LinkedIn: /kostaspardalis/
Connect with Yoni on LinkedIn: /yonimichael/

Timestamps:
[00:00] Breaking Tools, Evolving Data Workloads
[06:35] Building Truly Great Data Teams
[10:49] Making Data Platforms Actually Useful
[18:54] Scaling AI with Native Integration
[24:04] Empowering Employees to Build Agents
[28:17] Rise of the AI Sherpa
[36:09] Real AI Infrastructure Pain Points
[38:05] Fixing Gaps Between Data, AI
[46:04] Smarter Decisions Through Better Data
[50:18] LLMs as Human-Machine Interfaces
[53:40] Why Summarization Still Falls Short
[01:01:15] Smarter Chunking, Fixing Text Issues
[01:09:08] Evaluating AI with Canary Pipelines
[01:11:46] Finding Use Cases That Matter
[01:17:38] Cutting Costs, Keeping AI Quality
[01:25:15] Aligning MLOps to Business Outcomes
[01:29:44] Communities Thrive on Cross-Pollination
[01:34:56] Evaluation Tools Quietly Consolidating
What happens when you try to monitor something fundamentally unpredictable? In this featured guest episode, Wayne Segar from Dynatrace joins Corey Quinn to tackle the messy reality of observing AI workloads in enterprise environments. They explore why traditional monitoring breaks down with non-deterministic AI systems, how AI Centers of Excellence are helping overcome compliance roadblocks, and why "human in the loop" beats full automation in most real-world scenarios. From Cursor's AI-driven customer service fail to why enterprises are consolidating from 15+ observability vendors, this conversation dives into the gap between AI hype and operational reality, and why the companies not shouting the loudest about AI might be the ones actually using it best.

Show highlights:
(00:00) Cold Open
(00:48) Introductions and what Dynatrace actually does
(03:28) Who Dynatrace serves
(04:55) Why AI isn't prominently featured on Dynatrace's homepage
(05:41) How Dynatrace built AI into its platform 10 years ago
(07:32) Observability for GenAI workloads and their complexity
(08:00) Why AI workloads are "non-deterministic" and what that means for monitoring
(12:00) When AI goes wrong
(13:35) "Human in the loop": Why the smartest companies keep people in control
(16:00) How AI Centers of Excellence are solving the compliance bottleneck
(18:00) Are enterprises too paranoid about their data?
(21:00) Why startups can innovate faster than enterprises
(26:00) The "multi-function printer problem" plaguing observability platforms
(29:00) Why you rarely hear customers complain about Dynatrace
(31:28) Free trials and playground environments

About Wayne Segar:
Wayne Segar is Director of Global Field CTOs at Dynatrace and part of the Global Center of Excellence, where he focuses on cutting-edge cloud technologies and enabling the adoption of Dynatrace at large enterprise customers. Prior to joining Dynatrace, Wayne was a Dynatrace customer, responsible for performance and customer experience at a large financial institution.

Links:
Dynatrace website: https://dynatrace.com
Dynatrace free trial: https://dynatrace.com/trial
Dynatrace AI observability: https://dynatrace.com/platform/artificial-intelligence/
Wayne Segar on LinkedIn: https://www.linkedin.com/in/wayne-segar/

Sponsor:
Dynatrace: http://www.dynatrace.com
In this episode, Patrick McKenzie (@patio11) is joined by Jennifer Li, a general partner at a16z investing in enterprise, infrastructure, and AI. Jennifer breaks down how AI workloads are creating new demands on everything from inference pipelines to observability systems, explaining why we're seeing a bifurcation between language models and diffusion models at the infrastructure level. They explore emerging categories like reinforcement learning environments that help train agents, the evolution of web scraping for agentic workflows, and why Jennifer believes the API economy is about to experience another boom as agents become the primary consumers of software interfaces.

Full transcript: www.complexsystemspodcast.com/the-ai-infrastructure-stack-with-jennifer-li-a16z/

Sponsor: Vanta automates security compliance and builds trust, helping companies streamline ISO, SOC 2, and AI framework certifications. Learn more at https://vanta.com/complex

Links: Jennifer Li's writing at a16z: https://a16z.com/author/jennifer-li/

Timestamps:
(00:00) Intro
(00:55) The AI shift and infrastructure
(02:24) Diving into middleware and AI models
(04:23) Challenges in AI infrastructure
(07:07) Real-world applications and optimizations
(15:15) Sponsor: Vanta
(16:38) Real-world applications and optimizations (cont'd)
(19:05) Reinforcement learning and synthetic environments
(23:05) The future of SaaS and AI integration
(26:02) Observability and self-healing systems
(32:49) Web scraping and automation
(37:29) API economy and agent interactions
(44:47) Wrap
In this episode, Vercel CEO Guillermo Rauch goes deep on how V0, their text-to-app platform, has already generated over 100 million applications and doubled Vercel's user base in under a year. Guillermo reveals how a tiny SWAT team inside Vercel built V0 from scratch, why "vibe coding" is making software creation accessible to everyone (not just engineers), and how the AI Cloud is automating DevOps, making cloud infrastructure self-healing, and letting companies expose their data to AI agents in just five lines of code.

You'll hear why "every company will have to rethink itself as a token factory," how Vercel's Next.js went from a conference joke to powering Walmart, Nike, and Midjourney, and why the next billion app creators might not write a single line of code. Guillermo breaks down the difference between vibe coding and agentic engineering, shares wild stories of users building apps from napkin sketches, and explains how Vercel is infusing "taste" and best practices directly into their AI models.

We also dig into the business side: how Vercel's AI-powered products are driving explosive growth, why retention and margins are strong, and how the company is adapting to a new wave of non-technical users. Plus: the future of MCP servers, the security challenges of agent-to-agent communication, and why prompting and AI literacy are now must-have skills.

Vercel
Website: https://vercel.com
X/Twitter: https://x.com/vercel

Guillermo Rauch
LinkedIn: https://www.linkedin.com/in/rauchg
X/Twitter: https://x.com/rauchg

FirstMark
Website: https://firstmark.com
X/Twitter: https://twitter.com/FirstMarkCap

Matt Turck (Managing Director)
LinkedIn: https://www.linkedin.com/in/turck/
X/Twitter: https://twitter.com/mattturck

Timestamps:
(00:00) Intro
(02:08) What Is V0 and Why Did It Take Off So Fast?
(04:10) How Did a Tiny Team Build V0 So Quickly?
(07:51) V0 vs Other AI Coding Tools
(10:35) What Is Vibe Coding?
(17:05) Is V0 Just Frontend? Moving Toward Full Stack and Integrations
(19:40) What Skills Make a Great Vibe Coder?
(23:35) Vibe Coding as the GUI for AI: The Future of Interfaces
(29:46) Developer Love = Agent Love
(33:41) Having Taste as a Developer
(39:10) MCP Servers: The New Protocol for AI-to-AI Communication
(43:11) Security, Observability, and the Risks of the Agentic Web
(45:25) Are Enterprises Ready for the Agentic Future?
(49:42) Closing the Feedback Loop: Customer Service and Product Evolution
(56:06) The Vercel AI Cloud: From Pixels to Tokens
(01:10:14) How Vercel Adapts to the ICP Change
(01:13:47) Retention, Margins, and the Business of AI Products
(01:16:51) The Secret Behind Vercel's Growth Last Year
(01:24:15) The Importance of Online Presence
(01:30:49) Everything, Everywhere, All at Once: Being CEO 101
(01:34:59) Guillermo's Advice to His Younger Self
How do you keep complex digital experiences running smoothly when every layer, from networks to cloud infrastructure to applications, can break in ways that frustrate customers and burn out IT teams? This question is at the heart of my conversation recorded live at Cisco Live in San Diego with Patrick Lin, Senior Vice President and General Manager for Observability at Splunk, now part of Cisco. In this episode, Patrick explains how observability has evolved far beyond simple monitoring and is becoming the nerve centre for digital resilience in a world where reactive alerts no longer cut it. We unpack how Splunk and Cisco ThousandEyes are now deeply integrated, giving teams a single source of truth that connects application behaviour, infrastructure health, and network performance, even across systems they do not directly control. Patrick also shares what these two-way integrations mean in practice: faster incident resolution, fewer blame games, and far less time wasted chasing false alerts. We explore how AI is enhancing this vision by cutting through the noise to detect real anomalies, correlate related events, and suggest root causes at a speed no human team could match. If your business depends on staying online and your teams are drowning in disconnected data, this conversation offers a glimpse into the next phase of unified observability and assurance. It might even help quiet the flood of alerts that keep IT professionals awake at night. How is your organisation tackling alert fatigue and rising complexity? Listen in and tell me what strategies you have found that actually work.
Coralogix, an Israeli startup offering a full-stack observability and security platform, has raised $115 million at a pre-money valuation of over $1 billion, almost double its valuation from its last round in 2022, three years ago.
Hi, Spring fans! In this episode, I talk to Micrometer.io lead Tommy Ludwig about the latest and greatest in observability for the Spring developer.
Mark Ericksen, creator of the Elixir LangChain framework, joins the Elixir Wizards to talk about LLM integration in Elixir apps. He explains how LangChain abstracts away the quirks of different AI providers (OpenAI, Anthropic's Claude, Google's Gemini) so you can work with any LLM through one consistent API. We dig into core features like conversation chaining, tool execution, automatic retries, and production-grade fallback strategies. Mark shares his experiences maintaining LangChain in a fast-moving AI world: how it shields developers from API drift, manages token budgets, and handles rate limits and outages. He also reveals testing tactics for non-deterministic AI outputs, configuration tips for custom authentication, and the highlights of the new v0.4 release, including "content parts" support for thinking-style models.

Key topics discussed in this episode:
• Abstracting LLM APIs behind a unified Elixir interface
• Building and managing conversation chains across multiple models
• Exposing application functionality to LLMs through tool integrations
• Automatic retries and fallback chains for production resilience
• Supporting a variety of LLM providers
• Tracking and optimizing token usage for cost control
• Configuring API keys, authentication, and provider-specific settings
• Handling rate limits and service outages with graceful degradation
• Processing multimodal inputs (text, images) in LangChain workflows
• Extracting structured data from unstructured LLM responses
• Leveraging "content parts" in v0.4 for advanced thinking-model support
• Debugging LLM interactions using verbose logging and telemetry
• Kickstarting experiments in LiveBook notebooks and demos
• Comparing Elixir LangChain to the original Python implementation
• Crafting human-in-the-loop workflows for interactive AI features
• Integrating LangChain with the Ash framework for chat-driven interfaces
• Contributing to open-source LLM adapters and staying ahead of API changes
• Building fallback chains (e.g., OpenAI → Azure) for seamless continuity
• Embedding business logic decisions directly into AI-powered tools
• Summarization techniques for token efficiency in ongoing conversations
• Batch processing tactics to leverage lower-cost API rate tiers
• Real-world lessons on maintaining uptime amid LLM service disruptions

Links mentioned:
https://rubyonrails.org/
https://fly.io/
https://zionnationalpark.com/
https://podcast.thinkingelixir.com/
https://github.com/brainlid/langchain
https://openai.com/
https://claude.ai/
https://gemini.google.com/
https://www.anthropic.com/
Vertex AI Studio: https://cloud.google.com/generative-ai-studio
https://www.perplexity.ai/
https://azure.microsoft.com/
https://hexdocs.pm/ecto/Ecto.html
https://oban.pro/
Chris McCord's ElixirConf EU 2025 talk: https://www.youtube.com/watch?v=ojL_VHc4gLk
Getting started: https://hexdocs.pm/langchain/gettingstarted.html
https://ash-hq.org/
https://hex.pm/packages/langchain
https://hexdocs.pm/igniter/readme.html
https://www.youtube.com/watch?v=WM9iQlQSFg
@brainlid on Twitter and BlueSky

Special Guest: Mark Ericksen.
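The retry-then-fallback pattern discussed in this episode (e.g., primary OpenAI, fallback Azure) is language-agnostic. Here is a minimal Python sketch of the general idea, not the Elixir LangChain API; the provider functions and names are hypothetical stand-ins:

```python
import time

def with_fallbacks(providers, prompt, retries=2, backoff=0.05):
    """Try each provider in order; retry transient failures with
    exponential backoff before falling back to the next provider."""
    last_error = None
    for call in providers:
        for attempt in range(retries + 1):
            try:
                return call(prompt)
            except Exception as err:  # real code would catch provider-specific errors
                last_error = err
                time.sleep(backoff * (2 ** attempt))  # back off, then retry
    # every provider exhausted its retries
    raise RuntimeError("all providers failed") from last_error

# Hypothetical providers for illustration only.
def flaky_primary(prompt):
    raise TimeoutError("primary provider is down")

def stable_fallback(prompt):
    return f"echo: {prompt}"

print(with_fallbacks([flaky_primary, stable_fallback], "hello"))  # echo: hello
```

The key design point, echoed in the episode's list above, is that retries handle transient faults (rate limits, timeouts) while the fallback chain handles sustained outages of a whole provider.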
Industry veteran Andrew Mallaband of Breakthrough Moments joins Dash0's Mirko Novakovic to explore critical gaps in modern observability. Drawing on his "Observability 2025" series, Andrew breaks down why cost, data overload and poor strategy are holding teams back. They discuss the rise of AI-powered SRE agents, the challenge of missing telemetry, and how aligning data collection with intention is key to unlocking AI's potential.
In episode 83 of o11ycast, the Honeycomb team chats with Dan Ravenstone, the o11yneer. Dan unpacks the crucial, often underappreciated, role of the observability engineer. He discusses how this position champions the user, bridging the gap between technical performance and real-world customer experience. Learn about the challenges of mobile observability, the importance of clear terminology, and how building alliances across an organization drives successful observability practices.
Bob and Eric discuss Network Observability with VMware tools.
If you're keen to share your story, please reach out to us!

Guest:
https://www.linkedin.com/in/imjaredz/
https://www.promptlayer.com/careers/

Powered by Artifeks!
https://www.linkedin.com/company/artifeksrecruitment
https://www.artifeks.co.uk
https://www.linkedin.com/in/agilerecruiter

LinkedIn: https://www.linkedin.com/company/enginearsio
Twitter: https://x.com/Enginearsio
All podcast platforms: https://smartlink.ausha.co/enginears

00:00 - Enginears Intro.
00:37 - PromptLayer origin.
02:24 - PromptLayer Intro.
03:52 - PromptLayer & prompt engineering today and the evolution going forward.
08:18 - Challenges building PromptLayer.
11:04 - How are Jared and PromptLayer focusing the team on what matters?
15:17 - What is a vibe coder?
17:00 - What vibe coding lacks.
18:10 - Prompt engineers don't have to be technical.
21:30 - Taking an idea and exploring it through A/B testing.
24:56 - Observability in prompting.
28:26 - How would Jared best advise someone to build an operational LLM?
30:50 - Next 12 months at PromptLayer.
33:38 - Jared & PromptLayer Outro.
34:16 - Enginears Outro.

Hosted by Ausha. See ausha.co/privacy-policy for more information.
Jack Herrington, podcaster, software engineer, writer, and YouTuber, joins the pod to uncover the truth behind server functions and why they don't actually exist in the web platform. We dive into the magic behind frameworks like Next.js, TanStack Start, and Remix, breaking down how server functions work, what they simplify, what they hide, and what developers need to know to build smarter, faster, and more secure web apps.

Links:
YouTube: https://www.youtube.com/@jherr
Twitter: https://x.com/jherr
GitHub: https://github.com/jherr
ProNextJS: https://www.pronextjs.dev
Discord: https://discord.com/invite/KRVwpJUG6p
LinkedIn: https://www.linkedin.com/in/jherr
Website: https://jackherrington.com

Resources:
Server Functions Don't Exist (It Matters): https://www.youtube.com/watch?v=FPJvlhee04E

We want to hear from you! How did you find us? Did you see us on Twitter? In a newsletter? Or maybe we were recommended by a friend? Let us know by sending an email to our producer, Em, at emily.kochanek@logrocket.com, or tweet at us at PodRocketPod (https://twitter.com/PodRocketpod).

Follow us. Get free stickers. Follow us on Apple Podcasts, fill out this form (https://podrocket.logrocket.com/get-podrocket-stickers), and we'll send you free PodRocket stickers!

What does LogRocket do? LogRocket provides AI-first session replay and analytics that surface the UX and technical issues impacting user experiences. Start understanding where your users are struggling by trying it for free at LogRocket.com (https://logrocket.com/signup/?pdr).

Special Guest: Jack Herrington.
In this episode of the CaSE Podcast, Mirko Novakovic, a seasoned entrepreneur and investor, shares his journey through the waves of technological innovation, from the early days of online banking to the rise of AI and OpenTelemetry. We explore with him how lessons learned in diverse industries, including the food business, can reshape our approach to software development and architecture, emphasizing the importance of curiosity, adaptability, and a solid grasp of the fundamentals.
Raza Habib, CEO of the LLM eval platform Humanloop, talks to us about making your AI products more accurate and reliable by shortening the feedback loop of your evals: quickly iterating on prompts and testing what works. He also shares some of his favorite quotes from Anthropic's Dario Amodei.

Bio:
Raza is the CEO and co-founder at Humanloop. He has a PhD in Machine Learning from UCL, was the founding engineer of Monolith AI, and has built speech systems at Google. For the last 4 years, he has led Humanloop and supported leading technology companies such as Duolingo, Vanta, and Gusto to build products with large language models. Raza was featured in the Forbes 30 Under 30 technology list in 2022, and Sifted recently named him one of the most influential Gen AI founders in Europe.

Related links:
Website: https://humanloop.com

Connect with us:
Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
MLOps swag/merch: https://shop.mlops.community/
Connect with Demetrios on LinkedIn: /dpbrinkm
Connect with Raza on LinkedIn: /humanloop-raza

Timestamps:
[00:00] Cracking Open System Failures and How We Fix Them
[05:44] LLMs in the Wild: First Steps and Growing Pains
[08:28] Building the Backbone of Tracing and Observability
[13:02] Tuning the Dials for Peak Model Performance
[13:51] From Growing Pains to Glowing Gains in AI Systems
[17:26] Where Prompts Meet Psychology and Code
[22:40] Why Data Experts Deserve a Seat at the Table
[24:59] Humanloop and the Art of Configuration Taming
[28:23] What Actually Matters in Customer-Facing AI
[33:43] Starting Fresh with Private Models That Deliver
[34:58] How LLM Agents Are Changing the Way We Talk
[39:23] The Secret Lives of Prompts Inside Frameworks
[42:58] Streaming Showdowns: Creativity vs. Convenience
[46:26] Meet Our Auto-Tuning AI Prototype
[49:25] Building the Blueprint for Smarter AI
[51:24] Feedback Isn't Optional, It's Everything
Who is actually allowed to do what? And should we all really be allowed to do everything? Every tech project starts with a simple question: who may do what? But at the latest when the startup grows, customers demand compliance, or the first intern touches the production database, Role Based Access Control (RBAC) suddenly becomes a matter of survival, and anyone who underestimates the topic quickly ends up in permission hell.

In this episode we take apart the well-known concept of role-based access control. We clarify which concrete problem RBAC actually solves, why a lot of technical depth and organizational drama hides behind those harmless-looking checkboxes, and why not all RBAC is created equal.

We also deliver practical insights: how do Grafana, Sentry, Elasticsearch, OpenSearch, or tracing tools like Jaeger implement this permission model? Where are the pitfalls in complex, multi-tenant systems?

Whether you finally want to understand why RBAC, ABAC (attribute-based), ReBAC (relationship-based), and policy engines are more than just buzzwords, or want to know how to get policies, edge cases, and constraints under control, that is what this deep dive is about. Also featured: open source highlights such as Casbin, SpiceDB, OpenFGA, and OPA, plus real project and startup tips for a pragmatic start and later scaling. Bonus: a fairy tale with Kevin and Max, in which the intern sometimes still beats the admin.
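At its core, the RBAC model discussed in this episode reduces an access check to set membership: roles map to permission sets, and a user is allowed an action if any of their roles grants it. A minimal illustrative sketch (the role and permission names are hypothetical, not taken from Grafana, Sentry, or any tool mentioned):

```python
# Roles map to sets of permission strings ("resource:action").
ROLE_PERMISSIONS = {
    "admin":  {"dashboard:read", "dashboard:write", "users:manage"},
    "editor": {"dashboard:read", "dashboard:write"},
    "viewer": {"dashboard:read"},
}

def is_allowed(user_roles, permission):
    """Return True if any of the user's roles grants the permission."""
    return any(permission in ROLE_PERMISSIONS.get(role, set())
               for role in user_roles)

# The intern with only "viewer" can read dashboards but not write them.
print(is_allowed({"viewer"}, "dashboard:read"))   # True
print(is_allowed({"viewer"}, "dashboard:write"))  # False
```

ABAC and ReBAC extend this same check with attributes of the user/resource or with relationship graphs, which is why policy engines like OPA or Casbin exist: the membership test stays simple, but computing which permissions apply gets complicated fast.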
This episode was sponsored by Elastic! Elastic is the company behind Elasticsearch; they help teams find, analyze, and act on their data in real time through their Search, Observability, and Security solutions. Thanks, Elastic! This episode was recorded at Elastic's offices in San Francisco during a meetup. Find info about the show, past episodes including transcripts, our swag store, Patreon link, and more at https://cupogo.dev/.
Highlights from this week's conversation include:
Pete's Background and Journey in Data (1:36)
Evolution of Data Practices (3:02)
Integration Challenges with Acquired Companies (5:13)
Trust and Safety as a Service (8:12)
Transition to Dagster (11:26)
Value Creation in Networking (14:42)
Observability in Data Pipelines (18:44)
The Era of Big Complexity (21:38)
Abstraction as a Tool for Complexity (24:41)
Composability and Workflow Engines (28:08)
The Need for Guardrails (33:13)
AI in Development Tools (36:24)
Internal Components Marketplace (40:14)
Reimagining Data Integration (43:03)
Importance of Abstraction in Data Tools (46:17)
Parting Advice for Listeners and Closing Thoughts (48:01)

The Data Stack Show is a weekly podcast powered by RudderStack, customer data infrastructure that enables you to deliver real-time customer event data everywhere it's needed to power smarter decisions and better customer experiences. Each week, we'll talk to data engineers, analysts, and data scientists about their experience around building and maintaining data infrastructure, delivering data and data products, and driving better outcomes across their businesses with data. RudderStack helps businesses make the most out of their customer data while ensuring data privacy and security. To learn more about RudderStack, visit rudderstack.com.
We're back for Part 2 of our automation deep dive, and the hits just keep coming! Host Al Martin reunites with IBM automation aces Sarah McAndrew (WW Automation Technical Sales) and Vikram Murali (App Mod & IT Automation Development) to push past the hype and map out the road ahead.
The ClickHouse® project is a rising star in observability and analytics, challenging performance conventions with its breakneck speed. This open source OLAP column store, originally developed at Yandex to power their web analytics platform at massive scale, has quickly evolved into one of the hottest open source observability data stores around. Its published performance benchmarks have been the topic of conversation, outperforming many legacy databases and setting a new bar for fast queries over large volumes of data.

Our guest for this episode is Robert Hodges, CEO of Altinity, the second largest contributor to the ClickHouse project. With over 30 years of experience in databases, Robert brings deep insights into how ClickHouse is challenging legacy databases at scale. We'll also explore Altinity's just-launched, groundbreaking open source project, Project Antalya, which extends ClickHouse with Apache Iceberg shared storage, unlocking dramatic improvements in both performance and cost efficiency. Think 90% reductions in storage costs and 10 to 100x faster queries, all without requiring any changes to your existing applications.

The episode was live-streamed on 20 May 2025 and the video is available at https://www.youtube.com/watch?v=VeyTL2JlWp0
You can read the recap post: https://medium.com/p/2004160b2f5e/

OpenObservability Talks episodes are released monthly, on the last Thursday of each month, and are available for listening on your favorite podcast app and on YouTube. We live-stream the episodes on Twitch and YouTube Live; tune in to see us live, and chime in with your comments and questions on the live chat.
https://www.youtube.com/@openobservabilitytalks
https://www.twitch.tv/openobservability

Show notes:
00:00 - Intro
01:38 - ClickHouse elevator pitch
02:46 - Guest intro
04:48 - ClickHouse under the hood
08:15 - SQL and the database evolution path
11:20 - The return of SQL
16:13 - Design for speed
17:14 - Use cases for ClickHouse
19:18 - ClickHouse ecosystem
22:22 - ClickHouse on Kubernetes
31:45 - Know how ClickHouse works inside to get the most out of it
38:59 - ClickHouse for observability
46:58 - Project Antalya
55:03 - Kubernetes 1.33 release
55:32 - OpenSearch 3.0 release
56:01 - New permissive license for ML models announced by the Linux Foundation
57:08 - Outro

Resources:
ClickHouse on GitHub: https://github.com/ClickHouse/ClickHouse
Shopify's Journey to Planet-Scale Observability: https://medium.com/p/9c0b299a04dd
Project Antalya: https://altinity.com/blog/getting-started-with-altinitys-project-antalya
Building observability with ClickHouse: https://cmtops.dev/posts/building-observability-with-clickhouse/
Kubernetes 1.33 release highlights: https://www.linkedin.com/feed/update/urn:li:activity:7321054742174924800/
New Permissive License for Machine Learning Models Announced by the Linux Foundation: https://www.linkedin.com/feed/update/urn:li:share:7331046183244611584
OpenSearch 3.0 major release: https://www.linkedin.com/posts/horovits_opensearch-activity-7325834736008880128-kCqr

Socials:
Twitter: https://twitter.com/OpenObserv
YouTube: https://www.youtube.com/@openobservabilitytalks

Dotan Horovits
X (Twitter): @horovits
LinkedIn: www.linkedin.com/in/horovits
Mastodon: @horovits@fosstodon
BlueSky: @horovits.bsky.social

Robert Hodges
LinkedIn: https://www.linkedin.com/in/berkeleybob2105/
Jordon Peeple is Head of IT Infrastructure Operations at Acrisure, the fast-growing fintech powerhouse you've probably used without even knowing it. In this episode, Jordon shares how his team turned 5,000 ignored alerts into a focused, AI-ready monitoring system. He explains how they cut through the noise, rebuilt escalation chains, and shifted from reactive ops toward proactive, business-aligned observability, supported by a complete IT org restructure.

You'll learn:
1. How to reorganize your ops team for scale and collaboration
2. Why reducing alerts boosts reliability and response time
3. What makes or breaks your escalation chain strategy
4. Why involving business owners early pays dividends in monitoring
5. What it takes to future-proof observability for hybrid infrastructure

Get in touch with Jordon on LinkedIn: https://www.linkedin.com/in/jordonpeeple/

About the host, Elias Voelker: Elias is the VP for North America at Checkmk. He comes from a strategy consulting background but has been an entrepreneur for the better part of the last 10 years. In his spare time, he likes to do triathlons. Get in touch with Elias via LinkedIn or email podcast@checkmk.com.

Podcast music: Music by Ströme, used by permission. 'Panta Rhei' written by Mario Schoenhofer, (c)+(p) 2022, Compost Medien GmbH & Co KG
https://stroeme.com/
https://compost-rec.com/

Thanks to our friends at SAWOO for producing this episode with us!
Scientific research is the foundation of many innovative solutions in any field. Did you know that Dynatrace runs its own Research Lab on the campus of the Johannes Kepler University (JKU) in Linz, Austria, just 2 kilometers away from our global engineering headquarters? What started in 2020 has grown to 20 full-time researchers and many more students who do research on topics such as GenAI, agentic AI, log analytics, processing of large data sets, sampling strategies, cloud-native security, and memory and storage optimizations. Tune in and hear from Otmar and Martin how they are researching the N+2 generation of observability and AI, how they are contributing to open source projects such as OpenTelemetry, and what their predictions are for when AI will finally take control of us humans!

To learn more about their work, check out these links:
Martin's LinkedIn: https://www.linkedin.com/in/mflechl/
Otmar's LinkedIn: https://www.linkedin.com/in/otmar-ertl/
Dynatrace Research Lab: https://careers.dynatrace.com/locations/linz/#__researchLab
In episode 169 of Kubicast, we chat with Rafael Ferreira about a fundamental yet often neglected topic: the art of conversation. Yes, we had a conversation about conversing! In a relaxed, good-humored way, we unpack how communication shapes our careers, our networking, and even the way we dress at tech events. We talk about GIFs in conference talks, the boldness that helps break out of bubbles, and how being the best is pointless if nobody knows it. Rafael shares lessons from events, behind-the-scenes stories from Low Ops, and his journey to becoming a Microsoft MVP. Spoiler: he used the podcast as a networking strategy. And it worked. Join our early-access program for Zero CVE images: getup.io/zerocve. Kubicast is produced by Getup, a company specializing in Kubernetes and open source projects for Kubernetes. Podcast episodes are available on all major digital audio platforms and at YouTube.com/@getupcloud.
Highlights from this week's conversation include:
Background of ClickHouse (1:14)
PostgreSQL Data Replication Tool (3:19)
Emerging Technologies Observations (7:25)
Observability and Market Dynamics (11:26)
Product Development Challenges (12:39)
Challenges with PostgreSQL Performance (15:30)
Philosophy of Open Source (18:01)
Open Source Advantages (22:56)
Simplified Stack Vision (24:48)
End-to-End Use Cases (28:13)
Migration Strategies (30:21)
Final Thoughts and Takeaways (33:29)

The Data Stack Show is a weekly podcast powered by RudderStack, the CDP for developers. Each week we'll talk to data engineers, analysts, and data scientists about their experience around building and maintaining data infrastructure, delivering data and data products, and driving better outcomes across their businesses with data. RudderStack helps businesses make the most out of their customer data while ensuring data privacy and security. To learn more about RudderStack, visit rudderstack.com.
Federal Tech Podcast: Listen and learn how successful companies get federal contracts
Connect with John Gilroy on LinkedIn: https://www.linkedin.com/in/john-gilroy/
Want to listen to other episodes? www.Federaltechpodcast.com

AFCEA's TechNet Cyber conference, held in Baltimore, Maryland, was the perfect opportunity to sit down with Bryan Rosensteel, Head of Public Sector Marketing at Wiz. Wiz is the "new kid on the block," and it has seen tremendous growth. During the interview, Bryan Rosensteel shows how agentless approaches can improve visibility and assist with compliance. We all know how complexity has infiltrated federal technology. We have the usual suspects of Cloud Service Providers, hybrid clouds, private clouds, and, if that were not complicated enough, alt-clouds. As a result, it is almost impossible to get the "bird's eye" visibility needed to provide cybersecurity. Two main approaches have been proposed to achieve this much-desired system-wide view.
Agent. One approach is to put a bit of code on each device, called the "agent" method. It is good for granular control, but it can slow down a scan and must be maintained.
Agentless. Bryan Rosensteel of Wiz describes an "agentless" method for gaining visibility into complex systems. This method leverages existing infrastructure and protocols to accomplish the scanning objective much faster. Bryan Rosensteel states that in a world of constant attacks, this faster method allows for rapid updates to threats. Beyond better observation, an agentless method like the one provided by Wiz allows for compliance automation and continuous monitoring, and lays the groundwork for an effective Zero Trust implementation.
Martin Mao is the co-founder and CEO of Chronosphere, an observability platform built for the modern containerized world. Prior to Chronosphere, Martin led the observability team at Uber, tackling the unique challenges of large-scale distributed systems. With a background as a technical lead at AWS, Martin brings unique experience in building scalable and reliable infrastructure. In this episode, he shares the story behind Chronosphere, its approach to cost-efficient observability, and the future of monitoring in the age of AI.

What you'll learn:
The specific observability challenges that arise when transitioning to containerized environments and microservices architectures, including increased data volume and new problem sources.
How Chronosphere addresses the issue of wasteful data storage by providing features that identify and optimize useful data, ensuring customers only pay for valuable insights.
Chronosphere's strategy for competing with observability solutions offered by major cloud providers like AWS, Azure, and Google Cloud, focusing on a specialized end-to-end product.
The innovative ways in which Chronosphere's products, including its observability platform and telemetry pipeline, improve the process of detecting and resolving problems.
How Chronosphere is leveraging AI and knowledge graphs to normalize unstructured data, enhance its analytics engine, and provide more effective insights to customers.
Why targeting early adopters and tech-forward companies is beneficial for product innovation, providing valuable feedback for further improvements and new features.
How observability requirements are changing with the rise of AI and LLM-based applications, and the unique data collection and evaluation criteria needed for GPUs.

Takeaways:
Chronosphere originated from the observability challenges faced at Uber, where existing solutions couldn't handle the scale and complexity of a containerized environment.
Cost efficiency is a major differentiator for Chronosphere, offering significantly better cost-benefit ratios than other solutions, which makes it attractive for companies operating at scale.
The company's telemetry pipeline product can be used with existing observability solutions like Splunk and Elastic to reduce costs without requiring a full platform migration.
Chronosphere's architecture is purposely single-tenant to minimize coupled infrastructure, ensuring reliability and continuous monitoring even when core components go down.
AI-driven insights for observability may not benefit from LLMs trained on private business data, which can be diverse and may cause models to overfit to a specific case.
Many tech-forward companies use the platform to monitor model training, which involves GPU clusters and evaluation criteria unlike those for general CPU workloads.
The company found huge potential in scrubbing diverse data and building knowledge graphs to serve as a source of useful information when problems are recognized.

Subscribe to Startup Project for more engaging conversations with leading entrepreneurs!
→ Email updates: https://startupproject.substack.com/
#StartupProject #Chronosphere #Observability #Containers #Microservices #Uber #AWS #Monitoring #CloudNative #CostOptimization #AI #ArtificialIntelligence #LLM #MLOps #Entrepreneurship #Podcast #YouTube #Tech #Innovation
In episode 81 of o11ycast, Charity Majors and Martin Thwaites dive into a lively discussion with Hazel Weakly and Matt Klein on the evolving landscape of observability. The guests explore the concept of observability versioning, the challenges of cost and ROI, and the future of observability tools, including the potential convergence with AI and business intelligence.
In this episode of the DevOps Toolchain podcast, Joe Colantonio sits down with Jacob Leverich, co-founder and Chief Product Officer at Observe, to explore how AI and cutting-edge data strategies are transforming the world of observability. With a career spanning heavyweight roles from Splunk to Google and Kuro Labs, Jacob shares his journey from banging out Perl scripts as a Linux sysadmin to building scalable, data-driven solutions that address the complex realities of today's digital infrastructure. Tune in as Joe and Jacob explore why traditional monitoring approaches are struggling with massive data volumes, how knowledge graphs and data lakes are breaking down tool silos, and what engineering leaders often get wrong when scaling visibility across teams. Whether you're a tester, developer, SRE, or team lead, get ready to discover actionable insights on maximizing the value of your data, the true role of AI in troubleshooting, and practical tips for leading your organization into the future of DevOps observability. Don't miss it! Try out Insight Hub free for 14 days now: https://testguild.me/insighthub. No credit card required.
Agentic AI is as daunting as it is dynamic. So... how do you not screw it up? After all, the more robust and complex agentic AI becomes, the more room there is for error. Luckily, we've got Dr. Maryam Ashoori to guide our agentic ways. Maryam is the Senior Director of Product Management for watsonx at IBM. She joined us at IBM Think 2025 to break down agentic AI done right.

Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Have a question? Join the convo here.
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn

Topics Covered in This Episode:
Agentic AI Benefits for Enterprises
watsonx's New Features & Announcements
AI-Powered Enterprise Solutions at IBM
Responsible Implementation of Agentic AI
LLMs in Enterprise Cost Optimization
Deployment and Scalability Enhancements
AI's Impact on Developer Productivity
Problem-Solving with Agentic AI

Timestamps:
00:00 AI Agents: A Business Imperative
06:14 "Optimizing Enterprise Agent Strategy"
09:15 Enterprise Leaders' AI Mindset Shift
09:58 Focus on Problem-Solving with Technology
13:34 "Boost Business with LLMs"
16:48 "Understanding and Managing AI Risks"

Keywords:
Agentic AI, AI agents, Agent lifecycle, LLMs taking actions, watsonx.ai, Product management, IBM Think conference, Business leaders, Enterprise productivity, watsonx platform, Custom AI solutions, Environmental Intelligence Suite, Granite Code models, AI-powered code assistant, Customer challenges, Responsible AI implementation, Transparency and traceability, Observability, Optimization, Larger compute, Cost performance optimization, Chain of thought reasoning, Inference time scaling, Deployment service, Scalability of enterprise, Access control, Security requirements, Non-technical users, AI-assisted coding, Developer time-saving, Function calling, Tool calling, Enterprise data integration, Solving enterprise problems, Responsible implementation, Human in the loop, Automation, IBM savings, Risk assessment, Empowering workforce.

Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info)
Apologies for the hiatus! Dave needed some time off to recover from burnout, and these episodes remained in the can. Thanks for waiting for us!
Network monitoring, Internet monitoring, and observability are all key components of NetOps. We speak with sponsor Catchpoint to understand how Catchpoint can help network operators proactively identify and resolve issues before they impact customers. We discuss past and current network monitoring strategies and the challenges that operators face with both on-prem and cloud monitoring, along... Read more »
AI can still sometimes hallucinate and give less-than-optimal answers. To address this, we are joined by Arize AI's co-founder Aparna Dhinakaran for a discussion on Observability and Evaluation for AI. We begin by discussing the challenges of AI observability and evaluation. For example, how does "LLM as a Judge" work? We conclude with some valuable advice from Aparna for first-time entrepreneurs.
Begin observing and evaluating your AI applications with open source Phoenix: https://phoenix.arize.com/
AWS Hosts: Nolan Chen & Malini Chatterjee
Email Your Feedback: rethinkpodcast@amazon.com
Justin Ryburn is the Field CTO at Kentik and works as a Limited Partner (LP) for Stage 2 Capital. Justin has 25 years of experience in network operations, engineering, sales, and marketing with service providers and vendors. In this conversation, we discuss startup funding, the challenges that organizations face with hybrid and multi-cloud visibility, and the impact of AI on network monitoring, and we explore how companies can build more reliable systems through proper observability practices.

Where to Find Justin
LinkedIn: https://www.linkedin.com/in/justinryburn/
Twitter: https://x.com/JustinRyburn
Blog: http://ryburn.org/
Talks: https://www.youtube.com/playlist?list=PLRrjaaisdWrYaue9KVLRdq5mlGE_2i0RT

Show Links
Kentik: https://www.kentik.com/
Day One: Deploying BGP FlowSpec: https://www.juniper.net/documentation/en_US/day-one-books/DO_BGP_FLowspec.pdf
Stage 2 Capital: https://www.stage2.capital/
Doug Madory's Internet Analysis: https://www.kentik.com/blog/author/doug-madory/
Netflix Tech Blog: https://netflixtechblog.com/
Multi-Region AWS: https://www.pluralsight.com/resources/blog/cloud/why-and-how-do-we-build-a-multi-region-active-active-architecture
AutoCon: https://events.networktocode.com/autocon/

Follow, Like, and Subscribe!
Podcast: https://www.thecloudgambit.com/
YouTube: https://www.youtube.com/@TheCloudGambit
LinkedIn: https://www.linkedin.com/company/thecloudgambit
Twitter: https://twitter.com/TheCloudGambit
TikTok: https://www.tiktok.com/@thecloudgambit
Prequel is launching a new developer-focused service aimed at democratizing software error detection - an area typically dominated by large cloud providers. Co-founded by Lyndon Brown and Tony Meehan, both former NSA engineers, Prequel introduces a community-driven observability approach centered on Common Reliability Enumerations (CREs). CREs categorize recurring production issues, helping engineers detect, understand, and communicate problems without reinventing solutions or working in isolation. Their open-source tools, cre and prereq, allow teams to build and share detectors that catch bugs and anti-patterns in real time - without exposing sensitive data, thanks to edge processing using WebAssembly.

The urgency behind Prequel's mission stems from the rapid pace of AI-driven development, increased third-party code usage, and rising infrastructure costs. Traditional observability tools may surface symptoms, but Prequel aims to provide precise problem definitions and actionable insights. While observability giants like Datadog and Splunk dominate the market, Brown and Meehan argue that engineers still feel overwhelmed by data and underpowered in diagnostics - something they believe CREs can finally change.

Learn more from The New Stack about the latest observability insights:
Why Consolidating Observability Tools Is a Smart Move
Building an Observability Culture: Getting Everyone Onboard

Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
Barr Moses, CEO & Co-Founder of Monte Carlo, challenges the notion that models alone create competitive advantage, arguing instead that the real moat lies in how organizations manage their proprietary data and ensure end-to-end reliability. Tim and Juan chat with Barr to get the Honest, No-BS scoop of what AI observability is (hint, it's really data + AI) and how organizations can build resilient AI applications.
New Relic's Head of AI and ML Innovation, Camden Swita, discusses the company's four-cornered AI strategy and envisions a future of "agentic orchestration" with specialized agents.

Topics Include:
Introduction of Camden Swita, Head of AI at New Relic
New Relic invented the observability space for monitoring applications
Started with Java workload monitoring and APM
Evolved into full-stack observability with infrastructure and browser monitoring
Uses an advanced query language (NRQL) with a time-series database
AI strategy focuses on AIOps for automation
First cornerstone: intelligent detection capabilities with machine learning
Second cornerstone: incident response with generative AI assistance
Third cornerstone: problem management with root cause analysis
Fourth cornerstone: knowledge management to improve future detection
Initially overwhelmed by the "ocean of possibilities" with LLMs
Needed narrow scope and guardrails for measurable progress
Natural language to NRQL translation proved immensely complex
Selecting from thousands of possible events caused accuracy issues
Shifted from a "one tool" approach to many specialized tools
Created a routing layer to select the right tool for each job
Evaluating NRQL is challenging even when it is syntactically correct
Implemented multi-stage validation with a user confirmation step
AWS partnership involves fine-tuning models for NRQL translation
Using Bedrock to select appropriate models for different tasks
Initially advised prototyping on the biggest, best available models
Now recommends considering specialized, targeted models from the start
Agent development platforms have improved significantly since the beginning
Future focus: "agentic orchestration" with specialized agents
Envisions agents communicating through APIs without human prompts
Integration with AWS tools like Amazon Q
Industry possibly plateauing in large language model improvements
Increasing focus on inference-time compute in newer models
Context and quality prompts remain crucial despite model advances
Potential pros and cons to the inference-time compute approach

Participants:
Camden Swita - Head of AI & ML Innovation, Product Management, New Relic

See how Amazon Web Services gives you the freedom to migrate, innovate, and scale your software company at https://aws.amazon/isv/
KubeCon Europe 2025 in London has wrapped up, and we're bringing you all the highlights, trends, and behind-the-scenes insights straight from the show floor! In this special recap episode, I'm joined by two CNCF Ambassadors and community powerhouses: Kasper Borg Nissen, co-chair of this KubeCon as well as of the KubeCon 2024 editions and a Developer Relations Engineer at Dash0; and William Rizzo, Consulting Architect at Mirantis and Linkerd Ambassador. Together, we unpack the major themes from the event, from platform engineering and internal developer platforms to open source observability and where Kubernetes is headed next. We also chat about the vibe of the community, emerging projects to watch, and important trends in the European tech sphere. Whether you missed the conference or want to catch up on updates you might have missed, this episode gives you a curated take straight from experts who know the cloud-native space inside out.

The episode was live-streamed on 22 April 2025 and the video is available at https://www.youtube.com/watch?v=JyxJOmOEBvQ
You can read the recap post: https://medium.com/p/740258a5fa46

OpenObservability Talks episodes are released monthly, on the last Thursday of each month, and are available on your favorite podcast app and on YouTube. We live-stream the episodes on Twitch and YouTube Live - tune in to see us live, and chime in with your comments and questions on the live chat.
https://www.youtube.com/@openobservabilitytalks
https://www.twitch.tv/openobservability

Show Notes:
00:00 - intro
03:28 - KubeCon impressions
09:59 - Backstage turns 5
18:56 - CNCF turns 10 and CNCF annual survey
27:22 - Sovereign cloud in Europe and the NeoNephos initiative
33:55 - CI/CD use in production increases
36:52 - OpenInfra joins the Linux Foundation
40:16 - Cloud native local communities, DEI and the BIPOC initiative
51:11 - Observability query standardization SIG updates
59:36 - outro

Resources:
CNCF 2024 Annual Survey: https://www.cncf.io/reports/cncf-annual-survey-2024/
NeoNephos initiative for sovereign EU cloud: https://www.linkedin.com/feed/update/urn:li:share:7313115943075766273/
OpenInfra Foundation and OpenStack join The Linux Foundation: https://www.linkedin.com/feed/update/urn:li:share:7307839934072066048/
Backstage turns 5: https://www.linkedin.com/feed/update/urn:li:activity:7318163557206966272/
Kubernetes 1.33 release: https://www.linkedin.com/feed/update/urn:li:activity:7321054742174924800/

Socials:
Twitter: https://twitter.com/OpenObserv
YouTube: https://www.youtube.com/@openobservabilitytalks

Dotan Horovits
Twitter: @horovits
LinkedIn: www.linkedin.com/in/horovits
Mastodon: @horovits@fosstodon
BlueSky: @horovits.bsky.social

Kasper Borg Nissen
Twitter: https://www.twitter.com/phennex
LinkedIn: https://www.linkedin.com/in/kaspernissen/
BlueSky: https://bsky.app/profile/kaspernissen.xyz

William Rizzo
Twitter: https://twitter.com/WilliamRizzo19
LinkedIn: https://www.linkedin.com/in/william-rizzo/
BlueSky: https://bsky.app/profile/williamrizzo.bsky.social
Today on the Tech Bytes podcast we're talking AI readiness with sponsor Broadcom. More specifically, getting your network observability ready to support AI operations. This isn't just a hardware or software issue. It's also a data issue. We'll get some tips with our guest Jeremy Rossbach. Jeremy is Chief Technical Evangelist and Lead Product Marketing... Read more »
Modern content systems are complex and abstract, presenting problems for managers who want to understand how their content is performing. At Autogram, Jeff Eaton and Karen McGrane have developed a content observability framework to address this complexity. Their framework evaluates the composition, quality, health, and effectiveness of content programs to help enterprises measure the return on their content investment. https://ellessmedia.com/csi/jeff-eaton-2/
Modern cloud-native systems are highly dynamic and distributed, which makes it difficult to monitor cloud infrastructure using traditional tools designed for static environments. This has motivated the development and widespread adoption of dedicated observability platforms. Prometheus is an open-source observability tool designed for cloud-native environments; its strong integration with Kubernetes and its pull-based data collection model have made it a natural fit for monitoring containerized workloads. The post Prometheus and Open-Source Observability with Eric Schabell appeared first on Software Engineering Daily.
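To make the pull-based model concrete: instead of pushing data to a collector, each service exposes an HTTP endpoint (conventionally /metrics) in Prometheus's plain-text exposition format, and the Prometheus server periodically scrapes it. The following is a minimal sketch using only the Python standard library; the metric name app_requests_total is illustrative, and a real service would normally use an official Prometheus client library rather than hand-rolling the format.

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Illustrative process-local counter; a client library would manage this.
REQUESTS_TOTAL = 0


class MetricsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path != "/metrics":
            self.send_response(404)
            self.end_headers()
            return
        # Prometheus text exposition format: HELP, TYPE, then samples.
        body = (
            "# HELP app_requests_total Total requests handled.\n"
            "# TYPE app_requests_total counter\n"
            f"app_requests_total {REQUESTS_TOTAL}\n"
        ).encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain; version=0.0.4")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request logging


def serve():
    # Port 0 lets the OS pick a free port; a real target uses a fixed one.
    server = HTTPServer(("127.0.0.1", 0), MetricsHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server


server = serve()
url = f"http://127.0.0.1:{server.server_port}/metrics"
# Simulate one scrape, as the Prometheus server would on its interval.
scraped = urllib.request.urlopen(url).read().decode()
assert "app_requests_total" in scraped
server.shutdown()
```

In a real deployment, the Prometheus server would be configured with this address as a scrape target and would collect the counter on a fixed interval; the service itself never initiates a connection, which is what distinguishes the pull model from push-based agents.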
Software Engineering Radio - The Podcast for Professional Software Developers
Tyler Flint, CEO of qpoint.io, joins host Robert Blumen for a conversation about managing external vendor dependencies, including several best practices for adoption. They start with a look at internal versus external services, including details such as the footprint of external services within a micro-services application, and difficulties organizations have tracking their service consumption, quantifying service consumption, and auditing external services. Tyler also discusses the security implications of external services, including authentication and authorization. They examine metrics and monitoring, with recommendations on the key metrics to collect, as well as acceptable error rates for external services. From there they consider what can go wrong, how to respond to external service outages, and challenges related to testing external services. The episode wraps up with a discussion of qPoint's migration from a proxy-based solution to one based on eBPF kernel probes. Brought to you by IEEE Computer Society and IEEE Software magazine.
In this episode of SolarWinds TechPod, hosts Chrystal Taylor and Sean Sebring explore the key differences between monitoring and observability with guest Jeff Stewart, GVP of Product Management at SolarWinds. Observability goes beyond traditional monitoring, offering AI-driven insights and a holistic view of system health. Like understanding the anatomy of the body, observability reveals how IT systems are interconnected—where one issue can ripple across the entire environment. They discuss how businesses can leverage observability to reduce downtime, improve efficiency, and stay ahead in a rapidly evolving tech landscape. © 2025 SolarWinds Worldwide, LLC. All rights reserved