The Ravit Show interviews interesting guests, panels, and companies to help the community gain valuable insights and track trends in the Data Science and AI space! The show features CEOs, CTOs, professors, tech authors, data scientists, data engineers, data analysts, and more.

From SHIFT by Commvault New York, I sat with Christopher Mierzwa on culture, clarity, and execution!

What you will get
• Real takeaways from his panel
• Why people, mindset, and culture decide security outcomes
• Practical advice for leaders, CISOs, and CIOs

Highlights
• Culture beats tools when pressure hits. If teams do not trust each other, runbooks stall.
• Mindset sets the tone. Treat incidents as system problems, not hero moments.
• Practice builds confidence. Short drills with clear ownership move every metric that matters.

Advice from Chris
• Start with people. Define roles, practice handoffs, review the tape after every drill.
• Build muscle memory. Run small, frequent exercises across IT, SecOps, and the business.
• Keep the board close. Explain risk in plain language and track progress like product work.

My take
Security is a team sport. The best programs invest in culture first, then controls.

#data #ai #cloud #security #cybersecurity #recovery #resilience #commvault #shift2025 #shift #theravitshow

Stop chasing tools. Start fixing decisions.

I spoke to Stephen Sciortino, CEO and Founder of Database Tycoon LLC, at Small Data SF by MotherDuck. Clear takeaways for anyone running or advising a data team.

What we covered
• The real shift from his Brooklyn Data days to independent consulting
• Early signals a team will win vs signs they are in trouble
• How AI is changing expectations and what must stay the same

Watch the complete interview! Practical, direct, and worth your time.

#Data #AI #SmallDataSF #DataEngineering #Analytics #TheRavitShow

Real-time data is no longer a future problem.

At Small Data SF by MotherDuck, I sat down with David Yaffe, Co-Founder & CEO at Estuary, to talk about what has changed in the world of data streaming!

A few years ago, real-time data was something most teams put on their “later” list. Expensive. Hard to scale. Too complex for most use cases. But as David shared, that story has shifted fast.

Here are some takeaways from our conversation:

- Streaming is now viable for everyone
With cheaper compute, mature tooling, and simpler developer experiences, real-time data isn't a luxury anymore. The barriers that once made it a niche capability are gone.

- Batch vs real-time: asking the right questions
Before jumping to streaming, David suggests asking what problems you're solving; speed for the sake of speed rarely pays off. Sometimes batch is just fine. The goal is fit, not flash.

- Architecture matters
Moving from batch to streaming means thinking end-to-end: from schema evolution and error handling to observability. Teams that skip this planning end up redoing pipelines.

- CDC done right
Change Data Capture is powerful, but it's easy to misuse. The most common mistake? Treating CDC as an ETL replacement rather than an event system. Understanding that difference prevents pain later.

The conversation was practical, focused, and refreshing. Real-time isn't about chasing trends; it's about enabling faster insights and cleaner data movement with less friction.

If you've been wondering when “real-time” becomes realistic, this one will give you a clear answer.

#data #ai #motherduck #smalldatasf #theravitshow

Small Data. Real outcomes.

I covered MotherDuck's Small Data SF in person and spoke to my long-time connection Colin Zima, CEO of Omni, to cut through the AI noise in BI. We focused on what actually moves the needle for teams today.

Here is what we got into:

• Where AI is creating real BI value
Practical wins that ship now, not hype cycles

• Flexibility vs governance
How Omni gives analysts room to explore while keeping the shared truth intact

• Why build a modeling layer
What Omni's own model unlocks for speed, trust, and how far AI can go

• Embedded analytics after the Explo acquisition
When it makes sense to put live insights inside your product and what to avoid

• Simple over clever
Ways AI can remove clicks, clean up metrics, and make BI easier to use

• Common mistake with AI in dashboards
Teams bolt on features before they fix definitions and owners

• The agent future
If agents run dashboards tomorrow, why export to Excel might still matter

If you care about getting answers faster with clear guardrails, you will like this one.

#data #ai #motherduck #smalldatasf #theravitshow

Small Data is having a big moment!

I covered Small Data SF by MotherDuck in person and sat down with Brittany Bafandeh, CEO at Data Culture. We talked about the real blockers to impact and how teams can move faster with the data they already have.

Here is what we got into:

When it is not a data problem
Brittany walked through a case where dashboards, pipelines, and new tools were not the fix. The real issue was slow decisions and unclear ownership. Once they set decision rights and clear KPIs, outcomes changed without buying more tech.

Do you have a data culture or just tools
As a consultant, she looks for simple signals. Are decisions tied to metrics? Do teams review outcomes every week? Are definitions shared? If the answer is no, that is an infrastructure shell without culture inside it.

Consultant vs in house
Consultants can push for focus and bring patterns from many teams. In-house leaders win by staying close to the business and building habits that last. The best results happen when both mindsets meet.

One modeling habit that breaks things
Teams jump to complex models too soon. Brittany's fix is to model around decisions first. Keep names, metrics, and grain simple. Let complexity come only when the use case proves it.

Why this matters
Most teams do not need more tools to get value. They need faster decisions, shared language, and simple models that match the business. Small data, used well, beats big stacks used poorly!

I am publishing the full interview next. If you care about real outcomes with the stack you already have, you will like this one.

#data #ai #motherduck #smalldatasf #theravitshow

Most companies say they are “doing AI.” Very few are actually ready for it.

In my new episode of The Ravit Show, I sat down with Simon Miceli, Managing Director, Cisco, who leads Cloud and AI Infrastructure across Asia Pacific, Japan, and Greater China. He sits right where big AI ambitions meet the hard reality of networks, security, and technical debt.

This conversation builds on my earlier episode with Jeetu Patel, President and CPO at Cisco, and goes deeper into what it really takes to get AI working in production in APJC.

Here are a few themes we unpacked:

Only a small group is truly AI ready
- Cisco's latest AI Readiness Index shows that just a small percentage of organizations globally are able to turn AI into real business value. Cisco calls them “Pacesetters.”
- They are not just running pilots. They have use cases in production and are seeing returns.

We are entering the agentic phase of AI
- Simon talked about how we are moving from simple chatbots to AI agents that can take action.
- That shift changes everything for infrastructure.
- Instead of short bursts of activity, you now have systems that are always working in the background, automating processes and touching real operations.

AI infrastructure debt is the new technical debt
- Many organizations are carrying years of compromises in their networks, data centers, and security.
- Simon called this “AI infrastructure debt” and described how it quietly slows down innovation, increases costs, and makes it harder to scale AI safely.

Network as a foundation, not an afterthought
- One of his strongest points: leaders often think first about compute, but the companies that are ahead treat the network as the base layer for AI.
- When workloads double, your network can become the bottleneck before your GPUs do.
- The Pacesetters are already investing to make their networks “AI ready” and integrating AI deeply with both network and cloud.

Three things leaders must fix in the next 2–3 years
Simon shared a very clear checklist for CIOs and business leaders who are serious about agentic AI:
1. Solve for power before it becomes a constraint
2. Treat deployment as day one and keep optimizing models after they go live
3. Build security into the infrastructure from the start so it accelerates innovation instead of blocking it

This was a very honest, no-nonsense view of where APJC really stands on AI, and what the leading organizations are doing differently!

Thank you Simon for joining me and sharing how Cisco is thinking about AI infrastructure across the region.

#data #ai #cisco #CiscoLiveAPJC #Sponsored #CiscoPartner #TheRavitShow

These are some of the most exciting times to be in AI. And some leaders are not just watching the shift. They are building it.

Excited to share, I sat down with Jeetu Patel, President and Chief Product Officer at Cisco, for a conversation I have been wanting to do for a long time. Cisco is right in the middle of AI, networking, security, and data, and this interview felt like a front row seat to how the next decade is being shaped.

In this episode of The Ravit Show, we spoke about:
- The key AI trends Jeetu is seeing right now and how he explains Cisco's AI vision
- What being at the intersection of networking, security, and data allows Cisco to do with AI that most pure AI companies cannot
- How AI adoption in Asia Pacific, Japan, Greater China, and India looks different from North America and Europe
- Why India is so important to Cisco, both as a market and as a serious talent hub
- The early career moments that still shape how he leads today
- The one piece of career advice he wishes someone had given him at 25, for everyone starting out in India and across APJC

For me, this was part AI roadmap, part masterclass in leadership at global scale. You can feel how seriously he takes this moment and the responsibility that comes with it.

If you care about AI, infrastructure, or building your career in this space, this is one you will want to watch.

#data #ai #cisco #CiscoLiveAPJC #Sponsored #CiscoPartner #TheRavitShow

Gartner has named K2view a Visionary in the 2025 Magic Quadrant for Data Integration Tools, and they have moved up inside the Visionary quadrant. This is a big signal for anyone who cares about data and AI in the enterprise.

I had to cover this news in person, and what better place than my friend Ronen Schwartz's home in Palo Alto, talking to him about what this actually means. We did not just speak about a report. We spoke about whether data integration still matters in an AI world and why K2view's approach is getting attention right now.

Here is how I see it.

First, data integration is more relevant than ever. Your AI agents, copilots, and analytics are only as good as the data foundation behind them. K2view's bet has been simple to understand: give every business domain a clean, real-time, governed view of its data, and make it available to any use case, including AI.

Second, the move up in the Visionary quadrant is about more than a label. It reflects how K2view is executing on this idea of “AI ready data,” not just talking about it. They are helping customers move away from scattered pipelines to a consistent way of delivering trustworthy data products into AI, analytics, and operations.

Third, when you compare their position with the large leaders, you see a different angle. The big platforms are broad. K2view is sharp and focused.
- They model data around real business entities, not just tables
- They support real-time views without forcing you into one storage pattern
- They are designing with GenAI and agentic AI in mind from day one

Finally, the strategic outlook. Ronen is very clear that this is not about selling “yet another integration tool.” It is about being the data layer that lets enterprises move faster with AI while staying in control of privacy, compliance, and performance.

For leaders who are serious about AI and tired of slideware, K2view's move in the Magic Quadrant is one of those signals worth paying attention to.

#data #ai #gartner #gartnermagicquadrant #agenticai #agents #k2view #theravitshow

Some conversations shift how you think about the future of AI. This one did.

I just sat down with David Flynn, Founder and CEO of Hammerspace, to talk about something enterprises rarely discuss openly: the real engine behind AI is no longer compute. It is data.

We went deep into why NVIDIA's AI Data Platform has become the blueprint for modern AI architecture and why Hammerspace is emerging as the layer that actually makes this blueprint real for enterprises.

David broke down how the industry is moving from building AI around compute to building AI around data. He talked about what the AI Anywhere era looks like, and why the next generation of AI systems will need a global, unified view of data across cloud, edge, and physical environments.

We also talked about the partnership with NVIDIA, how it boosts the productivity of agentic AI, and why enterprises will need data that can move as fast as their models. David shared how Hammerspace is preparing for what comes next in 2026 and beyond, from scale to power efficiency to open standards.

This is one of those conversations that gives you clarity on where the industry is going and why data architecture is about to become the biggest competitive advantage.

#data #ai #nvidia #hammerspace #gpu #enterprise #agenticai #theravitshow

AI doesn't fail because of GPUs. It fails because of data.

I had a blast chatting with Jeff Echols, Vice President, AI and Strategic Partners at Hammerspace, from NVIDIA GTC in Washington. We talked about the part of AI nobody is fixing fast enough: getting data to GPUs at the speed the GPUs need it.

Jeff breaks down what makes the Hammerspace AI Data Platform different from traditional AI storage. This isn't “more storage.” It's orchestration. Move data globally. Feed it to the right workload. Keep GPUs busy instead of waiting.

We also got into MCP and why an intelligent data control layer is now core to any real AI strategy, plus how Hammerspace lines up with the NVIDIA AI Data Platform reference design so enterprises can actually run this in production, not just in a lab.

And we talked Tier 0. If you want GPU ROI, Tier 0 is about one thing: keep the GPUs fed at full speed.

If you're trying to scale AI past a pilot, watch this.

#data #ai #nvidiagtc #nvidia #hammerspace #gpu #theravitshow

AI in the public sector isn't a pilot anymore. It's running in the real world.

Check out my conversation from NVIDIA GTC in Washington with Robin Braun, VP AI Business Development, Hybrid Cloud at Hewlett Packard Enterprise, and Russell Forrest, Town Manager of the Town of Vail. This one is important because it's about AI for cities, not just AI for big tech. I had a blast interviewing both Robin and Russell.

We talked about how HPE is using AI to tackle real problems every city deals with: traffic, safety, and energy efficiency. Robin walked through how you build a smarter, more connected city by turning live data into decisions that actually help people on the ground.

Russell brought the city view from Vail. He explained what it takes to move from “we're testing AI” to “we're using this in operations.” We got into risk, cost, and how you deploy without adding complexity or slowing down public services.

We also discussed agentic AI. Not as a buzzword, but as something that can help a town react in real time while still keeping humans in control.

Better safety. Better visitor experience. Better use of resources. Same team. This is AI as public service infrastructure.

Full interview is now live on LinkedIn and YouTube.

#data #ai #nvidiagtc #nvidia #gpu #storage #theravitshow

Public sector AI is moving fast. The big question is how to build it the right way.

I had a blast chatting with Andrew Wheeler, Hewlett Packard Enterprise Fellow and Vice President, Hewlett Packard Labs and HPC & AI Advanced Development, at NVIDIA GTC in Washington, DC.

We talked about:
* How HPE helps agencies build and scale sovereign AI ecosystems
* Why the public sector is a core focus for HPE in AI
* Practical steps for data sovereignty, compliance, and security without slowing innovation
* Where sovereign AI shows up first: government, defense, citizen services, large-scale research
* How HPC and supercomputing power national-scale AI
* What quantum could unlock for government programs and where HPE fits

If you care about trusted AI for cities, states, and national labs, this one is worth a watch.

Full interview now live!

#data #ai #hpe #nvidiagtc #gtc2025 #gpu #sovereign #nvidia #theravitshow

AI ROI is now the real test.

I got a chance to chat with Joe Corvaia, SVP Sales at DDN, at NVIDIA GTC in Washington. This one is for CEOs and exec teams who are being pushed to “do AI” but still can't show a return.

We started with a simple question: why are some companies actually getting ROI from AI while others are still stuck in pilots? Joe was very direct on what separates the ones who are scaling from the ones who are still presenting slides.

We talked about infrastructure as a board-level strategy. Not just “buy more GPUs,” but “are you using the GPUs you already bought?” Joe walked through how data infrastructure and data flow have to be part of the conversation in the boardroom, not just in IT.

We got into AI factories and the new DDN Enterprise AI HyperPOD. Built with Supermicro and accelerated by NVIDIA, HyperPOD is designed to take teams from first deployment to serious scale. The idea is you should be able to stand up production AI without rebuilding the stack every time you grow the workload.

Joe also broke down why platforms like HyperPOD, and GPU-aware networking and acceleration like NVIDIA BlueField-4, are about more than performance. They are about efficiency. Max GPU utilization. No idle spend. Faster time to value. This matters not just for big tech, but for regulated industries and sovereign AI programs that need capability and control.

We closed on one topic that every CEO is thinking about right now: how do you future-proof AI investments? Joe shared the one principle leaders should follow so they are not buying hardware for headlines, but building an AI foundation that still makes financial sense five years out.

If you own AI strategy, budget, or delivery, watch this.

#data #ai #nvidiagtc #nvidia #gpu #storage #theravitshow

I have seen a lot of AI demos this year. Very few make hard, messy work feel simple.

Next week I am going live with Sarah McKenna, CEO of Sequentum, for an AI Magic Wand Launch Celebration on The Ravit Show.

What is happening
We are going to walk through how Sequentum is using AI to change web data work. Not slides. Actual product.

Here is what we will get into during the show:

- AI Magic Wand (beta)
A new feature that turns high-level intent into working web data flows. Think less trial and error, more “tell it what you want and refine.”

- Command Templates
How reusable templates help teams stop rebuilding the same patterns and start sharing what works across the company.

- New tools coming in the next weeks
Unblocker, Paginations, and more. All focused on enhancing Sequentum's data collection capabilities.

- Latest in standards
The standards and good practices that matter if you want web data and AI that can stand up in an enterprise.

Why I am excited about this one
Most teams I meet are still stuck between scripts, manual fixes, and brittle tools when it comes to web data. Sequentum is trying to give them a cleaner path with AI on top. This session is about showing that work in public and talking through the real trade-offs.

If you care about web data, automation, and using AI for real work, this will be a good one to watch.

Think Mumbai was electric. India's AI build-out just moved into a higher gear.

I sat down with Sandip Patel, Managing Director, IBM India & South Asia, at IBM's Mumbai office. We unpacked what Think Mumbai means for teams building with AI, hybrid cloud, and data at scale.

What stood out and why it matters:

IBM and Airtel partnership
• Aim: give regulated industries a safe and fast path to run AI at scale
• How: combine Airtel's cloud footprint and network with IBM's hybrid cloud and watsonx stack
• Why it helps: data stays controlled and compliant while workloads flex across on-prem, cloud, and edge
• Impact areas: banking, insurance, public sector, large enterprises with strict governance

First cricket activation on watsonx
• What: AI-driven insights powering a live cricket experience
• Why it matters: shows real-time analytics, content, and decisioning are ready for prime time
• Enterprise takeaway: the same pattern applies to contact centers, fraud, supply chains, and field ops where seconds count

AI value for Indian enterprises today
• Start with governed data and clear ownership
• Use hybrid patterns so models run where the work and data live
• Blend predictive models with generative workflows inside watsonx for measurable lift
• Track outcomes in productivity, risk reduction, customer experience, and time to value

Skills as the force multiplier
• Priority skills: data governance, MLOps, orchestration, security on hybrid cloud
• Team model: small core teams operating a shared platform, federated use cases across business units
• Result: faster move from pilots to production with repeatable guardrails

My take
India is moving from talk to build. The blueprint is open, hybrid, and governed. Partnerships that keep control local while staying flexible will unlock scale. Sports gave us a sharp demo of real-time AI. The next wins will be in operations, customer journeys, and risk.

The interview is live now. Link to the complete interview in the comments!

#data #ai #agentic #ibm #ThinkMumbai #governance #cloud #watsonx #IBMPartner #theravitshow

Flink Forward Barcelona 2025 was not just about streaming. It was about what comes next for enterprise AI.

I sat down with Qingsheng Ren, Team Lead, Flink Connectors & Catalogs at Ververica, and Xintong Song, Staff Software Engineer at Alibaba Cloud, to talk about something that could change how enterprises build AI systems in production: Flink Agents.

Flink Agents is being introduced as an open source sub-project under Apache Flink. The goal is simple and ambitious at the same time: bring agentic AI into the same reliable, scalable, fault-tolerant world that already powers real-time data infrastructure.

We talked about why this matters.

First, why Flink Agents and why now?
They walked me through the motivation. Most AI agent frameworks today look exciting in a demo, but they break once you try to run them against live data, streaming events, strict SLAs, audit requirements, cost pressure, and real users. There's a big gap between prototypes and reliable operations. That's the gap Flink Agents is aiming to close.

Why open source?
Both Ververica and Alibaba made it clear that this is not meant to be a proprietary, closed feature. They want this to be a community effort under Apache Flink, not a vendor lock-in story. The belief is that enterprises will only bet on AI agents at scale if the runtime is open, portable, and battle tested.

How is building an AI agent different from building a normal Flink job?
This part was interesting. A standard Flink job processes streams. An agent has to do more. It has to reason, take actions, call tools, maintain context, react to feedback, and keep doing that continuously. You're not just transforming data. You're orchestrating behavior. Flink Agents is meant to give you those building blocks on top of Flink instead of forcing teams to stitch this together themselves.

What kind of companies is this for?
We got into enterprise workloads that actually need this. Think about environments where fast decisions matter and you can't afford to go offline:
-- Fraud detection and response
-- Customer support and workflow automation
-- Operational monitoring, alert triage, and remediation
-- Real-time personalization and recommendations
-- Anywhere you need an autonomous loop, not just a dashboard

And finally, roadmap.
We talked about the next 2 to 3 years. The focus is on deeper runtime primitives for agent behavior, cleaner developer experience, and patterns that large enterprises can trust and repeat.

My takeaway:
Flink Agents is not just “yet another agent framework.” It's an attempt to operationalize agentic AI on top of a streaming backbone that already runs at massive scale in production.

This is the conversation every enterprise AI team needs to be having right now.

#FlinkForward #Ververica #Streaming #RealTime #DataEngineering #AI #TheRavitShow
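The "stream job vs agent" distinction can be sketched in a few lines. This is not the Flink Agents API (which is new and evolving); it is a conceptual Python toy, with all names and thresholds invented, showing the difference the speakers describe: a stream job is a stateless transform, while an agent keeps context, chooses an action, and feeds the outcome back into future decisions.

```python
def stream_job(event: dict) -> dict:
    """Classic streaming transform: one record in, one record out, no memory."""
    return {"user": event["user"], "amount_usd": event["amount"] * 1.08}

class Agent:
    """Toy agent loop: maintains context, decides on actions, reacts to feedback."""
    def __init__(self) -> None:
        self.context: list[str] = []  # memory carried across events

    def act(self, event: dict) -> str:
        # Reason over the event plus accumulated context, then pick an action.
        if event["amount"] > 1000 or "flagged" in self.context:
            action = "hold_for_review"
        else:
            action = "approve"
        # Feedback loop: the outcome becomes context for the next decision.
        if action == "hold_for_review":
            self.context.append("flagged")
        return action

agent = Agent()
print(agent.act({"user": "a", "amount": 1500}))  # -> hold_for_review
print(agent.act({"user": "a", "amount": 10}))    # -> hold_for_review (context-driven)
```

The second small transaction is still held because the agent remembers the earlier flag; a stateless stream job could not make that call. Running this loop continuously, with fault tolerance and exactly-once state, is the hard part Flink Agents aims to provide.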

Real time is getting simpler.

At Flink Forward, I sat down with Josep Prat, Director at Aiven. We discussed the new partnership between Aiven and Ververica, the original creators of Apache Flink®, and what it unlocks for data teams!

What we covered:
• Why this partnership makes sense now and the outcomes it targets
• Fastest ROI use cases for joint customers
• How Aiven and Ververica split support, SLAs, and upgrades
• The first deployment patterns teams should try: POCs, phased rollouts, or full cutovers
• Support for AI projects that need fresh data with low latency
• What is coming next on the shared roadmap over the next two quarters

If you care about streaming in production and a cleaner path to value, this one is worth a watch.

Full interview now live!

#data #ai #streaming #Flink #Aiven #Ververica #realtimestreaming #theravitshow

Flink Forward Barcelona 2025 was a big week for streaming and the streamhouse.

I sat down with Jark Wu, Staff Software Engineer at Alibaba Cloud, and Giannis Polyzos, Staff Streaming Architect at Ververica, to talk about Apache Fluss and what is coming next.

First, a quick primer. Fluss is built for real-time data at scale. It sits cleanly in the broader ecosystem, connects to the tools teams already use, and focuses on predictable performance and simple operations.

What stood out in our chat:

• Enterprise features that matter
Security, durability, and consistent throughput. Cleaner ops, stronger governance, and a smoother path from POC to production.

• Zero-state analytics
They walked me through how Fluss cuts network hops and lowers latency. Less shuffling. Faster results. More efficient pipelines.

• Fluss 0.8 highlights
Better developer experience, more stable primitives, and upgrades that help teams standardize on one streaming backbone.

• AI-ready direction
Vendors are shifting to AI. Fluss is adapting with functions that support agents, retrieval, and low-latency model workflows without bolting on complexity.

• Streamhouse alignment
The new capabilities strengthen Fluss in a streamhouse architecture. One place to handle fast ingest, storage, and analytics so teams do not stitch together five systems.

We also covered the roadmap. Expect continued work on latency, cost control, and easier day-two operations, plus patterns that large teams can repeat with confidence.

Want to get involved?
Join the community, review the open issues, try the latest builds, and share feedback from real workloads. That is how this moves forward.

The full conversation with Jark and Giannis is live now on The Ravit Show.

#data #ai #FlinkForward #Flink #Streaming #Ververica #TheRavitShow

How does PlayStation run real time at massive scale?

I sat down with Bahar Pattarkine from the PlayStation team to unpack how they use Apache Flink across telemetry and player experiences.

What we covered:
-- Why they chose Flink and what problem it solved first
-- Running 15,000+ events per second, launch peaks, regional latency SLOs, and avoiding hot partitions across titles
-- Phasing the move from Kafka consumers to a unified Flink pipeline without double processing during cutover
-- How checkpointing and async I/O keep latency low during spikes or failures
-- Privacy controls and regional rules enforced in real time
-- What Flink simplified in their pipelines and the impact on cost and ops

#data #ai #streaming #Flink #Playstation #Ververica #realtimestreaming #theravitshow
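The async I/O point generalizes beyond Flink (whose async operator is a Java API). As a hedged illustration in plain Python asyncio, with the 50 ms "lookup service" entirely simulated: when enrichment calls to an external service run concurrently instead of one at a time, per-event latency stops stacking up during spikes.

```python
import asyncio

async def enrich(event_id: int) -> dict:
    """Simulated external lookup (e.g., a profile service) with ~50 ms latency."""
    await asyncio.sleep(0.05)
    return {"event_id": event_id, "profile": f"player-{event_id}"}

async def main() -> list[dict]:
    # Async pattern: 20 in-flight lookups overlap, so the total wall time is
    # roughly one round trip (~50 ms), not 20 sequential round trips (~1 s).
    return await asyncio.gather(*(enrich(i) for i in range(20)))

results = asyncio.run(main())
print(len(results))  # -> 20
```

Flink's async I/O operator applies the same idea inside a pipeline, with checkpointing ensuring in-flight requests are replayed correctly after a failure.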

#EVOLVE25 London was a clear signal that Big Data is entering its third era, and it is about outcomes, not buzzwords.

I sat down with Sergio Gago, CTO, Cloudera, and we went straight to the shift everyone is feeling. This era is defined by convergence. The work now is to bring on-prem and cloud together so teams can move fast, stay compliant, and keep costs in check. That is where the real AI wins will come from.

Here is what we covered in the interview I am publishing next:
- What truly defines the third era of Big Data and how it differs from the last decade
- Why convergence matters now for performance, cost, and control
- Where Cloudera wins today, and where it chooses not to compete
- How a unified data foundation raises trust in AI
- The new Cloudera + Dell “AI-in-a-box” approach for private, trusted AI
- A five-year view of on-prem, cloud, and AI working together
- Cloudera's vision to support this shift end to end

If you care about building trustworthy AI on real enterprise data, this conversation will be useful.

#data #ai #EVOLVE25 #cloudera #theravitshow

Sovereign cloud in Europe just moved from idea to action.

At EVOLVE London I sat down with Christopher Royles from Cloudera. Cloudera has been named a launch partner for the new AWS European Sovereign Cloud, and we unpacked what this means for builders, leaders, and regulators across EMEA.

Here is what we covered in the interview I am publishing next:
- What “sovereign by design” means in practice for data, control planes, encryption, and operational access
- How this model helps organizations meet strict European requirements while keeping teams productive
- How it connects to Cloudera's Private AI strategy so enterprises can run AI on their terms
- Where sovereign cloud demand is strongest across EMEA and why industries like public sector, financial services, and healthcare are leaning in
- A practical path to comply without stalling innovation, from architecture choices to operating models

If you care about trusted AI, data control, and real-world compliance in Europe, this conversation will be useful. I will share the full interview with Chris next.

#data #ai #sovereign #cloudera #theravitshow

Live from POSSIBLE by Teradata with Sumeet Arora, Chief Product Officer. I had a blast chatting with him about five things: trends, new announcements, data and AI trends inside the enterprise, real use cases, and what is next for the industry.

New announcements
- Autonomous Customer Intelligence. Turning customer signals into timely actions across the journey.
- NewtonX research on AI for customer experience. Where leaders are investing and where gaps still exist.

Trends in Data and AI
- Moving from pilots to production with tighter links between trusted data and AI.
- Clear governance and cost control built in from the start.

Use cases
- Retention and growth with real-time signals and simple next best actions.
- Smarter personalization without copying data all over the place.

Future
- Faster paths from idea to impact.
- Smaller, focused teams shipping measurable outcomes.

#data #ai #possible2025 #teradata #theravitshow

WOW! It was so nice to meet the man himself and even interview him on The Ravit Show at Possible 2025: Steve McMillan, President and CEO of Teradata. Getting this time on camera felt special.

We kept it human and future focused:
- His favorite Possible moment and what it revealed about the crowd
- The one belief he hopes people take home and act on
- When ideas strike for him, morning or night
- The place he goes to dream bigger
- What is still on his “possible” bucket list

Between the lines, you will hear the direction Teradata is setting:
-- Moving from snapshots to signals so decisions fire in real time
-- A cleaner path from data products to activation across the stack
-- Agents that don't just chat but lift outcomes like CLV
-- Guardrails and services that help teams run this at scale
-- A builder mindset with new tools on the way

If you care about where enterprise CX is heading, you will want to hear this one. I have also shared the link to all announcements from Possible 2025!

#data #ai #possible2025 #teradata #theravitshow

Breaking down trust in AI, not just talking about it. I just sat with Manisha Khanna, Global Product Marketing Leader for AI at SAS, to unpack the SAS–IDC Data and AI Pulse. The core theme is simple: trust drives ROI.

Key takeaways:
- Trustworthy AI leaders outperform because they do the basics well: data lineage, access control, model monitoring, and clear ownership.
- Order matters. Fix data quality and governance first, then productize, then scale. Skipping steps is how pilots stall.
- Guardrails in SAS Viya make "safe by default" real. Clear policies, repeatable workflows, and measurable outcomes.
- Agentic AI readiness is not a tool choice. It is about reliable data, governed actions, and feedback loops that teams can audit.

Why this matters:
Enterprises keep chasing bigger models while the wins come from cleaner foundations. If you want impact, make trust a requirement, not a marketing line.

Watch it, share it with your team, and pressure test your own roadmap against these basics.

#data #ai #agenticai #sas #theravitshow

Agents are here. Governance decides who wins.

I just published my interview with my friend Marinela Profi from SAS. Marinela breaks down agentic AI in a way leaders can use today. Clear. Practical. Actionable.

What we covered:
- What makes agentic AI different and why enterprises should care
- Autonomy levels, decisioning, orchestration, and the human-AI balance
- Where teams go wrong: hype vs readiness, data maturity, governance, and missing orchestration
- Real use cases on Viya across banking, insurance, and manufacturing
- Why "LLMs are not agents" and how to combine deterministic and probabilistic methods with governance
- What a CIO or CDO should do now to move from pilots to production

Marinela is sharp and grounded. She touched on the important points and kept it real for enterprise teams.

The interview is live. Watch it and share your takeaways.

AI without observability is guesswork.

I had a blast chatting with Patrick Lin, SVP and GM of Observability at Splunk on The Ravit Show. We get straight into how teams keep AI reliable and how leaders turn telemetry into business results.

What we cover:
• .conf25 updates in Splunk Observability
• AI Agentic Monitoring and AI Infrastructure Monitoring
• How a unified experience with Splunk AppDynamics and Splunk Observability Cloud helps teams ship faster with fewer surprises
• Why observability is now a growth lever, not just a safety net
• Fresh insights from the State of Observability 2025 report

My take:
• The nervous system of AI is observability
• Signal quality beats signal volume
• OpenTelemetry works best when tied to business context
• When SecOps and Observability work together, incidents become learning moments

If you care about reliable AI, faster recovery, and clear impact on productivity and revenue, this one will help.

#data #ai #conf2025 #splunk #splunkconf25 #SplunkSponsored #theravitshow

I sat down with Jeff Baxter from NetApp to go deep on the new announcements from INSIGHT and why they matter for teams building with AI.

We covered what this means in practice for customers. The NetApp AI Data Engine moves data to the right place at the right time, manages metadata, lineage, and versioning so work is reproducible, and adds AI-powered ransomware detection so teams can ship with confidence. It runs as one platform across on-premises and all major clouds, so hybrid stays simple and cost aware.

Highlights from our conversation:
• From data to innovation. A single data platform that reduces handoffs and cuts wait time for data scientists and engineers.
• Reproducible AI by design. Metadata, lineage, and versioning are first class so you can rerun, compare, and promote models with clarity.
• Security that keeps pace with AI. AI-powered ransomware detection plus built-in controls to protect sensitive data without slowing teams down.
• One control plane. On prem and across major clouds with consistent operations, cost visibility, and policy enforcement.

My take:
• This is about operational discipline, not hype. Reproducibility and lineage are the difference between a demo and a dependable AI program
• Security has to be native to the data platform. If it is bolted on later, teams hesitate and AI work stalls
• Hybrid is the real world. A unified approach across on prem and clouds reduces complexity and keeps options open

If you are scaling AI and want fewer blockers between data and outcomes, this will help.

Explore NetApp AI solutions: https://www.netapp.com/artificial-intelligence/?utm_campaign=cross-aiml-multi-all-ww-digi-spp-ravit_/_baxter_yt_interview-1760561150310&utm_source=youtube&utm_medium=video&utm_content=video

#data #ai #insight2025 #netapp #theravitshow

Ransomware is getting faster. Your recovery needs to be faster.

I had a blast chatting with Ryan Howard from BMC Software AMI Security on building cyber resiliency for mainframe environments. Simple, practical, and focused on what actually works in the real world.

What we covered:
* What cyber resiliency really means for mainframes
* The core controls that matter: prevention, detection, isolation, recovery
* Common blockers teams hit when rolling out new controls
* A real incident walkthrough and how recovery stayed on track

Who should watch:
* Security leaders who own mainframe risk
* Infra and ops teams running mission-critical workloads
* Anyone tightening their ransomware playbook

Watch the interview now and share it with your team!!!!

#cyberresilience #ransomware #mainframe #security #incidentresponse #bmc #theravitshow

What happens when AI agents become your teammates on the mainframe?

I sat down with Anthony DiStauro from BMC on The Ravit Show to explore how agentic AI is moving from hype to real work. We unpacked the building blocks, the use cases, and what this shift means for teams who keep mission-critical systems running.

Highlights we covered:
- The teammate you didn't hire: where agents plug in first across monitoring, remediation, change checks, and capacity tuning.
- The basics in plain English: AI Agents, Agentic Workflows, and MCP Servers, and how they connect to form an execution layer that can act, not just alert.
- Why now: falling hype, rising adoption as teams want safer automation with clear guardrails.
- From mundane to strategic: operators focusing on performance engineering, cost optimization, and resilience design while agents handle the repetitive loops.
- Capturing know-how: using agents to encode runbooks, tacit fixes, and tribal knowledge so it survives turnover.
- Five-year picture: proactive, self-healing mainframes where agents predict drift, test changes, and roll back safely.
- Humans + agents: trust comes from transparency, audit trails, and clear handoffs.
- Invisible infrastructure: agentic workflows that hum in the background and surface only when needed.
- From dashboards to decisions: moving beyond graphs to actions with approval gates for high-risk steps.
- Future talent: a shorter learning curve for newcomers, making mainframe roles more attractive.

If you care about reliability, cost, and speed on the mainframe, this is the next chapter.

#data #ai #mainframe #bmc #theravitshow

What if your mainframe could talk back and guide the fix? I spoke to Liat Sokolov, Product Manager for AI solutions and an AI Evangelist at BMC Software. We explored how teams move from manuals and dashboards to real conversations with the platform. Liat breaks down why "guided resolution" beats "just answers" by giving the next right step in context, cutting time to recovery and reducing errors.

We dug into the GenAI knowledge expert that keeps hard-won expertise in house as veterans retire. It becomes a coach for new developers and operators, helping them ramp faster and avoid costly mistakes.

We also separated conversational AI from AI Agents and showed why enterprises need both. Picture agents coordinating across dev, ops, and cloud to roll out a change with checks, traceability, and rollback. That is how you modernize with confidence. Liat also explained why the mainframe can be the most explainable platform in the AI era, which matters for trust and safety.

We finished with a practical path forward. Start with one high-value workflow, capture the expert playbook, pilot a conversational assistant with guardrails, then add agents as you prove value.

If you care about making the mainframe simpler, safer, and faster, this interview is worth your time.

#data #ai #mainframe #bmc #theravitshow

What happens when enterprise-grade AI visualization meets on-prem reality? I sat down with Leo Brunnick, Chief Product Officer at Cloudera on The Ravit Show, to talk about a major shift: bringing Cloudera Data Visualization to on-prem environments.

We got deep into:
-- Why now? What pushed Cloudera to extend this capability beyond the cloud
-- How AI Visual and natural language querying are finally breaking barriers for non-technical users, right at the source
-- The actual features that make this visualization layer powerful: not just dashboards, but intelligent, explainable insights
-- Real business impact: we talked through use cases where organizations are solving high-stakes problems by giving their teams access to AI-powered visualizations on-prem
-- And most importantly, where this is all headed. Leo shared a vision that includes GenAI, real-time visualization, and enabling large enterprises to move faster, smarter, and more transparently with their data
-- The future of enterprise BI isn't about choosing between cloud or on-prem. It's about bringing AI to wherever the data lives

If you're navigating complex environments or looking to scale AI-driven insights inside the firewall, this conversation is worth your time.

#data #dataviz #ai #cloudera #theravitshow

During GraphSummit London, Neo4j committed $100M to becoming the default knowledge layer for agentic systems. I spoke with Sudhir Hasbe, President & Chief Product Officer at Neo4j, to break it down.

What we covered:
* The $100M push. Why Neo4j is betting on graph as the knowledge layer for agentic systems
* Graph Intelligence. Turning disconnected data into explainable context your agents can trust
* Aura Agent. Build, test, and deploy agents on your graph data in minutes. Early access now. GA in Q4
* MCP Server for Neo4j. A cleaner path to add graph memory to the agents you already run
* Use cases. Fast wins in healthcare R&D, procurement and supply chain, and financial operations
* What's next. GA timelines, first milestones, and how customers will measure impact

Why it matters:
Most pilots fail without context and memory. Graphs give agents structure, reasoning, and traceability. That is how you ship production outcomes, not demos.

#Neo4j #GraphSummit #GenAI #AgenticAI #GraphIntelligence #TheRavitShow

I had a blast at GraphSummit by Neo4j yesterday in London. I spoke to Michael Hunger on The Ravit Show and we went deep on Neo4j's $100M GenAI push and what it means right now.

Neo4j Aura Agent: Create Your Own GraphRAG Agent in Minutes — https://bit.ly/3KTjxlJ

We discussed these developments in the interview:
• How this investment helps teams move past stalled pilots and get real results on production data
• Aura Agent explained in plain language, with the first two use cases to try for fast wins
• MCP Server for Neo4j and how it lets existing agents plug into graph memory with natural language and text to query
• What "default knowledge layer" looks like on day one, including how to keep results explainable and traceable
• Timelines and signals to watch as Aura Agent and the MCP Server move to GA in Q4

The interview is live now. If reliability, speed, and explainability are on your roadmap, you will find this useful.

#data #ai #neo4j #graphs #theravitshow

I had a blast at Neo4j's GraphSummit in London. I also got a chance to speak with Jesús Barrasa, AI Field CTO, about the following topics:
- Customer use cases
- Challenges enterprise leaders face
- His new book about graphs, and more

#Neo4j #GraphSummit #Infinigraph #GenAI #AgenticAI #GraphIntelligence #TheRavitShow

AI at massive scale needs a graph engine that does not blink at 100 TB. Enter Infinigraph.

I had a blast at GraphSummit London, where I spoke to Ivan Zoratti, VP Product Management at Neo4j, to dig into the Infinigraph announcements and what they unlock for real workloads.

What we covered:
* What Infinigraph is and who needs it now
* Property sharding in plain terms: how data spreads across nodes without losing graph semantics
* One engine for ops and analytics: how HTAP stays fast without starving either side
* Migration path for current Neo4j users: what carries over and what to plan for
* Where it shines at 100 TB and up: boundaries, guardrails, and real results on speed and cost
* Timelines to ship and what this means for agentic AI next

Why it matters:
Bigger graphs with lower latency change what agents can do. If you want real-time reasoning on live data, the storage and compute model must scale without falling apart.

Watch if you care about scale, cost, and a clean path from today's Neo4j to what is coming next.

#Neo4j #GraphSummit #Infinigraph #GenAI #AgenticAI #GraphIntelligence #TheRavitShow

When someone builds the foundation of modern streaming, every insight counts.

At Data Streaming Summit 2025, I spoke with Sijie Guo, Co-founder and CEO of StreamNative, about how real-time data is reshaping the way companies move and act on information. Sijie shared how StreamNative continues to evolve Apache Pulsar's mission, giving teams the ability to process, store, and serve data in motion with performance and simplicity.

We talked about what's next for the streaming world. Over the next year, Sijie expects deeper convergence between streaming and AI, where real-time pipelines become intelligent enough to drive automated decision-making across industries.

He also emphasized how this summit stood out for its openness: not just a product showcase, but a true ecosystem of technologies working together. His favorite track? The Streaming Lakehouse discussions, where unifying data and streaming meets real-world scalability.

The conversation captures where the future of streaming is heading, and how StreamNative is helping enterprises get there faster.

#data #ai #datastreaming #streamnative #theravitshow

Real-time data isn't just fast, it's collaborative.

At Data Streaming Summit 2025, I spoke with Rayees Pasha, CPO at RisingWave, a key partner of StreamNative, about how streaming ecosystems are evolving together instead of in silos. Rayees shared how RisingWave's real-time database helps teams run complex analytics directly on live data, and how their partnership with StreamNative brings true interoperability to customers.

We discussed the future of streaming over the next year, where streaming meets AI and intelligent pipelines start driving automated, data-driven actions. He also appreciated the summit's open and multi-technology approach, which encourages collaboration rather than competition.

His favorite moments came from the AI + Stream Processing track, where discussions focused on turning streaming data into real-time intelligence.

This conversation captures how openness and partnership are powering the next wave of streaming innovation.

#data #ai #streamnative #datastreamingsummit #theravitshow

Real time without the noise.

At Data Streaming Summit 2025, I spoke to Matteo Merli, Co-founder and CTO at StreamNative. We talked about why streaming matters now and what his team is building to make data in motion simple, scalable, and open. Matteo expects the next 12 months to be about streaming meeting AI in practical ways. Think instant enrichment, faster feedback loops, and agents that act safely on live context.

He liked the summit's open, multi-technology format. It mirrors how real systems get built. His favorite threads connected Architectural Innovations with AI plus Stream Processing and the Streaming Lakehouse story.

Catch the conversation to hear Matteo's take on where streaming is headed and why openness across the ecosystem will decide who moves fastest.

#data #ai #streaming #datastreamingsummit #theravitshow

From the floor of Data Streaming Summit 2025, I had the chance to sit down with Ashwin Raja from Motorq, a StreamNative customer, to talk about the real-world impact of streaming.

Ashwin shared how his team turns live vehicle signals into immediate decisions, where waiting even a few minutes is not an option. Batch systems couldn't keep up, so Motorq leaned into StreamNative to build reliable pipelines, manage costs, and move from event to action with confidence.

We spoke about what excites him most over the next year: the intersection of streaming and AI, smarter enrichment, anomaly detection, and agents acting safely on live data. The Streaming Lakehouse vision also stood out, making analytics-ready data available without extra hops.

Ashwin appreciated the open, multi-technology approach of this summit, which mirrors how real systems get built. His takeaways and reflections are in the episode now. Watch the full conversation to hear how Motorq is shaping the future of connected-vehicle data with streaming.

#data #ai #datastreamingsummit2025 #streamnative #theravitshow

AI that can actually help you build. I had a blast interviewing Vojta Tuma, Field CTO at Keboola, on The Ravit Show, where we discussed AI assistants and MCP for data engineering!

Here are a few key takeaways:
• MCP turns a chatbot into a teammate. It can use tools, call APIs, and take actions. That is why it matters for engineers.
• New workflows open up. You can sketch a pipeline in conversation, run a job, debug a failure, or trigger a transform from one place.
• Safety is built in. Step by step permissions. Full logs. Reviews and approvals before changes go live. Propose a fix, dry run it, then merge.
• The result is faster work without losing control. Less glue code. More time on the real problems.

Let developers, analysts, and data engineers ship faster. AI handles the heavy lifting, Keboola keeps it reliable: https://bit.ly/46Fkvt1

Don't forget to follow Keboola!!!!

#data #ai #keboola #mcp #theravitshow

Last week at BDL, we caught up with my friend Pavel Doležal, co-founder at Keboola, and let me tell you, this conversation was a ride.

We started with Keboola Agents, which are already live and helping data teams debug pipelines, document, and automate safely inside a governed platform.

Then Pavel dropped the big news: an open-source, conversational ETL pipeline generator, Osiris.

You literally describe the problem, and Osiris drafts a transparent, deterministic YAML pipeline. You review, approve, commit, and it just runs. No black boxes, no daily AI gambling.

That would be a big shift: AI proposes. Humans approve. Execution stays auditable.

Too bold? Have a look for yourself!

Links & Resources
- Learn more about Keboola Agents: http://bit.ly/4pL6a7h
- Explore Osiris on GitHub: https://github.com/keboola/osiris
- Connect with Pavel on LinkedIn: https://www.linkedin.com/in/paveld/

#data #ai #agents #keboola #theravitshow #dataengineering

I'm at Big Data London and just wrapped a conversation with Jordan Burger, Applied AI Lead at Keboola. He's focused on building AI that data teams can trust in production, and he's one of the builders behind Keboola Data Agents.

Here's what we got into:
* Why a full data platform matters when AI needs real context, not just a few tables
* The lifecycle context agents need that standalone tools never see
* What breaks when AI only sees fragments of the data story
* How platform-native AI handles messy data, lineage, policies, and tribal knowledge
* Why context-aware implementation can matter more than the model itself
* The role of historical data stores for agents and what this means for future architectures

Jordan is on stage tomorrow to showcase this work. Both agents are live on the Keboola platform. Link in the description to spin up your first flow with your own data.

If you're at Big Data London, meet the Keboola team at Booth L40.

#data #ai #bigdataldn2025 #keboola #theravitshow

Speed to insight wins. If analytics does not move faster than the business, it gets ignored.

I caught up with the amazing Deborah from Alteryx at Big Data London to talk about what is working right now.

We spoke about:
• The floor is hands on this year. Fewer slides, more real builds
• Alteryx One fits that mood. One place to prep, analyze, automate, and share so teams move faster
• Customers are turning hours into minutes on routine work. More people can self serve without waiting in line
• The edge is in clean handoffs. Less copy and paste, clearer ownership, smoother reviews
• Next up is scale with control. Bring governance into the same flow so speed does not slip

We will keep the conversation going! Don't forget to follow Deborah for the great work she does in the Data & AI space!!!!

#data #ai #alteryx #theravitshow

Orchestration is where AI work meets reality. If it fails, everything slows.

I caught up with Luke Lipan, Sales Leader for North America at Kestra, to get the view from the field.

Here's the signal:
- Execs start with risk and reliability. Is it visible? Is it repeatable?
- Teams love open source to move fast. Scale, support, and ownership tip them to enterprise.
- Security and governance are not add-ons. They decide the deal.
- Success is measured in fewer incidents, faster cycles, and cleaner handoffs. Cost is a result, not the target.
- Consolidation is happening. One platform beats a patchwork when the stakes rise.

If you are at Big Data London, meet the Kestra team at Booth N48.

#data #ai #bigdataldn2025 #kestra #theravitshow

Declarative beats duct tape. I spoke to Eddie Jaoude, Lead Education Engineer at Kestra, at Big Data London to unpack how teams actually ship with modern orchestration.

Here is the flow we covered:
* Why defining workflows declaratively changes the game for developers coming from Python DAGs
* How teams adopt Kestra step by step without a risky forklift migration
* Using the Playground in Kestra 1.0 to test steps safely and tighten the feedback loop
* Plugging into existing stacks with plugins and a language-agnostic design so you do not rewrite everything
* The first usage patterns Eddie sees in the wild, from scheduled jobs and data pipelines to event-driven tasks

Clear takeaways. Fewer moving parts. Faster iteration. Stronger reliability.

#data #ai #kestra #bigdataldn #theravitshow
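To make the "declarative" point concrete, here is a minimal sketch of what a Kestra flow looks like as YAML, based on Kestra's public docs; the flow id, namespace, and schedule below are made up for illustration, not from the interview:

```yaml
# The whole workflow is data: id, namespace, tasks, and an optional trigger.
# No imperative DAG wiring as in Python-based orchestrators.
# Flow id, namespace, and cron below are hypothetical examples.
id: hello_world
namespace: demo.flows

tasks:
  - id: say_hello
    type: io.kestra.plugin.core.log.Log
    message: Hello from a declarative flow

triggers:
  - id: every_morning
    type: io.kestra.plugin.core.trigger.Schedule
    cron: "0 6 * * *"   # run daily at 06:00
```

Because the flow is plain YAML, it can live in version control and be reviewed like any other config, which is part of why adoption can happen step by step rather than as a forklift migration.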

Everyone is racing to build AI. Almost no one is securing it end to end. I sat down with Vidya Shankaran, CISSP, Field CTO at Commvault, and we talked about the real picture. AI risk is not just about models. It is about data, access, and clean recovery. Most teams are missing the biggest gaps.

We covered:
• The top AI security threats right now and where teams underestimate risk
• The AI stack no one is securing in practice
• Where a CISO should start and how to prioritize controls
• Why traditional data access governance is broken
• What is at stake if enterprises do not modernize access to sensitive data
• How Satori delivers faster access with tighter control
• How Commvault protects AI end to end and even recovers vector indexes and configs after an incident

We also talked about SHIFT 2025 in New York on Nov 11–12. This event will bring together AI security, data access, and resilience with real answers.
In-person: https://lnkd.in/dZ6t8nbY
If you cannot attend in person, there is a full virtual experience on Nov 19.
Virtual: https://lnkd.in/dz8yhf-c

This was a raw and tactical conversation. If you care about building AI that moves fast with control, you should watch it. SHIFT will set the tone for how enterprises secure AI in 2025.

#data #ai #security #shift2025 #theravitshow

A candid conversation with Navin Budhiraja, CTO and Head of Products at Vianai Systems, Inc. on The Ravit Show in Palo Alto. From Bhilai to Palo Alto: Navin topped the IIT entrance exam, studied at Cornell, and led at IBM, AWS, and SAP. We sat down to talk about building AI that enterprises can actually use.

What we covered:
- Vianai's mission and the hila platform: why it exists and the problem it solves
- How hila turns enterprise data into something teams can interact with in plain language
- Responsible AI in practice: tackling hallucinations and earning trust
- Why a platform like hila is needed even with powerful foundation models
- Conversational Finance: what makes it useful for finance teams
- Real integrations: ERP, CRM, HR systems, and how this works end to end
- Security for the real world: air-gapped deployments, privacy, and certifications
- The road ahead: how AI, IoT, and cloud are converging in the next 2 to 3 years
- Advice for the next generation of builders from Bhilai, the IITs, and beyond

Why this matters:
Enterprises want outcomes, not hype. Navin's lens on trust, flexibility, and scale shows how AI moves from pilot to production.

Thank you, Navin, for the clear thinking and straight answers. The full interview is live on The Ravit Show YouTube channel.

#data #ai #vianai #theravitshow

Stop moving data. Start shipping models. I sat down with Molly Presley from Hammerspace and David A. Chapa from Hitachi Vantara to unpack the new Hitachi iQ M Series and how it pairs with Hammerspace for real GenAI work.

What we cover:
• Running AI on data where it lives across sites and clouds
• One global namespace so teams stop copying data back and forth
• Flexible GPU choices that fit your budget today and scale later
• Independent scaling of compute and storage to control costs
• How this shows up on day one for training and inference

Why this matters:
• Faster first results without re-architecting your data
• Fewer bottlenecks keeping GPUs idle
• A simple path from pilot to production

Watch the full conversation. Link in the first comment.

Thank you Molly and David for the clear breakdown and the practicality teams need right now.

#data #ai #hammerspace #theravitshow

Big traffic moments expose weak systems. Ticketmaster treats them as a proving ground.

At .conf2025 I sat with Stephen from Ticketmaster to break down how they run reliable, resilient operations with Splunk. We started with his role and what his team owns. Then we went into the daily rhythm: where Splunk sits in their stack, how they monitor live traffic, and how signals turn into action during spikes.

We talked about impact. Faster incident response. Tighter collaboration across teams when every second counts. Clear visibility across services so they can move from symptoms to root cause with less back and forth.

Digital resilience was a major theme. Stephen walked through how they use Splunk products to harden critical paths, pressure test failure scenarios, and keep fan experiences stable during on-sales and marquee events.

We also covered outcomes the business cares about. Better uptime. Fewer fire drills. Cleaner handoffs. The ability to learn from every incident and feed it back into playbooks and automation.

We closed on what is next. More proactive detection. More use of data to predict hot spots before they flare. A roadmap that keeps resilience and customer experience front and center.

#splunkconf25 #SplunkSponsored #data #ai #theravitshow

Competitions can change careers. Katie Brown is proof.

At .conf2025 I sat with Katie Brown, Director of Platform Security at Splunk. We talked about her path from winning Boss of the SOC to joining Splunk, how she still supports the competition, and what her day to day looks like now. She shared a memorable challenge from BOTS that shaped how she works, practical advice for anyone thinking about capture the flag events, and why hands on contests help close the cybersecurity talent gap.

Highlights:
• From BOTS champion to leading platform security, a clear example of skills turning into opportunity
• Staying close to BOTS as a mentor and builder so more people can learn by doing
• Lessons that stick: pressure testing analysis, teamwork, and clear thinking under time limits
• Simple advice for newcomers: start small, practice often, document what you learn, share your work
• Why competitions matter: real signals of skill, faster hiring signals for teams, confidence for candidates

If you want a grounded view of how hands on learning opens doors in security, this will help.

#splunkconf25 #SplunkSponsored #data #ai #theravitshow

I had a blast at .conf25 by Splunk, where I recorded an on-site interview with Lizzy Li, Principal PM for dashboards at Splunk. I spent time at the Data Visualizations area to understand how teams bring Splunk data to life for investigations and exec reviews.

Lizzy walked me through three layers:
- Splunk UI Toolkit for reusable UI components so internal teams and developers can build apps faster
- The dashboard framework and visualizations that power charts across the Splunk portfolio and let developers create custom experiences
- Dashboard Studio, the general purpose tool most customers use to build and share dashboards

What stood out:
- Flexibility. Customers are not limited to charts in a grid. Think floor plans, architecture diagrams, and network maps that match real-world layouts
- One view of more data. Dashboard Studio can bring together logs, observability metrics, and, with federation, external data sources
- Scale and performance. Tabs let you pack multiple dashboards into one, and performance updates keep large numbers of charts responsive
- New this year. More advanced logic and the ability to show or hide panels dynamically so analysts can tailor the view during an investigation

If you care about clear, flexible dashboards that can handle real-world complexity, you will like this conversation. Full interview is below.

#splunkconf25 #SplunkSponsored #data #ai #theravitshow