The Ravit Show interviews interesting guests, panels, and companies to help the community gain valuable insights into trends in the Data Science and AI space! The show features CEOs, CTOs, Professors, Tech Authors, Data Scientists, Data Engineers, Data A

Most AI systems look impressive in controlled environments. But the real test begins when they meet production data.

In my latest conversation, I sat down with Antonio Bustamante, Co-Founder and CEO of bem on The Ravit Show, to unpack one of the hardest problems in enterprise AI today: making unstructured data reliable enough to power real decisions.

We talked about why extraction alone is not the problem anymore. The harder challenge is what comes after. Validation. Business rules. Schema enforcement. Exception handling. And making sure the system does not quietly fail when confidence drops.

Antonio shared how bem approaches this differently by treating unstructured data pipelines more like critical infrastructure than experimental AI workflows. Instead of relying on prompts and probabilistic outputs, their focus is on deterministic systems that enforce constraints, match against existing data, and flag uncertainty rather than guessing.

We also explored why unstructured pipelines tend to fail silently, how teams should rethink data quality in a probabilistic world, and where the real intellectual property lies when building AI-powered products today.

What I found most interesting is how this shifts the conversation from model capability to operational trust. Enterprises are not just asking whether AI works. They are asking whether they can depend on it when money, compliance, or customers are involved.

If you are building AI products, data platforms, or automation workflows, this discussion goes deeper than the usual AI hype cycle.

Watch the full interview below and share your biggest takeaway.

#data #ai #bem #theravitshow

They built one of the fastest growing newsletter platforms for creators.

I just published my conversation with Tyler Denk, Co-Founder & CEO of beehiiv on The Ravit Show, and this one is fully practical. Instead of talking about newsletters in theory, we went inside his own newsletter, Big Desk Energy, and broke down how he actually runs it. On screen. Step by step. No fluff.

We covered:
* How Big Desk Energy crossed 118,000 subscribers
* What is automated vs what still needs direct input
* Website builder from landing page to archive
* What happens the second someone signs up
* How signup data powers smarter automations
* Welcome journeys and re-engagement campaigns explained simply
* Mistakes new creators make with automations
* The post editor and what actually improves open rates
* Growth tools like Boosts, Marketplace, referrals, and recommendations
* Monetization through digital products, paid subscriptions, and ads
* Why beehiiv charges 0% platform fees on paid subscriptions

Personally, this conversation meant a lot to me. I use beehiiv myself to run our newsletter with 137,000 subscribers. It powers everything behind the scenes for us: publishing, segmentation, automations, monetization.

This episode felt like opening the backend of a real newsletter business and understanding how to think about it as a system, not just emails.

If you are serious about building a newsletter in 2026, this is not inspiration content. It is execution content.

We also have a special offer for The Ravit Show community:
30% off for your first three months
Code: RAVIT30
This link auto-applies the discount: https://beehiiv.link/7twzg8

I built 137K subscribers on beehiiv. Now you can see exactly how the CEO himself runs it.

#data #ai #newsletter #beehiiv #creators #community #automations #theravitshow

Let's go 2026!!!! This Enterprise series is huge. Loved recording this one with real leaders who are using GenAI and leveraging the right data.

Everyone is talking about GenAI in customer service. Very few are talking about what actually works in production. Next week, we are going live with a real, honest conversation on how enterprises are using GenAI with customer data to transform customer service.

This is not about theory. This is not about shiny demos.

We will talk about:
- What GenAI chatbots and virtual agents are actually doing in production
- Why most pilots fail when they hit real enterprise data
- The kinds of customer questions that create the most value
- How teams think about security, privacy, real-time access, and scale
- The business impact leaders really care about, like resolution time, agent efficiency, and customer satisfaction

Joining me are Ronen Schwartz and Miguel Navarro, who are working hands-on with these challenges inside large financial institutions.

If you are building GenAI for customer-facing use cases, or planning to, this session will help you avoid costly mistakes and focus on what actually matters.

Customer service is being redefined. But only for teams that get data right.

#data #ai #genai #agents #governance #dataquality #k2view #theravitshow

Once teams start trusting AI to explain systems and guide decisions, expectations change. The conversation shifts from understanding to action. In Part 2 of the podcast, we go deeper into what happens next in the AI journey on the mainframe.

We talk about trust and why it is the real blocker to scaling AI. How organizations move from alerts and recommendations to agentic workflows. And what safe autonomy actually looks like in real enterprise environments.

Liat Sokolov from BMC Software shares how trust in AI develops over time and why it cannot be rushed. Anthony DiStauro from BMC Software breaks down how AI agents and orchestration change day-to-day mainframe operations without removing human accountability.

This episode is about delegation, scale, and using AI in a way that strengthens reliability rather than putting it at risk.

#data #ai #mainframe #agents #skills #bmc #theravitshow


AI agents did not suddenly appear this year. What changed is that enterprises are finally ready to use them. At AWS re:Invent, I spoke with Ronen Schwartz, CEO of K2view, about what has really shifted in the agent space from last year to this year and why this moment feels different!!!!

Here is what we covered:

• What changed in the agent landscape
Last year was about ideas and experiments. This year is about execution. Ronen shared why agents are moving from demos into real enterprise workflows

• K2view on the AWS Marketplace
We talked about what it means for K2view to be available on the marketplace and how customers can make the most of it, from faster onboarding to simpler adoption

• The AWS and K2view partnership
We discussed how the partnership with AWS helps customers deploy agent-driven use cases faster while keeping control over their data

• Agentic AI and the re:Invent keynotes
We reflected on the keynote announcements and why agentic AI has become a central theme for AWS and its ecosystem

If you are trying to separate agent hype from what is actually working in production, this conversation brings a grounded perspective!

#data #ai #awsreinvent #agents #agenticai #aws #enterprise #k2view #theravitshow

Cloudera and AWS Partnership is extraordinary!!!! At AWS re:Invent, I stopped by the Cloudera booth to sit down with Michelle H., who leads Global Alliances and Channels worldwide for Cloudera. She sits right at the intersection of Cloudera, AWS, and the customers who are trying to move faster with data and AI.

A few highlights from our conversation:

• Cloudera x AWS as a strategic bet
This is not a surface-level partnership. There is a Strategic Collaboration Agreement in place with AWS that goes deep into joint roadmaps and hands-on POC workshops, all focused on delivering more value for customers.

• AI, but with control and flexibility
Customers want AI, but they also want to de-risk it. Cloudera is helping with offerings like AI Studios, while still giving customers an open base and control over their data in their own AWS environment.

• Hybrid as a real differentiator
Cloudera is becoming a true “bridge to the cloud” for many teams, helping them move workloads between cloud, on-prem, and even the edge instead of locking them into a single pattern.

• Real impact, not just slides
We talked about work with Mercy Corps, where data and AI are helping save lives in crisis zones like Sudan and Gaza, and about a major pharma company running 100-plus GenAI use cases on Cloudera AI to bring drugs to market faster.

• Innovation and sovereignty
Cloudera is also leaning into Sovereign Cloud, including being a launch partner for the EU Sovereign Cloud in EMEA and working in regions like the Kingdom of Saudi Arabia.

Michelle's advice to enterprise leaders was simple and practical: lean on your partners as trusted advisors, and do not wait for the perfect plan. Get started today with one use case and learn from it.

#data #ai #awsreinvent #aws #agents #awspartners #agenticai #theravitshow

At AWS re:Invent this month, I spoke to the one and only Ed Huang, Co-Founder and CTO of TiDB, powered by PingCAP, to talk about something many teams are quietly struggling with.

The last few years in data were all about unbundling.
- One database for transactions.
- One for analytics.
- One for vectors.

That worked when workloads were separate. But AI changed the rules.

As Ed put it, today the same application, or even a single AI agent, jumps across transactions, analytics, and vector search in milliseconds. No developer wants to manage five databases. Agents cannot do that at all.

That is why PingCAP is betting on unification with TiDB X. One system that can:
- Scale like an analytical warehouse
- Behave like a transactional database
- Support vectors and search natively
- Run on object storage with elastic compute
- Keep a single SQL interface with strong guarantees

This does not mean specialized databases disappear. Many of them will continue to win in narrow use cases. But the center of gravity is clearly shifting.

As AI agents become first-class users of data systems, platforms that remove boundaries matter more than platforms that optimize for one workload in isolation.

This was a sharp, honest conversation on where databases are heading in the AI era.

#data #ai #awsreinvent #pingcap #tidb #agenticai #aws #theravitshow

Big companies do not switch databases unless something is broken. At AWS re:Invent I spoke to Max Liu, Co-Founder and CEO of TiDB, powered by PingCAP, and Bo Liu, Head of Online Infrastructure at Pinterest, to talk about why teams like Pinterest are moving core systems back to SQL with TiDB.

This was a deep, honest conversation about scale, reliability, and where databases are headed next.

We covered:
- Why Pinterest finally hit a wall with NoSQL after years on HBase
- What really breaks first at massive scale: consistency, operations, or developer velocity
- How Pinterest migrated critical graph systems to TiDB with no downtime and five-nines consistency
- The tension CIOs face between rock-solid stability and fast-moving AI teams
- Whether embeddings and AI workloads should live inside the same database as transactions
- How agentic workloads are changing what enterprises expect from a database

This was not about hype. It was about real systems, real migrations, and real trade-offs.

#data #ai #awsreinvent #agents #agenticai #aws #database #enterprise #awspartners #pingcap #theravitshow

Most cloud conversations ignore one simple truth. AI only moves as fast as your connectivity. At AWS re:Invent, I spoke to Ed Baichtal, Principal Engineer at Equinix, to talk about what actually sits underneath modern cloud and AI architectures!!!!

A few takeaways from the conversation:
- Equinix operates more than 270 data centers worldwide and acts as the physical home for servers, networks, and cloud on-ramps
- Equinix is the largest AWS on-ramp provider, with native AWS connectivity built directly into its data centers
- The Equinix Fabric Cloud Router is now available on the AWS Marketplace, making multi-cloud connectivity much simpler
- Customers can virtually connect up to 25 Gbps to AWS across different regions without deploying physical hardware
- AI workloads are pushing serious demand for higher throughput and more reliable connectivity between clouds
- Over 3,000 customers are already connected through the Equinix Fabric, which means new connections can happen virtually
- One customer moved massive volumes of data to AWS in under 30 days using the Fabric Cloud Router
- Looking ahead, Equinix plans to introduce Fabric Intelligence in Q1 2026 to make multi-cloud and NeoCloud connectivity even smarter

If you are building AI or multi-cloud systems, this layer matters more than most people realize.

#data #ai #awsreinvent #aws #agents #awspartners #agenticai #theravitshow

Developers are not just writing code anymore. They are starting to run a virtual team. At AWS re:Invent, I had a conversation with Jemiah Sius, VP, Market Strategy and Developer Relations at New Relic, about how AI is changing the day-to-day life of developers. This was one of those chats that makes you pause and rethink how software will be built very soon.

Here is what stood out:

-- Agentic AI is becoming real for developers
Teams are excited about agents that behave like a digital team or a virtual SRE, taking care of reliability and performance while developers focus on building features

-- Developers are becoming orchestrators
Over the next 6 to 8 months, the role of the developer is shifting. Less time writing every line of code, more time directing agents and tools. This shift is already driving a big jump in productivity

-- Observability matters more than ever
As agents start working across multiple LLM servers and interacting with other agents, visibility becomes critical. Without observability across the full agent layer, things can quickly create more work instead of less

-- New Relic and AWS coming together
We talked about the New Relic integration with AWS Q, which brings observability data directly into AWS DevOps workflows, and the new security agent that surfaces real production data on vulnerabilities

It was great catching up with Jemiah again and hearing how New Relic is thinking about the future of developers and reliability.

#Data #AI #AWSreInvent #NewRelic #AgenticAI #Security #MCP #reinvent #TheRavitShow

Fantastic catching up with Camden Swita, Head of AI at New Relic, at AWS re:Invent last week to discuss the biggest changes in the world of Agentic AI over the last year! We talked about how New Relic is connecting the dots at AWS re:Invent with major announcements, including:

- The Security RX agent, an AI agent that automates the process of finding and remediating security vulnerabilities at runtime, saving engineers a huge amount of time
- Integrating the Model Context Protocol (MCP) server with AWS DevOps agents to ensure agents can access live context and make informed decisions from production
- Camden also shared advice for leaders entering the agent tech space: don't treat observability as a secondary concern, as end-to-end tracing will be "fundamentally essential" for compliance and performance

Watch the video to learn more!

#Data #AI #AWSreInvent #NewRelic #AgenticAI #Security #MCP #reinvent #TheRavitShow

Most companies want AI agents talking to their customers. Very few have the data to back it up. I had a blast chatting with Ali Behnam, Founder of Tealium, on The Ravit Show at AWS re:Invent. If you work with customer data or AI, this one is worth watching.

Tealium helps organizations bring customer data together in one place and use it in real time across marketing, product, and AI use cases. I keep hearing their name when people talk about getting data ready for AI, so I sat down with Ali to go deeper.

We spoke about:
- Who uses Tealium and the main problem they solve for enterprises that are serious about customer data
- Why agentic AI is such a big theme this year and what is actually changing inside large companies
- Why so many AI agent projects get stuck in pilots and what is missing to make them work in the real world
- How Tealium works with AWS, and what being built on AWS unlocks for customers
- What Ali wants customers to be able to do by next re:Invent that they cannot easily do today

If you are trying to make AI agents useful for real customers, not just in demos, I think you will find this helpful.

#data #ai #aws #reinvent2025 #awspartners #databases #agents #agenticai #tealium #theravitshow

Most mainframe challenges today are not caused by broken systems. They are caused by disappearing knowledge.

Senior experts are retiring. Documentation is incomplete or outdated. And newer team members are expected to operate some of the most critical systems in the enterprise with very little context.

In Part 1 of this podcast, we focus on where the AI journey really begins.
- Why mainframe teams are under pressure right now.
- What types of institutional knowledge are most at risk.
- And how AI changes the way that knowledge is captured, shared, and used.

Liat Sokolov walks through why AI is showing up at this moment and why traditional approaches have not been enough. Anthony DiStauro explains how AI helps close the skills gap by shortening learning curves and guiding teams through complex systems with more confidence.

This episode is not about autonomy yet. It is about preserving what matters and helping people work better with the systems they already depend on.

Part 1 is live now.

#data #ai #mainframe #agents #skills #bmc #theravitshow

I had a blast chatting with Anthony Anter, DevOps Evangelist at BMC Software, on The Ravit Show, and this one goes deep into a topic many enterprises are struggling with quietly. Mainframe modernization. Not tools. Not hype. Real ground reality.

We started with a simple but uncomfortable truth Tony writes about: before you even think about converting code, explain what you already have. That single line sets the tone for the entire conversation.

- We talked about why so many COBOL to Java projects fail even before they begin.
- Why teams rush into conversion without understanding decades of business logic buried in code.
- And why mainframe systems often look like a long game of telephone, where intent is lost but code survives.

A big part of the discussion focused on generative AI. Not as a magic converter, but as a way to explain, map, and document existing systems before touching a single line of Java. When teams finally see dependencies and flows clearly, the surprises are often eye-opening.

We also broke down a critical distinction that is often ignored: code explanation is not the same as code translation. Missing this is where most modernization programs go wrong.

Tony also shared why refactoring before rewriting matters, what practical cleanup really looks like, and how GenAI can help create Java code that is actually maintainable, not just converted.

One part I personally found valuable was the balance between automation and human expertise. Where AI helps, where humans are still irreplaceable, and what governance is needed so AI output can be trusted.

We wrapped with Tony's checklist for smarter modernization and one clear takeaway for anyone working on or around mainframes today.

If you are a CIO, architect, or mainframe professional thinking about modernization, this conversation will save you from expensive mistakes.

#data #ai #mainframes #bmc #theravitshow

AI is moving fast. But the data foundations underneath it are not. That is the core theme of my latest interview with Rajan Goyal, Founder and CEO of DataPelago, on The Ravit Show, recorded at their office in Mountain View.

Here is what we unpacked:

Why now
Three shifts are colliding at once. Hardware acceleration is now mainstream. Generative AI has changed how data is created and consumed. Data complexity has exploded beyond what existing systems were designed to handle. The result is clear. Enterprises do not just need faster systems. They need a more unified data foundation.

Where the tension is
AI models are advancing quickly, from multimodal systems to agents and domain-specific LLMs. But the data infrastructure beneath them is still built for an analytics-first world. Most companies spend more time moving data between systems than actually innovating with it.

What DataPelago is building
We spent time breaking down DataPelago Nucleus, described as the world's first universal data processing engine. One engine that can handle batch, streaming, relational, vector, and tensor workloads together. The key idea is simple but powerful. Ingest, transform, and query data without constantly moving it across systems.

We also talked about what makes their approach different:
- A DataOS layer that intelligently maps workloads across CPUs, GPUs, and other accelerators.
- A DataApp layer that plugs into engines like Spark and Trino.
- And DataVM, a data-focused virtual machine that unifies execution across heterogeneous hardware.

Why Spark acceleration matters
For teams running Spark today, we discussed the DataPelago Accelerator for Spark. It runs existing Spark workloads on accelerated compute with zero code changes. Faster joins, shuffles, and preprocessing, at lower cost, without rewriting pipelines.

Why today's stack is breaking
Warehouses, lakes, and lakehouses were built for SQL analytics. AI workloads need tight coupling between data and compute. The separation we see today leads to redundant pipelines, silos, and expensive data movement. Many teams are forced to optimize for analytics or AI, but not both.

Why DataPelago was founded and what customers see
The founding insight was clear. Data systems were never designed for AI-scale throughput. Customers adopting this approach are unifying analytics and AI pipelines on one platform, simplifying infrastructure while improving performance, governance, and observability. Rajan made an interesting comparison. This shift for data processing is similar to what GPUs did for compute.

What's next
We closed by talking about how the data and AI relationship will evolve over the next few years, and what this looks like in real-world deployments. That is what the next episode will dive into.

If you are building AI systems and still relying on analytics-era data foundations, this one is worth your time.

#data #ai #gpu #datapelago #lakehouse #sql #analytics #theravitshow

AI pilots are not the story anymore. The real story is who can turn AI into everyday, reliable work. Here at AWS re:Invent, I had the chance to sit down with Arvind Jain, Founder and CEO of Glean, for a conversation I have wanted to do for a long time!!!!

We spoke about why so many enterprises are stuck in pilot mode and what actually has to change for AI to move from experiments to real impact on how people work. Arvind went deep on why enterprise context is still missing in most AI projects and what it looks like in practice when you finally get that piece right.

We also talked about what CIOs and CTOs are really worried about this week at re:Invent. Not the buzzwords, but the hard problems around adoption, scale, and the misconceptions that keep slowing teams down!

Since so many organizations here already run on AWS, I asked Arvind how the Glean and AWS partnership shows up in the real world. He shared how customers are thinking about reliability, security, and scale, and what a simple, low-risk starting point looks like if you want to see value fast.

To close it out, Arvind shared one clear prediction for where AI is heading by 2026 and how it will change the way organizations think about work.

Thanks Arvind Jain for always sharing amazing insights!!!!

#data #ai #awsreinvent #aws #agents #awscompetencypartners #agenticai #theravitshow

The future of reliability is not one tool. It is a team of agents working together. At AWS re:Invent, I had a chat with Francois Martel, Field CTO at NeuBird.ai, to talk about how AI is changing the way developers and SREs handle reliability in the real world.

Here are the key takeaways from our conversation:

-- Coding agents are becoming the front door to AI
Tools like GitHub Copilot and Cursor are getting massive adoption. When paired with NeuBird's Hawkeye agentic SRE server, these agents can jump straight into root cause analysis and even take action to remediate issues

-- SREs are a natural fit for agents
SREs already live in the command line and think in scripts. Coding agents are an easy and practical entry point for bringing AI into day-to-day SRE workflows

-- Agent adoption is speeding up
We are past experimentation. Customers are seeing value from early use cases, which is pushing broader and faster adoption of agent-based systems

-- Enterprise security still matters
For larger organizations, NeuBird can deploy the agent inside the customer's VPC. The data stays in their environment and the full data path remains under their control

-- AWS partnership momentum
NeuBird is launching a pay-as-you-go offering on the AWS Marketplace. This makes it one of the first agentic SRE servers you can try without long-term commitment and connect to tools like AWS, Datadog, Dynatrace, and Grafana

If you want to see how agentic SRE works in practice, you can start with the pay-as-you-go option or the two-week free trial and pair it with your favorite coding agent.

It was great catching up with Francois again and seeing how NeuBird is pushing the agentic SRE space forward.

#data #ai #awsreinvent #aws #agents #awspartners #copilot #theravitshow

Frontline healthcare is where AI either proves itself or stays a buzzword. At AWS re:Invent, I sat down with Max Mosky from TouchPoint Support Services, a Qlik customer working across more than 650 healthcare facilities!!!!

Max and his team support the people who are closest to patients. I wanted to understand how they are using data and AI to make life easier for frontline staff, not harder.

We spoke about:
- What TouchPoint Support Services does and Max's role in supporting frontline teams
- The day-to-day challenges their staff face inside hospitals and care facilities
- Why they chose to partner with Qlik and AWS instead of trying to build everything in house
- How Qlik Answers is bringing AI into frontline operations and what has changed for staff and patients so far
- How Amazon Bedrock and Qlik work together behind the scenes to deliver real-time support at scale
- What it means for Max and the team to be recognized as an AWS GenAI Trailblazer

If you care about how AI can support real people in real hospitals, this one will be worth your time.

#data #ai #aws #reinvent2025 #awspartners #analytics #agents #agenticai #qlik #theravitshow

Where your data lives will decide what AI you can run tomorrow. On The Ravit Show at AWS re:Invent, I spoke to Jessica DuBois, Senior Director for the AWS partnership at Qlik. Jessica is right at the center of how Qlik and AWS work together for customers.

We spoke about:
- Her role in leading the Qlik x AWS partnership and how that work shows up for customers in real projects
- How she describes the Qlik x AWS partnership today and why it has become so important for customers who want trusted analytics and AI on AWS
- Why data sovereignty is now a non-negotiable topic in boardrooms and how Qlik becoming a launch partner for AWS European Sovereign Cloud opens new options for customers with strict regulatory needs
- What this means in practice for organizations in regulated industries that want to move faster with analytics and AI without losing control of their data
- Where she sees the Qlik and AWS partnership heading as we move into 2026 and the next wave of AI-driven use cases

If you care about data sovereignty, trust, and how partners like Qlik and AWS are shaping the next phase of AI, this one is a good listen.

Feel free to follow Jessica and learn more about the partnership!!!!

#data #ai #aws #reinvent2025 #awspartners #analytics #agents #agenticai #qlik #theravitshow

At AWS re:Invent in Vegas, I spoke to Christopher Powell, CMO at Qlik. We last spoke at Qlik Connect 2025 in Florida, so this was a good moment to step back and look at what has changed since then.

In our conversation, we covered:
- What Chris and the Qlik team have been focused on since Qlik Connect and how priorities have evolved through the year
- The announcements from AWS re:Invent that stood out most to him and why they matter for customers
- Qlik's new master reseller partnership with Megazone Cloud in Korea and why this collaboration is strategically important for the region
- How Qlik's push into agentic AI is shaping customer adoption and what to expect over the next year
- Why the AWS partnership continues to be a core pillar for Qlik and where the biggest opportunities lie
- The trends Chris is watching as we move into 2026 and what they mean for data and AI leaders

If you want a clear view of how Qlik is thinking about AI, partnerships, and what is coming next, this conversation is worth your time.

#data #ai #aws #reinvent2025 #awspartners #analytics #agents #agenticai #qlik #theravitshow

At AWS re:Invent I spoke to Woon Ho Jung, CTO for Cloud Native at Commvault, to talk about how they are helping AWS customers protect more than just one type of workload.

We spoke about how they started with BackTrack for S3 and now support DynamoDB and Apache Iceberg, and what real problem that solves when your data is spread across so many services!

For teams who are new to Apache Iceberg on AWS, I asked Woon to break down the basics. What do you need in place so that recovery is not a theory, but something you can rely on when a table, job, or pipeline goes wrong!

If you care about resilience across modern AWS workloads, this one will be worth watching.

#data #ai #awsreinvent #aws #agents #awspartners #awscompetencypartners #agenticai #theravitshow

AI workloads do not fail where people expect. They fail where resilience was never designed in. At AWS re:Invent, I spoke with my friend Michael Fasulo, Senior Director for Portfolio, AI Resilience and ResOps at Commvault, to understand where AI systems on AWS are actually breaking and what teams should do differently.

This conversation was practical and grounded in what customers are seeing today. We covered:
- Where resilience breaks first in real-world AI workloads running on AWS
- What ResOps means in simple terms and why it is becoming essential for AWS customers running AI at scale
- How architects should think about recovery when AI spreads across S3, DynamoDB, and Iceberg-based data lakes
- The design rules that make recovery easier instead of more complex
- The most important AI-driven resilience capability Commvault is building into its AWS portfolio over the next year

If you are building AI on AWS and assuming resilience will take care of itself, this conversation is a good wake-up call.

#data #ai #awsreinvent #aws #agents #awspartners #awscompetencypartners #agenticai #theravitshow

Winning AWS Storage Partner of the Year does not happen by accident. It happens because real customers see real value. Commvault did it!!!! At AWS re:Invent, I spoke to John Drake, Head of WW AWS Storage Data Protection at AWS, and Elliot Hujarski, VP, Global SaaS GTM at Commvault, to talk about how this partnership actually works behind the scenes.

We spoke about:
• What it took for Commvault to be named Global AWS Storage Partner of the Year and the journey behind that recognition
• How the Commvault and AWS partnership is structured and what each side brings to the table
• The concrete ways customers are benefiting, from data protection and backup to cyber recovery and resilience on AWS
• How this partnership helps customers simplify complexity while keeping their data safe and ready for AI and analytics
• What is coming next and how they plan to expand the joint value for customers in the months ahead

If you care about how to protect and manage data at scale on AWS, this one will be useful.

#data #ai #awsreinvent #aws #agents #awspartners #awscompetencypartners #agenticai #theravitshow

At Microsoft Ignite last week, one theme kept coming up in every hallway conversation: SRE teams are exhausted. Right in the middle of that, I sat down with Francois Martel, Field CTO at NeuBird.ai, to talk about how they are trying to change that story together with Microsoft.

NeuBird is working closely with Microsoft to bring Hawkeye, their AI SRE assistant, into the Azure world. Instead of giving teams yet another dashboard, Hawkeye connects to signals across your stack and helps answer the basic questions every on-call engineer has at 3am: What is actually broken, where should I look first, and what should we try next?

Microsoft also featured NeuBird as one of only three startups to watch at Ignite this year. That is a strong signal that reliability and SRE are finally getting a real seat at the AI table, not just sitting behind analytics and chat use cases.

In our conversation we went deep on three areas:
- The Microsoft and NeuBird partnership and what it unlocks for Azure customers
- Why being named a startup to watch matters for a young company in a crowded AI space
- The real SRE pain points they are focused on: noisy alerts, slow incident bridges, growing hybrid environments, and burnout on the teams who keep everything running

If you care about reliability, SRE work, and how AI can support teams instead of adding more noise, this interview will be worth your time.

I will drop the NeuBird and Microsoft blogs and the Ignite startup article.

#data #ai #ignite #microsoft Microsoft Azure Microsoft for Startups #microsoft2025 #copilot #agents #theravitshow

30 people. Over $1B valuation in six months. And a clear signal that AI chat alone is no longer enough.

I just sat down on The Ravit Show with Wen Sang, Co-Founder & COO of Genspark.

Genspark is building an all-in-one AI workspace where AI agents do the actual work. Not just answers, but finished outputs: slides, sheets, websites, designs, inbox management, and more, all in one place.

In this conversation, we break down:
- Why Genspark chose execution over conversation
- How a 30-person team scaled at an unbelievable pace
- What real AI agent workflows look like in production
- Why the future of work is moving from tools to AI teammates

If you are still juggling multiple AI tools, this episode will change how you think about productivity.

You can try Genspark with free credits, and they are running two major promotions right now:
- Jan 1–10 only: Get 40% off any Annual plan. Save $100 on Plus Annual or $1,000 on Pro Annual
- Unlimited AI Chat and AI Image usage for all of 2026, with access to top models like GPT-5.2, Claude Opus 4.5, Gemini 3 Pro, Flux, Seedream, Nano Banana Pro, and more

Watch and explore here: https://www.genspark.ai/?utm_source=yt&utm_campaign=ravitshow

This is not another AI chat story. This is about how work actually gets done in the AI era.

#data #ai #Genspark #WorkWithGenspark #gensparkpartner #agents #openai #theravitshow

I'm here at Microsoft Ignite! Excited to cover every announcement on the ground. Before the big keynotes, I sat down with Shireesh Thota, VP of Azure Databases at Microsoft, for an exclusive conversation that sets the stage for what's coming.

We talked about how Microsoft is redefining databases for the AI era, moving beyond just compute to focus on the power of data as the real foundation of AI.

Shireesh walked me through the key differentiators of Azure Database offerings, how they're being built to handle the next wave of intelligent workloads, and how Microsoft Fabric is bringing everything together as a unified data platform.

We also discussed how Microsoft is infusing AI directly into databases, making them faster, smarter, and adaptive, while also enabling agentic AI systems that can act on insights in real time.

And something many don't know: Microsoft is one of the top contributors to the PostgreSQL open-source project. Shireesh shared how this community collaboration continues to drive innovation and scalability across the ecosystem.

We wrapped up with a look ahead at what's next in database innovation and how AI will continue to reshape performance, automation, and intelligence at the core of enterprise data systems.

This is just the beginning of an exciting week at Microsoft Ignite 2025. Stay tuned for more coverage and interviews right from the floor.

#Data #AI #BI #AzureData #PostgreSQL #MicrosoftIgnite #Azure #TheRavitShow

Are enterprise agents finally ready for prime time at scale? At Microsoft Ignite, I sat down with the one and only Jay Parikh, Executive VP at Microsoft, to understand how Microsoft is actually building the full foundation for enterprise-grade agents, not just talking about it.

In our conversation, we went deeper into what “any agent, anywhere, at any scale” really means for developers and teams who are trying to move from experiments to production.

We spoke about:
-- How Foundry IQ, in public preview, changes the way agents connect to enterprise and web data through one secure entry point
-- How Model Router, now generally available, helps teams pick the right model for each task instead of guessing
-- Why the new Foundry Control Plane is so important for centralizing agent management and scaling safely
-- How Foundry Observability, now GA, gives teams real-time visibility into multi-agent systems so they can troubleshoot and improve faster
-- What Microsoft Agent Factory solves by bundling 33 products for teams that want to deploy agents quickly without stitching everything together on their own

We closed by looking ahead at what the next phase of multi-agent systems inside large enterprises could look like when you treat agents, data, models, and infrastructure as one connected stack instead of separate pieces.

If you are serious about building or scaling enterprise agents, this is one conversation you will want to watch.

#data #ai #ignite #microsoft Microsoft Azure Microsoft for Startups #microsoft2025 #copilot #agents #theravitshow

Last week at Microsoft Ignite, I finally got to interview the man himself, Arun Ulag, President, Azure Data at Microsoft. I have known Arun for years. He is one of the best in the data and analytics space, and meeting him in person was so good. I have a new friend!

This was not a surface-level chat. We went straight into the big announcements and what they unlock for customers:
• The real takeaways from his keynote and how they help enterprises move faster with data and AI
• What Fabric IQ actually is, how it works behind the scenes, and why it is a big deal
• How customers can start seeing value from Fabric IQ from day one, not after months of setup
• What makes Azure HorizonDB for PostgreSQL different and where it fits for modern workloads
• What is coming next for data services in the era of AI
• How leaders should think about the next 4 to 5 months with all these updates landing at once

If you are trying to understand where the Microsoft data and AI stack is really heading, this conversation will give you a very clear picture!

#data #ai #ignite #microsoft Microsoft Azure Microsoft for Startups #microsoft2025 #copilot #agents #theravitshow

Agentic AI does not matter if it stays in slides. What matters is whether it can run safely in production.

After AWS re:Invent, I sat down with Hemant Mohan from Amazon Web Services (AWS) for a follow-up conversation focused on agentic AI and the AWS and IBM partnership. This discussion was less about announcements and more about what actually changes for enterprises now.

Here is the core of what stood out.

- AI is moving from response to action
We are past the phase of just experimenting with models. The next phase is agentic AI systems that can reason, plan, and execute across tools and workflows. Business problems are never single step. They involve legacy systems, approvals, fragmented data, and humans in the loop. Agents are built to work across that reality.

- Why the AWS and IBM partnership matters
Many enterprises struggle with the same question: how do we move agents from proof of concept to production without breaking security, compliance, or reliability? This partnership is about solving that gap. AWS brings the trusted, global infrastructure to run agents at scale. IBM brings orchestration, governance, and deep enterprise integration. Together, they give customers a clear path from experimentation to operational trust.

- From promise to production
IBM becoming a launch partner for the AWS Marketplace AI and Tools category was an important signal. Customers want flexibility and choice, but they also want enterprise-grade controls. This partnership is designed to give them both.

- watsonx Orchestrate and Amazon Bedrock AgentCore
Announced at re:Invent, this integration brings together AWS's managed runtime for agents with IBM's orchestration layer. The result is a full agentic stack where agents can run securely, maintain context, be audited, and coordinate across complex business workflows. This is what running agents responsibly actually looks like.

- Project Bob
We also spoke about IBM's Project Bob, which tackles a problem many enterprises quietly struggle with. Most development budgets still go toward modernizing old code instead of building new features. Project Bob brings agentic AI into software modernization, helping teams refactor faster, reduce risk, and deal with long-standing skill shortages.

My takeaway: agentic AI will not be adopted because it is powerful. It will be adopted because it is trustworthy. The AWS and IBM partnership is focused exactly on that transition.

Learn more about the partnership here -- https://lnkd.in/gyRRuvdk

#data #ai #awsreinvent #agents #agenticai #aws #enterprise #IBM #theravitshow

Healthcare runs on data. Most of it is still unstructured!

I had a great conversation on The Ravit Show with Lyle McMillin, AVP of Product Management at Hyland, where we went deep into how healthcare organizations are turning unstructured content into real decisions.

A few points that stood out to me:

- Most healthcare data is unstructured. In many organizations, it is close to 80 percent. This includes clinical and admin documents, faxes, emails, and massive imaging files like X-rays, CTs, and MRIs.

- Hyland's Content Innovation Cloud is changing how this data is used. Lyle shared how their Intelligent Med Record solution is helping teams move faster. A beta customer, Nebraska Medicine, saw a 99 percent improvement in time to value, with classification accuracy improving from 95 to 96 percent, driven by new LLM-based capabilities.

- AI is no longer just about automation. It is about decision support. With Knowledge Discovery, teams can ask natural language questions and get answers with direct links to the exact place in the document. This can cut 60 to 80 percent of the time spent searching and stitching information together.

- Agentic AI was the most interesting part of the discussion. From classifying documents to pulling missing records, bundling them, and sending claims back to payers, the system can handle most of the workflow. When humans step in, 80 percent of the work is already done.

Healthcare transformation is not only about new systems. It is about finally making sense of the data we already have.

#data #ai #agentic #healthcare #unstructureddata #agents #knowledgehub #hyland #theravitshow

I had a chance to attend and cover SHIFT by Commvault in New York last week. I spoke to Tim Zonca, VP of Product Marketing at Commvault, about ResOps, keynote takeaways, and more.

What we covered:
• ResOps in one line and why it matters now
• How the Unity Platform turns resilience into daily practice across cloud, SaaS, and on-prem
• Synthetic Recovery and what it does to RTO and incident playbooks
• Identity resilience for AD with real-time detection and safe rollback
• Cloud-native protection with cost and energy views that drive smarter policies

Leaders want AI speed with clean recovery and tighter identity control. Tim breaks down how Unity brings these pieces together so teams can move fast and stay safe. Simple language. Clear outcomes. No hype.

My take: Synthetic Recovery plus Cleanroom is a real shift. The AD rollback story closes a gap I see in many incident reviews. The cost and energy view belongs in every daily dashboard!

The interview is live now. I will share more clips and floor notes from SHIFT as sessions go up.

Check out all the announcement links in the comments!

#data #ai #cloud #security #cybersecurity #recovery #resilience #commvault #shift2025 #shift #theravitshow

Most companies think they have a backup problem. What they really have is a recovery problem.

I spent the day at Commvault SHIFT, on the ground talking to customers, partners, and the Commvault team. I also spoke with Darren Thomson, Field CTO at Commvault, for a focused chat on what they announced and what it means in the middle of real attacks and real pressure on IT teams.

In the interview, we got into:
-- The new Unity Platform and why Commvault is trying to bring data security, data access, identity patterns, and cyber recovery into one view
-- How they are thinking about very fast, clean recovery after an attack, not just “we have backups somewhere”
-- Cloud Native Protection and what that actually changes for existing customers and large enterprises already deep into cloud
-- Identity Resilience and why identity is now part of the data security story, not a separate topic
-- Synthetic Recovery and how they use ML signals from anomalies, entropy, threat scans, and third-party tools to compose a clean state across multiple backup points
-- Real-world use cases from regulated industries, where downtime and data loss are simply not an option

What I liked most is that this was not only about shiny new features, but about how security, backup, and AI are starting to come together in one place. For teams who are tired of stitching ten tools together to get back up after an attack, this direction will be interesting to watch.

If you care about data, security, and recovery, this is worth your time.

Check out all the announcement links in the comments!

#data #ai #cloud #security #cybersecurity #recovery #resilience #commvault #shift2025 #shift #theravitshow

From SHIFT by Commvault in New York, I sat down with Christopher Mierzwa to talk culture, clarity, and execution!

What you will get:
• Real takeaways from his panel
• Why people, mindset, and culture decide security outcomes
• Practical advice for leaders, CISOs, and CIOs

Highlights:
• Culture beats tools when pressure hits. If teams do not trust each other, runbooks stall.
• Mindset sets the tone. Treat incidents as system problems, not hero moments.
• Practice builds confidence. Short drills with clear ownership move every metric that matters.

Advice from Chris:
• Start with people. Define roles, practice handoffs, review the tape after every drill.
• Build muscle memory. Run small, frequent exercises across IT, SecOps, and the business.
• Keep the board close. Explain risk in plain language and track progress like product work.

My take: security is a team sport. The best programs invest in culture first, then controls.

#data #ai #cloud #security #cybersecurity #recovery #resilience #commvault #shift2025 #shift #theravitshow

Stop chasing tools. Start fixing decisions.

I spoke to Stephen Sciortino, CEO and Founder of Database Tycoon LLC, at Small Data SF by MotherDuck. Clear takeaways for anyone running or advising a data team.

What we covered:
• The real shift from his Brooklyn Data days to independent consulting
• Early signals a team will win vs signs they are in trouble
• How AI is changing expectations and what must stay the same

Watch the complete interview! Practical, direct, and worth your time.

#Data #AI #SmallDataSF #DataEngineering #Analytics #TheRavitShow

Real-time data is no longer a future problem. At Small Data SF by MotherDuck, I sat down with David Yaffe, Co-Founder & CEO at Estuary, to talk about what has changed in the world of data streaming!

A few years ago, real-time data was something most teams put on their “later” list. Expensive. Hard to scale. Too complex for most use cases. But as David shared, that story has shifted fast.

Here are some takeaways from our conversation:

- Streaming is now viable for everyone
With cheaper compute, mature tooling, and simpler developer experiences, real-time data isn't a luxury anymore. The barriers that once made it a niche capability are gone.

- Batch vs real-time: asking the right questions
Before jumping to streaming, David suggests asking what problems you're solving; speed for the sake of speed rarely pays off. Sometimes batch is just fine. The goal is fit, not flash.

- Architecture matters
Moving from batch to streaming means thinking end-to-end: from schema evolution and error handling to observability. Teams that skip this planning end up redoing pipelines.

- CDC done right
Change Data Capture is powerful, but it's easy to misuse. The most common mistake? Treating CDC as an ETL replacement rather than an event system. Understanding that difference prevents pain later.

The conversation was practical, focused, and refreshing. Real-time isn't about chasing trends, it's about enabling faster insights and cleaner data movement with less friction.

If you've been wondering when “real-time” becomes realistic, this one will give you a clear answer.

#data #ai #motherduck #smalldatasf #theravitshow

Small Data. Real outcomes.

I covered MotherDuck's Small Data SF in person and spoke to my long-time connection Colin Zima, CEO of Omni, to cut through the AI noise in BI. We focused on what actually moves the needle for teams today.

Here is what we got into:
• Where AI is creating real BI value: practical wins that ship now, not hype cycles
• Flexibility vs governance: how Omni gives analysts room to explore while keeping the shared truth intact
• Why build a modeling layer: what Omni's own model unlocks for speed, trust, and how far AI can go
• Embedded analytics after the Explo acquisition: when it makes sense to put live insights inside your product and what to avoid
• Simple over clever: ways AI can remove clicks, clean up metrics, and make BI easier to use
• A common mistake with AI in dashboards: teams bolt on features before they fix definitions and owners
• The agent future: if agents run dashboards tomorrow, why export to Excel might still matter

If you care about getting answers faster with clear guardrails, you will like this one.

#data #ai #motherduck #smalldatasf #theravitshow

Small Data is having a big moment! I covered Small Data SF by MotherDuck in person and sat down with Brittany Bafandeh, CEO at Data Culture. We talked about the real blockers to impact and how teams can move faster with the data they already have.

Here is what we got into:

When it is not a data problem: Brittany walked through a case where dashboards, pipelines, and new tools were not the fix. The real issue was slow decisions and unclear ownership. Once they set decision rights and clear KPIs, outcomes changed without buying more tech.

Do you have a data culture or just tools: As a consultant, she looks for simple signals. Are decisions tied to metrics? Do teams review outcomes every week? Are definitions shared? If the answer is no, that is an infrastructure shell without culture inside it.

Consultant vs in-house: Consultants can push for focus and bring patterns from many teams. In-house leaders win by staying close to the business and building habits that last. The best results happen when both mindsets meet.

One modeling habit that breaks things: Teams jump to complex models too soon. Brittany's fix is to model around decisions first. Keep names, metrics, and grain simple. Let complexity come only when the use case proves it.

Why this matters: Most teams do not need more tools to get value. They need faster decisions, shared language, and simple models that match the business. Small data, used well, beats big stacks used poorly!

I am publishing the full interview next. If you care about real outcomes with the stack you already have, you will like this one.

#data #ai #motherduck #smalldatasf #theravitshow

Most companies say they are “doing AI.” Very few are actually ready for it.

In my new episode of The Ravit Show, I sat down with Simon Miceli, Managing Director at Cisco, who leads Cloud and AI Infrastructure across Asia Pacific, Japan, and Greater China. He sits right where big AI ambitions meet the hard reality of networks, security, and technical debt.

This conversation builds on my earlier episode with Jeetu Patel, President and CPO of Cisco, and goes deeper into what it really takes to get AI working in production in APJC.

Here are a few themes we unpacked:

-- Only a small group is truly AI ready
- Cisco's latest AI Readiness Index shows that just a small percentage of organizations globally are able to turn AI into real business value. Cisco calls them “Pacesetters.”
- They are not just running pilots. They have use cases in production and are seeing returns.

-- We are entering the agentic phase of AI
- Simon talked about how we are moving from simple chatbots to AI agents that can take action.
- That shift changes everything for infrastructure.
- Instead of short bursts of activity, you now have systems that are always working in the background, automating processes and touching real operations.

-- AI infrastructure debt is the new technical debt
- Many organizations are carrying years of compromises in their networks, data centers, and security.
- Simon called this “AI infrastructure debt” and described how it quietly slows down innovation, increases costs, and makes it harder to scale AI safely.

-- Network as a foundation, not an afterthought
- One of his strongest points: leaders often think first about compute, but the companies that are ahead treat the network as the base layer for AI.
- When workloads double, your network can become the bottleneck before your GPUs do.
- The Pacesetters are already investing to make their networks “AI ready” and integrating AI deeply with both network and cloud.

Three things leaders must fix in the next 2–3 years
Simon shared a very clear checklist for CIOs and business leaders who are serious about agentic AI:
1. Solve for power before it becomes a constraint
2. Treat deployment as day one and keep optimizing models after they go live
3. Build security into the infrastructure from the start so it accelerates innovation instead of blocking it

This was a very honest, no-nonsense view of where APJC really stands on AI, and what the leading organizations are doing differently!

Thank you Simon for joining me and sharing how Cisco is thinking about AI infrastructure across the region.

#data #ai #cisco #CiscoLiveAPJC #Sponsored #CiscoPartner #TheRavitShow

These are some of the most exciting times to be in AI. And some leaders are not just watching the shift. They are building it.

Excited to share that I sat down with Jeetu Patel, President and Chief Product Officer at Cisco, for a conversation I have been wanting to do for a long time. Cisco is right in the middle of AI, networking, security, and data, and this interview felt like a front-row seat to how the next decade is being shaped.

In this episode of The Ravit Show, we spoke about:
- The key AI trends Jeetu is seeing right now and how he explains Cisco's AI vision
- What being at the intersection of networking, security, and data allows Cisco to do with AI that most pure AI companies cannot
- How AI adoption in Asia Pacific, Japan, Greater China, and India looks different from North America and Europe
- Why India is so important to Cisco, both as a market and as a serious talent hub
- The early career moments that still shape how he leads today
- The one piece of career advice he wishes someone had given him at 25, for everyone starting out in India and across APJC

For me, this was part AI roadmap, part masterclass in leadership at global scale. You can feel how seriously he takes this moment and the responsibility that comes with it.

If you care about AI, infrastructure, or building your career in this space, this is one you will want to watch.

#data #ai #cisco #CiscoLiveAPJC #Sponsored #CiscoPartner #TheRavitShow

Gartner has named K2view a Visionary in the 2025 Magic Quadrant for Data Integration Tools, and they have moved up inside the Visionary quadrant. This is a big signal for anyone who cares about data and AI in the enterprise.

I had to cover this news in person, and what better place than my friend Ronen Schwartz's home in Palo Alto, talking to him about what this actually means. We did not just speak about a report. We spoke about whether data integration still matters in an AI world and why K2view's approach is getting attention right now.

Here is how I see it.

- First, data integration is more relevant than ever. Your AI agents, copilots, and analytics are only as good as the data foundation behind them. K2view's bet has been simple to understand: give every business domain a clean, real-time, governed view of its data, and make it available to any use case, including AI.

- Second, the move up in the Visionary quadrant is about more than a label. It reflects how K2view is executing on this idea of “AI-ready data,” not just talking about it. They are helping customers move away from scattered pipelines to a consistent way of delivering trustworthy data products into AI, analytics, and operations.

- Third, when you compare their position with the large leaders, you see a different angle. The big platforms are broad. K2view is sharp and focused:
They model data around real business entities, not just tables
They support real-time views without forcing you into one storage pattern
They are designing with GenAI and agentic AI in mind from day one

Finally, the strategic outlook. Ronen is very clear that this is not about selling “yet another integration tool.” It is about being the data layer that lets enterprises move faster with AI while staying in control of privacy, compliance, and performance.

For leaders who are serious about AI and tired of slideware, K2view's move in the Magic Quadrant is one of those signals worth paying attention to.

#data #ai #gartner #gartnermagicquadrant #agenticai #agents #k2view #theravitshow

Some conversations shift how you think about the future of AI. This one did.

I just sat down with David Flynn, Founder and CEO of Hammerspace, to talk about something enterprises rarely discuss openly: the real engine behind AI is no longer compute. It is data.

We went deep into why NVIDIA's AI Data Platform has become the blueprint for modern AI architecture and why Hammerspace is emerging as the layer that actually makes this blueprint real for enterprises.

David broke down how the industry is moving from building AI around compute to building AI around data. He talked about what the AI Anywhere era looks like, and why the next generation of AI systems will need a global, unified view of data across cloud, edge, and physical environments.

We also talked about the partnership with NVIDIA, how it boosts the productivity of agentic AI, and why enterprises will need data that can move as fast as their models. David shared how Hammerspace is preparing for what comes next in 2026 and beyond, from scale to power efficiency to open standards.

This is one of those conversations that gives you clarity on where the industry is going and why data architecture is about to become the biggest competitive advantage.

#data #ai #nvidia #hammerspace #gpu #enterprise #agenticai #theravitshow

AI doesn't fail because of GPUs. It fails because of data.

I had a blast chatting with Jeff Echols, Vice President, AI and Strategic Partners at Hammerspace, from NVIDIA GTC in Washington. We talked about the part of AI nobody is fixing fast enough: getting data to GPUs at the speed the GPUs need it.

Jeff breaks down what makes the Hammerspace AI Data Platform different from traditional AI storage. This isn't “more storage.” It's orchestration. Move data globally. Feed it to the right workload. Keep GPUs busy instead of waiting.

We also got into MCP and why an intelligent data control layer is now core to any real AI strategy, plus how Hammerspace lines up with the NVIDIA AI Data Platform reference design so enterprises can actually run this in production, not just in a lab.

And we talked Tier 0. If you want GPU ROI, Tier 0 is about one thing: keep the GPUs fed at full speed.

If you're trying to scale AI past a pilot, watch this.

#data #ai #nvidiagtc #nvidia #hammerspace #gpu #theravitshow

AI in the public sector isn't a pilot anymore. It's running in the real world.

Check out my conversation from NVIDIA GTC in Washington with Robin Braun, VP AI Business Development, Hybrid Cloud at Hewlett Packard Enterprise, and Russell Forrest, Town Manager of the Town of Vail. This one is important because it's about AI for cities, not just AI for big tech. I had a blast interviewing both Robin and Russell.

We talked about how HPE is using AI to tackle real problems every city deals with: traffic, safety, and energy efficiency. Robin walked through how you build a smarter, more connected city by turning live data into decisions that actually help people on the ground.

Russell brought the city view from Vail. He explained what it takes to move from “we're testing AI” to “we're using this in operations.” We got into risk, cost, and how you deploy without adding complexity or slowing down public services.

We also discussed agentic AI. Not as a buzzword, but as something that can help a town react in real time while still keeping humans in control.

Better safety. Better visitor experience. Better use of resources. Same team. This is AI as public service infrastructure.

Full interview is now live on LinkedIn and YouTube.

#data #ai #nvidiagtc #nvidia #hammerspace #gpu #storage #theravitshow

Public sector AI is moving fast. The big question is how to build it the right way.

I had a blast chatting with Andrew Wheeler, Hewlett Packard Enterprise Fellow and Vice President, Hewlett Packard Labs and HPC & AI Advanced Development, at NVIDIA GTC in Washington, DC. We talked about:
* How HPE helps agencies build and scale sovereign AI ecosystems
* Why the public sector is a core focus for HPE in AI
* Practical steps for data sovereignty, compliance, and security without slowing innovation
* Where sovereign AI shows up first: government, defense, citizen services, large-scale research
* How HPC and supercomputing power national-scale AI
* What quantum could unlock for government programs and where HPE fits

If you care about trusted AI for cities, states, and national labs, this one is worth a watch.

Full interview now live!

#data #ai #hpe #nvidiagtc #gtc2025 #gpu #sovereign #nvidia #theravitshow

AI ROI is now the real test. I got a chance to chat with Joe Corvaia, SVP Sales at DDN, at NVIDIA GTC in Washington. This one is for CEOs and exec teams who are being pushed to “do AI” but still can't show a return.

We started with a simple question: why are some companies actually getting ROI from AI while others are still stuck in pilots? Joe was very direct on what separates the ones who are scaling from the ones who are still presenting slides.

We talked about infrastructure as a board-level strategy. Not just “buy more GPUs,” but “are you using the GPUs you already bought?” Joe walked through how data infrastructure and data flow have to be part of the conversation in the boardroom, not just in IT.

We got into AI factories and the new DDN Enterprise AI HyperPOD. Built with Supermicro and accelerated by NVIDIA, HyperPOD is designed to take teams from first deployment to serious scale. The idea is you should be able to stand up production AI without rebuilding the stack every time you grow the workload.

Joe also broke down why platforms like HyperPOD, and GPU-aware networking and acceleration like NVIDIA BlueField-4, are about more than performance. They are about efficiency. Max GPU utilization. No idle spend. Faster time to value. This matters not just for big tech, but for regulated industries and sovereign AI programs that need capability and control.

We closed on one topic that every CEO is thinking about right now: how do you future-proof AI investments? Joe shared the one principle leaders should follow so they are not buying hardware for headlines, but building an AI foundation that still makes financial sense five years out.

If you own AI strategy, budget, or delivery, watch this.

#data #ai #nvidiagtc #nvidia #hammerspace #gpu #storage #theravitshow

I have seen a lot of AI demos this year. Very few make hard, messy work feel simple.

Next week I am going live with Sarah McKenna, CEO of Sequentum, for an AI Magic Wand Launch Celebration on The Ravit Show.

What is happening:
We are going to walk through how Sequentum is using AI to change web data work. Not slides. Actual product.

Here is what we will get into during the show:

- AI Magic Wand (beta)
A new feature that turns high-level intent into working web data flows. Think less trial and error, more "tell it what you want and refine."

- Command Templates
How reusable templates help teams stop rebuilding the same patterns and start sharing what works across the company.

- New tools coming in the next few weeks
Unblocker, Paginations, and more. All focused on enhancing Sequentum's data collection capabilities.

- Latest in standards
The standards and good practices that matter if you want web data and AI that can stand up in an enterprise.

Why I am excited about this one:
Most teams I meet are still stuck between scripts, manual fixes, and brittle tools when it comes to web data. Sequentum is trying to give them a cleaner path with AI on top. This session is about showing that work in public and talking through the real trade-offs.

If you care about web data, automation, and using AI for real work, this will be a good one to watch.

Think Mumbai was electric. India's AI build-out just moved into a higher gear.

I sat down with Sandip Patel, Managing Director, IBM India & South Asia, at IBM's Mumbai office. We unpacked what Think Mumbai means for teams building with AI, hybrid cloud, and data at scale.

What stood out and why it matters:

IBM and Airtel partnership
• Aim: give regulated industries a safe and fast path to run AI at scale
• How: combine Airtel's cloud footprint and network with IBM's hybrid cloud and watsonx stack
• Why it helps: data stays controlled and compliant while workloads flex across on-prem, cloud, and edge
• Impact areas: banking, insurance, public sector, large enterprises with strict governance

First cricket activation on watsonx
• What: AI-driven insights powering a live cricket experience
• Why it matters: shows real-time analytics, content, and decisioning are ready for prime time
• Enterprise takeaway: the same pattern applies to contact centers, fraud, supply chains, and field ops where seconds count

AI value for Indian enterprises today
• Start with governed data and clear ownership
• Use hybrid patterns so models run where the work and data live
• Blend predictive models with generative workflows inside watsonx for measurable lift
• Track outcomes in productivity, risk reduction, customer experience, and time to value

Skills as the force multiplier
• Priority skills: data governance, MLOps, orchestration, security on hybrid cloud
• Team model: small core teams operating a shared platform, federated use cases across business units
• Result: faster move from pilots to production with repeatable guardrails

My take:
India is moving from talk to build. The blueprint is open, hybrid, and governed. Partnerships that keep control local while staying flexible will unlock scale. Sports gave us a sharp demo of real-time AI. The next wins will be in operations, customer journeys, and risk.

The interview is live now.
Link to the complete interview in the comments!#data #ai #agentic #ibm #ThinkMumbai #governance #cloud #watsonx #IBMPartner #theravitshow

Flink Forward Barcelona 2025 was not just about streaming. It was about what comes next for enterprise AI.

I sat down with Qingsheng Ren, Team Lead, Flink Connectors & Catalogs at Ververica, and Xintong Song, Staff Software Engineer at Alibaba Cloud, to talk about something that could change how enterprises build AI systems in production: Flink Agents.

Flink Agents is being introduced as an open source sub-project under Apache Flink. The goal is simple and ambitious at the same time: bring agentic AI into the same reliable, scalable, fault-tolerant world that already powers real-time data infrastructure.

We talked about why this matters.

First, why Flink Agents and why now?
They walked me through the motivation. Most AI agent frameworks today look exciting in a demo, but they break once you try to run them against live data, streaming events, strict SLAs, audit requirements, cost pressure, and real users. There's a big gap between prototypes and reliable operations. That's the gap Flink Agents is aiming to close.

Why open source?
Both Ververica and Alibaba made it clear that this is not meant to be a proprietary, closed feature. They want this to be a community effort under Apache Flink, not a vendor lock-in story. The belief is that enterprises will only bet on AI agents at scale if the runtime is open, portable, and battle tested.

How is building an AI agent different from building a normal Flink job?
This part was interesting. A standard Flink job processes streams. An agent has to do more. It has to reason, take actions, call tools, maintain context, react to feedback, and keep doing that continuously. You're not just transforming data. You're orchestrating behavior. Flink Agents is meant to give you those building blocks on top of Flink instead of forcing teams to stitch this together themselves.

What kind of companies is this for?
We got into enterprise workloads that actually need this.
Think about environments where fast decisions matter and you can't afford to go offline:
-- Fraud detection and response
-- Customer support and workflow automation
-- Operational monitoring, alert triage, and remediation
-- Real-time personalization and recommendations
-- Anywhere you need an autonomous loop, not just a dashboard

And finally, the roadmap.
We talked about the next 2 to 3 years. The focus is on deeper runtime primitives for agent behavior, cleaner developer experience, and patterns that large enterprises can trust and repeat.

My takeaway:
Flink Agents is not just "yet another agent framework." It's an attempt to operationalize agentic AI on top of a streaming backbone that already runs at massive scale in production.

This is the conversation every enterprise AI team needs to be having right now.

#FlinkForward #Ververica #Streaming #RealTime #DataEngineering #AI #TheRavitShow
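To make the "orchestrating behavior, not just transforming data" point concrete, here is a minimal sketch of that agent loop in plain Python: consume an event, reason about it, act, and carry context forward. This is purely illustrative; every name here is invented and none of it is the actual Flink Agents API.

```python
# Hypothetical sketch of an agent loop over a stream of events.
# Not the Flink Agents API; all names are invented for illustration.
from dataclasses import dataclass, field


@dataclass
class ToyAgent:
    """Consumes events one by one, keeps context, and decides an action per event."""
    context: list = field(default_factory=list)

    def decide(self, event: dict) -> str:
        # Stand-in for "reasoning": flag unusually large amounts for review,
        # approve everything else. A real agent might call a model or tools here.
        if event.get("amount", 0) > 1000:
            return "flag_for_review"
        return "approve"

    def on_event(self, event: dict) -> str:
        action = self.decide(event)
        # Maintain context so later decisions can react to earlier outcomes.
        self.context.append((event["id"], action))
        return action


# A tiny "stream": in production this loop would run continuously.
agent = ToyAgent()
events = [{"id": 1, "amount": 50}, {"id": 2, "amount": 5000}]
actions = [agent.on_event(e) for e in events]
```

The point of the sketch is the shape, not the logic: the loop never ends, the agent holds state across events, and the decision step is where tools and models plug in. Flink's contribution, as Qingsheng and Xintong described it, is making that loop fault-tolerant and scalable instead of leaving teams to build it themselves.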

Real time is getting simpler.

At Flink Forward, I sat down with Josep Prat, Director at Aiven. We discussed the new partnership between Aiven and Ververica, the original creators of Apache Flink®, and what it unlocks for data teams!!!!

What we covered:
• Why this partnership makes sense now and the outcomes it targets
• Fastest ROI use cases for joint customers
• How Aiven and Ververica split support, SLAs, and upgrades
• The first deployment patterns teams should try: POCs, phased rollouts, or full cutovers
• Support for AI projects that need fresh data with low latency
• What is coming next on the shared roadmap over the next two quarters

If you care about streaming in production and a cleaner path to value, this one is worth a watch.

Full interview now live!!!!

#data #ai #streaming #Flink #Aiven #Ververica #realtimestreaming #theravitshow

Flink Forward Barcelona 2025 was a big week for streaming and the streamhouse.

I sat down with Jark Wu, Staff Software Engineer at Alibaba Cloud, and Giannis Polyzos, Staff Streaming Architect at Ververica, to talk about Apache Fluss and what is coming next.

First, a quick primer. Fluss is built for real-time data at scale. It sits cleanly in the broader ecosystem, connects to the tools teams already use, and focuses on predictable performance and simple operations.

What stood out in our chat:

• Enterprise features that matter
Security, durability, and consistent throughput. Cleaner ops, stronger governance, and a smoother path from POC to production.

• Zero-state analytics
They walked me through how Fluss cuts network hops and lowers latency. Less shuffling. Faster results. More efficient pipelines.

• Fluss 0.8 highlights
Better developer experience, more stable primitives, and upgrades that help teams standardize on one streaming backbone.

• AI-ready direction
Vendors are shifting to AI. Fluss is adapting with functions that support agents, retrieval, and low-latency model workflows without bolting on complexity.

• Streamhouse alignment
The new capabilities strengthen Fluss in a streamhouse architecture. One place to handle fast ingest, storage, and analytics so teams do not stitch together five systems.

We also covered the roadmap. Expect continued work on latency, cost control, and easier day-two operations, plus patterns that large teams can repeat with confidence.

Want to get involved? Join the community, review the open issues, try the latest builds, and share feedback from real workloads. That is how this moves forward.

The full conversation with Jark and Giannis is live now on The Ravit Show.

#data #ai #FlinkForward #Flink #Streaming #Ververica #TheRavitShow