Fed up with tech hype? Looking for a tech podcast where you can learn from tech leaders and startup stories about how technology is transforming businesses and reshaping industries? In this daily tech podcast, Neil interviews tech leaders, CEOs, entrepreneurs, futurists, technologists, thought lead…
The Tech Blog Writer Podcast is a must-listen for anyone interested in the intersection of technology and various industries. Hosted by Neil Hughes, this podcast features interviews with a wide range of guests, including visionary entrepreneurs and industry experts. Neil has a remarkable talent for breaking down complex topics into easily understandable discussions, making it accessible to listeners from all backgrounds. One of the best aspects of this podcast is the diversity of guests, as they come from different industries and share their cutting-edge technology solutions. It provides a great source of inspiration and knowledge for staying up to date with the latest advancements in tech.
The worst aspect of The Tech Blog Writer Podcast is that sometimes the discussions can feel a bit rushed due to the time constraints of each episode. With so many interesting guests and topics to cover, it would be great if there was more time for in-depth conversations. Additionally, while Neil does an excellent job at selecting diverse guests, occasionally it would be beneficial to have more representation from underrepresented communities in tech.
In conclusion, The Tech Blog Writer Podcast is an excellent resource for those looking to stay informed about the latest tech advancements while learning from visionary entrepreneurs across various industries. Neil's ability to break down complex topics and his engaging interviewing style make this podcast a valuable source of inspiration and knowledge. Despite some minor flaws, it remains a must-listen for anyone interested in staying up to date with cutting-edge technology solutions and developments.

How do AI agents safely access the data they need without exposing the business to risk? In this episode, I speak with Alex Gallego, CEO and founder of Redpanda, about why streaming data is becoming such an important foundation for enterprise AI. Redpanda began as a high-performance streaming data platform, but the company is now building what it calls the Agentic Data Plane, a governed access layer designed to connect AI agents with enterprise data and systems. Alex shares the story behind Redpanda's journey, from solving a personal engineering frustration to powering mission-critical systems for some of the world's largest organizations. We discuss why many enterprises are racing toward agentic AI while still lacking the permissions, controls, context, and observability needed to make agents safe in production. One of the standout moments in our conversation is Alex's comparison of deploying AI agents to hiring employees and then forgetting to onboard them. Businesses are deploying accounting agents, coding agents, customer success agents, and security agents, yet many still lack a reliable way to decide what those agents can access, what actions they can take, and how to prove what happened when something goes wrong. We also talk about explainability, agent transcripts, and why enterprises need a full record of agent behavior across complex chains of activity. Alex explains how this matters in regulated sectors such as banking, where organizations may need to prove that an AI agent is acting helpfully and responsibly, and in manufacturing, where a faulty agent action could affect months of production. Alex also shares Redpanda's work with NVIDIA Vera, where benchmarks showed 5.5 times lower latency and 73% higher throughput. For business leaders, that means faster systems, lower costs, better customer experiences, and the ability to monitor agent behavior in real time. This conversation is a practical look at what enterprise AI needs next.
Speed matters, but governance, trust, and control may decide which companies can move AI agents from experiments into real operations. So, are we ready to give AI agents access to the enterprise, or do we first need to learn how to manage them like part of the workforce? Please check the partners of the Tech Talks Network Learn more about the NordLayer Browser Visit Denodo.com

What happens when AI moves beyond experimentation and becomes part of the creative process itself? At Google Cloud Next in Las Vegas, I sat down with Albert Lai to explore how AI is transforming the media and entertainment industry from content creation and production to personalization, localization, and audience engagement. Albert works across Google Cloud, Google, and the wider Alphabet ecosystem to help media organizations rethink how they create and distribute content using cloud infrastructure, multimodal AI, and agentic workflows. And one thing became very clear in this conversation, the industry has moved beyond asking "What if?" and is now firmly focused on production-scale execution and measurable business outcomes. We discuss why media companies are fighting a growing battle for audience attention, and how AI is helping them create content more efficiently while also unlocking the value hidden inside vast archives of existing material. Albert explains why the conversation has shifted from simply producing more content to maximizing what already exists, and how AI is helping organizations rediscover and reimagine decades of footage, audio, and intellectual property. The conversation also explores one of the biggest themes emerging at both Google Cloud Next and NAB Show, the rise of agentic AI workflows. Albert shares how media companies are using orchestrated AI systems to streamline complex production processes, support editors and creators inside existing workflows, and improve everything from localization and dubbing to monetization and personalization. We also dive into real-world examples, including how companies like Avid Technology are integrating Google AI directly into production environments, and how Indonesian media company MTech used Google Cloud AI tools to create and distribute a 26-episode animated series with measurable improvements in production speed, cost efficiency, and audience engagement. 
This is not a conversation about replacing creativity. It is about augmenting it. If you work in media, content, streaming, sports, publishing, or simply want to understand how AI is changing storytelling itself, this episode is packed with practical insight and real-world examples. How will AI change the stories we create, and the way audiences experience them? Useful Links Connect with Albert Lai Google Cloud Next 26 Please check the partners of the Tech Talks Network Learn more about the NordLayer Browser Visit Denodo.com

What does real ROI from AI and analytics actually look like in the fast-food industry? At SAS Innovate, I sat down with David Gardner, Senior Director of Analytics at Boddie-Noell Enterprises, the largest franchise operator of Hardee's in the United States, to explore how a 60-year-old family business is transforming itself through data, forecasting, and AI. This is a company processing around 40 million transactional records every single day across more than 300 restaurants, where even shaving a few seconds off a drive-thru experience can have a measurable impact on customer satisfaction and revenue. What makes this conversation so interesting is how grounded it is in operational reality. David shares how the company moved from relying on spreadsheets, summarized reports, and gut instinct toward real-time analytics powered by SAS. One of the standout stories involves extending breakfast hours. Operational teams initially resisted the idea, convinced it would create chaos in the kitchen. But once David dug into the transactional data, the numbers told a very different story. Breakfast sales during the extended hours were growing dramatically, proving the demand was real and helping the business make a decision based on evidence rather than instinct. We also discuss how analytics is helping optimize labor scheduling, forecasting, payroll, inventory planning, and customer throughput at scale. David explains how his team can now analyze profitability hour by hour for every restaurant in the business, helping local managers make faster and more informed decisions. With forecasting accuracy improving to within fractions of a percentage point, the business can plan more effectively in an industry facing inflation, labor pressures, delivery app disruption, and shifting customer habits. Another major theme is accessibility. David talks about the importance of data democratization and making analytics understandable for non-technical teams. 
Restaurant managers are not data scientists, and they should not need to be. The goal is to put insights directly into their hands in a way that is simple, actionable, and easy to understand. AI is now becoming part of that journey too, acting as what David describes as a mentor for newer managers, helping them identify opportunities, improve operations, and get up to speed faster. We also explore how customer behavior has changed dramatically with the rise of delivery platforms like DoorDash and Uber Eats, creating entirely different purchasing patterns compared to traditional in-store diners. Through analytics, the company can better understand those differences and optimize everything from promotions to staffing and menu strategy. What stood out most to me is that this is not a story about flashy AI demos or abstract transformation projects. It is about using analytics to solve practical business problems in real time while quietly improving the customer experience behind the scenes. Because at the end of the day, customers do not care about dashboards or machine learning models. They care about getting good food quickly, accurately, and consistently. The technology only matters if it helps deliver that outcome. So as businesses continue chasing AI opportunities, are they focusing on the use cases that actually move the needle, or getting distracted by the hype? Useful Links Connect with David Gardner Learn More About Boddie-Noell Ent. Catch Up With What You Missed at Google Cloud Next Please check the partners of the Tech Talks Network Denodo Learn more about the NordLayer Browser

What happens when the excitement around AI collides with the reality of deploying it inside a business? At SAS Innovate, that question came up repeatedly, and in this episode, I sit down with Manisha Khanna, global product marketing lead for AI at SAS, to unpack why so many organizations are still struggling to move from AI pilots to meaningful business outcomes. While headlines continue to celebrate the rapid rise of generative AI and agentic systems, Manisha brings a far more practical perspective shaped by working directly with enterprises trying to operationalize AI at scale. One of the most striking parts of our conversation centers on why AI projects continue to stall. According to Manisha, the biggest problems are not weak models or lack of ambition. Instead, organizations are running into unpredictable inference costs, operational complexity, governance challenges, and internal resistance to change. She explains why many companies still approach AI as a technology purchase rather than a transformation strategy, and why governance built in from the beginning can actually accelerate adoption rather than slow it down. We also spend time exploring what agentic AI really means beyond the hype. Manisha shares why SAS chose supply chain as the launch point for its first industry-packaged agent and how agentic systems differ from copilots by acting more like coworkers than assistants. Rather than simply providing recommendations, these systems can actively participate in business workflows, helping organizations move from monthly optimization cycles to near real-time decision-making. Another major theme is the growing importance of governance and accountability. As organizations deploy AI into regulated industries and customer-facing environments, the focus is shifting away from "whose model is best" toward "who is deploying the best use cases responsibly." 
Manisha explains why governing the use case itself matters more than obsessing over model benchmarks, and why companies that bolt governance on afterward create friction for themselves later. The conversation also touches on where AI is already delivering measurable value today. From customer complaint management in banking to aircraft maintenance support systems powered by retrieval-augmented generation, we discuss how organizations are seeing success when AI augments existing workflows rather than attempting wholesale disruption overnight. What stood out most for me is how often the human side of AI came back into focus. Manisha repeatedly emphasized that leadership communication, employee trust, and organizational readiness are just as important as the technology itself. If leaders position AI purely as a cost-cutting tool, fear and resistance follow. But when AI is framed as a way to empower people and improve outcomes, adoption becomes much easier. As organizations continue to implement AI and agentic systems, the biggest question is no longer whether the technology works, but whether businesses are ready to lay the foundations needed to make it succeed. Useful Links Connect with Manisha Khanna SAS Blog SAS Innovate Please check the partners of the Tech Talks Network Denodo Learn more about the NordLayer Browser

What happens when cybercrime becomes as easy to access as a subscription service, and what does that mean for every business connected to the internet today? In this episode, I sit down with SentinelOne AI and Cloud Security Evangelist Chris Hosking to unpack a shift that feels both inevitable and deeply unsettling. The rise of what Chris describes as an AI threat market is changing the rules of engagement. Cybercrime is no longer limited to highly skilled operators working in isolation. Instead, it has evolved into a thriving ecosystem where tools, services, and even AI-powered attack kits are bought and sold with alarming ease. As Chris explains during our conversation, "cyber crime is quite an ecosystem… the dark web has always been a place for cyber criminals to meet and to sell their wares." We explore how AI has accelerated this shift, lowering the barrier to entry to the point where attacks can be launched for as little as £35. That democratization of cybercrime is already having real-world consequences. Chris shares how individuals without deep technical expertise are now able to orchestrate sophisticated attacks using AI assistance, and why that surge in accessibility is driving both the volume and impact of cyber incidents. It also reframes a common misconception. Smaller businesses are not flying under the radar. In fact, many are being targeted precisely because of weaker defenses, with attacks increasingly automated and opportunistic. The conversation also moves into more complex territory, where organized cybercrime and nation-state activity begin to overlap. Chris highlights how governments and criminal groups are drawing from the same AI marketplaces, blurring the lines between financial motivation and geopolitical intent. The implications stretch far beyond corporate risk, touching on critical infrastructure and everyday services that people rely on. 
It raises a difficult question about preparedness in a world where attacks are faster, more frequent, and harder to predict. At the same time, there is a practical thread running through this discussion. Chris challenges the instinct to immediately invest in more tools and instead encourages leaders to look inward first. From improving basic security hygiene to using AI to reduce manual workload and noise, there are tangible steps organizations can take right now. The goal is not perfection, but resilience in an environment where, as Chris points out, incidents are becoming a regular occurrence rather than a rare event. This episode offers a clear-eyed look at where cybersecurity is heading, without the hype or fear-driven narratives. It is a conversation about scale, speed, and the uncomfortable reality that the threat landscape has changed in ways many organizations are still catching up with. So as AI continues to reshape both innovation and risk, how prepared is your organization for a world where anyone can launch an attack with a few prompts and a subscription? Useful Links SentinelOne Blog Connect with Chris Hosking Please check the partners of the Tech Talks Network Denodo Learn more about the NordLayer Browser

What happens when the race to deploy AI starts to outpace the ability to control it? In this episode of Tech Talks Daily, I sit down with Ken Englund from EY to unpack findings from the latest 2026 Technology Pulse Poll, and the conversation quickly moves beyond theory into something many leaders will recognize from their own organizations. There is a growing tension between speed and oversight, a "velocity paradox," as Ken describes it, in which businesses are accelerating AI adoption while governance struggles to keep up. The numbers behind that story are hard to ignore. A large majority of tech leaders are prioritizing speed to market over careful vetting, while more than half of AI initiatives are happening outside formal IT oversight. For anyone responsible for security, compliance, or risk, that gap raises immediate concerns. But as Ken explains, it is not as simple as labeling this as reckless behavior. Much of this activity is driven by real innovation happening closer to the business, where teams are experimenting, solving problems, and creating value quickly. We spend time breaking down what that looks like in practice. From the rise of shadow AI tools to the growing risk of sensitive data exposure, there is already evidence that the consequences are beginning to show. At the same time, nearly every executive surveyed sees autonomous AI as central to future competitiveness, which means slowing down is not really an option either. One of the most useful parts of the conversation focuses on what organizations can actually do about it. Ken shares practical insight into why architecture matters more than ambition, how companies should think about optionality in a fast-moving AI ecosystem, and why observability is becoming a missing layer in many deployments. We also get into the reality of measuring AI value, where the conversation is shifting from promised returns to the often-overlooked cost side, including token usage and uncontrolled spending across departments.
There is also a broader discussion around leadership and culture. Governance frameworks may exist on paper, but the real challenge lies in operationalizing them across a business that is already moving at speed. Add in geopolitical pressures, evolving regulations, and the complexity of deploying AI globally, and it becomes clear why many organizations feel overwhelmed. This episode is not about slowing innovation down. It is about understanding where things are breaking, what leaders are getting wrong, and how to build a path forward that balances progress with accountability. So, as AI budgets continue to rise and autonomous systems become part of everyday operations, how will your organization close the gap between ambition and control, and are you already further along that path than you realize? Useful Links Ernst & Young Technology Pulse Poll Connect with Ken Englund on LinkedIn Follow on LinkedIn Please check the partners of the Tech Talks Network Learn more about the NordLayer Browser Visit Denodo.com

What happens when your financial advisor is no longer limited by time, availability, or even geography, but is always there when you need them, ready to listen, respond, and guide you in real time? At Citi's announcement at Google Cloud Next 2026, I sat down with Joe Bonanno, Head of Wealth Intelligence, and Karolina Belwal, Global Head of Data Intelligence and Automation for Citi Wealth, to unpack what could become a defining shift in how wealth management is delivered. The launch of Citi Sky, built in partnership with Google Cloud and powered by Google DeepMind, is not another digital feature layered onto an existing app. It signals a move toward an always-on, conversational, and highly personalized experience that blends human expertise with AI-driven intelligence. What stood out in our conversation was how grounded this initiative is in real-world client behavior. Joe explained how traditional engagement models, whether phone calls, emails, or app notifications, often feel disconnected from what clients actually need in the moment. Life events, changing market conditions, and personal priorities rarely align with scheduled interactions. Citi Sky attempts to close that gap by being present at the exact moment a client has a question, whether that is late at night, between meetings, or during a moment of financial uncertainty. Karolina brought that point to life with a simple but relatable example. As a working parent, she highlighted how difficult it can be to connect with an advisor during the day. Citi Sky allows clients to engage on their own terms, asking questions when it suits them, in a way that feels natural and responsive. That shift from scheduled interaction to on-demand conversation could change how people think about financial guidance altogether. Under the hood, the technology is just as ambitious. 
Built on Gemini models through Google's enterprise agent platform, Citi Sky combines real-time voice, video, and multilingual capabilities into a single experience. But what makes it interesting is how it moves beyond reacting to questions. The system can anticipate needs, surface insights, and even guide advisors by identifying which clients may require attention during market events. In Joe's words, it becomes a teammate, one that can scale expertise across hundreds of clients while maintaining a sense of personalization. There is also a broader implication here for the industry. Wealth management has long relied on relationships built over time, supported by human intuition and experience. Citi is not replacing that model, but it is extending it. Advisors are still central, yet their reach is amplified by AI that handles routine interactions, summarizes conversations, and provides context before the next client meeting even begins. Of course, this raises familiar questions around trust, governance, and the role of AI in financial decision-making. Citi is clearly aware of that tension, emphasizing secure data foundations, regulatory compliance, and the importance of embedding its Chief Investment Office's institutional knowledge directly into the system. This is not positioned as a generic AI assistant, but as a reflection of Citi's own expertise, delivered through a new interface. What I found most compelling, though, was how both Joe and Karolina kept returning to the human side of the story. Yes, this is about agentic AI and advanced models. Still, it is also about reducing friction, improving access, and helping people answer a simple but powerful question: Am I financially okay? As Citi Sky rolls out to Citigold clients in the U.S., it will be fascinating to see how customers respond and how competitors react. If this model gains traction, it could reshape expectations far beyond wealth management and into every corner of financial services. 
As we move into the next phase of AI-driven client engagement, are we ready to trust a system that listens, understands, and acts on our financial lives in real time, and how much of that responsibility are we willing to share? Useful Links Learn More About Citi Sky, the AI-Powered Member of the Citi Wealth Team. Connect with Joseph V. Bonanno Jr. Connect with Karolina Belwal Please check the partners of the Tech Talks Network Learn more about the NordLayer Browser Visit Denodo.com

Are businesses really making progress with AI, or are many still stuck using it for the digital equivalent of making phone calls on a smartphone? In this episode, I sit down with Alison Kay, VP / Managing Director AWS UKI, to unpack what is actually happening behind the headlines of AI adoption across the UK. On paper, the numbers look strong. Around 64% of UK businesses are now using AI, a sharp rise from the previous year. But when you look closer, the story shifts. Only one in four organizations have moved into more advanced use cases, where real productivity gains, efficiency improvements, and innovation start to show up in meaningful ways. So what is holding everyone back? In our conversation, Alison shares insights from AWS research and her work with organizations ranging from major enterprises like Barclays and the BBC to fast-moving startups. We explore why skill shortages are slowing progress, why many companies struggle to move beyond basic use cases, and how governance and trust are becoming central to scaling AI responsibly. We also spend time breaking down the rise of agentic AI, a term that is starting to appear everywhere. Instead of simply generating answers, these systems are beginning to take action, writing code, testing software, and working alongside humans to dramatically accelerate delivery timelines. Alison shares a powerful example where a project that might have taken 40 engineers over two years was completed by six engineers in just 76 days with the support of AI agents. Along the way, we look at real-world examples from companies like Trainline and Evri, showing how AI is already reshaping customer experience and operational efficiency in ways that go far beyond theory. This episode is a must-listen for business leaders trying to understand where AI is delivering real value today, where the biggest gaps still exist, and how to move from experimentation to meaningful transformation. 
So if your organization is already using AI, the real question becomes this, are you using it to improve what you already do, or are you ready to rethink how your business operates entirely? Useful Links Connect with Alison Kay "Unlocking the UK's AI Potential" report. Please check the partners of the Tech Talks Network Learn more about the NordLayer Browser Visit Denodo.com

What if one of the most influential figures in modern technology had almost ignored the opportunity that would define his career? In this episode, I sit down with Werner Vogels, Chief Technology Officer at Amazon, to explore the story behind Amazon Web Services as it marks its 20th anniversary, and how a near-dismissed phone call turned into a front-row seat to one of the biggest shifts in computing history. Werner takes me back to the early days when Amazon was still seen as "just a bookstore," and shares what he discovered when he first stepped inside what he calls Amazon's "technology kitchen." What he found was a company solving problems at a scale that commercial software simply could not handle, forcing them to build everything themselves. That mindset would go on to shape everything from Dynamo to the foundations of modern cloud infrastructure. We also unpack the thinking behind one of the most important shifts in enterprise technology, the move from upfront licensing to pay-as-you-go. It sounds obvious now, but at the time it challenged how the entire industry operated, giving businesses the ability to experiment, scale, and take control of their own costs in ways that were not possible before. Looking ahead, Werner offers a refreshing perspective on AI and what he describes as a developer renaissance. While many headlines focus on replacement, he sees AI as a tool that amplifies human capability, placing even greater importance on curiosity, ownership, and collaboration. It is a reminder that while tools will continue to evolve, responsibility and decision-making still sit firmly with the people using them. This episode is a must-listen for anyone building, leading, or investing in technology. It connects the dots between past, present, and what comes next, showing how today's AI wave echoes the same patterns that shaped the cloud revolution. 
So as we look toward the next era of computing, the question is simple, are we ready to think at the scale required to build what comes next? Useful Links Connect with Werner Vogels Please check the partners of the Tech Talks Network Learn more about the NordLayer Browser Visit Denodo.com

Can faster access to real-world data actually change patient outcomes, or are we still too reliant on controlled clinical trials to see the full picture? In this episode, I sit down with Dr. Alex Asiimwe, Executive Director of Epidemiology at Gilead Sciences, to explore a topic that doesn't get enough attention in the AI conversation, real-world evidence. While much of the industry focuses on AI in drug discovery or diagnostics, Alex brings a different perspective, one rooted in what happens after treatments reach real patients in the real world. As he explains, clinical trials may be the gold standard, but they are still controlled environments. Real-world evidence is where we begin to understand how treatments perform across diverse populations, healthcare systems, and everyday conditions. What stood out in our conversation is just how messy and fragmented that real-world data can be. Much of it is not collected for research purposes, which means it takes months, sometimes up to a year, to clean, structure, and analyze before it can inform decisions. Alex shares how AI is beginning to change that, not by replacing human expertise, but by automating the most time-consuming parts of the process. If that timeline can be cut in half, the impact is immediate. Faster evidence means faster decisions, and in healthcare, delays in evidence can directly affect patient outcomes. We also explore what Alex describes as the "analytics gap," the disconnect between where data exists and where insights are actually generated. Today, much of the evidence used in drug development still comes from limited datasets, often from a single country or region. Yet the treatments themselves are global. That mismatch creates blind spots, particularly in low and middle-income countries where data is often unstructured, fragmented, or simply not accessible. 
AI has the potential to standardize and unlock that data, helping to create a more complete and representative view of patient populations worldwide. Of course, the challenges are not just technical. Trust, governance, and politics all play a role in whether data can be shared and used effectively. Alex is clear that the biggest barrier is not the science or the analytics, it is building trust between organizations, governments, and communities. Without that, even the most advanced AI models cannot deliver meaningful outcomes. This conversation also touches on the importance of collaboration, not just between healthcare organizations and technology providers like SAS, but across the global ecosystem. Alex highlights how partnerships, open standards, and shared frameworks can help close the analytics gap and accelerate progress in areas like HIV prevention, where understanding real-world patient behavior is critical. As we wrap up, one message comes through clearly. AI is not a miracle solution, and it will not transform healthcare overnight. But when applied to the right parts of the workflow, especially around data preparation and evidence generation, it can create measurable, meaningful change. So as healthcare leaders look to move beyond pilots and into real impact, the question becomes, are we focusing on the right problems, and are we ready to open up the data needed to solve them? Useful Links Connect with Dr. Alex Asiimwe OHDSI – Observational Health Data Sciences and Informatics Please check the partners of the Tech Talks Network Learn more about the NordLayer Browser

In this episode of Tech Talks Daily, I welcome back Dennis Woodside, CEO of Freshworks, to unpack the growing conversation around the so-called SaaS-pocalypse and what it really means for the future of software businesses. There is no shortage of dramatic headlines suggesting SaaS is under threat, but Dennis offers a far more practical perspective. He explains that this is less about the collapse of software and more about a major reset in how software is judged, bought, and valued. As AI changes customer expectations, businesses are no longer willing to pay for incremental features or vague AI claims. They want clear outcomes, measurable ROI, and platforms that can prove they belong inside an AI-augmented tech stack. We discuss how the traditional seat-based pricing model is shifting toward consumption, outcomes, and usage-based models. Dennis shares why software companies without a strong AI strategy risk being squeezed out. At the same time, those with mission-critical systems of record and deep workflow intelligence are better positioned to thrive. He explains why deterministic software still matters in a world obsessed with generative AI and why the future belongs to platforms that combine trusted operational data with secure, embedded AI experiences. Dennis also shares how customers are changing the way they evaluate software, with many now using tools like ChatGPT and Google Gemini to compare vendors, analyze RFPs, and arrive at buying decisions far earlier in the sales process. This shift is forcing software vendors to rethink marketing, product design, and customer engagement from the ground up. We also explore the balance between governance and experimentation, why AI adoption must happen from both the top down and bottom up, and why speed, not just cost reduction, is becoming the real business driver. 
Dennis shares examples of how organizations are redesigning workflows, accelerating engineering output, and freeing up high-value talent from repetitive work. As he puts it, most companies are no longer asking if they need AI; they are asking how fast they can make it part of everything they do. If you have been wondering whether the SaaS model is broken or simply evolving into something smarter, this conversation offers a sharp and realistic look at what comes next. How is your business thinking about durability in an AI-first world, and are you building to last or simply building to grow?
Useful Links
Connect with Dennis Woodside on LinkedIn
Learn more about Freshworks
Refresh 2026 Event
Follow on LinkedIn
Visit the Sponsors of Tech Talks Network and learn more about the NordLayer Browser.

How much of your working day is actually spent doing meaningful work, and how much is lost chasing emails, searching for documents, sitting in meetings, and trying to remember where that one important conversation happened? At Google Cloud Next in Las Vegas, I sat down with Yulie Kwon Kim, Vice President of Product for Google Workspace at Google, to talk about how AI is changing the way billions of people work every day. Yulie leads the products many of us rely on constantly: Gmail, Google Calendar, Drive, Docs, Sheets, Slides, and newer tools like Google Vids. At this year's event, she introduced Workspace Intelligence, a major step forward in how AI works inside those everyday tools. Instead of acting like a disconnected assistant, Workspace Intelligence understands your context across emails, meetings, files, and organizational knowledge to help create documents, prioritize inboxes, take meeting notes, and automate the repetitive work that quietly drains productivity. We explore what Workspace Intelligence actually is, how it differs from third-party AI tools, and why context matters just as much as model capability. Yulie explains why being a truly AI-first enterprise requires more than powerful models; it needs grounded context, governance, and security that people can trust. We also discuss one of the biggest concerns for business leaders: how to adopt AI without creating new risks around data security and access control. Yulie shares how Google approaches governance inside Workspace and why existing permissions and protections remain central to how AI operates. This conversation also touches on something bigger: the shift from individual productivity to shared organizational intelligence, where knowledge moves from living inside one person's head to becoming something the entire company can benefit from. If AI could remove one frustrating task from your workday tomorrow, what would you choose first?
Useful Links
Connect with Yulie Kwon Kim, Vice President of Product for Google Workspace at Google
Google Cloud Next 26
Visit the Sponsors of Tech Talks Network and learn more about the NordLayer Browser.

What happens when one of the world's most heavily regulated industries starts moving at AI speed? At Google Cloud Next in Las Vegas, I sat down with Sid Nadella, Director of Financial Services and Market Leader at Google Cloud, to talk about how AI is reshaping banking, wealth management, and capital markets from the inside out. With more than 20 years of financial services experience, including a long career at Goldman Sachs, Sid brings a rare perspective on how traditional institutions are balancing innovation with regulation, trust, and zero tolerance for error. We explore why the industry is moving beyond simple AI pilots and into what he calls the "doing era," where agentic AI is helping firms move from static dashboards and fragmented workflows toward intelligent systems that can reason, anticipate, and act in real time. Sid shares where he sees the biggest business impact today, from fraud detection and risk management to operational efficiency and unlocking new growth. We also discuss real-world examples from firms like Citi Wealth, Citadel, Scotiabank, and Starling Bank, and why the real opportunity lies in building the right foundations first: governance, compliance, observability, and strong data access across increasingly complex environments. We also tackle one of the biggest concerns around AI adoption: the fear that it replaces people. Sid explains why the real story is augmentation, helping teams remove repetitive work and focus on better decisions, stronger customer relationships, and higher-value outcomes. If you work in financial services, enterprise technology, or simply want to understand what agentic AI looks like beyond the headlines, this is a conversation packed with practical insight. How close is your organization to becoming truly agentic?
Useful Links
Connect with Sid Nadella, Director of Financial Services and Market Leader at Google Cloud
Google Cloud Next 26
Visit the Sponsors of Tech Talks Network and learn more about the NordLayer Browser.

What happens when AI starts moving faster than the people meant to control it? In this episode, I'm joined by Bernard Montel, Field CTO EMEA at Tenable, for a timely conversation about the AI risks many organizations may be underestimating. Bernard believes we are heading toward a defining AI accident and that the first major incident may come through speed, scale, and unintended consequences rather than a malicious attack. We talk about why so many companies feel pressure to adopt AI at pace, while visibility, governance, and control struggle to keep up. Bernard describes this moment as "driving faster than we can steer," and explains why shadow AI, overprivileged identities, cloud misconfigurations, and exposed AI projects are already creating real business risk. The conversation also looks at agentic AI and why giving systems the ability to take action changes the security equation. A chatbot giving a wrong answer is one problem. An AI agent making flawed decisions, leaking data, or interacting with industrial systems is something very different. Bernard also shares why AI can become a distraction from the security basics that still matter, including cloud security, identity, exposure management, and vulnerability remediation. Attackers may be using AI to move faster, but many of the weaknesses they exploit remain painfully familiar. We also discuss Tenable's new agentic AI framework, announced during RSA, and how the company is using AI to help security teams respond at machine speed while reducing exposure across IT, cloud, OT, identity, and AI environments. For business and security leaders, this episode offers a clear warning and a practical takeaway. AI adoption is no longer a future conversation, but control, governance, and exposure management need to move with it. How prepared is your organization for an AI incident caused by accident rather than attack? Share your thoughts. 
Useful Links
Connect with Bernard Montel, Field CTO EMEA at Tenable
Learn More About Tenable
Visit the Sponsors of Tech Talks Network and learn more about the NordLayer Browser.

What if the smartest climate technology strategy isn't about inventing something new, but rethinking the buildings we already spend 90% of our lives in? In this episode of Tech Talks Daily, I sit down with Ben Stapleton, Executive Director of the US Green Building Council California (USGBC California), to talk about why buildings sit at the center of sustainability, resilience, and community well-being. From energy use and air quality to wildfire resilience and climate justice, Ben makes a compelling case that the built environment may be one of the most practical places to create real change. Ben and his team launched the California Building Performance Hub, a platform designed to help building owners, operators, and policymakers understand how to improve building performance through policy guidance, technical resources, rebates, and even an AI-powered assistant trained on building codes and compliance pathways. We discuss how this platform is helping accelerate California's move toward healthier, lower-energy, high-performance buildings and why AI is becoming a useful sidekick rather than a replacement for human expertise. Our conversation also moves beyond technology and into something far more human: community. Ben shares how sustainability only works when people feel they have both awareness and agency. From helping low-income communities understand electrification and indoor air quality, to taking a "BuildSMART Trailer" filled with real building materials into neighborhoods so people can touch and understand the future of their homes, this episode is a reminder that climate progress starts with education and trust. We also talk about wildfire resilience in California, where simple low-cost building decisions can dramatically reduce fire risk while also improving energy efficiency and health outcomes.
Ben explains why many of the solutions already exist, and why the challenge is often less about invention and more about implementation, policy, and long-term thinking. For business leaders, public sector teams, and anyone thinking about the future of cities, this episode offers a fresh perspective on sustainability as both a financial and human opportunity. Healthier buildings create healthier people, and healthier people create stronger businesses. Is the future of climate action already built around us, and are we finally ready to look up and see it? I'd love to hear your thoughts.
Useful Links
Connect with Ben on LinkedIn
CA Building Hub
USGBC California
Follow USGBC California on LinkedIn
Visit the Sponsors of Tech Talks Network and learn more about the NordLayer Browser.

How is AI really changing professional services work today, beyond the demos, predictions, and LinkedIn hype? In today's episode, I'm joined by DJ Paoni, CEO of Certinia, and Jay Laabs, CEO of Spaulding Ridge, to discuss how AI is already being used inside services organizations to improve project delivery, resource planning, workforce optimization, and client outcomes. DJ shares what a hybrid workforce of people and AI agents looks like in practice. Rather than thinking of AI as a search bar, he explains why services firms should think of agents as specialized colleagues that can handle repeatable tasks, draft project blueprints, support configuration work, and help teams deliver faster without losing the human judgment clients still rely on. Jay brings the adoption reality from the consulting front line. He explains why the biggest barrier is rarely the technology itself, but the processes, incentives, data models, and cultural habits wrapped around it. The most successful firms are moving away from broad experimentation and focusing on specific business problems where AI can deliver clear ROI. We also discuss the risks of rushing in without a plan. From disconnected AI agents creating a "spaghetti web" across the enterprise to teams automating broken workflows, DJ and Jay share practical warnings for leaders who want AI to create value without adding another layer of complexity. This episode offers a clear look at what is working, what is failing, and what needs to change as professional services firms rethink billable hours, project economics, and the role of human expertise in an AI-enabled workplace. Are services firms ready to measure success by outcomes rather than hours, and what will that mean for the future of consulting?

How much revenue is lost because the systems behind pricing, quoting, billing, and finance still do not talk to each other properly? In today's episode, I'm joined by Tina Kung, CTO and Co-Founder of Nue, the quote-to-revenue platform helping AI and SaaS companies rethink how they sell, bill, and grow. Tina brings more than two decades of experience across enterprise software, CPQ, billing, and revenue operations, with previous roles at Oracle, Zuora, SteelBrick, and Salesforce. Tina shares the story behind Nue and why she saw a growing gap between the systems that handle selling and the systems that manage revenue. As SaaS companies move from traditional subscriptions into usage-based pricing, credit burn-down models, product-led growth, partner channels, and enterprise sales, the old way of stitching together tools with manual work and spreadsheets starts to break down. We discuss how AI is changing go-to-market operations and why transaction-level intelligence matters. Tina explains how Nue connects quoting, billing, usage, and revenue data into a single system, then applies AI so teams can understand what is happening, spot opportunities, and take action faster. One of the standout stories is OpenAI, which rolled out Nue in just eight weeks to support the rapid growth of its ChatGPT Enterprise business. Tina shares what that process revealed about the speed of modern AI companies and why flexible revenue infrastructure is now a serious advantage. We also talk about the rise of agentic AI in revenue operations, from creating quotes and orders to handling subscription changes and surfacing upsell opportunities. As the SaaS model comes under pressure from AI, Tina offers a practical view of what needs to change behind the scenes for companies to stay competitive. If SaaS is entering a new chapter, are your revenue systems ready for how customers now buy, use, and pay for software?

Can AI really remove emotion from investing, or does human judgment still matter most when money is on the line? In today's episode, I'm joined by Jack Fu, Founder and CEO of Draco Evolution, a company using AI, quantitative models, and decades of market experience to help investors make smarter and more disciplined decisions. Jack's journey began during the 2008 financial crisis while working as a financial advisor at Union Bank of California, where watching investors lose life-changing amounts of money completely reshaped how he thought about risk, discipline, and long-term wealth creation. That experience led him to focus on one simple principle: avoiding big losses matters just as much as chasing returns. From managing assets for family offices and institutional clients to leading major investment operations across the Asia-Pacific region, Jack built his career around protecting capital first and helping investors stay in the market long enough to benefit from long-term growth. We explore how Draco Evolution is bringing institutional-level investment tools to everyday investors through AI-powered ETFs and a more dynamic approach to portfolio management. Jack explains how ETFs actually work, why they have become such a popular choice for investors, and the important difference between investing in AI companies and using AI itself to manage investment decisions. We also discuss the future of robo-advisors and why the next generation will move far beyond static questionnaires and occasional portfolio rebalancing. Jack shares why he believes the future lies in systems that adapt continuously to market conditions and investor behavior, creating something far more personal and responsive. From algorithmic trading and AlphaGo to today's world of agentic AI, Jack offers a practical perspective on how technology is changing finance without replacing human oversight. 
He also shares why investors should treat AI as an enhancement tool rather than blindly trusting every recommendation. If you've ever wondered how AI is changing investing, what makes AI-driven ETFs different, or how to stay disciplined in unpredictable markets, this conversation offers plenty of insight. How much would you trust AI to help manage your financial future, and where would you still want a human in the loop?

What does it actually take to move from AI experiments and pilot projects to real business outcomes that customers can feel? At Adobe Summit in Las Vegas, I sat down with Neil Letchford, Vice President of Digital Engineering at Virgin Atlantic, to talk about how the airline is doing exactly that. While many organizations are still debating ROI, governance, and where agentic AI fits into the customer journey, Virgin Atlantic has already launched an AI concierge that is actively helping customers book holidays, find answers faster, and create a smoother travel experience from the first search to stepping onboard the aircraft. Neil shared how a proof of concept built in just two months evolved into a live multi-agent system that now helps customers plan trips, book holidays, and move seamlessly between digital channels and human support when needed. We talked about the importance of "knowledge over data," why observability and model evaluation matter when deploying AI at scale, and how the team built trust internally by focusing on real customer pain points rather than chasing shiny technology trends. What stood out most was how Virgin Atlantic has kept its famously human customer experience at the center of every decision. This is not automation for the sake of efficiency. It is about using AI to strengthen relationships, preserve brand personality, and create better outcomes for both customers and the business. From personalized holiday planning to agent-to-agent interactions that may soon redefine travel booking, this conversation offers a practical look at what happens when AI moves beyond theory and starts delivering value today. If you want to understand what agentic AI looks like in the real world, and why the companies moving early may gain a serious advantage, this is an episode you do not want to miss. What would AI need to do in your business before you would trust it to take the lead? 
Useful Links
Connect with Neil Letchford
Learn more about Adobe Brand Concierge
Check out Virgin AI Concierge
Visit the Sponsors of Tech Talks Network and learn more about the NordLayer Browser.

How has streaming changed from simply delivering video to becoming one of the most important business engines behind sports, media, and customer engagement? In this episode of Tech Talks Daily, I sit down with Filippo de Salazar, who leads the Brightcove team following its acquisition by Bending Spoons, to talk about how the company is evolving and where the future of streaming is heading next. With more than 20 years in the industry and powering over a billion streams every week, Brightcove has become the invisible backbone behind many of the broadcasters, publishers, sports networks, and enterprise video experiences we all rely on without ever thinking about the technology behind them. Filippo shares how the past year has accelerated Brightcove's product velocity, with major releases including AI capabilities, live 4K, live DRM, and automation tools that help customers move faster without compromising reliability. While the business has gained speed, he explains that Brightcove's focus on stability and customer obsession remains unchanged, especially when customers depend on mission-critical video workflows that leave no room for failure. We also unpack how AI is moving beyond hype and creating measurable value for broadcasters today. From automatically detecting live sports highlights and clipping them for instant social sharing, to improving ad placement relevance, generating live captions, and translating content into more than 70 languages, AI is reshaping both operational efficiency and revenue generation. Filippo explains how tools like Brightcove's Universal Translator and Metadata Optimizer are helping broadcasters unlock ROI that simply was not possible before. Our conversation also covers personalized streaming, fan engagement, cloud-native automation, and the rise of FAST channels. 
We discuss why sports audiences now expect low latency, instant highlights, and highly personalized viewing experiences, and how broadcasters must balance those expectations with the realities of infrastructure costs and monetization pressure. Filippo also shares why discoverability has become one of the biggest battlegrounds in streaming, with some viewers spending more time searching for content than actually watching it. Looking ahead, Filippo outlines the three trends he believes will define the next phase of streaming: intelligent automation, stronger monetization discipline, and managing fragmented viewing behaviors across live, subscription, ad-supported, and FAST environments. As media companies try to unify these experiences without adding complexity, platforms like Brightcove are becoming increasingly central to how modern video businesses operate. What does the future of streaming really look like when AI, automation, and personalization all collide, and are broadcasters ready for what comes next?
Useful Links
Connect with Filippo de Salazar
Learn more about Brightcove following its acquisition by Bending Spoons
Visit the Sponsors of Tech Talks Network and learn more about the NordLayer Browser.

What happens when the biggest breakthrough in AI isn't a flashy new tool, but finally getting rid of 450 spreadsheets? Recording live from Qlik Connect, I sat down with Ed Dunger from HelloFresh to talk about what operational transformation actually looks like inside one of the world's most complex supply chain environments. Because when your business depends on forecasting demand, managing perishable food, coordinating deliveries, and making sure customers receive the right box at the right time, small inefficiencies quickly become expensive problems. Ed leads operational technology and analytics enablement across global teams at HelloFresh, covering everything from forecasting through to final-mile logistics. In this conversation, he shares how the company moved away from hundreds of disconnected Google Sheets and manual processes toward a near real-time, data-driven operating model that gives teams faster, clearer, and more reliable decision-making. We talk about the practical reality of replacing over 450 spreadsheets, building trust in the data, and creating systems that operational teams actually want to use. Ed explains why this was a two to three year journey rather than an overnight transformation, and how early wins, like predicting waste before it happened, helped build confidence across the business. We also explore how HelloFresh is using predictive AI to improve exception management when deliveries fail. From triggering recovery boxes faster to improving customer communication when something goes wrong, the focus is not on AI for the sake of AI, but on solving real problems that directly affect customer experience. There is also a valuable lesson here for any business trying to move from experimentation to operational reality. Start small, build trust gradually, and focus on solving one problem well before trying to transform everything at once. 
So as more organizations race to adopt AI, are we sometimes overlooking the simple operational fixes that create the biggest impact? And is real transformation less about the technology itself, and more about how people learn to trust it? Join me for a practical and honest conversation from Qlik Connect, and let me know your thoughts. Are you still managing around old processes, or are you building systems people can truly rely on?

How do you turn complex regulatory data into something customers can actually use, trust, and act on? Recording live from Qlik Connect, I sat down with Robin Astle, Head of Qlik Analytics at Reconomy Group, to explore how data is becoming far more than an internal reporting tool. In Robin's world, it has become a product in its own right, helping some of the world's largest retailers manage compliance, reduce costs, and make smarter sustainability decisions. Robin works across Valpak, a business at the center of environmental compliance and packaging regulation, supporting over 100 enterprise customers across the UK, Europe, and the US. From packaging taxes and recycling targets to government submissions and sustainability reporting, the amount of data involved is enormous, and the stakes are high. In our conversation, Robin shares how the Valpak Insight Platform evolved from manual SQL extracts and spreadsheets into a fully scaled cloud-based analytics platform ingesting millions of rows of data every day. We discuss how that transformation helped reduce onboarding from weeks to days, created up to 90% time savings on CSR and analytics requests, and helped customers reduce compliance costs by up to 15%. We also explore the launch of PackChat, which uses natural language queries to help customers interact with compliance and packaging data without needing deep technical knowledge. Robin explains why context is everything when dealing with environmental regulations, and why building trust in the data model is essential before AI can deliver real value. There is also a bigger conversation here around how businesses can use data to serve customers directly, not just support internal teams. From OEM partnerships and cloud automation to scaling AI-powered services across global markets, Robin shares what it takes to turn data into a revenue-generating service. 
So as more organizations look to unlock value from the information they already hold, are we still thinking too narrowly about what data can do? And could your greatest untapped product actually be the data sitting inside your business today? Join me for a fascinating conversation from Qlik Connect, and let me know your thoughts. Are you still using data for reporting, or are you starting to think about it as a product?

How do you turn powerful AI technology into something customers actually trust, adopt, and use? Recording live from Qlik Connect, I sat down with Mary Kern, Vice President of Analytics Product Go-To-Market at Qlik, to explore one of the most overlooked challenges in enterprise AI today. Not building the technology, but making it real for the people expected to use it every day. Because while AI innovation is moving at incredible speed, many organizations are still struggling with a much more practical question. How do you move from exciting product announcements and pilot projects to real adoption, measurable outcomes, and business value? In our conversation, Mary shares how Qlik is approaching that challenge by shifting the focus away from shiny features and toward outcomes that matter. We discuss why agentic AI is creating so much excitement, why customers are often much closer to operationalizing it than they realize, and how years of investment in data quality, governance, and analytics are now becoming the foundation for what comes next. We also talk about the growing importance of trusted data and context, especially as AI moves from generating insights to influencing decisions and actions. Mary explains why simply adding a large language model on top of existing systems rarely works, and why organizations need to think more carefully about how AI is trained, governed, and integrated into the environments where people already work. There is also a refreshingly honest conversation around cost, experimentation, and imperfection. Mary makes the case that organizations should start now, even if the data is not perfect, because using AI often reveals where the real gaps are and what needs to improve next. So as businesses look ahead to the next 12 months, what will separate those who successfully scale AI from those still stuck in pilot mode? 
And are we spending too much time talking about the technology, and not enough time understanding how people will actually use it? Join me for a candid conversation from the heart of Qlik Connect, and let me know your thoughts. Is your organization closing the gap between AI capability and real adoption, or is that still the biggest challenge?

What if the reason most AI projects fail has less to do with the technology and more to do with how the work itself is designed? Recording live from Qlik Connect, I sat down with Nick Magnuson, Head of AI at Qlik, for a conversation about the gap between AI ambition and operational reality. Because while many organizations are still focused on models, tools, and the race to deploy new capabilities, the real challenge often sits somewhere much less glamorous. Workflow design, trusted data, and making sure AI fits the way a business actually runs. Nick brings more than two decades of experience in machine learning and predictive analytics, and in this conversation, he shares why so many AI initiatives fail before they ever create value. His view is refreshingly direct. Most failures are not technology failures at all. They are workflow failures, where teams try to force AI into the business without first understanding the outcomes they are trying to achieve. We also explore the rise of agentic AI and what it means when systems move from generating insights to taking action. Nick explains why governance becomes even more important in that world, how organizations can balance speed with control, and why trusted data has to move beyond being "good enough for reporting" to becoming reliable enough for decisions and automated execution. There is also a strong discussion around openness, portability, and the growing risk of vendor lock-in. As enterprises build more complex AI ecosystems, flexibility is becoming a strategic advantage, especially for organizations trying to scale without creating expensive dependencies they will regret later. For mid-market businesses with limited resources, Nick also shares a practical path to production. A reminder that operationalizing AI does not require massive teams or unlimited budgets, but it does require clarity, discipline, and a focus on the right problems first. 
So as the next wave of enterprise AI moves from experimentation to execution, what will separate the organizations that scale successfully from those still stuck in pilot mode? And are we asking the wrong questions by focusing on more AI, instead of better AI? Join me for a thoughtful conversation from the heart of Qlik Connect, and let me know your view. Is workflow design the missing piece in your AI strategy?

What does it really take to turn AI from a flashy experiment into something that creates measurable business value? In this episode of Tech Talks Daily, I sat down with Angela Virtu from American University's Kogod School of Business to talk about what business leaders should actually be paying attention to as AI moves into a new phase in 2026. This conversation goes far beyond the usual headlines about bigger models and faster tools. Angela brings a rare mix of academic leadership and hands-on startup experience, which means she understands both the technical side of AI and the hard business questions around adoption, trust, and ROI. One of the most interesting parts of our discussion centered on how American University's Kogod School of Business became one of the first AI-first business schools. Angela shared how that shift was never really about chasing hype. It was about recognizing a real change in the workplace and preparing students for jobs, workflows, and expectations that are already being shaped by AI. From faculty training to culture change, she explained how transformation only works when leadership is willing to support experimentation and accept that some ideas will fail before the right ones take hold. We also spent time unpacking where businesses stand right now in the AI adoption cycle. After years of pilots and proof-of-concept projects, many companies are under pressure to show results. Angela offered a refreshingly honest take on why so many AI projects stall and why adoption alone is a weak metric. Instead, she argued that companies need to tie AI initiatives to clear business problems and existing KPIs. Whether that means customer support resolution times, employee productivity, or operational efficiency, the point is simple. AI needs to earn its place. Another thread running through this episode is governance. As AI becomes more deeply embedded inside organizations, the conversation is shifting toward oversight, accountability, and trust. 
Angela explains why the strongest governance models are often shared across the company rather than locked inside one team. She also discusses the need for closed systems, stronger communication, and honest disclosure when businesses use AI in customer-facing environments. That part of the conversation feels especially timely as more brands try to balance innovation with customer expectations. We also looked ahead at what is coming next, from model orchestration and vertical AI to the rise of physical world models and even the possibility of AI agents becoming a customer audience in their own right. It is one of those episodes that will give business leaders, technologists, educators, and curious listeners plenty to think about. If you are trying to understand where AI strategy is headed in 2026, and how to separate real value from noise, this episode is for you. What did you make of Angela's views on governance, ROI, and the next phase of AI adoption, and where do you think businesses are still getting it wrong? Share your thoughts with me.
Useful Links:
Connect with Angela Virtu
Kogod School of Business
Visit the Sponsors of Tech Talks Network and learn more about the NordLayer Browser.

What actually separates AI that delivers real value from AI that never makes it past the demo stage? Recording live from Qlik Connect, I sat down with Ryan Welsh, Field CTO of Generative AI at Qlik, to get a grounded, practitioner-led view of what it really takes to make AI work inside a business. While the industry has spent the past few years racing to experiment, build, and deploy new capabilities, many organizations are still struggling to turn that progress into capabilities people use every day. In our conversation, Ryan cuts through the noise and explains why so many AI initiatives fail. Not because the models aren't powerful enough, but because they're not designed to fit into real workflows. He shares why context is far more than just a buzzword and how getting the right data, in the right place, at the right time, enables AI to deliver meaningful outcomes. We also explore the growing shift toward agentic AI and the responsibilities that come with it. From designing systems that can act autonomously while remaining under control to understanding where humans need to stay involved, Ryan offers a practical view of how organizations can move forward without introducing unnecessary risk. There's also a refreshing honesty around where we are right now. After a wave of investment and expectation, many companies struggled to see immediate value from AI. But as Ryan explains, that period is changing, with more organizations finding ways to scale what works and move beyond isolated use cases. So, as businesses look ahead, what does it really take to move from experimentation to execution? And are we focusing too much on building more AI rather than the right AI for how our organizations actually operate? Join me for a candid conversation from the heart of Qlik Connect, and let me know your thoughts. Are you seeing AI deliver real outcomes in your business, or is it still stuck in the demo phase? 
Useful Links:
Connect with Ryan Welsh on LinkedIn
Learn more about Qlik
Follow on Twitter, Facebook, and LinkedIn
Visit the May Sponsors of Tech Talks Network and learn more about the NordLayer Browser.

What does it really take to move beyond AI experimentation and build something a business can rely on? Recording live from Qlik Connect, I sat down with James Fisher, Chief Strategy Officer at Qlik, to unpack what's actually changing as AI moves from hype into real-world execution. Because while many organizations have spent the past few years exploring use cases and running pilots, the harder challenge is now in front of them. Turning that early momentum into something scalable, governed, and aligned with business outcomes. In our conversation, James offers a candid view of where companies are getting this wrong. He describes a period of what he calls "AI madness," where everything became a potential use case, but very little translated into measurable value. Now, he sees a shift toward more focused, outcome-driven thinking, where success depends on understanding the user, the data, and the specific problem being solved. One of the most thought-provoking moments comes when James challenges the idea of having an AI strategy at all. Instead, he argues that AI should be embedded directly into the broader business strategy, shaping how decisions are made, how processes operate, and how organizations compete. We also explore the realities that many businesses are only just beginning to face. The complexity of data access and governance, the growing pressure around cost and sustainability, and the risks of vendor lock-in in a rapidly evolving AI ecosystem. James shares why openness and flexibility are becoming critical, and why some of the same patterns seen in previous technology waves are starting to repeat themselves. So as organizations look ahead to the next 12 to 24 months, what will separate those that successfully operationalize AI from those that remain stuck in cycles of experimentation? And are we focusing too much on the technology, and not enough on the business problems it's meant to solve? 
Join me for a grounded and strategic conversation from the heart of Qlik Connect, and let me know your thoughts. Are you still experimenting with AI, or are you starting to embed it into the core of how your business operates?
Useful Links:
Connect with James Fisher on LinkedIn
Learn more about Qlik
Follow on Twitter, Facebook, and LinkedIn
Visit the May Sponsors of Tech Talks Network and learn more about the NordLayer Browser.

What happens when your AI agents start making decisions faster than your security team can even see them? In this episode, I sit down with Sunil Agrawal, Chief Information Security Officer at Glean, to unpack a shift already underway in enterprises. With predictions that 40 percent of enterprise applications will include autonomous AI agents by the end of 2026, we are moving from human-led workflows to machine-to-machine interactions at a scale most organizations are not fully prepared for. Sunil brings a rare perspective, blending more than 25 years of cybersecurity experience with an inventor's mindset shaped by over 40 patents. What stood out to me in our conversation is how quickly the traditional security model is becoming outdated. As he explained, "autonomous agents break those assumptions because they operate across tools, varying permissions and data sources with alarming speed and autonomy." This creates what he calls the "autonomy gap," in which the CIO's drive for speed collides with the CISO's need for visibility and control. We explore how that tension is playing out in real organizations today, and why so many are already falling behind. Nearly half of businesses still lack the AI-specific controls needed to prevent untraceable incidents, and the risks are not always what you might expect. Sunil argues that the first major rogue-agent incident is unlikely to be a malicious attack. Instead, it will come from confusion: a well-intentioned system taking the wrong action in the wrong context, with consequences that ripple across the business. The conversation then turns practical. Sunil breaks down his AWARE framework, a structured way to introduce real-time guardrails that evaluate intent, context, and risk before an agent takes action. Rather than relying on static policies, this approach focuses on continuous runtime enforcement, where systems are constantly assessed based on behavior rather than assumptions. 
What I found particularly valuable is how this moves beyond theory into something leaders can act on today. From starting with tightly scoped use cases to investing in full observability, this episode offers a clear roadmap for balancing innovation with accountability. As Sunil put it, organizations that succeed will not be the ones that move fastest, but the ones that prove trust at scale. So how do you embrace the productivity gains of autonomous AI without opening the door to invisible risk, and are your current security models ready for a world where the "user" is no longer human?
Useful Links:
Connect with Sunil Agrawal on LinkedIn
Learn more about Glean
Follow Glean on LinkedIn
Visit the Tech Talks Network Sponsor NordLayer Browser

What does it actually take to move AI from experimentation into something a business can depend on every single day? Recording live from the show floor at Qlik Connect in Florida, I sat down with Qlik CEO Mike Capone to cut through the noise and get to the reality behind enterprise AI in 2026. Because while the headlines are still dominated by rapid innovation and new capabilities, many organizations are quietly facing a different challenge. They are struggling to turn AI ambition into measurable outcomes. In our conversation, Mike shares what he is hearing from customers around the world and why so many companies remain stuck in cycles of pilots and proof of concepts. We talk about the growing pressure from boards and leadership teams to move faster, and why that urgency is often leading to what he calls a "ready, fire, aim" approach that fails to deliver real business value. We also explore one of the biggest themes emerging at Qlik Connect this year. The shift toward agentic AI. But rather than focusing on the hype, Mike breaks down what this actually means inside a real enterprise workflow, where insights are not just generated but turned into decisions and actions. He also explains why getting the data foundation right is no longer optional, and how poor data quality can quickly turn AI from an opportunity into a risk. From data trust and governance to the challenges of operating across increasingly complex regulatory environments, this episode offers a clear view of what it takes to build AI systems that are reliable, scalable, and grounded in real business context. So as organizations look ahead to the next 12 to 24 months, what will separate those that successfully operationalize AI from those that remain stuck in pilot mode? And are we focusing too much on building more AI, rather than building better AI? Join me for a candid conversation from the heart of Qlik Connect, and let me know where you stand on this shift. 
Are you seeing real progress, or are the same challenges holding things back?

How are brands supposed to deliver AI-powered customer experiences when their data is scattered across systems that were never designed to work together? In this episode, I sit down with Peter Bell, VP EMEA Marketing at Twilio, to unpack one of the most important AI topics that still does not get enough attention outside technical circles: Model Context Protocol, or MCP. While many conversations about AI remain stuck on model hype, chatbots, and the latest product launch, Peter brings the discussion back to something far more practical. If businesses want AI to deliver real outcomes in customer service, marketing, and brand engagement, they first need a reliable way to connect large language models to the right data, in the right systems, with the right controls in place. That is why this conversation matters. Peter explains how MCP could become one of the biggest unlocks for enterprise AI by creating a standard way for LLMs to access information across fragmented tools like CRM platforms, marketing systems, and other business applications. Instead of forcing every company to build custom integrations from scratch, MCP creates a more consistent path for connecting models to the context they need. For me, that is where this episode really earns its place, because it moves the AI conversation away from vague ambition and toward the plumbing that actually makes useful AI possible. We also talk about why first-party data remains so important, especially as businesses try to create customer experiences that feel seamless, personal, and trustworthy. Peter makes the point that public models may be useful for general knowledge, but brands cannot rely on generic internet-trained systems to solve precise business problems. If you want AI to support travel bookings, customer service, or commerce journeys, you need specific data, strong governance, and a much clearer understanding of the problem you are trying to solve.
That sounds obvious, but it is still where many AI projects fall apart. Another part of our conversation focuses on trust, which feels especially relevant right now. From scams and impersonation to consumer fatigue and poor automation, brands are under pressure to move faster without losing credibility. Peter shares how Twilio is thinking about branded calling, RCS, conversational AI, and voice experiences that feel modern without becoming intrusive or robotic. We also discuss why too many companies still automate too broadly, too quickly, without defining the actual use case first. What I enjoyed most here was Peter's balanced view. He is optimistic about where AI is heading, but he is also realistic about the work still required to get there. This is not a conversation about AI magic. It is about data access, governance, trust, brand experience, and the standards that may quietly shape the next phase of AI adoption far more than the flashy headlines. So if you have been hearing more people mention MCP and wondering why it matters, or if you are trying to understand what needs to happen before enterprise AI can move from promise to practical value, this episode will give you plenty to think about. Is Model Context Protocol the missing layer that finally helps AI connect with the real world of business data?
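For listeners wondering what MCP actually looks like under the hood: the protocol is built on JSON-RPC 2.0, where a client discovers a server's tools with a `tools/list` request and invokes one with `tools/call`. The short sketch below shows the shape of those messages only; the `crm_lookup_contact` tool and its arguments are hypothetical examples, not a real Twilio or MCP-server API.

```python
import json

def make_request(req_id, method, params=None):
    """Build a JSON-RPC 2.0 request envelope, the wire format MCP uses."""
    req = {"jsonrpc": "2.0", "id": req_id, "method": method}
    if params is not None:
        req["params"] = params
    return req

# 1. Discovery: ask the server which tools it exposes.
list_tools = make_request(1, "tools/list")

# 2. Invocation: call a (hypothetical) CRM lookup tool with structured
#    arguments, so the model receives governed, first-party context.
call_tool = make_request(2, "tools/call", {
    "name": "crm_lookup_contact",              # hypothetical tool name
    "arguments": {"email": "jane@example.com"},
})

print(json.dumps(list_tools))
print(json.dumps(call_tool))
```

The point of the standard envelope is the one Peter makes in the episode: every tool, from a CRM to a marketing platform, speaks the same discovery and invocation grammar, so a model needs one integration pattern instead of dozens of custom ones.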

What does it actually take to make AI work inside a real business, where messy data, human judgment, and operational risk all collide? In this episode, I sit down with Matt Fitzpatrick, CEO of Invisible Technologies, to talk about why the biggest barrier to enterprise AI is not model quality, it is everything that comes before the model ever gets to work. Since stepping into the CEO role in January 2025, Matt has moved quickly, raising $100 million and expanding Invisible's footprint across locations including New York, San Francisco, DC, Austin, London, and Poland. But this conversation is far less about headlines and far more about what happens in the trenches of AI adoption, where companies are trying to move from pilots and PowerPoint promises to systems that actually deliver results. A huge theme throughout our discussion is data readiness. Matt makes a compelling case that most businesses are still dealing with fragmented systems, inconsistent records, and information spread across disconnected tools. That reality makes it incredibly hard to deploy AI in a way that creates trust and value. We talk about SwissGear, where Invisible used its Neuron platform to clean and structure 750 scattered tables in just one week, a task that could have taken a large engineering team months or longer. We also discuss why that kind of work matters so much, because once the data foundation is fixed, companies can start making better decisions on forecasting, operations, and planning with a level of confidence that simply was not there before. We also spend time on Invisible's human-in-the-loop approach, which I think will resonate with a lot of listeners trying to cut through the noise around job displacement and agentic AI. Matt argues that the real opportunity is not replacing people, but giving them better tools to handle repetitive work while preserving room for human expertise, judgment, and oversight.
He shares examples from commercial credit workflows, healthcare, and sports analytics, including a fascinating story about the Charlotte Hornets using AI to turn broadcast footage into detailed tracking data. What stood out to me was how practical his perspective felt. This was not theory. It was about building systems around how organizations actually work, rather than expecting businesses to reshape themselves around a generic AI product. Another part of the conversation that deserves attention is governance. As boards rush to understand agentic AI, Matt explains why trust, standards, and responsible deployment are now driving buying decisions just as much as raw capability. We talk about privacy in healthcare, the risks of scaling autonomous systems without mature governance, and why enterprise adoption still trails consumer AI by a wide margin. That gap between excitement and execution may be one of the most important stories in AI right now. If you are wondering why so many AI projects never make it into production, or what it will take for enterprise AI to finally deliver on its promise, this episode is packed with insight. It is a conversation about data, deployment, governance, and the role humans will continue to play as AI becomes part of everyday business operations. After listening, I would love to know where you stand, is the future of AI really about bigger models, or is it about making AI fit the messy reality of how work gets done?

In this episode, I speak with Bert Van Hoof, CEO of Willow, about how AI is starting to reshape the built world in ways that go far beyond smart dashboards and efficiency reports. Bert brings decades of experience from the front lines of digital infrastructure, including his time at Microsoft, where he helped create Azure Digital Twins and Smart Places. Today at Willow, he is focused on a much bigger idea, using AI to help buildings, campuses, hospitals, airports, and other complex environments operate with greater intelligence, lower waste, and better outcomes for the people who rely on them every day. One of the most interesting parts of our conversation is how Bert explains the shift from passive building software to active management systems. For years, many digital twin and smart building tools were good at showing what had already happened. But operators do not need another screen full of charts. They need systems that can connect live data, static records, spatial context, and operational history to help them make better decisions in real time. That is where Willow comes in, creating a digital foundation where AI can reason across everything from HVAC and air quality to occupancy, refrigeration, maintenance history, and even energy usage patterns. We also unpack why this matters right now. Energy costs remain under pressure, sustainability goals are getting harder to ignore, and many organizations are still stuck with fragmented systems that do not talk to each other. Bert shares how AI can help move building teams from reactive maintenance to predictive performance, spotting issues earlier, cutting downtime, reducing waste, and extending the life of expensive assets. He also explains why the future of building operations will depend on a stronger data foundation, operational AI copilots, and systems that can support an aging workforce while making these roles more appealing to the next generation. 
What stood out for me was how practical this all became once we moved past the buzzwords. This was not a conversation about futuristic hype. It was about real examples, from occupancy-based HVAC control in offices and campuses to leak detection in schools, vaccine refrigeration monitoring, and hospital environments where downtime can carry enormous consequences. Bert makes a strong case that buildings are no longer just static structures. They are living operational environments filled with signals, systems, and opportunities that have been hiding in plain sight. We also touch on the wider picture, including what Bert learned from smart cities and energy grid modernization, and how those lessons now apply to commercial real estate, airports, research labs, and higher education campuses. There is a real sense that the physical world is entering a new chapter, one where AI starts to bridge the gap between digital intelligence and real-world action. If you have ever wondered what AI looks like when it leaves the screen and starts improving the places where people work, heal, travel, learn, and live, this episode will give you plenty to think about. As always, I would love to know what you think, are buildings finally ready to become truly responsive, and what opportunities or risks do you see ahead?

What does it really take to build the next generation of AI companies when the hype around scale begins to fade and real-world impact takes center stage? In this episode, I sit down with David Blumberg, founder and managing partner at Blumberg Capital, to unpack what he believes will define the next wave of AI startups. With a track record that includes being the first investor in companies like Nutanix, Braze, and DoubleVerify, David brings a perspective shaped by decades of identifying breakout innovation early. But what stood out most in our conversation was his belief that 2026 marks a turning point where intelligence moves beyond experimentation and becomes operational. We explore what that shift actually means in practice. David explains how AI is evolving from systems that generate insights into systems that take action, and why that distinction matters for founders, investors, and enterprise leaders alike. He shares how the most compelling startups today are not simply layering AI onto existing products, but embedding it deeply into workflows across industries like finance, security, and supply chain. These are companies built on proprietary data and real operational context, designed to make decisions with precision rather than simply process information. Our conversation also challenges some widely held assumptions about success in the AI space. David makes it clear that scale alone will not separate winners from the rest. Instead, the focus is shifting toward accuracy, reliability, and domain expertise. Founders who have lived the problems they are solving, rather than approaching them from the outside, are far more likely to build something defensible and lasting. It is a subtle shift, but one that could redefine how value is created in the years ahead. There is also a broader discussion about where investment is flowing and why. 
With the vast majority of companies Blumberg Capital now evaluates being rooted in AI, the bar for differentiation is rising fast. David offers insight into what his team is really looking for in founders entering this next cycle, and how startups can stand out in an increasingly crowded field. So as AI moves from promise to execution, and from experimentation to real-world outcomes, the question becomes harder to ignore. Are we ready to rethink how we measure success in the AI era, and what kind of companies will truly earn their place at the top?

How do we talk about artificial intelligence without ignoring the very human consequences it can have on our mental health? In this episode, I sit down with Dr. Ragy Girgis, Professor of Clinical Psychiatry at Columbia University, to unpack a topic that has quietly moved from the fringes of academic discussion into mainstream headlines. You have probably seen the term "AI psychosis" appearing more frequently, often surrounded by speculation, fear, or misunderstanding. But what does it actually mean, and how should we be thinking about it as these technologies become part of everyday life? Ragy brings a clinical and deeply considered perspective to the conversation. He explains that what we are seeing is not AI creating entirely new delusions out of thin air, but something more subtle and arguably more concerning. Large language models can reflect and reinforce ideas that already exist within a person's mind. For someone already vulnerable, that reinforcement can push a belief from uncertainty into absolute conviction. That shift, even if small, can have life-altering consequences. It raises uncomfortable questions about how persuasive technology interacts with fragile mental states. We also explore the comparison many people make with older internet rabbit holes, and why this new generation of AI tools feels different. There is something about conversational systems that mimic human interaction so convincingly that they can blur the line between reflection and validation. Ragy introduces a powerful analogy rooted in the story of Narcissus, which reframes the issue in a way that feels both timeless and unsettling. It is not about an external voice planting ideas, but about a mirror that becomes impossible to look away from. But this conversation is not about fear. It is about responsibility and awareness. 
We discuss practical steps that could help reduce risk, from how AI systems communicate their limitations, to the role of families and clinicians, and even the responsibility of tech companies to invest in research around early warning signs. There is a sense that we are only at the beginning of understanding this phenomenon, and that the decisions made now will shape how safely these tools evolve. So as AI continues to move closer to us, speaking in our language and responding in real time, how do we make sure it supports human wellbeing rather than quietly amplifying our most vulnerable moments?
Useful Links:
Connect with Dr. Ragy Girgis, Professor of Clinical Psychiatry at Columbia University
Time Magazine Article
Visit the May Sponsors of Tech Talks Network and learn more about the NordLayer Browser.

Why are so many AI projects failing to deliver real business value, despite the hype and investment? In this episode, I sit down with Jay Litkey, SVP of Cloud & FinOps at Flexera, to explore the growing gap between AI ambition and measurable results. We discuss why findings from PwC reveal that only a small percentage of CEOs are seeing both revenue growth and cost savings from AI, and why the issue often comes down to a lack of clear outcomes, financial discipline, and governance rather than the technology itself. Jay shares what organizations are getting wrong, why many are stuck in experimentation mode, and what it really means to go back to basics in 2026. The conversation also reframes FinOps for the AI era, moving beyond cost control to a model that connects AI usage directly to business value, aligns finance with engineering, and introduces the guardrails needed to scale responsibly. If you are investing in AI or planning your next move, this episode offers a clear lens on how to turn potential into performance.
Useful Links:
Connect with Jay Litkey from Flexera
Learn More About Flexera
Visit the May Sponsors of Tech Talks Network and learn more about the NordLayer Browser.

How do you bring people together to do better work when everything around them feels increasingly complex, distributed, and uncertain? In today's episode, I sat down with Jessica Guistolise from Lucid Software, and what struck me straight away was her belief that work has always been a group project, even if many organizations still behave as though it is not. Jessica shared how much of the friction we experience at work comes from misalignment, unclear expectations, and a lack of shared understanding. When teams are spread across time zones, systems, and now AI-powered workflows, those gaps only widen. Her perspective is simple but powerful. When people can actually see the work, rather than interpret it through documents, meetings, or assumptions, something shifts. Conversations become clearer, decisions become faster, and collaboration starts to feel human again. We also explored how visual collaboration platforms like those from Lucid Software are helping teams move away from scattered tools and disconnected workflows toward a more unified way of working. Jessica described it as having everything on one workbench, where teams can brainstorm, plan, and execute without constantly switching context. What really stayed with me was her focus on inclusivity in collaboration. Not everyone contributes in the same way, and visual environments can create space for different thinking styles, whether someone is outspoken, reflective, or somewhere in between. That idea of creating a shared language across teams, roles, and even personalities feels increasingly relevant in a world where communication often breaks down. Of course, no conversation right now would be complete without talking about AI. Jessica offered a refreshingly honest view. There is uncertainty, and there should be. But rather than avoiding it, she believes leaders need to make AI visible, map how it is used, define where human judgment matters, and encourage teams to experiment openly. 
One of the most interesting ideas she shared was reframing mistakes as early learnings. When teams feel safe to test, fail, and share what they discover, progress accelerates. When fear or blame enters the picture, everything slows down. We also touched on AI literacy and what it really means in practice. For Jessica, it comes down to clarity. Clear workflows, clear guardrails, and clear expectations about accountability. AI might assist, but humans remain responsible for outcomes. That mindset, combined with leadership that actively participates in experimentation, creates an environment where people feel confident stepping forward rather than holding back. This conversation left me thinking about how many organizations are still trying to layer AI onto unclear processes and expecting better results. Jessica's message is that clarity comes first, then technology can amplify it. So if work really is a group project, are we giving our teams the visibility and confidence they need to succeed, or are we still asking them to figure it out in the dark?

What happens when the very pricing model meant to speed up AI adoption ends up slowing it down? In this episode of Tech Talks Daily, I sit down with Sameet Gupte, CEO and co-founder of EvoluteIQ, to discuss a part of the enterprise AI story that still doesn't get enough attention. While so much of the conversation around AI focuses on models, copilots, and the latest agentic promises, Sameet brings the discussion back to a business reality that every enterprise leader understands. If the economics do not work, adoption stalls. And if success in a pilot makes the final rollout even more expensive, something has gone wrong long before the board signs off on scale. Sameet argues that many organizations are still trapped by legacy pricing structures built for an earlier generation of automation. Per-user and per-bot pricing may look manageable at the pilot stage. Once a company tries to expand automation across departments, processes, and geographies, the numbers can quickly stop making sense. That creates what many now call pilot purgatory, where a company proves something can work, but cannot justify taking it any further. It is a problem rooted in incentives, procurement, and fragmented technology stacks, and it is one that CFOs are watching very closely. What I found especially interesting in this conversation is how Sameet frames the issue. He believes most enterprises do not actually have an automation problem. They have an orchestration problem. In other words, the challenge is rarely a lack of tools. It is getting all the systems, workflows, approvals, data flows, and legacy infrastructure to work together to produce a clean business outcome. That idea changes the conversation from buying isolated features to rethinking the process as a whole. We also discuss why outcomes-based pricing is increasingly resonating with enterprise buyers. 
Sameet explains why predictable costs, transparent commercial models, and shared accountability are helping move automation conversations out of innovation teams and into the CFO's office. For public companies and large global enterprises, that matters. Leaders want fewer surprises, fewer overlapping vendors, and a much clearer line between spend and return. There is also a broader theme running through this episode about where the market is heading next. Sameet sees real urgency around vendor consolidation, enterprise simplification, and the need to rethink how AI is introduced into the business. His view is that companies need to pause, define what they actually want AI to do, and then choose tools that fit the business, rather than reshaping the business around the latest platform pitch. If you are trying to make sense of AI adoption beyond the hype, this conversation offers a practical and timely perspective on pricing, scale, and what real transformation could look like inside the enterprise. After listening, do you think the future of enterprise AI will be shaped as much by commercial models as by the technology itself, and what are you seeing in your own organization?
Useful Links:
Connect with Sameet Gupte, CEO and co-founder of EvoluteIQ
Learn More About EvoluteIQ

How many bad customer experiences does it take before someone walks away for good? In my conversation with Amitha Pulijala, we explore why the answer might be fewer than most businesses are prepared for, and what that means for anyone investing in AI-powered customer experience. New research from Cyara reveals a stark reality. Twenty-eight percent of consumers will abandon a brand after just one poor interaction, and nearly half will do the same after only two or three. That leaves very little room for error at a time when more organizations are introducing AI into customer journeys, often at speed and at scale. Amitha, who leads product strategy in the AI and CX space, brings a grounded perspective shaped by years of working with large enterprises and complex contact center environments. What stood out in our discussion is how the real challenge is no longer about whether AI can handle customer interactions. In many cases, it already can. The issue is whether customers trust it enough to let it try. We unpack the growing perception gap: 73 percent of consumers still believe human agents resolve issues faster, even though AI systems can deliver near-instant responses. That disconnect often comes down to past experiences, from bots that fail to understand context to systems that trap users in frustrating loops with no clear way out. There is also a clear line that customers draw around where AI belongs. Routine, high-volume tasks such as password resets or appointment confirmations are widely accepted. But when conversations shift toward financial security, healthcare, or legal advice, expectations change. People want human judgment involved and reassurance that the outcome is reliable. What makes this conversation particularly relevant is the generational divide shaping expectations. Younger users are far more open to AI-led interactions, provided they work seamlessly. Older generations remain more cautious, often preferring the certainty of speaking with a human. 
That creates a design challenge for businesses trying to serve everyone without alienating anyone. Throughout the episode, Amitha emphasizes that trust is built through experience, not intention. That means testing AI systems in real-world conditions, monitoring how they perform over time, and ensuring that when things do go wrong, the transition to a human feels smooth and informed rather than abrupt and frustrating. This is not a conversation about replacing humans with machines. It is about understanding where AI can add speed and efficiency, where it should support human agents, and where it should step back entirely. The organizations getting this balance right are not the ones deploying AI the fastest, but the ones validating it most carefully before customers ever see it. As businesses race to embed AI at every touchpoint, a bigger question emerges. Are we building systems that customers actually trust, or are we creating new points of friction that push them away? Useful Links Connect with Amitha on LinkedIn Survey Data Cyara Website Follow Cyara on LinkedIn

What does it really take to move AI from endless experimentation into something that creates real business value? In this episode, I sat down with Tom Alexander, Head of Innovation and Transformation at CrossCountry Consulting, to talk about why so many organizations still struggle to turn AI ambition into meaningful outcomes. Tom works closely with executive and CFO teams that are either unsure where to begin or frustrated that early AI efforts have not delivered what they hoped for. We talked about why this is rarely just a technology issue. In many cases, the real blockers are ownership, change management, weak alignment across the business, and a failure to connect AI initiatives to the problems that matter most. One of the big themes in our conversation was the need to treat AI as an enterprise-wide program rather than a collection of isolated tools. Tom shared how leaders can focus on business processes first, identify where automation can genuinely improve performance, and avoid getting distracted by hype. We also unpacked the growing accountability challenge around AI, including who should own it, how stakeholders can align, and why strong foundations in data, governance, and training matter so much. This episode is packed with practical takeaways for anyone trying to make sense of AI adoption inside a business. If you are trying to figure out where to start, how to scale, or how to avoid another stalled initiative, there is a lot in here for you. After listening, I would love to hear your thoughts. How is your organization approaching AI, and where do you think most businesses are still getting it wrong? Useful Links CrossCountry website Connect with Tom Alexander on LinkedIn Field Notes podcast

Is the endpoint still just a device, or has it quietly become one of the most important control points in modern enterprise security? Recording live from IGEL Now And Next in Miami, I sat down once again with Darren Fields for what has become an annual check-in on how fast the industry is really changing. And this time, the conversation feels very different. Over the last 12 months, the discussion has moved well beyond traditional endpoint management. From global supply chain pressure driven by AI demand to rising hardware costs and unpredictable refresh cycles, the assumptions that once shaped endpoint strategy are starting to fall apart. Darren shares how organizations are now being forced into difficult decisions: absorb rising costs, delay investment, or rethink the model entirely. We also explore how that shift is changing the conversation at the leadership level. What was once seen as a procurement decision is increasingly being reframed as a resilience strategy. Extending hardware life, reducing dependency on supply chains, and maintaining operational continuity are becoming just as important as performance and cost. Security, of course, sits at the center of it all. With the majority of breaches still originating at the endpoint, Darren highlights how organizations are starting to rethink where they focus their efforts. Rather than focusing solely on data centers and cloud environments, there is growing recognition that control, visibility, and enforcement must occur at the edge. The conversation also touches on the reality of modern cyber threats. From constant attack attempts to incidents that leave organizations offline for weeks, the challenge is no longer just restoring systems but restoring access. And that shift has major implications for how recovery and continuity are designed moving forward. We also look at the growing convergence of IT and OT, the role of contextual access, and the balancing act between stronger security and user experience. 
With organizations at very different stages of their journey, there is no single path forward, but there is a clear sense that change is already underway. So as the pace of technology, risk, and demand continues to accelerate, one question remains. Are organizations adapting fast enough, or are they still relying on models that no longer reflect the world they are operating in? What do you think: are we finally seeing a shift toward treating the endpoint as a strategic priority, or is there still a gap between awareness and action?

What does it really take to move from talking about Zero Trust… to actually making it work in the real world? Recording live from IGEL Now And Next in Miami, I caught up with John Walsh for what has now become something of a tradition, our third conversation together, and one that reflects just how much has changed in the last 12 months. When we last spoke, the focus was on securing the edge and rethinking security through a preventative lens. This time, the conversation has expanded from IT to OT, from devices to platforms, and from theory to real-world implementation across manufacturing floors, healthcare environments, and government organizations. John shared how IGEL is increasingly being adopted as a global standard across both IT and operational environments, bringing new challenges and new insights. From kiosks and signage on factory floors to shared workstations in hospitals, the need for persona-based and now context-aware access is becoming far more than a technical concept. It is shaping how organizations think about identity, risk, and control at scale. We also explored how the idea of the "adaptive secure desktop" is evolving beyond traditional VDI thinking. Instead of static devices, the focus is shifting toward environments that respond dynamically to the user, their role, their location, and the level of risk in that moment. It raises an important question. How do you deliver that level of control without introducing friction for the user? AI inevitably entered the conversation, but not in the way many might expect. Rather than focusing on features, John highlighted the acceleration of threat velocity. The time between vulnerability discovery and exploitation is shrinking rapidly, and with AI amplifying that speed, traditional detection and response models are struggling to keep up. The implication is clear. Security strategies need to shift toward prevention and control, not just reaction. 
We also touched on emerging challenges around agentic AI, non-human identities, and the need to apply Zero Trust principles beyond people to machines. As organizations begin to explore these new models, questions around identity, access, and guardrails are becoming more complex and more urgent. And throughout the conversation, one theme kept coming back: reducing complexity while increasing control. Whether it is through immutable operating systems, centralized policy enforcement, or contextual access, the goal is to simplify the environment while strengthening security outcomes. As organizations continue their journey toward modernization, one question remains: Are we still layering new technology onto old models, or are we ready to rethink how access, identity, and control are delivered from the ground up? What do you think: is Zero Trust finally becoming real at the endpoint, or is there still a gap between strategy and execution?

How long would it actually take your organization to recover every endpoint after a major cyber incident? Recording live from IGEL Now And Next in Miami, I sat down with James Millington to explore a question that most businesses think they've answered, but rarely have. Because when you move beyond theory and start mapping out the real process, the numbers tell a very different story. James shared examples from real organizations that tried to calculate recovery at scale. One estimated it would take over 5,000 person-hours to rebuild their estate. Another believed they could recover quickly, until they realized the scale of their environment made that assumption unrealistic. It raises a deeper question. Are we focusing too much on recovery and not enough on resilience? The conversation quickly moved into what James calls the "endpoint recovery gap." While most organizations have invested heavily in data center resilience, failover environments, and backup strategies, far fewer have a clear plan for reconnecting users when endpoints are compromised. And without a working endpoint, even the most advanced infrastructure becomes inaccessible. We also explored why so many organizations continue to rely on reimaging devices as a primary recovery strategy, despite the time, complexity, and operational disruption it creates. In many cases, it's not just slow. It's impractical at scale. And perhaps more concerning, some organizations still admit to having no defined plan at all. One of the most memorable moments in the conversation came through a simple analogy. For years, we've been carrying the weight of outdated endpoint strategies, even though the solution has been sitting in front of us. Just like it took thousands of years to put wheels on a suitcase, the shift toward simpler, more resilient models often requires a moment of realization before change actually happens. 
As application delivery continues to move toward SaaS, DaaS, and cloud environments, the role of the endpoint is also being redefined. Analysts are now calling for a move toward immutable, non-persistent endpoints that reduce attack surface and enable faster recovery. But as James points out, the real challenge is not awareness. It's action. As organizations continue to invest in security, infrastructure, and AI, one question remains: Are we still planning for recovery from failure, or are we finally designing systems that avoid it in the first place? What do you think: are businesses ready to rethink endpoint strategy, or are we still carrying the baggage of the past?

What does it actually take to rethink the endpoint in a world shaped by AI, Zero Trust, and the growing convergence of IT and operational technology? Recording live from IGEL Now and Next in Miami, I sat down with Matthias Haas to unpack what he describes as a genuine transformation moment for enterprise computing. This wasn't a conversation about incremental change. It was about challenging long-held assumptions around devices, security models, and how work is delivered in modern organizations. Matthias shared how the idea of the "adaptive secure desktop" is moving beyond traditional thinking around VDI and desktop delivery. Instead of treating endpoints as static devices, the focus is shifting toward dynamic, context-aware environments that respond to who the user is, where they are, and what they need access to in that moment. It raises an important question for any organization. Are we still designing for devices, or for outcomes? We also explored the growing complexity that comes with flexibility. With multiple ways to deliver applications across SaaS, DaaS, browsers, and local environments, there's a real risk of recreating the same fragmented systems companies are trying to move away from. Matthias offered insight into how orchestration, policy enforcement, and centralized management can help bring order to that complexity without adding friction for users. Another key theme was the shift from static security models to continuous, contextual decision-making. As organizations move toward Zero Trust, the ability to evaluate risk in real time becomes essential. But that raises a delicate balance. How do you strengthen security without slowing people down? And how do you ensure that the user experience doesn't become the casualty of tighter controls? The conversation also touched on the challenges of bringing IT and OT environments together. While the opportunity to unify these worlds is significant, the realities are far more complex. 
Different risk tolerances, legacy systems, and operational priorities all come into play. Matthias offered a candid perspective on what it will take to make that convergence work in practice, not just in theory. So as enterprises continue to rethink their infrastructure in an AI-driven world, one question keeps coming up. Are we simply layering new technology onto old models, or are we ready to fundamentally change how the endpoint fits into the bigger picture? What do you think: are organizations truly ready to embrace adaptive, context-driven computing, or are we still holding on to outdated ways of working?

How do you rebuild an entire industry that most people accept as slow, fragmented, and frustrating? In this episode, I sit down with Dan Lifshits, co-founder of Dwelly, to explore how AI is being used to rethink the rental market from the inside out. What struck me most in this conversation is how Dwelly isn't approaching property management as a software layer you simply bolt on. Instead, they are acquiring rental agencies and rebuilding the operating model itself, embedding AI into every workflow, from tenant communication to maintenance coordination and rent collection. It is a very different mindset, and one that challenges how many businesses think about digital transformation. Dan brings a fascinating perspective shaped by his time competing in high-growth environments at companies like Uber and Gett. We talk about what those years taught him about scaling complex, operational businesses and how those lessons now apply to one of the largest and least digitized sectors in the economy. There is a clear parallel between ride-hailing and rentals: both are fragmented, both rely on two-sided marketplaces, and both have historically depended on manual processes that struggle to scale. As Dan explains, "long-term residential rentals ticks very similar boxes" to ride-hailing, which makes it ripe for reinvention. We also spend time unpacking what an AI-powered rollup actually means in practice. This is where the conversation becomes particularly interesting for founders and business leaders. Rather than selling software into traditional businesses and hoping for adoption, Dwelly takes control of both the operations and the technology. That allows them to redesign workflows, remove bottlenecks, and deliver a more consistent experience for landlords and tenants alike. The result is a model where a single operator can manage hundreds, even thousands, of properties with a level of service that would have been impossible just a few years ago. 
Of course, there are bigger implications here too. If this model works at scale, it raises questions about how many other service industries could be rebuilt in a similar way. It also highlights the growing role of venture-backed rollups, particularly with firms like General Catalyst backing this approach as a new investment category. But it is not without challenges. Changing operational behavior, integrating acquisitions, and maintaining service quality while scaling fast are all complex problems that cannot be solved by technology alone. This episode left me thinking about where the real value in AI sits. Is it in the tools themselves, or in the willingness to rethink how a business actually operates? And if AI can transform something as established as property management, which industries are next in line for the same kind of reinvention? I would love to hear your thoughts. Are AI-powered rollups the future of service industries, or do they introduce a new set of risks we are only beginning to understand?

How are businesses supposed to grow when technology is moving faster than regulation, customer expectations keep shifting, and AI is changing the rules in real time? In this episode, I sat down with Derya Matras, Vice President of EMEA at Meta, to talk about what growth really looks like for businesses operating in Europe, the Middle East, and Africa right now. This was a fascinating conversation because it went far beyond the usual talking points around AI and advertising. Derya brought a broader view of the pressure many businesses are under today, from macroeconomic uncertainty and political complexity to changing consumer behavior, tighter margins, and the need to adapt to a world where AI is now part of everyday decision making. What really stood out to me was her point that this moment is about far more than adopting new tools. It is about culture, leadership, and having the discipline to know what you are actually trying to achieve. Derya spoke about the importance of having a clear North Star goal, getting the foundations right, and making sure businesses are not simply adding AI into broken systems or unclear strategies. Because as she put it, AI can make everything more powerful, but it can also amplify mistakes. That is such an important point, especially at a time when so many companies are racing to show they are doing something with AI without always knowing what success should look like. We also explored how Meta sees its role in supporting growth across Europe's digital economy. Derya shared insights into how Meta's platforms are helping businesses of all sizes reach customers in ways they simply could not do on their own. For large companies, that may mean better measurement, faster optimization, and more personalized engagement. But for smaller businesses, the stakes can be even higher. 
She shared examples that brought those numbers to life, including entrepreneurs using Instagram and WhatsApp to reach global markets, support their families, and create jobs in ways that would have been out of reach just a few years ago. Another part of the conversation I found especially interesting was the tension between innovation and regulation in Europe. Derya was honest about how complicated and fragmented the environment has become, and how that complexity can slow progress or delay the rollout of new products. At the same time, she made a strong case that Europe still has a real opportunity ahead if it can find the right balance. That balance matters not only for big tech companies, but for startups, small businesses, creators, and the wider economy that increasingly depends on digital tools to compete and grow. We also talked about creativity, measurement, AI assistants, wearables, and even how these technologies are beginning to shape life at home as much as at work. It all made for a conversation that felt very current, but also deeply practical. So as AI becomes woven into advertising, business operations, and everyday life, are organizations truly building the foundations they need to benefit from it, or are they still chasing the next shiny thing? And what do you think Europe needs to get right to make sure innovation and opportunity can keep moving forward?

What happens when AI ambition starts moving faster than the infrastructure built to support it? In this episode, I spoke with Lee Caswell, SVP of Product and Solutions at Nutanix, about the latest Enterprise Cloud Index and what it tells us about where enterprise IT really is right now. There is no shortage of AI headlines, product launches, and promises about what comes next, but this conversation gets behind the noise and into the operational reality that many business and technology leaders are now facing. As Lee explained, AI is not arriving in isolation. It is pulling containers, data strategy, hardware decisions, governance, and application modernization along with it. One of the biggest themes in our conversation was the growing link between AI workloads and container adoption. Lee made the point that applications still sit at the top of the org chart, and infrastructure exists to serve them. As more AI-enabled applications are built by developers who favor containers and Kubernetes-based environments, enterprises are being pushed to rethink how they support those new workloads. We talked about why containers are becoming such an important part of modern application strategy, how they help organizations handle distributed AI use cases, and why many businesses are trying to balance speed and flexibility without giving up the resilience and control they have spent years building into their infrastructure. We also spent time on the less glamorous side of AI adoption, but arguably the part that matters most. Shadow AI, data sovereignty, unpredictable token costs, and infrastructure readiness are all becoming board-level issues. Lee shared why so many organizations are realizing that AI cannot simply be layered onto existing systems without deeper changes underneath. New hardware, new software, new governance models, and a more consistent approach across edge, on-prem, private cloud, and public cloud environments are all part of the picture now. 
What I enjoyed most about this conversation was that it never framed AI as magic. It framed it as work. Real work that demands better architecture, sharper oversight, and faster decision-making from IT teams that are already under pressure. So if your organization is racing to adopt AI, are you also building the foundation needed to support it responsibly, and where do you think the biggest risk sits right now? Share your thoughts with me.

How far can we trust research that is generated without asking a single human being? In this episode, I sat down with Jordan Harper from Qualtrics to unpack one of the most talked-about developments at the Qualtrics X4 Summit, synthetic research. It is a topic that sparks curiosity, excitement, and a fair amount of skepticism in equal measure. And honestly, that tension is exactly why this conversation matters. Jordan brings a rare mix of scientific thinking and real-world technology experience, which makes him well placed to cut through the hype. We explored what synthetic panels actually are, and just as importantly, what they are not. While many assume this is simply about asking a large language model for answers, the reality is far more nuanced. The approach Jordan and his team are building is grounded in how humans respond to surveys, trained on vast datasets to reflect the inconsistencies, biases, and unpredictability that make human insight valuable in the first place. What stood out throughout our conversation was the idea that synthetic research should be seen as additive rather than a replacement. It offers speed, flexibility, and the ability to test ideas quickly, but it does not replace the depth and lived experience that only real people can provide. In fact, some of the most interesting insights come from comparing synthetic responses with human ones, revealing patterns, biases, and even blind spots in traditional research methods. We also got into the practical side of things. From controlling for issues like survey fatigue and social desirability bias, to experimenting with question design in ways that would be difficult with human respondents, synthetic research opens up new ways of working. At the same time, it raises important questions about validation, trust, and where to draw the line when decisions carry real-world consequences. For me, this episode is about perspective. 
In a world where AI is accelerating everything, it can be tempting to look for shortcuts. But as Jordan explains, the real value comes from using these tools thoughtfully, alongside human insight rather than in place of it. So as this technology continues to evolve, how should researchers and business leaders strike that balance? And where could synthetic research help you ask better questions before you make your next big decision?

What does customer experience really mean when every company claims to put the customer first? In this episode, I sat down with Jeannie Walters, founder of Experience Investigators, to unpack why so many organizations talk about customer experience yet struggle to turn it into something that drives real business outcomes. With more than two decades of hands-on work across industries, Jeannie brings a perspective that cuts through the noise and focuses on what actually works inside complex organizations. Our conversation took place at the Qualtrics X4 Summit, where one theme kept resurfacing. While AI dominated headlines, there was a noticeable shift back toward strategy, discipline, and accountability. Jeannie has been making that case for years. As she explained, customer experience cannot sit on the sidelines as a reporting function or a collection of metrics. It has to become a daily business discipline, one that shapes decisions across leadership, operations, and culture. We explored the thinking behind her new book, Experience Is Everything, and the patterns she has seen repeated across organizations. Leaders invest in tools, gather feedback, and build dashboards, yet still struggle to connect those efforts to outcomes like retention, revenue, and long-term trust. Jeannie argues that the missing piece is often clarity. What does customer-centric actually mean for your organization? What are you trying to achieve, and how will you measure success in a way that matters to the business? Without those answers, even the best technology will fall short. There were also some honest reflections on AI. While it is accelerating everything, it also raises the stakes. Customers are becoming more aware of how their data is used, and trust is becoming harder to earn and easier to lose. That creates both an opportunity and a risk. 
Organizations that treat customer experience as a strategic priority can use AI to strengthen relationships, while those that treat it as a cost center may simply scale poor experiences faster. What stood out most in this conversation was the shift from theory to action. From redefining teams that were stuck reporting on metrics to empowering them to lead business change, Jeannie shared practical examples of how mindset, strategy, and execution come together. It is a reminder that customer experience is not owned by one team. It is something that either shows up in every interaction or not at all. So as AI continues to reshape how businesses operate, are we using it to deepen trust and deliver better experiences, or are we simply amplifying what already exists? And where does customer experience truly sit inside your organization today?

What does a great patient experience really look like when people are at their most vulnerable? In this episode, I sat down with Stanford Health Care's SVP and Chief Patient Experience and Operational Performance Officer, Alpa Vyas, to explore how one of the world's leading healthcare organizations is rethinking the human side of care. From the outside, healthcare is often seen as a system of processes, technology, and clinical outcomes. But as Alpa explains, every interaction sits within a deeply emotional moment in someone's life, where fear, uncertainty, and complexity collide. That reality shapes everything. Our conversation goes back to the early days of Stanford's transformation, where Alpa recognized a gap that many organizations still struggle with today. Improvement efforts were underway, systems were being optimized, yet the patient voice was largely absent. Inspired by design thinking principles from Stanford's own d.school, her team began with empathy as the foundation. That shift changed the direction of everything that followed, from how feedback was gathered to how decisions were made across the organization. We also explored the role of technology, and where it truly fits. There is often a temptation to lead with AI or automation, but Alpa brings the focus back to culture, behavior, and trust. Technology, including platforms like Qualtrics, became powerful once the right questions were being asked and the right mindset was in place. Moving from delayed paper surveys to real-time feedback transformed not only how quickly issues could be addressed, but how patients felt heard. One story stood out where a patient received a follow-up call before even leaving the parking lot, a simple moment that redefined their perception of care. We also touched on "Operation Blue Sky," an initiative that looks beyond traditional surveys to capture insight from call recordings, messages, and other unstructured data sources. 
It opens the door to a future where healthcare providers can anticipate problems before they happen and intervene at the right moment. That raises important questions around pace, trust, and readiness, especially in an industry that has good reason to move carefully. This episode is ultimately a conversation about balance. Between innovation and responsibility, between efficiency and empathy, and between data and human connection. So how do we ensure that as healthcare becomes more advanced, it also becomes more human? And what lessons from this journey could apply far beyond healthcare?

What happens when customer experience stops being a soft metric and starts becoming a direct driver of revenue, retention, and real-time action? In this episode, I sat down with Jeff Gelfuso, SVP and Chief Product and Experience Officer at Qualtrics, during X4 Summit in Seattle to talk about how AI is changing the way businesses understand and improve customer relationships. Jeff shared how his role sits at the point where product, experience, and business outcomes meet, helping customers use Qualtrics in ways that are both practical and measurable. One of the biggest themes in our conversation was the shift from simply listening to customers to actually doing something in the moment. For years, many companies have relied on surveys, dashboards, and reports that told them what had already gone wrong. Jeff explained how that model is changing fast. With AI, organizations can now understand signals as they happen and trigger action before a poor experience turns into churn, frustration, or lost revenue. We talked about examples from brands like Marriott and TruGreen, and this is where the conversation became especially interesting. In TruGreen's case, AI-powered analysis helped reveal that service quality, not price, was the real reason customers were leaving. That kind of insight changed the conversation from guesswork to financial impact. When one point of retention can mean $10 million in annual revenue, experience suddenly becomes a boardroom issue, not just a customer service metric. Jeff also offered a refreshingly clear view on agentic AI. Instead of treating it as another layer of hype, he described it as a way to turn experience data into action, using context to help businesses close the loop faster and with greater precision. That means moving beyond smarter dashboards and toward systems that can surface priorities, recommend next steps, and help teams act without getting buried in complexity. 
Another standout part of the discussion was how Qualtrics is helping customers move beyond pilot purgatory. Jeff was candid that meaningful AI progress still takes work, focus, and the discipline to solve the right problems first. The companies seeing real value are not trying to do everything at once. They are identifying specific use cases, tying them to real business outcomes, and building from there. What I enjoyed most about this conversation was how clearly Jeff connected technology to human experience. Yes, there was plenty of discussion around AI, automation, and context, but at the heart of it all was something much simpler. Better experiences build stronger relationships, and stronger relationships drive loyalty, trust, and growth. So if your business is still treating experience as a nice-to-have instead of a measurable driver of performance, what might you be missing right in front of you? I would love to hear your thoughts after listening.

What does it really mean to lead in AI when the headlines are loud, the claims are endless, and the real signals are often buried under hype? In this episode, I sit down with Ed White from Clarivate to make sense of one of the most important questions in technology right now: who is actually leading the AI innovation race, and what does the data really tell us? Ed leads the Clarivate Centre for IP and Innovation Research, where his team analyzes enormous volumes of intellectual property and innovation data to understand where technology is heading, who is building it, and which ideas are likely to shape the future. That matters because AI is no longer a side story inside tech. It is becoming an economic issue, a business issue, and increasingly a geopolitical one too. Our conversation centers on fresh Clarivate research showing that AI patent filings passed 1.1 million overall by 2025, with growth accelerating at a pace that is hard to ignore. Ed helps unpack what that actually means in practical terms. I found this especially interesting because the report does not simply point to the familiar names everyone already talks about. It also highlights academic institutions, automotive companies, and businesses working behind the scenes with far less noise. What I enjoyed most about this discussion is that Ed brings a rare mix of technical depth and real clarity. He does not just throw out huge numbers and leave them hanging there. He explains what they mean for investors, enterprise leaders, governments, and anyone trying to understand where this market is heading next. We also get into one of the biggest tensions in AI today: the balance between speed and assurance. That part really stayed with me. In a market obsessed with moving fast, Ed makes a strong case that trust, explainability, and usability may end up shaping who actually wins. This is a conversation about much more than patents.
It is about power, strategy, timing, and how innovation spreads across borders, industries, and institutions. If you want to cut through the noise and hear a more data-led view of the AI race, this episode will give you plenty to think about. As always, I would love to hear what stood out to you most after listening, so please share your thoughts with me. When you look at the AI race today, do you think the real leaders are the companies making the most noise, or the ones quietly building for the long term?