Fed up with tech hype? Looking for a tech podcast where you can learn from tech leaders and startup stories about how technology is transforming businesses and reshaping industries? In this daily tech podcast, Neil interviews tech leaders, CEOs, entrepreneurs, futurists, technologists, thought lead…
The Tech Blog Writer Podcast is a must-listen for anyone interested in the intersection of technology and various industries. Hosted by Neil Hughes, this podcast features interviews with a wide range of guests, including visionary entrepreneurs and industry experts. Neil has a remarkable talent for breaking down complex topics into easily understandable discussions, making the show accessible to listeners from all backgrounds. One of the best aspects of this podcast is the diversity of guests, as they come from different industries and share their cutting-edge technology solutions. It provides a great source of inspiration and knowledge for staying up to date with the latest advancements in tech.
The worst aspect of The Tech Blog Writer Podcast is that sometimes the discussions can feel a bit rushed due to the time constraints of each episode. With so many interesting guests and topics to cover, it would be great if there were more time for in-depth conversations. Additionally, while Neil does an excellent job at selecting diverse guests, occasionally it would be beneficial to have more representation from underrepresented communities in tech.
In conclusion, The Tech Blog Writer Podcast is an excellent resource for those looking to stay informed about the latest tech advancements while learning from visionary entrepreneurs across various industries. Neil's ability to break down complex topics and his engaging interviewing style make this podcast a valuable source of inspiration and knowledge. Despite some minor flaws, it remains a must-listen for anyone interested in staying up to date with cutting-edge technology solutions and developments.

What happens when AI ambition starts moving faster than the infrastructure built to support it? In this episode, I spoke with Lee Caswell, SVP of Product and Solutions at Nutanix, about the latest Enterprise Cloud Index and what it tells us about where enterprise IT really is right now. There is no shortage of AI headlines, product launches, and promises about what comes next, but this conversation gets behind the noise and into the operational reality that many business and technology leaders are now facing. As Lee explained, AI is not arriving in isolation. It is pulling containers, data strategy, hardware decisions, governance, and application modernization along with it. One of the biggest themes in our conversation was the growing link between AI workloads and container adoption. Lee made the point that applications still sit at the top of the org chart, and infrastructure exists to serve them. As more AI-enabled applications are built by developers who favor containers and Kubernetes-based environments, enterprises are being pushed to rethink how they support those new workloads. We talked about why containers are becoming such an important part of modern application strategy, how they help organizations handle distributed AI use cases, and why many businesses are trying to balance speed and flexibility without giving up the resilience and control they have spent years building into their infrastructure. We also spent time on the less glamorous side of AI adoption, but arguably the part that matters most. Shadow AI, data sovereignty, unpredictable token costs, and infrastructure readiness are all becoming board-level issues. Lee shared why so many organizations are realizing that AI cannot simply be layered onto existing systems without deeper changes underneath. New hardware, new software, new governance models, and a more consistent approach across edge, on-prem, private cloud, and public cloud environments are all part of the picture now. 
What I enjoyed most about this conversation was that it never framed AI as magic. It framed it as work. Real work that demands better architecture, sharper oversight, and faster decision-making from IT teams that are already under pressure. So if your organization is racing to adopt AI, are you also building the foundation needed to support it responsibly, and where do you think the biggest risk sits right now? Share your thoughts with me.

How far can we trust research that is generated without asking a single human being? In this episode, I sat down with Jordan Harper from Qualtrics to unpack one of the most talked-about developments at the Qualtrics X4 Summit: synthetic research. It is a topic that sparks curiosity, excitement, and skepticism in equal measure. And honestly, that tension is exactly why this conversation matters. Jordan brings a rare mix of scientific thinking and real-world technology experience, which makes him well placed to cut through the hype. We explored what synthetic panels actually are, and just as importantly, what they are not. While many assume this is simply about asking a large language model for answers, the reality is far more nuanced. The approach Jordan and his team are building is grounded in how humans respond to surveys, trained on vast datasets to reflect the inconsistencies, biases, and unpredictability that make human insight valuable in the first place. What stood out throughout our conversation was the idea that synthetic research should be seen as additive rather than a replacement. It offers speed, flexibility, and the ability to test ideas quickly, but it does not replace the depth and lived experience that only real people can provide. In fact, some of the most interesting insights come from comparing synthetic responses with human ones, revealing patterns, biases, and even blind spots in traditional research methods. We also got into the practical side of things. From controlling for issues like survey fatigue and social desirability bias, to experimenting with question design in ways that would be difficult with human respondents, synthetic research opens up new ways of working. At the same time, it raises important questions about validation, trust, and where to draw the line when decisions carry real-world consequences. For me, this episode is about perspective.
In a world where AI is accelerating everything, it can be tempting to look for shortcuts. But as Jordan explains, the real value comes from using these tools thoughtfully, alongside human insight rather than in place of it. So as this technology continues to evolve, how should researchers and business leaders strike that balance? And where could synthetic research help you ask better questions before you make your next big decision?

What does customer experience really mean when every company claims to put the customer first? In this episode, I sat down with Jeannie Walters, founder of Experience Investigators, to unpack why so many organizations talk about customer experience yet struggle to turn it into something that drives real business outcomes. With more than two decades of hands-on work across industries, Jeannie brings a perspective that cuts through the noise and focuses on what actually works inside complex organizations. Our conversation took place at the Qualtrics X4 Summit, where one theme kept resurfacing. While AI dominated headlines, there was a noticeable shift back toward strategy, discipline, and accountability. Jeannie has been making that case for years. As she explained, customer experience cannot sit on the sidelines as a reporting function or a collection of metrics. It has to become a daily business discipline, one that shapes decisions across leadership, operations, and culture. We explored the thinking behind her new book, Experience Is Everything, and the patterns she has seen repeated across organizations. Leaders invest in tools, gather feedback, and build dashboards, yet still struggle to connect those efforts to outcomes like retention, revenue, and long-term trust. Jeannie argues that the missing piece is often clarity. What does customer-centric actually mean for your organization? What are you trying to achieve, and how will you measure success in a way that matters to the business? Without those answers, even the best technology will fall short. There were also some honest reflections on AI. While it is accelerating everything, it also raises the stakes. Customers are becoming more aware of how their data is used, and trust is becoming harder to earn and easier to lose. That creates both an opportunity and a risk. 
Organizations that treat customer experience as a strategic priority can use AI to strengthen relationships, while those that treat it as a cost center may simply scale poor experiences faster. What stood out most in this conversation was the shift from theory to action. From redefining teams that were stuck reporting on metrics to empowering them to lead business change, Jeannie shared practical examples of how mindset, strategy, and execution come together. It is a reminder that customer experience is not owned by one team. It is something that either shows up in every interaction or not at all. So as AI continues to reshape how businesses operate, are we using it to deepen trust and deliver better experiences, or are we simply amplifying what already exists? And where does customer experience truly sit inside your organization today?

What does a great patient experience really look like when people are at their most vulnerable? In this episode, I sat down with Stanford Health Care's SVP and Chief Patient Experience and Operational Performance Officer, Alpa Vyas, to explore how one of the world's leading healthcare organizations is rethinking the human side of care. From the outside, healthcare is often seen as a system of processes, technology, and clinical outcomes. But as Alpa explains, every interaction sits within a deeply emotional moment in someone's life, where fear, uncertainty, and complexity collide. That reality shapes everything. Our conversation goes back to the early days of Stanford's transformation, where Alpa recognized a gap that many organizations still struggle with today. Improvement efforts were underway, systems were being optimized, yet the patient voice was largely absent. Inspired by design thinking principles from Stanford's own d.school, her team began with empathy as the foundation. That shift changed the direction of everything that followed, from how feedback was gathered to how decisions were made across the organization. We also explored the role of technology, and where it truly fits. There is often a temptation to lead with AI or automation, but Alpa brings the focus back to culture, behavior, and trust. Technology, including platforms like Qualtrics, became powerful once the right questions were being asked and the right mindset was in place. Moving from delayed paper surveys to real-time feedback transformed not only how quickly issues could be addressed, but how patients felt heard. One story stood out where a patient received a follow-up call before even leaving the parking lot, a simple moment that redefined their perception of care. We also touched on "Operation Blue Sky," an initiative that looks beyond traditional surveys to capture insight from call recordings, messages, and other unstructured data sources. 
It opens the door to a future where healthcare providers can anticipate problems before they happen and intervene at the right moment. That raises important questions around pace, trust, and readiness, especially in an industry that has good reason to move carefully. This episode is ultimately a conversation about balance. Between innovation and responsibility, between efficiency and empathy, and between data and human connection. So how do we ensure that as healthcare becomes more advanced, it also becomes more human? And what lessons from this journey could apply far beyond healthcare?

What happens when customer experience stops being a soft metric and starts becoming a direct driver of revenue, retention, and real-time action? In this episode, I sat down with Jeff Gelfuso, SVP and Chief Product and Experience Officer at Qualtrics, during the X4 Summit in Seattle to talk about how AI is changing the way businesses understand and improve customer relationships. Jeff shared how his role sits at the point where product, experience, and business outcomes meet, helping customers use Qualtrics in ways that are both practical and measurable. One of the biggest themes in our conversation was the shift from simply listening to customers to actually doing something in the moment. For years, many companies have relied on surveys, dashboards, and reports that told them what had already gone wrong. Jeff explained how that model is changing fast. With AI, organizations can now understand signals as they happen and trigger action before a poor experience turns into churn, frustration, or lost revenue. We talked about examples from brands like Marriott and TruGreen, and this is where the conversation became especially interesting. In TruGreen's case, AI-powered analysis helped reveal that service quality, not price, was the real reason customers were leaving. That kind of insight changed the conversation from guesswork to financial impact. When one point of retention can mean $10 million in annual revenue, experience suddenly becomes a boardroom issue, not just a customer service metric. Jeff also offered a refreshingly clear view on agentic AI. Instead of treating it as another layer of hype, he described it as a way to turn experience data into action, using context to help businesses close the loop faster and with greater precision. That means moving beyond smarter dashboards and toward systems that can surface priorities, recommend next steps, and help teams act without getting buried in complexity.
Another standout part of the discussion was how Qualtrics is helping customers move beyond pilot purgatory. Jeff was candid that meaningful AI progress still takes work, focus, and the discipline to solve the right problems first. The companies seeing real value are not trying to do everything at once. They are identifying specific use cases, tying them to real business outcomes, and building from there. What I enjoyed most about this conversation was how clearly Jeff connected technology to human experience. Yes, there was plenty of discussion around AI, automation, and context, but at the heart of it all was something much simpler. Better experiences build stronger relationships, and stronger relationships drive loyalty, trust, and growth. So if your business is still treating experience as a nice-to-have instead of a measurable driver of performance, what might you be missing right in front of you? I would love to hear your thoughts after listening.

What does it really mean to lead in AI when the headlines are loud, the claims are endless, and the real signals are often buried under hype? In this episode, I sit down with Ed White from Clarivate to make sense of one of the most important questions in technology right now: who is actually leading the AI innovation race, and what does the data really tell us? Ed leads the Clarivate Centre for IP and Innovation Research, where his team analyzes enormous volumes of intellectual property and innovation data to understand where technology is heading, who is building it, and which ideas are likely to shape the future. That matters because AI is no longer a side story inside tech. It is becoming an economic issue, a business issue, and increasingly a geopolitical one too. Our conversation centers on fresh Clarivate research showing that AI patent filings passed 1.1 million overall by 2025, with growth accelerating at a pace that is hard to ignore. Ed helps unpack what that actually means in practical terms. I found this especially interesting because the report does not simply point to the familiar names everyone already talks about. It also highlights academic institutions, automotive companies, and businesses working behind the scenes with far less noise. What I enjoyed most about this discussion is that Ed brings a rare mix of technical depth and real clarity. He does not just throw out huge numbers and leave them hanging there. He explains what they mean for investors, enterprise leaders, governments, and anyone trying to understand where this market is heading next. We also get into one of the biggest tensions in AI today: the balance between speed and assurance. That part really stayed with me. In a market obsessed with moving fast, Ed makes a strong case that trust, explainability, and usability may end up shaping who actually wins. This is a conversation about much more than patents.
It is about power, strategy, timing, and how innovation spreads across borders, industries, and institutions. If you want to cut through the noise and hear a more data-led view of the AI race, this episode will give you plenty to think about. As always, I would love to hear what stood out to you most after listening, so please share your thoughts with me. When you look at the AI race today, do you think the real leaders are the companies making the most noise, or the ones quietly building for the long term?

What does it really take to move AI from impressive demos into the hands of the people who keep the world running every day? In this episode of Tech Talks Daily, I sat down with Kriti Sharma, CEO of IFS Nexus Black, to explore a side of AI that rarely gets the spotlight. While much of the conversation around artificial intelligence focuses on chatbots and copilots, Kriti is working in environments where failure is not an option. Manufacturing plants, energy grids, airlines, and field service operations all depend on precision, experience, and consistency. What struck me early in our conversation was how she reframes the entire AI debate. The challenge is not building the technology, it is building trust in it. Kriti's journey into AI began long before it became a boardroom priority. From building her first robot as a teenager to advising global organizations and policymakers, she has always focused on solving real problems rather than chasing trends. That perspective carries through into her work today, where she spends time on factory floors wearing safety gear alongside engineers and technicians. It is a hands-on approach that reveals something many leaders miss. People do not adopt AI because it is advanced. They adopt it when it solves a problem they recognize in their day-to-day work. One of the most interesting themes we explored was the widening gap between what AI can do and how quickly organizations are ready to use it. Kriti described how that gap plays out on the ground, especially among deskless workers who make up the majority of the global workforce. In these environments, the conversation is far less about replacing jobs and far more about preserving knowledge, improving consistency, and helping people perform at their best. When a veteran worker with decades of experience walks out the door, that expertise often leaves with them. AI, when designed well, can help capture and share that knowledge across an entire workforce. 
We also discussed how IFS Nexus Black is tackling what many describe as "pilot purgatory," where companies experiment with AI but struggle to deploy it at scale. Kriti shared how building solutions alongside customers, rather than handing over generic tools, leads to faster adoption and measurable results. Real-world examples brought this to life, including how industrial AI is helping organizations move from reactive firefighting to proactive decision-making, reducing downtime and improving operational performance in ways that directly impact the bottom line. As our conversation moved toward the future, Kriti offered a clear message for leaders. The best way to prepare for AI is to start using it. Not as a novelty, but as a daily tool that can amplify how work gets done. The organizations that encourage experimentation and share those learnings across teams are the ones most likely to see real impact. So as AI continues to evolve at pace, the question is no longer whether the technology is ready. It is whether organizations and their people are ready to meet it halfway, and what happens if they are not?

How are global payment systems quietly shifting beneath our feet, and what does that mean for businesses trying to grow across borders? In this episode of Tech Talks Daily, I sat down with Stuart Neal, CEO of Boku, to unpack a transformation that many consumers barely notice but every global business feels. Payments have long been dominated by familiar names like Visa and Mastercard, yet Stuart explains how that dominance is slowly being challenged by a surge in local payment methods. From mobile wallets in emerging markets to direct carrier billing in places where credit cards are far from universal, the way people pay is becoming far more fragmented, and far more local. What stood out for me in this conversation was the geopolitical and economic dimension behind it all. Stuart highlighted how events like the pandemic and even global conflicts have pushed governments and central banks to rethink their reliance on external payment networks. When entire payment systems can be switched off overnight, it forces countries to consider building their own infrastructure. That shift is not only about sovereignty, it is about control over financial ecosystems, consumer behavior, and ultimately economic stability. We also explored what this means for businesses still operating with a card-first mindset. While card payments are not disappearing, their relative share is being overtaken by a growing ecosystem of alternative methods. That creates both opportunity and complexity. Companies now face the challenge of integrating hundreds of payment options across multiple markets, each with its own regulations, currencies, and customer expectations. Stuart offered a candid view that for most organizations, building this infrastructure alone is unrealistic, which is why aggregation platforms like Boku are stepping in to bridge that gap. 
The conversation then turned toward the future, particularly the rise of agentic AI and what Stuart described as the "last mile problem" in payments. While AI may soon handle discovery and purchasing decisions, the moment of payment still requires trust, authentication, and verification. That friction is not a flaw, it is a safeguard, and it raises important questions about how seamless commerce can really become. We also touched on subscription fatigue, cross-border expansion, and the lessons global brands like Microsoft and Netflix have learned about meeting customers where they are. One thing became clear throughout our discussion. If you ignore local payment preferences, you are effectively turning away a large portion of your potential audience. So as payment methods continue to evolve and diversify, are businesses ready to rethink their assumptions about how money moves, or will they risk being left behind in a world that is becoming increasingly local at scale?

What does it really take to turn a massive AI infrastructure investment into actual business value? In this episode, I'm joined by Alex Bouzari, founder and CEO of DDN, for a conversation that gets right to the heart of where AI infrastructure is heading next. There is a lot of noise in the market about faster chips, larger models, and bigger data centers, but Alex argues that the real story has changed. According to him, GPUs are no longer the main constraint. The true bottleneck now lies in the data layer, where data is moved, cached, served, and managed across increasingly complex AI environments. That shift matters because many organizations are still thinking about AI in terms of hardware acquisition. Buy more GPUs, add more power, build more capacity. But as Alex explains, that mindset misses the bigger picture. If your data architecture cannot keep pace, those expensive systems stall, efficiency drops, and the return on investment quickly becomes shaky. It was a timely discussion, especially as NVIDIA's Rubin platform points toward rack-scale AI factories where compute, networking, storage, and offload all need to work together as one operational system. One part I found especially interesting was Alex's focus on measuring efficiency. He argued that the future winners in AI will not simply be the companies with the most hardware. They will be the ones who think like industrial operators, measuring cost per token, rack utilization, time-to-value, and power consumption per unit of intelligence output. That is a very different conversation from the hype cycle, and it is one that business leaders need to hear. AI value is no longer about showing that something can work. It is about proving that it can work predictably, securely, and economically at scale. We also talked about DDN's collaboration with NVIDIA, the role of BlueField-4 DPUs, and why inference performance now depends on intelligent memory architecture and data movement just as much as raw compute. 
Alex shared how DDN is helping customers reach up to 99 percent GPU utilization and reduce time to first token for long-context workloads. Those numbers are impressive on their own, but what matters most is what they represent: better throughput, lower waste, and AI systems that move from science project to production reality. There is also an important leadership lesson running through this conversation. DDN has been profitable for over a decade, powers more than one million GPUs worldwide, and has built its business by staying close to real customer pain points. Alex speaks with the kind of clarity that comes from building through constraints rather than simply talking around them. If AI factories are going to define the next phase of enterprise technology, how should leaders rethink infrastructure, efficiency, and value creation before they invest in the next wave, and what do you think?

Are employees really ready for AI in the workplace, or are we moving faster than people can realistically keep up? In this episode, I'm joined by David Evans, Chief Product Strategist at GoTo, to explore what is actually happening inside organizations as AI becomes part of everyday work. There is a growing assumption that businesses are already well on their way, with employees confidently using AI tools and leaders rolling out strategies at pace. But David brings a more measured view, backed by research and real-world insight, that suggests the picture is far more complex. One of the biggest themes in our conversation is the gap between expectation and reality. Many companies assume that younger employees, particularly Gen Z, naturally understand how to use AI in a professional setting. David challenges that idea directly. He explains that while familiarity with technology is high, the ability to apply AI effectively, responsibly, and in a business context is something that every generation is still learning. Without clear guidance, training, and governance, organizations risk creating confusion rather than progress. We also talk about how AI is quietly becoming embedded in everyday workflows. Instead of replacing roles outright, it is helping people shift their focus toward higher-value work. That shift is already visible in areas like customer support, where contact centers are evolving through smarter automation, better tools for agents, and a growing acceptance of remote and distributed teams. David shares what this could look like over the next year, and why the balance between human and machine will remain central to delivering good experiences. Another area we explore is the growing need for integration. Many organizations are dealing with fragmented communication tools, rising costs, and increasing complexity. David explains why there is a clear move toward unified platforms that bring communication, collaboration, and AI together in a more cohesive way. 
That includes the rise of conversational AI, with tools like AI receptionists becoming easier to deploy and more widely trusted. Of course, none of this happens without challenges. Security, data privacy, and the risks associated with shadow IT and generative AI are becoming more visible. David outlines how technology providers are responding, and what leaders need to think about as they balance innovation with responsibility. This conversation offers a grounded look at where workplace AI is heading, cutting through assumptions and focusing on what leaders need to understand right now. So as AI becomes part of the fabric of everyday work, are organizations doing enough to support their people, or are they expecting too much too soon?

How can companies be drowning in customer data and still struggle to make better decisions? In this episode, I speak with Jochem van der Veer, CEO and co-founder of TheyDo, about a problem that many business leaders quietly recognize but rarely solve. Organizations are investing heavily in customer experience and AI, yet the results often fall short. There is more data than ever before, more dashboards, more reporting, and still a disconnect between insight and action. Jochem offers a refreshing perspective shaped by his work with global brands like Ford, Atlassian, Cisco, and Home Depot. He explains that the issue is not a lack of data, but a lack of alignment. Teams operate in silos, each working with their own version of the truth, which leads to fragmented decisions that make sense internally but fail from the customer's point of view. It is not intentional, but the outcome is the same. A disconnected experience that slows progress and creates hidden costs across the business. We spend time unpacking what this looks like in practice. Many customer experience teams are still focused on collecting and reporting data rather than influencing decisions. Insights travel up the organization, often reaching senior leadership, but rarely translate into meaningful action. That gap, as Jochem describes it, turns customer experience into a cost center rather than a driver of growth. What makes this conversation particularly relevant right now is the role of AI. While AI has made it easier to process vast amounts of unstructured data, it has also exposed how unprepared many organizations are to act on it. Jochem shares how experience intelligence is emerging as a new way of thinking, one that connects customer feedback, operational data, and business outcomes into a single, actionable view. It shifts the focus from understanding what happened to deciding what to do next. 
We also explore the partnership between TheyDo and PwC, and how combining structured frameworks with journey management technology can help organizations move from strategy to execution. From reducing wasted investment to identifying the real root causes behind customer issues, there is a clear opportunity to rethink how decisions are made. This episode challenges some widely held assumptions, including the idea that customer experience is a standalone function. Instead, it is becoming a capability that needs to be embedded across the entire organization. So as AI continues to accelerate the pace of business, are companies ready to move beyond reporting and finally turn customer insight into meaningful action?

What happens when financial markets stop reacting to data and start reacting to narratives in real time? In this episode, I'm joined by Wilson Chan, CEO and founder of Permutable AI, to explore how artificial intelligence is reshaping the way financial institutions interpret the world around them. Wilson brings a rare perspective, combining years of experience as a trader with a deep background in computer science, and it shows in the way he describes this shift. We talk about how markets are moving away from traditional quant models and toward AI-native systems that can reason over vast amounts of unstructured global information. That includes everything from policy changes and geopolitical events to the subtle ways narratives form and spread across media. What stood out to me in this conversation is how Wilson challenges the idea that markets are driven purely by fundamentals. Instead, he argues that perception and reality are increasingly intertwined. If enough people believe a story, that belief can influence price movements just as much as financial performance. Permutable AI is built on this idea, scanning hundreds of thousands of articles in real time to identify how narratives evolve and impact commodities, energy markets, and currencies. It's a fascinating shift that raises important questions about how investors separate meaningful insight from noise. We also explore the role of vertical LLMs and why generic AI models fall short in financial environments. Wilson explains how embedding financial relationships and ontology directly into models creates outputs that are structured, traceable, and ready for decision-making. That focus on explainability and auditability becomes even more important as AI systems take on greater responsibility. If something goes wrong, understanding why it happened is what maintains trust, and without that, adoption quickly stalls. There's also a broader conversation here about where all of this is heading. 
From multi-agent systems replacing traditional analytics stacks to the ambition to build a full-world simulator for capital markets, it feels like we are at the early stages of something much bigger. But at the same time, Wilson is honest about the challenges, from integration hurdles to the human skills gap that continues to hold many organizations back. So if markets are now shaped by narratives, AI reasoning, and real-time global signals, how should business leaders and investors rethink their decision-making in the future?

What does customer experience look like inside a company most people associate with switches, infrastructure, and engineering rather than surveys, empathy, and brand perception? In this episode, recorded at the Qualtrics X4 event in Seattle, I sit down with Jerome Boissou, Head of Global Customer and Brand Experience at Legrand. Jerome has been with the company for 28 years and now leads a customer experience program designed to help Legrand better understand a customer base that is changing fast. This matters because Legrand is no longer serving only its traditional markets. The company now operates across a huge product portfolio, serves commercial buildings as well as residential markets, and plays a significant role in areas such as data centers and hospitality. At the heart of our conversation is Legrand's "Best Of Us" program, which was originally launched in 2018 and then revamped in 2021. Jerome explains that the original focus was on personas and journey mapping, but the company soon realized it needed a more quantitative approach too. What followed was a broader strategy built around three connected pillars: customer satisfaction, customer centricity, and brand equity. Rather than treating customer experience as a dashboard exercise, Legrand is using those pillars to improve business performance, spread customer knowledge internally, and help teams understand what different customer groups really want, expect, and struggle with. One of the strongest themes in this conversation is that feedback without action creates frustration. Jerome is very clear on that point. He explains how Legrand built a "close the loop" process, then went further with what the company calls a "customer room" process. That means identifying pain points and weak signals, routing them to the right internal teams, tracking them with KPIs, and making sure action follows. 
He shares that 100 percent of detractors are meant to be handled through that closed-loop approach, and that around 80 percent of pain points can be solved as quick wins. That is a refreshing reminder that customer experience only matters when it changes something. We also talk about the scale of measuring experience in a global B2B organization. Legrand runs yearly relational surveys for both direct and indirect customers, covering around 50 different personas, and supplements that with transactional surveys across 17 touchpoints. These include digital interactions, training, product launches, and post-case feedback from call centers. Jerome explains how Qualtrics became a key part of making that global program work, helping Legrand roll out surveys worldwide and giving teams a way to analyze feedback more easily and consistently. Of course, this being a tech podcast recorded at X4, we also get into AI. But what stood out to me is that Jerome does not talk about AI as a magic layer dropped on top of everything. He talks about context. In fact, context becomes one of the defining ideas in our conversation. Capturing feedback is useful, but understanding the environment around that feedback is what allows better decisions to happen. For Jerome, that is where AI becomes more useful, especially when it is trained within the reality of Legrand's complex markets rather than operating as a generic tool. Another part of this episode I found especially interesting is how Legrand brings employees into the customer experience process. Jerome shares an example of sending the same surveys to employees and asking them to answer from the customer's point of view. By comparing employee perception with actual customer feedback, Legrand can spot gaps, adjust training, and help teams build more empathy. 
In one case, factory teams thought customers were far less satisfied than they really were, simply because the internal metrics they saw every day focused only on pressure and output. Reframing that with real customer satisfaction data, including a product quality satisfaction score of around 95 percent, helped restore pride and perspective. This episode is really about something bigger than surveys or software. It is about how a global company can embed customer thinking into the culture, make people feel part of the process, and use data in a way that stays human. Jerome makes a strong case that customer experience in B2B is not separate from performance. It shapes brand perception, trust, internal alignment, and ultimately business outcomes. I'd love to hear your thoughts. How is your organization making sure customer feedback leads to action rather than just another report?

What does it take to turn millions of customer interactions into meaningful relationships instead of missed opportunities? In this episode, recorded live at the Qualtrics X4 Summit in Seattle, I sit down with James Bauman, Senior Director and Head of Experience, Analytics, and Insights at TruGreen. James leads customer experience, analytics, and retention strategy across a business that manages around 60 million customer touchpoints every year. And as he explains, that scale creates both opportunity and risk. At the center of our conversation is a challenge he describes as the "leaky bucket." TruGreen was investing heavily in acquiring customers, but too many were slipping away due to inconsistent experiences and missed moments. The real question became how to understand what customers actually need, when they need it, and how to respond in a way that builds trust and long-term loyalty. We explore how TruGreen built an omnichannel customer experience program designed to listen across every interaction, from digital channels to service calls, and connect that feedback with real customer behavior. But what stood out to me was how they moved beyond simply collecting feedback and into taking action in the moment. That's where AI agents come in. Rather than relying solely on traditional follow-up processes, TruGreen is now embedding AI directly into customer check-ins and surveys. These agents respond in real time, using context from the customer's history and recent interactions to provide relevant, immediate support. It changes the experience from something reactive to something far more responsive. The impact has been significant. James shares how AI agents are now addressing around 51% of customer concerns upfront and cutting escalations by more than 30%. 
At the same time, they are freeing up human teams to focus on the conversations that truly require empathy and relationship-building, rather than spending time on repetitive follow-ups that may never get a response. We also talk about the reality behind making this work. There's no shortcut. The speed of implementation came from the groundwork TruGreen had already put in place, building a strong data foundation and connecting systems across the business. Without that, the AI would lack the context needed to be useful. James also challenges some of the common narratives around AI. It's not something you can simply switch on and expect instant results. But it's also far from hype when applied thoughtfully. In his experience, AI agents can deliver real value, both in customer outcomes and business performance, when they are placed in the right moments and supported by the right data. For me, this conversation is a reminder that customer experience is shifting. It's moving away from slow feedback loops and into something far more immediate, where businesses can listen, understand, and act in real time. And I'd love to hear your perspective. Are you seeing AI agents genuinely improve customer experience in your organization, or are you still trying to figure out where they fit?

Useful Links:
Connect with James Bauman
Learn more about TruGreen
Qualtrics X4 Summit

What does it really take to move from AI hype to something that actually works inside a business? In this episode, I sit down with Shibani Ahuja, SVP of Enterprise IT Strategy at Salesforce, to talk about why so many enterprise AI projects stall long before they deliver real value. While the market is full of noise around agents, copilots, and automation, Shibani makes the case that the real issue is often much simpler and much harder at the same time. Design. She explains why model capability alone will never rescue poor architecture, weak governance, or unclear data ownership. Our conversation goes well beyond the usual agentic AI headlines. Shibani shares what she learned from speaking with hundreds of C-suite leaders over the past year, and why many early enterprise AI conversations were too focused on models instead of ecosystems. We unpack the difference between predictive, generative, and agentic AI, why trusted data means more than having lots of information, and how Salesforce's own internal journey revealed conflicting knowledge, governance gaps, and the importance of determinism in enterprise settings. I also loved Shibani's perspective on the human side of this transformation. We talk about why successful organizations are framing agents as a capacity multiplier rather than a headcount story, how to bring employees along through visible wins and shared learning, and why the best starting point is often a simple, boring use case that removes pain for frontline teams. She also shares her thoughts on the eight design principles for the agentic enterprise, the myths that frustrate her most, and what will separate the leaders from the laggards over the next 18 to 24 months. This is a conversation for anyone feeling pressure to do something with AI, but wanting a clearer view of what meaningful progress actually looks like. Are businesses building the right foundations for an agentic future, or are too many still mistaking experimentation for strategy? 
Have a listen and let me know your thoughts.

Useful Links:
Connect with Shibani Ahuja
Agentic Enterprise Architecture

How is AI reshaping our relationship with work, and what does that mean for the tools we rely on every day? In this episode of Tech Talks Daily, I'm joined by Cory McElroy, Vice President of Commercial Product Management at HP. Our conversation begins with a reflection on one of the most famous garages in technology history. The original HP garage in Palo Alto is often described as the birthplace of Silicon Valley, and standing there recently reminded me how far the industry has come since those early days. But as Cory explains, we may be entering another turning point. The nature of work has shifted rapidly in just a few years. Hybrid work is now the norm for millions of people, and expectations around workplace technology have changed with it. Employees no longer see technology as a basic productivity tool. They expect it to adapt to them, reduce friction, and help them focus on meaningful work. Cory shares insights from HP's Work Relationship Index, which highlights a striking reality. Only around 20 percent of employees say they have a healthy relationship with work. That number sounds concerning at first, but it also points to an opportunity. When organizations provide the right tools and experiences, employees become more productive, more creative, and more likely to stay. A big theme throughout our conversation is the growing role of AI directly on devices. Running AI locally on PCs changes how people interact with technology. Tasks that once took hours, such as analyzing documents or extracting insights from data, can now happen almost instantly. In some internal deployments at HP, employees reported saving up to four hours each week. We also talk about the hardware innovations that are emerging in response to this shift. Cory explains how new devices like the HP EliteBook X and the EliteBoard reflect a rethink of the PC itself. 
The EliteBoard, for example, integrates a full PC inside a keyboard, allowing users to connect to any display and instantly access desktop-level performance. It is a design that reflects the flexibility people now expect from modern workspaces. Looking ahead, Cory believes the next few years will bring even bigger change. Devices will increasingly understand context, connect seamlessly with other tools, and respond to natural language requests. Instead of jumping between multiple applications to complete a task, users may simply ask their device to assemble information and produce the outcome they need. So as AI becomes embedded into the devices we use every day and work continues to evolve, what would a truly frictionless workday look like for you, and how will your relationship with technology change as a result?

How do you secure a modern business when identities no longer belong only to employees, but also to partners, machines, applications, and increasingly AI agents? In this episode of Tech Talks Daily, I sat down with Paul Zolfaghari, President of Saviynt, to unpack why identity security has moved from a background IT function to one of the defining challenges facing modern enterprises. Over the past decade, the identity problem has expanded far beyond the traditional office worker logging into internal systems. Today's organizations must manage access across a vast digital ecosystem that includes contractors, suppliers, customers, APIs, machines, and now autonomous AI agents. Paul explains how this shift has fundamentally changed the way security leaders think about identity governance. The challenge is no longer limited to preventing unauthorized access from outside attackers. Instead, companies must manage the complex question of who, or what, should have access to specific data, systems, and processes at any given moment. When thousands of employees, partners, and automated systems interact across multiple cloud platforms, the complexity grows rapidly. We also explore how the rise of non-human identities is reshaping the security landscape. Machines, software services, and AI agents now operate alongside human employees inside enterprise environments. In many cases, these digital identities are already beginning to outnumber people. As AI agents gain the ability to gather information, adapt to context, and take actions autonomously, organizations must rethink how access permissions are granted, monitored, and governed. Another theme that emerged during our conversation is the idea that identity security is not only about protection. While it clearly sits within the cybersecurity domain, Paul argues that identity governance also acts as a business enabler. 
When the right people and systems can access the right information at the right time, organizations operate more efficiently and collaborate more effectively across complex supply chains and partner ecosystems. We also discussed findings from Saviynt's CISO AI Risk Report, which highlights a growing concern among security leaders. AI adoption is accelerating rapidly, often moving faster than the governance frameworks designed to manage it. This creates a challenge for organizations trying to adopt AI responsibly while maintaining visibility and control over how these technologies interact with enterprise systems. With more than 600 enterprise customers and a recent $700 million growth investment backing its expansion, Saviynt is operating in a market that many investors now view as one of the defining layers of modern digital infrastructure. Identity, in many ways, is becoming the control plane for how businesses operate in an AI-driven world. Looking ahead, Paul believes organizations must begin preparing for a future where digital identities dramatically outnumber human employees. That shift will require new approaches to governance, visibility, and control. So as AI adoption accelerates and businesses continue expanding across cloud platforms and digital ecosystems, one question becomes impossible to ignore. Is identity security ready to serve as the foundation for how organizations operate in the next decade?

Useful Links:
Connect with Paul Zolfaghari
Check out the Saviynt Website
Follow on Facebook, LinkedIn, and X

How prepared are organizations for a world where today's encrypted communications could be quietly stored and cracked years from now? In this episode of Tech Talks Daily, I sat down with Nate Jenniges, Senior Vice President and General Manager at BlackBerry, to talk about why the conversation around quantum computing is moving from academic curiosity to operational reality. For many leaders, quantum threats still feel distant, something for researchers and cryptographers to worry about. But as Nate explained, governments and adversaries are already capturing encrypted data today with the expectation that it can be decrypted later when quantum capabilities mature. This idea of "harvest now, decrypt later" attacks completely changes the timeline for security planning. If sensitive information needs to remain confidential for five, ten, or even twenty years, the exposure may already have started. That means the challenge is no longer theoretical. It is becoming a strategic issue that boards, CISOs, and government leaders must begin addressing right now. One of the most interesting parts of our conversation focused on something many people rarely think about. Metadata. While encryption protects the content of a message or phone call, the surrounding patterns often reveal just as much. Who spoke to whom, how often, from where, and at what time can tell a surprisingly detailed story. With modern analytics and AI tools, these patterns can expose command structures, business relationships, or crisis response activity even if the message itself remains encrypted. Nate explained why this is becoming a frontline issue in the emerging post-quantum era. As organizations integrate AI into communication platforms, new forms of metadata are emerging from model interactions, system queries, and inference activities. That means protecting communications requires a broader view than simply upgrading encryption algorithms. 
We also explored how governments and highly regulated sectors are preparing for this shift. BlackBerry today operates in a very different space than many people remember, focusing on identity-verified, mission-critical communications used by governments and institutions that cannot afford uncertainty. These systems are designed to operate during the moments that matter most, whether that involves cyber incident response, national security coordination, or emergency response to climate-related events. Another theme that stood out was the leadership challenge behind quantum readiness. Nate believes organizations should avoid treating quantum as a separate security initiative. Instead, it should be integrated into the technology refresh cycles that companies already manage, including hardware updates, software upgrades, and certificate renewals. The organizations that begin asking the right questions today will avoid scrambling later when regulatory expectations tighten and deadlines arrive. By the end of our conversation, one message became very clear. The first real defense in the post-quantum era may not come from stronger encryption alone. It may come from understanding and controlling the communication patterns and metadata that surround every digital interaction. As quantum computing research accelerates and governments begin setting deadlines for post-quantum security readiness, the question becomes increasingly hard to ignore. Are organizations truly prepared for the communications challenges that the next decade may bring?

Why are employees still drowning in administrative work despite years of digital transformation, new software platforms, and constant promises that technology will make work easier? In this episode of Tech Talks Daily, I explore that question with Jason Spry from Ricoh Europe. What begins as a discussion about a new Ricoh research report quickly turns into a much broader conversation about how modern workplaces actually operate day to day. The findings are striking. Employees across Europe are losing an average of 15 hours every week to routine administrative tasks. That is time spent searching for documents, reentering data across systems, preparing reports manually, and navigating layers of disconnected tools. For many organizations, this creates a strange contradiction. Leadership teams often believe that new platforms and software will simplify workflows, yet many employees feel the opposite. The tools designed to make work easier sometimes create additional layers of complexity. Jason shares his perspective from nearly three decades in document processing and outsourcing, explaining how years of digital initiatives have often resulted in systems stacked on top of one another rather than genuinely simplified workflows. The result is a fragmented experience where finding the latest version of a document or locating the right information for a meeting can consume far more time than it should. We also discuss the hidden risks behind these inefficiencies. When documents are scattered across systems or poorly managed, the consequences go beyond frustration. Ricoh's research shows that many organizations have experienced compliance breaches or near misses because important documents were missing, misfiled, or simply impossible to locate at the right moment. Jason explains why governance, visibility, and consistent document management are becoming increasingly important in a world where decisions rely on accurate information. 
Another theme that runs throughout this conversation is the idea of marginal gains. Small inefficiencies like searching for files, reentering data, or preparing documents for meetings might seem trivial in isolation. Yet when they happen hundreds of times across a workforce, they add up to a serious productivity drain. Jason compares it to the concept of improving performance by one percent at a time. Removing even a few of these micro frustrations can transform how people experience their workday. Naturally, we also talk about automation and AI. But Jason offers a refreshing perspective here as well. Rather than starting with the technology, he argues that organizations should begin by identifying the real pain points employees face. That often means speaking directly with the people doing the work and asking what frustrates them most. Once those challenges are clear, automation and intelligent document management tools can start delivering results quickly, sometimes within weeks rather than years. By the end of the conversation, it becomes clear that solving the admin overload problem does not always require massive transformation projects. Often the answer lies in simplifying processes, connecting systems more intelligently, and removing the small friction points that slow everyone down. So I am curious. How much time do you think your organization loses to administrative work each week, and what simple changes could give employees that time back?

How do you build trust in a business environment where security reviews, compliance demands, and vendor risk checks can slow everything down just when companies are trying to move faster? In this episode, I sit down with Adam Markowitz, CEO and co-founder of Drata, to talk about why trust has become one of the most important business conversations in tech. Adam brings a fascinating perspective to the table. Before building Drata, he worked on NASA's space shuttle program, and today he leads a company that has grown rapidly by helping organizations rethink compliance, governance, risk, and assurance through automation and AI. What stood out to me in this conversation was how clearly he framed the real issue. Compliance may have been where many companies started, but trust is the bigger story. In a world shaped by cloud services, third-party vendors, and constant security scrutiny, old point-in-time audits and reactive processes are starting to look painfully outdated. We also talked about Drata's acquisition of SafeBase and what that says about the direction of the market. Adam explained how security and GRC teams have too often been treated as back-office functions, expected to stay quiet and keep the company out of trouble. But he sees things very differently. He argues that these teams can actively help close deals, accelerate revenue, and remove friction from the buying process. That shift matters because trust now plays a direct role in business growth. If customers can quickly get answers to security questions and understand how a company manages risk, sales cycles move faster and security teams stop being bottlenecks at the final stage of a deal. Another part of the conversation that really stayed with me was Adam's view on AI. He sees it as both a tailwind and a test. AI is helping automate highly manual GRC workflows, improve continuous compliance monitoring, and support newer frameworks tied to AI risk itself. 
At the same time, he is realistic about the pressure this puts on businesses. AI may introduce fresh concerns, but it also shines a harsher light on issues that have been around for years, things like access creep, weak controls, and data integrity problems. That honesty gave this discussion a lot of weight because it moved beyond hype and focused on what companies actually need to do. We also touched on Drata's momentum as a business, from opening a new San Francisco headquarters to expanding globally and moving further into the enterprise market. But even there, Adam kept coming back to culture, discipline, and a deep understanding of the customer problem. For me, that was the thread running through the whole episode. Trust is not a side issue. It is part of how modern companies grow, compete, and prove they can be relied on. If your business still sees compliance as a checkbox exercise or a cost center, this conversation will give you plenty to think about. Where do you see the relationship between trust, security, and growth heading next, and what did this episode make you question about the way your own organization handles compliance? Share your thoughts with me.

What happens when the most frustrating part of customer service, waiting on hold, repeating yourself, and fighting your way through endless phone menus, finally starts to disappear? In this episode, I sit down with Neil Hammerton, CEO and co-founder of Natterbox, to talk about how AI is reshaping customer experience in ways that feel practical rather than theatrical. We begin with a conversation about the gap between what customers have tolerated for years and what they expect now. Whether it is a bank that still puts you through layers of outdated IVR menus or a service team that answers straight away and solves the issue, those experiences stay with us. Neil makes the case that voice is far from dead. In fact, he believes voice is becoming one of the most exciting places to apply AI, especially when businesses want faster, more human interactions at scale. What I found especially interesting was Neil's view that AI should be treated like a new employee. That means training matters. Tone matters. Context matters. If businesses want AI assistants and agents to succeed, they have to teach them how the organization works, how conversations should sound, and when a human needs to step in. We talk about the difference between using AI for simple triage and using it to complete tasks end to end, from handling password resets to helping callers outside office hours or during spikes in demand. Neil also shares why the smartest path is rarely a giant leap. It is usually a series of smaller, lower-risk steps that build confidence and real results over time. We also get into one of the biggest concerns hanging over every AI conversation right now, whether these tools are replacing people or helping them do better work. Neil's answer is refreshingly balanced. In many cases, AI is taking care of the repetitive jobs that frustrate staff and slow down service, while freeing human agents to handle the conversations where empathy, judgment, and experience still matter most. 
That shift can improve customer experience while also making work more rewarding for the people on the front line. There is also a strong message here for business leaders who are still stuck in pilot mode, testing AI without ever quite moving forward. Neil explains why smart pilots need clear goals, good training data, and realistic expectations. He also shares how Natterbox is using AI internally, including producing board packs in a fraction of the time, while still keeping people involved to check, challenge, and refine the output. This episode is a thoughtful look at where customer experience is heading next, and why the future probably belongs to businesses that know when to let AI lead, when to keep humans in the loop, and how to blend both into something customers actually value. What are your thoughts on the balance between AI efficiency and human connection in customer service, and where do you think businesses are still getting it wrong?

How do you turn trillions of user interactions into meaningful decisions without drowning in data? In this episode of Tech Talks Daily, I sit down with Todd Olson, co-founder and CEO of Pendo, to talk about the future of product-led organizations and why AI is reshaping how software companies grow, build, and compete. Pendo tracks trillions of product usage events to help organizations understand how customers actually interact with their software. That level of data sounds powerful, but it also raises a challenge many teams face today. How do you turn massive data sets into clear signals that teams can act on without falling into analysis paralysis? Todd explains how Pendo approaches this problem by organizing product data around real user journeys, feature adoption, and areas where people drop off. Instead of leaving teams buried in dashboards, the goal is to surface insights that matter. Increasingly, AI is helping by acting as a kind of embedded analyst that highlights the patterns product teams should focus on. Our conversation also revisits the idea behind Todd's book, The Product-Led Organization. When it was published around the time of the pandemic, it argued that great products should do much of the heavy lifting traditionally done by sales or support teams. Looking back now, Todd believes the core idea remains intact. AI simply accelerates the model by allowing companies to experiment faster and scale product-driven experiences with far fewer people. But that shift is also creating tension in the software industry. We talk about the so-called reckoning in SaaS economics and the growing debate around whether AI will make traditional software companies obsolete. Todd offers a more measured perspective. While AI allows anyone to prototype software quickly, the companies that survive will still be the ones solving difficult problems, navigating compliance requirements, and building products that customers trust. 
Another theme we explore is geography and innovation. Pendo is headquartered in Raleigh, North Carolina, far from the usual coastal tech hubs. Todd shares how building outside Silicon Valley has shaped the company's culture, talent strategy, and mindset. There are advantages to being close to the center of the AI boom, but there is also value in building away from the echo chamber. We also spend time unpacking the rise of AI-assisted development and the trend many people call "vibe coding." Todd believes AI will dramatically reshape product teams, but he also pushes back against the idea that humans will disappear from the development process. Engineers will still need to review code, teach AI systems best practices, and ensure security and reliability. One of the most interesting moments in our conversation comes near the end when Todd shares a belief that originality will become one of the most valuable assets in the age of AI. As automated content and automated code become easier to generate, he believes people will increasingly value craft, taste, and original thinking. So in a world where AI can generate almost anything with a prompt, the real question becomes far more human. What problems are actually worth solving? If you care about the future of software, product strategy, and how AI is reshaping the economics of building companies, this is a conversation that offers plenty to think about. And after listening, I would love to hear your perspective. As AI becomes embedded in every product and workflow, do you believe originality and craft will become the true differentiators in the software industry?

Have you ever contacted customer support with a simple request, only to find yourself trapped in a loop of scripted chatbot responses that never actually solve the problem? It's an experience many of us know all too well. AI has made customer service more conversational over the last few years, yet there is still a gap between answering a question and actually resolving an issue. That gap is exactly where today's conversation begins. In this episode of Tech Talks Daily, I spoke with Mike Szilagyi, SVP and General Manager of Product Management at Genesys Cloud, about a new chapter in AI-powered customer experience. Genesys has announced what it describes as the industry's first agentic virtual agent built on Large Action Models, or LAMs. While Large Language Models have dominated the conversation around AI for the past few years, they have largely focused on generating responses, retrieving knowledge, or answering questions. What they have struggled with is execution. Mike explained how Large Action Models take the next step. Rather than simply telling a customer how to solve a problem, these systems can plan and execute the steps needed to complete a task. Imagine contacting an airline after a sudden flight cancellation. Instead of navigating multiple menus or repeating information to a human agent, an agentic virtual assistant could understand your situation, check alternative flights, apply airline policies, and complete the rebooking process across several systems. In other words, the AI moves from conversation to action. We also explored how Genesys approached the design of this technology with enterprise governance in mind. From explainable decision paths and audit logs to guardrails that ensure every automated action can be traced and understood, the goal is to make autonomous AI trustworthy inside complex organizations. 
Mike also shared insights into Genesys' partnership with Scaled Cognition and how integrating specialized models helps deliver reliable execution in real-world customer service environments. Perhaps most interesting was our discussion about the human role in this evolving contact center landscape. As automation begins to handle routine and multi-step workflows, human agents are free to focus on situations that require empathy, judgment, and expertise. That shift raises interesting questions about how organizations design customer experiences in the years ahead. So how will customers respond when virtual agents move beyond answering questions and begin resolving problems on their behalf? And once one brand delivers that experience, will it quickly become the expectation?
Useful Links
Connect with Mike Szilagyi
Learn more about Genesys
Genesys Agentic Virtual Agent Powered by LAMs for Enterprise CX
Follow on LinkedIn

How do global companies make confident decisions when supply chains are constantly disrupted by tariffs, geopolitical tension, shifting consumer demand, and unpredictable global events? In this episode of Tech Talks Daily, I sat down with Dr. Ashwin Rao, EVP of AI and R&D at o9 Solutions, to talk about how artificial intelligence is changing the way organizations plan, forecast, and respond to uncertainty. Ashwin brings a fascinating mix of experience to the conversation. After earning a PhD in mathematics and computer science, he spent fifteen years on Wall Street working on derivatives trading strategies at Goldman Sachs and Morgan Stanley before moving into the world of enterprise technology. Today, he operates at the meeting point between business and academia as both a senior AI leader and an adjunct professor at Stanford University. Our conversation begins with Ashwin's unusual career path and how those early experiences in finance shaped the way he thinks about risk, decision making, and real-world AI deployment. The journey from theoretical mathematics to trading floors and eventually into Silicon Valley offers an interesting lens on how analytical thinking can travel across industries and still remain highly relevant. We then move into the work happening at o9 Solutions, where AI is helping organizations make smarter decisions across supply chain planning, demand forecasting, and inventory management. In a world that Ashwin describes using the acronym VUCA (volatility, uncertainty, complexity, and ambiguity), businesses are under pressure to react faster and make better-informed decisions. He explains how enterprise AI platforms can connect fragmented data across departments and create a more complete view of the business. One example he shares brings the concept down to earth.
Even predicting how many bananas a grocery store should stock on any given day requires analyzing internal sales trends alongside external signals such as weather, social media trends, and economic conditions. Machine learning systems can now process those signals in real time and continuously update forecasts so businesses can respond quickly to changes. We also explore the rise of neuro-symbolic AI, a concept Ashwin believes represents the next stage in enterprise decision-making. Rather than relying only on large language models, this approach blends the structured reasoning of symbolic systems with the pattern recognition of neural networks. The result, he suggests, feels less like a chatbot and more like having an expert coach embedded inside the decision-making process. Along the way, we also discuss why many organizations still struggle to embed AI successfully. Technology is only one piece of the puzzle. Ashwin believes the toughest obstacle is organizational change management: bringing teams together, connecting data across silos, and helping leaders guide their organizations through transformation. If you have ever wondered how AI moves beyond chatbots and into the systems that quietly power global supply chains, this conversation offers a thoughtful and practical perspective. So, how prepared is your organization to make decisions in a world defined by volatility and uncertainty, and could AI become the trusted partner that helps guide those choices?
Useful Links
Ashwin's blog
Ashwin's LinkedIn
o9 Solutions Website
o9 LinkedIn

What does it take to design a data center for a world where the technology inside it may change several times before the building even opens? In this episode of Tech Talks Daily, I sit down with Jackson Metcalf, Principal at Gensler, to talk about how AI is forcing a complete rethink of data center design. Jackson has spent nearly two decades working on critical facilities, and in our conversation he explains how the shift from traditional cloud workloads to dense AI environments is changing everything from building form and cooling strategy to long-term infrastructure planning. What struck me most in this conversation is the sheer mismatch in timescales. Data centers can take two and a half to three years to design and build, while chip and GPU roadmaps are evolving in cycles of months. Jackson explains why that means designing for a fixed end state no longer makes sense. Instead, the future may belong to facilities built with flexibility at their core, spaces that can be reconfigured, upgraded, and even conceptually rebuilt over time rather than treated as static assets. We also talk about what hyper-flexibility actually means in practice. This is not just a buzzword. It is about designing buildings with enough structural and engineering headroom to support very different cooling and power models over their lifespan. As AI workloads push cabinet densities to levels that would have sounded impossible only a few years ago, the need for plug-and-play mechanical and electrical infrastructure becomes far more than a design preference. It becomes essential. Another fascinating part of the conversation centers on sustainability. Jackson shares why durable, well-built structures can create long-term environmental value, even in an industry often criticized for its energy demands. We discuss embodied carbon, adaptive reuse, and why a high-quality building may have a much better second life than something built purely for short-term speed. 
That leads into a wider conversation about repositioning underused real estate, from former industrial facilities to vacant office buildings, as potential digital infrastructure. We also get into the growing energy challenge behind AI. With demand for power rising fast, and the US grid under increasing pressure, many operators are now weighing options such as on-site natural gas generation while waiting for cleaner long-term alternatives to mature. Jackson offers a thoughtful perspective on the tension between urgent infrastructure needs and environmental responsibility, as well as the uncertainty surrounding future energy roadmaps. Looking further ahead, I ask Jackson what will define a successful data center campus in the years to come. Will it be raw megawatts, adaptability, carbon intensity, location strategy, or something else entirely? His answer opens up a much bigger conversation about whether these buildings can become more connected to the communities around them, and what role they may play in a future where digital infrastructure is no longer hidden in the background, but central to how society functions. So if AI is pushing data center design to extremes, how do we build facilities that are ready for what comes next without becoming obsolete almost as soon as they open? And what does sustainable, adaptable digital infrastructure really look like in practice?

How close are we to the moment when quantum computing moves from scientific curiosity to real-world infrastructure? In today's episode of Tech Talks Daily, I speak with Christian Weedbrook, Founder and CEO of Xanadu, a company pushing the boundaries of what quantum computers might soon achieve. Xanadu has taken an unconventional route in the race to build practical quantum systems. Instead of relying on electronic approaches used by many others in the field, the company builds quantum computers using photonics, effectively computing with particles of light. Christian explains why this matters and how working with photons could unlock advantages in energy efficiency, scalability, and networking as quantum machines grow into large data center–scale systems. The conversation also arrives at a fascinating moment for the company. Xanadu has announced plans to go public through a SPAC deal that values the company at around $3.1 billion. Christian shares what that milestone means, not only for Xanadu but for the broader quantum ecosystem. According to him, the excitement surrounding quantum computing is no longer limited to research labs. Governments, enterprise partners, and investors are increasingly paying attention as the technology edges closer to commercial relevance. One of the most engaging parts of our conversation is Christian's own journey into the world of quantum physics. Before earning a PhD in photonic quantum computing, he began as a film student who admits he once dreamed of becoming a filmmaker. That winding path eventually led him into physics and entrepreneurship, where he founded Xanadu in 2016 with a mission to make quantum computers useful and accessible to everyone. We also discuss PennyLane, the open-source quantum programming framework developed by Xanadu that has quietly become one of the most widely used tools in the quantum developer community. 
Now taught in universities across more than 30 countries, PennyLane plays an important role in building the next generation of quantum talent. Christian also shares a realistic timeline for where the industry stands today. Quantum computers already exist, but they remain smaller than what is needed for commercial breakthroughs. Xanadu's roadmap points toward large-scale quantum data centers by the end of the decade, systems capable of tackling problems in drug discovery, materials science, logistics, and finance that traditional computers struggle to simulate. For enterprise leaders listening today, the message is clear. The quantum future is closer than many people assume, and organizations that begin exploring use cases now will be far better prepared when these systems mature. So how should businesses prepare for a computing paradigm based on the mathematics of quantum physics rather than traditional software logic? And what lessons can founders learn from a journey that began with filmmaking ambitions and led to building one of the most ambitious quantum companies in the world? Let's find out together.

How can companies invest heavily in AI and still struggle to see meaningful returns? In this episode of Tech Talks Daily, I sit down with Thomas Scott, CEO of Wrike, to unpack a growing tension many organizations are facing right now. Artificial intelligence adoption is accelerating rapidly across the workplace, yet the structures needed to support it are struggling to keep pace. Wrike's latest research into the "Age of Connected Intelligence" reveals that more than 80 percent of employees are already using AI at work. Yet fewer than half have received any formal training, guidance, or governance around how these tools should be used. That gap between enthusiasm and enablement is creating a new workplace phenomenon that many leaders are only just beginning to notice: shadow AI. When employees cannot find approved tools that solve their problems quickly, they often turn to unapproved applications or personal accounts instead. Wrike's data shows that 42 percent of workers admit they have already done this. For organizations handling sensitive data, intellectual property, or regulated information, that trend raises serious questions about security, compliance, and trust. Thomas explains why this pattern is not surprising. Whenever a new technology emerges, the builders and experimenters move first. They explore possibilities, test new tools, and discover productivity gains long before formal policies or training frameworks arrive. The challenge for leadership teams is learning how to harness that momentum without letting experimentation turn into fragmentation. We also explore one of the most overlooked barriers to AI return on investment: integration. Many employees are now juggling multiple AI tools every week, yet those systems rarely communicate with one another or connect deeply into the core business platforms where real work happens.
As a result, context gets lost, workflows become fragmented, and organizations end up running expensive pilots that never scale into meaningful transformation. Thomas introduces the idea of connected intelligence as a possible solution. Instead of deploying AI tools in isolation, companies need systems that understand context across projects, teams, and workflows. When AI can access structured data, shared history, and operational context, it becomes far more capable of supporting real decision making rather than simply generating isolated outputs. Our conversation also explores how leaders can move beyond scattered experimentation and start building structured AI adoption across their organizations. Thomas argues that the most successful companies start with highly specific problems, empower small groups of motivated builders, and maintain strong executive involvement throughout the process. AI transformation is rarely driven by technology alone. It requires people, process, and leadership alignment working together. So if your organization has already deployed AI tools but still struggles to see real impact, perhaps the question is not whether you are using AI. The real question might be whether those tools are truly connected to the work your teams are trying to do every day.

How can organizations use AI to transform hiring while still protecting the human element at the heart of work? In this episode of Tech Talks Daily, I sit down with Mahe Bayireddi, co-founder and CEO of Phenom, to explore how artificial intelligence is reshaping the way companies attract, hire, and develop talent. Our conversation comes at an interesting moment for the company, following the announcement that Phenom has acquired Be Applied, an AI-driven cognitive assessment platform designed to validate candidate and employee capabilities at scale. The move follows an earlier acquisition of Included, an AI-native people analytics platform focused on delivering deeper workforce insights and faster decision making. Mahe shares how Phenom's long-term mission to help a billion people find the right job is evolving as AI becomes embedded throughout the HR lifecycle. From candidate discovery to onboarding and internal mobility, organizations are now experimenting with automation, personalization, and intelligent workflows that aim to improve both productivity and employee experience. One theme that runs throughout our discussion is how AI adoption in HR varies dramatically depending on geography, regulation, and industry. In Europe, regulatory frameworks are shaping how companies deploy automation. In the United States, state-level policies introduce additional complexity. Meanwhile, organizations across Asia are often approaching AI with entirely different priorities. As a result, many global companies are experimenting carefully, introducing AI into specific business units or regions before rolling it out more broadly. We also talk about a challenge that has caught many HR teams by surprise: the growing issue of fraudulent candidates and identity manipulation in the hiring process. As job applications become easier to submit and remote work expands global talent pools, organizations must rethink how they validate candidate identity and credentials. 
Mahe explains how AI-driven fraud detection tools can help highlight suspicious patterns while still keeping humans in the loop for final decisions. Another important point raised in the conversation is the need to preserve humanity in the workplace while introducing intelligent automation. While AI can dramatically improve efficiency across recruiting and workforce planning, Mahe believes HR leaders must be careful to ensure technology strengthens human potential rather than reducing people to data points in a system. Looking ahead, we discuss how organizations can begin adopting AI responsibly by starting small, focusing on high-impact areas, and building guardrails that reflect regional regulations and company culture. For many companies, the most successful path forward will involve testing AI within specific workflows, measuring outcomes quickly, and scaling what works. So as artificial intelligence becomes a central part of hiring, workforce planning, and employee development, the big question for leaders is this. Can organizations use AI to create faster, smarter talent decisions while still keeping people at the center of the workplace experience?

How does a CISO turn cybersecurity from a technical conversation into a business conversation that boards actually care about? In this episode of Tech Talks Daily, I sit down with Thom Langford, EMEA CTO at Rapid7 and a former CISO, to explore what he calls the second phase of cybersecurity leadership. For years, the industry worked hard to secure a seat at the boardroom table. In many organizations, that mission has largely succeeded. But as Thom explains, gaining access was only the first step. The real challenge now is communicating security in a way that drives meaningful business decisions. Thom shares why many CISOs still approach board conversations in the same way they did a decade ago, even though boardroom awareness of cybersecurity has changed dramatically. Today, many boards include members with cybersecurity knowledge or direct security experience. That means security leaders can no longer rely on technical jargon, complex frameworks, or compliance language to make their case. One of the most interesting insights from our conversation is the disconnect between how CISOs frame risk and what boards are actually focused on. While security teams often lead with risk reduction, boards tend to think in terms of revenue growth and operational costs. Thom argues that security leaders must learn to translate cybersecurity into the language of profit and loss if they want their message to resonate at the executive level. We also explore how traditional security tools such as risk frameworks, audits, and compliance standards can sometimes create distance rather than clarity in board discussions. Instead of helping executives understand security priorities, these models can obscure the real question boards are trying to answer. How secure are we, and what does that mean for the business? Another area we discuss is the growing role of tabletop exercises. 
Thom explains why these simulations are becoming one of the most effective ways for CISOs to demonstrate the real-world impact of security decisions. By walking executives through a realistic incident scenario, CISOs can show how security, operations, legal teams, and business priorities intersect during a crisis. Looking ahead, Thom believes the most successful CISOs will increasingly need to think like business leaders rather than purely technical specialists. Communication skills, relationship building, and understanding the organization's financial priorities may prove just as important as deep technical expertise. So if cybersecurity leaders have already earned their place in the boardroom, the next question becomes much more interesting. Are they speaking the language the board actually understands, or are they still trying to solve business problems using only security vocabulary?

What if the next big shift in personal audio is not about blocking the world out, but staying connected to it? In this episode of Tech Talks Daily, I sit down with Nicole from Shokz to talk about why open-ear headphones are suddenly everywhere, and why this category is moving from niche curiosity to everyday essential. For years, the audio market was obsessed with sealing users off from the outside world. Now the conversation is changing. More people want to hear their music, podcasts, and calls without losing awareness of traffic, fellow commuters, colleagues, or the world happening around them. Nicole helps unpack what open-ear audio actually means in simple terms, and why it is resonating with runners, commuters, parents, office workers, and anyone trying to balance comfort, safety, and sound quality. We talk about the cultural shift behind this rise, from growing health and fitness habits to the way hybrid work and always-on lifestyles have changed how people use earbuds throughout the day. We also get into why Shokz has become one of the defining brands in this space. Long before open-ear audio became a trend, Shokz was investing in bone conduction, open-ear design, and the kind of product research needed to make this category work in real life. Nicole shares how years of persistence, technical innovation, and consumer education helped the company move from specialist player to category leader. During our conversation, we explore how real-world behavior shapes product design. That means thinking beyond audio specs and focusing on how headphones actually fit into daily life. Whether someone is running in the rain, commuting to work, wearing glasses, sitting in an office, or trying to stay aware while walking the dog, those everyday moments are shaping the next generation of audio devices. Nicole also talks me through some of Shokz's latest product thinking, including the OpenDots One and the OpenFit Pro. 
From compact clip-on designs that feel almost like wearable accessories to new approaches around noise reduction in open-ear listening, this episode looks at how the category is becoming more sophisticated and more versatile without losing the awareness that made it appealing in the first place. Looking ahead, we discuss whether open-ear audio will live alongside sealed earbuds as part of a two-device lifestyle, or whether it could eventually become the default choice for more people. We also touch on what comes next, from smarter audio experiences to the role AI and even connected glasses could play in the future of listening. So if you have been seeing the phrase open-ear audio more often and wondering what all the fuss is about, this conversation will bring it to life. Are open-ear headphones simply having a moment, or are we watching a bigger shift in how people want to hear the world around them?

What happens when the real bottleneck in artificial intelligence is no longer training models, but actually running them at scale? In this episode of Tech Talks Daily, I sit down with Satyam Srivastava from d-Matrix to explore a shift that is quietly reshaping the entire AI infrastructure landscape. While much of the early AI race focused on training ever larger models, the next phase of AI adoption is increasingly defined by inference. That is the moment when trained models are deployed and used to generate real-world results millions of times a day. Satyam brings a unique perspective shaped by years of experience in signal processing, machine learning, and hardware architecture, including time spent at NVIDIA and Intel working on graphics, media technologies, and AI systems. Now at d-Matrix, he is helping design next-generation computing architectures focused on one of the biggest challenges facing the AI industry today: efficiently running large language models without overwhelming data centers with unsustainable power and infrastructure demands. During our conversation, we explored why the industry underestimated the infrastructure implications of inference at scale. While training large models grabs headlines, the real operational pressure often comes later when those models must serve millions of queries in real time. That shift places enormous strain on memory bandwidth, energy consumption, and data movement inside modern data centers. Satyam explains how d-Matrix identified this challenge years before generative AI exploded into the mainstream. Instead of focusing on training hardware like many AI startups at the time, the company concentrated on inference efficiency. That decision is becoming increasingly relevant as organizations begin to realize that simply adding more GPUs to data centers is not a sustainable long-term strategy. 
We also discuss the growing power constraints surrounding AI infrastructure, and why efficiency-driven design may be the only realistic path forward. With electricity supply, cooling capacity, and semiconductor availability all becoming limiting factors, the industry is being forced to rethink how AI systems are architected. Custom silicon, purpose-built accelerators, and heterogeneous computing environments are now emerging as key pieces of the puzzle. The conversation also touches on the geopolitical and economic importance of AI semiconductor leadership, and why the relationship between frontier AI labs, infrastructure providers, and chip designers is becoming increasingly strategic. As governments and companies compete to maintain technological leadership, the question of who controls the hardware powering AI may prove just as important as the models themselves. Looking ahead, Satyam shares his perspective on how the role of engineers will evolve as AI infrastructure becomes more specialized and energy-aware. Foundational engineering skills remain essential, but the next generation of engineers will also need to think in terms of entire systems, combining software, hardware, and AI tools to build more efficient computing environments. As AI continues to move from research labs into everyday products and services, are organizations prepared for the infrastructure shift that comes with an inference-driven future? And could efficiency, rather than raw computing power, become the defining metric of the next phase of the AI race?

How should businesses rethink infrastructure when applications, data, and users are increasingly spread across thousands of locations? In this episode of Tech Talks Daily, I sit down with Mark Cree, President and Chief Operating Officer at Scale Computing, to talk about why the future of enterprise infrastructure is moving closer to where data is actually created. This conversation was recorded following the 66th edition of The IT Press Tour, where some of the most interesting conversations in enterprise infrastructure centered on what happens when businesses move away from oversized, monolithic stacks and start focusing on practical, distributed solutions. From retail stores and airports to remote industrial sites, the edge is becoming a critical part of modern IT strategy. Mark shares how Scale Computing has spent years building an edge-first platform designed to run critical workloads reliably across everything from a single location to tens of thousands of distributed sites. Mark also reflects on his own journey through the technology industry, which includes founding companies acquired by Cisco and NetApp, working as a venture capitalist, and leading major storage initiatives at AWS. That experience gives him a unique perspective on how enterprise infrastructure has evolved, particularly as organizations reconsider the balance between centralized cloud environments and local processing closer to users and devices. During our conversation, we explore why edge computing is becoming increasingly important for AI workloads, especially when large volumes of data are generated outside traditional data centers. Mark explains how processing information locally can reduce costs, improve performance, and enable entirely new use cases, from monitoring customer behavior in retail environments to running intelligent systems in remote locations. 
We also talk about the ongoing reassessment happening across enterprise IT teams following major industry shifts, including changes in the virtualization market and growing concerns around vendor lock-in. Mark explains how Scale Computing is positioning itself as a flexible alternative by combining virtualization, containerization, networking, and security into a platform designed specifically for distributed environments. Looking ahead, Mark shares his perspective on where enterprise infrastructure is heading over the next five years. As smaller AI models become more capable and organizations seek greater control over their data and systems, the role of edge platforms may become even more important. Instead of relying solely on massive centralized environments, companies may find new value in distributing intelligence closer to the places where real-world activity happens. So as organizations rethink how they deploy applications, manage data, and control infrastructure, is the next big shift in enterprise IT happening right at the edge? And how prepared is your organization for that change?

How confident are you that your business could recover from a cyberattack, cloud outage, or infrastructure failure in minutes rather than hours or even days? In this episode of Tech Talks Daily, I explore the changing nature of enterprise resilience with Joseph D'Angelo and Cassie Stanek from InfoScale, now part of Cloud Software Group. Our conversation looks at why many organizations still rely on backup and replication strategies that were designed for a very different era of IT. In a world of hybrid infrastructure, multi-cloud deployments, and increasingly complex application stacks, those traditional tools often protect the data but fail to restore the business services that depend on it. My guests share how InfoScale approaches resilience from the application layer outward. Instead of focusing on individual components such as storage or infrastructure, the platform looks at the relationships between applications, services, and data so entire systems can be orchestrated and recovered as a coordinated unit. That distinction becomes especially important during a ransomware attack or cloud outage, where restoring a single database rarely brings a digital business back online. We also discuss how growing regulatory pressure is changing the conversation. Enterprises are no longer expected to simply claim they have disaster recovery processes in place. Increasingly, they must demonstrate, test, and prove that recovery capabilities actually work. Cassie explains how controlled "fire drill" rehearsals allow organizations to validate recovery plans without disrupting production systems, creating defensible proof that systems can be restored when it matters most. We also look ahead to the next phase of resilience, where environments will increasingly diagnose, adapt, and respond to disruptions in real time.
Instead of reacting after an outage occurs, operational resilience will rely on predictive analytics, anomaly detection, and automated response capabilities that allow systems to self-correct before users ever notice a problem. Throughout our discussion, one theme becomes clear. IT resilience is no longer just an infrastructure conversation. It has become a business continuity strategy that directly affects revenue, customer trust, and competitive advantage. As organizations depend more heavily on digital services, the ability to recover quickly from disruption is becoming one of the defining capabilities of modern enterprise technology. So after listening, I'm curious about your perspective. Do you think most organizations are truly prepared for operational resilience in a multi-cloud world, or are many still relying on backup strategies that were built for a much simpler IT environment?

Have you ever bought a ticket to a show and wondered why the experience still feels strangely disconnected, with one app for ticketing, another for marketing, another for refunds, and a dozen spreadsheets held together by late nights and good intentions? In this episode of Tech Talks Daily, I'm joined by Ritesh Patel, co-founder of Ticket Fairy, to talk about the technology behind live events and why it has lagged behind other industries in some surprisingly familiar ways. Ritesh makes the case that most organizers are operating more like creative founders than corporate operators, building "mini cities" for a weekend with tiny teams, tight budgets, and very little margin for error. That reality shapes every technology decision, and it explains why fragmented tools and siloed data can become a hidden tax on the business. Ritesh walks me through Ticket Fairy's full-stack approach, bringing ticketing, marketing, CRM, logistics, and payments into a single system, and why unifying data changes the economics of running an event. We dig into practical examples that go beyond vague AI talk, including how small workflow fixes can speed up entry, improve the on-site experience, and even translate into real revenue uplift once you multiply time savings across thousands of attendees. We also get into where AI agents and large language models are already finding a foothold in events, particularly around unstructured documents like artist specs, supplier agreements, and operational paperwork that can swallow hundreds of hours. Ritesh shares why "AI-native" should mean more than a writing assistant in a text box, and what it looks like when AI becomes an extension of a lean events team, including a prototype voice agent designed to handle common ticket-holder questions without creating new support bottlenecks.
If you're interested in the real business mechanics of events, and how SaaS, payments, data, and AI can quietly shape everything from entry lines to repeat attendance, this conversation offers a fresh way to think about an industry that touches all of us, even when we don't think of it as a tech story. And as a bonus, Ritesh leaves a music recommendation that sent me back to an album I had not played in years, Burial's Untrue, with "Archangel" as the track to start with. After listening, tell me this, where do you think unified data and practical AI will make the biggest difference in live experiences over the next couple of years, on the promoter side or the fan side, and why?

Have you ever looked at a global hiring plan and wondered whether you are building a team, or accidentally buying a bundle of hidden fees, legal risk, and avoidable stress? In this episode, I'm joined by Oksana Petrus from Alcor, where she leads customer success and operations, helping tech companies build and scale engineering teams across Eastern Europe and Latin America. If you have ever tried to expand beyond your home market, you know the promise is real, access to great talent, broader coverage across time zones, and the chance to build faster. But the reality can get messy quickly once contracts, compliance, culture, and cost assumptions collide. Oksana brings a sharp perspective because she has seen both sides. Earlier in her career she worked as a lawyer with outsourcing providers, so she understands how pricing structures and contracts can create surprises once a team is already in motion. We talk about why so many leaders start out thinking outsourcing will be simple, then discover they cannot clearly see what they are paying for, who is actually doing the work, or how much of the spend is going to overhead. We also discuss the growing challenge of trust in recruiting, especially as AI tools make it easier to fake profiles, inflate experience, and even perform better in interviews than the person behind the screen can deliver on the job. Oksana shares how teams are responding with stronger verification, background checks, and a more transparent operating model so hiring managers can feel confident about who they are bringing in. We also dig into the real cost of global scaling, and why "salary charts" are only the starting point. Oksana explains how benefits, taxes, local customs like a 13th salary, currency controls, and even language realities can derail budgets and slow hiring if teams do not have local insight. The result is often frustration on both sides, candidates lose momentum, managers lose time, and projects drift. 
Culture comes through as a theme too, and not in a vague, feel good way. We talk about how different regions communicate, how expectations need to be set early, and why "challenge culture" can be a strength when leaders welcome it. Oksana shares an example of a CTO who came to value Eastern European teams precisely because they questioned decisions and offered alternatives that improved outcomes. If you are a founder, CTO, or business leader thinking about scaling an engineering team this year, this episode is a practical look at what tends to go wrong, why it gets expensive, and how to build a smarter path forward without overcommitting too early. Where do you think the line is between smart global expansion and taking on complexity before your business is ready for it, and what has your own experience taught you?

How can a world that produces more than enough food still leave millions of people struggling to put a healthy meal on the table? In this episode of Tech Talks Daily, I speak with Jordan Schenck, CEO of Flashfood, about the growing paradox at the heart of our global food system. Grocery prices are climbing, families everywhere are making harder choices at the checkout, and food banks are seeing rising demand. Yet at the same time, vast quantities of perfectly edible food never make it onto a plate. Jordan shares the startling scale of the problem. In North America alone, billions of pounds of edible food are thrown away every year, including huge volumes from grocery stores themselves. Fresh produce, meat, and dairy often end up discarded even though they remain safe and nutritious to eat. The result is a system where food waste and food insecurity grow side by side, despite a supply chain that already produces far more calories than the world needs. Flashfood is attempting to change that equation with a simple but powerful idea. Through its marketplace app, the company partners with grocery retailers to sell surplus food at steep discounts before it reaches the landfill. Shoppers gain access to fresh groceries at far lower prices, while retailers recover value from inventory that might otherwise be lost. What emerges is a rare triple win for shoppers, grocers, and the environment. During our conversation, Jordan explains how consumer behavior, retail expectations, and supply chain logistics have shaped today's food waste problem. She also shares how technology and data are beginning to shift the system in a different direction. Flashfood is now working with more than two thousand grocery partners across North America and serving over a million users, using data and AI to help retailers price surplus inventory more effectively and move products before they are discarded. But the story behind Flashfood is also personal.
Jordan reflects on her earlier experiences at Impossible Foods and as founder of the beverage brand Sunwink, and how those roles helped her see both the strengths and weaknesses inside modern food production. Over time, she began to question whether the industry truly needed more products on shelves, or whether the bigger opportunity lay in fixing the inefficiencies that already existed. Our discussion touches on the psychology of grocery shopping, the economics of surplus inventory, and the cultural expectations that lead retailers to overstock shelves in the first place. We also explore why many consumers are more open to buying discounted food than retailers once believed, particularly as the cost of living continues to rise. Perhaps most encouraging of all is the idea that solving food waste does not require entirely new supply chains or radical lifestyle changes. Sometimes it simply requires connecting the dots between food that already exists and the people who need it most.

Is 2026 the year AI finally has to prove it is worth the investment? In this episode, I'm joined by Chris Riche-Webber, VP of Business Intelligence and Analytics at SmartRecruiters, to explore why so many AI and agentic AI initiatives stall after the pilot phase and what separates the projects that scale from the ones that quietly disappear. With Gartner predicting that more than 40 percent of agentic AI programs could be cancelled by 2027, Chris brings a pragmatic, data-led perspective on what is really happening inside organizations as the hype meets operational reality. We talk about the fundamentals that have not changed despite the new technology. Influence, clearly defined problems, measurable impact, and adoption still determine success, yet they are often overlooked in the rush to deploy the latest tools. Chris explains why "good vibes" are no longer enough in front of a CFO, how to baseline outcomes properly, and why ownership of results is one of the most common missing pieces in enterprise AI programs. A big part of the conversation focuses on what Chris calls the "agent washing" problem. Just as products are sometimes marketed with fashionable labels that do not reflect their real value, many solutions are being positioned as agentic without delivering true autonomy or business outcomes. We discuss how leaders can cut through the noise by asking better questions, aligning technology to specific use cases, and recognizing when simple automation is the right answer. Trust, adoption, and measurable ROI emerge as the three signals that determine whether an AI initiative survives. Chris shares a clear framework for defining these signals in a way that is consistent, comparable over time, and meaningful to the executive team. 
We also explore how connecting talent decisions to revenue, productivity, and retention changes the conversation, especially in the context of SmartRecruiters' broader SAP ecosystem and the opportunity to link people data directly to business performance. This is a conversation about moving from experimentation to accountability, from buying narratives to solving real problems, and from technology-first thinking to outcome-first leadership. So as the window for easy wins closes and the demand for proof of value grows, will your AI strategy be remembered as a pilot that generated excitement or as an initiative that delivered measurable business impact?

What if the real AI race in 2026 isn't about building bigger models, but about where decisions are made, how fast they happen, and whether they deliver measurable value? In this episode, I'm joined by John Bradshaw, Director of Cloud Computing Technology and Strategy at Akamai, to unpack his predictions for the next phase of cloud, AI inference, and the economics that will shape enterprise technology over the next 12 months. As organizations move beyond experimentation, John explains why the boardroom conversation has shifted from capability to return on investment, and how spiraling compute demands are forcing leaders to rethink the balance between performance, cost, and innovation. We explore why this new financial scrutiny is not slowing AI adoption, but refining it. John shares how inefficient GPU workflows, centralized inference, and poorly aligned architectures are being challenged by a more disciplined approach that pushes intelligence closer to the edge. This shift is not only about latency and performance. It is about building scalable, value-driven platforms that can support real-time decision-making, agentic workloads, and global user experiences without breaking traditional IT budgets. Trust is another major theme throughout our conversation. From the rise of everyday AI agents that quietly handle routine tasks to the growing importance of secure, resilient inference pipelines, John outlines how low-latency edge infrastructure, local processing, and hybrid cloud models will redefine reliability for both enterprises and consumers. We also discuss the smart home backlash following recent outages, and why the next generation of connected products will be designed to work even when the network does not. The episode also looks at the future of streaming, where consolidation, intelligent content delivery, and AI-driven personalization are reshaping both the user experience and the economics behind the platforms. 
Behind the scenes, orchestration is emerging as a defining capability, with multiple models and services working together to validate outputs, reduce hallucinations, and create more dependable AI systems. This is a conversation about moving from possibility to production, from experimentation to accountability, and from centralized architectures to distributed intelligence. So as AI becomes embedded in every workflow and every customer interaction, will the winners be the companies with the biggest models, or the ones that know exactly where their AI should live, how it should be orchestrated, and how it proves its value every single day?

What happens when AI moves from a standalone tool to a teammate that works inside the flow of your organization? In this episode, I'm joined by Mick Hodgins, General Manager for EMEA at Notion, to explore how the idea of a connected AI workspace is reshaping the way teams collaborate, make decisions, and measure productivity. With a career that includes more than a decade at Google scaling growth across multiple countries, Mick brings a unique perspective on what it takes to build technology businesses across diverse markets and why this moment in AI feels fundamentally different from previous waves of innovation. We talk about Notion's journey from a flexible, block-based collaboration platform to an AI-native workspace where context is the real differentiator. Mick explains why AI performs better when it understands how work actually happens, and how embedding agents directly into shared workflows allows teams to move from prompting tools to orchestrating outcomes. From automated reporting and knowledge management to self-improving agent loops that learn from their own performance, the conversation brings to life how organizations are already using AI to remove the "work around the work" and focus on higher-value thinking. A major theme throughout the discussion is return on investment. In a world where many companies are still stuck in pilot mode, Mick shares how leaders can reframe ROI around productivity, speed, and the elimination of repetitive tasks rather than treating AI as a single project with a fixed payback period. We also explore how roles, org structures, and hiring priorities are beginning to shift as agents become extensions of team capability rather than experimental add-ons. Because Mick leads the EMEA region, we also dive into the differences in adoption between the US and Europe, from regulatory considerations and cultural attitudes to the growing strength of the European startup ecosystem. 
It's a balanced view that recognizes both the caution and the creativity emerging across the region. This is ultimately a conversation about friction. What happens to an organization when coordination overhead disappears, when reporting builds itself, and when knowledge stays current without human intervention? So as AI agents move from novelty to infrastructure, are businesses ready to redesign how work gets done, and what becomes possible when teams stop managing tasks and start compounding impact?

Is your cloud foundation ready for the explosion of AI workloads, or are you about to scale technical debt at the speed of innovation? In this episode, I'm joined by Apurva Kadakia, Global Head of Cloud and Partnerships at Hexaware, an AI-first transformation company helping enterprises modernize the core systems that will determine whether their AI strategies succeed or stall. With a front-row seat to large-scale cloud programs across industries, Apurva explains why so many organizations that "moved to the cloud" still find themselves unprepared for what comes next, and why modernization-led migration has become a business priority rather than a technology upgrade. We unpack the real warning signs that cloud environments are not fit for AI, from monolithic architectures and spiraling compute costs to hidden integration complexity and security gaps that only surface at scale. Apurva introduces the idea of "clarity before cloud," a structured approach to understanding sprawling application estates, identifying what truly matters to the business, and matching each workload to the right modernization path using the five R's. It's a conversation that moves beyond theory into the practical decisions leaders need to make now if they want to avoid being locked out of future innovation. The role of AI inside the transformation journey is another major theme. Rather than treating AI as a destination, Apurva shares how AI-led and human-perfected assessment models are already accelerating application discovery, classification, and migration planning, completing the majority of the heavy lifting while keeping human judgment firmly in control. We also explore why governance cannot be an afterthought, and how a dedicated Cloud Transformation Office can drive adoption, reskilling, stakeholder alignment, and data readiness without slowing delivery. Looking ahead to a world of agentic systems and rapidly multiplying cloud workloads, this episode offers a clear message. 
The organizations that win will not be the ones that adopted cloud first, but the ones that modernized with intent. So as AI moves from experimentation to enterprise scale, are your applications, your architecture, and your operating model truly ready to support it, or is now the moment to rethink your path before the next wave hits?

What if the biggest transformation in hospitality isn't happening in the dining room, but in the kitchen you never see? In this episode, I'm joined by James Pool, Chief Technology and Operations Officer at Middleby, a company quietly powering more than a hundred brands across commercial foodservice and food processing. With more than three decades spent accelerating how food is cooked, prepared, and delivered at scale, James offers a rare look inside the technology, automation, and connected platforms reshaping how some of the world's most recognizable restaurant and retail brands operate. We explore what the connected, IoT-enabled kitchen actually looks like in practice, and why James prefers to think of it as digital automation for the entire restaurant. From front-of-house energy optimization to automated food safety reporting and real-time equipment intelligence in the back, the conversation reveals how data is being used to reduce waste, improve uptime, simplify training, and ultimately increase profitability at the store level. This isn't about adding more screens or more complexity, it's about removing friction from every step of the operation. James also shares how Middleby is bringing together a vast portfolio of technologies, from rapid-cook ovens and ventless kitchens to robotics and AI-driven service insights, into a single harmonized experience. That integration is opening the door to new formats such as ghost kitchens and non-traditional locations, where food can be prepared almost anywhere without the constraints that once defined a commercial kitchen. Along the way, we discuss how brands like Yum! Brands, Dunkin', Domino's, and Kroger are balancing speed, consistency, cost control, and customer experience in an environment where every investment must prove its return. 
The episode also takes us inside Middleby's Innovation Kitchens around the world, where operators can experiment with layouts, workflows, and equipment in real conditions before committing capital in the field. It's a powerful reminder that the future of hospitality is being prototyped long before it reaches the high street. So as automation, AI, and real-time analytics move from the factory floor into the heart of the restaurant, is the smart kitchen becoming the most important competitive advantage in foodservice, and are brands ready to rethink how their entire operation is designed around it?

In this episode, I'm joined by Ansel Stein, Vice President of Operations at Crisis24, and the leader behind AiiA powered by Palantir, an intelligence platform built to help executives cut through noise and make better calls in uncertain conditions. Ansel's background spans more than two decades across analysis, diplomacy, and high-stakes advisory work, including supporting U.S. national security priorities. Today, he's applying that same discipline to the private sector, helping organizations turn overwhelming streams of information into judgment leaders can actually use. We talk about what "intelligence" really means in this context, and why it's different from collecting more data or running another monitoring program. Ansel breaks down the thinking behind the AiiA President's Brief, inspired by the kind of concise, high-rigor briefings senior government leaders rely on, and explains how that model translates into business decision-making without losing context or nuance. If you have ever felt buried by alerts, headlines, and competing narratives, this conversation puts language around that problem and offers a practical alternative. We also address the concerns many leaders have about AI, privacy, and the fear of being tracked. Ansel is clear on boundaries, what data AiiA uses, why open-source intelligence matters, and how governance needs to be designed upfront if trust is going to hold. From structured analytic techniques and scenario planning to the idea that risk and opportunity often sit side by side, this episode is a look at how organizations can move from reacting to anticipating, without handing accountability over to a machine. If your team is trying to shorten the time from signal to decision while still protecting trust, what would it look like to treat intelligence as a leadership habit rather than a crisis tool, and are you ready to build that muscle before the next disruption hits?

How did a routine request from the FBI turn into a decade-long legal battle that helped reshape modern privacy law and ultimately inspire a new kind of mobile network? In this episode, I sit down with Nicholas Merrill, founder of Phreeli and one of the most influential yet often under-recognized figures in the fight for digital rights. Long before privacy became a mainstream talking point, Nick was running an internet service provider that powered major global brands. That journey took a dramatic turn in 2004 when he became the first person to challenge the constitutionality of a National Security Letter under the Patriot Act, living under a gag order for years while the case unfolded. What followed was a deeply personal and professional transformation that led him to question whether litigation and legislation alone could ever keep pace with the scale of modern surveillance. We explore how that experience pushed him toward a third path, building privacy directly into technology itself. From launching the Calyx Institute and developing privacy-focused Android software to raising a multi-million-dollar endowment for digital rights, Nick has spent decades turning principles into practical tools. Now, with Phreeli, he is taking that philosophy into one of the most data-hungry industries of all, mobile telecoms, reimagining what a carrier looks like when it is designed to know as little about its customers as possible. Our conversation also tackles the shifting balance of power between governments and corporations in the data economy, and why the distinction between the two is becoming increasingly blurred. Nick explains the trade-offs involved in building a privacy-first operator in a heavily regulated market, the cryptographic thinking behind Phreeli's double-blind architecture, and why he believes consent and personal agency should sit at the center of the digital experience. 
This is a story about resistance, resilience, and the belief that technology can be used to restore choice rather than quietly remove it. It is also a timely reminder that privacy is not an abstract concept for activists and engineers, but something as familiar as closing the curtains in your own home. So after three decades on the front lines of this debate, what does Nick think most of us still misunderstand about our digital rights, and what single shift in mindset could change how we all approach privacy in the connected world?

Have you noticed how every week brings a new headline about AI-driven fraud, yet it still feels hard to tell what is real risk and what is noise? In this Tech Talks Daily episode, I'm joined by Tommy Nicholas, CEO of Alloy, for a candid conversation that cuts through the fear-driven commentary and gets into what fraud teams are actually dealing with right now. We start with a simple but important distinction that gets blurred all the time. Tommy separates classic "fraud," where institutions take the hit, from "scams," where individuals are manipulated into handing over money or access. That framing changes how you think about solutions, accountability, and where AI is making things worse. Tommy also shares why he believes fraud losses are often massively underreported. It is not because people are trying to hide the truth, it is because organizations rarely have a single, clean view of losses across every product line and channel. Add messy labeling and split ownership across teams, and reporting becomes a best-effort estimate rather than an objective number. That reality matters if you're building board-level narratives, budgets, or risk models on top of survey data. From there, we talk about what organizations are getting right. Tommy argues there is no magical "undetectable" attack that forces teams to give up, but there is a very real breakdown happening in old fallbacks, especially human review of images and video. The bigger shift he sees is banks and fintechs finally pushing for consistent tooling across every channel, web, mobile, branch, call center, support tickets, because fraud does not respect internal org charts. We then get into why Alloy's AI Assistant is an interesting signal for where agentic AI is heading in regulated work. Tommy explains that agents are only useful when they have rigorous context, strong sources of truth, and clear workflows. Otherwise they guess, and "looks good" is not the same as "safe to run in production."
He also lays out where agents can genuinely outperform humans, like scaling investigations during sudden surges, while keeping processes auditable and repeatable. We close by looking ahead at agentic commerce, and why Tommy thinks the breakthrough will arrive through weird, emergent behavior rather than a neat protocol roll out. When you listen back, do you think the next big leap in fraud prevention will come from better models, better data, or better operational discipline, and what would you bet on if your own customers were the ones on the line?

How do you prepare an entire generation for a world where AI is already shaping how we work, create, and solve problems? In this episode of Tech Talks Daily, I'm joined by Dr. Tara Nattrass, Chief Innovation Strategist for Education at Lenovo, for a grounded and thoughtful conversation about what responsible AI integration really looks like in K–12 classrooms. Tara brings more than 25 years of experience inside school districts, including serving as Assistant Superintendent for Teaching and Learning in Arlington Public Schools, so this isn't a theory-led discussion. It's informed by lived experience. We explore how the conversation has shifted over the past 18 months. AI has been present in schools for years through adaptive software and analytics, but the arrival of generative and now agentic AI tools has accelerated everything. As Tara explains, the debate is no longer about whether AI should be in schools. It's about how to approach it responsibly, strategically, and in ways that genuinely improve learning outcomes. A big theme in our conversation is AI literacy. Tara breaks this down in practical terms, moving beyond technical understanding to include critical thinking, creativity, collaboration, and the ability to evaluate risk and bias. She shares real examples of students designing AI tools to solve problems in their communities, shifting the focus from passive consumption to active creation. We also talk about infrastructure readiness. Many school systems have bold ambitions around AI, but there is often a gap between vision and technical capability. AI-ready devices, intelligent infrastructure, cybersecurity, and data governance all play a role in making innovation sustainable rather than experimental. Lenovo's approach, as Tara describes it, centers on building education ecosystems rather than simply refreshing hardware. There is also a careful balance to strike between innovation, privacy, and inclusion. 
From hybrid AI models to questions around where data is stored and who can access it, schools are navigating complex decisions. Tara shares how Lenovo partners with districts, policymakers, and organizations such as ISTE and ASCD to align infrastructure, professional learning, and governance frameworks. Looking ahead, we discuss what will separate school systems that truly benefit from AI from those that simply layer new tools onto old teaching models. Vision, educator upskilling, cybersecurity, and rethinking assessment all feature prominently in her answer. If you are working in education, technology leadership, or policy, this conversation offers a practical view of how AI-ready classrooms are being built today and what still needs to happen next. As always, I'd love to hear your thoughts. How is AI reshaping learning in your organization, and are you ready for what comes next?

What does autonomous IT really look like when you move beyond the slideware and start wiring systems together in the real world? At Dynatrace Perform in Las Vegas, I sat down with Pablo Stern, EVP and GM of Technology Workflow Products at ServiceNow, to unpack exactly that. Pablo leads the teams focused on CIOs and CISOs, building the workflows and security products that sit at the heart of modern IT organizations. From service desks and command centers to risk and asset management, his remit is clear: enable AI to work for people, not the other way around.

We began with ServiceNow's deepening multi-year partnership with Dynatrace. While the announcement made headlines, Pablo was quick to point out that the real story starts with customers. This collaboration is rooted in a shared goal of helping joint customers reduce outages, improve SLA adherence, and shrink mean time to resolution. The vision of autonomous IT operations is not about hype. It is about connecting observability data with deterministic workflows so that insight can evolve into coordinated, system-level action.

Pablo walked me through the maturity curve he sees emerging. First came AI-powered insight, summarizing data and surfacing signals from noise. Then came task automation: drafting knowledge articles, paging teams, and triggering predefined playbooks. The next step, and the one that excites him most, is orchestrated autonomy. That means stitching together skills, agents, and workflows into systems that can drive end-to-end outcomes. It is a journey measured in years, not months, and it depends as much on digitizing process and building trust as it does on technology.

We also explored root cause analysis, still one of the biggest time drains in IT. By combining Dynatrace's AI-driven observability with ServiceNow's workflow engine, enterprises can automate forensic steps, correlate events faster, and shorten the time spent on major incident bridges where teams debate ownership. Even incremental improvements in accuracy can save hours when incidents strike.

Trust, of course, remains central. Pablo was candid that full self-healing systems are still some distance away. What we will see first is relief automation: controlled failovers and scripted actions suggested by machines but approved by humans. Over time, as confidence grows and processes become fully digitized, the balance will shift.

Beyond the technology, a consistent theme ran through our conversation. Outcomes have not changed. Enterprises still want higher availability, faster resolution, better employee experiences. What is changing is the how. ServiceNow is reimagining its platform to deliver those outcomes at a much higher standard, not through incremental tweaks, but through rethinking workflows for an AI-first world.

From design partnerships with banks building pre-flight change checks to internal teams acting as the toughest customers, this was a grounded, practical conversation about where autonomous operations are headed and what it will take to get there. If you are a CIO, CISO, or IT leader wondering how to move from theory to execution, this episode offers a clear-eyed look behind the curtain.

What happens when nearly half of organizations admit they have no AI-specific security controls, yet AI-driven data leaks are accelerating at the same time? In this episode of Tech Talks Daily, I spoke with Aayush Choudhry, CEO and co-founder of Scrut Automation, about what he sees as a blind spot in the cybersecurity industry. While much of the market continues to design tools for Fortune 500 enterprises with deep pockets and large security teams, Aayush argues that the real existential risk sits with the 99 percent of businesses that cannot survive a serious breach.

Aayush brings a founder's perspective shaped by firsthand pain. Before launching Scrut, he and his co-founder experienced the grind of managing compliance and security as a cloud-native startup trying to sell into enterprises. They were outsiders to GRC and security at the time, forced to learn from first principles. That experience became the foundation for Scrut Automation, a modern GRC platform built specifically for small and mid-sized companies that cannot afford six-month implementations, armies of consultants, or half-million-dollar tooling budgets.

We explore why treating compliance and security as separate functions increases risk for smaller organizations. In the mid-market, the same small team is often responsible for both. When compliance is handled as a box-ticking exercise and security as a separate technical discipline, gaps emerge. Scrut's approach converges governance, risk, and security signals into a unified layer that translates hundreds of technical alerts into context-aware risks that actually matter to the business.

Our conversation also tackles AI complacency. Using the classic confidentiality, integrity, and availability framework, Aayush outlines what minimum viable AI security hygiene looks like in practice. That includes ensuring AI agents are not over-privileged compared to the humans they represent, placing guardrails around sensitive data fed into models, and extending supply chain security thinking to agentic integrations. For resource-constrained teams, these are not theoretical concerns. They are daily realities.

Perhaps most compelling is his view that AI can act as a force multiplier for small teams. By embedding accumulated expertise into agents trained on anonymized patterns and edge cases, Scrut aims to democratize security know-how that would otherwise require multiple full-time analysts. The goal is simple but ambitious: make enterprise-grade security outcomes accessible without enterprise-grade headcount.

If you are leading a small or mid-sized business and wondering how to balance growth, compliance, and AI risk without breaking the bank, this conversation offers a candid look from the trenches.

How do you build enterprise software for the companies that keep the world turning, while also building a leadership culture where people can actually thrive? In this episode of Tech Talks Daily, I spoke with Kerrie Jordan, Group VP of Product Management at Epicor, about her journey from studying literature to helping shape cloud ERP strategy at a global software company serving more than 20,000 customers worldwide. Kerrie's story is a reminder that there is no single path into technology leadership. Sometimes the foundations are laid in unexpected places, through storytelling, creativity, and a deep curiosity about people.

Kerrie shares how her early career in product lifecycle management opened her eyes to the human side of software. Interviewing customers and writing case studies showed her that behind every system implementation is a personal story, a career milestone, or a business trying to survive and grow. That perspective still shapes how she approaches product and marketing today at Epicor, a company recently recognized as a Leader in the Gartner Magic Quadrant for Cloud ERP for Product-Centric Enterprises for the third consecutive year.

But this conversation goes far beyond market recognition. We talk openly about burnout, resilience, and the reality of leading through pressure. Kerrie reflects on the importance of protecting time, creating space to reconnect, and building a culture where empathy is practiced, not just discussed. Her view of leadership is grounded in communication, psychological safety, and being tough on problems rather than people.

Mentorship is another thread running throughout our discussion. Kerrie explains why powerful mentorship is not passive. It requires vulnerability, preparation, and a willingness to hear difficult advice. A single phrase from a mentor early in her career, "stick-to-itiveness," continues to shape how she approaches hard problems today.

We also explore the future of women in manufacturing and technology. Kerrie highlights the need for intentional change across education, early career development, and leadership visibility. She believes technology, particularly AI, can expand access, enable upskilling, and introduce flexibility that supports long-term career growth. At the same time, she makes a simple but powerful point. Women in tech want the same thing as anyone else: the space and autonomy to do their jobs well.

From customer co-innovation and community-driven product roadmaps to inclusive leadership under commercial pressure, this episode offers a candid look at what it really takes to lead in enterprise technology today. If you are building products, leading teams, or questioning your own next career step, I think you will find something in Kerrie's story that resonates.

Why do so many of us feel busy all day, yet struggle to point to the meaningful work we actually completed? In this episode of Tech Talks Daily, I sit down with Tomás Dostal Freire, CIO of Miro, to unpack a challenge that quietly drains modern organizations. Tomás brings experience from companies like Google, Netflix, and Booking.com, and now leads both IT and business acceleration at Miro. His focus is simple but ambitious. Move beyond AI experimentation and rethink how work itself gets done.

We explore new research revealing that for every hour of creative work, employees lose up to three hours to meetings, admin, emails, and maintenance tasks. That ratio is more than an inconvenience. It affects decision-making speed, employee satisfaction, and ultimately a company's ability to compete. Tomás argues that future candidates will choose employers based on how much unnecessary internal work they are expected to tolerate. In other words, reducing busy work is quickly becoming a talent strategy.

One of the biggest culprits? Context switching. With dozens of browser tabs open and information scattered across tools, teams spend more time stitching together fragments than making decisions. Tomás describes how duplication of work, outdated systems, and a lack of shared context quietly erode momentum.

AI, he believes, should not create more noise or another standalone tool. It needs to be embedded where collaboration already happens. We discuss the difference between single-player AI moments, where individuals use tools in isolation, and multiplayer AI collaboration, where shared context allows teams to move faster together. At Miro, this philosophy has shaped what they call an AI Innovation Workspace, a shared canvas where human insight and AI assistance coexist in real time.

Tomás also shares practical advice for leaders who want to reclaim creative time. Start by identifying tasks you dislike doing that could easily be handled by someone junior. That list often reveals what AI can already automate. Then focus on building transferable skills like cognitive agility and first-principles thinking, rather than chasing every new tool.

If you are wrestling with burnout, fragmented workflows, or wondering how AI can genuinely improve collaboration without overwhelming teams, this conversation offers a grounded, optimistic perspective. And yes, we even add a Beatles classic to the Spotify playlist along the way.

How do you protect millions in revenue during your busiest hour of the year when your entire business depends on digital performance? At Perform 2026, I caught up with Alex Hibbitt, Engineering Director responsible for the customer platform at Storio Group, to unpack what happens when observability moves from an engineering afterthought to a board-level priority.

Storio Group was formed from the merger of Photobox and Albelli, bringing together multiple brands and five separate e-commerce platforms into one unified customer journey. That consolidation created opportunity, but it also exposed risk, especially during peak trading from Black Friday through Black Sunday and into the Christmas rush.

Alex shared what it really looks like when downtime is non-negotiable. At peak, Storio's platform can generate up to 1.5 million euros per hour. A single poorly timed incident is not simply a technical problem; it is a direct threat to revenue and customer trust. Before partnering with Dynatrace, the team was relying heavily on centralized logging, processing over a billion log lines a day and depending on engineers to manually interpret signals. It was reactive, labor intensive, and left too much to chance.

What stood out for me was how cultural change led the transformation. Rather than imposing a new tool from the top down, Alex and his team built a maturity model engineers could relate to, created internal champions, and framed observability as risk management and business protection. The result was a reported 65 to 70 percent reduction in log costs, a 50 percent drop in mean time to detect overall, and up to 90 percent improvement for the most severe incidents.

We also explored how unifying logs, metrics, and traces into a single AI-driven platform helped Storio move from reactive firefighting to proactive detection. During one Black Sunday alone, three major issues were identified early enough to avoid an estimated 4.5 million euros in potential impact.

This conversation goes beyond tooling. It is about protecting customer experience, safeguarding revenue during peak demand, and building an engineering culture that embraces change. If your organization is wrestling with cloud costs, fragmented monitoring, or the pressure to deliver flawless digital performance under load, there are some powerful lessons here.

How do you design financial infrastructure that keeps running when the unexpected hits, whether that is a regional outage, a regulatory shift, or a sudden spike in digital demand? In this episode of Tech Talks Daily, I'm joined by Katsutoshi Itoh from Sony and Masahisa Kawashima from NTT, both representing the IOWN Global Forum, to unpack how photonics-based networks could change the foundations of digital finance. Speaking with me from Kyoto, they share how the Innovative Optical and Wireless Network vision is moving beyond theory and into practical, finance-specific use cases.

Financial institutions are under constant pressure to deliver uninterrupted services while meeting ever tighter compliance standards. Yet as we discuss, many existing architectures still rely on asynchronous data replication and layered resilience added after the fact. On paper, it works. In a real disruption, gaps quickly appear. Itoh and Kawashima explain how synchronous replication over ultra-low latency optical networks can reduce the risk of data loss while simplifying disaster recovery and lowering operational complexity.

We also explore the role of Open All-Photonic Networks and why reducing packet forwarding layers can dramatically cut latency and infrastructure costs. Instead of concentrating compute and storage in dense urban data centers, photonics enables distributed computing across regions while maintaining deterministic performance. That shift opens the door to improved resilience, better infrastructure utilization, and new approaches to scaling without constant over-provisioning.

Sustainability sits alongside resilience in this conversation. Rather than treating energy efficiency as a compromise, the IOWN vision distributes power demand geographically, making better use of locally available renewable energy and reducing concentrated load pressures. It is a subtle but important rethink of how infrastructure supports broader societal goals.

Looking ahead, we consider what this could mean for digital banking platforms, AI-driven risk management, and cross-border financial services. If infrastructure limitations fall away, institutions can design services around business needs rather than technical constraints. If you are curious about how photonics could underpin the next generation of financial services, this episode offers a grounded and thoughtful perspective. As always, I would love to hear your thoughts after listening.