What happens when leaders are confident about AI, but the people expected to use it are not ready? In this episode of Tech Talks Daily, I sat down with Caroline Grant from Slalom Consulting to explore one of the most persistent tensions in enterprise AI adoption right now. Boards and executives are spending more, moving faster, and expecting returns sooner than ever, yet many organizations are struggling to translate that ambition into outcomes that scale. Caroline brings fresh insight from Slalom's latest research into how leadership, culture, and workforce readiness are shaping what actually happens next. We unpack a clear shift in ownership for AI transformation, with CTOs and CDOs increasingly leading organizational redesign rather than HR. That change reflects how deeply AI now cuts across technology, operations, and business models, but it also introduces new risks. Caroline explains why sidelining people teams can create blind spots around skills, incentives, and trust, especially as roles evolve and uncertainty grows inside the workforce. The result is what Slalom describes as a growing AI disconnect between executive optimism and day-to-day reality. Despite the noise around job losses, the data tells a more nuanced story. Many organizations are creating new AI-related roles at pace, yet almost all are facing skills gaps that threaten progress. We talk about why reskilling at scale is now unavoidable, how unclear career paths fuel employee distrust, and why focusing only on technical capability misses the human side of adoption. Caroline also challenges assumptions about skill priorities, warning that deprioritizing empathy, communication, and change leadership could undermine effective human-AI collaboration. We also dig into ROI expectations, with most UK executives now expecting returns within two years. Caroline shares why that ambition is achievable, where it breaks down, and why so many organizations remain stuck in pilot mode. From governance and decision rights to culture and leadership behavior, this conversation goes beyond tools and platforms to examine what separates experimentation from fundamental transformation. As AI becomes a test of leadership as much as technology, how are you closing the gap between vision and execution within your organization, and are you building a workforce that can keep pace with change rather than resist it? Connect With Caroline Grant from Slalom Consulting The Great AI Disconnect: Slalom's Insights Survey Learn More About Slalom
Is the browser quietly becoming the most powerful and dangerous interface in modern work? In this episode of Tech Talks Daily, I sat down with Karim Toubba, CEO of LastPass, to unpack a shift that many people feel every day but rarely stop to question. The browser is no longer just a window to the internet. It has become the place where work happens, where SaaS lives, and increasingly, where humans and AI agents meet data, credentials, and decisions. From AI-native browsers to prompt-based navigation and headless agents acting on our behalf, the way we access information is changing fast, and so are the risks. Karim shares why this moment feels different from earlier waves like SaaS adoption or remote work. Today, more than ever, productivity, identity, and security collide inside the browser. Shadow AI is spreading faster than most organizations can track, personal accounts are being used to access powerful AI tools, and sensitive data is being uploaded with little visibility or control. At the same time, attackers have noticed that the browser has become the soft underbelly of the enterprise, with a growing share of malware and breaches originating there. We also explore the rise of agentic AI and what happens when software, not people, starts logging into systems. When an agent books travel, pulls data, or completes workflows on a user's behalf, traditional authentication and access models start to break down. Karim explains why identity, visibility, and control must evolve together, and why secure browser extensions are emerging as a practical foundation for this next phase of computing. The conversation goes deep into what users do not see when AI browsers ask for access to email, calendars, and internal apps, and why convenience often masks long-term exposure. Throughout the discussion, Karim brings a grounded perspective shaped by decades in cybersecurity, from risk-based vulnerability management to enterprise threat intelligence. Rather than pushing fear, he focuses on realistic steps organizations and individuals can take, from understanding what data is being shared, to treating security teams as partners, to using tools that bring passwords, passkeys, and authentication into one trusted place as browsing evolves. As AI reshapes how we search, work, and make decisions, the question is no longer whether the browser matters. It is whether we are ready for it to act as the front door to both our productivity and our risk, so are you securing your browser for the future you are already using today? Connect with Karim Toubba LastPass Threat Intelligence, Mitigation, and Escalation (TIME) team page Phish Bowl Podcast
What really happens when AI helps teams write code faster, but everything else in the delivery process starts to slow down? In this episode of Tech Talks Daily, I'm joined once again by returning guest and friend of the show, Martin Reynolds, Field CTO at Harness. It has been two years since we last spoke, and a lot has changed since then. Martin has relocated from London to North Carolina, gaining back hours of his working week. Still, the bigger shift has been in how AI is reshaping software delivery inside modern enterprises. Our conversation centers on what Martin calls the AI velocity paradox. Development teams are producing more code at speed, often thanks to AI coding agents, yet testing, security, governance, and release processes are struggling to keep up. The result is a growing gap between how fast software is written and how safely it can be delivered. Martin shares research showing how this imbalance is already leading to production incidents, hidden vulnerabilities, and mounting technical debt. We also dig into why this AI-driven transition feels different from previous waves, such as cloud, mobile, or DevOps. Many of the same concerns around security, trust, and control still exist, but this time, everything is happening far faster. Martin explains why AI works best as a human amplifier, strengthening good engineering practices while exposing weak ones sooner than ever before. A significant theme in the episode is visibility. From shadow AI usage to expanding attack surfaces, Martin outlines why security teams are finding it harder to see where AI is being used and how data is flowing through systems. Rather than slowing teams down, he argues that the answer lies in embedding governance directly into delivery pipelines, making security automatic rather than an afterthought. We also explore the rise of agentic AI in testing, quality assurance, and security, where specialized agents act like virtual teammates. When well-designed, these agents help developers stay focused while improving reliability and resilience throughout the lifecycle. If you are responsible for engineering, platform, or security teams, this episode offers a grounded look at how to balance speed with responsibility in an AI-native world. As AI becomes part of every stage of software delivery, are your processes designed to safely absorb that change, or are they quietly becoming the bottleneck? Useful Links Learn More About Harness The State of AI in Engineering The State of AI Application Security EngineeringX Follow Harness on LinkedIn Connect With Martin Reynolds Thanks to our sponsors, Alcor, for supporting the show.
In this episode of Tech Talks Daily, I'm joined by Josh Haas, co-founder and co-CEO of Bubble, to unpack why the next phase of software creation is already taking shape. We talk about how the early excitement around AI-powered code generation delivered fast demos and instant gratification, but often fell apart when teams tried to turn those experiments into durable products that could grow with a business. Josh takes us back to Bubble's origins in 2012, long before AI hype cycles and trend-driven development. At the time, the idea was simple but ambitious: give more people the ability to build genuine software without spending months learning traditional programming. That early focus on visual development now feels timely again, especially as builders wrestle with the limits of black-box AI tools that hide logic until something breaks. We spend time on where vibe coding struggles in practice. Josh explains why speed alone is never enough once customers, payments, and sensitive data are involved. As he explains, most product requirements only surface after users arrive, and those edge cases are exactly where opaque AI-generated code can become risky. If you cannot see how your system works, you cannot truly own it, secure it, or fix it when something goes wrong. The conversation also digs into Bubble's hybrid approach, blending AI agents with visual development. Rather than asking builders to trust an AI unquestioningly, Bubble's model emphasizes clarity, auditability, and shared responsibility between humans and machines. Josh explains how visual logic makes software behavior explicit, helping teams understand rules, permissions, and workflows before they cause real-world problems. I learn how this mindset has helped Bubble-powered apps process over $1.1 billion in payments every year, a level of scale that leaves no room for guesswork. We also explore Bubble AI Agent, where conversational AI meets visual editing, and why transparency and control matter more than flashy demos. From governance and rollback logs to builder accountability, this episode looks at what it actually takes to build software that survives beyond the first launch. If you are building with AI or thinking about how software development is changing, this episode offers a grounded perspective on what comes after the hype fades. As AI tools become more powerful, the real question is whether they help you understand your product better over time, or slowly disconnect you from it. Which path should builders choose right now? Useful Links Connect with Josh Haas Learn More About Bubble Thanks to our sponsors, Alcor, for supporting the show.
How do you turn a developer-first product into a growth engine without losing trust, clarity, or focus along the way? In this episode of Tech Talks Daily, I'm joined by Sanjay Sarathy, VP of Developer Experience and Self Service at Cloudinary, for a grounded and thoughtful conversation about product-led growth when developers sit at the center of the story. Sanjay operates at a rare intersection. He leads Cloudinary's high-volume self-service motion while also caring for the developer community that fuels adoption, advocacy, and long-term loyalty. That dual perspective, part business, part builder, shapes everything we discuss. Our conversation picks up on a theme I have been exploring across recent episodes. When technical work is explained clearly, whether that is security, performance, or reliability, it stops being background noise and starts supporting growth. Sanjay shares how Cloudinary approached this from day one, starting with founders who were developers themselves and carried a deep respect for developer trust into the company's DNA. Documentation that reflects reality, platforms that behave exactly as promised, and support that shows up early rather than as an afterthought all play a part. What stood out to me was how early Cloudinary invested in technical support, even before many traditional growth motions were in place. That decision shaped a self-service experience that still feels human at scale. With thousands of developer sign-ups every day and millions of developers using the platform, Sanjay explains how trust compounds into referrals, word of mouth, and sustained adoption. We also dig into developer advocacy and why community is rarely a single thing. Developers gather around frameworks, tools, workflows, and shared problems, and Cloudinary has learned to meet them where they already are rather than forcing them into a single branded space. From React and Next.js users to enterprise advisory boards, feedback loops become part of the product itself. As AI reshapes how software is built and developer tools become more crowded, Sanjay offers a clear-eyed view on what separates companies that grow steadily from those that burn bright and stall. Profitability, experimentation with intent, and the discipline to double down on what works all feature heavily in his thinking. It is a conversation rooted in experience rather than theory. If you care about product-led growth, developer trust, or building platforms that scale without losing their soul, this episode offers plenty to think about. As always, I would love to hear your perspective too. How do you see developer communities shaping the next phase of product growth, and where do you think companies still get it wrong?
Why do today's most powerful AI systems still struggle to explain their decisions, repeat the same mistakes, and undermine trust at the very moment we are asking them to take on more responsibility? In this episode of Tech Talks Daily, I'm joined by Artur d'Avila Garcez, Professor of Computer Science at City St George's, University of London, and one of the early pioneers of neurosymbolic AI. Our conversation cuts through the noise around ever-larger language models and focuses on a deeper question many leaders are now grappling with. If scale alone cannot deliver reliability, accountability, or genuine reasoning, what is missing from today's AI systems? Artur explains neurosymbolic AI in clear, practical terms as the integration of neural learning with symbolic reasoning. Deep learning excels at pattern recognition across language, images, and sensor data, but it struggles with planning, causality, and guarantees. Symbolic AI, by contrast, offers logic, rules, and explanations, yet falters when faced with messy, unstructured data. Neurosymbolic AI aims to bring these two worlds together, allowing systems to learn from data while reasoning with knowledge, producing AI that can justify decisions and avoid repeating known errors. We explore why simply adding more parameters and data has failed to solve hallucinations, brittleness, and trust issues. Artur shares how neurosymbolic approaches introduce what he describes as software assurances, ways to reduce the chance of critical errors by design rather than trial and error. From self-driving cars to finance and healthcare, he explains why combining learned behavior with explicit rules mirrors how high-stakes systems already operate in the real world. A major part of our discussion centers on explainability and accountability. Artur introduces the neurosymbolic cycle, sometimes called the NeSy cycle, which translates knowledge into neural networks and extracts knowledge back out again. This two-way process opens the door to inspection, validation, and responsibility, shifting AI away from opaque black boxes toward systems that can be questioned, audited, and trusted. We also discuss why scaling neurosymbolic AI looks very different from scaling deep learning, with an emphasis on knowledge reuse, efficiency, and model compression rather than ever-growing compute demands. We also look ahead. From domain-specific deployments already happening today to longer-term questions around energy use, sustainability, and regulation, Artur offers a grounded view on where this field is heading and what signals leaders should watch for as neurosymbolic AI moves from research into real systems. If you care about building AI that is reliable, explainable, and trustworthy, this conversation offers a refreshing and necessary perspective. As the race toward more capable AI continues, are we finally ready to admit that reasoning, not just scale, may decide what comes next, and what kind of AI do we actually want to live with? Useful Links Neurosymbolic AI (NeSy) Association website Artur's personal webpage on the City St George's, University of London page Co-authored book titled "Neural-Symbolic Learning Systems" The article about neurosymbolic AI and the road to AGI The Accountability in AI article Reasoning in Neurosymbolic AI Neurosymbolic Deep Learning Semantics
Why does healthcare keep investing in new technology while so many clinicians feel buried under paperwork and admin work that has nothing to do with patient care? In this episode of Tech Talks Daily, I'm joined by Dr. Rihan Javid, psychiatrist, former attorney, and co-founder and president of Edge. Our conversation cuts straight into an issue that rarely gets the attention it deserves, the quiet toll that administrative overload takes on doctors, care teams, and ultimately patients. Nearly half of physicians now link burnout to paperwork rather than clinical work, and Rihan explains why this problem keeps slipping past leadership discussions, even as budgets for digital tools continue to rise. Drawing on his experience inside hospitals and clinics, Rihan shares how operational design shapes outcomes in ways many healthcare leaders underestimate. We talk about why short-term staffing fixes often create new problems down the line, and how practices that invest in stable, well-trained remote administrative teams see real improvements. That includes faster billing cycles, fewer errors, and more time back for clinicians who want to focus on care rather than forms. What stood out for me was his framing of workforce infrastructure as a performance driver rather than a compliance box to tick. We also dig into how hybrid operations are becoming the default model. Local clinicians working alongside remote admin teams, supported by AI-assisted workflows, are now common across healthcare. Rihan is clear that while automation and AI can remove friction and cost, human oversight still matters deeply in high-compliance environments. Trust, accuracy, and patient confidence depend on knowing where automation fits and where human judgment must stay firmly in place. Another part of the discussion that stuck with me was Rihan's idea that stability is emerging as a better success signal than raw cost savings. High turnover may look efficient on paper, but it quietly limits a clinic's ability to grow, retain knowledge, and improve patient outcomes. We unpack why consistent administrative support can influence revenue cycles, satisfaction, and long-term resilience in ways traditional metrics often miss. If you're a healthcare leader, operator, or technologist trying to understand how AI, remote teams, and smarter operations can work together without losing trust or care quality, this conversation offers plenty to reflect on. As healthcare systems rethink how work gets done behind the scenes, what would it look like if stability and clinician well-being were treated as core performance measures rather than afterthoughts, and how might that change the future of care? Useful Links Connect with Dr. Rihan Javid Edge Health Rinova AI Thanks to our sponsors, Alcor, for supporting the show.
Why do small business leaders keep buying more software yet still feel like they are drowning in logins, dashboards, and unfinished work? In this episode of Tech Talks Daily, I sit down with Jesse Lipson, founder and CEO of Levitate, to unpack a frustration I hear from business owners almost daily. After years of being pitched yet another tool, many leaders now spend hours each week troubleshooting software instead of serving customers. Jesse brings a grounded perspective shaped by decades of building SaaS companies, including bootstrapping ShareFile before its acquisition by Citrix, and what stood out to me immediately was how clearly he articulates where the current software model has broken down for small businesses. We talk about why adding more apps has not translated into better outcomes, especially for teams without dedicated specialists in marketing, finance, or sales. Jesse explains how traditional software often solves only part of the problem, leaving owners to become accidental experts in accounting, marketing strategy, or customer communications just to make the tools usable. From there, our conversation shifts toward what he believes will actually matter as AI adoption matures. Rather than chasing full automation or shiny new dashboards, Jesse argues that the real opportunity lies in blending intelligence with human guidance, allowing AI to work quietly behind the scenes while people remain the face of authentic relationships. A big part of our discussion centers on trust and connection in an AI-saturated world. Jesse shares why customers have become incredibly good at spotting automated communication and why relationship-based businesses cannot afford to lose the human element. We explore how AI can act as a second brain, helping business owners remember details, follow up at the right moments, and show up more thoughtfully, without crossing the line into impersonal automation that turns customers away. His examples, from marketing emails to customer support, make it clear that technology should support better relationships rather than replace them. We also look ahead to what small businesses should realistically focus on as AI evolves. Jesse offers practical guidance on getting started, from everyday use of conversational AI, to building internal documentation that allows systems to work more effectively, and eventually moving toward agent-based workflows that can take on real operational tasks. Throughout the conversation, he keeps returning to the same idea, that AI works best when it helps people become the kind of business leaders they already want to be, more present, more consistent, and more human. If you are a founder, operator, or small business leader feeling overwhelmed by tools that promise productivity but deliver friction, this episode offers a refreshing reset. As AI becomes more capable and more embedded in daily work, the real question is not how many systems you deploy, but whether they help you build stronger, more genuine relationships, so how are you choosing to use AI to support the human side of your business rather than bury it? Useful Links Connect with Jesse Lipson Connect with Jesse on X Learn more about Levitate
What happens when power, rather than compute, becomes the limiting factor for AI, robotics, and industrial automation? In this episode of Tech Talks Daily, I'm joined by Ramesh Narasimhan from Nyobolt to unpack a challenge that is quietly reshaping modern infrastructure. As AI training and inference workloads grow more dynamic, power demand is no longer predictable or steady. It can spike and drop in milliseconds, creating stress on systems that were never designed for this level of volatility. We talk about why data center operators, automation leaders, and industrial firms are being forced to rethink how energy is delivered, managed, and scaled. Our conversation moves beyond AI headlines and into the less visible constraints holding progress back. Ramesh explains how automation growth, particularly in robotics and autonomous mobile robot fleets, has exposed hidden inefficiencies. Charging downtime, thermal limits, and oversized systems are eroding productivity in warehouses and factories that aim to run around the clock. Instead of expanding physical footprints or adding redundant capacity, many operators are questioning whether the energy layer itself has become outdated. One of the themes that stood out for me is how energy has shifted from a background utility to a board-level concern. Power density, resilience, and cycle life are now discussed with the same urgency as compute performance or sensor accuracy. Ramesh shares why executives across logistics, automotive, advanced manufacturing, and AI infrastructure are starting to see energy strategy as a direct driver of uptime, cost control, and competitive advantage. We also explore the industry-wide push toward high-power, high-uptime operations. As businesses demand systems that can stay online continuously, the pressure is on energy technologies to respond faster, charge quicker, and occupy less space. This raises difficult questions about oversizing infrastructure for rare peak loads versus designing smarter systems that can flex in real time without waste. If you are building or operating AI clusters, robotics platforms, or industrial automation at scale, this episode offers a clear-eyed look at why energy systems may be the next major bottleneck and opportunity. As power becomes inseparable from performance, how ready is your organization to treat energy as a strategic asset rather than an afterthought?
What happens when artificial intelligence starts accelerating cyberattacks faster than most organizations can test, fix, and respond? In this fast-tracked episode of Tech Talks Daily, I sat down with Sonali Shah, CEO of Cobalt, to unpack what real-world penetration testing data is revealing about the current state of enterprise security. With more than two decades in cybersecurity and a background that spans finance, engineering, product, and strategy, Sonali brings a grounded, operator-level view of where security teams are keeping up and where they are quietly falling behind. Our conversation centers on what happens when AI moves from an experiment to an attack surface. Sonali explains how threat actors are already using the same AI-enabled tools as defenders to automate reconnaissance, identify vulnerabilities, and speed up exploitation. We discuss why this is no longer theoretical, referencing findings from companies like Anthropic, including examples where models such as Claude have demonstrated both power and unpredictability. The takeaway is sobering but balanced. AI can automate a large share of the work, but human expertise still plays a defining role, both for attackers and defenders. We also dig into Cobalt's latest State of Pentesting data, including why median remediation times for serious vulnerabilities have improved while overall closure rates remain stubbornly low. Sonali breaks down why large enterprises struggle more than smaller organizations, how legacy systems slow progress, and why generative AI applications currently show some of the highest risk with some of the lowest fix rates. As more companies rush to deploy AI agents into production, this gap becomes harder to ignore. One of the strongest themes in this episode is the shift from point-in-time testing to continuous, programmatic risk reduction. Sonali explains what effective continuous pentesting looks like in practice, why automation alone creates noise and friction, and how human-led testing helps teams move from assumptions to evidence. We also address a persistent confidence gap, where leaders believe their security posture is strong, even when testing shows otherwise. We close by tackling one of the biggest myths in cybersecurity. Security is never finished. It is a constant process of preparation, testing, learning, and improvement. The organizations that perform best accept this reality and build security into daily operations rather than treating it as a one-off task. So as AI continues to accelerate both innovation and attacks, how confident are you that your security program is keeping pace, and what would continuous testing change inside your organization? I would love to hear your thoughts. Useful Links Connect with Sonali Shah Learn more about Cobalt Check out the Cobalt Learning Center State of Pentesting Report Thanks to our sponsors, Alcor, for supporting the show.
What does it really take to remove decades of technical debt without breaking the systems that still keep the business running? In this episode of Tech Talks Daily, I sit down with Pegasystems leaders Dan Kasun, Head of Global Partner Ecosystem, and John Higgins, Chief of Client and Partner Success, to unpack why legacy modernization has reached a breaking point, and why AI is forcing enterprises to rethink how software is designed, sold, and delivered. Our conversation goes beyond surface-level AI promises and gets into the practical reality of transformation, partner economics, and what actually delivers measurable outcomes. We explore how Pega's AI-powered Blueprint is changing the entry point to enterprise-grade workflows, turning what used to be long, expensive discovery phases into fast, collaborative design moments that business and technology teams can engage with together. Dan and John explain why the old "wrap and renew" approach to legacy systems is quietly compounding technical debt, and why reimagining workflows from the ground up is becoming essential for organizations that want to move toward agentic automation with confidence. The discussion also dives into Pega's deep collaboration with Amazon Web Services, including how tools like AWS Transform and Blueprint work together to accelerate modernization at scale. We talk candidly about the evolving role of partners, why the idea of partners as an extension of a sales force is outdated, and how marketplaces are reshaping buying, building, and operating enterprise software. Along the way, we tackle some uncomfortable truths about AI hype, technical debt, and why adding another layer of technology rarely fixes the real problem. This is an episode for anyone grappling with legacy systems, skeptical of quick-fix AI strategies, or rethinking how partner ecosystems need to operate in a world where speed, clarity, and accountability matter more than ever. As enterprises move toward multi-vendor, agent-driven environments, are we finally ready to retire legacy thinking along with legacy systems, or are we still finding new ways to delay the inevitable? Useful Links Connect with Dan Kasun Connect with John Higgins Learn more about Pega Blueprint Thanks to our sponsors, Alcor, for supporting the show.
What happens when AI stops talking and starts working, and who really owns the value it creates? In this episode of Tech Talks Daily, I'm joined by Sina Yamani, founder and CEO of Action Model, for a conversation that cuts straight to one of the biggest questions hanging over the future of artificial intelligence. As AI systems learn to see screens, click buttons, and complete tasks the way humans do, power and wealth are concentrating fast. Sina argues that this shift is happening far quicker than most people realize, and that the current ownership model leaves everyday users with little say and even less upside. Sina shares the thinking behind Action Model, a community-owned approach to autonomous AI that challenges the idea that automation must sit in the hands of a few giant firms. We unpack the concept of Large Action Models, AI systems trained to perform real online workflows rather than generate text, and why this next phase of AI demands a very different kind of training data. Instead of scraping the internet in the background, Action Model invites users to contribute actively, rewarding them for helping train systems that can navigate software, dashboards, and tools just as a human worker would. We also explore ActionFi, the platform's outcome-based reward layer, and why Sina believes attention-based incentives have quietly broken trust across Web3. Rather than paying for likes or impressions, ActionFi focuses on verifying real actions across the open web, even when no APIs or integrations exist. That raises obvious questions around security and privacy. This conversation does not shy away from the uncomfortable parts. We talk openly about job displacement, the economic reality facing businesses, and why automation is unlikely to slow down. Sina argues that resisting change is futile, but shaping who benefits from it remains possible. He also reflects on lessons from his earlier fintech exit and how movements grow when people feel they are pushing back against an unfair system. By the end of the episode, we look ahead to a future where much of today's computer-based work disappears and ask what success and failure might look like for a community-owned AI model operating at scale. If AI is going to run more of the internet on our behalf, should the people training it have a stake in what it becomes, and would you trust an AI ecosystem owned by its users rather than a handful of billionaires? Useful Links Connect with Sina Yamani on LinkedIn or X Learn more about the Action Model Follow on X Learn more about the Action Model browser extension Check out the whitelabel integration docs Join their Waitlist Join their Discord community Thanks to our sponsors, Alcor, for supporting the show.
What does it really take to move AI from proof-of-concept to something that delivers value at scale? In this episode of Tech Talks Daily, I'm joined by Simon Pettit, Area Vice President for the UK and Ireland at UiPath, for a grounded conversation about what is actually happening inside enterprises as AI and automation move beyond experimentation. Simon brings a refreshingly practical perspective shaped by an unconventional career path that spans the Royal Navy, nearly two decades at NetApp, and more than seven years at UiPath. We talk about why the UK and Ireland remain a strategic region for global technology adoption, how London continues to play a central role for companies expanding into Europe, and why AI momentum in the region is very real despite the broader economic noise. A big part of our discussion focuses on why so many organizations are stuck in pilot mode. Simon explains how hype, fragmented experimentation, and poor qualification of use cases often slow progress, while successful teams take a very different approach. He shares real examples of automation already delivering measurable outcomes, from long-running public sector programs to newer agent-driven workflows that are now moving into production after clear ROI validation. We also explore where the next wave of challenges is emerging. As agentic AI becomes easier for anyone to create, Simon draws a direct parallel to the early days of cloud computing and VM sprawl. Visibility, orchestration, and cost control are becoming just as important as innovation itself. Without them, organizations risk losing control of workflows, spend, and accountability as agents multiply across the business. Looking ahead, Simon outlines why AI success will depend on ecosystems rather than single platforms. Partnerships, vertical solutions, and the ability to swap technologies as the market evolves will shape how enterprises scale responsibly. From automation in software testing to cross-functional demand coming from HR, finance, and operations, this conversation captures where AI is delivering today and where the real work still lies. If you're trying to separate AI momentum from AI noise, this episode offers a clear, experience-led view of what it takes to turn potential into progress. What would need to change inside your organization to move from pilots to production with confidence? Useful Links Learn more about Simon Pettit Connect with UiPath Follow on LinkedIn Thanks to our sponsors, Alcor, for supporting the show.
What happens when speed, scale, and convenience start to erode trust in the images brands rely on to tell their story? In this episode of Tech Talks Daily, I spoke with Dr. Rebecca Swift, Senior Vice President of Creative at Getty Images, about a growing problem hiding in plain sight, the rise of low-quality, generic, AI-generated visuals and the quiet damage they are doing to brand credibility. Rebecca brings a rare perspective to this conversation, leading a global creative team responsible for shaping how visual culture is produced, analyzed, and trusted at scale. We explore the idea of AI "sloppification," a term that captures what happens when generative tools are used because they are cheap, fast, and available, rather than because they serve a clear creative purpose. Rebecca explains how the flood of mass-produced AI imagery is making brands look interchangeable, stripping visuals of meaning, craft, and originality. When everything starts to look the same, audiences stop looking altogether, or worse, stop trusting what they see. A central theme in our discussion is transparency. Research shows that the majority of consumers want to know whether an image has been altered or created using AI, and Rebecca explains why this shift matters. For the first time, audiences are actively judging content based on how it was made, not just how it looks. We talk about why some brands misread this moment, mistaking AI usage for innovation, only to face backlash when consumers feel misled or talked down to. Rebecca also unpacks the legal and ethical risks many companies overlook in the rush to adopt generative tools. From copyright exposure to the use of non-consented training data, she outlines why commercially safe AI matters, especially for enterprises that trade on trust. We discuss how Getty Images approaches AI differently, with consented datasets, creator compensation, and strict controls designed to protect both brands and the creative community. The conversation goes beyond risk and into opportunity. Rebecca makes a strong case for why authenticity, real people, and human-made imagery are becoming more valuable, not less, in an AI-saturated world. We explore why video, photography, and behind-the-scenes storytelling are regaining importance, and why audiences are drawn to evidence of craft, effort, and intent. As generative AI becomes impossible to ignore, this episode asks a harder question. Are brands using AI as a thoughtful tool to support creativity, or are they trading long-term trust for short-term convenience, and will audiences continue to forgive that choice? Useful Links Connect with Dr. Rebecca Swift on LinkedIn VisualGPS Creative Trends Follow on Instagram and LinkedIn Thanks to our sponsors, Alcor, for supporting the show.
What actually happens when a company loses control of its own voice in a world full of channels, platforms, and constant noise? In this episode of Tech Talks Daily, I sat down with Joshua Altman, founder of beltway.media, to unpack what corporate communication really means in 2026 and why it has quietly become one of the most misunderstood leadership functions inside modern organizations. Joshua describes his work as a fractional chief communications officer, a role that sits above individual campaigns, tools, or channels and focuses instead on perception, trust, and consistency across everything a company says and does. Our conversation starts by challenging the assumption that communication is something you "turn on" when a product launches or a crisis hits. Joshua explains why corporate communication is not project-based and not owned by marketing alone. It touches internal updates, investor messaging, brand signals, packaging, email, social platforms, and even the tools teams choose to use every day. If it communicates with internal or external audiences and shapes how the company is perceived, it belongs in the communications function. When that function is missing or fragmented, confusion and noise tend to fill the gap. We also explored why communication has arguably become harder, not easier, despite the explosion of collaboration tools. Email was meant to simplify work, then Slack was meant to replace email, and now AI assistants are transcribing every meeting and surfacing more content than anyone can realistically process. Joshua makes a strong case for simplicity, clarity, and focus, arguing that organizations need to pick channels intentionally and use them well rather than spreading messages everywhere and hoping something lands. Technology naturally plays a big role in the discussion. From the shift away from tape-based media and physical workflows to the accessibility of live global collaboration and affordable computing power, Joshua reflects on how dramatically the workplace has changed since he started his career in video news production. He also shares a grounded view on AI, where it adds real value in speeding up research and reducing busywork, and where human judgment and storytelling still matter most. Toward the end of the conversation, we get into ROI, a question every leader eventually asks. Joshua offers a practical way to think about it, starting with the simple fact that founders, operators, and technical leaders get time back when they no longer have to manage communications themselves. From there, alignment, clarity, and consistency compound over time, even if the impact is not always visible in a single metric. As organizations look ahead and try to make sense of AI, platform shifts, and ever-shorter attention spans, are we investing enough thought into how our companies actually communicate, or are we still mistaking volume for clarity? Useful Links Connect with Joshua Altman Learn more about beltway.media Thanks to our sponsors, Alcor, for supporting the show.
What does it actually take to build trust with developers when your product sits quietly inside thousands of other products, often invisible to the people using it every day? In this episode of Tech Talks Daily, I sat down with Ondřej Chrastina, Developer Relations at CKEditor, to unpack a career shaped by hands-on experience, curiosity, and a deep respect for developer time. Ondřej's story starts in QA and software testing, moves through development and platform work, and eventually lands in developer relations. What makes his perspective compelling is that none of these roles felt disconnected. Each one sharpened his understanding of real developer friction, the kind you only notice when you have lived with a product day in and day out. We talked about what changes when you move from monolithic platforms to API-first services, and why developer relations looks very different depending on whether your audience is an application developer, a data engineer, or an integrator working under tight delivery pressure. Ondřej shared how his time at Kentico, Kontent.ai, and Ataccama shaped his approach to tooling, documentation, and examples. For him, theory rarely lands. Showing something that works, even in a small or imperfect way, tends to earn attention and respect far faster. At CKEditor, that thinking becomes even more interesting. The editor is everywhere, yet rarely recognized. It lives inside SaaS platforms, internal tools, CRMs, and content systems, quietly doing its job. We explored how developer experience matters even more when the product itself fades into the background, and why long-term maintenance, support, and predictability often outweigh short-term feature excitement. Ondřej also explained why building instead of buying an editor is rarely as simple as teams expect, especially when standards, security, and future updates enter the picture. We also got into the human side of developer relations. Balancing credibility with business goals, staying useful rather than loud, and acting as a bridge between engineering, product, marketing, and the outside world. Ondřej was refreshingly honest about the role ego can play, and why staying close to real usage is the fastest way to keep yourself grounded. If you care about developer experience, internal tooling, or how invisible infrastructure shapes modern software, this conversation offers plenty to reflect on. What have you seen work, or fail, when it comes to earning developer trust, and where do you think developer relations still get misunderstood? Useful Links Connect with Ondrej Chrastina Learn more about CK Editor Thanks to our sponsors, Alcor, for supporting the show.
What if your AI systems could explain why something will happen before it does, rather than simply reacting after the damage is done? In this episode of Tech Talks Daily, I sat down with Zubair Magrey, co-founder and CEO of Ergodic AI, to unpack a different way of thinking about artificial intelligence, one that focuses on understanding how complex systems actually behave. Zubair's journey begins in aerospace engineering at Rolls-Royce, moves through a decade of large-scale enterprise AI programs at Accenture, and ultimately leads to building Ergodic, a company developing what he describes as world models for enterprise decision making. World models are often mentioned in research circles, but rarely explained in a way that business leaders can connect to real operational decisions. In our conversation, Zubair breaks that gap down clearly. Instead of training AI to spot patterns in past data and assume the future will look the same, world-model AI focuses on cause and effect. It builds a structured representation of how an organization works, how different parts interact, and how actions ripple through the system over time. The result is an AI approach that can simulate outcomes, test scenarios, and help teams understand the consequences of decisions before they commit to them. We explored why this matters so much as organizations move toward agentic AI, where systems are expected to recommend or even execute actions autonomously. Without an understanding of constraints, dependencies, and system dynamics, those agents can easily produce confident but unrealistic recommendations. Zubair explains how Ergodic uses ideas from physics and system theory to respect real-world limits like capacity, time, inventory, and causality, and why ignoring those principles leads to fragile AI deployments that struggle under pressure. The conversation also gets practical. Zubair shares how world-model simulations are being used in supply chain, manufacturing, automotive, and CPG environments to detect early risks, anticipate disruptions, and evaluate trade-offs before problems cascade across customers and regions. We discuss why waiting for perfect data often stalls AI adoption, how Ergodic's data-agnostic approach works alongside existing systems, and what it takes to deliver ROI that teams actually trust and use. Finally, we step back and look at the organizational side of AI adoption. As AI becomes embedded into daily workflows, cultural change, experimentation, and trust become just as important as models and metrics. Zubair offers a grounded view on how leaders can prepare their teams for faster cycles of change without losing confidence or control. As enterprises look ahead to a future shaped by autonomous systems and real-time decision making, are we building AI that truly understands how our organizations work, or are we still guessing based on the past, and what would it take to change that? Useful Links Connect with Zubair Magrey Learn more about Ergodic AI Thanks to our sponsors, Alcor, for supporting the show.
If artificial intelligence is meant to earn trust anywhere, should banking be the place where it proves itself first? In this episode of Tech Talks Daily, I'm joined by Ravi Nemalikanti, Chief Product and Technology Officer at Abrigo, for a grounded conversation about what responsible AI actually looks like when the consequences are real. Abrigo works with more than 2,500 banks and credit unions across the United States, many of them community institutions where every decision affects local businesses, families, and entire regional economies. That reality makes this discussion feel refreshingly practical rather than theoretical. We talk about why financial services has become one of the toughest proving grounds for AI, and why that is a good thing. Ravi explains why concepts like transparency, explainability, and auditability are not optional add-ons in banking, but table stakes. From fraud detection and lending decisions to compliance and portfolio risk, every model has to stand up to regulatory, ethical, and operational scrutiny. A false positive or an opaque decision is not just a technical issue, it can damage trust, disrupt livelihoods, and undermine confidence in an institution. A big focus of the conversation is how AI assistants are already changing day-to-day banking work, largely behind the scenes. Rather than flashy chatbots, Ravi describes assistants embedded directly into lending, anti-money laundering, and compliance workflows. These systems summarize complex documents, surface anomalies, and create consistent narratives that free human experts to focus on judgment, context, and relationships. What surprised me most was how often customers value consistency and clarity over raw speed or automation. We also explore what other industries can learn from community banks, particularly their modular, measured approach to adoption. With limited budgets and decades-old core systems, these institutions innovate cautiously, prioritizing low-risk, high-return use cases and strong governance from day one. Ravi shares why explainable AI must speak the language of bankers and regulators, not data scientists, and why showing the "why" behind a decision is essential to keeping humans firmly in control. As we look toward 2026 and beyond, the conversation turns to where AI can genuinely support better outcomes in lending and credit risk without sidelining human judgment. Ravi is clear that fully autonomous decisioning still has a long way to go in high-stakes environments, and that the future is far more about partnership than replacement. AI can surface patterns, speed up insight, and flag risks early, but people remain essential for context, empathy, and final accountability. If you're trying to cut through the AI noise and understand how trust, governance, and real-world impact intersect, this episode offers a rare look at how responsible AI is actually being built and deployed today. And once you've listened, I'd love to hear your perspective. Where do you see AI earning trust, and where does it still have something to prove?
What really happens after the startup advice runs out and founders are left facing decisions no pitch deck ever prepared them for? In this episode of Tech Talks Daily, I sit down with Vijay Rajendran, a founder, venture capitalist, UC Berkeley instructor, and author of The Funding Framework, to discuss the realities of company building that rarely appear on social feeds or investor blogs. Vijay has spent years working alongside founders at the sharpest end of growth, from early fundraising conversations through to the personal and leadership shifts that scaling demands. That experience shapes a conversation that feels refreshingly honest, thoughtful, and grounded in lived reality. We explore why building something people actually want sounds simple in theory yet proves brutally difficult in practice. Vijay explains how timing, learning velocity, and the willingness to adapt often matter more than stubborn vision, and why many founders misunderstand what momentum really looks like. From there, the discussion moves into investor relationships, not as transactional events, but as long-term partnerships that require founders to shift their mindset from defense to evaluation. The emotional and psychological dynamics of fundraising come into focus, especially the moments when founders underestimate how much power they actually have in shaping those relationships. A big part of this conversation centers on leadership identity. Vijay breaks down the messy transition from being the "chief everything officer" to becoming a true chief executive, and why the most overlooked stage in that journey is learning how to enable others. We talk about the point where founders become the bottleneck, often without realizing it, and why this tends to surface as teams grow and decisions start happening outside the founder's direct line of sight. The plateau many companies hit around scale becomes less mysterious when viewed through this lens. We also challenge some of the most popular startup advice circulating online today, particularly around fundraising volume, pitching styles, and the idea that persistence alone guarantees outcomes. Vijay shares why treating fundraising like enterprise sales, focusing on alignment over volume, and listening more than pitching often leads to better results. The conversation closes with practical reflections on personal growth, co-founder dynamics, and how leaders can regain clarity during periods of pressure without stepping away from responsibility. If you are building a company, leading a team, or questioning whether you are evolving as fast as your business demands, this episode will likely hit closer to home than you expect. And once you've listened, I'd love to hear what resonated most with you and the leadership questions you're still sitting with after the conversation. Useful Links Connect with Vijay Rajendran The Funding Framework Startup Pitch Deck Thanks to our sponsors, Alcor, for supporting the show.
What happens when decades of clinical research experience collide with a regulatory environment that is changing faster than ever? In this episode of Tech Talks Daily, I sat down with Dr Werner Engelbrecht, Senior Director of Strategy at Veeva Systems, for a wide-ranging conversation that explores how life sciences organizations across Europe are responding to mounting regulatory pressure, rapid advances in AI, and growing expectations around transparency and patient trust. Werner brings a rare perspective to this discussion. His career spans clinical research, pharmaceutical development, health authorities, and technology strategy, shaped by firsthand experience as an investigator and later as a senior industry leader. That background gives him a grounded, practical view of what is actually changing inside pharma and biotech organizations, beyond the headlines around AI Acts, data rules, and compliance frameworks. We talk openly about why regulations such as GDPR, the EU AI Act, and ACT-EU are creating real pressure for organizations that are already operating in highly controlled environments. But rather than framing compliance as a blocker, Werner explains why this moment presents an opening for better collaboration, stronger data foundations, and more consistent ways of working across internal teams. According to him, the real challenge is less about technology and more about how companies manage data quality, align processes, and break down silos that slow everything from trial setup to regulatory response times. Our conversation also digs into where AI is genuinely making progress today in life sciences and where caution still matters. Werner shares why drug discovery and non-patient-facing use cases are moving faster, while areas like trial execution and real-world patient data still demand stronger evidence, cleaner datasets, and clearer governance. His perspective cuts through hype and focuses on what is realistic in an industry where patient safety remains the defining responsibility. We also explore patient recruitment, decentralized trials, and the growing complexity of diseases themselves. Advances in genomics and diagnostics are reshaping how trials are designed, which in turn raises questions about access to electronic health records, data harmonization across Europe, and the safeguards regulators care about most. Werner connects these dots in a way that highlights both the operational strain and the long-term upside. Toward the end, we look ahead at emerging technologies such as blockchain and connected devices, and how they could strengthen data integrity, monitoring, and regulatory confidence over time. It is a thoughtful discussion that reflects both optimism and realism, rooted in lived experience rather than theory. If you are working anywhere near clinical research, regulatory affairs, or digital transformation in life sciences, this episode offers a clear-eyed view of where the industry stands today and where it may be heading next. How should organizations turn regulation into momentum instead of resistance, and what will it take to earn lasting trust from patients, partners, and regulators alike? Useful Links Connect with Dr Werner Engelbrecht Learn more about Veeva Systems Veeva Summit Europe and Veeva Summit USA Follow on LinkedIn Thanks to our sponsors, Alcor, for supporting the show.
What happens when an industry that has barely changed for generations suddenly finds itself at the center of one of the biggest shifts in modern work? In this episode of Tech Talks Daily, I'm joined by Kate Hayward, UK Managing Director at Xero, for a conversation about how accounting is being reshaped by technology, education, regulation, and changing expectations from clients and talent alike. Kate describes this moment as the largest reorganization of human capital in the history of the profession, and as we talk, it becomes clear why that claim is gaining traction. We explore how AI is shifting accountants away from pure number processing and toward higher-value advisory work, without stripping away the deep financial understanding the role still demands. Kate shares why so many practices are reporting higher revenues and profits, and how technology is acting as a catalyst for rethinking long-standing workflows rather than simply speeding up broken ones. We also dig into research showing that pairing AI with financial education strengthens analytical thinking while leaving core calculation skills intact, a useful counterpoint to the more dramatic headlines about machines replacing people. Our conversation moves into the practical reality of how firms are using tools like ChatGPT today, from scenario planning to preparing for difficult client conversations, while also discussing where caution still matters, particularly around data security and core financial workflows. Kate also explains how government initiatives such as Making Tax Digital and the digitization of HMRC are changing client expectations and deepening the relationship between accountants and the businesses they support. We also spend time on the future of the profession, including how hiring strategies are evolving, why problem-solving and communication skills are becoming just as valuable as technical knowledge, and why private equity interest in accounting is accelerating digital adoption across the sector. Kate rounds things out by sharing how Xero is thinking about product design in 2026, what users can expect next, and why keeping the human side of the profession front and center still matters. So as accounting moves further into an AI-assisted, digitally native future, how do firms balance efficiency, trust, identity, and long-term relevance, and what lessons can other industries take from this moment of change? Useful Links Follow Kate Hayward on LinkedIn Accounting and Bookkeeping Industry Report Xero Website Follow on LinkedIn, Facebook, X, YouTube, Instagram
What does sales leadership actually look like once the AI experimentation phase is over and real results are the only thing that matters? In this episode of Tech Talks Daily, I sit down with Jason Ambrose, CEO of the Iconiq-backed AI data platform People.ai, to unpack why the era of pilots, proofs of concept, and AI theater is fading fast. Jason brings a grounded view from the front lines of enterprise sales, where leaders are no longer impressed by clever demos. They want measurable outcomes, better forecasts, and fewer hours lost to CRM busywork. This conversation goes straight to the tension many organizations are feeling right now, the gap between AI potential and AI performance. We talk openly about why sales teams are drowning in activity data yet still starved of answers. Emails, meetings, call transcripts, dashboards, and dashboards about dashboards have created fatigue rather than clarity. Jason explains how turning raw activity into crisp, trusted answers changes how sellers operate day to day, pulling them back into customer conversations instead of internal reporting loops. The discussion challenges the long-held assumption that better selling comes from more fields, more workflows, and more dashboards, arguing instead that AI should absorb the complexity so humans can focus on judgment, timing, and relationships. The conversation also explores how tools like ChatGPT and Claude are quietly dismantling the walls enterprise software spent years building. Sales leaders increasingly want answers delivered in natural language rather than another system to log into, and Jason shares why this shift is creating tension for legacy platforms built around walled gardens and locked-down APIs. We look at what this means for architecture decisions, why openness is becoming a strategic advantage, and how customers are rethinking who they trust to sit at the center of their agentic strategies. Drawing on work with companies such as AMD, Verizon, NVIDIA, and Okta, Jason shares what top-performing revenue organizations have in common. Rather than chasing sameness, scripts, and averages, they lean into curiosity, variation, and context. They look for where growth behaves differently by market, segment, or product, and they use AI to surface those differences instead of flattening them away. It is a subtle shift, but one with big implications for how sales teams compete. We also look ahead to 2026 and beyond, including how pricing models may evolve as token consumption becomes a unit of value rather than seats or licenses. Jason explains why this shift could catch enterprises off guard, what governance will matter, and why AI costs may soon feel as visible as cloud spend did a decade ago. The episode closes with a thoughtful challenge to one of the biggest myths in the industry, the belief that selling itself can be fully automated, and why the last mile of persuasion, trust, and judgment remains deeply human. If you are responsible for revenue, sales operations, or AI strategy, this episode offers a clear-eyed look at what changes when AI stops being an experiment and starts being held accountable, so what assumptions about sales and AI are you still holding onto, and are they helping or quietly holding you back? Useful Links Follow Jason Ambrose on LinkedIn Learn more about people.ai Follow on LinkedIn Thanks to our sponsors, Alcor, for supporting the show.
In this episode of Tech Talks Daily, I sat down with Keith Zubchevich, CEO of Conviva, to unpack one of the most honest analogies I have heard about today's AI rollout. Keith compares modern AI agents to toddlers being sent out to get a job, full of promise, curious, and energetic, yet still lacking the judgment and context required to operate safely in the real world. It is a simple metaphor, but it captures a tension many leaders are feeling as generative AI matures in theory while so many deployments stumble in practice. As ChatGPT approaches its third birthday, the narrative suggests that GenAI has grown up. Yet Keith argues that this sense of maturity is misleading, especially inside enterprises chasing measurable returns. He explains why so many pilots stall or quietly disappoint, not because the models lack intelligence, but because organizations often release agents without clear outcomes, real-time oversight, or an understanding of how customers actually experience those interactions. The result is AI that appears to function well internally while quietly frustrating users or failing to complete the job it was meant to do. We also dig into the now-infamous Chevrolet chatbot incident, in which a dealership bot agreed to sell a $76,000 vehicle for one dollar, using it as a lens to examine what happens when agents are left without boundaries or supervision. Keith makes a strong case that the next chapter of enterprise AI will not be defined by ever-larger models, but by visibility. He shares why observing behavior, patterns, sentiment, and efficiency in real time matters more than chasing raw accuracy, especially once AI moves from internal workflows into customer-facing roles. This conversation will resonate with anyone under pressure to scale AI quickly while worrying about brand risk, accountability, and trust. Keith offers a grounded view of what effective AI "parenting" looks like inside modern organizations, and why measuring the customer experience remains the most reliable signal of whether an AI system is actually growing up or simply creating new problems at speed. As leaders rush to put agents into production, are we truly ready to guide them, or are we sending toddlers into the workforce and hoping for the best? Useful Links Connect with Keith Zubchevich Learn more about Conviva Chevrolet Dealer Chatbot Agrees to Sell Tahoe for $1 Thanks to our sponsors, Alcor, for supporting the show.
In this episode of Tech Talks Daily, I sit down with Imran Nino Eškić and Boštjan Kirm from HyperBUNKER to unpack a problem many organisations only discover in their darkest hour. Backups are supposed to be the safety net, yet in real ransomware incidents, they are often the first thing attackers dismantle. Speaking with two people who cut their teeth in data recovery labs across 50,000 real cases gave me a very different perspective on what resilience actually looks like. They explain why so many so-called "air-gapped" or "immutable" backups still depend on identities, APIs, and network pathways that can be abused. We talk through how modern attackers patiently map environments for weeks before neutralising recovery systems, and why that shift makes true physical isolation more relevant than ever. What struck me most was how calmly they described failure scenarios that would keep most leaders awake at night. The heart of the conversation centres on HyperBUNKER's offline vault and its spaceship-style double airlock design. Data enters through a one-way hardware channel, the network door closes, and only then is information moved into a completely cold vault with no address, no credentials, and no remote access. I also reflect on seeing the black box in person at the IT Press Tour in Athens and why it feels less like a gadget and more like a last-resort lifeline. We finish by talking about how businesses should decide what truly belongs in that protected 10 percent of data, and why this is as much a leadership decision as an IT one. If everything vanished tomorrow, what would your company need to breathe again, and would it actually survive? Useful Links Connect with Imran Nino Eškić Connect With Boštjan Kirm Learn More about HyperBUNKER Learn more about the IT Press Tour Thanks to our sponsors, Alcor, for supporting the show.
What happens when the AI race stops being about size and starts being about sense? In this episode of Tech Talks Daily, I sit down with Wade Myers from MythWorx, a company operating quietly while questioning some of the loudest assumptions in artificial intelligence right now. We recorded this conversation during the noise of CES week, when headlines were full of bigger models, more parameters, and ever-growing GPU demand. But instead of chasing scale, this discussion goes in the opposite direction and asks whether brute force intelligence is already running out of road. Wade brings a perspective shaped by years as both a founder and investor, and he explains why today's large language models are starting to collide with real-world limits around power, cost, latency, and sustainability. We talk openly about the hidden tax of GPUs, how adding more compute often feels like piling complexity onto already fragile systems, and why that approach looks increasingly shaky for enterprises dealing with technical debt, energy constraints, and long deployment cycles. What makes this conversation especially interesting is MythWorx's belief that the next phase of AI will look less like prediction engines and more like reasoning systems. Wade walks through how their architecture is modeled closer to human learning, where intelligence is learned once and applied many times, rather than dragging around the full weight of the internet to answer every question. We explore why deterministic answers, audit trails, and explainability matter far more in areas like finance, law, medicine, and defense than clever-sounding responses. There is also a grounded enterprise angle here. We talk about why so many organizations feel uneasy about sending proprietary data into public AI clouds, how private AI deployments are becoming a board-level concern, and why most companies cannot justify building GPU-heavy data centers just to experiment. Wade draws parallels to the early internet and smartphone app eras, reminding us that the playful phase often comes before the practical one, and that disappointment is often a signal of maturation, not failure. We finish by looking ahead. Edge AI, small-footprint models, and architectures that reward efficiency over excess are all on the horizon, and Wade shares what MythWorx is building next, from faster model training to offline AI that can run on devices without constant connectivity. It is a conversation about restraint, reasoning, and realism at a time when hype often crowds out reflection. So if bigger models are no longer the finish line, what should business and technology leaders actually be paying attention to next, and are we ready to rethink what intelligence really means? Useful Links Connect with Wade Myers Learn More About MythWorx Thanks to our sponsors, Alcor, for supporting the show.
What happens when engineering teams can finally see the business impact of every technical decision they make? In this episode of Tech Talks Daily, I sat down with Chris Cooney, Director of Advocacy at Coralogix, to unpack why observability is no longer just an engineering concern, but a strategic lever for the entire business. Chris joined me fresh from AWS re:Invent, where he had been challenging a long-standing assumption that technical signals like CPU usage, error rates, and logs belong only in engineering silos. Instead, he argues that these signals, when enriched and interpreted correctly, can tell a much more powerful story about revenue loss, customer experience, and competitive advantage. We explored Coralogix's Observability Maturity Model, a four-stage framework that takes organizations from basic telemetry collection through to business-level decision making. Chris shared how many teams stall at measuring engineering health, without ever connecting that data to customer impact or financial outcomes. The conversation became especially tangible when he explained how a single failed checkout log can be enriched with product and pricing data to reveal a bug costing thousands of dollars per day. That shift, from "fix this tech debt" to "fix this issue draining revenue," fundamentally changes how priorities are set across teams. Chris also introduced Oli, Coralogix's AI observability agent, and explained why it is designed as an agent rather than a simple assistant. We talked about how Oli can autonomously investigate issues across logs, metrics, traces, alerts, and dashboards, allowing anyone in the organization to ask questions in plain English and receive actionable insights. From diagnosing a complex SQL injection attempt to surfacing downstream customer impact, Oli represents a move toward democratizing observability data far beyond engineering teams. Throughout our discussion, a clear theme emerged. When technical health is directly tied to business health, observability stops being seen as a cost center and starts becoming a competitive advantage. By giving autonomous engineering teams visibility into real-world impact, organizations can make faster, better decisions, foster innovation, and avoid the blind spots that have cost even well-known brands millions. So if observability still feels like a necessary expense rather than a growth driver in your organization, what would change if every technical signal could be translated into clear business impact, and who would make better decisions if they could finally see that connection? Useful Links Connect with Chris Cooney Learn more about Coralogix Follow on LinkedIn Thanks to our sponsors, Alcor, for supporting the show.
What happens when the systems we rely on every day start producing more signals than humans can realistically process, and how do IT leaders decide what actually matters anymore? In this episode of Tech Talks Daily, I sit down with Garth Fort, Chief Product Officer at LogicMonitor, to unpack why traditional monitoring models are reaching their limits and why AI-native observability is starting to feel less like a future idea and more like a present-day requirement. Modern enterprise IT now spans legacy data centers, multiple public clouds, and thousands of services layered on top. That complexity has quietly broken many of the tools teams still depend on, leaving operators buried under alerts rather than empowered by insight. Garth brings a rare perspective shaped by senior roles at Microsoft, AWS, and Splunk, along with firsthand experience running observability at hyperscale. We talk about how alert fatigue has become one of the biggest hidden drains on IT teams, including real-world examples where organizations were dealing with tens of thousands of alerts every week and still missing the root cause. This is where LogicMonitor's AI agent, Edwin AI, enters the picture, not as a replacement for human judgment, but as a way to correlate noise into something usable and give operators their time and confidence back. A big part of our conversation centers on trust. AI agents behave very differently from deterministic automation, and that difference matters when systems are responsible for critical services like healthcare supply chains, airline operations, or global hospitality platforms. Garth explains why governance, auditability, and role-based controls will decide how quickly enterprises allow AI agents to move from advisory roles into more autonomous ones. We also explore why experimentation with AI has become one of the lowest-risk moves leaders can make right now, and why the teams who treat learning as a daily habit tend to outperform the rest. We finish by zooming out to the bigger picture, where observability stops being a technical function and starts becoming a way to understand business health itself. From mapping infrastructure to real customer experiences, to reshaping how IT budgets are justified in boardrooms, this conversation offers a grounded look at where enterprise operations are heading next. So, as AI agents become more embedded in the systems that run our businesses, how comfortable are you with handing them the keys, and what would it take for you to truly trust them? Useful Links Connect with Garth Fort Learn more about LogicMonitor Check out the LogicMonitor blog Follow on LinkedIn, X, Facebook, and YouTube. Alcor is the Sponsor of Tech Talks Network
Was 2025 the year the games industry finally stopped talking about direct-to-consumer and started treating it as the default way to do business? In this episode of Tech Talks Daily, I'm joined by Chris Hewish, President at Xsolla, for a wide-ranging conversation about how regulation, platform pressure, and shifting player expectations have pushed D2C from the margins into the mainstream. As court rulings, the Digital Markets Act, and high-profile battles like Epic versus Apple continue to reshape the industry, developers are gaining more leverage, but also more responsibility, over how they distribute, monetize, and support their games. Chris breaks down why D2C is no longer just about avoiding app store fees. It is about owning player relationships, controlling data, and building sustainable businesses in a more consolidated market. We explore how tools like Xsolla's Unity SDK are lowering the barrier for studios to sell directly across mobile, PC, and the web, while handling the operational complexity that often scares teams away from global payments, compliance, and fraud management. We also dig into what is changing inside live service games. From offer walls that help monetize the vast majority of players who never spend, to LiveOps tools that simplify campaigns and retention strategies, Chris shares real examples of how studios are seeing meaningful lifts in revenue and engagement. The conversation moves beyond technology into mindset, especially for indie and mid-sized teams learning that treating a game as a long-term business needs to start far earlier than launch day. Here in 2026, we talk about account-centric economies, hybrid monetization models running in parallel, and the growing role of community-driven commerce inspired by platforms like Roblox and Fortnite. There is optimism in these shifts, but also understandable anxiety as studios adjust to managing more of the stack themselves. Chris offers a grounded perspective on how that balance is likely to play out. So if games are becoming hobbies, platforms are opening up, and developers finally have the tools to meet players wherever they are, what does the next phase of direct-to-consumer really look like, and are studios ready to fully own that relationship? Useful Links Connect with Chris Hewish on LinkedIn Learn more about Xsolla Follow on LinkedIn, Twitter, and Facebook Thanks to our sponsors, Alcor, for supporting the show.
What if airlines stopped thinking in terms of seats and schedules and started designing for the entire journey instead? In this episode of Tech Talks Daily, I'm joined by Somit Goyal, CEO of IBS Software, to talk about how travel technology is being rebuilt at its foundations. Since we last spoke, AI has moved from experimentation into everyday operations, and that shift is forcing airlines to rethink everything from retailing and loyalty to disruption management and customer trust. Somit shares why AI can no longer sit on the edge of systems as a feature, and why it now has to be embedded directly into how decisions are made across the business. We discuss the growing gap between legacy airline technology and rapidly rising traveler expectations, and why this tension has become a defining moment for the industry. For Somit, travel tech is no longer back-office infrastructure. It is becoming the operating system for customer experience and revenue. That shift changes how airlines think about retailing, moving away from selling flights toward curating outcomes across a multi-day journey that includes partners, servicing, and real-time operational awareness. The conversation also explores why agility now matters more than scale, and how airlines are approaching this transformation without breaking what already works. A major part of this episode focuses on IBS Software's deep co-innovation partnership with Amazon Web Services. Somit explains why this is far more than a cloud hosting arrangement, covering joint R&D, shared roadmaps, and AI labs designed to help airlines build modern retailing capabilities faster. We also unpack what "AI first" really means in practice, how intelligence is reshaping offer creation, pricing, order management, and disruption handling, and why responsible AI must be treated as a product rather than a legal safeguard. We also spend time on loyalty, one of the industry's most stubborn challenges. Somit outlines why converging reservations and loyalty systems is such a powerful unlock, how it enables real-time personalization instead of generic segmentation, and why loyalty should evolve from a points ledger into an experience engine that delivers value before, during, and after a trip. As airlines race toward 2026, the big question is no longer whether transformation will happen, but who will move with enough clarity and trust to earn long-term loyalty. In a world where AI knows more about travelers than ever before, how do airlines use that intelligence to create better outcomes without crossing the line, and are they ready to rethink the journey from end to end? Useful Links Connect with Somit Goyal Learn more about IBS Software Tech Talks Daily is Sponsored by Denodo
In this episode of Tech Talks Daily, I'm joined by Kiren Sekar, Chief Product Officer at Samsara, to unpack how AI is finally showing up where it matters most, in the frontline operations that keep the global economy moving. From logistics and construction to manufacturing and field services, these industries represent a huge share of global GDP, yet for years they have been left behind by modern software. Kiren explains why that gap existed, and why the timing is finally right to close it. We talk about Samsara's full-stack approach that blends hardware, software, and AI to turn trillions of real-world data points into decisions people can actually act on. Kiren shares how customers are using this intelligence to prevent accidents, cut fuel waste, digitize paper-based workflows, and scale expert judgment across thousands of vehicles and job sites. The conversation goes deep into real examples, including how large enterprises like Home Depot have dramatically reduced accident rates and improved asset utilization by making safety and efficiency part of everyday operations rather than afterthoughts. A big part of our discussion focuses on trust. When AI enters physical operations, concerns around monitoring and surveillance surface quickly. Kiren walks through how adoption succeeds only when technology is introduced with care, transparency, and a clear focus on protecting workers. From proving driver innocence during incidents to rewarding positive behavior and using AI as a virtual safety coach, we explore why change management matters just as much as the technology itself. We also look at the limits of automation and why human judgment still plays a central role. Kiren explains how Samsara's AI acts as a force multiplier for experienced frontline experts, capturing their hard-won knowledge and scaling it across an entire workforce rather than trying to replace it. As AI moves from pilots into daily decision-making at scale, this episode offers a grounded view of what responsible, high-impact deployment actually looks like. As AI continues to reshape frontline work, making jobs safer, easier, and more engaging, how should product leaders balance innovation with responsibility when their systems start influencing real-world safety and productivity every single day? Useful Links Connect with Kiren Sekar Learn more about Samsara Tech Talks Daily is Sponsored by Denodo
What happens when a podcast stops being something you listen to and becomes something you physically show up for? In this episode of Tech Talks Daily, I wanted to explore a different kind of tech story, one rooted in community, endurance, and real human connection. I was joined by Sam Huntington, a Business Development Officer at Wells Fargo, who has quietly built something special at the intersection of technology, entrepreneurship, and cycling through his podcast and community project, Hill Climbers. Sam's story starts far from a studio. It begins on a bike, moving through Philadelphia, Los Angeles, and eventually Austin, where chance conversations on group rides turned into friendships, business relationships, and eventually a podcast. We talk about why endurance sports and startups share the same mental terrain, the moments when you want to quit, and how those moments often define the outcome. Sam explains how Hill Climbers evolved from recorded conversations into weekly rides, live podcast tapings, and in-person events that bring founders, investors, and operators together without name badges or pitch decks. We also dig into what makes Austin such a magnetic place for founders right now, and why community building outside Silicon Valley feels different when it is built around shared effort rather than curated networks. Sam shares lessons learned from taking a podcast offline, including the early weeks when hardly anyone showed up, the temptation to stop, and the persistence required to build momentum. There is a refreshing honesty in how he describes growing something slowly, resisting shortcuts, and letting trust compound over time. This conversation is also a reminder that meaningful networks are rarely built through algorithms. They are built through shared experiences, discomfort, friendly competition, and showing up consistently when no one is watching. Whether you are a founder, an investor, or someone trying to build a community of your own, there is something grounding in hearing how relationships form when work is not the opening line. As more of our professional lives move online, are we losing the spaces where real connection happens, and what would it look like for you to build community around a shared passion rather than a job title? Useful Links Connect with Sam Huntington Hill Climbers Website Instagram Tech Talks Daily is Sponsored by Denodo
What happens when the push for smarter crypto wallets runs headfirst into the reality that everything on a public blockchain can be seen by anyone? In this episode of Tech Talks Daily, I wanted to take listeners who may not live and breathe Web3 every day and introduce them to a problem that is becoming harder to ignore. As Ethereum evolves and smart accounts unlock new wallet features, the surface area for risk grows at the same time. That is where privacy-first Layer 2 solutions enter the conversation, not as an abstract idea, but as a practical response to very real security and usability concerns. My guest is Joe Andrews, Co-founder and President at Aztec Labs. Joe brings an engineering mindset shaped by years of building consumer-facing applications and deep privacy infrastructure. Together, we unpack why privacy and security can no longer be treated as separate topics, especially as Ethereum rolls out more advanced account features. Joe explains how privacy-first Layer 2 networks act as an added line of defense, reducing exposure to threats that come from fully transparent balances, identities, and transaction histories. We also talk about what Aztec actually is, often described as the Private World Computer, and why that framing matters. Joe shares learnings from Aztec's public testnet launch earlier this year, what surprised the team once thousands of nodes were running in the wild, and how the community has stepped up in ways the company itself could not have planned for. There is also an honest discussion about the UK crypto scene, the missed opportunities, and the quiet resilience of builders who continue to ship despite regulatory uncertainty. As we look ahead, Joe outlines what comes next as Aztec moves closer to enabling private transactions on a decentralized network, and why the next phase is less about theory and more about real people using privacy in everyday interactions. If you are curious about how privacy-first Layer 2 solutions fit into Ethereum's roadmap, or why privacy might be the missing piece that finally makes smart wallets usable at scale, does this conversation change how you think about the future of crypto, and where would you like to see this technology go next? Useful Links Connect with Joe Andrews Learn more about Aztec Labs Tech Talks Daily is Sponsored by Denodo
How is HR changing when AI, economic pressure, and rising employee expectations all collide at once? In this episode of Tech Talks Daily, I'm joined by Simon Noble, CEO of Cezanne HR, to unpack how the role of HR is evolving from a traditional support function into something far more closely tied to business performance. Simon shares why HR is increasingly being judged on outcomes like retention, capability building, and readiness for change, rather than policies, processes, or cost control. Yet despite that shift, many HR leaders still find themselves pulled back into a compliance-first mindset as budgets tighten, skills shortages persist, and new legislation raises the stakes. We explore how AI fits into this picture without stripping the humanity out of HR. Simon is clear that AI should automate administration and free up time, rather than replace human judgment or empathy. Used well, it removes friction from onboarding, compliance, and everyday queries, giving HR the space to focus on culture, leadership, and long-term talent development. Used poorly, it risks adding noise without value. The difference, he argues, comes down to data. Without clean, consolidated data, AI simply cannot deliver meaningful insight, no matter how advanced the technology appears. The conversation also looks inward at Cezanne HR's own growth journey. Simon describes rapid expansion as chaos with better branding, and explains why maintaining culture, trust, and clarity becomes harder, yet more important, as teams scale. From onboarding new employees to ensuring a consistent customer experience, the same principles apply internally as they do for customers using HR technology. We also touch on trust, transparency, and the growing focus on areas like pay transparency, data responsibility, and employee confidence in how their information is handled. As expectations continue to rise, HR's credibility increasingly rests on accuracy, fairness, and the ability to turn insight into action. As HR steps closer to the center of business strategy, what mindset shift is needed to move from reacting to change toward actively shaping it, and how prepared is your organization to make that leap? Useful Links Connect with Simon Noble Learn more about Cezanne HR Tech Talks Daily is Sponsored by Denodo
What does it really mean when AI moves from answering questions to making decisions that affect real people, real money, and real outcomes? In this episode of Tech Talks Daily, I'm joined by Joe Kim, CEO of Druid AI, for a grounded conversation about why agentic AI is becoming the focus for enterprises that have moved beyond experimentation. After years of hype around generative tools, many organizations are now facing a tougher question. Can AI be trusted to take action inside core business processes, and can it do so with the accuracy, security, and accountability that enterprises expect? Joe brings a rare perspective shaped by decades leading large-scale enterprise software companies, including his time as CEO of Sumo Logic. He explains why Druid AI deliberately avoids positioning itself as a generative AI company, and instead focuses on systems that can make decisions, trigger workflows, and complete tasks inside regulated, high-stakes environments. We unpack why accuracy thresholds matter when AI touches billing, healthcare, admissions, or compliance, and why security and governance are no longer secondary concerns once AI is allowed to act. We also talk about scale and proof. Druid AI now supports over 120 million conversations every month, a figure that keeps climbing as enterprises move agentic systems into production. Joe shares how those conversations translate into measurable business outcomes, from operational efficiency to revenue growth, and why many AI initiatives fail to reach this stage. His "5 percent club" philosophy cuts through the noise, focusing on the small number of use cases that actually deliver return while most others stall in pilots. The conversation also explores why higher education has become a surprising pressure point for AI adoption, how outdated systems contribute to student churn, and how conversational agents can remove friction at moments that decide whether someone enrolls, stays, or leaves. We close by looking ahead at Druid AI's next chapter, including new platform capabilities designed to make building and deploying agents faster without sacrificing control. As more enterprises demand results instead of promises, are we ready to judge AI by the decisions it makes and the outcomes it delivers, and what should that accountability look like in your organization? I'd love to hear your thoughts. Where do you see agentic AI delivering real value today, and where do you think the risks still outweigh the rewards? Useful Links Connect with Joe Kim, CEO of Druid AI. Druid AI Website Tech Talks Daily is Sponsored by Denodo
The world is building data centers, identity rails, and AI policy stacks at a speed that makes 2026 feel closer than it is. In this conversation, Rajesh Natarajan, Global Chief Technology Officer at Gorilla Technology Group, explains what it takes to engineer platforms that remain reliable, secure, and sovereign-ready for decades, especially when infrastructure must operate outside the safety net of constant cloud connectivity. Raj talks about quantum-safe networking as a current risk, not a future headline. Adversaries are capturing encrypted traffic today, betting on decrypting it later, and retrofitting quantum-safe architecture into national platforms mid-lifecycle is an expensive mistake waiting to happen. He also highlights the regional nature of AI infrastructure: Southeast Asia prioritizing sovereignty, speed, and efficiency; Europe leaning on regulation and telemetry; and the U.S. betting on raw cluster scale and throughput. Sustainability at Gorilla isn't a marketing headline; it's an engineering requirement. If a system can't prove its environmental impact using telemetry like workload-level PUE, it isn't labeled sustainable internally. Gorilla applies the same rigor to IoT insight per unit of energy, device lifecycles, and edge-level intelligence placement, avoiding data centralization unless there is a clear operational justification. This episode offers marketers, founders, and technology leaders a rare chance to understand what national-scale resilience looks like when it is platform alignment, not the technology, that breaks first. The principle that decisions must be reversible, explicit, and measurable is the foundation of how Gorilla is designing systems that can evolve without forcing rushed compromises when uncertainty becomes reality. Useful links: Connect with Dr Rajesh Natarajan Gorilla website Tech Talks Daily is Sponsored by Denodo
What makes live events feel personal in an age of algorithms making the calls? That's the tension marketers are living in right now. Ben Kruger, Chief Marketing Officer at Event Tickets Center, sits at the center of this shift. He has spent 20 years shaping server-side systems and performance marketing strategies, including a decade of persistence chasing a role at Google before landing a position in New York just as eCommerce demand went into overdrive during the pandemic. Now, at ETC, he runs marketing for more than 130,000 live events simultaneously. It's a scale that forces automation to step in. The industry moves in real time, resellers update prices by the hour, artists trend globally overnight, weather can shift demand before a stadium gate opens. Ben credits Google's AI tools and internal models as a competitive advantage, but he also talks openly about the risks. The early excitement of automation gave way to skepticism after seeing unaligned promises from new platforms and unpredictable campaign behavior in tools that remove control from brands. There's a well-rounded argument to explore here. On one side, AI enables a small team to do the work of thousands, writing content at a volume no human team could deliver alone. On the other, removing risk from campaigns, or removing channel-level choices from advertisers, can reduce trust and increase low-quality creative output. Advantage+ tools that make placement decisions automatically, without brand input, might scale reach, but can reduce clarity of intent and control of outcomes. Some CMOs see that as smart acceleration, others see it as an overcorrection that creates opacity and dependency on platforms optimizing for their own incentives. And somewhere in the middle is the opportunity. ETC's approach shows a future where repetition in rapid testing generates sharper insight, where lean teams move faster, where humans stay in the loop to validate outcomes, and where creativity stays grounded in audience understanding, economics, and transparency. Marketers listening to Ben will hear someone who wants experimentation, control, clarity, and long-term audience trust to exist side by side. Useful links: Connect with Ben Kruger on LinkedIn Event Tickets Center website Tech Talks Daily is Sponsored by Denodo
What does it really take to build software that can grow from a single line of code to millions of users a day without losing its soul along the way? In this episode of Tech Talks Daily, I'm joined by Alex Gusev, CTO at Uploadcare, for a wide-ranging conversation about scale, simplicity, and why leadership in technology starts with people long before it gets anywhere near frameworks or tooling. Alex has spent two decades building server-side systems, often inside small teams, and has seen firsthand how early decisions echo through a company's future, for better and for worse. We talk openly about the realities of early-stage engineering, including why shipping imperfect code is often the only way to survive, how technical debt should be taken on deliberately rather than by accident, and why knowing when to slow down and clean things up is one of the hardest leadership calls to make. Alex shares his belief that simplicity is the strongest ally in high-load environments, and how over-engineering, often inspired by copying the playbooks of much larger companies, creates fragility instead of strength. Our conversation also digs into his continued faith in Ruby on Rails, a framework that divides opinion but still plays a central role in many successful products. Alex reframes the debate around speed, focusing less on raw performance metrics and more on how quickly teams can build, adapt, and maintain systems over time. It's a practical view shaped by real-world trade-offs rather than theory. Beyond code, we explore why Alex puts people ahead of technology and process, and how creating psychological safety inside teams leads to better decisions, lower churn, and smarter use of limited resources. He also reflects on personal experiences that reshaped his approach to leadership, the growing tech scene in Kyrgyzstan, and why he finds as much inspiration in Dostoevsky as he does in engineering blogs. If you've ever questioned whether modern engineering culture has overcomplicated itself, or wondered how to balance ambition with sustainability as your product grows, this episode offers plenty to think about. Where do you think your own team is adding complexity without realizing it, and what might change if you started with people first? Useful Links Connect with Alex Gusev Learn more about Uploadcare Tech Talks Daily is sponsored by Denodo
What does it actually mean to prove who we are online in 2025, and why does it still feel so fragile? In this episode of Tech Talks Daily, I sit down with Alex Laurie from Ping Identity to talk about why digital identity has reached a real moment of tension in the UK. As more of our lives move online, from banking and healthcare to social platforms and government services, the gap between how identity should work and how it actually works keeps widening. Alex shares why the UK now feels out of step with other regions when it comes to online identity schemes, and how heavy reliance on centralized models is slowing adoption while weakening public trust. We spend time unpacking the practical consequences of today's verification systems. Age checks are regularly bypassed, fraud continues to grow, and users are often asked to hand over far more personal data than feels reasonable just to access everyday services. At the same time, public pressure around online safety is rising fast. That creates an uncomfortable push and pull between tighter controls and the expectation of fast, low-friction access. Alex makes the case that this tension exists because the underlying approach is flawed, and that proving something simple, like age, should never require revealing an entire digital identity. From there, the conversation turns to decentralized identity and why it is gaining momentum globally. Instead of placing sensitive data into large centralized databases, decentralized models allow individuals to hold and present verified credentials on their own terms. For me, this reframes digital identity as a right rather than a feature, and opens the door to systems that feel more privacy-aware, inclusive, and resilient. We also explore how agentic AI could play a role here, helping people manage, present, and protect their credentials intelligently without adding complexity or new risks. With fresh consumer research from Ping Identity informing the discussion, this episode looks closely at where trust, privacy, and identity are heading next, and why the choices made now will shape how we prove who we are online for years to come. Are we finally ready to rethink digital identity, and if so, what does that mean for all of us?
What does it really take to build a fintech company that quietly fixes one of the most frustrating problems SMEs face every day? In this episode of Tech Talks Daily, I'm joined by Pierre-Antoine Dusoulier, the Founder and CEO of iBanFirst, for a candid conversation about entrepreneurship, timing, and why cross-border payments have remained broken for so long. Pierre-Antoine's story begins in London, where his early career as an FX trader felt like a compromise at the time, yet quietly gave him a front-row seat to inefficiencies most people accepted as normal. That experience would later shape two companies and a very clear point of view on how money should move across borders. Pierre-Antoine walks through his first venture, Combeast.com, one of France's earliest FX brokerages for retail investors, and what he learned from selling it to Saxo Bank and staying on to run Western European operations. That chapter matters, because it exposed the gap between how sophisticated FX markets really are and how poorly SMEs are served when FX and payments are bundled together inside traditional banks. Out of that frustration, iBanFirst was born in 2016 with a simple idea: treat cross-border payments as a specialist discipline, not a side feature. Today, iBanFirst serves more than 10,000 clients across Europe and processes over €2 billion in transactions every month. We dig into why growth has continued while many fintechs have slowed, from a product designed to be used daily, to proactive sales, to a new generation of CFOs and CEOs who expect the same clarity and speed at work that they get from consumer fintech tools. Pierre-Antoine explains how real-time FX rates, payment tracking using SWIFT GPI, and multi-entity account management change the day-to-day reality for SMEs trading internationally. We also talk about Brexit, and how being rooted in continental Europe created an unexpected opening. Pierre-Antoine shares why expanding into the UK, including the acquisition of Cornhill, made sense, and why London's payments ecosystem still stands apart in scale and depth. Along the way, he is refreshingly open about the heavy investment required in compliance, trust, and regulation, and why nearly a third of iBanFirst's team focuses on operations and oversight. Looking ahead, Pierre-Antoine lays out a bold vision for the SME payments market, predicting a future where specialists replace banks in much the same way fintech reshaped consumer money transfers. As cross-border trade grows and currency volatility becomes a daily concern, his perspective raises an interesting question for anyone running an international business today: if specialists already exist, why keep relying on systems that were never designed for how SMEs actually operate? Useful Links: Connect with Pierre-Antoine Dusoulier Learn more about iBanFirst Tech Talks Daily is sponsored by Denodo
What does it really mean to support developers in a world where the tools are getting smarter, the expectations are higher, and the human side of technology is easier to forget? In this episode of Tech Talks Daily, I sit down with Frédéric Harper, Senior Developer Relations Manager at TinyMCE, for a thoughtful conversation about what it takes to serve developer communities with credibility, empathy, and long-term intent. With more than twenty years in the tech industry, Fred's career spans hands-on web development, open source advocacy, and senior DevRel roles at companies including Microsoft, Mozilla, Fitbit, and npm. That journey gives him a rare perspective on how developer needs have evolved, and where companies still get it wrong. We explore how starting out as a full-time developer shaped Fred's approach to advocacy, grounding his work in real-world frustration rather than abstract messaging. He reflects on earning trust during challenging periods, including advocating for open source during an era when some communities viewed large tech companies with deep skepticism. Along the way, Fred shares how studying Buddhist philosophy has influenced how he shows up for developers today, helping him keep ego in check and focus on service rather than status. The conversation also lifts the curtain on rich text editing, a capability most users take for granted but one that hides deep technical complexity. Fred explains why building a modern editing experience involves far more than formatting text, touching on collaboration, accessibility, security, and the growing expectations around AI-assisted workflows. It is a reminder that some of the most familiar parts of the web are also among the hardest to build well. We then turn to developer relations itself, a role that is often misunderstood or measured through the wrong lens. Fred shares why DevRel should never be treated as a short-term sales function, how trust and community take time, and why authenticity matters more than volume. From open source responsibility to personal branding for developers, including lessons from his book published with Apress, Fred offers grounded advice on visibility, communication, and staying human in an increasingly automated industry. As the episode closes, we reflect on burnout, boundaries, and inclusion, and why healthier communities lead to better products. For anyone building developer tools, managing technical communities, or trying to grow a career without losing themselves in the process, this conversation leaves a simple question hanging in the air: how do we build technology that supports people without forgetting the people behind the code? Useful Links Connect with Frédéric Harper Learn More About TinyMCE Tech Talks Daily is sponsored by Denodo
What happens when artificial intelligence moves faster than our ability to understand, verify, and trust it? In this episode of Tech Talks Daily, I sit down with Alexander Feick from eSentire, a cybersecurity veteran who has spent more than a decade working at the intersection of complex systems, risk, and emerging technology. Alex leads eSentire Labs, where his team explores how new technologies can be secured before they quietly become load-bearing parts of modern business infrastructure. Our conversation centers on a timely and uncomfortable reality. AI is being embedded into workflows, products, and decision-making systems at a pace most organizations are not prepared for. Alex explains why many AI failures are not caused by malicious models or dramatic breaches, but by broken ownership, invisible dependencies, and a lack of ongoing verification. These are not technical glitches. They are organizational blind spots that quietly compound risk over time. We also explore the ideas behind Alex's recently published book on trust and AI, which he made freely available due to the speed at which real-world AI failures were already overtaking theory. From prompt injection and model drift to the dangers of treating non-deterministic systems as if they were predictable software, Alex shares why generative AI requires a fundamentally different security mindset. He draws a clear distinction between chatbot AI and embedded AI, and explains the moment where trust quietly shifts away from humans and into systems that cannot take accountability. The discussion goes deeper into what trust actually means in an AI-driven organization. Alex argues that trust must be earned, measured, and monitored continuously, not assumed after a successful pilot. Verification becomes the real work, not generation, and leaders who fail to recognize that shift risk scaling errors faster than they can contain them. We also talk about why he turned his book into an AI advisor, what that experiment revealed about the limits of models, and why human responsibility cannot be automated away. This is a grounded, practical conversation for leaders, technologists, and anyone deploying AI inside real organizations. If AI is becoming part of how decisions get made where you work, how confident are you that someone truly owns the outcome? Useful Links Connect with Alexander Feick Learn more about eSentire Tech Talks Daily is sponsored by Denodo
What happens when the future of money stops being about speculation and starts being about people, ownership, and agency? In this episode of Tech Talks Daily, I'm joined by Dr. Friederike Ernst, co-founder of Gnosis, to unpack a conversation that goes far beyond crypto price cycles or technical hype. This is a thoughtful discussion about where blockchain is heading and, just as importantly, where it could go wrong if we are not paying attention. Friederike has spent more than a decade building foundational infrastructure for the Ethereum ecosystem, from smart wallets to decentralized exchanges and blockchain networks that quietly power large parts of Web3. But as she explains, the industry is now standing at a fork in the road. One path leads to blockchain becoming a silent backend upgrade for banks and incumbents, improving efficiency while keeping power centralized. The other path is far more ambitious, using blockchain to return ownership, control, and financial agency to everyday people. We talk about why financial infrastructure, despite working reasonably well for many of us in Europe, remains deeply inefficient, expensive, and exclusionary at a global level. A major theme of this episode is usability. Friederike is clear that technology only matters if it improves real lives. She explains why early blockchain products asked too much of users and how that is now changing, with experiences that feel as simple as using a neobank or debit card while preserving true ownership under the hood. The goal is not to make everyone a crypto expert, but to make financial tools that work seamlessly while remaining genuinely user-owned. We also explore the darker possibilities. Like any powerful technology, blockchain can be used to empower or to control. Friederike does not shy away from the risks of surveillance, social scoring, and misuse, and she argues that the real battle ahead is cultural, not technical. Values like privacy, free expression, and personal agency need to be defended openly, or the technology will be shaped without public consent. As we look toward 2026, this conversation offers a refreshing reminder that the future of money is still being written. The question is whether it will be owned by communities or quietly absorbed by the same institutions we already rely on. After listening to this episode, where do you think that future should land, and what choices are you willing to make to influence it? Useful Links Connect With Dr. Friederike Ernst Learn More about Gnosis Tech Talks Daily is sponsored by Denodo
In this episode of Tech Talks Daily, I'm joined by Stuart Thompson, President of ABB's Electrification Service Division, to explore the intersection of industrial sustainability, energy security, and cutting-edge technology. As industries face growing energy demands and climate targets, Stuart explains how companies can modernize their infrastructure to drive efficiency, reduce carbon footprints, and stay ahead of the energy curve. We start by addressing the urgent need for industries to rethink their energy and carbon strategies. Stuart highlights the significant role of construction and manufacturing in global energy-related emissions, stressing that many businesses are still behind on their 2030 sustainability targets. We dive into the emerging shift from capital expenditure (CapEx) to operational expenditure (OpEx) models, such as predictive maintenance, to maximize value from existing assets. On asset modernization, Stuart explains how upgrading intelligent components like switchgear within existing infrastructure can dramatically improve efficiency and reduce carbon without the need for costly, full-scale replacements. He also shares examples, including Intel's semiconductor upgrades and Jadal Steel's success in Oman, demonstrating how targeted upgrades can meet sustainability goals while boosting productivity. We then explore how AI and augmented reality (AR) are transforming service delivery and operational intelligence. Stuart discusses how AI-powered predictive maintenance helps companies anticipate failures and optimize energy management, while AR facilitates remote assistance for faster issue resolution. He also touches on how these technologies contribute to energy savings and carbon reduction by automating service reports and enabling real-time visibility into asset performance. One of the key innovations Stuart highlights is ABB's Battery Energy Storage as a Service (BESSaaS), a solution designed to solve the "energy trilemma" of security, cost, and sustainability. With on-site battery storage and AI-driven energy trading, businesses can bypass slow grid connections, ensure energy security, and even turn their energy storage into a profit center. This model is already making waves in industries ranging from data centers to manufacturing. As we look to the future, Stuart reveals ABB's upcoming investment in asset management technology, set to be announced globally in early December 2025. This exciting move will have a significant impact on major customers like the London Underground and Saudi Electric Commission, further cementing ABB's role as a leader in energy innovation. Don't miss this episode, where we discuss the latest trends in industrial sustainability, energy security, and technology's pivotal role in shaping a greener, more efficient future. Useful Links Connect with Stuart on LinkedIn Learn more about ABB Tech Talks Daily is sponsored by Denodo
In this episode of Tech Talks Daily, I sit down with Yuyu Zhang to unpack a shift that many developers can feel but struggle to articulate. Yuyu's journey spans academic research at Georgia Tech, building recommendation systems that power TikTok and Douyin at global scale, and leading the Seed-Coder project at ByteDance, which reached state-of-the-art performance among open source code models earlier this year. Today, he is part of Codeck, where the focus has moved beyond AI assistance toward autonomous coding agents that can plan, execute, and verify real engineering work. Our conversation begins with a simple but revealing observation. Most AI coding tools still behave like smarter autocomplete. They help you type faster, but they do not own the work. Yuyu explains why that distinction matters, especially for teams dealing with complex systems, tight deadlines, and constant interruptions. Autonomy, in his view, is not about replacing engineers. It is about giving them back their flow. We explore Verdent, Codeck's autonomous coding agent, and Verdent Deck, the desktop environment designed to coordinate multiple agents in parallel. Instead of one AI reacting line by line inside an editor, these agents operate at the task level. They plan work with the developer upfront, execute independently in safe environments, and validate their output before handing anything back. The result feels less like using a tool and more like managing a small engineering team. Yuyu shares how parallel agents change both speed and predictability. One agent can implement a feature, another can write tests, and another can investigate logs, all without stepping on each other. Just as important, he walks through the safeguards that keep humans in control. Explicit planning, permission boundaries, sandboxed execution, and clear, reviewable diffs are all designed to address the very real concerns engineering leaders have about letting autonomous systems near production code. The discussion also turns personal. Having worked on some of the highest-scale systems in the world, Yuyu reflects on why developers lose momentum. It is rarely about raw ability. It is about constant context switching. His goal with Verdent is to preserve mental focus by offloading interruptions and letting engineers return to work with clarity rather than cognitive fatigue. We close by looking ahead. The definition of a "good developer" is changing, just as it has many times before. AI is not ending programming. It is reshaping it, pushing human creativity, judgment, and design thinking to the foreground while machines handle the repetitive churn. If autonomous coding agents are becoming colleagues rather than helpers, how comfortable are you with that future, and what would you want to stay firmly in human hands?
How do you move faster with AI and cloud innovation without losing control of security along the way? Recorded live from the show floor at AWS re:Invent in Las Vegas, this episode of Tech Talks Daily features a timely conversation with Kimberly Dickson, Worldwide Go-To-Market Lead for AWS Detection and Response Services. As organizations race to adopt agentic AI, modernize applications, and manage sprawling cloud environments, Kimberly offers a grounded look at why security must still sit at the center of every decision. Kimberly explains how her role bridges two worlds at AWS. On one side are customers dealing with prioritization fatigue, fragmented security signals, and growing pressure to do more with fewer resources. On the other are the internal service teams building products like Amazon GuardDuty, Amazon Inspector, and AWS Security Hub. Her job is to connect those realities, shaping services based on what customers actually struggle with day to day. That perspective sets the tone for a conversation focused less on hype and more on practical outcomes. We unpack how AWS thinks about security culture at scale, from infrastructure and encryption through to threat intelligence gathered across Amazon's global footprint. Kimberly shares how AWS uses large-scale honeypots to observe attacker behavior in real time, feeding that intelligence back into detection services while also working with governments and industry partners to take down active threats. It is a reminder that cloud security is no longer just about protecting individual workloads, but about contributing to a safer internet overall. The conversation also dives into new announcements from re:Invent, including the launch of the enhanced AWS Security Hub, extended threat detection for EC2 and EKS, and the emergence of security-focused AI agents. Kimberly explains how these tools shift security teams away from manual investigation and toward faster, higher-confidence decisions by correlating risks across vulnerabilities, identity, network exposure, and sensitive data. The goal is clear visibility, clearer priorities, and remediation that fits naturally into existing workflows. We also explore how AWS approaches security in multi-cloud and hybrid environments, why foundational design principles still matter in an AI-driven world, and how open standards are helping normalize security data across vendors. Kimberly's reflections on re:Invent itself bring a human close to the episode, highlighting the pride and responsibility felt by teams building systems that millions of organizations depend on. As AI adoption accelerates and security teams are asked to keep pace without slowing innovation, what would it take for your organization to move faster while still trusting the foundations you are building on?
How do you make sense of an industry that is changing at a pace few predicted, especially with SIGNAL London still fresh in our minds and Twilio unveiling the next stage of its vision for customer engagement? That question sits at the heart of today's conversation with Peter Bell, VP of Marketing for EMEA at Twilio, who joined me to unpack what the past year has taught both companies and consumers about AI's role in shaping modern experiences. Peter begins by grounding everything in a single, striking shift. Only a year ago, AI-powered search barely registered in global traffic. Today it accounts for around a fifth of all searches. That leap signals a broader behavioral shift as consumers move instinctively toward conversational interfaces, which, in turn, leaves brands with a clear message. The clock has moved on. AI is no longer a nice-to-have. It is a direct response to how people now choose to discover, question, and buy. Our conversation turns to the gap between customer expectations and the experiences they receive. Peter discusses why brands often struggle to integrate channels, data, and AI coherently. He explains how first-party data has become the anchor for any serious AI strategy, why generic public models cannot solve brand-specific tasks, and why the most successful teams start with simple, tightly scoped problems. A password reset may not sound glamorous, yet it is the kind of focused use case that teaches teams how to govern data, automate safely, and build confidence in the process. We also spend time on branded calling, RCS, and the evolution of voice. Peter breaks down what modern messaging now looks like and why trust sits at the center of every interaction. His explanation of ConversationRelay shows why natural voice exchanges finally feel within reach after years of frustration with rigid IVR systems. The thread running through all of this is clear. Consumers want speed and clarity, but they want reassurance too, and brands need to honor both sides of that equation. Later in the conversation, Peter makes one of the episode's most compelling points. Brand visibility has become harder, not easier, because much of the early research now occurs within AI tools. Buyers form opinions long before they speak with a sales rep. That shift explains why so many B2B companies are returning to high-impact brand channels, whether that is F1 sponsorships or other standout moments that keep them in the initial consideration set. We close with the topic that Peter believes will define the next stage of enterprise AI: the Model Context Protocol (MCP). It has emerged as a quiet breakthrough, enabling LLMs to access data across CRM systems, files, and other software through a standard protocol. This removes one of the biggest blockers in AI projects: the practical challenge of connecting disparate data to a model built for a specific purpose. As Peter puts it, MCP gives companies a realistic way to build the special-purpose models that deliver reliable ROI. It is a wide-ranging conversation shaped by SIGNAL London's announcements, the evolving customer journey, and a year in which AI moved from curiosity to expectation. I would love to know what part stood out most to you. Are you seeing the same shifts Peter describes in your own business, and how are you preparing for the year ahead? Useful Links Interact with the Inside the Conversational AI Revolution report. Learn more about the SIGNAL event Connect with Peter Bell, VP of Marketing for EMEA at Twilio. Tech Talks Daily is sponsored by Denodo
Did you ever stop and wonder how many hours you lose each week hunting for files, tabs, links, or half-written ideas scattered across your apps? It is a familiar frustration, and it sits at the center of today's fast-tracked conversation with Dropbox VP of Engineering, Josh Clemm. Josh has spent two decades building products shaped around scale, personalisation, and clarity, and he brings that mix of experience to Dropbox's push into AI and knowledge management. In this episode, Josh shares stories from his time at LinkedIn and Uber, including the surprising Krispy Kreme promotion that took down Uber Eats across the globe and triggered a major rethink of architecture and resiliency. That experience shaped his belief that chaos often teaches the most. It also sets the stage for why he sees AI fluency as a leadership requirement rather than a trend. You will hear how Dropbox is approaching internal experimentation, why context rot and work slop are real problems inside companies, and why the empty chat box often creates more anxiety than opportunity. Josh walks through the thinking behind Dropbox Dash, a standalone AI-powered knowledge layer that connects all of your cloud apps, understands their content, and turns search into something sharper and faster. He explains why context-aware AI is the next leap, how Dash builds knowledge graphs across apps, and why the future of AI might look less like single-player workflows and more like tools that sit inside the flow of teamwork. It is a wide-ranging conversation that moves from engineering history to the practical steps behind building AI products that feel useful rather than overwhelming. So here is the question that sits underneath everything Josh shared. What would your day look like if your information finally made sense without you having to chase it? Tech Talks Daily is Sponsored By Denodo. To learn more, visit denodo.com
How do you guide a workforce through the fastest shift in technology most of us have seen in our careers? That question shaped my conversation with David Martin from BCG, who works at the intersection of talent, culture, and AI. He joined me from New York, with Amelia listening in, and quickly painted a clear picture of what is really happening inside global enterprises right now. We started with the widening split between AI-fluent teams and those stuck in endless pilots. David explained why the organizations getting results are the ones doing fewer things with far greater ambition. Many others scatter energy across small use cases, save minutes instead of hours, and never reach a scale where value becomes visible. Training surfaced early as one of the biggest gaps. Not surface-level workshops, but the deeper hands-on learning that helps people change how they work. David described why frontline teams lag behind, why engineers still miss major capabilities, and how leadership behaviour dramatically affects adoption. Curiosity and communication play a bigger role than most expect. We explored the move from isolated AI experiments to real workflow transformation. David shared examples from engineering, customer service, and operations where companies are finally seeing measurable results. He also explained why agents remain underused, with hesitation, data quality, and unfamiliarity still slowing progress. Shadow AI added another layer, with half of workers already using tools outside corporate systems. The conversation returned often to people. David outlined BCG's 10-20-70 rule, showing why technology is never the main bottleneck. Culture, roles, and process make or break outcomes. Leaders who provide clarity and a sense of direction see faster adoption. Those who remain hesitant create uncertainty that spreads across teams almost instantly. As we looked toward 2026, David shared cautious optimism. He sees huge potential in areas like healthcare and sustainability, along with a wave of workflow redesign that will reshape daily work. His own learning habits are simple, from podcasts to regular reading, and driven by a desire to set a strong example for his children as they grow into a world shaped by AI. If you want a grounded view of where AI is genuinely delivering change, this conversation offers rare clarity. What resonates with you most from David's perspective, and how will you approach your own learning in the year ahead? I would love to hear your thoughts. Tech Talks Daily is Sponsored By Denodo. To learn more, visit denodo.com
Did you know that when many people hear "Orange," they still ask if it involves SIM cards? That was the perfect place to begin my conversation with Sahem Azzam, President for IMEA and Inner Asia at Orange Business. Once we cleared that up, it opened the door to a much richer story about what enterprise innovation looks like across one of the fastest-moving regions on the planet. Sahem joined me from Dubai, a city that has become a living case study for what happens when a region refuses to think small. As we compared notes from Gitex Global, it became clear that what is happening across the Middle East is not a short burst of enthusiasm. It is a deliberate long-term shift driven by young populations, bold government ambition, and a willingness to adopt new technologies before anyone else. Sahem explained how this appetite for speed is shaping the region's digital transformation and how Orange Business is supporting it through cloud, connectivity, cybersecurity, digital integration, and large-scale smart city programmes. He shared practical stories that peeled back the curtain on cognitive city design, energy optimisation, and the pressure on enterprises to simplify sprawling hybrid IT environments. What stood out was how often the conversation returned to value. Better user experiences, lower costs, and new revenue paths. Everything Orange Business builds must deliver one of those outcomes. Sahem talked through platformization, why unified infrastructure matters, and how enterprises can reduce complexity in an age where cloud, security, networking, and AI all collide at once. We also discussed the growing focus on responsible AI and the shared need for transparency. Sahem spoke about data ownership, trusted models, and the careful guardrails that must sit behind every AI deployment. The rise in cyber threats is making this more important than ever, and he offered a candid look at how Orange Cyberdefense approaches modern security through an integrated view of infrastructure, operations, and risk. What gave this conversation a personal edge was Sahem's final reflection on learning. After years at Stanford, London Business School, and Harvard, he still sees human experience as the most valuable teacher. Listening to people, sharing problems, comparing perspectives. Events like Gitex remind him that optimism is contagious and that the future of the region will be shaped by collaboration as much as technology. If you want a grounded view of digital transformation from someone living it every day, this conversation is a rare window into both the opportunities and the tension behind innovation at scale. Have you seen the same momentum in your own region, and how do you stay ahead of the pace of change? I would love to hear your thoughts. Tech Talks Daily is Sponsored By Denodo. To learn more, visit denodo.com/aws
Have you ever wondered how an industry known for delays and uncertainty suddenly starts operating with the pace of a tech company? That thought stayed with me as I spoke with Eppie Vojt, the Chief Digital and AI Officer at West Shore Home. His team is bringing applied AI into home remodeling in a way that feels practical, grounded, and surprisingly human. Eppie explains how a strong data foundation allowed them to introduce agentic systems without the usual chaos. Those systems now handle scheduling, permitting, forecasting, and communication in the background. The result is a level of certainty that customers rarely experience in remodeling. When someone signs a project, they already know the installation date. Hours of operational work happen silently, and that alone changes the entire experience. We also talk about the culture that made this possible. Instead of forcing new tools onto teams, leadership encouraged small experiments and curiosity. That simple move flipped the mood internally. Departments began approaching Eppie with ideas rather than waiting to be pushed. The rollout was gradual, giving people time to shift into more valuable work without fear or disruption. Looking ahead, Eppie sees huge potential in letting customers start their journey in different ways. Tools like photogrammetry and digital twins could help people get early pricing guidance without a full in-home visit. It reflects a bigger change across physical industries as AI becomes something that quietly supports accuracy, safety, and convenience. If you care about real AI adoption rather than hype, this one offers a clear view into what works. I'd love to hear what stood out to you after listening. Useful Links Connect with Eppie Vojt on LinkedIn Learn more about West Shore in this video Tech Talks Daily is Sponsored by NordLayer: Get the exclusive Black Friday offer: 28% off NordLayer yearly plans with the coupon code: techdaily-28. Valid until December 10th, 2025. Try it risk-free with a 14-day money-back guarantee.