Fed up with tech hype? Looking for a tech podcast where you can learn from tech leaders and startup stories about how technology is transforming businesses and reshaping industries? In this daily tech podcast, Neil interviews tech leaders, CEOs, entrepreneurs, futurists, technologists, thought lead…
The Tech Blog Writer Podcast is a must-listen for anyone interested in the intersection of technology and various industries. Hosted by Neil Hughes, this podcast features interviews with a wide range of guests, including visionary entrepreneurs and industry experts. Neil has a remarkable talent for breaking down complex topics into easily understandable discussions, making them accessible to listeners from all backgrounds. One of the best aspects of this podcast is the diversity of its guests, who come from different industries and share their cutting-edge technology solutions. This makes it a great source of inspiration and knowledge for staying up to date with the latest advancements in tech.
The worst aspect of The Tech Blog Writer Podcast is that sometimes the discussions can feel a bit rushed due to the time constraints of each episode. With so many interesting guests and topics to cover, it would be great if there was more time for in-depth conversations. Additionally, while Neil does an excellent job at selecting diverse guests, occasionally it would be beneficial to have more representation from underrepresented communities in tech.
In conclusion, The Tech Blog Writer Podcast is an excellent resource for those looking to stay informed about the latest tech advancements while learning from visionary entrepreneurs across various industries. Neil's ability to break down complex topics and his engaging interviewing style make this podcast a valuable source of inspiration and knowledge. Despite some minor flaws, it remains a must-listen for anyone interested in staying up to date with cutting-edge technology solutions and developments.

What happens when artificial intelligence starts accelerating cyberattacks faster than most organizations can test, fix, and respond? In this fast-tracked episode of Tech Talks Daily, I sat down with Sonali Shah, CEO of Cobalt, to unpack what real-world penetration testing data is revealing about the current state of enterprise security. With more than two decades in cybersecurity and a background that spans finance, engineering, product, and strategy, Sonali brings a grounded, operator-level view of where security teams are keeping up and where they are quietly falling behind. Our conversation centers on what happens when AI moves from an experiment to an attack surface. Sonali explains how threat actors are already using the same AI-enabled tools as defenders to automate reconnaissance, identify vulnerabilities, and speed up exploitation. We discuss why this is no longer theoretical, referencing findings from companies like Anthropic, including examples where models such as Claude have demonstrated both power and unpredictability. The takeaway is sobering but balanced. AI can automate a large share of the work, but human expertise still plays a defining role, both for attackers and defenders. We also dig into Cobalt's latest State of Pentesting data, including why median remediation times for serious vulnerabilities have improved while overall closure rates remain stubbornly low. Sonali breaks down why large enterprises struggle more than smaller organizations, how legacy systems slow progress, and why generative AI applications currently show some of the highest risk with some of the lowest fix rates. As more companies rush to deploy AI agents into production, this gap becomes harder to ignore. One of the strongest themes in this episode is the shift from point-in-time testing to continuous, programmatic risk reduction. 
Sonali explains what effective continuous pentesting looks like in practice, why automation alone creates noise and friction, and how human-led testing helps teams move from assumptions to evidence. We also address a persistent confidence gap, where leaders believe their security posture is strong, even when testing shows otherwise. We close by tackling one of the biggest myths in cybersecurity. Security is never finished. It is a constant process of preparation, testing, learning, and improvement. The organizations that perform best accept this reality and build security into daily operations rather than treating it as a one-off task. So as AI continues to accelerate both innovation and attacks, how confident are you that your security program is keeping pace, and what would continuous testing change inside your organization? I would love to hear your thoughts.

Useful Links
- Connect with Sonali Shah
- Learn more about Cobalt
- Check out the Cobalt Learning Center
- State of Pentesting Report

Thanks to our sponsors, Alcor, for supporting the show.

What happens when AI stops talking and starts working, and who really owns the value it creates? In this episode of Tech Talks Daily, I'm joined by Sina Yamani, founder and CEO of Action Model, for a conversation that cuts straight to one of the biggest questions hanging over the future of artificial intelligence. As AI systems learn to see screens, click buttons, and complete tasks the way humans do, power and wealth are concentrating fast. Sina argues that this shift is happening far quicker than most people realize, and that the current ownership model leaves everyday users with little say and even less upside. Sina shares the thinking behind Action Model, a community-owned approach to autonomous AI that challenges the idea that automation must sit in the hands of a few giant firms. We unpack the concept of Large Action Models, AI systems trained to perform real online workflows rather than generate text, and why this next phase of AI demands a very different kind of training data. Instead of scraping the internet in the background, Action Model invites users to contribute actively, rewarding them for helping train systems that can navigate software, dashboards, and tools just as a human worker would. We also explore ActionFi, the platform's outcome-based reward layer, and why Sina believes attention-based incentives have quietly broken trust across Web3. Rather than paying for likes or impressions, ActionFi focuses on verifying real actions across the open web, even when no APIs or integrations exist. That raises obvious questions around security and privacy. This conversation does not shy away from the uncomfortable parts. We talk openly about job displacement, the economic reality facing businesses, and why automation is unlikely to slow down. Sina argues that resisting change is futile, but shaping who benefits from it remains possible. 
He also reflects on lessons from his earlier fintech exit and how movements grow when people feel they are pushing back against an unfair system. By the end of the episode, we look ahead to a future where much of today's computer-based work disappears and ask what success and failure might look like for a community-owned AI model operating at scale. If AI is going to run more of the internet on our behalf, should the people training it have a stake in what it becomes, and would you trust an AI ecosystem owned by its users rather than a handful of billionaires?

Useful Links
- Connect with Sina Yamani on LinkedIn or X
- Learn more about the Action Model
- Follow on X
- Learn more about the Action Model browser extension
- Check out the whitelabel integration docs
- Join their Waitlist
- Join their Discord community

Thanks to our sponsors, Alcor, for supporting the show.

What does it really take to remove decades of technical debt without breaking the systems that still keep the business running? In this episode of Tech Talks Daily, I sit down with Pegasystems leaders Dan Kasun, Head of Global Partner Ecosystem, and John Higgins, Chief of Client and Partner Success, to unpack why legacy modernization has reached a breaking point, and why AI is forcing enterprises to rethink how software is designed, sold, and delivered. Our conversation goes beyond surface-level AI promises and gets into the practical reality of transformation, partner economics, and what actually delivers measurable outcomes. We explore how Pega's AI-powered Blueprint is changing the entry point to enterprise-grade workflows, turning what used to be long, expensive discovery phases into fast, collaborative design moments that business and technology teams can engage with together. Dan and John explain why the old "wrap and renew" approach to legacy systems is quietly compounding technical debt, and why reimagining workflows from the ground up is becoming essential for organizations that want to move toward agentic automation with confidence. The discussion also dives into Pega's deep collaboration with Amazon Web Services, including how tools like AWS Transform and Blueprint work together to accelerate modernization at scale. We talk candidly about the evolving role of partners, why the idea of partners as an extension of a sales force is outdated, and how marketplaces are reshaping buying, building, and operating enterprise software. Along the way, we tackle some uncomfortable truths about AI hype, technical debt, and why adding another layer of technology rarely fixes the real problem. This is an episode for anyone grappling with legacy systems, skeptical of quick-fix AI strategies, or rethinking how partner ecosystems need to operate in a world where speed, clarity, and accountability matter more than ever. 
As enterprises move toward multi-vendor, agent-driven environments, are we finally ready to retire legacy thinking along with legacy systems, or are we still finding new ways to delay the inevitable?

Useful Links
- Connect with Dan Kasun
- Connect with John Higgins
- Learn more about Pega Blueprint

Thanks to our sponsors, Alcor, for supporting the show.

What does it really take to move AI from proof-of-concept to something that delivers value at scale? In this episode of Tech Talks Daily, I'm joined by Simon Pettit, Area Vice President for the UK and Ireland at UiPath, for a grounded conversation about what is actually happening inside enterprises as AI and automation move beyond experimentation. Simon brings a refreshingly practical perspective shaped by an unconventional career path that spans the Royal Navy, nearly two decades at NetApp, and more than seven years at UiPath. We talk about why the UK and Ireland remain a strategic region for global technology adoption, how London continues to play a central role for companies expanding into Europe, and why AI momentum in the region is very real despite the broader economic noise. A big part of our discussion focuses on why so many organizations are stuck in pilot mode. Simon explains how hype, fragmented experimentation, and poor qualification of use cases often slow progress, while successful teams take a very different approach. He shares real examples of automation already delivering measurable outcomes, from long-running public sector programs to newer agent-driven workflows that are now moving into production after clear ROI validation. We also explore where the next wave of challenges is emerging. As agentic AI becomes easier for anyone to create, Simon draws a direct parallel to the early days of cloud computing and VM sprawl. Visibility, orchestration, and cost control are becoming just as important as innovation itself. Without them, organizations risk losing control of workflows, spend, and accountability as agents multiply across the business. Looking ahead, Simon outlines why AI success will depend on ecosystems rather than single platforms. Partnerships, vertical solutions, and the ability to swap technologies as the market evolves will shape how enterprises scale responsibly. 
From automation in software testing to cross-functional demand coming from HR, finance, and operations, this conversation captures where AI is delivering today and where the real work still lies. If you're trying to separate AI momentum from AI noise, this episode offers a clear, experience-led view of what it takes to turn potential into progress. What would need to change inside your organization to move from pilots to production with confidence?

Useful Links
- Learn more about Simon Pettit
- Connect with UiPath
- Follow on LinkedIn

Thanks to our sponsors, Alcor, for supporting the show.

What happens when speed, scale, and convenience start to erode trust in the images brands rely on to tell their story? In this episode of Tech Talks Daily, I spoke with Dr. Rebecca Swift, Senior Vice President of Creative at Getty Images, about a growing problem hiding in plain sight, the rise of low-quality, generic, AI-generated visuals and the quiet damage they are doing to brand credibility. Rebecca brings a rare perspective to this conversation, leading a global creative team responsible for shaping how visual culture is produced, analyzed, and trusted at scale. We explore the idea of AI "sloppification," a term that captures what happens when generative tools are used because they are cheap, fast, and available, rather than because they serve a clear creative purpose. Rebecca explains how the flood of mass-produced AI imagery is making brands look interchangeable, stripping visuals of meaning, craft, and originality. When everything starts to look the same, audiences stop looking altogether, or worse, stop trusting what they see. A central theme in our discussion is transparency. Research shows that the majority of consumers want to know whether an image has been altered or created using AI, and Rebecca explains why this shift matters. For the first time, audiences are actively judging content based on how it was made, not just how it looks. We talk about why some brands misread this moment, mistaking AI usage for innovation, only to face backlash when consumers feel misled or talked down to. Rebecca also unpacks the legal and ethical risks many companies overlook in the rush to adopt generative tools. From copyright exposure to the use of non-consented training data, she outlines why commercially safe AI matters, especially for enterprises that trade on trust. We discuss how Getty Images approaches AI differently, with consented datasets, creator compensation, and strict controls designed to protect both brands and the creative community. 
The conversation goes beyond risk and into opportunity. Rebecca makes a strong case for why authenticity, real people, and human-made imagery are becoming more valuable, not less, in an AI-saturated world. We explore why video, photography, and behind-the-scenes storytelling are regaining importance, and why audiences are drawn to evidence of craft, effort, and intent. As generative AI becomes impossible to ignore, this episode asks a harder question. Are brands using AI as a thoughtful tool to support creativity, or are they trading long-term trust for short-term convenience, and will audiences continue to forgive that choice?

Useful Links
- Connect with Dr. Rebecca Swift on LinkedIn
- VisualGPS Creative Trends
- Follow on Instagram and LinkedIn

Thanks to our sponsors, Alcor, for supporting the show.

What actually happens when a company loses control of its own voice in a world full of channels, platforms, and constant noise? In this episode of Tech Talks Daily, I sat down with Joshua Altman, founder of beltway.media, to unpack what corporate communication really means in 2026 and why it has quietly become one of the most misunderstood leadership functions inside modern organizations. Joshua describes his work as a fractional chief communications officer, a role that sits above individual campaigns, tools, or channels and focuses instead on perception, trust, and consistency across everything a company says and does. Our conversation starts by challenging the assumption that communication is something you "turn on" when a product launches or a crisis hits. Joshua explains why corporate communication is not project-based and not owned by marketing alone. It touches internal updates, investor messaging, brand signals, packaging, email, social platforms, and even the tools teams choose to use every day. If it communicates with internal or external audiences and shapes how the company is perceived, it belongs in the communications function. When that function is missing or fragmented, confusion and noise tend to fill the gap. We also explored why communication has arguably become harder, not easier, despite the explosion of collaboration tools. Email was meant to simplify work, then Slack was meant to replace email, and now AI assistants are transcribing every meeting and surfacing more content than anyone can realistically process. Joshua makes a strong case for simplicity, clarity, and focus, arguing that organizations need to pick channels intentionally and use them well rather than spreading messages everywhere and hoping something lands. Technology naturally plays a big role in the discussion. 
From the shift away from tape-based media and physical workflows to the accessibility of live global collaboration and affordable computing power, Joshua reflects on how dramatically the workplace has changed since he started his career in video news production. He also shares a grounded view on AI, where it adds real value in speeding up research and reducing busywork, and where human judgment and storytelling still matter most. Toward the end of the conversation, we get into ROI, a question every leader eventually asks. Joshua offers a practical way to think about it, starting with the simple fact that founders, operators, and technical leaders get time back when they no longer have to manage communications themselves. From there, alignment, clarity, and consistency compound over time, even if the impact is not always visible in a single metric. As organizations look ahead and try to make sense of AI, platform shifts, and ever-shorter attention spans, are we investing enough thought into how our companies actually communicate, or are we still mistaking volume for clarity?

Useful Links
- Connect with Joshua Altman
- Learn more about beltway.media

Thanks to our sponsors, Alcor, for supporting the show.

What if your AI systems could explain why something will happen before it does, rather than simply reacting after the damage is done? In this episode of Tech Talks Daily, I sat down with Zubair Magrey, co-founder and CEO of Ergodic AI, to unpack a different way of thinking about artificial intelligence, one that focuses on understanding how complex systems actually behave. Zubair's journey begins in aerospace engineering at Rolls-Royce, moves through a decade of large-scale enterprise AI programs at Accenture, and ultimately leads to building Ergodic, a company developing what he describes as world models for enterprise decision making. World models are often mentioned in research circles, but rarely explained in a way that business leaders can connect to real operational decisions. In our conversation, Zubair breaks that gap down clearly. Instead of training AI to spot patterns in past data and assume the future will look the same, world-model AI focuses on cause and effect. It builds a structured representation of how an organization works, how different parts interact, and how actions ripple through the system over time. The result is an AI approach that can simulate outcomes, test scenarios, and help teams understand the consequences of decisions before they commit to them. We explored why this matters so much as organizations move toward agentic AI, where systems are expected to recommend or even execute actions autonomously. Without an understanding of constraints, dependencies, and system dynamics, those agents can easily produce confident but unrealistic recommendations. Zubair explains how Ergodic uses ideas from physics and system theory to respect real-world limits like capacity, time, inventory, and causality, and why ignoring those principles leads to fragile AI deployments that struggle under pressure. The conversation also gets practical. 
Zubair shares how world-model simulations are being used in supply chain, manufacturing, automotive, and CPG environments to detect early risks, anticipate disruptions, and evaluate trade-offs before problems cascade across customers and regions. We discuss why waiting for perfect data often stalls AI adoption, how Ergodic's data-agnostic approach works alongside existing systems, and what it takes to deliver ROI that teams actually trust and use. Finally, we step back and look at the organizational side of AI adoption. As AI becomes embedded into daily workflows, cultural change, experimentation, and trust become just as important as models and metrics. Zubair offers a grounded view on how leaders can prepare their teams for faster cycles of change without losing confidence or control. As enterprises look ahead to a future shaped by autonomous systems and real-time decision making, are we building AI that truly understands how our organizations work, or are we still guessing based on the past, and what would it take to change that?

Useful Links
- Connect with Zubair Magrey
- Learn more about Ergodic AI

Thanks to our sponsors, Alcor, for supporting the show.

What does it actually take to build trust with developers when your product sits quietly inside thousands of other products, often invisible to the people using it every day? In this episode of Tech Talks Daily, I sat down with Ondřej Chrastina, Developer Relations at CKEditor, to unpack a career shaped by hands-on experience, curiosity, and a deep respect for developer time. Ondřej's story starts in QA and software testing, moves through development and platform work, and eventually lands in developer relations. What makes his perspective compelling is that none of these roles felt disconnected. Each one sharpened his understanding of real developer friction, the kind you only notice when you have lived with a product day in and day out. We talked about what changes when you move from monolithic platforms to API-first services, and why developer relations looks very different depending on whether your audience is an application developer, a data engineer, or an integrator working under tight delivery pressure. Ondřej shared how his time at Kentico, Kontent.ai, and Ataccama shaped his approach to tooling, documentation, and examples. For him, theory rarely lands. Showing something that works, even in a small or imperfect way, tends to earn attention and respect far faster. At CKEditor, that thinking becomes even more interesting. The editor is everywhere, yet rarely recognized. It lives inside SaaS platforms, internal tools, CRMs, and content systems, quietly doing its job. We explored how developer experience matters even more when the product itself fades into the background, and why long-term maintenance, support, and predictability often outweigh short-term feature excitement. Ondřej also explained why building instead of buying an editor is rarely as simple as teams expect, especially when standards, security, and future updates enter the picture. We also got into the human side of developer relations. 
Balancing credibility with business goals, staying useful rather than loud, and acting as a bridge between engineering, product, marketing, and the outside world. Ondřej was refreshingly honest about the role ego can play, and why staying close to real usage is the fastest way to keep yourself grounded. If you care about developer experience, internal tooling, or how invisible infrastructure shapes modern software, this conversation offers plenty to reflect on. What have you seen work, or fail, when it comes to earning developer trust, and where do you think developer relations still gets misunderstood?

Useful Links
- Connect with Ondřej Chrastina
- Learn more about CKEditor

Thanks to our sponsors, Alcor, for supporting the show.

If artificial intelligence is meant to earn trust anywhere, should banking be the place where it proves itself first? In this episode of Tech Talks Daily, I'm joined by Ravi Nemalikanti, Chief Product and Technology Officer at Abrigo, for a grounded conversation about what responsible AI actually looks like when the consequences are real. Abrigo works with more than 2,500 banks and credit unions across the United States, many of them community institutions where every decision affects local businesses, families, and entire regional economies. That reality makes this discussion feel refreshingly practical rather than theoretical. We talk about why financial services has become one of the toughest proving grounds for AI, and why that is a good thing. Ravi explains why concepts like transparency, explainability, and auditability are not optional add-ons in banking, but table stakes. From fraud detection and lending decisions to compliance and portfolio risk, every model has to stand up to regulatory, ethical, and operational scrutiny. A false positive or an opaque decision is not just a technical issue, it can damage trust, disrupt livelihoods, and undermine confidence in an institution. A big focus of the conversation is how AI assistants are already changing day-to-day banking work, largely behind the scenes. Rather than flashy chatbots, Ravi describes assistants embedded directly into lending, anti-money laundering, and compliance workflows. These systems summarize complex documents, surface anomalies, and create consistent narratives that free human experts to focus on judgment, context, and relationships. What surprised me most was how often customers value consistency and clarity over raw speed or automation. We also explore what other industries can learn from community banks, particularly their modular, measured approach to adoption. 
With limited budgets and decades-old core systems, these institutions innovate cautiously, prioritizing low-risk, high-return use cases and strong governance from day one. Ravi shares why explainable AI must speak the language of bankers and regulators, not data scientists, and why showing the "why" behind a decision is essential to keeping humans firmly in control. As we look toward 2026 and beyond, the conversation turns to where AI can genuinely support better outcomes in lending and credit risk without sidelining human judgment. Ravi is clear that fully autonomous decisioning still has a long way to go in high-stakes environments, and that the future is far more about partnership than replacement. AI can surface patterns, speed up insight, and flag risks early, but people remain essential for context, empathy, and final accountability. If you're trying to cut through the AI noise and understand how trust, governance, and real-world impact intersect, this episode offers a rare look at how responsible AI is actually being built and deployed today. And once you've listened, I'd love to hear your perspective. Where do you see AI earning trust, and where does it still have something to prove?

What really happens after the startup advice runs out and founders are left facing decisions no pitch deck ever prepared them for? In this episode of Tech Talks Daily, I sit down with Vijay Rajendran, a founder, venture capitalist, UC Berkeley instructor, and author of The Funding Framework, to discuss the realities of company building that rarely appear on social feeds or investor blogs. Vijay has spent years working alongside founders at the sharpest end of growth, from early fundraising conversations through to the personal and leadership shifts that scaling demands. That experience shapes a conversation that feels refreshingly honest, thoughtful, and grounded in lived reality. We explore why building something people actually want sounds simple in theory yet proves brutally difficult in practice. Vijay explains how timing, learning velocity, and the willingness to adapt often matter more than stubborn vision, and why many founders misunderstand what momentum really looks like. From there, the discussion moves into investor relationships, not as transactional events, but as long-term partnerships that require founders to shift their mindset from defense to evaluation. The emotional and psychological dynamics of fundraising come into focus, especially the moments when founders underestimate how much power they actually have in shaping those relationships. A big part of this conversation centers on leadership identity. Vijay breaks down the messy transition from being the "chief everything officer" to becoming a true chief executive, and why the most overlooked stage in that journey is learning how to enable others. We talk about the point where founders become the bottleneck, often without realizing it, and why this tends to surface as teams grow and decisions start happening outside the founder's direct line of sight. The plateau many companies hit around scale becomes less mysterious when viewed through this lens. 
We also challenge some of the most popular startup advice circulating online today, particularly around fundraising volume, pitching styles, and the idea that persistence alone guarantees outcomes. Vijay shares why treating fundraising like enterprise sales, focusing on alignment over volume, and listening more than pitching often leads to better results. The conversation closes with practical reflections on personal growth, co-founder dynamics, and how leaders can regain clarity during periods of pressure without stepping away from responsibility. If you are building a company, leading a team, or questioning whether you are evolving as fast as your business demands, this episode will likely hit closer to home than you expect. And once you've listened, I'd love to hear what resonated most with you and the leadership questions you're still sitting with after the conversation.

Useful Links
- Connect with Vijay Rajendran
- The Funding Framework
- Startup Pitch Deck

Thanks to our sponsors, Alcor, for supporting the show.

What happens when decades of clinical research experience collide with a regulatory environment that is changing faster than ever? In this episode of Tech Talks Daily, I sat down with Dr Werner Engelbrecht, Senior Director of Strategy at Veeva Systems, for a wide-ranging conversation that explores how life sciences organizations across Europe are responding to mounting regulatory pressure, rapid advances in AI, and growing expectations around transparency and patient trust. Werner brings a rare perspective to this discussion. His career spans clinical research, pharmaceutical development, health authorities, and technology strategy, shaped by firsthand experience as an investigator and later as a senior industry leader. That background gives him a grounded, practical view of what is actually changing inside pharma and biotech organizations, beyond the headlines around AI Acts, data rules, and compliance frameworks. We talk openly about why regulations such as GDPR, the EU AI Act, and ACT-EU are creating real pressure for organizations that are already operating in highly controlled environments. But rather than framing compliance as a blocker, Werner explains why this moment presents an opening for better collaboration, stronger data foundations, and more consistent ways of working across internal teams. According to him, the real challenge is less about technology and more about how companies manage data quality, align processes, and break down silos that slow everything from trial setup to regulatory response times. Our conversation also digs into where AI is genuinely making progress today in life sciences and where caution still matters. Werner shares why drug discovery and non-patient-facing use cases are moving faster, while areas like trial execution and real-world patient data still demand stronger evidence, cleaner datasets, and clearer governance. 
His perspective cuts through hype and focuses on what is realistic in an industry where patient safety remains the defining responsibility. We also explore patient recruitment, decentralized trials, and the growing complexity of diseases themselves. Advances in genomics and diagnostics are reshaping how trials are designed, which in turn raises questions about access to electronic health records, data harmonization across Europe, and the safeguards regulators care about most. Werner connects these dots in a way that highlights both the operational strain and the long-term upside. Toward the end, we look ahead at emerging technologies such as blockchain and connected devices, and how they could strengthen data integrity, monitoring, and regulatory confidence over time. It is a thoughtful discussion that reflects both optimism and realism, rooted in lived experience rather than theory. If you are working anywhere near clinical research, regulatory affairs, or digital transformation in life sciences, this episode offers a clear-eyed view of where the industry stands today and where it may be heading next. How should organizations turn regulation into momentum instead of resistance, and what will it take to earn lasting trust from patients, partners, and regulators alike? Useful Links Connect with Dr Werner Engelbrecht Learn more about Veeva Systems Veeva Summit Europe and Veeva Summit USA Follow on LinkedIn Thanks to our sponsors, Alcor, for supporting the show.

What happens when an industry that has barely changed for generations suddenly finds itself at the center of one of the biggest shifts in modern work? In this episode of Tech Talks Daily, I'm joined by Kate Hayward, UK Managing Director at Xero, for a conversation about how accounting is being reshaped by technology, education, regulation, and changing expectations from clients and talent alike. Kate describes this moment as the largest reorganization of human capital in the history of the profession, and as we talk, it becomes clear why that claim is gaining traction. We explore how AI is shifting accountants away from pure number processing and toward higher-value advisory work, without stripping away the deep financial understanding the role still demands. Kate shares why so many practices are reporting higher revenues and profits, and how technology is acting as a catalyst for rethinking long-standing workflows rather than simply speeding up broken ones. We also dig into research showing that pairing AI with financial education strengthens analytical thinking while leaving core calculation skills intact, a useful counterpoint to the more dramatic headlines about machines replacing people. Our conversation moves into the practical reality of how firms are using tools like ChatGPT today, from scenario planning to preparing for difficult client conversations, while also discussing where caution still matters, particularly around data security and core financial workflows. Kate also explains how government initiatives such as Making Tax Digital and the digitization of HMRC are changing client expectations and deepening the relationship between accountants and the businesses they support. 
We also spend time on the future of the profession, including how hiring strategies are evolving, why problem-solving and communication skills are becoming just as valuable as technical knowledge, and why private equity interest in accounting is accelerating digital adoption across the sector. Kate rounds things out by sharing how Xero is thinking about product design in 2026, what users can expect next, and why keeping the human side of the profession front and center still matters. So as accounting moves further into an AI-assisted, digitally native future, how do firms balance efficiency, trust, identity, and long-term relevance, and what lessons can other industries take from this moment of change? Useful Links Follow Kate Hayward on LinkedIn Accounting and Bookkeeping Industry Report Xero Website Follow on LinkedIn, Facebook, X, YouTube, Instagram

What does sales leadership actually look like once the AI experimentation phase is over and real results are the only thing that matters? In this episode of Tech Talks Daily, I sit down with Jason Ambrose, CEO of the Iconiq-backed AI data platform People.ai, to unpack why the era of pilots, proofs of concept, and AI theater is fading fast. Jason brings a grounded view from the front lines of enterprise sales, where leaders are no longer impressed by clever demos. They want measurable outcomes, better forecasts, and fewer hours lost to CRM busywork. This conversation goes straight to the tension many organizations are feeling right now: the gap between AI potential and AI performance. We talk openly about why sales teams are drowning in activity data yet still starved of answers. Emails, meetings, call transcripts, dashboards, and dashboards about dashboards have created fatigue rather than clarity. Jason explains how turning raw activity into crisp, trusted answers changes how sellers operate day to day, pulling them back into customer conversations instead of internal reporting loops. The discussion challenges the long-held assumption that better selling comes from more fields, more workflows, and more dashboards, arguing instead that AI should absorb the complexity so humans can focus on judgment, timing, and relationships. The conversation also explores how tools like ChatGPT and Claude are quietly dismantling the walls enterprise software spent years building. Sales leaders increasingly want answers delivered in natural language rather than another system to log into, and Jason shares why this shift is creating tension for legacy platforms built around walled gardens and locked-down APIs. We look at what this means for architecture decisions, why openness is becoming a strategic advantage, and how customers are rethinking who they trust to sit at the center of their agentic strategies. 
Drawing on work with companies such as AMD, Verizon, NVIDIA, and Okta, Jason shares what top-performing revenue organizations have in common. Rather than chasing sameness, scripts, and averages, they lean into curiosity, variation, and context. They look for where growth behaves differently by market, segment, or product, and they use AI to surface those differences instead of flattening them away. It is a subtle shift, but one with big implications for how sales teams compete. We also look ahead to 2026 and beyond, including how pricing models may evolve as token consumption becomes a unit of value rather than seats or licenses. Jason explains why this shift could catch enterprises off guard, what governance will matter, and why AI costs may soon feel as visible as cloud spend did a decade ago. The episode closes with a thoughtful challenge to one of the biggest myths in the industry, the belief that selling itself can be fully automated, and why the last mile of persuasion, trust, and judgment remains deeply human. If you are responsible for revenue, sales operations, or AI strategy, this episode offers a clear-eyed look at what changes when AI stops being an experiment and starts being held accountable. So, what assumptions about sales and AI are you still holding onto, and are they helping or quietly holding you back? Useful Links Follow Jason Ambrose on LinkedIn Learn more about people.ai Follow on LinkedIn Thanks to our sponsors, Alcor, for supporting the show.

In this episode of Tech Talks Daily, I sat down with Keith Zubchevich, CEO of Conviva, to unpack one of the most honest analogies I have heard about today's AI rollout. Keith compares modern AI agents to toddlers being sent out to get a job, full of promise, curious, and energetic, yet still lacking the judgment and context required to operate safely in the real world. It is a simple metaphor, but it captures a tension many leaders are feeling as generative AI matures in theory while so many deployments stumble in practice. As ChatGPT approaches its third birthday, the narrative suggests that GenAI has grown up. Yet Keith argues that this sense of maturity is misleading, especially inside enterprises chasing measurable returns. He explains why so many pilots stall or quietly disappoint, not because the models lack intelligence, but because organizations often release agents without clear outcomes, real-time oversight, or an understanding of how customers actually experience those interactions. The result is AI that appears to function well internally while quietly frustrating users or failing to complete the job it was meant to do. We also dig into the now infamous Chevrolet chatbot incident that sold a $76,000 vehicle for one dollar, using it as a lens to examine what happens when agents are left without boundaries or supervision. Keith makes a strong case that the next chapter of enterprise AI will not be defined by ever-larger models, but by visibility. He shares why observing behavior, patterns, sentiment, and efficiency in real time matters more than chasing raw accuracy, especially once AI moves from internal workflows into customer-facing roles. This conversation will resonate with anyone under pressure to scale AI quickly while worrying about brand risk, accountability, and trust. 
Keith offers a grounded view of what effective AI "parenting" looks like inside modern organizations, and why measuring the customer experience remains the most reliable signal of whether an AI system is actually growing up or simply creating new problems at speed. As leaders rush to put agents into production, are we truly ready to guide them, or are we sending toddlers into the workforce and hoping for the best? Useful Links Connect with Keith Zubchevich Learn more about Conviva Chevrolet Dealer Chatbot Agrees to Sell Tahoe for $1 Thanks to our sponsors, Alcor, for supporting the show.

In this episode of Tech Talks Daily, I sit down with Imran Nino Eškić and Boštjan Kirm from HyperBUNKER to unpack a problem many organisations only discover in their darkest hour. Backups are supposed to be the safety net, yet in real ransomware incidents, they are often the first thing attackers dismantle. Speaking with two people who cut their teeth in data recovery labs across 50,000 real cases gave me a very different perspective on what resilience actually looks like. They explain why so many so-called "air-gapped" or "immutable" backups still depend on identities, APIs, and network pathways that can be abused. We talk through how modern attackers patiently map environments for weeks before neutralising recovery systems, and why that shift makes true physical isolation more relevant than ever. What struck me most was how calmly they described failure scenarios that would keep most leaders awake at night. The heart of the conversation centres on HyperBUNKER's offline vault and its spaceship-style double airlock design. Data enters through a one-way hardware channel, the network door closes, and only then is information moved into a completely cold vault with no address, no credentials, and no remote access. I also reflect on seeing the black box in person at the IT Press Tour in Athens and why it feels less like a gadget and more like a last-resort lifeline. We finish by talking about how businesses should decide what truly belongs in that protected 10 percent of data, and why this is as much a leadership decision as an IT one. If everything vanished tomorrow, what would your company need to breathe again, and would it actually survive? Useful Links Connect with Imran Nino Eškić Connect with Boštjan Kirm Learn more about HyperBUNKER Learn more about the IT Press Tour Thanks to our sponsors, Alcor, for supporting the show.

What happens when the AI race stops being about size and starts being about sense? In this episode of Tech Talks Daily, I sit down with Wade Myers from MythWorx, a company operating quietly while questioning some of the loudest assumptions in artificial intelligence right now. We recorded this conversation during the noise of CES week, when headlines were full of bigger models, more parameters, and ever-growing GPU demand. But instead of chasing scale, this discussion goes in the opposite direction and asks whether brute force intelligence is already running out of road. Wade brings a perspective shaped by years as both a founder and investor, and he explains why today's large language models are starting to collide with real-world limits around power, cost, latency, and sustainability. We talk openly about the hidden tax of GPUs, how adding more compute often feels like piling complexity onto already fragile systems, and why that approach looks increasingly shaky for enterprises dealing with technical debt, energy constraints, and long deployment cycles. What makes this conversation especially interesting is MythWorx's belief that the next phase of AI will look less like prediction engines and more like reasoning systems. Wade walks through how their architecture is modeled closer to human learning, where intelligence is learned once and applied many times, rather than dragging around the full weight of the internet to answer every question. We explore why deterministic answers, audit trails, and explainability matter far more in areas like finance, law, medicine, and defense than clever-sounding responses. There is also a grounded enterprise angle here. We talk about why so many organizations feel uneasy about sending proprietary data into public AI clouds, how private AI deployments are becoming a board-level concern, and why most companies cannot justify building GPU-heavy data centers just to experiment. 
Wade draws parallels to the early internet and smartphone app eras, reminding us that the playful phase often comes before the practical one, and that disappointment is often a signal of maturation, not failure. We finish by looking ahead. Edge AI, small-footprint models, and architectures that reward efficiency over excess are all on the horizon, and Wade shares what MythWorx is building next, from faster model training to offline AI that can run on devices without constant connectivity. It is a conversation about restraint, reasoning, and realism at a time when hype often crowds out reflection. So if bigger models are no longer the finish line, what should business and technology leaders actually be paying attention to next, and are we ready to rethink what intelligence really means? Useful Links Connect with Wade Myers Learn More About MythWorx Thanks to our sponsors, Alcor, for supporting the show.

What happens when we finally admit that stopping every cyberattack was never realistic in the first place? That is the thread running through this conversation, recorded at the start of the year when reflection tends to be more honest and the noise dial is turned down a little. I was joined by returning guest Raghu Nandakumara from Illumio, nearly three years after our last discussion, to pick up a question that has aged far too well. How do organizations talk about cybersecurity value when breaches keep happening anyway? This episode is less about shiny tools and more about uncomfortable truths. We spend time unpacking why security teams still struggle to show value, why prevention-only thinking keeps setting leaders up for disappointment, and why the conversation is slowly shifting toward resilience and containment. Raghu is refreshingly direct on why reducing cyber risk, rather than chasing impossible guarantees, is the only metric that really holds up under boardroom scrutiny. We also talk about the strange contradiction playing out across industries. Attackers are often using familiar paths like misconfigurations, excessive permissions, and missing patches, yet many organizations still fail to close those gaps. The issue, as Raghu explains, is rarely a lack of tools. It is usually fragmented coverage, outdated processes, and a talent pipeline that blocks capable people from entering the field while claiming there is a skills shortage. One of the most practical parts of this conversation centers on mindset. Instead of asking whether an attacker got in, Raghu argues that leaders should be asking how far they were able to go once inside. That shift alone changes how success is measured, how teams prepare for incidents, and how pressure-filled P1 moments are handled when boards want answers every fifteen minutes. 
We also touch on how legal action, public claims campaigns, and customer lawsuits are changing the stakes after a breach, forcing executives to rethink how they frame cyber investment. From there, Raghu shares how Illumio has been working with Microsoft to strengthen internal resilience at massive scale, and why visibility and segmentation are becoming harder to ignore. This is a conversation about realism, responsibility, and growing up as an industry. If cybersecurity is really about safety and not slogans, what would you want your organization to stop saying, and what would you rather hear instead? Useful Links Connect with Raghu Nandakumara on LinkedIn and Twitter Learn more about Illumio Lateral Movement in Cyberattacks Illumio Podcast Follow on Facebook, Twitter, LinkedIn, and YouTube Thanks to our sponsors, Alcor, for supporting the show.

What really happens inside an organization when a cyber incident hits and the neat incident response plan starts to fall apart? That question sat at the heart of my return conversation with Max Vetter, VP of Cyber at Immersive. It has been a big year for breaches, public fallout, and eye-watering financial losses, and this episode goes beyond headlines to examine what cyber crisis management actually looks like when pressure, uncertainty, and human behavior collide. Max brings a rare perspective shaped by years in law enforcement, intelligence work, and hands-on cyber defense, and he is refreshingly honest about where most organizations are still unprepared. We talked about why written incident response plans tend to fail at the exact moment they are needed most. Cyber incidents are chaotic, emotional, and non-linear, yet many plans assume calm decision-making and perfect coordination. Max explains why success or failure is often defined by the response rather than the initial breach itself, and why leadership, communication, and judgment matter just as much as technical skill. Real-world examples from major incidents highlight how competing pressures quickly emerge, whether to contain or keep systems running, whether to pay a ransom or risk prolonged downtime, and how every option comes with consequences. One idea that really stood out is Max's belief that resilience is revealed, not documented. Compliance and audits may tick boxes, but they rarely expose how teams behave under stress. We explored why organizations that rely on annual tabletop exercises often develop a false sense of confidence, and how that confidence can become dangerous when decisions are made quickly and publicly. Max shared why the best-performing teams are often the ones that feel less certain in the moment, because they question assumptions and adapt faster. We also dug into the growing role of crisis simulations and micro-drills. 
Rather than rehearsing a single scenario once a year, Immersive focuses on repeated, realistic practice that builds muscle memory across technical teams, executives, legal, and communications. The goal is not to predict the exact attack, but to train people to think clearly, collaborate across functions, and make defensible decisions when there are no good options. That preparation becomes even more important as cyber incidents increasingly spill into supply chains, manufacturing, and the physical world. As public scrutiny rises and consumer-led legal action becomes more common after breaches, reputation and response speed now sit alongside forensics and recovery as business-critical concerns. This episode is a candid look at why cyber crisis readiness is a discipline, not a document, and why assuming you will cope when the moment arrives is a risky bet. So if resilience only truly shows itself when everything is on the line, how confident are you that your organization would perform when the pressure is real and the clock is ticking? Useful Links Connect with Max Vetter on LinkedIn Learn more about Immersive Labs Follow on LinkedIn, Instagram, Twitter and Facebook Thanks to our sponsors, Alcor, for supporting the show.

What happens when the web browser stops being a passive window to information and starts acting like an intelligent coworker, and why does that suddenly make security everyone's problem? At the start of 2026, I sat down with Michael Shieh from Mammoth Cyber to unpack a shift that is quietly redefining how work gets done. AI browsers are moving fast from consumer curiosity to enterprise reality, embedding agentic AI directly into the place where most work already happens, the browser. Search, research, comparison, analysis, and decision support are no longer separate steps. They are becoming one continuous workflow. In this conversation, we talk openly about why consumer adoption has surged while enterprise teams remain hesitant. Many employees already rely on AI-powered browsing at home because it removes ads, personalizes results, and saves time. Inside organizations, however, the same tools raise difficult questions around data exposure, credential safety, and indirect prompt injection. Once an AI agent starts reading untrusted external content, the browser itself becomes a new attack surface. Michael explains why this risk is often misunderstood and why the real danger is not internal documents, but external websites designed to manipulate AI behavior. We dig into how Mammoth Cyber approaches this challenge differently, starting with a secure-first architecture that isolates trusted internal data from untrusted external sources. Every AI action, from memory to model connections to data access, is monitored and governed by policy. It is a practical response to a problem many security teams know is coming but feel unprepared to manage. We also explore how AI browsers change day-to-day work. A task like competitive analysis, which once took days of manual research and document comparison, can now be completed in minutes when an AI browser securely connects internal knowledge with external intelligence. 
That productivity gain is real, but only if enterprises trust the environment it runs in. We touch on Zero Trust principles, including work influenced by Chase Cunningham, and why 2026 looks like a tipping point for enterprise AI browsing. The technology is maturing, security controls are catching up, and businesses are starting to accept that blocking AI outright is no longer realistic. If you are curious to see how this works in practice, Mammoth Cyber offers a free Enterprise AI Browser that lets you experience what secure AI-powered browsing actually looks like, without putting your organization at risk. I have included the link so you can explore it yourself and decide whether this is where work is heading next. So, as AI browsers become the new workflow hub for knowledge workers everywhere, is your organization ready to secure the browser before it becomes your most exposed endpoint, and what would adopting one safely change about how your teams work? Useful Links Learn more about the Mammoth Enterprise Browser and try it for free Connect with Michael Shieh on LinkedIn Thanks to our sponsors, Alcor, for supporting the show.

What happens when engineering teams can finally see the business impact of every technical decision they make? In this episode of Tech Talks Daily, I sat down with Chris Cooney, Director of Advocacy at Coralogix, to unpack why observability is no longer just an engineering concern, but a strategic lever for the entire business. Chris joined me fresh from AWS re:Invent, where he had been challenging a long-standing assumption that technical signals like CPU usage, error rates, and logs belong only in engineering silos. Instead, he argues that these signals, when enriched and interpreted correctly, can tell a much more powerful story about revenue loss, customer experience, and competitive advantage. We explored Coralogix's Observability Maturity Model, a four-stage framework that takes organizations from basic telemetry collection through to business-level decision making. Chris shared how many teams stall at measuring engineering health, without ever connecting that data to customer impact or financial outcomes. The conversation became especially tangible when he explained how a single failed checkout log can be enriched with product and pricing data to reveal a bug costing thousands of dollars per day. That shift, from "fix this tech debt" to "fix this issue draining revenue," fundamentally changes how priorities are set across teams. Chris also introduced Oli, Coralogix's AI observability agent, and explained why it is designed as an agent rather than a simple assistant. We talked about how Oli can autonomously investigate issues across logs, metrics, traces, alerts, and dashboards, allowing anyone in the organization to ask questions in plain English and receive actionable insights. From diagnosing a complex SQL injection attempt to surfacing downstream customer impact, Oli represents a move toward democratizing observability data far beyond engineering teams. Throughout our discussion, a clear theme emerged. 
When technical health is directly tied to business health, observability stops being seen as a cost center and starts becoming a competitive advantage. By giving autonomous engineering teams visibility into real-world impact, organizations can make faster, better decisions, foster innovation, and avoid the blind spots that have cost even well-known brands millions. So if observability still feels like a necessary expense rather than a growth driver in your organization, what would change if every technical signal could be translated into clear business impact, and who would make better decisions if they could finally see that connection? Useful Links Connect with Chris Cooney Learn more about Coralogix Follow on LinkedIn Thanks to our sponsors, Alcor, for supporting the show.

What does real AI transformation look like when leaders stop chasing prototypes and start demanding outcomes they can actually measure? That question sat at the center of my conversation with Alex Cross, Chief Technology Officer for EMEA at CI&T, alongside Melissa Smith, as we unpacked why so many organizations feel stuck between AI ambition and business reality. There is no shortage of excitement around AI, but there is growing skepticism too, especially from leadership teams who have seen pilots come and go without clear return. This episode focuses on how CI&T is addressing that gap head on. Alex shared how CI&T frames its work as AI-enabled transformation rather than simply layering AI tools onto existing processes. The distinction matters. Instead of using AI to speed up broken workflows, CI&T reshapes how work gets done so AI becomes part of value creation itself. We explored a standout example from Itaú, the largest bank in Latin America, where deep modernization work helped deliver gains that most executives only ever see in strategy decks. Productivity rose sharply, digital launch cycles collapsed from years to months, customer satisfaction jumped, and the commercial impact reached hundreds of millions in uplift. These are the kinds of results that change boardroom conversations. A big part of how CI&T gets there is its proprietary Flow platform. Alex explained how Flow gives clients a day-one AI environment, removing the heavy upfront cost and complexity that often slows momentum. Instead of spending months building platforms before any value appears, teams can move from proof of concept to production in as little as six to eight weeks. Flow also plays a second role that many AI programs miss, acting as a measurement layer so performance, efficiency, and ROI are visible rather than assumed. We also talked about why partnerships matter when execution is the goal. 
CI&T works closely with hyperscalers like AWS and Databricks, combining native tools with its own codified expertise. That combination has helped the company achieve an unusually high success rate in bringing AI initiatives to production, a challenge many organizations still struggle with. For Alex, the difference comes down to a relentless focus on production readiness and collaboration between business and technology teams from day one. Looking ahead, the conversation turned to CI&T's expansion across EMEA and what the company's 30th year represents. Rather than chasing every new trend, the focus is on productizing services around real client problems, whether that is legacy modernization, efficiency, or growth. The goal is to bridge strategy and execution in a way that feels practical, fast, and accountable. If you are leading AI initiatives and wondering why progress feels slower than the hype suggests, this episode offers a grounded perspective from the front lines. So, as organizations head into another year of bold AI plans, the real question becomes this. Are you building faster caterpillars, or are you ready to do the harder work required to turn ambition into something that can truly scale? Useful Links Connect with Alex Cross Connect With Melissa Smith Learn more about CI&T Follow CI&T on LinkedIn and YouTube Thanks to our sponsors, Alcor, for supporting the show.

What does AI-led transformation actually look like when it moves beyond pilots, hype, and slide decks and starts changing how work gets done every day? That question framed my conversation with Venk Korla, CEO of HGS, at a time when many organizations feel both excited and exhausted by AI. Boards want results, teams are buried in proofs of concept, and leaders are under pressure to show progress without breaking trust, budgets, or operations. This episode cuts through that tension and focuses on what it takes to turn ambition into outcomes. Venk shared how HGS thinks about what he calls intelligent experiences, where customer interactions are directly connected to operational follow-through. Instead of treating AI as a front-end layer or a chatbot add-on, HGS links context, data, and fulfillment so the experience continues after the conversation ends. We talked through practical examples, from airlines proactively rebooking stranded passengers before they queue at a desk, to healthcare providers guiding patients step by step before and after surgery with timely, relevant messages. In each case, the value comes from anticipation and execution, not novelty. A big part of our discussion centered on why so many AI initiatives stall. Venk described how organizations often chase technology first, launching pilots without redesigning the underlying process. HGS takes a different route through what they call Realized AI, embedding AI into specific workflows with clear ownership and measurable goals. The focus is on outcomes such as faster processing, higher compliance, and improved customer satisfaction, all proven within a ninety-day proof of value. It is a disciplined approach that favors repeatability over experimentation theater. We also spent time on cloud strategy, an area where expectations and reality often collide. Venk was candid about why simple lift-and-shift migrations fail to deliver value. 
Without re-architecting applications to take advantage of elasticity and serverless compute, cloud spend can grow while performance stalls. He shared how a FinOps mindset, combined with application redesign, helped one client dramatically improve load speeds while reducing costs, reinforcing the idea that transformation requires structural change, not surface movement. Ethics and trust were another thread running through the conversation. Venk emphasized that AI systems are only as reliable as the data, governance, and oversight behind them. Human-in-the-loop design remains central at HGS, ensuring accountability, empathy, and confidence for both customers and employees working alongside AI. This balance between automation and human judgment came up again when we discussed their software-as-a-surface model, where AI and people work together in a carefully orchestrated way, with pricing tied to resolved outcomes rather than activity alone. As the pace of change continues to accelerate, this episode offers a grounded perspective on how to move forward without getting lost in noise. If you are leading transformation and feeling pressure to show progress, the real challenge may not be choosing the right tool, but deciding which outcomes truly matter and redesigning work around them. As AI, cloud, and customer experience continue to converge, are you building systems that look impressive in demos or that deliver predictable results when it counts? Useful Links Connect with Venk Korla Learn more about HGS Follow on LinkedIn Thanks to our sponsors, Alcor, for supporting the show.

What if the biggest breakthrough in weight management is not a new diet, but finally seeing how your body responds in real time? That question sat at the center of my conversation with Sharam Fouladgar-Mercer, CEO and co-founder of Signos, a continuous glucose monitoring (CGM) and AI-powered health platform built to help people manage weight by understanding their metabolism. January is when motivation is high and the wellness noise is loud, but it is also when a lot of people realize how hard it is to stick with generic advice that does not fit real life. This episode is about why personalization matters, how metabolic signals can change the way you think about food and exercise, and what happens when health technology shifts from reporting the past to guiding the next decision. Sharam explained how Signos pairs a CGM with an AI-driven experience that turns glucose data into practical actions. The point is not to force people into rigid rules or extreme restrictions. Instead, it is about learning how your body reacts to everyday choices, then using that feedback to reduce spikes, improve consistency, and build habits you can actually live with. We talked about simple interventions, like changing the order of foods in a meal, timing movement more intelligently, and spotting patterns that would otherwise stay invisible. Two personal stories brought the conversation to life. Sharam shared how he lost 25 pounds while increasing his calorie intake, which challenges a lot of assumptions people carry into weight loss. He also shared a story from his family life, where his wife's deep sleep increased from roughly 20 minutes a night to around 60 minutes after focusing on glucose stability, even while total sleep time remained limited during the intense period of raising young kids. It is the kind of detail that hits home for anyone who has ever tried to make healthier choices while exhausted and stretched thin. 
We also explored why FDA clearance matters for Signos and what that could mean for mainstream access. Over-the-counter availability reduces friction, can lower cost, and opens the door to broader adoption, including potential FSA and HSA eligibility. Looking ahead, Sharam shared a vision that goes beyond weight management, connecting metabolic health to the long arc of prevention and chronic conditions where insulin resistance plays a role. If you have ever felt like you are doing all the "right" things and still not seeing results, this episode will make you rethink what "right" even means. And if you could finally see your metabolism in real time, would it change how you approach food, sleep, exercise, and the habits you want to keep this year? Useful Links Connect with Sharam Fouladgar-Mercer Learn more about Signos Instagram, Facebook, X and YouTube Thanks to our sponsors, Alcor, for supporting the show.

What if your website could spot its own problems, fix them, and quietly make more money while you focus on building your business? That question sat at the heart of my conversation with Aviv Frenkel, co-founder and CEO of Moonshot AI, and it speaks to a frustration almost every founder and digital leader recognizes. Traffic is expensive, attention is fragile, and even small issues in design or flow can quietly drain revenue for months before anyone notices. Traditional optimization often means long cycles, internal debates, and teams juggling analytics, design tools, and testing platforms while hoping the next experiment moves the needle. Aviv's perspective is shaped by lived experience. Before building Moonshot AI, he ran an e-commerce company that had plenty of visitors but disappointing conversion. Like many founders, he watched teams guess at fixes, wait weeks for tests to run, then struggle to link effort to outcome. Moonshot AI was born from that frustration, with a simple ambition. Let the website diagnose what is broken, generate solutions, test them, and deploy the winner automatically, without the need for a dedicated growth team. In our discussion, Aviv explained how Moonshot focuses on front-end experience and site performance, spotting issues such as unclear value propositions, poorly placed calls to action, or confusing mobile navigation. The platform generates its own design, copy, and code variants, runs live tests, and then rolls out what actually works. The results are hard to ignore. Brands across beauty, fashion, jewelry, and consumer electronics are seeing revenue per visitor lift by thirty to fifty percent within months. One small change to a mobile navigation menu at Hugh Jewelry led to a fifty-seven percent increase in revenue per visitor, which is the kind of outcome that gets leadership teams paying attention. We also talked about momentum behind the company itself.
A recently announced ten-million-dollar seed round has given Moonshot AI the resources to scale engineering and go-to-market teams at a time when demand is accelerating fast. But beyond funding and growth charts, what stood out most was Aviv's longer-term view. As more people turn to AI assistants and agents instead of traditional search, websites need to be structured so machines can understand them as clearly as humans. Moonshot is already optimizing for that future, preparing sites for an agent-driven web where the customer might be an algorithm as much as a person. Aviv also shared his personal journey, moving from a successful career as a tech journalist and TV host into the far more humbling world of building companies. Rejection, uncertainty, and hard lessons came with the territory, but so did clarity. His guiding idea, inspired by Jeff Bezos, is a minimum regret mindset, choosing the harder path now to avoid looking back later and wondering what might have been. So as AI moves from tools that assist to systems that act, and as websites become active participants in growth rather than static assets, the big question becomes this. Are you still relying on slow, manual optimization cycles, or are you ready to let your website start improving itself, and what does that shift mean for how you build and scale in the years ahead? Useful Links Connect with Aviv Frenkel Learn more about Moonshot AI Follow on LinkedIn Thanks to our sponsors, Alcor, for supporting the show.

What happens when decades of supply chain planning collide with AI, volatility, and a world that no longer moves at a predictable pace? That question sat at the heart of my conversation with Piet Buyck, a serial entrepreneur whose career spans early optimization engines, cloud-era planning systems, and now AI-driven decision environments. Speaking from Antwerp just days before the holidays, Piet brought a calm, grounded perspective shaped by years inside organizations operating under real commercial pressure. His journey includes building Garvis, an AI-native planning platform later acquired by Logility, which itself became part of Aptean. That arc alone tells a story about consolidation, scale, and where modern planning is heading. We spent time unpacking ideas from Piet's book, AI Compass for Supply Chain Leaders, particularly his view that planning drifted too far into abstract numbers and away from real-world context. Long before AI became a boardroom obsession, he saw how centralized models created distance between decisions and reality. When disruption arrives, whether through pandemics, tariffs, or geopolitical tension, that distance becomes costly. Piet shared vivid examples of how slow, spreadsheet-heavy processes fail precisely when speed and clarity matter most. One thread that kept resurfacing was data. Many leaders believe their data is "good enough" until volatility exposes blind spots. Piet pushed the conversation further, explaining that AI's value goes beyond crunching clean datasets. It can move understanding across silos, surface the reasons behind decisions, and make context visible without endless meetings. That idea of explainable, collaborative AI came up repeatedly, especially as a counterpoint to opaque automation that creates confidence without understanding. We also tackled the human side. There is anxiety around skills erosion and entry-level roles disappearing, but Piet's view was more nuanced. 
AI shifts where time and energy go, away from gathering information and toward judgment, fairness, and accountability. In his eyes, the real challenge for leaders is choosing the right scope. Projects that are too small fade into irrelevance, while those that are too big stall under their own weight. As we looked ahead, Piet reflected on how leadership itself may change as data becomes accessible to everyone. Authority based on instinct alone becomes harder to defend when assumptions are visible. The leaders who thrive will be those who can explain direction clearly, connect data to purpose, and bring people with them. So after hearing how planning, AI, and leadership are converging in real organizations today, how do you see the balance between human judgment and machine intelligence playing out in your own world, and are we truly ready for what that shift demands? Useful Links Connect with Piet Buyck The AI Compass for Supply Chain Leaders Book Logility Website Follow on LinkedIn Thanks to our sponsors, Alcor, for supporting the show.

What does it take to secure AI systems that are probabilistic by design and can surprise even their builders? That question sat at the heart of my conversation with Bret Kinsella, recorded while he was in Las Vegas for CES and preparing to step onto the AI stage. Bret brings a rare combination of long-term perspective and hands-on experience. As General Manager of Fuel iX at TELUS Digital, he operates generative AI systems at a scale most enterprises never see, processing trillions of tokens and delivering measurable business outcomes for global organizations. That vantage point gives him a clear view of both the promise of generative AI and the uncomfortable truths many teams are still avoiding. In this episode, we unpack why generative AI breaks so many of the assumptions security teams have relied on for decades. Bret explains why these systems are probabilistic rather than deterministic, and how that single shift creates what he calls an unbounded attack surface. Users are no longer limited to predefined buttons or workflows, and outputs are no longer constrained to a fixed database. The same prompt can succeed or fail depending on subtle changes, which makes single-pass testing and checkbox compliance dangerously misleading. If you have ever wondered why an AI system feels safe one day and unpredictable the next, this conversation offers a grounded explanation. We also explore why focusing on the model alone misses the real risk. Bret makes a strong case that the model is only one part of a much larger system shaped by system prompts, connected data sources, tools, and guardrails. Change any one of those elements and behavior shifts. This is why automated, continuous red teaming has become unavoidable. Bret shares how TELUS Digital's Fortify AI attack model uncovered hundreds of vulnerabilities in hours, far beyond what human teams could realistically surface on their own. Yet automation is not the end of the story. The final decisions still depend on people who understand context, trade-offs, and business impact.
Throughout the discussion, we return to a simple but uncomfortable idea. AI safety is not something you bolt on after deployment. It demands a different mindset, broader testing, repeated validation, and ongoing human judgment. For leaders moving from experimentation to real-world deployment, this episode is a clear-eyed look at what responsible progress actually requires. So, as more organizations rush to deploy agents and autonomous systems in 2026, are we truly prepared for software that learns, adapts, and occasionally surprises us, and what does that mean for how you test and trust AI inside your own business? Useful Links Connect with Bret Kinsella TELUS Digital Website Fuel iX

What does it actually take to move beyond AI pilots and turn enterprise ambition into real productivity gains? That question sat at the center of my conversation with Olivia Nottebohm, Chief Operating Officer at Box, and it is one that every boardroom seems to be wrestling with right now. AI conversations have matured quickly. The early excitement has given way to harder questions about return, trust, and what changes when software stops assisting work and starts acting inside it. Olivia brings a rare vantage point to that discussion, shaped by leadership roles at Google, Dropbox, Notion, and now Box, where she oversees global go-to-market, customer success, and partnerships at a time when AI is becoming embedded in everyday operations. We talked about why early adopters are already seeing productivity lifts of around thirty-seven percent, while others remain stuck in experimentation. The difference, as Olivia explains, is rarely the model itself. Strategy matters more. Teams that treat AI as a chance to rethink how work flows through the organization are pulling away from those that simply layer automation on top of broken processes. This is where unstructured content, often described as dark data, becomes a competitive asset rather than a liability. When that information is curated, permissioned, and ready for agents to use, entire workflows start to look very different. A large part of our discussion focused on AI agents and why 2026 is shaping up to be the year they move from novelty to necessity. Agents are already joining the workforce, taking on tasks that used to require multiple handoffs between teams. That shift brings speed and autonomy, but it also raises new questions about trust. Olivia shared why governance has become one of the biggest blind spots in enterprise AI, especially when agents act independently or interact across platforms. Her perspective was clear.
Without strong security, permissioning, and oversight, the risks grow faster than the rewards. We also explored why companies using a mix of models and agents tend to see stronger returns, and how Box approaches this with a neutral, customer-choice-driven philosophy while maintaining consistent governance. From the five stages of enterprise AI maturity to the idea of a future agent manager role, this conversation offers a grounded look at what AI at scale actually demands from leadership, culture, and operating models. So as investment accelerates and AI becomes part of the fabric of work, the real question is this. Are organizations ready to redesign how they operate around agents, data, and trust, or will they keep experimenting while others pull ahead, and what do you think separates the two?

What happens when the systems we rely on every day start producing more signals than humans can realistically process, and how do IT leaders decide what actually matters anymore? In this episode of Tech Talks Daily, I sit down with Garth Fort, Chief Product Officer at LogicMonitor, to unpack why traditional monitoring models are reaching their limits and why AI-native observability is starting to feel less like a future idea and more like a present-day requirement. Modern enterprise IT now spans legacy data centers, multiple public clouds, and thousands of services layered on top. That complexity has quietly broken many of the tools teams still depend on, leaving operators buried under alerts rather than empowered by insight. Garth brings a rare perspective shaped by senior roles at Microsoft, AWS, and Splunk, along with firsthand experience running observability at hyperscale. We talk about how alert fatigue has become one of the biggest hidden drains on IT teams, including real-world examples where organizations were dealing with tens of thousands of alerts every week and still missing the root cause. This is where LogicMonitor's AI agent, Edwin AI, enters the picture, not as a replacement for human judgment, but as a way to correlate noise into something usable and give operators their time and confidence back. A big part of our conversation centers on trust. AI agents behave very differently from deterministic automation, and that difference matters when systems are responsible for critical services like healthcare supply chains, airline operations, or global hospitality platforms. Garth explains why governance, auditability, and role-based controls will decide how quickly enterprises allow AI agents to move from advisory roles into more autonomous ones. We also explore why experimentation with AI has become one of the lowest-risk moves leaders can make right now, and why the teams who treat learning as a daily habit tend to outperform the rest.
We finish by zooming out to the bigger picture, where observability stops being a technical function and starts becoming a way to understand business health itself. From mapping infrastructure to real customer experiences, to reshaping how IT budgets are justified in boardrooms, this conversation offers a grounded look at where enterprise operations are heading next. So, as AI agents become more embedded in the systems that run our businesses, how comfortable are you with handing them the keys, and what would it take for you to truly trust them? Useful Links Connect with Garth Fort Learn more about LogicMonitor Check out the LogicMonitor blog Follow on LinkedIn, X, Facebook, and YouTube. Alcor is the sponsor of Tech Talks Network

Are we asking ourselves an honest question about who really owns automation inside a business anymore? In my conversation with Darin Patterson, Vice President of Market Strategy at Make, we explore what happens when speed becomes the default requirement, but visibility and structure fail to keep up. Make has become one of the breakout platforms for teams that want to build automated workflows without writing code, and now, with AI agents joining the mix, the stakes feel even higher. Darin talks candidly about the tension between empowerment and chaos, especially in organizations that embraced no-code tools fast and early, only to discover that automation can quietly turn into sprawl if left unchecked. What struck me most is how strongly Darin challenges the idea that documentation alone can save modern IT teams. He argues that traditional monitoring tools and workflow documentation are breaking down under the weight of constant iteration. That's where Make Grid comes in. Make Grid creates an auto-generated, real-time visual map of a company's automation ecosystem, something Darin describes as a turning point for governance. He explains why this matters now, not later. As companies deploy AI into processes that used to be owned by specialists, Grid provides a shared lens for understanding what is running, who built it, and where dependencies exist. It's an answer to a problem many IT leaders are reluctant to admit publicly, that automation systems often grow faster than oversight systems ever could. Darin also offers a refreshingly grounded take on the psychology of ambitious teams. He talks about the need to prevent "no-code anarchy," a phrase I've heard whispered at conferences, but rarely unpacked with clarity. His view is simple, trust teams to build, but give them shared maps, guardrails, and governance that don't slow them down. 
That balance between autonomy and oversight becomes even more meaningful when AI is introduced into workflows that touch security, IT performance, and cross-team accountability. Make Grid attempts to solve that balance by showing the automation architecture visually, even when internal documentation has gone stale. So here's the question I want to leave you with, if AI agents can now design, connect, and deploy workflows across an organization, what role will visual governance play in keeping businesses both fast and accountable? And what does good oversight look like when humans are no longer the only builders in the system? Useful Links Learn more about Make Connect with Darin Patterson Thanks to our sponsors, Alcor, for supporting the show.

Was 2025 the year the games industry finally stopped talking about direct-to-consumer and started treating it as the default way to do business? In this episode of Tech Talks Daily, I'm joined by Chris Hewish, President at Xsolla, for a wide-ranging conversation about how regulation, platform pressure, and shifting player expectations have pushed D2C from the margins into the mainstream. As court rulings, the Digital Markets Act, and high-profile battles like Epic versus Apple continue to reshape the industry, developers are gaining more leverage, but also more responsibility, over how they distribute, monetize, and support their games. Chris breaks down why D2C is no longer just about avoiding app store fees. It is about owning player relationships, controlling data, and building sustainable businesses in a more consolidated market. We explore how tools like Xsolla's Unity SDK are lowering the barrier for studios to sell directly across mobile, PC, and the web, while handling the operational complexity that often scares teams away from global payments, compliance, and fraud management. We also dig into what is changing inside live service games. From offer walls that help monetize the vast majority of players who never spend, to LiveOps tools that simplify campaigns and retention strategies, Chris shares real examples of how studios are seeing meaningful lifts in revenue and engagement. The conversation moves beyond technology into mindset, especially for indie and mid-sized teams learning that treating a game as a long-term business needs to start far earlier than launch day. Here in 2026, we talk about account-centric economies, hybrid monetization models running in parallel, and the growing role of community-driven commerce inspired by platforms like Roblox and Fortnite. There is optimism in these shifts, but also understandable anxiety as studios adjust to managing more of the stack themselves. 
Chris offers a grounded perspective on how that balance is likely to play out. So if games are becoming hobbies, platforms are opening up, and developers finally have the tools to meet players wherever they are, what does the next phase of direct-to-consumer really look like, and are studios ready to fully own that relationship? Useful Links Connect with Chris Hewish on LinkedIn Learn more about Xsolla Follow on LinkedIn, Twitter, and Facebook Thanks to our sponsors, Alcor, for supporting the show.

In this episode of Tech Talks Daily, I'm joined by Kiren Sekar, Chief Product Officer at Samsara, to unpack how AI is finally showing up where it matters most, in the frontline operations that keep the global economy moving. From logistics and construction to manufacturing and field services, these industries represent a huge share of global GDP, yet for years they have been left behind by modern software. Kiren explains why that gap existed, and why the timing is finally right to close it. We talk about Samsara's full-stack approach that blends hardware, software, and AI to turn trillions of real-world data points into decisions people can actually act on. Kiren shares how customers are using this intelligence to prevent accidents, cut fuel waste, digitize paper-based workflows, and scale expert judgment across thousands of vehicles and job sites. The conversation goes deep into real examples, including how large enterprises like Home Depot have dramatically reduced accident rates and improved asset utilization by making safety and efficiency part of everyday operations rather than afterthoughts. A big part of our discussion focuses on trust. When AI enters physical operations, concerns around monitoring and surveillance surface quickly. Kiren walks through how adoption succeeds only when technology is introduced with care, transparency, and a clear focus on protecting workers. From proving driver innocence during incidents to rewarding positive behavior and using AI as a virtual safety coach, we explore why change management matters just as much as the technology itself. We also look at the limits of automation and why human judgment still plays a central role. Kiren explains how Samsara's AI acts as a force multiplier for experienced frontline experts, capturing their hard-won knowledge and scaling it across an entire workforce rather than trying to replace it. 
As AI moves from pilots into daily decision-making at scale, this episode offers a grounded view of what responsible, high-impact deployment actually looks like. As AI continues to reshape frontline work, making jobs safer, easier, and more engaging, how should product leaders balance innovation with responsibility when their systems start influencing real-world safety and productivity every single day? Useful Links Connect with Kiren Sekar Learn more about Samsara Tech Talks Daily is Sponsored by Denodo

What if airlines stopped thinking in terms of seats and schedules and started designing for the entire journey instead? In this episode of Tech Talks Daily, I'm joined by Somit Goyal, CEO of IBS Software, to talk about how travel technology is being rebuilt at its foundations. Since we last spoke, AI has moved from experimentation into everyday operations, and that shift is forcing airlines to rethink everything from retailing and loyalty to disruption management and customer trust. Somit shares why AI can no longer sit on the edge of systems as a feature, and why it now has to be embedded directly into how decisions are made across the business. We discuss the growing gap between legacy airline technology and rapidly rising traveler expectations, and why this tension has become a defining moment for the industry. For Somit, travel tech is no longer back-office infrastructure. It is becoming the operating system for customer experience and revenue. That shift changes how airlines think about retailing, moving away from selling flights toward curating outcomes across a multi-day journey that includes partners, servicing, and real-time operational awareness. The conversation also explores why agility now matters more than scale, and how airlines are approaching this transformation without breaking what already works. A major part of this episode focuses on IBS Software's deep co-innovation partnership with Amazon Web Services. Somit explains why this is far more than a cloud hosting arrangement, covering joint R&D, shared roadmaps, and AI labs designed to help airlines build modern retailing capabilities faster. We also unpack what "AI first" really means in practice, how intelligence is reshaping offer creation, pricing, order management, and disruption handling, and why responsible AI must be treated as a product rather than a legal safeguard. We also spend time on loyalty, one of the industry's most stubborn challenges.
Somit outlines why converging reservations and loyalty systems is such a powerful unlock, how it enables real-time personalization instead of generic segmentation, and why loyalty should evolve from a points ledger into an experience engine that delivers value before, during, and after a trip. As airlines race toward 2026, the big question is no longer whether transformation will happen, but who will move with enough clarity and trust to earn long-term loyalty. In a world where AI knows more about travelers than ever before, how do airlines use that intelligence to create better outcomes without crossing the line, and are they ready to rethink the journey from end to end? Useful Links Connect with Somit Goyal Learn more about IBS Software Tech Talks Daily is Sponsored by Denodo

What happens when a podcast stops being something you listen to and becomes something you physically show up for? In this episode of Tech Talks Daily, I wanted to explore a different kind of tech story, one rooted in community, endurance, and real human connection. I was joined by Sam Huntington, a Business Development Officer at Wells Fargo, who has quietly built something special at the intersection of technology, entrepreneurship, and cycling through his podcast and community project, Hill Climbers. Sam's story starts far from a studio. It begins on a bike, moving through Philadelphia, Los Angeles, and eventually Austin, where chance conversations on group rides turned into friendships, business relationships, and eventually a podcast. We talk about why endurance sports and startups share the same mental terrain, the moments when you want to quit, and how those moments often define the outcome. Sam explains how Hill Climbers evolved from recorded conversations into weekly rides, live podcast tapings, and in person events that bring founders, investors, and operators together without name badges or pitch decks. We also dig into what makes Austin such a magnetic place for founders right now, and why community building outside Silicon Valley feels different when it is built around shared effort rather than curated networks. Sam shares lessons learned from taking a podcast offline, including the early weeks when hardly anyone showed up, the temptation to stop, and the persistence required to build momentum. There is a refreshing honesty in how he describes growing something slowly, resisting shortcuts, and letting trust compound over time. This conversation is also a reminder that meaningful networks are rarely built through algorithms. They are built through shared experiences, discomfort, friendly competition, and showing up consistently when no one is watching. 
Whether you are a founder, an investor, or someone trying to build a community of your own, there is something grounding in hearing how relationships form when work is not the opening line. As more of our professional lives move online, are we losing the spaces where real connection happens, and what would it look like for you to build community around a shared passion rather than a job title? Useful Links Connect with Sam Huntington Hill Climbers Website Instagram Tech Talks Daily is Sponsored by Denodo

What happens to patient care when hospital systems suddenly go dark and clinicians are forced back to pen and paper in the middle of a crisis? In this episode of the Tech Talks Daily Podcast, I speak with Chao Cheng-Shorland, Co-founder and CEO of ShelterZoom, about a problem that many healthcare leaders still underestimate until it is too late. As ransomware attacks, cloud outages, and system failures become more frequent, electronic health record downtime has shifted from a rare incident to a recurring operational risk with real consequences for patient safety, staff wellbeing, and hospital finances. Chao explains why traditional disaster recovery plans fall short in live clinical environments and why returning to paper workflows is no longer viable for modern healthcare teams. We discuss how EHR downtime can stretch from hours into weeks, how reimbursement delays and cash flow pressure compound the damage, and why younger clinicians are often unprepared for manual processes they were never trained to use. The conversation also explores the mindset shift now taking place among CIOs and CISOs, as resilience moves from a compliance checkbox to a survival requirement. At the heart of the discussion is ShelterZoom's SpareTire platform and the thinking behind treating uninterrupted access to clinical data as a baseline rather than a backup. Chao shares how the idea emerged directly from hospital conversations, why an external, always-available system is essential during cyber incidents, and how ShelterZoom's tokenization roots shaped a design focused on security without disruption. We also look at how rising AI adoption is changing the threat landscape and why many healthcare organizations are reordering priorities to secure continuity before rolling out new AI initiatives. 
As we look toward 2026, this episode offers a grounded view of how healthcare organizations must rethink downtime tolerance, data governance, and operational readiness in a world where digital outages can quickly become clinical emergencies. If downtime is now inevitable rather than hypothetical, what does real resilience look like for hospitals, and are healthcare leaders moving fast enough to protect patients when systems fail? Useful Links Connect with Chao Cheng-Shorland Learn more about ShelterZoom Tech Talks Daily is Sponsored by Denodo

Is your website still the front door to your business, or has AI already quietly changed where customers first meet your brand? In this episode of the Tech Talks Daily Podcast, I sit down with Dominik Angerer, Co-founder and CEO of Storyblok, to unpack how content, search, and discovery are shifting in an AI-first world. As search behavior moves away from blue links toward direct answers inside tools like ChatGPT and Google summaries, Dominik explains why many businesses are seeing traffic decline even while signups and conversions continue to grow. We explore how AI is reshaping the role of content management systems, from automation and orchestration to personalization at scale. Dominik shares why consistency now matters more than volume, how outdated content can actively harm brand visibility inside AI answers, and why the technical foundations built for SEO still play a major role as generative search takes hold. This conversation also dives into headless CMS architecture, why separating content from presentation has become even more valuable, and how structured, well-maintained content gives AI systems something reliable to work with. Dominik also introduces the idea of joyful content, a belief that better tools lead to better work and ultimately better experiences for audiences. From AI-powered support workflows to personalized retail and loyalty experiences, he shares real examples of how forward-looking teams are already using content as an active system rather than a static archive. As businesses look toward 2026 and rethink how they show up across websites, apps, agents, and answer engines, this episode offers a grounded look at what needs to change and where to start. As AI becomes the place people go for answers rather than search results, how are you rethinking your content strategy, and what will you do differently after hearing this conversation? Connect with Dominik Angerer Learn more about Storyblok Tech Talks Daily is Sponsored by Denodo
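The headless idea Dominik describes can be pictured as structured content served as plain data, with each channel applying its own presentation. A minimal sketch: the field names and render functions below are illustrative assumptions, not Storyblok's actual API.

```python
# Hypothetical structured content entry, shaped like the JSON a headless
# CMS API might return; field names here are made up for illustration.
article = {
    "slug": "ai-search-shift",
    "title": "How AI Answers Change Discovery",
    "body": "Search is moving from blue links to direct answers.",
    "updated": "2025-11-02",
}

def render_html(entry):
    """One presentation layer: a website template."""
    return f"<article><h1>{entry['title']}</h1><p>{entry['body']}</p></article>"

def render_answer_feed(entry):
    """Another consumer: plain text an answer engine could ingest directly."""
    return f"{entry['title']} (updated {entry['updated']}): {entry['body']}"
```

Because the content is structured and dated, a browser and an AI answer engine consume the same source of truth; only the presentation layer differs.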

What happens when the push for smarter crypto wallets runs headfirst into the reality that everything on a public blockchain can be seen by anyone? In this episode of Tech Talks Daily, I wanted to take listeners who may not live and breathe Web3 every day and introduce them to a problem that is becoming harder to ignore. As Ethereum evolves and smart accounts unlock new wallet features, the surface area for risk grows at the same time. That is where privacy-first Layer 2 solutions enter the conversation, not as an abstract idea, but as a practical response to very real security and usability concerns. My guest is Joe Andrews, Co-founder and President at Aztec Labs. Joe brings an engineering mindset shaped by years of building consumer-facing applications and deep privacy infrastructure. Together, we unpack why privacy and security can no longer be treated as separate topics, especially as Ethereum rolls out more advanced account features. Joe explains how privacy-first Layer 2 networks act as an added line of defense, reducing exposure to threats that come from fully transparent balances, identities, and transaction histories. We also talk about what Aztec actually is, often described as the Private World Computer, and why that framing matters. Joe shares learnings from Aztec's public testnet launch earlier this year, what surprised the team once thousands of nodes were running in the wild, and how the community has stepped up in ways the company itself could not have planned for. There is also an honest discussion about the UK crypto scene, the missed opportunities, and the quiet resilience of builders who continue to ship despite regulatory uncertainty. As we look ahead, Joe outlines what comes next as Aztec moves closer to enabling private transactions on a decentralized network, and why the next phase is less about theory and more about real people using privacy in everyday interactions. 
If you are curious about how privacy-first Layer 2 solutions fit into Ethereum's roadmap, or why privacy might be the missing piece that finally makes smart wallets usable at scale, does this conversation change how you think about the future of crypto, and where would you like to see this technology go next? Useful Links Connect with Joe Andrews Learn more about Aztec Labs Tech Talks Daily is Sponsored by Denodo

What happens when the systems designed to make life easier quietly begin shaping how we think, decide, and choose? In this episode of the Tech Talks Daily Podcast, I sit down with Jacob Ward, a journalist who has spent more than two decades examining the unseen effects of technology on human behavior. From reporting roles at NBC News, Al Jazeera, CNN, and PBS, to hosting his own podcast The Rip Current, Jacob has built a career around asking uncomfortable questions about power, persuasion, and the psychology sitting beneath our screens. Our conversation centers on his book The Loop: How Technology Is Creating a World Without Choices and How to Fight Back, written before ChatGPT entered everyday life. Jacob explains why his core concern was never about smarter machines alone, but about what happens when AI systems learn us too well. Drawing on behavioral science, newsroom experience, and recent academic research, he argues that AI can narrow our sense of possibility while convincing us we are gaining freedom. The result is a subtle tension between convenience and control that many listeners will recognize in their own digital lives. We also explore the idea of AI companies behaving like nation states, accumulating talent, influence, and authority without the checks that usually accompany that kind of power. Jacob reflects on the speed of AI deployment, the belief systems driving its biggest champions, and why individual self-control is unlikely to be enough. Instead, he makes the case for systemic responses, cultural guardrails, and a renewed focus on protecting human skills that cannot be automated away. There is room for optimism here too. We talk about where AI genuinely helps, from medicine to scientific discovery, and how leaders can hold hope and skepticism at the same time without slipping into hype or fear. 
From preserving entry level work as a form of apprenticeship to resisting the urge to outsource thinking itself, this episode offers a thoughtful look at what staying human might mean in an age of intelligent machines. Jacob has also appeared on shows like The Joe Rogan Experience, This Week in Tech, and The Don Lemon Show, but this conversation strips things back to fundamentals. How much choice do we really have, and what are we willing to give up for frictionless answers? If AI is quietly closing the loop around our decisions, what does fighting back actually look like for you, and where do you think that line between help and influence should be drawn? Useful Links Connect With Jacob Ward Check out his website and book

How is HR changing when AI, economic pressure, and rising employee expectations all collide at once? In this episode of Tech Talks Daily, I'm joined by Simon Noble, CEO of Cezanne HR, to unpack how the role of HR is evolving from a traditional support function into something far more closely tied to business performance. Simon shares why HR is increasingly being judged on outcomes like retention, capability building, and readiness for change, rather than policies, processes, or cost control. Yet despite that shift, many HR leaders still find themselves pulled back into a compliance-first mindset as budgets tighten, skills shortages persist, and new legislation raises the stakes. We explore how AI fits into this picture without stripping the humanity out of HR. Simon is clear that AI should automate administration and free up time, rather than replace human judgment or empathy. Used well, it removes friction from onboarding, compliance, and everyday queries, giving HR the space to focus on culture, leadership, and long-term talent development. Used poorly, it risks adding noise without value. The difference, he argues, comes down to data. Without clean, consolidated data, AI simply cannot deliver meaningful insight, no matter how advanced the technology appears. The conversation also looks inward at Cezanne HR's own growth journey. Simon describes rapid expansion as chaos with better branding, and explains why maintaining culture, trust, and clarity becomes harder, yet more important, as teams scale. From onboarding new employees to ensuring a consistent customer experience, the same principles apply internally as they do for customers using HR technology. We also touch on trust, transparency, and the growing focus on areas like pay transparency, data responsibility, and employee confidence in how their information is handled. As expectations continue to rise, HR's credibility increasingly rests on accuracy, fairness, and the ability to turn insight into action. 
As HR steps closer to the center of business strategy, what mindset shift is needed to move from reacting to change toward actively shaping it, and how prepared is your organization to make that leap? Useful Links Connect with Simon Noble Learn more about Cezanne HR Tech Talks Daily is Sponsored by Denodo

What does it really mean when AI moves from answering questions to making decisions that affect real people, real money, and real outcomes? In this episode of Tech Talks Daily, I'm joined by Joe Kim, CEO of Druid AI, for a grounded conversation about why agentic AI is becoming the focus for enterprises that have moved beyond experimentation. After years of hype around generative tools, many organizations are now facing a tougher question. Can AI be trusted to take action inside core business processes, and can it do so with the accuracy, security, and accountability that enterprises expect? Joe brings a rare perspective shaped by decades leading large-scale enterprise software companies, including his time as CEO of Sumo Logic. He explains why Druid AI deliberately avoids positioning itself as a generative AI company, and instead focuses on systems that can make decisions, trigger workflows, and complete tasks inside regulated, high-stakes environments. We unpack why accuracy thresholds matter when AI touches billing, healthcare, admissions, or compliance, and why security and governance are no longer secondary concerns once AI is allowed to act. We also talk about scale and proof. Druid AI now supports over 120 million conversations every month, a figure that keeps climbing as enterprises move agentic systems into production. Joe shares how those conversations translate into measurable business outcomes, from operational efficiency to revenue growth, and why many AI initiatives fail to reach this stage. His "5 percent club" philosophy cuts through the noise, focusing on the small number of use cases that actually deliver return while most others stall in pilots. The conversation also explores why higher education has become a surprising pressure point for AI adoption, how outdated systems contribute to student churn, and how conversational agents can remove friction at moments that decide whether someone enrolls, stays, or leaves. 
We close by looking ahead at Druid AI's next chapter, including new platform capabilities designed to make building and deploying agents faster without sacrificing control. As more enterprises demand results instead of promises, are we ready to judge AI by the decisions it makes and the outcomes it delivers, and what should that accountability look like in your organization? I'd love to hear your thoughts. Where do you see agentic AI delivering real value today, and where do you think the risks still outweigh the rewards? Useful Links Connect with Joe Kim, CEO of Druid AI. Druid AI Website Tech Talks Daily is Sponsored by Denodo

The world is building data centers, identity rails, and AI policy stacks at a speed that makes 2026 feel closer than it is. In this conversation, Rajesh Natarajan, Global Chief Technology Officer at Gorilla Technology Group, explains what it takes to engineer platforms that remain reliable, secure, and sovereign-ready for decades, especially when infrastructure must operate outside the safety net of constant cloud connectivity. Raj talks about quantum-safe networking as a current risk, not a future headline. Adversaries are capturing encrypted traffic today, betting on decrypting it later, and retrofitting quantum-safe architecture into national platforms mid-lifecycle is an expensive mistake waiting to happen. He also highlights the regional nature of AI infrastructure: Southeast Asia prioritizing sovereignty, speed, and efficiency; Europe leaning on regulation and telemetry; and the U.S. betting on raw cluster scale and throughput. Sustainability at Gorilla isn't a marketing headline; it's an engineering requirement. If a system can't prove its environmental impact using telemetry like workload-level PUE, it isn't labeled sustainable internally. Gorilla applies the same rigor to IoT insight per unit of energy, device lifecycles, and edge-level intelligence placement, avoiding data centralization unless there is operational justification. This episode offers marketers, founders, and technology leaders a rare chance to understand what national-scale resilience looks like when platform alignment, not technology, breaks first. Keeping decisions reversible, explicit, and measurable is the foundation of how Gorilla is designing systems that can evolve without forcing rushed compromises when uncertainty becomes reality. Useful links: Connect with Dr Rajesh Natarajan Gorilla website Tech Talks Daily is Sponsored by Denodo
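For listeners unfamiliar with the metric Raj mentions, PUE (Power Usage Effectiveness) is the ratio of total facility energy to the energy consumed by IT equipment alone; the workload-level variant applies the same ratio over a single workload's measurement window. A minimal sketch with made-up figures:

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy divided by
    IT equipment energy. 1.0 is the theoretical ideal; lower is better."""
    return total_facility_kwh / it_equipment_kwh

# Hypothetical telemetry for one workload's measurement window:
# 1260 kWh drawn by the facility, of which 1000 kWh reached IT gear.
workload_pue = pue(total_facility_kwh=1260.0, it_equipment_kwh=1000.0)  # 1.26
```

The point of measuring this per workload, rather than per facility, is that it lets an engineering team attach an environmental cost to a specific system before labeling it sustainable.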

What makes live events feel personal in an age of algorithms making the calls? That's the tension marketers are living in right now. Ben Kruger, Chief Marketing Officer at Event Tickets Center, sits at the center of this shift. He has spent 20 years shaping server-side systems and performance marketing strategies, including a decade of persistence chasing a role at Google before landing a position in New York just as eCommerce demand went into overdrive during the pandemic. Now, at ETC, he runs marketing for more than 130,000 live events simultaneously. It's a scale that forces automation to step in. The industry moves in real time, resellers update prices by the hour, artists trend globally overnight, weather can shift demand before a stadium gate opens. Ben credits Google's AI tools and internal models as a competitive advantage, but he also talks openly about the risks. The early excitement of automation gave way to skepticism after seeing unaligned promises from new platforms and unpredictable campaign behavior in tools that remove control from brands. There's a well-rounded argument to explore here. On one side, AI enables a small team to do the work of thousands, writing content at a volume no human team could deliver alone. On the other, removing risk from campaigns, or removing channel-level choices from advertisers, can reduce trust and increase low-quality creative output. Advantage+ tools that make placement decisions automatically, without brand input, might scale reach, but can reduce clarity of intent and control of outcomes. Some CMOs see that as smart acceleration, others see it as an overcorrection that creates opacity and dependency on platforms optimizing for their own incentives. And somewhere in the middle is the opportunity. 
ETC's approach shows a future where repetition in rapid testing generates sharper insight, where lean teams move faster, where humans stay in the loop to validate outcomes, and where creativity stays grounded in audience understanding, economics, and transparency. Marketers listening to Ben will hear someone who wants experimentation, control, clarity, and long-term audience trust to exist side by side. Useful links: Connect with Ben Kruger on LinkedIn Event Tickets Center website Tech Talks Daily is Sponsored by Denodo

What does it really take to build software that can grow from a single line of code to millions of users a day without losing its soul along the way? In this episode of Tech Talks Daily, I'm joined by Alex Gusev, CTO at Uploadcare, for a wide-ranging conversation about scale, simplicity, and why leadership in technology starts with people long before it gets anywhere near frameworks or tooling. Alex has spent two decades building server-side systems, often inside small teams, and has seen firsthand how early decisions echo through a company's future, for better and for worse. We talk openly about the realities of early-stage engineering, including why shipping imperfect code is often the only way to survive, how technical debt should be taken on deliberately rather than by accident, and why knowing when to slow down and clean things up is one of the hardest leadership calls to make. Alex shares his belief that simplicity is the strongest ally in high-load environments, and how over-engineering, often inspired by copying the playbooks of much larger companies, creates fragility instead of strength. Our conversation also digs into his continued faith in Ruby on Rails, a framework that divides opinion but still plays a central role in many successful products. Alex reframes the debate around speed, focusing less on raw performance metrics and more on how quickly teams can build, adapt, and maintain systems over time. It's a practical view shaped by real-world trade-offs rather than theory. Beyond code, we explore why Alex puts people ahead of technology and process, and how creating psychological safety inside teams leads to better decisions, lower churn, and smarter use of limited resources. He also reflects on personal experiences that reshaped his approach to leadership, the growing tech scene in Kyrgyzstan, and why he finds as much inspiration in Dostoevsky as he does in engineering blogs. 
If you've ever questioned whether modern engineering culture has overcomplicated itself, or wondered how to balance ambition with sustainability as your product grows, this episode offers plenty to think about. Where do you think your own team is adding complexity without realizing it, and what might change if you started with people first? Useful Links Connect with Alex Gusev Learn more about Uploadcare Tech Talks Daily is sponsored by Denodo

If you have ever opened Candy Crush over the holidays without thinking about the design decisions behind every swipe, this episode offers a rare look behind the curtain. I sit down with Abigail Rindo, Head of Creative at King, to unpack how accessibility has evolved from a well-meaning afterthought into a core creative and commercial practice inside one of the world's most recognizable gaming studios. With more than 200 million people playing King's games each month, Abigail explains why inclusive design cannot be treated as charity or compliance, but as a responsibility that directly shapes product quality, player loyalty, and long-term growth. One of the moments that really stayed with me in this conversation is the data. More than a quarter of King's global player base self-identifies as having an accessibility need. Even more players benefit from accessibility features without ever labeling themselves that way. Abigail shares how adjustments like customizable audio for tinnitus, reduced flashing to limit eye strain, and subtle interaction changes can quietly transform everyday play for millions of people. These are not edge cases. They are everyday realities for a massive audience that lives with these games as part of their daily routine. We also dig into how inclusive design sparks better creativity rather than limiting it. Abigail walks me through updates to Candy Crush Soda Saga, including the "hold and drag" mechanic that allows players to preview a move before committing. Inspired by the logic of holding a chess piece before placing it, this feature emerged directly from player research around visibility, dexterity, and comfort. It is a reminder that creative constraints, when grounded in real human needs, often lead to smarter and more elegant solutions. Beyond mechanics and metrics, this conversation goes deeper into storytelling, empathy, and team culture. 
Abigail explains why inclusive design only works when inclusive teams are involved from the start, and how global storytelling choices help King design worlds that resonate everywhere from Stockholm to Antarctica. We also talk about live service realities, blending quantitative data about what players do with qualitative insight into why they do it, especially when a game has been evolving for more than a decade.

What does it actually mean to prove who we are online in 2025, and why does it still feel so fragile? In this episode of Tech Talks Daily, I sit down with Alex Laurie from Ping Identity to talk about why digital identity has reached a real moment of tension in the UK. As more of our lives move online, from banking and healthcare to social platforms and government services, the gap between how identity should work and how it actually works keeps widening. Alex shares why the UK now feels out of step with other regions when it comes to online identity schemes, and how heavy reliance on centralized models is slowing adoption while weakening public trust. We spend time unpacking the practical consequences of today's verification systems. Age checks are regularly bypassed, fraud continues to grow, and users are often asked to hand over far more personal data than feels reasonable just to access everyday services. At the same time, public pressure around online safety is rising fast. That creates an uncomfortable push and pull between tighter controls and the expectation of fast, low-friction access. Alex makes the case that this tension exists because the underlying approach is flawed, and that proving something simple, like age, should never require revealing an entire digital identity. From there, the conversation turns to decentralized identity and why it is gaining momentum globally. Instead of placing sensitive data into large centralized databases, decentralized models allow individuals to hold and present verified credentials on their own terms. For me, this reframes digital identity as a right rather than a feature, and opens the door to systems that feel more privacy-aware, inclusive, and resilient. We also explore how agentic AI could play a role here, helping people manage, present, and protect their credentials intelligently without adding complexity or new risks. 
With fresh consumer research from Ping Identity informing the discussion, this episode looks closely at where trust, privacy, and identity are heading next, and why the choices made now will shape how we prove who we are online for years to come. Are we finally ready to rethink digital identity, and if so, what does that mean for all of us?
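To make the selective-disclosure idea from this episode concrete: real decentralized identity schemes rely on issuer signatures and zero-knowledge proofs, but the shape of the interaction can be sketched with a plain hash commitment. Everything below, including the field names and the `present_claim` helper, is an illustrative assumption, not Ping Identity's implementation.

```python
import hashlib
import json

def commitment(credential: dict) -> str:
    """Hash of the full credential, standing in for an issuer signature
    in this toy sketch (a real scheme would use cryptographic signing)."""
    payload = json.dumps(credential, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def present_claim(credential: dict, claim: str) -> dict:
    """Disclose a single verified claim rather than the whole identity."""
    return {
        "claim": claim,
        "value": credential[claim],
        "credential_hash": commitment(credential),
    }

# The holder keeps the full credential; the verifier sees only the age claim.
credential = {"name": "Alex Example", "over_18": True, "nationality": "UK"}
proof = present_claim(credential, "over_18")
```

The verifier learns that the holder is over 18 and receives a commitment it could check against the issuer's record; the name and nationality never leave the holder's wallet, which is the inversion of today's hand-over-everything age checks.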

What does it really mean to keep humans at the center of AI when agentic systems are accelerating faster than most organizations can govern them? At AWS re:Invent, I sat down with Michael Bachman from Boomi for a wide-ranging conversation that cut through the hype and focused on the harder questions many leaders are quietly asking. Michael leads technical and market research at Boomi, spending his time looking five to ten years ahead and translating future signals into decisions companies need to make today. That long view shaped a thoughtful discussion on human-centric AI, trust versus autonomy, and why governance can no longer be treated as an afterthought. As businesses rush toward agentic AI, swarms of autonomous systems, and large-scale automation, Michael shared why this moment makes him both optimistic and cautious. He explained why security, legal, and governance teams must be involved early, not retrofitted later, and why observability and sovereignty will become non-negotiable as agents move from experimentation into production. With tens of thousands of agents already deployed through Boomi, the stakes are rising quickly, and organizations that ignore guardrails today may struggle to regain control tomorrow. We also explored one of the biggest paradoxes of the AI era. The more capable these systems become, the more important human judgment and critical thinking are. Michael unpacked what it means to stay in the loop or on the loop, how trust in agentic systems should scale gradually, and why replacing human workers outright is often a short-term mindset that creates long-term risk. Instead, he argued that the real opportunity lies in amplifying human capability, enabling smaller teams to achieve outcomes that were previously out of reach. 
Looking further ahead, the conversation turned to the limits of large language models, the likelihood of an AI research reset, and why future breakthroughs may come from hybrid approaches that combine probabilistic models, symbolic reasoning, and new hardware architectures. Michael also reflected on how AI is changing how we search, learn, and think, and why fact-checking, creativity, and cognitive discipline matter more than ever as AI assistants become embedded in daily life. This episode offers a grounded, future-facing perspective on where AI is heading, why integration platforms are becoming connective tissue for modern systems, and how leaders can approach the next few years with both ambition and responsibility. Useful Links Learn More About Boomi Connect with Michael Bachman Algorithms to Live By: The Computer Science of Human Decisions Tech Talks Daily is sponsored by Denodo

What does responsible AI really look like when it moves beyond policy papers and starts shaping who gets to build, create, and lead in the next phase of the digital economy? In this conversation recorded during AWS re:Invent, I'm joined by Diya Wynn, Principal for Responsible AI and Global AI Public Policy at Amazon Web Services. With more than 25 years of experience spanning the internet, e-commerce, mobile, cloud, and artificial intelligence, Diya brings a grounded and deeply human perspective to a topic that is often reduced to technical debates or regulatory headlines. Our discussion centers on trust as the real foundation for AI adoption. Diya explains why responsible AI is not about slowing innovation, but about making sure innovation reaches more people in meaningful ways. We talk about how standards and legislation can shape better outcomes when they are informed by real-world capabilities, and why education and skills development will matter just as much as model performance in the years ahead. We also explore how generative AI is changing access for underrepresented founders and creators. Drawing on examples from AWS programs, including work with accelerators, community organizations, and educational partners, Diya shares how tools like Amazon Bedrock and Amazon Q are lowering technical barriers so ideas can move faster from concept to execution. The conversation touches on why access without trust falls short, and why transparency, fairness, and diverse perspectives have to be part of how AI systems are designed and deployed. There's an honest look at the tension many leaders feel right now. AI promises efficiency and scale, but it also raises valid concerns around bias, accountability, and long-term impact. Diya doesn't shy away from those concerns. Instead, she explains how responsible AI practices inside AWS aim to address them through testing, documentation, and people-centered design, while still giving organizations the confidence to move forward. 
This episode is as much about the future of work and opportunity as it is about technology. It asks who gets to participate, who gets to benefit, and how today's decisions will shape tomorrow's innovation economy. As generative AI becomes part of everyday business life, how do we make sure responsibility, access, and trust grow alongside it, and what role do we each play in shaping that future? Useful Links Connect With Diya Wynn AWS Responsible AI Tech Talks Daily is sponsored by Denodo

What does it really mean to support developers in a world where the tools are getting smarter, the expectations are higher, and the human side of technology is easier to forget? In this episode of Tech Talks Daily, I sit down with Frédéric Harper, Senior Developer Relations Manager at TinyMCE, for a thoughtful conversation about what it takes to serve developer communities with credibility, empathy, and long-term intent. With more than twenty years in the tech industry, Fred's career spans hands-on web development, open source advocacy, and senior DevRel roles at companies including Microsoft, Mozilla, Fitbit, and npm. That journey gives him a rare perspective on how developer needs have evolved, and where companies still get it wrong. We explore how starting out as a full-time developer shaped Fred's approach to advocacy, grounding his work in real-world frustration rather than abstract messaging. He reflects on earning trust during challenging periods, including advocating for open source during an era when some communities viewed large tech companies with deep skepticism. Along the way, Fred shares how studying Buddhist philosophy has influenced how he shows up for developers today, helping him keep ego in check and focus on service rather than status. The conversation also lifts the curtain on rich text editing, a capability most users take for granted but one that hides deep technical complexity. Fred explains why building a modern editing experience involves far more than formatting text, touching on collaboration, accessibility, security, and the growing expectations around AI-assisted workflows. It is a reminder that some of the most familiar parts of the web are also among the hardest to build well. We then turn to developer relations itself, a role that is often misunderstood or measured through the wrong lens. 
Fred shares why DevRel should never be treated as a short-term sales function, how trust and community take time, and why authenticity matters more than volume. From open source responsibility to personal branding for developers, including lessons from his book published with Apress, Fred offers grounded advice on visibility, communication, and staying human in an increasingly automated industry. As the episode closes, we reflect on burnout, boundaries, and inclusion, and why healthier communities lead to better products. For anyone building developer tools, managing technical communities, or trying to grow a career without losing themselves in the process, this conversation leaves a simple question hanging in the air: how do we build technology that supports people without forgetting the people behind the code? Useful Links Connect with Frédéric Harper Learn More About TinyMCE Tech Talks Daily is sponsored by Denodo

What does it really take to build a fintech company that quietly fixes one of the most frustrating problems SMEs face every day? In this episode of Tech Talks Daily, I'm joined by Pierre-Antoine Dusoulier, the Founder and CEO of iBanFirst, for a candid conversation about entrepreneurship, timing, and why cross-border payments have remained broken for so long. Pierre-Antoine's story begins in London, where his early career as an FX trader felt like a compromise at the time, yet quietly gave him a front-row seat to inefficiencies most people accepted as normal. That experience would later shape two companies and a very clear point of view on how money should move across borders. Pierre-Antoine walks through his first venture, Combeast.com, one of France's earliest FX brokerages for retail investors, and what he learned from selling it to Saxo Bank and staying on to run Western European operations. That chapter matters, because it exposed the gap between how sophisticated FX markets really are and how poorly SMEs are served when FX and payments are bundled together inside traditional banks. Out of that frustration, IbanFirst was born in 2016 with a simple idea: treat cross-border payments as a specialist discipline, not a side feature. Today, IbanFirst serves more than 10,000 clients across Europe and processes over €2 billion in transactions every month. We dig into why growth has continued while many fintechs have slowed, from a product designed to be used daily, to proactive sales, to a new generation of CFOs and CEOs who expect the same clarity and speed at work that they get from consumer fintech tools. Pierre-Antoine explains how real-time FX rates, payment tracking using SWIFT GPI, and multi-entity account management change the day-to-day reality for SMEs trading internationally. We also talk about Brexit, and how being rooted in continental Europe created an unexpected opening. 
Pierre-Antoine shares why expanding into the UK, including the acquisition of Cornhill, made sense, and why London's payments ecosystem still stands apart in scale and depth. Along the way, he is refreshingly open about the heavy investment required in compliance, trust, and regulation, and why nearly a third of IbanFirst's team focuses on operations and oversight. Looking ahead, Pierre-Antoine lays out a bold vision for the SME payments market, predicting a future where specialists replace banks in much the same way fintech reshaped consumer money transfers. As cross-border trade grows and currency volatility becomes a daily concern, his perspective raises an interesting question for anyone running an international business today: if specialists already exist, why keep relying on systems that were never designed for how SMEs actually operate? Useful Links: Connect with Pierre-Antoine Dusoulier Learn more about iBanFirst, Tech Talks Daily is sponsored by Denodo

What happens when artificial intelligence moves faster than our ability to understand, verify, and trust it? In this episode of Tech Talks Daily, I sit down with Alexander Feick from eSentire, a cybersecurity veteran who has spent more than a decade working at the intersection of complex systems, risk, and emerging technology. Alex leads eSentire Labs, where his team explores how new technologies can be secured before they quietly become load-bearing parts of modern business infrastructure. Our conversation centers on a timely and uncomfortable reality. AI is being embedded into workflows, products, and decision-making systems at a pace most organizations are not prepared for. Alex explains why many AI failures are not caused by malicious models or dramatic breaches, but by broken ownership, invisible dependencies, and a lack of ongoing verification. These are not technical glitches. They are organizational blind spots that quietly compound risk over time. We also explore the ideas behind Alex's recently published book on trust and AI, which he made freely available due to the speed at which real-world AI failures were already overtaking theory. From prompt injection and model drift to the dangers of treating non-deterministic systems as if they were predictable software, Alex shares why generative AI requires a fundamentally different security mindset. He draws a clear distinction between chatbot AI and embedded AI, and explains the moment where trust quietly shifts away from humans and into systems that cannot take accountability. The discussion goes deeper into what trust actually means in an AI-driven organization. Alex argues that trust must be earned, measured, and monitored continuously, not assumed after a successful pilot. Verification becomes the real work, not generation, and leaders who fail to recognize that shift risk scaling errors faster than they can contain them. 
We also talk about why he turned his book into an AI advisor, what that experiment revealed about the limits of models, and why human responsibility cannot be automated away. This is a grounded, practical conversation for leaders, technologists, and anyone deploying AI inside real organizations. If AI is becoming part of how decisions get made where you work, how confident are you that someone truly owns the outcome? Useful Links Connect with Alexander Feick Learn more about eSentire Tech Talks Daily is sponsored by Denodo

How much value do your developers actually get to deliver in a typical week, and how much of their time is quietly lost to meetings, context hunting, and process drag? I'm joined by Phil Heijkoop, Global Practice Head of Developer Experience at Valiantys, for a conversation that cuts through the hype surrounding AI and asks a harder question about why so many engineering teams still struggle to see meaningful returns. Phil argues that most organizations are only unlocking a small fraction of a developer's true contribution, not because of a lack of talent, but because process drag slowly squeezes out deep, focused work. AI, he explains, does not fix this by default. Without the right foundations in place, it simply accelerates the wrong work at scale. We explore the long shadow cast by the "move fast and break things" mindset and why that philosophy becomes risky inside regulated, enterprise environments where resilience and trust matter more than speed alone. Phil shares what he sees when organizations chase shiny new tooling while ignoring technical debt, unclear standards, and fragile workflows. From protecting uninterrupted time for deep work to automating manual friction points and setting shared guardrails, he outlines how teams can realistically unlock three to five times more output before AI even enters the picture. Only then, he says, does AI act as a multiplier rather than a source of chaos. The conversation also digs into developer experience as a business lever, not a perk, and why leadership clarity, cultural trust, and consistent standards matter as much as tooling choices. We discuss the growing risks in the software supply chain, the sustainability of open source dependencies, and what recent high-profile retirements signal for enterprise teams that depend on them. 
If AI is accelerating your organization in the wrong direction, what foundational changes would you need to make today to ensure it amplifies value instead of friction, and how honest are you willing to be about what is really slowing your teams down? Useful Links Connect with Phil on LinkedIn Learn more about Phil's work Valiantys Website Tech Talks Daily is sponsored by Denodo

What happens when the future of money stops being about speculation and starts being about people, ownership, and agency? In this episode of Tech Talks Daily, I'm joined by Dr. Friederike Ernst, co-founder of Gnosis, to unpack a conversation that goes far beyond crypto price cycles or technical hype. This is a thoughtful discussion about where blockchain is heading and, just as importantly, where it could go wrong if we are not paying attention. Friederike has spent more than a decade building foundational infrastructure for the Ethereum ecosystem, from smart wallets to decentralized exchanges and blockchain networks that quietly power large parts of Web3. But as she explains, the industry is now standing at a fork in the road. One path leads to blockchain becoming a silent backend upgrade for banks and incumbents, improving efficiency while keeping power centralized. The other path is far more ambitious, using blockchain to return ownership, control, and financial agency to everyday people. We talk about why financial infrastructure, despite working reasonably well for many of us in Europe, remains deeply inefficient, expensive, and exclusionary at a global level. A major theme of this episode is usability. Friederike is clear that technology only matters if it improves real lives. She explains why early blockchain products asked too much of users and how that is now changing, with experiences that feel as simple as using a neobank or debit card while preserving true ownership under the hood. The goal is not to make everyone a crypto expert, but to make financial tools that work seamlessly while remaining genuinely user-owned. We also explore the darker possibilities. Like any powerful technology, blockchain can be used to empower or to control. Friederike does not shy away from the risks of surveillance, social scoring, and misuse, and she argues that the real battle ahead is cultural, not technical. 
Values like privacy, free expression, and personal agency need to be defended openly, or the technology will be shaped without public consent. As we look toward 2026, this conversation offers a refreshing reminder that the future of money is still being written. The question is whether it will be owned by communities or quietly absorbed by the same institutions we already rely on. After listening to this episode, where do you think that future should land, and what choices are you willing to make to influence it? Useful Links Connect With Dr. Friederike Ernst Learn More about Gnosis Tech Talks Daily is sponsored by Denodo