The Tech Blog Writer Podcast


Fed up with tech hype? Looking for a tech podcast where you can learn from tech leaders and startup stories about how technology is transforming businesses and reshaping industries? In this daily tech podcast, Neil interviews tech leaders, CEOs, entrepreneurs, futurists, technologists, and thought leaders.

Neil C. Hughes


    • Latest episode: Feb 28, 2026
    • New episodes: daily
    • Average duration: 27m
    • Episodes: 3,478

    Rated 5 stars from 156 ratings. Listeners of The Tech Blog Writer Podcast who love the show mention: neil asks, bram, neil hughes, neil does a great, neil's podcast, charismatic host, insightful and engaging, tech topics, love tuning, great tech, engaging podcast, tech industry, emerging, tech podcast, startups, founder, best tech, predictions, technology, innovative.


    Ivy Insights

    The Tech Blog Writer Podcast is a must-listen for anyone interested in the intersection of technology and various industries. Hosted by Neil Hughes, this podcast features interviews with a wide range of guests, including visionary entrepreneurs and industry experts. Neil has a remarkable talent for breaking down complex topics into easily understandable discussions, making it accessible to listeners from all backgrounds. One of the best aspects of this podcast is the diversity of guests, as they come from different industries and share their cutting-edge technology solutions. It provides a great source of inspiration and knowledge for staying up to date with the latest advancements in tech.

    The worst aspect of The Tech Blog Writer Podcast is that sometimes the discussions can feel a bit rushed due to the time constraints of each episode. With so many interesting guests and topics to cover, it would be great if there was more time for in-depth conversations. Additionally, while Neil does an excellent job at selecting diverse guests, occasionally it would be beneficial to have more representation from underrepresented communities in tech.

    In conclusion, The Tech Blog Writer Podcast is an excellent resource for those looking to stay informed about the latest tech advancements while learning from visionary entrepreneurs across various industries. Neil's ability to break down complex topics and his engaging interviewing style make this podcast a valuable source of inspiration and knowledge. Despite some minor flaws, it remains a must-listen for anyone interested in staying up to date with cutting-edge technology solutions and developments.




    Latest episodes from The Tech Blog Writer Podcast

    From Data Overload To Decision Advantage: Inside Anticipatory Intelligence with Ansel Stein

    Feb 28, 2026 · 23:36


    In this episode, I'm joined by Ansel Stein, Vice President of Operations at Crisis24, and the leader behind AiiA powered by Palantir, an intelligence platform built to help executives cut through noise and make better calls in uncertain conditions.  Ansel's background spans more than two decades across analysis, diplomacy, and high-stakes advisory work, including supporting U.S. national security priorities. Today, he's applying that same discipline to the private sector, helping organizations turn overwhelming streams of information into judgment leaders can actually use. We talk about what "intelligence" really means in this context, and why it's different from collecting more data or running another monitoring program. Ansel breaks down the thinking behind the AiiA President's Brief, inspired by the kind of concise, high-rigor briefings senior government leaders rely on, and explains how that model translates into business decision-making without losing context or nuance. If you have ever felt buried by alerts, headlines, and competing narratives, this conversation puts language around that problem and offers a practical alternative. We also address the concerns many leaders have about AI, privacy, and the fear of being tracked. Ansel is clear on boundaries, what data AiiA uses, why open-source intelligence matters, and how governance needs to be designed upfront if trust is going to hold. From structured analytic techniques and scenario planning to the idea that risk and opportunity often sit side by side, this episode is a look at how organizations can move from reacting to anticipating, without handing accountability over to a machine. If your team is trying to shorten the time from signal to decision while still protecting trust, what would it look like to treat intelligence as a leadership habit rather than a crisis tool, and are you ready to build that muscle before the next disruption hits?

    From FBI Gag Order To Privacy-First Telco: The Nicholas Merrill Story

    Feb 28, 2026 · 29:08


    How did a routine request from the FBI turn into a decade-long legal battle that helped reshape modern privacy law and ultimately inspire a new kind of mobile network? In this episode, I sit down with Nicholas Merrill, founder of Phreeli and one of the most influential yet often under-recognized figures in the fight for digital rights. Long before privacy became a mainstream talking point, Nick was running an internet service provider that powered major global brands. That journey took a dramatic turn in 2004 when he became the first person to challenge the constitutionality of a National Security Letter under the Patriot Act, living under a gag order for years while the case unfolded. What followed was a deeply personal and professional transformation that led him to question whether litigation and legislation alone could ever keep pace with the scale of modern surveillance. We explore how that experience pushed him toward a third path, building privacy directly into technology itself. From launching the Calyx Institute and developing privacy-focused Android software to raising a multi-million-dollar endowment for digital rights, Nick has spent decades turning principles into practical tools. Now, with Phreeli, he is taking that philosophy into one of the most data-hungry industries of all, mobile telecoms, reimagining what a carrier looks like when it is designed to know as little about its customers as possible. Our conversation also tackles the shifting balance of power between governments and corporations in the data economy, and why the distinction between the two is becoming increasingly blurred. Nick explains the trade-offs involved in building a privacy-first operator in a heavily regulated market, the cryptographic thinking behind Phreeli's double-blind architecture, and why he believes consent and personal agency should sit at the center of the digital experience. 
This is a story about resistance, resilience, and the belief that technology can be used to restore choice rather than quietly remove it. It is also a timely reminder that privacy is not an abstract concept for activists and engineers, but something as familiar as closing the curtains in your own home. So after three decades on the front lines of this debate, what does Nick think most of us still misunderstand about our digital rights, and what single shift in mindset could change how we all approach privacy in the connected world?

    AI Fraud vs AI Scams, Alloy CEO Tommy Nicholas Explains The Difference

    Feb 27, 2026 · 54:26


    Have you noticed how every week brings a new headline about AI-driven fraud, yet it still feels hard to tell what is real risk and what is noise? In this Tech Talks Daily episode, I'm joined by Tommy Nicholas, CEO of Alloy, for a candid conversation that cuts through the fear-driven commentary and gets into what fraud teams are actually dealing with right now. We start with a simple but important distinction that gets blurred all the time. Tommy separates classic "fraud," where institutions take the hit, from "scams," where individuals are manipulated into handing over money or access. That framing changes how you think about solutions, accountability, and where AI is making things worse. Tommy also shares why he believes fraud losses are often massively underreported. It is not because people are trying to hide the truth; it is because organizations rarely have a single, clean view of losses across every product line and channel. Add messy labeling and split ownership across teams, and reporting becomes a best-effort estimate rather than an objective number. That reality matters if you're building board-level narratives, budgets, or risk models on top of survey data. From there, we talk about what organizations are getting right. Tommy argues there is no magical "undetectable" attack that forces teams to give up, but there is a very real breakdown happening in old fallbacks, especially human review of images and video. The bigger shift he sees is banks and fintechs finally pushing for consistent tooling across every channel (web, mobile, branch, call center, support tickets) because fraud does not respect internal org charts. We then get into why Alloy's AI Assistant is an interesting signal for where agentic AI is heading in regulated work. Tommy explains that agents are only useful when they have rigorous context, strong sources of truth, and clear workflows. Otherwise they guess, and "looks good" is not the same as "safe to run in production."
He also lays out where agents can genuinely outperform humans, like scaling investigations during sudden surges, while keeping processes auditable and repeatable. We close by looking ahead at agentic commerce, and why Tommy thinks the breakthrough will arrive through weird, emergent behavior rather than a neat protocol rollout. When you listen back, do you think the next big leap in fraud prevention will come from better models, better data, or better operational discipline? And what would you bet on if your own customers were the ones on the line?

    How Lenovo Is Preparing Classrooms For The AI Era

    Feb 26, 2026 · 30:35


    How do you prepare an entire generation for a world where AI is already shaping how we work, create, and solve problems? In this episode of Tech Talks Daily, I'm joined by Dr. Tara Nattrass, Chief Innovation Strategist for Education at Lenovo, for a grounded and thoughtful conversation about what responsible AI integration really looks like in K–12 classrooms. Tara brings more than 25 years of experience inside school districts, including serving as Assistant Superintendent for Teaching and Learning in Arlington Public Schools, so this isn't a theory-led discussion. It's informed by lived experience. We explore how the conversation has shifted over the past 18 months. AI has been present in schools for years through adaptive software and analytics, but the arrival of generative and now agentic AI tools has accelerated everything. As Tara explains, the debate is no longer about whether AI should be in schools. It's about how to approach it responsibly, strategically, and in ways that genuinely improve learning outcomes. A big theme in our conversation is AI literacy. Tara breaks this down in practical terms, moving beyond technical understanding to include critical thinking, creativity, collaboration, and the ability to evaluate risk and bias. She shares real examples of students designing AI tools to solve problems in their communities, shifting the focus from passive consumption to active creation. We also talk about infrastructure readiness. Many school systems have bold ambitions around AI, but there is often a gap between vision and technical capability. AI-ready devices, intelligent infrastructure, cybersecurity, and data governance all play a role in making innovation sustainable rather than experimental.  Lenovo's approach, as Tara describes it, centers on building education ecosystems rather than simply refreshing hardware. There is also a careful balance to strike between innovation, privacy, and inclusion. 
From hybrid AI models to questions around where data is stored and who can access it, schools are navigating complex decisions. Tara shares how Lenovo partners with districts, policymakers, and organizations such as ISTE and ASCD to align infrastructure, professional learning, and governance frameworks. Looking ahead, we discuss what will separate school systems that truly benefit from AI from those that simply layer new tools onto old teaching models. Vision, educator upskilling, cybersecurity, and rethinking assessment all feature prominently in her answer. If you are working in education, technology leadership, or policy, this conversation offers a practical view of how AI-ready classrooms are being built today and what still needs to happen next. As always, I'd love to hear your thoughts. How is AI reshaping learning in your organization, and are you ready for what comes next?

    ServiceNow, Dynatrace And The Future Of End-To-End IT Autonomy

    Feb 25, 2026 · 30:17


    What does autonomous IT really look like when you move beyond the slideware and start wiring systems together in the real world? At Dynatrace Perform in Las Vegas, I sat down with Pablo Stern, EVP and GM of Technology Workflow Products at ServiceNow, to unpack exactly that. Pablo leads the teams focused on CIOs and CISOs, building the workflows and security products that sit at the heart of modern IT organizations. From service desks and command centers to risk and asset management, his remit is clear: enable AI to work for people, not the other way around. We began with ServiceNow's deepening multi-year partnership with Dynatrace. While the announcement made headlines, Pablo was quick to point out that the real story starts with customers. This collaboration is rooted in a shared goal of helping joint customers reduce outages, improve SLA adherence, and shrink mean time to resolution. The vision of autonomous IT operations is not about hype. It is about connecting observability data with deterministic workflows so that insight can evolve into coordinated, system-level action. Pablo walked me through the maturity curve he sees emerging. First came AI-powered insight, summarizing data and surfacing signals from noise. Then came task automation, drafting knowledge articles, paging teams, triggering predefined playbooks. The next step, and the one that excites him most, is orchestrated autonomy. That means stitching together skills, agents, and workflows into systems that can drive end-to-end outcomes. It is a journey measured in years, not months, and it depends as much on digitizing process and building trust as it does on technology. We also explored root cause analysis, still one of the biggest time drains in IT. By combining Dynatrace's AI-driven observability with ServiceNow's workflow engine, enterprises can automate forensic steps, correlate events faster, and shorten the time spent on major incident bridges where teams debate ownership. 
Even incremental improvements in accuracy can save hours when incidents strike. Trust, of course, remains central. Pablo was candid that full self-healing systems are still some distance away. What we will see first is relief automation, controlled failovers, scripted actions suggested by machines but approved by humans. Over time, as confidence grows and processes become fully digitized, the balance will shift. Beyond the technology, a consistent theme ran through our conversation. Outcomes have not changed. Enterprises still want higher availability, faster resolution, better employee experiences. What is changing is the how. ServiceNow is reimagining its platform to deliver those outcomes at a much higher standard, not through incremental tweaks, but through rethinking workflows for an AI-first world. From design partnerships with banks building pre-flight change checks, to internal teams acting as the toughest customers, this was a grounded, practical conversation about where autonomous operations are headed and what it will take to get there. If you are a CIO, CISO, or IT leader wondering how to move from theory to execution, this episode offers a clear-eyed look behind the curtain.      

    Scrut Automation And The Security Blind Spot Facing The 99%

    Feb 24, 2026 · 24:30


    What happens when nearly half of organizations admit they have no AI-specific security controls, yet AI-driven data leaks are accelerating at the same time? In this episode of Tech Talks Daily, I spoke with Aayush Choudhry, CEO and co-founder of Scrut Automation, about what he sees as a blind spot in the cybersecurity industry. While much of the market continues to design tools for Fortune 500 enterprises with deep pockets and large security teams, Aayush argues that the real existential risk sits with the 99 percent of businesses that cannot survive a serious breach. Aayush brings a founder's perspective shaped by firsthand pain. Before launching Scrut, he and his co-founder experienced the grind of managing compliance and security as a cloud-native startup trying to sell into enterprises. They were outsiders to GRC and security at the time, forced to learn from first principles. That experience became the foundation for Scrut Automation, a modern GRC platform built specifically for small and mid-sized companies that cannot afford six-month implementations, armies of consultants, or half-million-dollar tooling budgets. We explore why treating compliance and security as separate functions increases risk for smaller organizations. In the mid-market, the same small team is often responsible for both. When compliance is handled as a box-ticking exercise and security as a separate technical discipline, gaps emerge. Scrut's approach converges governance, risk, and security signals into a unified layer that translates hundreds of technical alerts into context-aware risks that actually matter to the business. Our conversation also tackles AI complacency. Using the classic confidentiality, integrity, and availability framework, Aayush outlines what minimum viable AI security hygiene looks like in practice. 
That includes ensuring AI agents are not over-privileged compared to the humans they represent, placing guardrails around sensitive data fed into models, and extending supply chain security thinking to agentic integrations. For resource-constrained teams, these are not theoretical concerns. They are daily realities. Perhaps most compelling is his view that AI can act as a force multiplier for small teams. By embedding accumulated expertise into agents trained on anonymized patterns and edge cases, Scrut aims to democratize security know-how that would otherwise require multiple full-time analysts. The goal is simple but ambitious: make enterprise-grade security outcomes accessible without enterprise-grade headcount. If you are leading a small or mid-sized business and wondering how to balance growth, compliance, and AI risk without breaking the bank, this conversation offers a candid look from the trenches.

    Inside Epicor's Approach To Inclusive, High-Performing Tech Teams

    Feb 24, 2026 · 33:29


    How do you build enterprise software for the companies that keep the world turning, while also building a leadership culture where people can actually thrive? In this episode of Tech Talks Daily, I spoke with Kerrie Jordan, Group VP of Product Management at Epicor, about her journey from studying literature to helping shape cloud ERP strategy at a global software company serving more than 20,000 customers worldwide. Kerrie's story is a reminder that there is no single path into technology leadership. Sometimes the foundations are laid in unexpected places, through storytelling, creativity, and a deep curiosity about people. Kerrie shares how her early career in product lifecycle management opened her eyes to the human side of software. Interviewing customers and writing case studies showed her that behind every system implementation is a personal story, a career milestone, or a business trying to survive and grow. That perspective still shapes how she approaches product and marketing today at Epicor, a company recently recognized as a Leader in the Gartner Magic Quadrant for Cloud ERP for Product-Centric Enterprises for the third consecutive year. But this conversation goes far beyond market recognition. We talk openly about burnout, resilience, and the reality of leading through pressure. Kerrie reflects on the importance of protecting time, creating space to reconnect, and building a culture where empathy is practiced, not just discussed. Her view of leadership is grounded in communication, psychological safety, and being tough on problems rather than people. Mentorship is another thread running throughout our discussion. Kerrie explains why powerful mentorship is not passive. It requires vulnerability, preparation, and a willingness to hear difficult advice. A single phrase from a mentor early in her career, "stick-to-itiveness," continues to shape how she approaches hard problems today. We also explore the future of women in manufacturing and technology. 
Kerrie highlights the need for intentional change across education, early career development, and leadership visibility. She believes technology, particularly AI, can expand access, enable upskilling, and introduce flexibility that supports long-term career growth. At the same time, she makes a simple but powerful point. Women in tech want the same thing as anyone else: the space and autonomy to do their jobs well. From customer co-innovation and community-driven product roadmaps to inclusive leadership under commercial pressure, this episode offers a candid look at what it really takes to lead in enterprise technology today. If you are building products, leading teams, or questioning your own next career step, I think you will find something in Kerrie's story that resonates.

    Miro CIO Tomás Dostal Freire On Reclaiming Creative Time With AI

    Feb 23, 2026 · 27:19


    Why do so many of us feel busy all day, yet struggle to point to the meaningful work we actually completed? In this episode of Tech Talks Daily, I sit down with Tomás Dostal Freire, CIO of Miro, to unpack a challenge that quietly drains modern organizations. Tomás brings experience from companies like Google, Netflix, and Booking.com, and now leads both IT and business acceleration at Miro. His focus is simple but ambitious. Move beyond AI experimentation and rethink how work itself gets done. We explore new research revealing that for every hour of creative work, employees lose up to three hours to meetings, admin, emails, and maintenance tasks. That ratio is more than an inconvenience. It affects decision-making speed, employee satisfaction, and ultimately a company's ability to compete. Tomás argues that future candidates will choose employers based on how much unnecessary internal work they are expected to tolerate. In other words, reducing busy work is quickly becoming a talent strategy. One of the biggest culprits? Context switching. With dozens of browser tabs open and information scattered across tools, teams spend more time stitching together fragments than making decisions. Tomás describes how duplication of work, outdated systems, and a lack of shared context quietly erode momentum. AI, he believes, should not create more noise or another standalone tool. It needs to be embedded where collaboration already happens. We discuss the difference between single-player AI moments, where individuals use tools in isolation, and multiplayer AI collaboration, where shared context allows teams to move faster together. At Miro, this philosophy has shaped what they call an AI Innovation Workspace, a shared canvas where human insight and AI assistance coexist in real time. Tomás also shares practical advice for leaders who want to reclaim creative time. Start by identifying tasks you dislike doing that could easily be handled by someone junior. 
That list often reveals what AI can already automate. Then focus on building transferable skills like cognitive agility and first-principles thinking, rather than chasing every new tool. If you are wrestling with burnout, fragmented workflows, or wondering how AI can genuinely improve collaboration without overwhelming teams, this conversation offers a grounded, optimistic perspective. And yes, we even add a Beatles classic to the Spotify playlist along the way.

    From 1.16 Billion Reactive Logs A Day To Proactive Insight: Storio Group And Dynatrace

    Feb 22, 2026 · 25:41


    How do you protect millions in revenue during your busiest hour of the year when your entire business depends on digital performance? At Perform 2026, I caught up with Alex Hibbitt, Engineering Director responsible for the customer platform at Storio Group, to unpack what happens when observability moves from an engineering afterthought to a board-level priority. Storio Group was formed from the merger of Photobox and Albelli, bringing together multiple brands and five separate e-commerce platforms into one unified customer journey. That consolidation created opportunity, but it also exposed risk, especially during peak trading from Black Friday through Black Sunday and into the Christmas rush. Alex shared what it really looks like when downtime is non-negotiable. At peak, Storio's platform can generate up to 1.5 million euros per hour. A single poorly timed incident is not simply a technical problem, it is a direct threat to revenue and customer trust. Before partnering with Dynatrace, the team was relying heavily on centralized logging, processing over a billion log lines a day and depending on engineers to manually interpret signals. It was reactive, labor intensive, and left too much to chance. What stood out for me was how cultural change led the transformation. Rather than imposing a new tool from the top down, Alex and his team built a maturity model engineers could relate to, created internal champions, and framed observability as risk management and business protection. The result was a reported 65 to 70 percent reduction in log costs, a 50 percent drop in mean time to detect overall, and up to 90 percent improvement for the most severe incidents. We also explored how unifying logs, metrics, and traces into a single AI-driven platform helped Storio move from reactive firefighting to proactive detection. During one Black Sunday alone, three major issues were identified early enough to avoid an estimated 4.5 million euros in potential impact. 
This conversation goes beyond tooling. It is about protecting customer experience, safeguarding revenue during peak demand, and building an engineering culture that embraces change. If your organization is wrestling with cloud costs, fragmented monitoring, or the pressure to deliver flawless digital performance under load, there are some powerful lessons here.

    How The IOWN Global Forum Is Reinventing Financial Infrastructure With Photonics

    Feb 21, 2026 · 24:37


    How do you design financial infrastructure that keeps running when the unexpected hits, whether that is a regional outage, a regulatory shift, or a sudden spike in digital demand? In this episode of Tech Talks Daily, I'm joined by Katsutoshi Itoh from Sony and Masahisa Kawashima from NTT, both representing the IOWN Global Forum, to unpack how photonics-based networks could change the foundations of digital finance. Speaking with me from Kyoto, they share how the Innovative Optical and Wireless Network vision is moving beyond theory and into practical, finance-specific use cases. Financial institutions are under constant pressure to deliver uninterrupted services while meeting ever tighter compliance standards. Yet as we discuss, many existing architectures still rely on asynchronous data replication and layered resilience added after the fact. On paper, it works. In a real disruption, gaps quickly appear. Itoh and Kawashima explain how synchronous replication over ultra-low latency optical networks can reduce the risk of data loss while simplifying disaster recovery and lowering operational complexity. We also explore the role of Open All-Photonic Networks and why reducing packet forwarding layers can dramatically cut latency and infrastructure costs. Instead of concentrating compute and storage in dense urban data centers, photonics enables distributed computing across regions while maintaining deterministic performance. That shift opens the door to improved resilience, better infrastructure utilization, and new approaches to scaling without constant over-provisioning. Sustainability sits alongside resilience in this conversation. Rather than treating energy efficiency as a compromise, the IOWN vision distributes power demand geographically, making better use of locally available renewable energy and reducing concentrated load pressures. It is a subtle but important rethink of how infrastructure supports broader societal goals. 
Looking ahead, we consider what this could mean for digital banking platforms, AI-driven risk management, and cross-border financial services. If infrastructure limitations fall away, institutions can design services around business needs rather than technical constraints. If you are curious about how photonics could underpin the next generation of financial services, this episode offers a grounded and thoughtful perspective. As always, I would love to hear your thoughts after listening.

    Drata And The Rise Of The Chief Trust Officer In The AI Era

    Feb 20, 2026 · 32:24


    Have you ever wondered why "compliance" still gets treated like a slow, spreadsheet-heavy chore, even though the rest of the business is moving at machine speed? In this episode of Tech Talks Daily, I sit down with Matt Hillary, Chief Information Security Officer at Drata, to talk about what actually changes when AI and automation land in the middle of governance, risk, and compliance. Matt brings a rare viewpoint because he lives this day-to-day as "customer zero," running Drata internally while also leading IT, security, GRC, and enterprise apps. We get practical fast. Matt shares how AI-assisted questionnaire workflows can turn a 120-question security assessment from a late-afternoon time sink into something you can complete with confidence in minutes, then still make it upstairs in time for dinner. He also explains how automation flips the audit dynamic by moving from random sampling to continuous, full-population checks, using APIs to validate evidence at scale, without hounding control owners unless something is actually wrong. We also talk about what security leadership really looks like when the stakes rise. Matt reflects on lessons from his time at AWS, why curiosity and adaptability matter when the "canvas" keeps changing, and how customer focus becomes the foundation of trust. That theme runs through the whole conversation, including the idea that the CISO role is steadily turning into a chief trust officer role, where integrity, transparency, and credibility under pressure matter as much as tooling. And because burnout is never far away in security, we dig into the human side too. Matt unpacks how automation can reduce cognitive load, but also warns about swapping one kind of pressure for another, especially when teams get trapped producing endless dashboards and vanity metrics instead of focusing on the few measures that actually reduce risk. 
    To wrap things up, Matt leaves a song for the playlist, Illenium's "You're Alive," plus a book recommendation, "Lessons from the Front Lines: Insights from a Cybersecurity Career" by Asaf Karen, which he says stands out for how it treats the human side of security leadership. If you're thinking about modernizing compliance in 2026 without losing the human element, his parting principle is simple and powerful: be intentional, keep asking why, and spend your limited time on what truly matters. So where do you land on this shift toward continuous trust? Do you see it becoming the default expectation for buyers and auditors, and what should leaders do now to make sure automation reduces pressure instead of quietly adding more? Share your thoughts with me; I'd love to hear how you're approaching it.

    Rethinking Prevention And Recovery With Barracuda XDR

    Play Episode Listen Later Feb 19, 2026 24:47


    Can designing for human error become the strongest cybersecurity strategy in an AI-accelerated world? In this episode, I sit down with Yaz Bekkar, Principal Consulting Architect for Barracuda XDR and a member of the company's Office of the CTO, to explore why the speed introduced by AI is changing the risk equation for every organization. As automation allows teams to move faster, it also means small mistakes can scale at machine speed. Yaz argues that resilience in 2026 is no longer about trying to prevent every incident. It is about anticipating failure, containing the blast radius, and recovering quickly without bringing the business to a standstill. Our conversation challenges one of the most persistent narratives in security, the idea that people are the weakest link. Yaz explains why safeguarding the workforce begins with reshaping the environment they operate in. When the secure option is also the easiest and fastest path, risky shortcuts begin to disappear. From secure defaults and least-privilege access to paved-road workflows for administrators, he shares practical examples of how organizations can reduce complexity, limit exposure, and support better decisions under pressure. We also tackle the limits of annual compliance training and the cultural shift required to build real cyber resilience. Yaz makes the case for continuous, bite-sized practice embedded into everyday work, from three-minute phishing simulations that teach without blame to short, hands-on misconfiguration drills for technical teams. The result is stronger habits, faster response times, and a security posture designed for real human behavior rather than ideal conditions. If AI is accelerating both innovation and risk, how do leaders move from a prevention-only mindset to resilient operations that protect business continuity when controls fail? 
And what would change in your organization if every system was designed with the assumption that someone, somewhere, will eventually make a mistake?

    Atlassian On Why AI Must Deliver Measurable Business Outcomes

    Play Episode Listen Later Feb 18, 2026 23:11


    At Davos this year, some of the biggest names in tech sent a clear signal. AI is no longer a novelty. It is no longer a proof-of-concept exercise. As Demis Hassabis of Google DeepMind suggested, AI will shape more meaningful work. And Satya Nadella of Microsoft was even more direct. AI only matters if it improves real outcomes for people. So what does that look like inside the enterprise? In this episode of Tech Talks Daily, I'm joined by Andrew Boyagi, Customer CTO at Atlassian, to unpack how the conversation has shifted from experimentation to execution. Developers, in many ways, are the perfect lens for understanding this moment. Over the last two decades, their role has expanded far beyond writing code. They now own products, infrastructure, operations, and business outcomes. AI is simply the next chapter in that evolution. Andrew argues that AI will not replace engineers. It will raise expectations. As intelligent tools absorb repetitive work, the real value moves up the stack. System design. Architectural thinking. Reviewing and refining AI-generated output and orchestrating solutions that solve genuine business problems. And through it all, humans remain firmly in the loop. We also explore what this means for leadership, why mindset is starting to matter more than technical skill alone, and how organizations can avoid layering AI on top of broken processes. And why the companies pulling ahead are treating AI as a strategic discipline, not a feature upgrade. This is a conversation grounded in reality. It speaks to product leaders, CTOs, CIOs, and anyone asking a simple but powerful question. If we are investing in AI, what are we actually getting back? And before we close, we look ahead to Team '26 and the themes Andrew and his team are already working on. If this year has been about proving value, what will the next chapter demand from enterprise leaders? As always, I'd love to hear your thoughts.
Are you seeing proof of value in your organization yet, or are you still working through the pilot phase?

    AI Everything Cairo: Capgemini And Egypt's Moment On The Global AI Stage

    Play Episode Listen Later Feb 17, 2026 20:38


    After stepping off stage from moderating a panel, a Senior Frontend Developer from Capgemini waited to say hello. She asked for a quick photo, and within minutes, we were deep in conversation about hackathons, women in tech, mentoring, and the pride she felt watching Egypt host a platform of this scale. Her name is Alaa Ali Kortoma, and what began as a quick introduction turned into her very first podcast appearance. In today's episode, you will hear directly from someone on the ground in Cairo about what AI Everywhere means to her, to Egypt, and to a generation of more than 750,000 graduates entering the workforce each year. We talk about bridging the gap between academia and industry, shrinking the distance between startups and investors, and why she believes AI represents opportunity rather than replacement. If AI really is everywhere, it should look like a possibility. It should look like inclusion. It should look like young women mentoring at hackathons. It should look like national strategies focused on responsible adoption and skills development. So let me beam your ears to Cairo and introduce you to Alaa Ali Kortoma. And after spending three days at AI Everything MEA, what does AI Everywhere mean to me? It is not hype. It is not a headline. It is policymakers embedding AI into public services. It is engineers building Arabic language models tailored to local needs. It is healthcare systems using AI to detect disease earlier. It is investors listening to founders. It is young professionals investing in themselves. One phrase from this conversation will stay with me long after the microphones were turned off. Proud and full of possibility. Over the last decade, I have seen technology stories unfold across continents, but Cairo reminded me why I started this podcast in the first place. Technology becomes powerful when it connects people. When it builds confidence. When it proves that innovation is not reserved for a select few regions. 
AI is often framed as a Silicon Valley or East Asia story. What I witnessed in Egypt suggests something broader is taking shape. Capital is flowing differently. Partnerships are forming across Africa and the Middle East. Talent is visible. Voices are confident. So if AI can thrive beside the Nile, if it can empower graduates in Cairo to see opportunity rather than threat, then perhaps AI really is everywhere. The final question is this. What does AI Everywhere look like where you are, and what role are you playing in shaping it? Wherever you are listening from, I would love to hear your story too.

    From AI Pilot Purgatory To Real ROI With Bill Briggs Of Deloitte

    Play Episode Listen Later Feb 16, 2026 38:23


    In this episode, I'm joined by Bill Briggs, CTO at Deloitte, for a straight-talking conversation about why so many organizations get stuck in what he calls "pilot purgatory," and what it takes to move from impressive demos to measurable outcomes. Bill has spent nearly three decades helping leaders translate the "what" of new technology into the "so what," and the "now what," and he brings that lens to everything from GenAI to agentic systems, core modernization, and the messy reality of technical debt. We start with a moment of real-world context, Bill calling in from San Francisco with Super Bowl week chaos nearby, and the funny way Waymo selfies quickly turn into "oh, another Waymo" once the novelty fades. That same pattern shows up in enterprise tech, where shiny tools can grab attention fast, while the harder work, data foundations, APIs, governance, and process redesign, gets pushed to the side. Bill breaks down why layering AI on top of old workflows can backfire, including the idea that you can "weaponize inefficiency" and end up paying for it twice, once in complexity and again in compute costs. From there, we get into his "innovation flywheel" view, where progress depends on getting AI into the hands of everyday teams, building trust beyond the C-suite, and embedding guardrails into engineering pipelines so safety and discipline do not rely on wishful thinking. We also dig into technical debt with a framing I suspect will stick with a lot of listeners. Bill explains three types, malfeasance, misfeasance, and non-feasance, and why most debt comes from understandable trade-offs, not bad intent. It leads into a practical discussion on how to prioritize modernization without falling for simplistic "cloud good, mainframe bad" narratives. 
We finish with a myth-busting riff on infrastructure choices, a quick look at what he sees coming next in physical AI and robotics, and a human ending that somehow lands on Beach Boys songs and pinball machines, because tech leadership is still leadership, and leaders are still people. So after hearing Bill's take, where do you think your organization is right now: measurable outcomes, success theater, or somewhere in between? What would you change first? Please share your thoughts.

Useful Links
Connect With Bill Briggs
Deloitte Tech Trends 2026 report
Deloitte The State of AI in the Enterprise report

    Dynatrace Intelligence And The Shift From Observability To Autonomous Action

    Play Episode Listen Later Feb 15, 2026 23:40


    Perform 2026 felt like a turning point for Dynatrace, and when Steve Tack joined me for his fourth appearance on the show, it was clear this was not business as usual.  We began with a little Perform nostalgia, from Dave Anderson's unforgettable "Full Stack Baby" moment to the debut of AI Rick on the keynote stage. But the humor quickly gave way to substance. Because beneath the spectacle, Dynatrace introduced something that signals a broader shift in observability: Dynatrace Intelligence. Steve was candid about the problem they set out to solve. Too much focus on ingesting data. Too much time spent stitching tools together. Too many dashboards. Too many alerts. The real opportunity, he argued, is turning telemetry into trusted, automated action. And that means blending deterministic AI with agentic systems in a way enterprises can actually trust. We unpacked what that looks like in practice. From United Airlines using a digital cockpit to improve operational performance, to TELUS and Vodafone demonstrating measurable ROI on stage, the emphasis at Perform was firmly on production outcomes rather than pilot projects. As Steve put it, the industry has spent long enough in "pilot purgatory." The next phase demands real-world deployment and real return. A big part of that confidence comes from the foundations Dynatrace has laid with Grail and Smartscape. By combining unified telemetry in its data lakehouse with real-time topology mapping and causal AI, Dynatrace is positioning itself as the engine behind explainable, trustworthy automation. When hyperscaler agents from AWS, Azure, or Google Cloud call Dynatrace Intelligence, they are expected to receive answers grounded in causal context rather than probabilistic guesswork. We also explored what this means for developers, who often carry the burden of alert fatigue and fragmented tooling. New integrations into VS Code, Slack, Atlassian, and ServiceNow aim to bring observability directly into the developer workflow. 
The goal is simple in theory and complex in execution: keep engineers in their flow, reduce toil, and amplify human decision-making rather than replace it. Of course, autonomy raises questions about risk. Steve acknowledged that for now, humans remain firmly in the loop, with most agentic interactions still requiring checkpoints. But as trust grows, so will the willingness to let systems self-optimize, self-heal, and remediate issues automatically. We closed by zooming out. In a market saturated with AI claims, Steve encouraged listeners to bet on change rather than cling to the status quo. There will be hype. There will be agent washing. But there is also real value emerging for those prepared to experiment, learn, and scale responsibly. If you want to understand where AI observability is heading, and how deterministic and agentic intelligence can coexist inside enterprise operations, this episode offers a grounded, practical perspective straight from the Perform show floor.

    Tungsten Automation: Why AI ROI Starts With Boring AI And Real Workflows

    Play Episode Listen Later Feb 14, 2026 27:19


    What happens when the noise around AI starts to drown out the actual business value it is meant to deliver? In this episode of Tech Talks Daily, I sat down with Adam Field, Chief AI and Product Officer at Tungsten Automation, fresh from the conversations unfolding at Davos. While headlines continue to celebrate agentic AI and sweeping automation claims, Adam offered a grounded perspective shaped by decades of experience turning AI pilots into measurable, ROI-driven deployments. His view is simple. The hype cycle may be accelerating, but many organizations still struggle with the fundamentals. Adam described a common boardroom dynamic. "What do we want? AI. What do we want it to do? We're not sure." That pressure to move fast often collides with a deeper reality. Software has shifted from deterministic to probabilistic. Leaders who grew up expecting the same inputs to always produce the same outputs now face systems that behave differently by design. Measuring value in that environment requires a different mindset. One of the most compelling ideas in our conversation was Adam's concept of "boring AI." While splashy announcements about replacing hundreds of employees grab attention, he argues that real returns often come from quieter use cases. At Tungsten Automation, that means intelligent document processing, extracting trusted, AI-ready data from the 80 percent of enterprise information that is unstructured. Contracts, invoices, transcripts, compliance paperwork. The work may not trend on social media, but it saves time, improves accuracy, and fits directly into daily workflows. We also explored accountability. AI can compress output, but it concentrates responsibility. When generative tools make architectural or compliance decisions, the liability does not shift to the model. Organizations remain accountable for privacy, ethics, and customer trust. 
Adam shared his own experience rebuilding a legacy application in days using AI code generation, only to discover licensing and compliance nuances that required human judgment. The lesson was clear. AI amplifies capability, yet human oversight remains essential. For leaders searching for signals that an AI strategy will actually deliver long-term returns, Adam pointed to two patterns from the small percentage of projects that succeed. First, integration into daily workflows drives adoption. Second, partnering with trusted vendors often reduces risk compared to attempting everything in-house. In a world flooded with open-source experiments and "X is dead" headlines, discipline and focus still matter. Tungsten Automation, previously known as Kofax, has spent four decades evolving alongside automation technologies. Today, the company applies large language models and agentic workflows to transform unstructured data into decision-ready insights across finance, logistics, banking, and insurance. It is a reminder that the future of AI may be less about replacing people and more about removing friction so humans can do the work they were actually hired to do. So as AI investment continues to grow and pressure for returns intensifies, the question becomes harder to ignore. Are we chasing the headlines, or are we building systems that quietly deliver value where it counts?

Useful Links
Connect with Adam Field
Learn more about Tungsten Automation
Upcoming Events

    Agentic AI In Action: How Swan AI Is Rewriting The Rules Of Company Building

    Play Episode Listen Later Feb 13, 2026 25:30


    How do you build a $30 million ARR business with just three people and a fleet of AI agents doing the heavy lifting? In this episode of Tech Talks Daily, I connected with Amos Joseph, CEO of Swan AI. From the moment we joked about AI notetakers silently observing our conversation, it was clear this discussion would go beyond surface-level automation talk. Amos is attempting something bold. He is building what he calls an autonomous business, one designed to scale with intelligence rather than headcount. Amos has already built and exited two B2B startups using the traditional growth-at-all-costs model. Raise early, hire fast, expand the vision, chase valuation. This time, he is rewriting that script entirely. Swan AI is built around ARR per employee, human-AI collaboration, and what he describes as scaling employees rather than scaling the org chart. With more than 200 customers and only three founders, Swan is already testing whether AI agents can run real go-to-market operations autonomously. We explored why over 90 percent of AI implementations fail and why grassroots experimentation consistently outperforms executive mandates. Amos argues that companies looking outward for AI solutions before understanding their internal bottlenecks are simply scaling chaos. The organizations that succeed start with process clarity, define what humans should do versus what should be automated, and then allow AI to execute within that structure. It is a powerful reminder that becoming AI-native has less to do with tools and more to do with operational self-awareness. We also unpacked the difference between automation and agentic AI. Traditional automation follows deterministic steps coded in advance. Agentic AI shifts decision-making power to the model itself. The AI decides what to do next, introducing statistical reasoning rather than predefined logic. That shift in agency changes everything about how workflows operate and how leaders think about control. 
Perhaps most fascinating is how Swan generates pipeline entirely through LinkedIn. No paid ads. No outbound. Amos has built an AI-driven engine that creates content, monitors engagement, qualifies prospects, and nurtures relationships at scale. It is an experiment in trust-based distribution powered by agents, not marketing budgets. This conversation reframes what growth can look like in an AI-native world. If scaling no longer equals hiring, and if every employee becomes a manager of AI agents, what does leadership look like next? How do founders build organizations that amplify human zones of genius rather than bury them under coordination overhead? If you are questioning long-held assumptions about team size, growth, and AI adoption, this episode will give you plenty to think about.

    From Digital Gold To DeFi Liquidity: The Threshold Network Vision For Bitcoin

    Play Episode Listen Later Feb 12, 2026 34:00


    Is Bitcoin still just a digital store of value, or is it quietly evolving into the financial engine of a new on-chain economy? In this episode of Tech Talks Daily, I sat down with Callan Sarre, Co-Founder of Threshold Labs, to explore what happens when the world's most recognized crypto asset stops sitting idle and starts becoming programmable capital. We recorded against the backdrop of a sharp market correction that wiped out value across crypto and traditional assets alike, making for a timely and honest conversation about volatility, maturity, and why Bitcoin's next chapter may be defined by utility rather than price speculation.  Callan explains how the rise of ETFs and institutional flows is reshaping ownership, while decentralized infrastructure is working to ensure users can still access the asset's underlying power. At the heart of our discussion is tBTC, a trust-minimized bridge that moves native Bitcoin into DeFi without handing control to centralized custodians. Callan breaks down how Threshold's decentralized custody model works in practice and why removing single points of failure matters in a post-FTX world. We also explore the behavioral barriers that have kept long-term holders from putting their BTC to work, the real risks behind Bitcoin yield strategies, and the infrastructure required to make these tools accessible to a broader audience through familiar Web2-style experiences. The conversation also takes a global turn as we look at why Asia is accelerating Bitcoin innovation, how regulation is driving institutional adoption in Western markets, and what the shift from DAO-led governance to a lab execution model reveals about the realities of building at scale.  Looking ahead five years, Callan paints a picture of an integrated on-chain financial system where Bitcoin can be borrowed against, deployed, and settled instantly across shared liquidity rails, while still preserving the principles that made it attractive in the first place. 
So if Bitcoin becomes productive capital and the majority of financial activity moves on-chain, what does that mean for traditional finance, for long-term holders, and for the next wave of builders? And are we ready for a world where the most secure monetary asset also becomes the most composable?

    AI PCs Explained With Logan Lawler from Dell Technologies

    Play Episode Listen Later Feb 11, 2026 36:24


    What actually happens when AI stops being a cloud-only experiment and starts running on desks, in labs, and inside real teams trying to ship real work? In this episode, I sit down with Logan Lawler, Senior Director at Dell Technologies, to unpack how AI workloads are really being built and supported on the ground today. Logan leads Dell's Precision and Pro Max AI Solutions business and hosts Dell's own Reshaping Workflows podcast, giving him a rare vantage point into how engineers, developers, creatives, and data teams are actually working, not how marketing slides suggest they should be. We start by cutting through the noise around AI PCs, a label now heard on every conference stage, as Logan breaks down what genuinely matters when choosing hardware for AI work. CPUs, GPUs, NPUs, memory, and software stacks all play different roles, and misunderstanding those roles often leads teams to overspend or underspec. Logan explains why all AI workstations qualify as AI PCs, but not all AI PCs are suitable for serious AI work, and why GPUs remain central for anyone doing real model development, fine-tuning, or inference at scale. From there, the conversation shifts to a broader architectural rethink. As AI workloads grow heavier and data sensitivity increases, many organizations are reconsidering where compute should live. Logan shares how GPU-powered Dell workstations, storage-rich environments, and hybrid cloud setups are giving teams more control over performance, cost, and data. We explore why local compute is becoming attractive again, how modern GPUs now rival small server setups, and why hybrid workflows, local for development and cloud for deployment, are becoming the default rather than the exception. One of the most compelling parts of the discussion comes when Logan connects hardware choices back to business reality.
Drawing on real-world examples, he explains how teams use local AI environments to move faster, reduce cloud costs, and avoid getting locked into architectures that are hard to unwind later. This is not about abandoning the cloud, but about being intentional from the start, especially as AI usage spreads beyond developers into marketing, operations, and everyday business roles. We also step back to reflect on a deeper challenge. As AI becomes easier to use, what happens to critical thinking, curiosity, and learning? Logan shares a candid perspective, shaped by his experiences as a parent, technologist, and podcast host, raising questions about how tools should support rather than replace thinking. If you are trying to make sense of AI PCs, local versus cloud compute, or how teams are really reshaping workflows with AI hardware today, this conversation offers grounded insight from someone living at the center of it. Are we designing systems that genuinely empower people to think better and build faster, or are we sleepwalking into decisions we will regret later? How do you want your own AI workflow to evolve?

Useful Links
TLDR AI newsletter and The Neurons
The Reshaping Workflows podcast
Connect with Logan Lawler
Follow Dell Technologies on LinkedIn

    Cisco Live 2026 Amsterdam: Why AI Agents Fail Without Infrastructure Ready For Scale

    Play Episode Listen Later Feb 10, 2026 29:51


    What does it really take to move AI from experimentation into something enterprises can trust, scale, and rely on every day? In this episode of Tech Talks Daily, I'm joined by Rob Lay, CTO and Solutions Engineering Director for Cisco UK and Ireland, recorded in the run-up to Cisco Live EMEA in Amsterdam. As agentic AI dominates conference agendas on both sides of the Atlantic, this conversation steps away from model hype. It focuses on the less glamorous, but far more decisive layer underneath it all: infrastructure. Rob explains why the biggest constraint on scaling AI agents in production is no longer imagination or ambition, but the readiness of the environments those agents run on. We talk about how legacy technical debt, latency, fragmented networks, and disconnected security tools can quietly undermine AI investments long before leaders see any return. As organizations move out of pilot mode and into real execution, those cracks become impossible to ignore. A big part of the discussion centers on why AI changes the relationship between network, compute, and security teams. Traditional silos struggle to keep up as autonomous systems make decisions at machine speed. Rob shares how Cisco is approaching this shift through tighter integration across the stack, with security designed directly into the network rather than bolted on later. When AI agents act independently, routing everything through centralized chokepoints does not hold up. We also explore how operational complexity is evolving. Tool sprawl is already overwhelming many IT leaders, and agent sprawl is clearly coming next. Rob outlines Cisco's platform strategy, including how agent-driven operations, human oversight, and context-aware automation are shaping a new approach to day-to-day resilience. 
This leads into a wider conversation about digital resilience as a business issue, where visibility, assurance, and learning from incidents matter more than static continuity plans that only get tested once a year. For European leaders in particular, data sovereignty and control remain at the forefront. Rob explains how Cisco is responding with flexible deployment models, local data residency options, and air-gapped environments that support AI innovation without forcing customers into a single rigid operating model. We close by looking at where enterprises are actually seeing value today, where expectations are still running ahead of reality, and what leaders attending Cisco Live should really be listening to as announcements roll in. If you are responsible for infrastructure, security, or technology strategy in an AI-driven organization, this conversation offers a grounded view of what needs to be ready before agents can truly deliver on their promise. As AI-powered systems start to move faster than most roadmaps anticipated, are you confident the foundations underneath them are ready to keep up, and what would you change if you were starting that journey today?

Useful Links
Connect with Rob Lay
Cisco Live
Follow Cisco on LinkedIn

    IBM's Global Managing Partner On How CEOs Are Rethinking AI ROI

    Play Episode Listen Later Feb 10, 2026 28:02


    What does it really take to move enterprise AI from impressive demos to decisions that show up in quarterly results? One year into his role as Global Managing Partner at IBM Consulting, Neil Dhar sits at the intersection of strategy, capital allocation, and technology execution. Leading the firm's Americas business and a team of close to 100,000 consultants, he has a front-row view into how large organizations are reassessing their AI investments. From global healthcare leaders like Medtronic to luxury retail brands such as Neiman Marcus, the conversation has shifted. Early proofs of concept helped executives understand what was possible. Now the focus is firmly on proof of value and on whether AI can drive growth, competitiveness, and measurable return. In this episode, I speak with Neil Dhar about what has changed in the boardroom over the past year and why ROI has become the central question. Drawing on more than three decades in finance and private equity, including senior leadership roles at PwC, Neil explains why AI is increasingly being treated as a capital allocation decision rather than a technology experiment. Every dollar invested has to earn its place, whether through productivity gains, operational improvement, or new revenue opportunities. Vanity projects no longer survive scrutiny, especially when boards and investors expect results on a much shorter timeline. We also explore how IBM is applying these same principles internally. Neil shares how the company has identified hundreds of workflows across the business, prioritized those with the strongest economic impact, and used AI and automation to drive large-scale productivity gains. The result is a potential $4.5 billion in annual run rate savings by 2025, with those gains being reinvested into innovation, people, and future growth. It is a candid look at what happens when AI strategy, leadership accountability, and disciplined execution come together inside a global organization. 
If you are a business leader trying to separate real value from hype, or someone wrestling with how to justify AI spend beyond experimentation, this conversation offers a grounded perspective on what enterprise AI looks like when it is treated as a business decision rather than a technology trend. Are you ready to rethink how AI earns its place inside your organization, and what proof of value really means in 2026?

Useful Links
Connect With Neil Dhar
IBM Institute for Business Value, "The Enterprise in 2030" study
Learn More About IBM Consulting

    Why EY Thinks Ecosystems Will Define The Future Of Enterprise AI

    Play Episode Listen Later Feb 9, 2026 21:35


    How do marketplaces turn AI ambition into scalable, trusted enterprise reality? That is the question I explore in this episode with Julie Teigland, Global Vice Chair for Alliances and Ecosystems at EY, someone who sits right at the intersection of enterprise demand, technology platforms, and the ecosystems that increasingly power modern AI adoption. As organizations race to deploy AI at scale, many are discovering that the real challenge is not a lack of tools, but the complexity of choosing, integrating, governing, and standing behind those decisions with confidence. Julie explains why marketplaces are becoming a powerful mechanism for reducing friction in this process, helping enterprises move beyond experimentation toward AI solutions that are trusted, scalable, and aligned with real business outcomes. We talk about how marketplaces can collapse complexity, curate choice, and bring much-needed clarity to leaders who are overwhelmed by the sheer volume of AI options available today. Julie also shares how EY approaches this challenge through its "client zero" mindset, turning the lens inward and treating EY itself as the first marketplace customer. By doing so, EY stress tests governance, security, and integration at real enterprise scale, serving tens of thousands of clients, running hundreds of thousands of servers, and processing hundreds of millions of transactions every day. That internal experience shapes how EY helps clients navigate trust, accountability, and cross-vendor integration risks, particularly as AI becomes more embedded into workflows and decision-making. We also explore how strong alliances with cloud leaders like Microsoft and SAP are shaping how AI solutions are vetted, standardized, and deployed across industries, as well as how regulation, particularly in Europe, is influencing a shift toward responsibility by design.
This conversation goes beyond technology to focus on orchestration, trust, and outcomes, and why marketplaces are evolving from simple app stores into something far more strategic for enterprise AI. If you are trying to understand how ecosystems, governance, and marketplaces can help turn AI from isolated projects into sustained business value, this episode offers a thoughtful and grounded perspective. I would love to know what resonated with you most. How do you see marketplaces shaping the future of AI adoption inside your organization? Useful Links Connect With Julie Teigland Learn More About EY

    Motive on Why Accurate, Real-Time Edge AI Saves Lives in Physical Operations

    Play Episode Listen Later Feb 9, 2026 29:59


    As someone who spends a lot of time covering AI announcements, product launches, and conference stages, it is easy to forget that most AI today is still built for desks, screens, and digital workflows. Yet the reality is that the vast majority of the global workforce operates in the physical world, on roads, construction sites, depots, and job sites where mistakes are measured in injuries, collisions, and lives lost. That gap between where AI innovation happens and where real risk exists is exactly why I wanted to sit down with Amish Babu, CTO at Motive. In this episode, I speak with Amish about what it truly means to build AI for the physical economy. We unpack why designing AI for vehicles, fleets, and safety-critical environments is fundamentally different from building AI for emails, documents, or dashboards. Amish explains why latency, trust, and reliability are non-negotiable when AI is embedded directly into vehicles, and why edge AI, multimodal sensing, and on-device compute are essential when milliseconds matter. This is a conversation about AI that has to work perfectly in messy, unpredictable, real-world conditions. We also explore how Motive approaches AI as a full system, combining hardware, software, and models into a single platform built specifically for life on the road. Amish shares how AI can help prevent collisions, support drivers in the moment, and create measurable safety and operational outcomes for fleets operating across transportation, construction, energy, and public sector environments. Along the way, we challenge common misconceptions around AI in vehicles, including the idea that it is about surveillance rather than protection, or that all AI systems are created equal when lives are on the line. 
If you are interested in how AI moves beyond productivity tools and into high-stakes environments where safety, accountability, and trust matter most, this episode offers a grounded and practical perspective from someone building these systems every day. I would love to hear your thoughts on this one. How do you see the role of AI evolving as it moves deeper into the physical world? Useful Links Connect with Amish Babu Learn More About Motive How Motive's AI works: Real-time edge intelligence, humans-in-the-loop, and continuous improvement.

    Building Responsible Agentic AI: Genpact's Blueprint For Enterprise Leaders

    Play Episode Listen Later Feb 9, 2026 32:28


    In this episode of Tech Talks Daily, I sat down with Jinsook Han, Chief Agentic AI Officer at Genpact, to unpack one of the most misunderstood shifts in enterprise AI right now. Many organizations feel confident about the value AI can deliver, yet only a small fraction are able to move beyond pilots and into autonomous operations that actually scale. Genpact's Autonomy By Design research puts hard data behind that gap, and Jinsook explains why optimism often races ahead of readiness. We explore why agentic AI changes the rules entirely. When AI systems begin to act, decide, and adapt on behalf of the business, familiar operating models start to strain. Jinsook makes a compelling case that agentic AI cannot be treated like another software rollout. It demands a rethink of data, governance, roles, and even how teams define work itself. The shift from tools to teammates alters expectations for people across the organization, from frontline operators to the C-suite, and exposes just how unprepared many companies still are. Governance is a major theme throughout the conversation, but not in the way most leaders expect. Rather than slowing progress, Jinsook argues that governance must become part of how work happens every day. She shares how Genpact approaches agent certification, maturity, and oversight, using vivid analogies to explain why quality and alignment matter more than simply deploying large numbers of agents. We also dig into why many governance models fail, especially when they rely on committees instead of lived understanding. Upskilling sits at the heart of this transformation. Jinsook walks through how Genpact is training more than 130,000 employees for an agentic future, starting with executives themselves. The focus is not on abstract learning, but on proving that today's work looks different from yesterday's. 
Observability, explainability, and responsible AI are woven into this approach, with command centers designed to monitor both agent performance and health, turning early signals into opportunities rather than panic. This conversation goes well beyond hype. It is about readiness, responsibility, and the reality of building autonomous systems that still depend on human judgment. As organizations rush toward agentic AI, are they truly prepared to change how decisions are made, how people work, and how accountability is defined, or are they still treating AI as a faster hammer rather than a new kind of teammate? Useful Links Connect with Jinsook Han Learn More About Genpact

    Slalom On The AI Leadership Gap Between Confidence And Capability

    Play Episode Listen Later Feb 8, 2026 32:00


    What happens when leaders are confident about AI, but the people expected to use it are not ready? In this episode of Tech Talks Daily, I sat down with Caroline Grant from Slalom Consulting to explore one of the most persistent tensions in enterprise AI adoption right now. Boards and executives are spending more, moving faster, and expecting returns sooner than ever, yet many organizations are struggling to translate that ambition into outcomes that scale. Caroline brings fresh insight from Slalom's latest research into how leadership, culture, and workforce readiness are shaping what actually happens next. We unpack a clear shift in ownership for AI transformation, with CTOs and CDOs increasingly leading organizational redesign rather than HR. That change reflects how deeply AI now cuts across technology, operations, and business models, but it also introduces new risks. Caroline explains why sidelining people teams can create blind spots around skills, incentives, and trust, especially as roles evolve and uncertainty grows inside the workforce. The result is what Slalom describes as a growing AI disconnect between executive optimism and day-to-day reality. Despite the noise around job losses, the data tells a more nuanced story. Many organizations are creating new AI-related roles at pace, yet almost all are facing skills gaps that threaten progress. We talk about why reskilling at scale is now unavoidable, how unclear career paths fuel employee distrust, and why focusing only on technical capability misses the human side of adoption. Caroline also challenges assumptions about skill priorities, warning that deprioritizing empathy, communication, and change leadership could undermine effective human-AI collaboration. We also dig into ROI expectations, with most UK executives now expecting returns within two years. Caroline shares why that ambition is achievable, where it breaks down, and why so many organizations remain stuck in pilot mode.
From governance and decision rights to culture and leadership behavior, this conversation goes beyond tools and platforms to examine what separates experimentation from fundamental transformation. As AI becomes a test of leadership as much as technology, how are you closing the gap between vision and execution within your organization, and are you building a workforce that can keep pace with change rather than resist it? Connect With Caroline Grant from Slalom Consulting The Great AI Disconnect: Slalom's Insights Survey Learn More About Slalom

    LastPass CEO: If the Browser is AI's New Interface, What Does it Mean for Security?

    Play Episode Listen Later Feb 7, 2026 30:21


    Is the browser quietly becoming the most powerful and dangerous interface in modern work? In this episode of Tech Talks Daily, I sat down with Karim Toubba, CEO of LastPass, to unpack a shift that many people feel every day but rarely stop to question. The browser is no longer just a window to the internet. It has become the place where work happens, where SaaS lives, and increasingly, where humans and AI agents meet data, credentials, and decisions. From AI-native browsers to prompt-based navigation and headless agents acting on our behalf, the way we access information is changing fast, and so are the risks. Karim shares why this moment feels different from earlier waves like SaaS adoption or remote work. Today, more than ever, productivity, identity, and security collide inside the browser.  Shadow AI is spreading faster than most organizations can track, personal accounts are being used to access powerful AI tools, and sensitive data is being uploaded with little visibility or control. At the same time, attackers have noticed that the browser has become the soft underbelly of the enterprise, with a growing share of malware and breaches originating there. We also explore the rise of agentic AI and what happens when software, not people, starts logging into systems. When an agent books travel, pulls data, or completes workflows on a user's behalf, traditional authentication and access models start to break down. Karim explains why identity, visibility, and control must evolve together, and why secure browser extensions are emerging as a practical foundation for this next phase of computing. The conversation goes deep into what users do not see when AI browsers ask for access to email, calendars, and internal apps, and why convenience often masks long-term exposure. Throughout the discussion, Karim brings a grounded perspective shaped by decades in cybersecurity, from risk-based vulnerability management to enterprise threat intelligence. 
Rather than pushing fear, he focuses on realistic steps organizations and individuals can take, from understanding what data is being shared, to treating security teams as partners, to using tools that bring passwords, passkeys, and authentication into one trusted place as browsing evolves. As AI reshapes how we search, work, and make decisions, the question is no longer whether the browser matters. It is whether we are ready for it to act as the front door to both our productivity and our risk, so are you securing your browser for the future you are already using today? Connect with Karim Toubba LastPass Threat Intelligence, Mitigation, and Escalation (TIME) team page Phish Bowl Podcast    

    Harness And The AI Velocity Paradox Slowing Software Delivery

    Play Episode Listen Later Feb 6, 2026 34:54


    What really happens when AI helps teams write code faster, but everything else in the delivery process starts to slow down? In this episode of Tech Talks Daily, I'm joined once again by returning guest and friend of the show, Martin Reynolds, Field CTO at Harness. It has been two years since we last spoke, and a lot has changed since then. Martin has relocated from London to North Carolina, gaining back hours of his working week. Still, the bigger shift has been in how AI is reshaping software delivery inside modern enterprises. Our conversation centers on what Martin calls the AI velocity paradox. Development teams are producing more code at speed, often thanks to AI coding agents, yet testing, security, governance, and release processes are struggling to keep up. The result is a growing gap between how fast software is written and how safely it can be delivered.  Martin shares research showing how this imbalance is already leading to production incidents, hidden vulnerabilities, and mounting technical debt. We also dig into why this AI-driven transition feels different from previous waves, such as cloud, mobile, or DevOps. Many of the same concerns around security, trust, and control still exist, but this time, everything is happening far faster. Martin explains why AI works best as a human amplifier, strengthening good engineering practices while exposing weak ones sooner than ever before. A significant theme in the episode is visibility. From shadow AI usage to expanding attack surfaces, Martin outlines why security teams are finding it harder to see where AI is being used and how data is flowing through systems. Rather than slowing teams down, he argues that the answer lies in embedding governance directly into delivery pipelines, making security automatic rather than an afterthought. We also explore the rise of agentic AI in testing, quality assurance, and security, where specialized agents act like virtual teammates. 
When well-designed, these agents help developers stay focused while improving reliability and resilience throughout the lifecycle. If you are responsible for engineering, platform, or security teams, this episode offers a grounded look at how to balance speed with responsibility in an AI-native world. As AI becomes part of every stage of software delivery, are your processes designed to safely absorb that change, or are they quietly becoming the bottleneck? Useful Links Learn More About Harness The State of AI in Engineering The State of AI Application Security EngineeringX Follow Harness on LinkedIn Connect With Martin Reynolds Thanks to our sponsors, Alcor, for supporting the show.

    FreedomPay on The $44.4 Billion Payment Risk Facing Retail And Hospitality

    Play Episode Listen Later Feb 5, 2026 25:15


    What really happens to a business when payments stop working, even for a few minutes? I recorded this episode live at Dynatrace Perform in Las Vegas, inside the Venetian, surrounded by engineers, operators, and business leaders all wrestling with the same uncomfortable reality. Payment outages are no longer rare edge cases. They are becoming a routine operational risk, and the cost is far higher than many organizations realize. To unpack that shift, I sat down with Victoria Ruffo, Software Engineering Team Lead at FreedomPay, for a grounded and practical conversation about resilience, observability, and what failure actually looks like in modern commerce. Victoria explains how FreedomPay supports merchants by orchestrating every part of the payment journey through a single platform, from terminal management to remote updates and even on-device advertising. If you have checked into a hotel and noticed a payment terminal quietly branded "Secured By FreedomPay," there is a good chance you have already interacted with her team's work. That real-world exposure gives her a clear view of what happens when systems fail and why customers are far less patient than businesses often assume. We talk about new research from FreedomPay, Dynatrace, and Retail Economics that puts a stark number on the issue. $44.4 billion in U.S. retail and hospitality revenue is at risk every year due to payment disruptions. But as Victoria points out, the most alarming insight is not the headline figure. It is the gap between how long customers are willing to wait and how long outages actually last. Most consumers abandon a purchase after seven minutes, while many disruptions stretch on for hours. In those early minutes alone, the majority of revenue is already gone. The conversation moves beyond statistics into lived experience. 
From lunch breaks cut short by declined payments to stadiums losing an entire event's worth of revenue in a single outage, Victoria shares why these failures are not abstract technical issues. They directly affect staff wages, customer loyalty, and long-term brand trust. We also explore why cash-only backups and outdated terminals no longer reflect how people actually pay, and why uneven investment in resilience leaves many merchants dangerously exposed. AI plays a central role in the discussion, but not in the way hype cycles often suggest. Victoria is clear that FreedomPay is not using AI to touch cardholder data or write payment code. Instead, tools like Dynatrace Intelligence help teams detect issues faster, identify patterns humans might miss, and move from reaction to anticipation. That shift, she argues, is where real value shows up, especially when seconds and minutes matter. If you care about payments, customer experience, or the hidden connection between technical failure and business impact, this episode offers a timely reminder that outages do not have to be catastrophic if organizations plan for them properly. As consumers grow less patient and systems grow more complex, are your payment platforms designed to absorb disruption, or are they quietly waiting to fail at the worst possible moment? Useful Links Connect With Victoria Ruffo Learn More About FreedomPay Whitepaper Payment Resilience in an Uncertain World UK Learn More About Dynatrace Perform Thanks to our sponsors, Alcor, for supporting the show.

    What Bubble Learned About Responsibility in AI-built Apps

    Play Episode Listen Later Feb 5, 2026 24:10


    In this episode of Tech Talks Daily, I'm joined by Josh Haas, co-founder and co-CEO of Bubble, to unpack why the next phase of software creation is already taking shape. We talk about how the early excitement around AI-powered code generation delivered fast demos and instant gratification, but often fell apart when teams tried to turn those experiments into durable products that could grow with a business. Josh takes us back to Bubble's origins in 2012, long before AI hype cycles and trend-driven development. At the time, the idea was simple but ambitious: give more people the ability to build genuine software without spending months learning traditional programming. That early focus on visual development now feels timely again, especially as builders wrestle with the limits of black-box AI tools that hide logic until something breaks. We spend time on where vibe coding struggles in practice. Josh explains why speed alone is never enough once customers, payments, and sensitive data are involved. As he explains, most product requirements only surface after users arrive, and those edge cases are exactly where opaque AI-generated code can become risky. If you cannot see how your system works, you cannot truly own it, secure it, or fix it when something goes wrong. The conversation also digs into Bubble's hybrid approach, blending AI agents with visual development. Rather than asking builders to blindly trust an AI, Bubble's model emphasizes clarity, auditability, and shared responsibility between humans and machines. Josh explains how visual logic makes software behavior explicit, helping teams understand rules, permissions, and workflows before they cause real-world problems. I learn how this mindset has helped Bubble-powered apps process over $1.1 billion in payments every year, a level of scale that leaves no room for guesswork.
We also explore Bubble AI Agent, where conversational AI meets visual editing, and why transparency and control matter more than flashy demos. From governance and rollback logs to builder accountability, this episode looks at what it actually takes to build software that survives beyond the first launch. If you are building with AI or thinking about how software development is changing, this episode offers a grounded perspective on what comes after the hype fades. As AI tools become more powerful, the real question is whether they help you understand your product better over time, or slowly disconnect you from it. Which path should builders choose right now? Useful Links Connect with Josh Haas Learn More About Bubble Thanks to our sponsors, Alcor, for supporting the show.

    Cloudinary and the Business Case for Developer-Led Product Growth

    Play Episode Listen Later Feb 4, 2026 27:08


    How do you turn a developer-first product into a growth engine without losing trust, clarity, or focus along the way? In this episode of Tech Talks Daily, I'm joined by Sanjay Sarathy, VP of Developer Experience and Self Service at Cloudinary, for a grounded and thoughtful conversation about product-led growth when developers sit at the center of the story. Sanjay operates at a rare intersection. He leads Cloudinary's high-volume self-service motion while also caring for the developer community that fuels adoption, advocacy, and long-term loyalty. That dual perspective, part business, part builder, shapes everything we discuss. Our conversation picks up on a theme I have been exploring across recent episodes. When technical work is explained clearly, whether that is security, performance, or reliability, it stops being background noise and starts supporting growth. Sanjay shares how Cloudinary approached this from day one, starting with founders who were developers themselves and carried a deep respect for developer trust into the company's DNA. Documentation that reflects reality, platforms that behave exactly as promised, and support that shows up early rather than as an afterthought all play a part. What stood out to me was how early Cloudinary invested in technical support, even before many traditional growth motions were in place. That decision shaped a self-service experience that still feels human at scale. With thousands of developer sign-ups every day and millions of developers using the platform, Sanjay explains how trust compounds into referrals, word of mouth, and sustained adoption. We also dig into developer advocacy and why community is rarely a single thing. Developers gather around frameworks, tools, workflows, and shared problems, and Cloudinary has learned to meet them where they already are rather than forcing them into a single branded space. 
From React and Next.js users to enterprise advisory boards, feedback loops become part of the product itself. As AI reshapes how software is built and developer tools become more crowded, Sanjay offers a clear-eyed view on what separates companies that grow steadily from those that burn bright and stall. Profitability, experimentation with intent, and the discipline to double down on what works all feature heavily in his thinking. It is a conversation rooted in experience rather than theory. If you care about product-led growth, developer trust, or building platforms that scale without losing their soul, this episode offers plenty to think about. As always, I would love to hear your perspective too. How do you see developer communities shaping the next phase of product growth, and where do you think companies still get it wrong?

    Syntax - From AI-First Thinking To Data-First Reality

    Play Episode Listen Later Feb 3, 2026 29:44


    What happens when the rush toward AI collides with the messy reality of enterprise data that was never designed for it? That is exactly where this fast-tracked episode with Kevin Dattolico from Syntax begins. Before we even hit record, we were swapping stories about music, travel, and a certain farewell concert that set the tone for a conversation that was both grounded and unexpectedly human. But once we got going, the discussion quickly shifted to one of the biggest blind spots I keep hearing about at tech conferences around the world. AI ambition is running far ahead of data readiness. Kevin leads Syntax across the Americas, working with organizations that rely on SAP, Oracle, and complex cloud environments to run their businesses. In our conversation, he shares why many AI initiatives stall or quietly reset the moment they touch real production data. Proofs of concept can look impressive in isolation, but once AI starts interacting with live operational systems, the cracks appear. Inconsistent data, duplicated records, missing context, and governance gaps all surface at once. The result is confusion, unpredictable outputs, and a growing realization that the issue is rarely the model itself. We dig into why ERP data has traditionally been trusted, while unstructured data across emails, documents, sensors, and logs often tells a very different story. Kevin explains where the real friction shows up when companies try to bring those worlds together, and why assumptions about data quality tend to break long before the technology does. It is a refreshingly honest look at what usually goes wrong first, and why leaders are often blindsided even after years of investment. One of the strongest themes in this episode is the shift Kevin sees from AI-first thinking toward a data-first mindset. That does not mean abandoning AI spend. It means rebalancing priorities so those investments actually deliver outcomes the business can stand behind. 
We talk about what consolidation, cleansing, and transformation look like at enterprise scale, especially for organizations carrying decades of technical debt and fragmented systems. The conversation also takes a thoughtful turn around governance, trust, and leadership. Kevin shares how the role of the chief data officer is changing from gatekeeper to enabler, and why modern governance has to support speed without sacrificing accountability. Along the way, he reflects on the risks of pushing ahead with weak data foundations, particularly in regulated industries where the cost of getting it wrong can be operational, reputational, or worse. And then there is the moment that caught me completely off guard. When I asked Kevin to look back on his career and reflect on someone who made a difference, his answer led to one of the most moving stories I have heard in thousands of interviews. It is a reminder that behind every transformation story, there are people who quietly shape the path forward. If you are wrestling with AI expectations, data reality, or simply wondering whether everyone else feels just as overwhelmed by this shift, this episode will resonate. The challenges Kevin describes are far more common than most leaders admit, and the opportunities for those who get the foundations right are real. So as AI continues to dominate boardroom conversations, are you confident your data is ready to support the decisions you are asking it to make, or is it time to pause and rethink what sits underneath it all? Useful Links Connect with Kevin Dattolico Learn more about Syntax Thanks to our sponsors, Alcor, for supporting the show.

    Neurosymbolic AI And Why Reasoning Matters More Than Scale

    Play Episode Listen Later Feb 2, 2026 22:52


    Why do today's most powerful AI systems still struggle to explain their decisions, repeat the same mistakes, and undermine trust at the very moment we are asking them to take on more responsibility? In this episode of Tech Talks Daily, I'm joined by Artur d'Avila Garcez, Professor of Computer Science at City St George's, University of London, and one of the early pioneers of neurosymbolic AI. Our conversation cuts through the noise around ever-larger language models and focuses on a deeper question many leaders are now grappling with. If scale alone cannot deliver reliability, accountability, or genuine reasoning, what is missing from today's AI systems? Artur explains neurosymbolic AI in clear, practical terms as the integration of neural learning with symbolic reasoning. Deep learning excels at pattern recognition across language, images, and sensor data, but it struggles with planning, causality, and guarantees. Symbolic AI, by contrast, offers logic, rules, and explanations, yet falters when faced with messy, unstructured data. Neurosymbolic AI aims to bring these two worlds together, allowing systems to learn from data while reasoning with knowledge, producing AI that can justify decisions and avoid repeating known errors. We explore why simply adding more parameters and data has failed to solve hallucinations, brittleness, and trust issues. Artur shares how neurosymbolic approaches introduce what he describes as software assurances, ways to reduce the chance of critical errors by design rather than trial and error. From self-driving cars to finance and healthcare, he explains why combining learned behavior with explicit rules mirrors how high-stakes systems already operate in the real world. A major part of our discussion centers on explainability and accountability. Artur introduces the neurosymbolic cycle, sometimes called the NeSy cycle, which translates knowledge into neural networks and extracts knowledge back out again.
This two-way process opens the door to inspection, validation, and responsibility, shifting AI away from opaque black boxes toward systems that can be questioned, audited, and trusted. We also discuss why scaling neurosymbolic AI looks very different from scaling deep learning, with an emphasis on knowledge reuse, efficiency, and model compression rather than ever-growing compute demands. We also look ahead. From domain-specific deployments already happening today to longer-term questions around energy use, sustainability, and regulation, Artur offers a grounded view on where this field is heading and what signals leaders should watch for as neurosymbolic AI moves from research into real systems. If you care about building AI that is reliable, explainable, and trustworthy, this conversation offers a refreshing and necessary perspective. As the race toward more capable AI continues, are we finally ready to admit that reasoning, not just scale, may decide what comes next, and what kind of AI do we actually want to live with? Useful Links Neurosymbolic AI (NeSy) Association website Artur's personal webpage at City St George's, University of London Co-authored book titled "Neural-Symbolic Learning Systems" The article about neurosymbolic AI and the road to AGI The Accountability in AI article Reasoning in Neurosymbolic AI Neurosymbolic Deep Learning Semantics

    Why Stability Is Emerging As A New Performance Signal In Healthcare Tech

    Play Episode Listen Later Feb 1, 2026 25:24


    Why does healthcare keep investing in new technology while so many clinicians feel buried under paperwork and admin work that has nothing to do with patient care? In this episode of Tech Talks Daily, I'm joined by Dr. Rihan Javid, psychiatrist, former attorney, and co-founder and president of Edge. Our conversation cuts straight into an issue that rarely gets the attention it deserves, the quiet toll that administrative overload takes on doctors, care teams, and ultimately patients. Nearly half of physicians now link burnout to paperwork rather than clinical work, and Rihan explains why this problem keeps slipping past leadership discussions, even as budgets for digital tools continue to rise. Drawing on his experience inside hospitals and clinics, Rihan shares how operational design shapes outcomes in ways many healthcare leaders underestimate. We talk about why short-term staffing fixes often create new problems down the line, and how practices that invest in stable, well-trained remote administrative teams see real improvements. That includes faster billing cycles, fewer errors, and more time back for clinicians who want to focus on care rather than forms. What stood out for me was his framing of workforce infrastructure as a performance driver rather than a compliance box to tick. We also dig into how hybrid operations are becoming the default model. Local clinicians working alongside remote admin teams, supported by AI-assisted workflows, are now common across healthcare. Rihan is clear that while automation and AI can remove friction and cost, human oversight still matters deeply in high-compliance environments. Trust, accuracy, and patient confidence depend on knowing where automation fits and where human judgment must stay firmly in place. Another part of the discussion that stuck with me was Rihan's idea that stability is emerging as a better success signal than raw cost savings. 
High turnover may look efficient on paper, but it quietly limits a clinic's ability to grow, retain knowledge, and improve patient outcomes. We unpack why consistent administrative support can influence revenue cycles, satisfaction, and long-term resilience in ways traditional metrics often miss. If you're a healthcare leader, operator, or technologist trying to understand how AI, remote teams, and smarter operations can work together without losing trust or care quality, this conversation offers plenty to reflect on. As healthcare systems rethink how work gets done behind the scenes, what would it look like if stability and clinician well-being were treated as core performance measures rather than afterthoughts, and how might that change the future of care?

Useful Links
Connect with Dr. Rihan Javid
Edge Health
Rinova AI

Thanks to our sponsors, Alcor, for supporting the show.

    Why Relationship-First Platforms Will Win The Next AI Wave

    Play Episode Listen Later Jan 31, 2026 32:43


      Why do small business leaders keep buying more software yet still feel like they are drowning in logins, dashboards, and unfinished work? In this episode of Tech Talks Daily, I sit down with Jesse Lipson, founder and CEO of Levitate, to unpack a frustration I hear from business owners almost daily. After years of being pitched yet another tool, many leaders now spend hours each week troubleshooting software instead of serving customers. Jesse brings a grounded perspective shaped by decades of building SaaS companies, including bootstrapping ShareFile before its acquisition by Citrix, and what stood out to me immediately was how clearly he articulates where the current software model has broken down for small businesses. We talk about why adding more apps has not translated into better outcomes, especially for teams without dedicated specialists in marketing, finance, or sales. Jesse explains how traditional software often solves only part of the problem, leaving owners to become accidental experts in accounting, marketing strategy, or customer communications just to make the tools usable. From there, our conversation shifts toward what he believes will actually matter as AI adoption matures. Rather than chasing full automation or shiny new dashboards, Jesse argues that the real opportunity lies in blending intelligence with human guidance, allowing AI to work quietly behind the scenes while people remain the face of authentic relationships. A big part of our discussion centers on trust and connection in an AI-saturated world. Jesse shares why customers have become incredibly good at spotting automated communication and why relationship-based businesses cannot afford to lose the human element. We explore how AI can act as a second brain, helping business owners remember details, follow up at the right moments, and show up more thoughtfully, without crossing the line into impersonal automation that turns customers away. 
His examples, from marketing emails to customer support, make it clear that technology should support better relationships rather than replace them. We also look ahead to what small businesses should realistically focus on as AI evolves. Jesse offers practical guidance on getting started, from everyday use of conversational AI, to building internal documentation that allows systems to work more effectively, and eventually moving toward agent-based workflows that can take on real operational tasks. Throughout the conversation, he keeps returning to the same idea, that AI works best when it helps people become the kind of business leaders they already want to be, more present, more consistent, and more human. If you are a founder, operator, or small business leader feeling overwhelmed by tools that promise productivity but deliver friction, this episode offers a refreshing reset. As AI becomes more capable and more embedded in daily work, the real question is not how many systems you deploy, but whether they help you build stronger, more genuine relationships, so how are you choosing to use AI to support the human side of your business rather than bury it?

Useful Links
Connect with Jesse Lipson
Connect with Jesse on X
Learn more about Levitate

    Nyobolt And The Power Bottleneck Inside Modern AI Infrastructure

    Play Episode Listen Later Jan 30, 2026 22:46


    What happens when power, rather than compute, becomes the limiting factor for AI, robotics, and industrial automation? In this episode of Tech Talks Daily, I'm joined by Ramesh Narasimhan from Nyobolt to unpack a challenge that is quietly reshaping modern infrastructure. As AI training and inference workloads grow more dynamic, power demand is no longer predictable or steady. It can spike and drop in milliseconds, creating stress on systems that were never designed for this level of volatility. We talk about why data center operators, automation leaders, and industrial firms are being forced to rethink how energy is delivered, managed, and scaled. Our conversation moves beyond AI headlines and into the less visible constraints holding progress back. Ramesh explains how automation growth, particularly in robotics and autonomous mobile robot fleets, has exposed hidden inefficiencies. Charging downtime, thermal limits, and oversized systems are eroding productivity in warehouses and factories that aim to run around the clock. Instead of expanding physical footprints or adding redundant capacity, many operators are questioning whether the energy layer itself has become outdated. One of the themes that stood out for me is how energy has shifted from a background utility to a board-level concern. Power density, resilience, and cycle life are now discussed with the same urgency as compute performance or sensor accuracy. Ramesh shares why executives across logistics, automotive, advanced manufacturing, and AI infrastructure are starting to see energy strategy as a direct driver of uptime, cost control, and competitive advantage. We also explore the industry-wide push toward high-power, high-uptime operations. As businesses demand systems that can stay online continuously, the pressure is on energy technologies to respond faster, charge quicker, and occupy less space. 
This raises difficult questions about oversizing infrastructure for rare peak loads versus designing smarter systems that can flex in real time without waste. If you are building or operating AI clusters, robotics platforms, or industrial automation at scale, this episode offers a clear-eyed look at why energy systems may be the next major bottleneck and opportunity. As power becomes inseparable from performance, how ready is your organization to treat energy as a strategic asset rather than an afterthought?

    Cobalt Shares Hard Lessons From the State of Pen Testing Report

    Play Episode Listen Later Jan 28, 2026 26:43


    What happens when artificial intelligence starts accelerating cyberattacks faster than most organizations can test, fix, and respond? In this fast-tracked episode of Tech Talks Daily, I sat down with Sonali Shah, CEO of Cobalt, to unpack what real-world penetration testing data is revealing about the current state of enterprise security. With more than two decades in cybersecurity and a background that spans finance, engineering, product, and strategy, Sonali brings a grounded, operator-level view of where security teams are keeping up and where they are quietly falling behind. Our conversation centers on what happens when AI moves from an experiment to an attack surface. Sonali explains how threat actors are already using the same AI-enabled tools as defenders to automate reconnaissance, identify vulnerabilities, and speed up exploitation. We discuss why this is no longer theoretical, referencing findings from companies like Anthropic, including examples where models such as Claude have demonstrated both power and unpredictability. The takeaway is sobering but balanced. AI can automate a large share of the work, but human expertise still plays a defining role, both for attackers and defenders. We also dig into Cobalt's latest State of Pentesting data, including why median remediation times for serious vulnerabilities have improved while overall closure rates remain stubbornly low. Sonali breaks down why large enterprises struggle more than smaller organizations, how legacy systems slow progress, and why generative AI applications currently show some of the highest risk with some of the lowest fix rates. As more companies rush to deploy AI agents into production, this gap becomes harder to ignore. One of the strongest themes in this episode is the shift from point-in-time testing to continuous, programmatic risk reduction. 
Sonali explains what effective continuous pentesting looks like in practice, why automation alone creates noise and friction, and how human-led testing helps teams move from assumptions to evidence. We also address a persistent confidence gap, where leaders believe their security posture is strong, even when testing shows otherwise. We close by tackling one of the biggest myths in cybersecurity: the idea that security is ever finished. It is a constant process of preparation, testing, learning, and improvement. The organizations that perform best accept this reality and build security into daily operations rather than treating it as a one-off task. So as AI continues to accelerate both innovation and attacks, how confident are you that your security program is keeping pace, and what would continuous testing change inside your organization? I would love to hear your thoughts.

Useful Links
Connect with Sonali Shah
Learn more about Cobalt
Check out the Cobalt Learning Center
State of Pentesting Report

Thanks to our sponsors, Alcor, for supporting the show.

    LAMs (Large Action Models) and the Future of AI Ownership

    Play Episode Listen Later Jan 27, 2026 32:20


    What happens when AI stops talking and starts working, and who really owns the value it creates? In this episode of Tech Talks Daily, I'm joined by Sina Yamani, founder and CEO of Action Model, for a conversation that cuts straight to one of the biggest questions hanging over the future of artificial intelligence. As AI systems learn to see screens, click buttons, and complete tasks the way humans do, power and wealth are concentrating fast. Sina argues that this shift is happening far quicker than most people realize, and that the current ownership model leaves everyday users with little say and even less upside. Sina shares the thinking behind Action Model, a community-owned approach to autonomous AI that challenges the idea that automation must sit in the hands of a few giant firms. We unpack the concept of Large Action Models, AI systems trained to perform real online workflows rather than generate text, and why this next phase of AI demands a very different kind of training data. Instead of scraping the internet in the background, Action Model invites users to contribute actively, rewarding them for helping train systems that can navigate software, dashboards, and tools just as a human worker would. We also explore ActionFi, the platform's outcome-based reward layer, and why Sina believes attention-based incentives have quietly broken trust across Web3. Rather than paying for likes or impressions, ActionFi focuses on verifying real actions across the open web, even when no APIs or integrations exist. That raises obvious questions around security and privacy. This conversation does not shy away from the uncomfortable parts. We talk openly about job displacement, the economic reality facing businesses, and why automation is unlikely to slow down. Sina argues that resisting change is futile, but shaping who benefits from it remains possible.
He also reflects on lessons from his earlier fintech exit and how movements grow when people feel they are pushing back against an unfair system. By the end of the episode, we look ahead to a future where much of today's computer-based work disappears and ask what success and failure might look like for a community-owned AI model operating at scale. If AI is going to run more of the internet on our behalf, should the people training it have a stake in what it becomes, and would you trust an AI ecosystem owned by its users rather than a handful of billionaires?

Useful Links
Connect with Sina Yamani on LinkedIn or X
Learn more about the Action Model
Follow on X
Learn more about the Action Model browser extension
Check out the whitelabel integration docs
Join their Waitlist
Join their Discord community

Thanks to our sponsors, Alcor, for supporting the show.

    Pegasystems on Why Legacy Modernization Finally Has a Way Forward

    Play Episode Listen Later Jan 27, 2026 55:56


    What does it really take to remove decades of technical debt without breaking the systems that still keep the business running? In this episode of Tech Talks Daily, I sit down with Pegasystems leaders Dan Kasun, Head of Global Partner Ecosystem, and John Higgins, Chief of Client and Partner Success, to unpack why legacy modernization has reached a breaking point, and why AI is forcing enterprises to rethink how software is designed, sold, and delivered. Our conversation goes beyond surface-level AI promises and gets into the practical reality of transformation, partner economics, and what actually delivers measurable outcomes. We explore how Pega's AI-powered Blueprint is changing the entry point to enterprise-grade workflows, turning what used to be long, expensive discovery phases into fast, collaborative design moments that business and technology teams can engage with together. Dan and John explain why the old "wrap and renew" approach to legacy systems is quietly compounding technical debt, and why reimagining workflows from the ground up is becoming essential for organizations that want to move toward agentic automation with confidence. The discussion also dives into Pega's deep collaboration with Amazon Web Services, including how tools like AWS Transform and Blueprint work together to accelerate modernization at scale. We talk candidly about the evolving role of partners, why the idea of partners as an extension of a sales force is outdated, and how marketplaces are reshaping buying, building, and operating enterprise software. Along the way, we tackle some uncomfortable truths about AI hype, technical debt, and why adding another layer of technology rarely fixes the real problem. This is an episode for anyone grappling with legacy systems, skeptical of quick-fix AI strategies, or rethinking how partner ecosystems need to operate in a world where speed, clarity, and accountability matter more than ever.
As enterprises move toward multi-vendor, agent-driven environments, are we finally ready to retire legacy thinking along with legacy systems, or are we still finding new ways to delay the inevitable?

Useful Links
Connect with Dan Kasun
Connect with John Higgins
Learn more about Pega Blueprint

Thanks to our sponsors, Alcor, for supporting the show.

    UiPath and the Reality of Managing AI at Enterprise Scale

    Play Episode Listen Later Jan 26, 2026 26:20


    What does it really take to move AI from proof-of-concept to something that delivers value at scale? In this episode of Tech Talks Daily, I'm joined by Simon Pettit, Area Vice President for the UK and Ireland at UiPath, for a grounded conversation about what is actually happening inside enterprises as AI and automation move beyond experimentation. Simon brings a refreshingly practical perspective shaped by an unconventional career path that spans the Royal Navy, nearly two decades at NetApp, and more than seven years at UiPath. We talk about why the UK and Ireland remain a strategic region for global technology adoption, how London continues to play a central role for companies expanding into Europe, and why AI momentum in the region is very real despite the broader economic noise. A big part of our discussion focuses on why so many organizations are stuck in pilot mode. Simon explains how hype, fragmented experimentation, and poor qualification of use cases often slow progress, while successful teams take a very different approach. He shares real examples of automation already delivering measurable outcomes, from long-running public sector programs to newer agent-driven workflows that are now moving into production after clear ROI validation. We also explore where the next wave of challenges is emerging. As agentic AI becomes easier for anyone to create, Simon draws a direct parallel to the early days of cloud computing and VM sprawl. Visibility, orchestration, and cost control are becoming just as important as innovation itself. Without them, organizations risk losing control of workflows, spend, and accountability as agents multiply across the business. Looking ahead, Simon outlines why AI success will depend on ecosystems rather than single platforms. Partnerships, vertical solutions, and the ability to swap technologies as the market evolves will shape how enterprises scale responsibly. 
From automation in software testing to cross-functional demand coming from HR, finance, and operations, this conversation captures where AI is delivering today and where the real work still lies. If you're trying to separate AI momentum from AI noise, this episode offers a clear, experience-led view of what it takes to turn potential into progress. What would need to change inside your organization to move from pilots to production with confidence?

Useful Links
Learn more about Simon Pettit
Connect with UiPath
Follow on LinkedIn

Thanks to our sponsors, Alcor, for supporting the show.

    3568: Getty Images: How Brands Can Avoid AI's Sloppification of Visual Content

    Play Episode Listen Later Jan 25, 2026 39:36


    What happens when speed, scale, and convenience start to erode trust in the images brands rely on to tell their story? In this episode of Tech Talks Daily, I spoke with Dr. Rebecca Swift, Senior Vice President of Creative at Getty Images, about a growing problem hiding in plain sight, the rise of low-quality, generic, AI-generated visuals and the quiet damage they are doing to brand credibility. Rebecca brings a rare perspective to this conversation, leading a global creative team responsible for shaping how visual culture is produced, analyzed, and trusted at scale. We explore the idea of AI "sloppification," a term that captures what happens when generative tools are used because they are cheap, fast, and available, rather than because they serve a clear creative purpose. Rebecca explains how the flood of mass-produced AI imagery is making brands look interchangeable, stripping visuals of meaning, craft, and originality. When everything starts to look the same, audiences stop looking altogether, or worse, stop trusting what they see. A central theme in our discussion is transparency. Research shows that the majority of consumers want to know whether an image has been altered or created using AI, and Rebecca explains why this shift matters. For the first time, audiences are actively judging content based on how it was made, not just how it looks. We talk about why some brands misread this moment, mistaking AI usage for innovation, only to face backlash when consumers feel misled or talked down to. Rebecca also unpacks the legal and ethical risks many companies overlook in the rush to adopt generative tools. From copyright exposure to the use of non-consented training data, she outlines why commercially safe AI matters, especially for enterprises that trade on trust. We discuss how Getty Images approaches AI differently, with consented datasets, creator compensation, and strict controls designed to protect both brands and the creative community. 
The conversation goes beyond risk and into opportunity. Rebecca makes a strong case for why authenticity, real people, and human-made imagery are becoming more valuable, not less, in an AI-saturated world. We explore why video, photography, and behind-the-scenes storytelling are regaining importance, and why audiences are drawn to evidence of craft, effort, and intent. As generative AI becomes impossible to ignore, this episode asks a harder question. Are brands using AI as a thoughtful tool to support creativity, or are they trading long-term trust for short-term convenience, and will audiences continue to forgive that choice?

Useful Links
Connect with Dr. Rebecca Swift on LinkedIn
VisualGPS Creative Trends
Follow on Instagram and LinkedIn

Thanks to our sponsors, Alcor, for supporting the show.

    3567: What a Chief Communications Officer Really Does and Why It Matters

    Play Episode Listen Later Jan 25, 2026 25:15


    What actually happens when a company loses control of its own voice in a world full of channels, platforms, and constant noise? In this episode of Tech Talks Daily, I sat down with Joshua Altman, founder of beltway.media, to unpack what corporate communication really means in 2026 and why it has quietly become one of the most misunderstood leadership functions inside modern organizations. Joshua describes his work as a fractional chief communications officer, a role that sits above individual campaigns, tools, or channels and focuses instead on perception, trust, and consistency across everything a company says and does. Our conversation starts by challenging the assumption that communication is something you "turn on" when a product launches or a crisis hits. Joshua explains why corporate communication is not project-based and not owned by marketing alone. It touches internal updates, investor messaging, brand signals, packaging, email, social platforms, and even the tools teams choose to use every day. If it communicates with internal or external audiences and shapes how the company is perceived, it belongs in the communications function. When that function is missing or fragmented, confusion and noise tend to fill the gap. We also explored why communication has arguably become harder, not easier, despite the explosion of collaboration tools. Email was meant to simplify work, then Slack was meant to replace email, and now AI assistants are transcribing every meeting and surfacing more content than anyone can realistically process. Joshua makes a strong case for simplicity, clarity, and focus, arguing that organizations need to pick channels intentionally and use them well rather than spreading messages everywhere and hoping something lands. Technology naturally plays a big role in the discussion. 
From the shift away from tape-based media and physical workflows to the accessibility of live global collaboration and affordable computing power, Joshua reflects on how dramatically the workplace has changed since he started his career in video news production. He also shares a grounded view on AI, where it adds real value in speeding up research and reducing busywork, and where human judgment and storytelling still matter most. Toward the end of the conversation, we get into ROI, a question every leader eventually asks. Joshua offers a practical way to think about it, starting with the simple fact that founders, operators, and technical leaders get time back when they no longer have to manage communications themselves. From there, alignment, clarity, and consistency compound over time, even if the impact is not always visible in a single metric. As organizations look ahead and try to make sense of AI, platform shifts, and ever-shorter attention spans, are we investing enough thought into how our companies actually communicate, or are we still mistaking volume for clarity?

Useful Links
Connect with Joshua Altman
Learn more about beltway.media

Thanks to our sponsors, Alcor, for supporting the show.

    3566: How Ergodic Predicts Complex Disruptions Before They Happen

    Play Episode Listen Later Jan 24, 2026 37:53


    What if your AI systems could explain why something will happen before it does, rather than simply reacting after the damage is done? In this episode of Tech Talks Daily, I sat down with Zubair Magrey, co-founder and CEO of Ergodic AI, to unpack a different way of thinking about artificial intelligence, one that focuses on understanding how complex systems actually behave. Zubair's journey begins in aerospace engineering at Rolls-Royce, moves through a decade of large-scale enterprise AI programs at Accenture, and ultimately leads to building Ergodic, a company developing what he describes as world models for enterprise decision making. World models are often mentioned in research circles, but rarely explained in a way that business leaders can connect to real operational decisions. In our conversation, Zubair breaks that gap down clearly. Instead of training AI to spot patterns in past data and assume the future will look the same, world-model AI focuses on cause and effect. It builds a structured representation of how an organization works, how different parts interact, and how actions ripple through the system over time. The result is an AI approach that can simulate outcomes, test scenarios, and help teams understand the consequences of decisions before they commit to them. We explored why this matters so much as organizations move toward agentic AI, where systems are expected to recommend or even execute actions autonomously. Without an understanding of constraints, dependencies, and system dynamics, those agents can easily produce confident but unrealistic recommendations. Zubair explains how Ergodic uses ideas from physics and system theory to respect real-world limits like capacity, time, inventory, and causality, and why ignoring those principles leads to fragile AI deployments that struggle under pressure. The conversation also gets practical. 
Zubair shares how world-model simulations are being used in supply chain, manufacturing, automotive, and CPG environments to detect early risks, anticipate disruptions, and evaluate trade-offs before problems cascade across customers and regions. We discuss why waiting for perfect data often stalls AI adoption, how Ergodic's data-agnostic approach works alongside existing systems, and what it takes to deliver ROI that teams actually trust and use. Finally, we step back and look at the organizational side of AI adoption. As AI becomes embedded into daily workflows, cultural change, experimentation, and trust become just as important as models and metrics. Zubair offers a grounded view on how leaders can prepare their teams for faster cycles of change without losing confidence or control. As enterprises look ahead to a future shaped by autonomous systems and real-time decision making, are we building AI that truly understands how our organizations work, or are we still guessing based on the past, and what would it take to change that?

Useful Links
Connect with Zubair Magrey
Learn more about Ergodic AI

Thanks to our sponsors, Alcor, for supporting the show.

    3565: CKEditor and the Reality of Supporting Developers Across Every Tech Stack

    Play Episode Listen Later Jan 24, 2026 37:13


    What does it actually take to build trust with developers when your product sits quietly inside thousands of other products, often invisible to the people using it every day? In this episode of Tech Talks Daily, I sat down with Ondřej Chrastina, Developer Relations at CKEditor, to unpack a career shaped by hands-on experience, curiosity, and a deep respect for developer time. Ondřej's story starts in QA and software testing, moves through development and platform work, and eventually lands in developer relations. What makes his perspective compelling is that none of these roles felt disconnected. Each one sharpened his understanding of real developer friction, the kind you only notice when you have lived with a product day in and day out. We talked about what changes when you move from monolithic platforms to API-first services, and why developer relations looks very different depending on whether your audience is an application developer, a data engineer, or an integrator working under tight delivery pressure. Ondřej shared how his time at Kentico, Kontent.ai, and Ataccama shaped his approach to tooling, documentation, and examples. For him, theory rarely lands. Showing something that works, even in a small or imperfect way, tends to earn attention and respect far faster. At CKEditor, that thinking becomes even more interesting. The editor is everywhere, yet rarely recognized. It lives inside SaaS platforms, internal tools, CRMs, and content systems, quietly doing its job. We explored how developer experience matters even more when the product itself fades into the background, and why long-term maintenance, support, and predictability often outweigh short-term feature excitement. Ondřej also explained why building instead of buying an editor is rarely as simple as teams expect, especially when standards, security, and future updates enter the picture. We also got into the human side of developer relations. 
That means balancing credibility with business goals, staying useful rather than loud, and acting as a bridge between engineering, product, marketing, and the outside world. Ondřej was refreshingly honest about the role ego can play, and why staying close to real usage is the fastest way to keep yourself grounded. If you care about developer experience, internal tooling, or how invisible infrastructure shapes modern software, this conversation offers plenty to reflect on. What have you seen work, or fail, when it comes to earning developer trust, and where do you think developer relations still gets misunderstood?

Useful Links
Connect with Ondřej Chrastina
Learn more about CKEditor

Thanks to our sponsors, Alcor, for supporting the show.

    3564: Why Banking Is the Ultimate Test for Responsible AI

    Play Episode Listen Later Jan 23, 2026 34:15


    If artificial intelligence is meant to earn trust anywhere, should banking be the place where it proves itself first? In this episode of Tech Talks Daily, I'm joined by Ravi Nemalikanti, Chief Product and Technology Officer at Abrigo, for a grounded conversation about what responsible AI actually looks like when the consequences are real. Abrigo works with more than 2,500 banks and credit unions across the United States, many of them community institutions where every decision affects local businesses, families, and entire regional economies. That reality makes this discussion feel refreshingly practical rather than theoretical. We talk about why financial services has become one of the toughest proving grounds for AI, and why that is a good thing. Ravi explains why concepts like transparency, explainability, and auditability are not optional add-ons in banking, but table stakes. From fraud detection and lending decisions to compliance and portfolio risk, every model has to stand up to regulatory, ethical, and operational scrutiny. A false positive or an opaque decision is not just a technical issue, it can damage trust, disrupt livelihoods, and undermine confidence in an institution. A big focus of the conversation is how AI assistants are already changing day-to-day banking work, largely behind the scenes. Rather than flashy chatbots, Ravi describes assistants embedded directly into lending, anti-money laundering, and compliance workflows. These systems summarize complex documents, surface anomalies, and create consistent narratives that free human experts to focus on judgment, context, and relationships. What surprised me most was how often customers value consistency and clarity over raw speed or automation. We also explore what other industries can learn from community banks, particularly their modular, measured approach to adoption. 
With limited budgets and decades-old core systems, these institutions innovate cautiously, prioritizing low-risk, high-return use cases and strong governance from day one. Ravi shares why explainable AI must speak the language of bankers and regulators, not data scientists, and why showing the "why" behind a decision is essential to keeping humans firmly in control. As we look toward 2026 and beyond, the conversation turns to where AI can genuinely support better outcomes in lending and credit risk without sidelining human judgment. Ravi is clear that fully autonomous decisioning still has a long way to go in high-stakes environments, and that the future is far more about partnership than replacement. AI can surface patterns, speed up insight, and flag risks early, but people remain essential for context, empathy, and final accountability. If you're trying to cut through the AI noise and understand how trust, governance, and real-world impact intersect, this episode offers a rare look at how responsible AI is actually being built and deployed today. And once you've listened, I'd love to hear your perspective. Where do you see AI earning trust, and where does it still have something to prove?

    3563: Vijay Rajendran on Why Startup Advice Fails When Reality Kicks In

    Play Episode Listen Later Jan 23, 2026 26:33


    What really happens after the startup advice runs out and founders are left facing decisions no pitch deck ever prepared them for? In this episode of Tech Talks Daily, I sit down with Vijay Rajendran, a founder, venture capitalist, UC Berkeley instructor, and author of The Funding Framework, to discuss the realities of company building that rarely appear on social feeds or investor blogs. Vijay has spent years working alongside founders at the sharpest end of growth, from early fundraising conversations through to the personal and leadership shifts that scaling demands. That experience shapes a conversation that feels refreshingly honest, thoughtful, and grounded in lived reality. We explore why building something people actually want sounds simple in theory yet proves brutally difficult in practice. Vijay explains how timing, learning velocity, and the willingness to adapt often matter more than stubborn vision, and why many founders misunderstand what momentum really looks like. From there, the discussion moves into investor relationships, not as transactional events, but as long-term partnerships that require founders to shift their mindset from defense to evaluation. The emotional and psychological dynamics of fundraising come into focus, especially the moments when founders underestimate how much power they actually have in shaping those relationships. A big part of this conversation centers on leadership identity. Vijay breaks down the messy transition from being the "chief everything officer" to becoming a true chief executive, and why the most overlooked stage in that journey is learning how to enable others. We talk about the point where founders become the bottleneck, often without realizing it, and why this tends to surface as teams grow and decisions start happening outside the founder's direct line of sight. The plateau many companies hit around scale becomes less mysterious when viewed through this lens. 
We also challenge some of the most popular startup advice circulating online today, particularly around fundraising volume, pitching styles, and the idea that persistence alone guarantees outcomes. Vijay shares why treating fundraising like enterprise sales, focusing on alignment over volume, and listening more than pitching often leads to better results. The conversation closes with practical reflections on personal growth, co-founder dynamics, and how leaders can regain clarity during periods of pressure without stepping away from responsibility. If you are building a company, leading a team, or questioning whether you are evolving as fast as your business demands, this episode will likely hit closer to home than you expect. And once you've listened, I'd love to hear what resonated most with you and the leadership questions you're still sitting with after the conversation.

Useful Links
Connect with Vijay Rajendran
The Funding Framework
Startup Pitch Deck

Thanks to our sponsors, Alcor, for supporting the show.

    3562: Veeva Systems on AI and the Future of Clinical Trials

    Play Episode Listen Later Jan 22, 2026 28:22


    What happens when decades of clinical research experience collide with a regulatory environment that is changing faster than ever? In this episode of Tech Talks Daily, I sat down with Dr Werner Engelbrecht, Senior Director of Strategy at Veeva Systems, for a wide-ranging conversation that explores how life sciences organizations across Europe are responding to mounting regulatory pressure, rapid advances in AI, and growing expectations around transparency and patient trust. Werner brings a rare perspective to this discussion. His career spans clinical research, pharmaceutical development, health authorities, and technology strategy, shaped by firsthand experience as an investigator and later as a senior industry leader.  That background gives him a grounded, practical view of what is actually changing inside pharma and biotech organizations, beyond the headlines around AI Acts, data rules, and compliance frameworks. We talk openly about why regulations such as GDPR, the EU AI Act, and ACT-EU are creating real pressure for organizations that are already operating in highly controlled environments. But rather than framing compliance as a blocker, Werner explains why this moment presents an opening for better collaboration, stronger data foundations, and more consistent ways of working across internal teams. According to him, the real challenge is less about technology and more about how companies manage data quality, align processes, and break down silos that slow everything from trial setup to regulatory response times. Our conversation also digs into where AI is genuinely making progress today in life sciences and where caution still matters. Werner shares why drug discovery and non-patient-facing use cases are moving faster, while areas like trial execution and real-world patient data still demand stronger evidence, cleaner datasets, and clearer governance. 
His perspective cuts through hype and focuses on what is realistic in an industry where patient safety remains the defining responsibility. We also explore patient recruitment, decentralized trials, and the growing complexity of diseases themselves. Advances in genomics and diagnostics are reshaping how trials are designed, which in turn raises questions about access to electronic health records, data harmonization across Europe, and the safeguards regulators care about most. Werner connects these dots in a way that highlights both the operational strain and the long-term upside. Toward the end, we look ahead at emerging technologies such as blockchain and connected devices, and how they could strengthen data integrity, monitoring, and regulatory confidence over time. It is a thoughtful discussion that reflects both optimism and realism, rooted in lived experience rather than theory. If you are working anywhere near clinical research, regulatory affairs, or digital transformation in life sciences, this episode offers a clear-eyed view of where the industry stands today and where it may be heading next. How should organizations turn regulation into momentum instead of resistance, and what will it take to earn lasting trust from patients, partners, and regulators alike?

Useful Links
Connect with Dr Werner Engelbrecht
Learn more about Veeva Systems
Veeva Summit Europe and Veeva Summit USA
Follow on LinkedIn

Thanks to our sponsors, Alcor, for supporting the show.

    3561: Xero on Trust, Technology, and the Future of Accounting Relationships

    Play Episode Listen Later Jan 21, 2026 23:58


    What happens when an industry that has barely changed for generations suddenly finds itself at the center of one of the biggest shifts in modern work? In this episode of Tech Talks Daily, I'm joined by Kate Hayward, UK Managing Director at Xero, for a conversation about how accounting is being reshaped by technology, education, regulation, and changing expectations from clients and talent alike. Kate describes this moment as the largest reorganization of human capital in the history of the profession, and as we talk, it becomes clear why that claim is gaining traction. We explore how AI is shifting accountants away from pure number processing and toward higher-value advisory work, without stripping away the deep financial understanding the role still demands. Kate shares why so many practices are reporting higher revenues and profits, and how technology is acting as a catalyst for rethinking long-standing workflows rather than simply speeding up broken ones. We also dig into research showing that pairing AI with financial education strengthens analytical thinking while leaving core calculation skills intact, a useful counterpoint to the more dramatic headlines about machines replacing people. Our conversation moves into the practical reality of how firms are using tools like ChatGPT today, from scenario planning to preparing for difficult client conversations, while also discussing where caution still matters, particularly around data security and core financial workflows. Kate also explains how government initiatives such as Making Tax Digital and the digitization of HMRC are changing client expectations and deepening the relationship between accountants and the businesses they support. 
We also spend time on the future of the profession, including how hiring strategies are evolving, why problem-solving and communication skills are becoming just as valuable as technical knowledge, and why private equity interest in accounting is accelerating digital adoption across the sector. Kate rounds things out by sharing how Xero is thinking about product design in 2026, what users can expect next, and why keeping the human side of the profession front and center still matters. So as accounting moves further into an AI-assisted, digitally native future, how do firms balance efficiency, trust, identity, and long-term relevance, and what lessons can other industries take from this moment of change?

Useful Links
Follow Kate Hayward on LinkedIn
Accounting and Bookkeeping Industry Report
Xero Website
Follow on LinkedIn, Facebook, X, YouTube, Instagram

    3560: How People.ai is Turning Sales Activity Into Answers Leaders Can Act On

    Play Episode Listen Later Jan 20, 2026 33:51


    What does sales leadership actually look like once the AI experimentation phase is over and real results are the only thing that matters? In this episode of Tech Talks Daily, I sit down with Jason Ambrose, CEO of the Iconiq-backed AI data platform People.ai, to unpack why the era of pilots, proofs of concept, and AI theater is fading fast. Jason brings a grounded view from the front lines of enterprise sales, where leaders are no longer impressed by clever demos. They want measurable outcomes, better forecasts, and fewer hours lost to CRM busywork. This conversation goes straight to the tension many organizations are feeling right now: the gap between AI potential and AI performance. We talk openly about why sales teams are drowning in activity data yet still starved of answers. Emails, meetings, call transcripts, dashboards, and dashboards about dashboards have created fatigue rather than clarity. Jason explains how turning raw activity into crisp, trusted answers changes how sellers operate day to day, pulling them back into customer conversations instead of internal reporting loops. The discussion challenges the long-held assumption that better selling comes from more fields, more workflows, and more dashboards, arguing instead that AI should absorb the complexity so humans can focus on judgment, timing, and relationships. The conversation also explores how tools like ChatGPT and Claude are quietly dismantling the walls enterprise software spent years building. Sales leaders increasingly want answers delivered in natural language rather than another system to log into, and Jason shares why this shift is creating tension for legacy platforms built around walled gardens and locked-down APIs. We look at what this means for architecture decisions, why openness is becoming a strategic advantage, and how customers are rethinking who they trust to sit at the center of their agentic strategies.
Drawing on work with companies such as AMD, Verizon, NVIDIA, and Okta, Jason shares what top-performing revenue organizations have in common. Rather than chasing sameness, scripts, and averages, they lean into curiosity, variation, and context. They look for where growth behaves differently by market, segment, or product, and they use AI to surface those differences instead of flattening them away. It is a subtle shift, but one with big implications for how sales teams compete. We also look ahead to 2026 and beyond, including how pricing models may evolve as token consumption becomes a unit of value rather than seats or licenses. Jason explains why this shift could catch enterprises off guard, what governance will matter, and why AI costs may soon feel as visible as cloud spend did a decade ago. The episode closes with a thoughtful challenge to one of the biggest myths in the industry, the belief that selling itself can be fully automated, and why the last mile of persuasion, trust, and judgment remains deeply human. If you are responsible for revenue, sales operations, or AI strategy, this episode offers a clear-eyed look at what changes when AI stops being an experiment and starts being held accountable. So what assumptions about sales and AI are you still holding onto, and are they helping or quietly holding you back?

Useful Links
Follow Jason Ambrose on LinkedIn
Learn more about people.ai
Follow on LinkedIn

Thanks to our sponsors, Alcor, for supporting the show.

    3559: Conviva CEO on Turning Experimental AI Agents Into Reliable Systems

    Play Episode Listen Later Jan 19, 2026 29:33


    In this episode of Tech Talks Daily, I sat down with Keith Zubchevich, CEO of Conviva, to unpack one of the most honest analogies I have heard about today's AI rollout. Keith compares modern AI agents to toddlers being sent out to get a job, full of promise, curious, and energetic, yet still lacking the judgment and context required to operate safely in the real world. It is a simple metaphor, but it captures a tension many leaders are feeling as generative AI matures in theory while so many deployments stumble in practice. As ChatGPT approaches its third birthday, the narrative suggests that GenAI has grown up. Yet Keith argues that this sense of maturity is misleading, especially inside enterprises chasing measurable returns. He explains why so many pilots stall or quietly disappoint, not because the models lack intelligence, but because organizations often release agents without clear outcomes, real-time oversight, or an understanding of how customers actually experience those interactions. The result is AI that appears to function well internally while quietly frustrating users or failing to complete the job it was meant to do. We also dig into the now infamous Chevrolet chatbot incident that sold a $76,000 vehicle for one dollar, using it as a lens to examine what happens when agents are left without boundaries or supervision. Keith makes a strong case that the next chapter of enterprise AI will not be defined by ever-larger models, but by visibility. He shares why observing behavior, patterns, sentiment, and efficiency in real time matters more than chasing raw accuracy, especially once AI moves from internal workflows into customer-facing roles. This conversation will resonate with anyone under pressure to scale AI quickly while worrying about brand risk, accountability, and trust. 
Keith offers a grounded view of what effective AI "parenting" looks like inside modern organizations, and why measuring the customer experience remains the most reliable signal of whether an AI system is actually growing up or simply creating new problems at speed. As leaders rush to put agents into production, are we truly ready to guide them, or are we sending toddlers into the workforce and hoping for the best?

Useful Links
Connect with Keith Zubchevich
Learn more about Conviva
Chevrolet Dealer Chatbot Agrees to Sell Tahoe for $1

Thanks to our sponsors, Alcor, for supporting the show.

    3558: Do You Really Have an Offline Backup, or Just the Illusion of One?

    Play Episode Listen Later Jan 18, 2026 25:08


    In this episode of Tech Talks Daily, I sit down with Imran Nino Eškić and Boštjan Kirm from HyperBUNKER to unpack a problem many organisations only discover in their darkest hour. Backups are supposed to be the safety net, yet in real ransomware incidents, they are often the first thing attackers dismantle. Speaking with two people who cut their teeth in data recovery labs across 50,000 real cases gave me a very different perspective on what resilience actually looks like. They explain why so many so-called "air-gapped" or "immutable" backups still depend on identities, APIs, and network pathways that can be abused. We talk through how modern attackers patiently map environments for weeks before neutralising recovery systems, and why that shift makes true physical isolation more relevant than ever. What struck me most was how calmly they described failure scenarios that would keep most leaders awake at night. The heart of the conversation centres on HyperBUNKER's offline vault and its spaceship-style double airlock design. Data enters through a one-way hardware channel, the network door closes, and only then is information moved into a completely cold vault with no address, no credentials, and no remote access. I also reflect on seeing the black box in person at the IT Press Tour in Athens and why it feels less like a gadget and more like a last-resort lifeline. We finish by talking about how businesses should decide what truly belongs in that protected 10 percent of data, and why this is as much a leadership decision as an IT one. If everything vanished tomorrow, what would your company need to breathe again, and would it actually survive?

Useful Links
Connect with Imran Nino Eškić
Connect with Boštjan Kirm
Learn more about HyperBUNKER
Learn more about the IT Press Tour

Thanks to our sponsors, Alcor, for supporting the show.
