How is AI reshaping our relationship with work, and what does that mean for the tools we rely on every day? In this episode of Tech Talks Daily, I'm joined by Cory McElroy, Vice President of Commercial Product Management at HP. Our conversation begins with a reflection on one of the most famous garages in technology history. The original HP garage in Palo Alto is often described as the birthplace of Silicon Valley, and standing there recently reminded me how far the industry has come since those early days. But as Cory explains, we may be entering another turning point. The nature of work has shifted rapidly in just a few years. Hybrid work is now the norm for millions of people, and expectations around workplace technology have changed with it. Employees no longer see technology as a basic productivity tool. They expect it to adapt to them, reduce friction, and help them focus on meaningful work. Cory shares insights from HP's Work Relationship Index, which highlights a striking reality. Only around 20 percent of employees say they have a healthy relationship with work. That number sounds concerning at first, but it also points to an opportunity. When organizations provide the right tools and experiences, employees become more productive, more creative, and more likely to stay. A big theme throughout our conversation is the growing role of AI directly on devices. Running AI locally on PCs changes how people interact with technology. Tasks that once took hours, such as analyzing documents or extracting insights from data, can now happen almost instantly. In some internal deployments at HP, employees reported saving up to four hours each week. We also talk about the hardware innovations that are emerging in response to this shift. Cory explains how new devices like the HP EliteBook X and the EliteBoard reflect a rethink of the PC itself. 
The EliteBoard, for example, integrates a full PC inside a keyboard, allowing users to connect to any display and instantly access desktop-level performance. It is a design that reflects the flexibility people now expect from modern workspaces. Looking ahead, Cory believes the next few years will bring even bigger change. Devices will increasingly understand context, connect seamlessly with other tools, and respond to natural language requests. Instead of jumping between multiple applications to complete a task, users may simply ask their device to assemble information and produce the outcome they need. So as AI becomes embedded into the devices we use every day and work continues to evolve, what would a truly frictionless workday look like for you, and how will your relationship with technology change as a result?
How do you secure a modern business when identities no longer belong only to employees, but also to partners, machines, applications, and increasingly AI agents? In this episode of Tech Talks Daily, I sat down with Paul Zolfaghari, President of Saviynt, to unpack why identity security has moved from a background IT function to one of the defining challenges facing modern enterprises. Over the past decade, the identity problem has expanded far beyond the traditional office worker logging into internal systems. Today's organizations must manage access across a vast digital ecosystem that includes contractors, suppliers, customers, APIs, machines, and now autonomous AI agents. Paul explains how this shift has fundamentally changed the way security leaders think about identity governance. The challenge is no longer limited to preventing unauthorized access from outside attackers. Instead, companies must manage the complex question of who, or what, should have access to specific data, systems, and processes at any given moment. When thousands of employees, partners, and automated systems interact across multiple cloud platforms, the complexity grows rapidly. We also explore how the rise of non-human identities is reshaping the security landscape. Machines, software services, and AI agents now operate alongside human employees inside enterprise environments. In many cases, these digital identities are already beginning to outnumber people. As AI agents gain the ability to gather information, adapt to context, and take actions autonomously, organizations must rethink how access permissions are granted, monitored, and governed. Another theme that emerged during our conversation is the idea that identity security is not only about protection. While it clearly sits within the cybersecurity domain, Paul argues that identity governance also acts as a business enabler. 
When the right people and systems can access the right information at the right time, organizations operate more efficiently and collaborate more effectively across complex supply chains and partner ecosystems. We also discussed findings from Saviynt's CISO AI Risk Report, which highlights a growing concern among security leaders. AI adoption is accelerating rapidly, often moving faster than the governance frameworks designed to manage it. This creates a challenge for organizations trying to adopt AI responsibly while maintaining visibility and control over how these technologies interact with enterprise systems. With more than 600 enterprise customers and a recent $700 million growth investment backing its expansion, Saviynt is operating in a market that many investors now view as one of the defining layers of modern digital infrastructure. Identity, in many ways, is becoming the control plane for how businesses operate in an AI-driven world. Looking ahead, Paul believes organizations must begin preparing for a future where digital identities dramatically outnumber human employees. That shift will require new approaches to governance, visibility, and control. So as AI adoption accelerates and businesses continue expanding across cloud platforms and digital ecosystems, one question becomes impossible to ignore. Is identity security ready to serve as the foundation for how organizations operate in the next decade?

Useful Links
Connect with Paul Zolfaghari
Check out the Saviynt Website
Follow on Facebook, LinkedIn, and X
How prepared are organizations for a world where today's encrypted communications could be quietly stored and cracked years from now? In this episode of Tech Talks Daily, I sat down with Nate Jenniges, Senior Vice President and General Manager at BlackBerry, to talk about why the conversation around quantum computing is moving from academic curiosity to operational reality. For many leaders, quantum threats still feel distant, something for researchers and cryptographers to worry about. But as Nate explained, governments and adversaries are already capturing encrypted data today with the expectation that it can be decrypted later when quantum capabilities mature. This idea of "harvest now, decrypt later" attacks completely changes the timeline for security planning. If sensitive information needs to remain confidential for five, ten, or even twenty years, the exposure may already have started. That means the challenge is no longer theoretical. It is becoming a strategic issue that boards, CISOs, and government leaders must begin addressing right now. One of the most interesting parts of our conversation focused on something many people rarely think about. Metadata. While encryption protects the content of a message or phone call, the surrounding patterns often reveal just as much. Who spoke to whom, how often, from where, and at what time can tell a surprisingly detailed story. With modern analytics and AI tools, these patterns can expose command structures, business relationships, or crisis response activity even if the message itself remains encrypted. Nate explained why this is becoming a frontline issue in the emerging post-quantum era. As organizations integrate AI into communication platforms, new forms of metadata are emerging from model interactions, system queries, and inference activities. That means protecting communications requires a broader view than simply upgrading encryption algorithms. 
We also explored how governments and highly regulated sectors are preparing for this shift. BlackBerry today operates in a very different space than many people remember, focusing on identity-verified, mission-critical communications used by governments and institutions that cannot afford uncertainty. These systems are designed to operate during the moments that matter most, whether that involves cyber incident response, national security coordination, or emergency response to climate-related events. Another theme that stood out was the leadership challenge behind quantum readiness. Nate believes organizations should avoid treating quantum as a separate security initiative. Instead, it should be integrated into the technology refresh cycles that companies already manage, including hardware updates, software upgrades, and certificate renewals. The organizations that begin asking the right questions today will avoid scrambling later when regulatory expectations tighten and deadlines arrive. By the end of our conversation, one message became very clear. The first real defense in the post-quantum era may not come from stronger encryption alone. It may come from understanding and controlling the communication patterns and metadata that surround every digital interaction. As quantum computing research accelerates and governments begin setting deadlines for post-quantum security readiness, the question becomes increasingly hard to ignore. Are organizations truly prepared for the communications challenges that the next decade may bring?
Why are employees still drowning in administrative work despite years of digital transformation, new software platforms, and constant promises that technology will make work easier? In this episode of Tech Talks Daily, I explore that question with Jason Spry from Ricoh Europe. What begins as a discussion about a new Ricoh research report quickly turns into a much broader conversation about how modern workplaces actually operate day to day. The findings are striking. Employees across Europe are losing an average of 15 hours every week to routine administrative tasks. That is time spent searching for documents, reentering data across systems, preparing reports manually, and navigating layers of disconnected tools. For many organizations, this creates a strange contradiction. Leadership teams often believe that new platforms and software will simplify workflows, yet many employees feel the opposite. The tools designed to make work easier sometimes create additional layers of complexity. Jason shares his perspective from nearly three decades in document processing and outsourcing, explaining how years of digital initiatives have often resulted in systems stacked on top of one another rather than genuinely simplified workflows. The result is a fragmented experience where finding the latest version of a document or locating the right information for a meeting can consume far more time than it should. We also discuss the hidden risks behind these inefficiencies. When documents are scattered across systems or poorly managed, the consequences go beyond frustration. Ricoh's research shows that many organizations have experienced compliance breaches or near misses because important documents were missing, misfiled, or simply impossible to locate at the right moment. Jason explains why governance, visibility, and consistent document management are becoming increasingly important in a world where decisions rely on accurate information. 
Another theme that runs throughout this conversation is the idea of marginal gains. Small inefficiencies like searching for files, reentering data, or preparing documents for meetings might seem trivial in isolation. Yet when they happen hundreds of times across a workforce, they add up to a serious productivity drain. Jason compares it to the concept of improving performance by one percent at a time. Removing even a few of these micro frustrations can transform how people experience their workday. Naturally, we also talk about automation and AI. But Jason offers a refreshing perspective here as well. Rather than starting with the technology, he argues that organizations should begin by identifying the real pain points employees face. That often means speaking directly with the people doing the work and asking what frustrates them most. Once those challenges are clear, automation and intelligent document management tools can start delivering results quickly, sometimes within weeks rather than years. By the end of the conversation, it becomes clear that solving the admin overload problem does not always require massive transformation projects. Often the answer lies in simplifying processes, connecting systems more intelligently, and removing the small friction points that slow everyone down. So I am curious. How much time do you think your organization loses to administrative work each week, and what simple changes could give employees that time back?
How do you turn trillions of user interactions into meaningful decisions without drowning in data? In this episode of Tech Talks Daily, I sit down with Todd Olson, co-founder and CEO of Pendo, to talk about the future of product-led organizations and why AI is reshaping how software companies grow, build, and compete. Pendo tracks trillions of product usage events to help organizations understand how customers actually interact with their software. That level of data sounds powerful, but it also raises a challenge many teams face today. How do you turn massive data sets into clear signals that teams can act on without falling into analysis paralysis? Todd explains how Pendo approaches this problem by organizing product data around real user journeys, feature adoption, and areas where people drop off. Instead of leaving teams buried in dashboards, the goal is to surface insights that matter. Increasingly, AI is helping by acting as a kind of embedded analyst that highlights the patterns product teams should focus on. Our conversation also revisits the idea behind Todd's book, The Product-Led Organization. When it was published around the time of the pandemic, it argued that great products should do much of the heavy lifting traditionally done by sales or support teams. Looking back now, Todd believes the core idea remains intact. AI simply accelerates the model by allowing companies to experiment faster and scale product-driven experiences with far fewer people. But that shift is also creating tension in the software industry. We talk about the so-called reckoning in SaaS economics and the growing debate around whether AI will make traditional software companies obsolete. Todd offers a more measured perspective. While AI allows anyone to prototype software quickly, the companies that survive will still be the ones solving difficult problems, navigating compliance requirements, and building products that customers trust. 
Another theme we explore is geography and innovation. Pendo is headquartered in Raleigh, North Carolina, far from the usual coastal tech hubs. Todd shares how building outside Silicon Valley has shaped the company's culture, talent strategy, and mindset. There are advantages to being close to the center of the AI boom, but there is also value in building away from the echo chamber. We also spend time unpacking the rise of AI-assisted development and the trend many people call "vibe coding." Todd believes AI will dramatically reshape product teams, but he also pushes back against the idea that humans will disappear from the development process. Engineers will still need to review code, teach AI systems best practices, and ensure security and reliability. One of the most interesting moments in our conversation comes near the end when Todd shares a belief that originality will become one of the most valuable assets in the age of AI. As automated content and automated code become easier to generate, he believes people will increasingly value craft, taste, and original thinking. So in a world where AI can generate almost anything with a prompt, the real question becomes far more human. What problems are actually worth solving? If you care about the future of software, product strategy, and how AI is reshaping the economics of building companies, this is a conversation that offers plenty to think about. And after listening, I would love to hear your perspective. As AI becomes embedded in every product and workflow, do you believe originality and craft will become the true differentiators in the software industry?
Have you ever contacted customer support with a simple request, only to find yourself trapped in a loop of scripted chatbot responses that never actually solve the problem? It's an experience many of us know all too well. AI has made customer service more conversational over the last few years, yet there is still a gap between answering a question and actually resolving an issue. That gap is exactly where today's conversation begins. In this episode of Tech Talks Daily, I spoke with Mike Szilagyi, SVP and General Manager of Product Management at Genesys Cloud, about a new chapter in AI-powered customer experience. Genesys has announced what it describes as the industry's first agentic virtual agent built on Large Action Models, or LAMs. While Large Language Models have dominated the conversation around AI for the past few years, they have largely focused on generating responses, retrieving knowledge, or answering questions. What they have struggled with is execution. Mike explained how Large Action Models take the next step. Rather than simply telling a customer how to solve a problem, these systems can plan and execute the steps needed to complete a task. Imagine contacting an airline after a sudden flight cancellation. Instead of navigating multiple menus or repeating information to a human agent, an agentic virtual assistant could understand your situation, check alternative flights, apply airline policies, and complete the rebooking process across several systems. In other words, the AI moves from conversation to action. We also explored how Genesys approached the design of this technology with enterprise governance in mind. From explainable decision paths and audit logs to guardrails that ensure every automated action can be traced and understood, the goal is to make autonomous AI trustworthy inside complex organizations. 
Mike also shared insights into Genesys' partnership with Scaled Cognition and how integrating specialized models helps deliver reliable execution in real-world customer service environments. Perhaps most interesting was our discussion about the human role in this evolving contact center landscape. As automation begins to handle routine and multi-step workflows, human agents are free to focus on situations that require empathy, judgment, and expertise. That shift raises interesting questions about how organizations design customer experiences in the years ahead. So how will customers respond when virtual agents move beyond answering questions and begin resolving problems on their behalf? And once one brand delivers that experience, will it quickly become the expectation?

Useful Links
Connect with Mike Szilagyi
Learn more about Genesys
Genesys Agentic Virtual Agent Powered by LAMs for Enterprise CX
Follow on LinkedIn
What does it take to design a data center for a world where the technology inside it may change several times before the building even opens? In this episode of Tech Talks Daily, I sit down with Jackson Metcalf, Principal at Gensler, to talk about how AI is forcing a complete rethink of data center design. Jackson has spent nearly two decades working on critical facilities, and in our conversation he explains how the shift from traditional cloud workloads to dense AI environments is changing everything from building form and cooling strategy to long-term infrastructure planning. What struck me most in this conversation is the sheer mismatch in timescales. Data centers can take two and a half to three years to design and build, while chip and GPU roadmaps are evolving in cycles of months. Jackson explains why that means designing for a fixed end state no longer makes sense. Instead, the future may belong to facilities built with flexibility at their core, spaces that can be reconfigured, upgraded, and even conceptually rebuilt over time rather than treated as static assets. We also talk about what hyper-flexibility actually means in practice. This is not just a buzzword. It is about designing buildings with enough structural and engineering headroom to support very different cooling and power models over their lifespan. As AI workloads push cabinet densities to levels that would have sounded impossible only a few years ago, the need for plug-and-play mechanical and electrical infrastructure becomes far more than a design preference. It becomes essential. Another fascinating part of the conversation centers on sustainability. Jackson shares why durable, well-built structures can create long-term environmental value, even in an industry often criticized for its energy demands. We discuss embodied carbon, adaptive reuse, and why a high-quality building may have a much better second life than something built purely for short-term speed. 
That leads into a wider conversation about repositioning underused real estate, from former industrial facilities to vacant office buildings, as potential digital infrastructure. We also get into the growing energy challenge behind AI. With demand for power rising fast, and the US grid under increasing pressure, many operators are now weighing options such as on-site natural gas generation while waiting for cleaner long-term alternatives to mature. Jackson offers a thoughtful perspective on the tension between urgent infrastructure needs and environmental responsibility, as well as the uncertainty surrounding future energy roadmaps. Looking further ahead, I ask Jackson what will define a successful data center campus in the years to come. Will it be raw megawatts, adaptability, carbon intensity, location strategy, or something else entirely? His answer opens up a much bigger conversation about whether these buildings can become more connected to the communities around them, and what role they may play in a future where digital infrastructure is no longer hidden in the background, but central to how society functions. So if AI is pushing data center design to extremes, how do we build facilities that are ready for what comes next without becoming obsolete almost as soon as they open? And what does sustainable, adaptable digital infrastructure really look like in practice?
How do global companies make confident decisions when supply chains are constantly disrupted by tariffs, geopolitical tension, shifting consumer demand, and unpredictable global events? In this episode of Tech Talks Daily, I sat down with Dr. Ashwin Rao, EVP of AI and R&D at o9 Solutions, to talk about how artificial intelligence is changing the way organizations plan, forecast, and respond to uncertainty. Ashwin brings a fascinating mix of experience to the conversation. After earning a PhD in mathematics and computer science, he spent fifteen years on Wall Street working on derivatives trading strategies at Goldman Sachs and Morgan Stanley before moving into the world of enterprise technology. Today, he operates at the meeting point between business and academia as both a senior AI leader and an adjunct professor at Stanford University. Our conversation begins with Ashwin's unusual career path and how those early experiences in finance shaped the way he thinks about risk, decision making, and real-world AI deployment. The journey from theoretical mathematics to trading floors and eventually into Silicon Valley offers an interesting lens on how analytical thinking can travel across industries and still remain highly relevant. We then move into the work happening at o9 Solutions, where AI is helping organizations make smarter decisions across supply chain planning, demand forecasting, and inventory management. In a world that Ashwin describes with the acronym VUCA (volatility, uncertainty, complexity, and ambiguity), businesses are under pressure to react faster and make better-informed decisions. He explains how enterprise AI platforms can connect fragmented data across departments and create a more complete view of the business. One example he shares brings the concept down to earth.
Even predicting how many bananas a grocery store should stock on any given day requires analyzing internal sales trends alongside external signals such as weather, social media trends, and economic conditions. Machine learning systems can now process those signals in real time and continuously update forecasts so businesses can respond quickly to changes. We also explore the rise of neuro-symbolic AI, an approach Ashwin believes represents the next stage in enterprise decision-making. Rather than relying only on large language models, it blends the structured reasoning of symbolic systems with the pattern recognition of neural networks. The result, he suggests, feels less like a chatbot and more like having an expert coach embedded inside the decision-making process. Along the way, we also discuss why many organizations still struggle to embed AI successfully. Technology is only one piece of the puzzle. Ashwin believes the toughest obstacle is organizational change management, bringing teams together, connecting data across silos, and helping leaders guide their organizations through transformation. If you have ever wondered how AI moves beyond chatbots and into the systems that quietly power global supply chains, this conversation offers a thoughtful and practical perspective. So, how prepared is your organization to make decisions in a world defined by volatility and uncertainty, and could AI become the trusted partner that helps guide those choices?

Useful Links
Ashwin's blog
Ashwin's LinkedIn
o9 Solutions Website
o9 LinkedIn
How close are we to the moment when quantum computing moves from scientific curiosity to real-world infrastructure? In today's episode of Tech Talks Daily, I speak with Christian Weedbrook, Founder and CEO of Xanadu, a company pushing the boundaries of what quantum computers might soon achieve. Xanadu has taken an unconventional route in the race to build practical quantum systems. Instead of relying on electronic approaches used by many others in the field, the company builds quantum computers using photonics, effectively computing with particles of light. Christian explains why this matters and how working with photons could unlock advantages in energy efficiency, scalability, and networking as quantum machines grow into large data center–scale systems. The conversation also arrives at a fascinating moment for the company. Xanadu has announced plans to go public through a SPAC deal that values the company at around $3.1 billion. Christian shares what that milestone means, not only for Xanadu but for the broader quantum ecosystem. According to him, the excitement surrounding quantum computing is no longer limited to research labs. Governments, enterprise partners, and investors are increasingly paying attention as the technology edges closer to commercial relevance. One of the most engaging parts of our conversation is Christian's own journey into the world of quantum physics. Before earning a PhD in photonic quantum computing, he began as a film student who admits he once dreamed of becoming a filmmaker. That winding path eventually led him into physics and entrepreneurship, where he founded Xanadu in 2016 with a mission to make quantum computers useful and accessible to everyone. We also discuss PennyLane, the open-source quantum programming framework developed by Xanadu that has quietly become one of the most widely used tools in the quantum developer community. 
Now taught in universities across more than 30 countries, PennyLane plays an important role in building the next generation of quantum talent. Christian also shares a realistic timeline for where the industry stands today. Quantum computers already exist, but they remain smaller than what is needed for commercial breakthroughs. Xanadu's roadmap points toward large-scale quantum data centers by the end of the decade, systems capable of tackling problems in drug discovery, materials science, logistics, and finance that traditional computers struggle to simulate. For enterprise leaders listening today, the message is clear. The quantum future is closer than many people assume, and organizations that begin exploring use cases now will be far better prepared when these systems mature. So how should businesses prepare for a computing paradigm based on the mathematics of quantum physics rather than traditional software logic? And what lessons can founders learn from a journey that began with filmmaking ambitions and led to building one of the most ambitious quantum companies in the world? Let's find out together.
How can companies invest heavily in AI and still struggle to see meaningful returns? In this episode of Tech Talks Daily, I sit down with Thomas Scott, CEO of Wrike, to unpack a growing tension many organizations are facing right now. Artificial intelligence adoption is accelerating rapidly across the workplace, yet the structures needed to support it are struggling to keep pace. Wrike's latest research into the "Age of Connected Intelligence" reveals that more than 80 percent of employees are already using AI at work. Yet fewer than half have received any formal training, guidance, or governance around how these tools should be used. That gap between enthusiasm and enablement is creating a new workplace phenomenon that many leaders are only just beginning to notice. Shadow AI. When employees cannot find approved tools that solve their problems quickly, they often turn to unapproved applications or personal accounts instead. Wrike's data shows that 42 percent of workers admit they have already done this. For organizations handling sensitive data, intellectual property, or regulated information, that trend raises serious questions about security, compliance, and trust. Thomas explains why this pattern is not surprising. Whenever a new technology emerges, the builders and experimenters move first. They explore possibilities, test new tools, and discover productivity gains long before formal policies or training frameworks arrive. The challenge for leadership teams is learning how to harness that momentum without letting experimentation turn into fragmentation. We also explore one of the most overlooked barriers to AI return on investment. Integration. Many employees are now juggling multiple AI tools every week, yet those systems rarely communicate with one another or connect deeply into the core business platforms where real work happens. 
As a result, context gets lost, workflows become fragmented, and organizations end up running expensive pilots that never scale into meaningful transformation. Thomas introduces the idea of connected intelligence as a possible solution. Instead of deploying AI tools in isolation, companies need systems that understand context across projects, teams, and workflows. When AI can access structured data, shared history, and operational context, it becomes far more capable of supporting real decision making rather than simply generating isolated outputs. Our conversation also explores how leaders can move beyond scattered experimentation and start building structured AI adoption across their organizations. Thomas argues that the most successful companies start with highly specific problems, empower small groups of motivated builders, and maintain strong executive involvement throughout the process. AI transformation is rarely driven by technology alone. It requires people, process, and leadership alignment working together. So if your organization has already deployed AI tools but still struggles to see real impact, perhaps the question is not whether you are using AI. The real question might be whether those tools are truly connected to the work your teams are trying to do every day.
How can organizations use AI to transform hiring while still protecting the human element at the heart of work? In this episode of Tech Talks Daily, I sit down with Mahe Bayireddi, co-founder and CEO of Phenom, to explore how artificial intelligence is reshaping the way companies attract, hire, and develop talent. Our conversation comes at an interesting moment for the company, following the announcement that Phenom has acquired Be Applied, an AI-driven cognitive assessment platform designed to validate candidate and employee capabilities at scale. The move follows an earlier acquisition of Included, an AI-native people analytics platform focused on delivering deeper workforce insights and faster decision making. Mahe shares how Phenom's long-term mission to help a billion people find the right job is evolving as AI becomes embedded throughout the HR lifecycle. From candidate discovery to onboarding and internal mobility, organizations are now experimenting with automation, personalization, and intelligent workflows that aim to improve both productivity and employee experience. One theme that runs throughout our discussion is how AI adoption in HR varies dramatically depending on geography, regulation, and industry. In Europe, regulatory frameworks are shaping how companies deploy automation. In the United States, state-level policies introduce additional complexity. Meanwhile, organizations across Asia are often approaching AI with entirely different priorities. As a result, many global companies are experimenting carefully, introducing AI into specific business units or regions before rolling it out more broadly. We also talk about a challenge that has caught many HR teams by surprise: the growing issue of fraudulent candidates and identity manipulation in the hiring process. As job applications become easier to submit and remote work expands global talent pools, organizations must rethink how they validate candidate identity and credentials. 
Mahe explains how AI-driven fraud detection tools can help highlight suspicious patterns while still keeping humans in the loop for final decisions. Another important point raised in the conversation is the need to preserve humanity in the workplace while introducing intelligent automation. While AI can dramatically improve efficiency across recruiting and workforce planning, Mahe believes HR leaders must be careful to ensure technology strengthens human potential rather than reducing people to data points in a system. Looking ahead, we discuss how organizations can begin adopting AI responsibly by starting small, focusing on high-impact areas, and building guardrails that reflect regional regulations and company culture. For many companies, the most successful path forward will involve testing AI within specific workflows, measuring outcomes quickly, and scaling what works. So as artificial intelligence becomes a central part of hiring, workforce planning, and employee development, the big question for leaders is this. Can organizations use AI to create faster, smarter talent decisions while still keeping people at the center of the workplace experience?
What if the next big shift in personal audio is not about blocking the world out, but staying connected to it? In this episode of Tech Talks Daily, I sit down with Nicole from Shokz to talk about why open-ear headphones are suddenly everywhere, and why this category is moving from niche curiosity to everyday essential. For years, the audio market was obsessed with sealing users off from the outside world. Now the conversation is changing. More people want to hear their music, podcasts, and calls without losing awareness of traffic, fellow commuters, colleagues, or the world happening around them. Nicole helps unpack what open-ear audio actually means in simple terms, and why it is resonating with runners, commuters, parents, office workers, and anyone trying to balance comfort, safety, and sound quality. We talk about the cultural shift behind this rise, from growing health and fitness habits to the way hybrid work and always-on lifestyles have changed how people use earbuds throughout the day. We also get into why Shokz has become one of the defining brands in this space. Long before open-ear audio became a trend, Shokz was investing in bone conduction, open-ear design, and the kind of product research needed to make this category work in real life. Nicole shares how years of persistence, technical innovation, and consumer education helped the company move from specialist player to category leader. During our conversation, we explore how real-world behavior shapes product design. That means thinking beyond audio specs and focusing on how headphones actually fit into daily life. Whether someone is running in the rain, commuting to work, wearing glasses, sitting in an office, or trying to stay aware while walking the dog, those everyday moments are shaping the next generation of audio devices. Nicole also talks me through some of Shokz's latest product thinking, including the OpenDots One and the OpenFit Pro. 
From compact clip-on designs that feel almost like wearable accessories to new approaches around noise reduction in open-ear listening, this episode looks at how the category is becoming more sophisticated and more versatile without losing the awareness that made it appealing in the first place. Looking ahead, we discuss whether open-ear audio will live alongside sealed earbuds as part of a two-device lifestyle, or whether it could eventually become the default choice for more people. We also touch on what comes next, from smarter audio experiences to the role AI and even connected glasses could play in the future of listening. So if you have been seeing the phrase open-ear audio more often and wondering what all the fuss is about, this conversation will bring it to life. Are open-ear headphones simply having a moment, or are we watching a bigger shift in how people want to hear the world around them?
How does a CISO turn cybersecurity from a technical conversation into a business conversation that boards actually care about? In this episode of Tech Talks Daily, I sit down with Thom Langford, EMEA CTO at Rapid7 and a former CISO, to explore what he calls the second phase of cybersecurity leadership. For years, the industry worked hard to secure a seat at the boardroom table. In many organizations, that mission has largely succeeded. But as Thom explains, gaining access was only the first step. The real challenge now is communicating security in a way that drives meaningful business decisions. Thom shares why many CISOs still approach board conversations in the same way they did a decade ago, even though boardroom awareness of cybersecurity has changed dramatically. Today, many boards include members with cybersecurity knowledge or direct security experience. That means security leaders can no longer rely on technical jargon, complex frameworks, or compliance language to make their case. One of the most interesting insights from our conversation is the disconnect between how CISOs frame risk and what boards are actually focused on. While security teams often lead with risk reduction, boards tend to think in terms of revenue growth and operational costs. Thom argues that security leaders must learn to translate cybersecurity into the language of profit and loss if they want their message to resonate at the executive level. We also explore how traditional security tools such as risk frameworks, audits, and compliance standards can sometimes create distance rather than clarity in board discussions. Instead of helping executives understand security priorities, these models can obscure the real question boards are trying to answer. How secure are we, and what does that mean for the business? Another area we discuss is the growing role of tabletop exercises. 
Thom explains why these simulations are becoming one of the most effective ways for CISOs to demonstrate the real-world impact of security decisions. By walking executives through a realistic incident scenario, CISOs can show how security, operations, legal teams, and business priorities intersect during a crisis. Looking ahead, Thom believes the most successful CISOs will increasingly need to think like business leaders rather than purely technical specialists. Communication skills, relationship building, and understanding the organization's financial priorities may prove just as important as deep technical expertise. So if cybersecurity leaders have already earned their place in the boardroom, the next question becomes much more interesting. Are they speaking the language the board actually understands, or are they still trying to solve business problems using only security vocabulary?
How should businesses rethink infrastructure when applications, data, and users are increasingly spread across thousands of locations? In this episode of Tech Talks Daily, I sit down with Mark Cree, President and Chief Operating Officer at Scale Computing, to talk about why the future of enterprise infrastructure is moving closer to where data is actually created. This conversation was recorded following the 66th edition of The IT Press Tour, where some of the most interesting conversations in enterprise infrastructure centered on what happens when businesses move away from oversized, monolithic stacks and start focusing on practical, distributed solutions. From retail stores and airports to remote industrial sites, the edge is becoming a critical part of modern IT strategy. Mark shares how Scale Computing has spent years building an edge-first platform designed to run critical workloads reliably across everything from a single location to tens of thousands of distributed sites. Mark also reflects on his own journey through the technology industry, which includes founding companies acquired by Cisco and NetApp, working as a venture capitalist, and leading major storage initiatives at AWS. That experience gives him a unique perspective on how enterprise infrastructure has evolved, particularly as organizations reconsider the balance between centralized cloud environments and local processing closer to users and devices. During our conversation, we explore why edge computing is becoming increasingly important for AI workloads, especially when large volumes of data are generated outside traditional data centers. Mark explains how processing information locally can reduce costs, improve performance, and enable entirely new use cases, from monitoring customer behavior in retail environments to running intelligent systems in remote locations. 
We also talk about the ongoing reassessment happening across enterprise IT teams following major industry shifts, including changes in the virtualization market and growing concerns around vendor lock-in. Mark explains how Scale Computing is positioning itself as a flexible alternative by combining virtualization, containerization, networking, and security into a platform designed specifically for distributed environments. Looking ahead, Mark shares his perspective on where enterprise infrastructure is heading over the next five years. As smaller AI models become more capable and organizations seek greater control over their data and systems, the role of edge platforms may become even more important. Instead of relying solely on massive centralized environments, companies may find new value in distributing intelligence closer to the places where real-world activity happens. So as organizations rethink how they deploy applications, manage data, and control infrastructure, is the next big shift in enterprise IT happening right at the edge? And how prepared is your organization for that change?
What happens when the real bottleneck in artificial intelligence is no longer training models, but actually running them at scale? In this episode of Tech Talks Daily, I sit down with Satyam Srivastava from d-Matrix to explore a shift that is quietly reshaping the entire AI infrastructure landscape. While much of the early AI race focused on training ever larger models, the next phase of AI adoption is increasingly defined by inference. That is the moment when trained models are deployed and used to generate real-world results millions of times a day. Satyam brings a unique perspective shaped by years of experience in signal processing, machine learning, and hardware architecture, including time spent at NVIDIA and Intel working on graphics, media technologies, and AI systems. Now at d-Matrix, he is helping design next-generation computing architectures focused on one of the biggest challenges facing the AI industry today: efficiently running large language models without overwhelming data centers with unsustainable power and infrastructure demands. During our conversation, we explored why the industry underestimated the infrastructure implications of inference at scale. While training large models grabs headlines, the real operational pressure often comes later when those models must serve millions of queries in real time. That shift places enormous strain on memory bandwidth, energy consumption, and data movement inside modern data centers. Satyam explains how d-Matrix identified this challenge years before generative AI exploded into the mainstream. Instead of focusing on training hardware like many AI startups at the time, the company concentrated on inference efficiency. That decision is becoming increasingly relevant as organizations begin to realize that simply adding more GPUs to data centers is not a sustainable long-term strategy. 
We also discuss the growing power constraints surrounding AI infrastructure, and why efficiency-driven design may be the only realistic path forward. With electricity supply, cooling capacity, and semiconductor availability all becoming limiting factors, the industry is being forced to rethink how AI systems are architected. Custom silicon, purpose-built accelerators, and heterogeneous computing environments are now emerging as key pieces of the puzzle. The conversation also touches on the geopolitical and economic importance of AI semiconductor leadership, and why the relationship between frontier AI labs, infrastructure providers, and chip designers is becoming increasingly strategic. As governments and companies compete to maintain technological leadership, the question of who controls the hardware powering AI may prove just as important as the models themselves. Looking ahead, Satyam shares his perspective on how the role of engineers will evolve as AI infrastructure becomes more specialized and energy-aware. Foundational engineering skills remain essential, but the next generation of engineers will also need to think in terms of entire systems, combining software, hardware, and AI tools to build more efficient computing environments. As AI continues to move from research labs into everyday products and services, are organizations prepared for the infrastructure shift that comes with an inference-driven future? And could efficiency, rather than raw computing power, become the defining metric of the next phase of the AI race?
Have you ever bought a ticket to a show and wondered why the experience still feels strangely disconnected, with one app for ticketing, another for marketing, another for refunds, and a dozen spreadsheets held together by late nights and good intentions? In this episode of Tech Talks Daily, I'm joined by Ritesh Patel, co-founder of Ticket Fairy, to talk about the technology behind live events and why it has lagged behind other industries in some surprisingly familiar ways. Ritesh makes the case that most organizers are operating more like creative founders than corporate operators, building "mini cities" for a weekend with tiny teams, tight budgets, and very little margin for error. That reality shapes every technology decision, and it explains why fragmented tools and siloed data can become a hidden tax on the business. Ritesh walks me through Ticket Fairy's full stack approach, bringing ticketing, marketing, CRM, logistics, and payments into a single system, and why unifying data changes the economics of running an event. We dig into practical examples that go beyond vague AI talk, including how small workflow fixes can speed up entry, improve the on-site experience, and even translate into real revenue uplift once you multiply time savings across thousands of attendees. We also get into where AI agents and large language models are already finding a foothold in events, particularly around unstructured documents like artist specs, supplier agreements, and operational paperwork that can swallow hundreds of hours. Ritesh shares why "AI-native" should mean more than a writing assistant in a text box, and what it looks like when AI becomes an extension of a lean events team, including a prototype voice agent designed to handle common ticket-holder questions without creating new support bottlenecks. 
If you're interested in the real business mechanics of events, and how SaaS, payments, data, and AI can quietly shape everything from entry lines to repeat attendance, this conversation offers a fresh way to think about an industry that touches all of us, even when we don't think of it as a tech story. And as a bonus, Ritesh leaves a music recommendation that sent me back to an album I had not played in years, Burial's Untrue, with "Archangel" as the track to start with. After listening, tell me this, where do you think unified data and practical AI will make the biggest difference in live experiences over the next couple of years, on the promoter side or the fan side, and why?
How confident are you that your business could recover from a cyberattack, cloud outage, or infrastructure failure in minutes rather than hours or even days? In this episode of Tech Talks Daily, I explore the changing nature of enterprise resilience with Joseph D'Angelo and Cassie Stanek from InfoScale, now part of Cloud Software Group. Our conversation looks at why many organizations still rely on backup and replication strategies that were designed for a very different era of IT. In a world of hybrid infrastructure, multi-cloud deployments, and increasingly complex application stacks, those traditional tools often protect the data but fail to restore the business services that depend on it. My guests share how InfoScale approaches resilience from the application layer outward. Instead of focusing on individual components such as storage or infrastructure, the platform looks at the relationships between applications, services, and data so entire systems can be orchestrated and recovered as a coordinated unit. That distinction becomes especially important during a ransomware attack or cloud outage, where restoring a single database rarely brings a digital business back online. We also discuss how growing regulatory pressure is changing the conversation. Enterprises are no longer expected to simply claim they have disaster recovery processes in place. Increasingly they must demonstrate, test, and prove that recovery capabilities actually work. Cassie explains how controlled "fire drill" rehearsals allow organizations to validate recovery plans without disrupting production systems, creating defensible proof that systems can be restored when it matters most. We also look ahead to the next phase of resilience, where environments will increasingly diagnose, adapt, and respond to disruptions in real time. 
Instead of reacting after an outage occurs, operational resilience will rely on predictive analytics, anomaly detection, and automated response capabilities that allow systems to self-correct before users ever notice a problem. Throughout our discussion, one theme becomes clear. IT resilience is no longer just an infrastructure conversation. It has become a business continuity strategy that directly affects revenue, customer trust, and competitive advantage. As organizations depend more heavily on digital services, the ability to recover quickly from disruption is becoming one of the defining capabilities of modern enterprise technology. So after listening, I'm curious about your perspective. Do you think most organizations are truly prepared for operational resilience in a multi-cloud world, or are many still relying on backup strategies that were built for a much simpler IT environment?
How can a world that produces more than enough food still leave millions of people struggling to put a healthy meal on the table? In this episode of Tech Talks Daily, I speak with Jordan Schenck, CEO of Flashfood, about the growing paradox at the heart of our global food system. Grocery prices are climbing, families everywhere are making harder choices at the checkout, and food banks are seeing rising demand. Yet at the same time, vast quantities of perfectly edible food never make it onto a plate. Jordan shares the startling scale of the problem. In North America alone, billions of pounds of edible food are thrown away every year, including huge volumes from grocery stores themselves. Fresh produce, meat, and dairy often end up discarded even though they remain safe and nutritious to eat. The result is a system where food waste and food insecurity grow side by side, despite a supply chain that already produces far more calories than the world needs. Flashfood is attempting to change that equation with a simple but powerful idea. Through its marketplace app, the company partners with grocery retailers to sell surplus food at steep discounts before it reaches the landfill. Shoppers gain access to fresh groceries at far lower prices, while retailers recover value from inventory that might otherwise be lost. What emerges is a rare triple win for shoppers, grocers, and the environment. During our conversation, Jordan explains how consumer behavior, retail expectations, and supply chain logistics have shaped today's food waste problem. She also shares how technology and data are beginning to shift the system in a different direction. Flashfood is now working with more than two thousand grocery partners across North America and serving over a million users, using data and AI to help retailers price surplus inventory more effectively and move products before they are discarded. But the story behind Flashfood is also personal. 
Jordan reflects on her earlier experiences at Impossible Foods and as founder of the beverage brand Sunwink, and how those roles helped her see both the strengths and weaknesses inside modern food production. Over time, she began to question whether the industry truly needed more products on shelves, or whether the bigger opportunity lay in fixing the inefficiencies that already existed. Our discussion touches on the psychology of grocery shopping, the economics of surplus inventory, and the cultural expectations that lead retailers to overstock shelves in the first place. We also explore why many consumers are more open to buying discounted food than retailers once believed, particularly as the cost of living continues to rise. Perhaps most encouraging of all is the idea that solving food waste does not require entirely new supply chains or radical lifestyle changes. Sometimes it simply requires connecting the dots between food that already exists and the people who need it most.
Have you noticed how every week brings a new headline about AI-driven fraud, yet it still feels hard to tell what is real risk and what is noise? In this Tech Talks Daily episode, I'm joined by Tommy Nicholas, CEO of Alloy, for a candid conversation that cuts through the fear-driven commentary and gets into what fraud teams are actually dealing with right now. We start with a simple but important distinction that gets blurred all the time. Tommy separates classic "fraud," where institutions take the hit, from "scams," where individuals are manipulated into handing over money or access. That framing changes how you think about solutions, accountability, and where AI is making things worse. Tommy also shares why he believes fraud losses are often massively underreported. It is not because people are trying to hide the truth; it is because organizations rarely have a single, clean view of losses across every product line and channel. Add messy labeling and split ownership across teams, and reporting becomes a best-effort estimate rather than an objective number. That reality matters if you're building board-level narratives, budgets, or risk models on top of survey data. From there, we talk about what organizations are getting right. Tommy argues there is no magical "undetectable" attack that forces teams to give up, but there is a very real breakdown happening in old fallbacks, especially human review of images and video. The bigger shift he sees is banks and fintechs finally pushing for consistent tooling across every channel (web, mobile, branch, call center, support tickets) because fraud does not respect internal org charts. We then get into why Alloy's AI Assistant is an interesting signal for where agentic AI is heading in regulated work. Tommy explains that agents are only useful when they have rigorous context, strong sources of truth, and clear workflows. Otherwise they guess, and "looks good" is not the same as "safe to run in production." 
He also lays out where agents can genuinely outperform humans, like scaling investigations during sudden surges, while keeping processes auditable and repeatable. We close by looking ahead at agentic commerce, and why Tommy thinks the breakthrough will arrive through weird, emergent behavior rather than a neat protocol rollout. When you listen back, do you think the next big leap in fraud prevention will come from better models, better data, or better operational discipline, and what would you bet on if your own customers were the ones on the line?
How do you prepare an entire generation for a world where AI is already shaping how we work, create, and solve problems? In this episode of Tech Talks Daily, I'm joined by Dr. Tara Nattrass, Chief Innovation Strategist for Education at Lenovo, for a grounded and thoughtful conversation about what responsible AI integration really looks like in K–12 classrooms. Tara brings more than 25 years of experience inside school districts, including serving as Assistant Superintendent for Teaching and Learning in Arlington Public Schools, so this isn't a theory-led discussion. It's informed by lived experience. We explore how the conversation has shifted over the past 18 months. AI has been present in schools for years through adaptive software and analytics, but the arrival of generative and now agentic AI tools has accelerated everything. As Tara explains, the debate is no longer about whether AI should be in schools. It's about how to approach it responsibly, strategically, and in ways that genuinely improve learning outcomes. A big theme in our conversation is AI literacy. Tara breaks this down in practical terms, moving beyond technical understanding to include critical thinking, creativity, collaboration, and the ability to evaluate risk and bias. She shares real examples of students designing AI tools to solve problems in their communities, shifting the focus from passive consumption to active creation. We also talk about infrastructure readiness. Many school systems have bold ambitions around AI, but there is often a gap between vision and technical capability. AI-ready devices, intelligent infrastructure, cybersecurity, and data governance all play a role in making innovation sustainable rather than experimental. Lenovo's approach, as Tara describes it, centers on building education ecosystems rather than simply refreshing hardware. There is also a careful balance to strike between innovation, privacy, and inclusion. 
From hybrid AI models to questions around where data is stored and who can access it, schools are navigating complex decisions. Tara shares how Lenovo partners with districts, policymakers, and organizations such as ISTE and ASCD to align infrastructure, professional learning, and governance frameworks. Looking ahead, we discuss what will separate school systems that truly benefit from AI from those that simply layer new tools onto old teaching models. Vision, educator upskilling, cybersecurity, and rethinking assessment all feature prominently in her answer. If you are working in education, technology leadership, or policy, this conversation offers a practical view of how AI-ready classrooms are being built today and what still needs to happen next. As always, I'd love to hear your thoughts. How is AI reshaping learning in your organization, and are you ready for what comes next?
What happens when nearly half of organizations admit they have no AI-specific security controls, yet AI-driven data leaks are accelerating at the same time? In this episode of Tech Talks Daily, I spoke with Aayush Choudhry, CEO and co-founder of Scrut Automation, about what he sees as a blind spot in the cybersecurity industry. While much of the market continues to design tools for Fortune 500 enterprises with deep pockets and large security teams, Aayush argues that the real existential risk sits with the 99 percent of businesses that cannot survive a serious breach. Aayush brings a founder's perspective shaped by firsthand pain. Before launching Scrut, he and his co-founder experienced the grind of managing compliance and security as a cloud-native startup trying to sell into enterprises. They were outsiders to GRC and security at the time, forced to learn from first principles. That experience became the foundation for Scrut Automation, a modern GRC platform built specifically for small and mid-sized companies that cannot afford six-month implementations, armies of consultants, or half-million-dollar tooling budgets. We explore why treating compliance and security as separate functions increases risk for smaller organizations. In the mid-market, the same small team is often responsible for both. When compliance is handled as a box-ticking exercise and security as a separate technical discipline, gaps emerge. Scrut's approach converges governance, risk, and security signals into a unified layer that translates hundreds of technical alerts into context-aware risks that actually matter to the business. Our conversation also tackles AI complacency. Using the classic confidentiality, integrity, and availability framework, Aayush outlines what minimum viable AI security hygiene looks like in practice. 
That includes ensuring AI agents are not over-privileged compared to the humans they represent, placing guardrails around sensitive data fed into models, and extending supply chain security thinking to agentic integrations. For resource-constrained teams, these are not theoretical concerns. They are daily realities. Perhaps most compelling is his view that AI can act as a force multiplier for small teams. By embedding accumulated expertise into agents trained on anonymized patterns and edge cases, Scrut aims to democratize security know-how that would otherwise require multiple full-time analysts. The goal is simple but ambitious: make enterprise-grade security outcomes accessible without enterprise-grade headcount. If you are leading a small or mid-sized business and wondering how to balance growth, compliance, and AI risk without breaking the bank, this conversation offers a candid look from the trenches.
How do you build enterprise software for the companies that keep the world turning, while also building a leadership culture where people can actually thrive? In this episode of Tech Talks Daily, I spoke with Kerrie Jordan, Group VP of Product Management at Epicor, about her journey from studying literature to helping shape cloud ERP strategy at a global software company serving more than 20,000 customers worldwide. Kerrie's story is a reminder that there is no single path into technology leadership. Sometimes the foundations are laid in unexpected places, through storytelling, creativity, and a deep curiosity about people. Kerrie shares how her early career in product lifecycle management opened her eyes to the human side of software. Interviewing customers and writing case studies showed her that behind every system implementation is a personal story, a career milestone, or a business trying to survive and grow. That perspective still shapes how she approaches product and marketing today at Epicor, a company recently recognized as a Leader in the Gartner Magic Quadrant for Cloud ERP for Product-Centric Enterprises for the third consecutive year. But this conversation goes far beyond market recognition. We talk openly about burnout, resilience, and the reality of leading through pressure. Kerrie reflects on the importance of protecting time, creating space to reconnect, and building a culture where empathy is practiced, not just discussed. Her view of leadership is grounded in communication, psychological safety, and being tough on problems rather than people. Mentorship is another thread running throughout our discussion. Kerrie explains why powerful mentorship is not passive. It requires vulnerability, preparation, and a willingness to hear difficult advice. A single phrase from a mentor early in her career, "stick-to-itiveness," continues to shape how she approaches hard problems today. We also explore the future of women in manufacturing and technology. 
Kerrie highlights the need for intentional change across education, early career development, and leadership visibility. She believes technology, particularly AI, can expand access, enable upskilling, and introduce flexibility that supports long-term career growth. At the same time, she makes a simple but powerful point. Women in tech want the same thing as anyone else: the space and autonomy to do their jobs well. From customer co-innovation and community-driven product roadmaps to inclusive leadership under commercial pressure, this episode offers a candid look at what it really takes to lead in enterprise technology today. If you are building products, leading teams, or questioning your own next career step, I think you will find something in Kerrie's story that resonates.
Why do so many of us feel busy all day, yet struggle to point to the meaningful work we actually completed? In this episode of Tech Talks Daily, I sit down with Tomás Dostal Freire, CIO of Miro, to unpack a challenge that quietly drains modern organizations. Tomás brings experience from companies like Google, Netflix, and Booking.com, and now leads both IT and business acceleration at Miro. His focus is simple but ambitious. Move beyond AI experimentation and rethink how work itself gets done. We explore new research revealing that for every hour of creative work, employees lose up to three hours to meetings, admin, emails, and maintenance tasks. That ratio is more than an inconvenience. It affects decision-making speed, employee satisfaction, and ultimately a company's ability to compete. Tomás argues that future candidates will choose employers based on how much unnecessary internal work they are expected to tolerate. In other words, reducing busy work is quickly becoming a talent strategy. One of the biggest culprits? Context switching. With dozens of browser tabs open and information scattered across tools, teams spend more time stitching together fragments than making decisions. Tomás describes how duplication of work, outdated systems, and a lack of shared context quietly erode momentum. AI, he believes, should not create more noise or another standalone tool. It needs to be embedded where collaboration already happens. We discuss the difference between single-player AI moments, where individuals use tools in isolation, and multiplayer AI collaboration, where shared context allows teams to move faster together. At Miro, this philosophy has shaped what they call an AI Innovation Workspace, a shared canvas where human insight and AI assistance coexist in real time. Tomás also shares practical advice for leaders who want to reclaim creative time. Start by identifying tasks you dislike doing that could easily be handled by someone junior. 
That list often reveals what AI can already automate. Then focus on building transferable skills like cognitive agility and first-principles thinking, rather than chasing every new tool. If you are wrestling with burnout, fragmented workflows, or wondering how AI can genuinely improve collaboration without overwhelming teams, this conversation offers a grounded, optimistic perspective. And yes, we even add a Beatles classic to the Spotify playlist along the way.
How do you design financial infrastructure that keeps running when the unexpected hits, whether that is a regional outage, a regulatory shift, or a sudden spike in digital demand? In this episode of Tech Talks Daily, I'm joined by Katsutoshi Itoh from Sony and Masahisa Kawashima from NTT, both representing the IOWN Global Forum, to unpack how photonics-based networks could change the foundations of digital finance. Speaking with me from Kyoto, they share how the Innovative Optical and Wireless Network vision is moving beyond theory and into practical, finance-specific use cases. Financial institutions are under constant pressure to deliver uninterrupted services while meeting ever tighter compliance standards. Yet as we discuss, many existing architectures still rely on asynchronous data replication and layered resilience added after the fact. On paper, it works. In a real disruption, gaps quickly appear. Itoh and Kawashima explain how synchronous replication over ultra-low latency optical networks can reduce the risk of data loss while simplifying disaster recovery and lowering operational complexity. We also explore the role of Open All-Photonic Networks and why reducing packet forwarding layers can dramatically cut latency and infrastructure costs. Instead of concentrating compute and storage in dense urban data centers, photonics enables distributed computing across regions while maintaining deterministic performance. That shift opens the door to improved resilience, better infrastructure utilization, and new approaches to scaling without constant over-provisioning. Sustainability sits alongside resilience in this conversation. Rather than treating energy efficiency as a compromise, the IOWN vision distributes power demand geographically, making better use of locally available renewable energy and reducing concentrated load pressures. It is a subtle but important rethink of how infrastructure supports broader societal goals. 
Looking ahead, we consider what this could mean for digital banking platforms, AI-driven risk management, and cross-border financial services. If infrastructure limitations fall away, institutions can design services around business needs rather than technical constraints. If you are curious about how photonics could underpin the next generation of financial services, this episode offers a grounded and thoughtful perspective. As always, I would love to hear your thoughts after listening.
Have you ever wondered why "compliance" still gets treated like a slow, spreadsheet-heavy chore, even though the rest of the business is moving at machine speed? In this episode of Tech Talks Daily, I sit down with Matt Hillary, Chief Information Security Officer at Drata, to talk about what actually changes when AI and automation land in the middle of governance, risk, and compliance. Matt brings a rare viewpoint because he lives this day-to-day as "customer zero," running Drata internally while also leading IT, security, GRC, and enterprise apps. We get practical fast. Matt shares how AI-assisted questionnaire workflows can turn a 120-question security assessment from a late-afternoon time sink into something you can complete with confidence in minutes, then still make it upstairs in time for dinner. He also explains how automation flips the audit dynamic by moving from random sampling to continuous, full-population checks, using APIs to validate evidence at scale, without hounding control owners unless something is actually wrong. We also talk about what security leadership really looks like when the stakes rise. Matt reflects on lessons from his time at AWS, why curiosity and adaptability matter when the "canvas" keeps changing, and how customer focus becomes the foundation of trust. That theme runs through the whole conversation, including the idea that the CISO role is steadily turning into a chief trust officer role, where integrity, transparency, and credibility under pressure matter as much as tooling. And because burnout is never far away in security, we dig into the human side too. Matt unpacks how automation can reduce cognitive load, but also warns about swapping one kind of pressure for another, especially when teams get trapped producing endless dashboards and vanity metrics instead of focusing on the few measures that actually reduce risk. 
To wrap things up, Matt leaves a song for the playlist, Illenium's "You're Alive," plus a book recommendation, "Lessons from the Front Lines, Insights from a Cybersecurity Career" by Asaf Karen, which he says stands out for how it treats the human side of security leadership. If you're thinking about modernizing compliance in 2026 without losing the human element, his parting principle is simple and powerful: be intentional, keep asking why, and spend your limited time on what truly matters. So where do you land on this shift toward continuous trust? Do you see it becoming the default expectation for buyers and auditors, and what should leaders do now to make sure automation reduces pressure instead of quietly adding more? Share your thoughts with me; I'd love to hear how you're approaching it.
At Davos this year, some of the biggest names in tech sent a clear signal. AI is no longer a novelty. It is no longer a proof-of-concept exercise. As Demis Hassabis of Google DeepMind suggested, AI will shape more meaningful work. And Satya Nadella of Microsoft was even more direct. AI only matters if it improves real outcomes for people. So what does that look like inside the enterprise? In this episode of Tech Talks Daily, I'm joined by Andrew Boyagi, Customer CTO at Atlassian, to unpack how the conversation has shifted from experimentation to execution. Developers, in many ways, are the perfect lens for understanding this moment. Over the last two decades, their role has expanded far beyond writing code. They now own products, infrastructure, operations, and business outcomes. AI is simply the next chapter in that evolution. Andrew argues that AI will not replace engineers. It will raise expectations. As intelligent tools absorb repetitive work, the real value moves up the stack. System design. Architectural thinking. Reviewing and refining AI-generated output and orchestrating solutions that solve genuine business problems. And through it all, humans remain firmly in the loop. We also explore what this means for leadership: why mindset is starting to matter more than technical skill alone, how organizations can avoid layering AI on top of broken processes, and why the companies pulling ahead are treating AI as a strategic discipline, not a feature upgrade. This is a conversation grounded in reality. It speaks to product leaders, CTOs, CIOs, and anyone asking a simple but powerful question. If we are investing in AI, what are we actually getting back? And before we close, we look ahead to Team '26 and the themes Andrew and his team are already working on. If this year has been about proving value, what will the next chapter demand from enterprise leaders? As always, I'd love to hear your thoughts.
Are you seeing proof of value in your organization yet, or are you still working through the pilot phase?
What happens when the noise around AI starts to drown out the actual business value it is meant to deliver? In this episode of Tech Talks Daily, I sat down with Adam Field, Chief AI and Product Officer at Tungsten Automation, fresh from the conversations unfolding at Davos. While headlines continue to celebrate agentic AI and sweeping automation claims, Adam offered a grounded perspective shaped by decades of experience turning AI pilots into measurable, ROI-driven deployments. His view is simple. The hype cycle may be accelerating, but many organizations still struggle with the fundamentals. Adam described a common boardroom dynamic. "What do we want? AI. What do we want it to do? We're not sure." That pressure to move fast often collides with a deeper reality. Software has shifted from deterministic to probabilistic. Leaders who grew up expecting the same inputs to always produce the same outputs now face systems that behave differently by design. Measuring value in that environment requires a different mindset. One of the most compelling ideas in our conversation was Adam's concept of "boring AI." While splashy announcements about replacing hundreds of employees grab attention, he argues that real returns often come from quieter use cases. At Tungsten Automation, that means intelligent document processing, extracting trusted, AI-ready data from the 80 percent of enterprise information that is unstructured. Contracts, invoices, transcripts, compliance paperwork. The work may not trend on social media, but it saves time, improves accuracy, and fits directly into daily workflows. We also explored accountability. AI can compress output, but it concentrates responsibility. When generative tools make architectural or compliance decisions, the liability does not shift to the model. Organizations remain accountable for privacy, ethics, and customer trust. 
Adam shared his own experience rebuilding a legacy application in days using AI code generation, only to discover licensing and compliance nuances that required human judgment. The lesson was clear. AI amplifies capability, yet human oversight remains essential. For leaders searching for signals that an AI strategy will actually deliver long-term returns, Adam pointed to two patterns from the small percentage of projects that succeed. First, integration into daily workflows drives adoption. Second, partnering with trusted vendors often reduces risk compared to attempting everything in-house. In a world flooded with open-source experiments and "X is dead" headlines, discipline and focus still matter. Tungsten Automation, previously known as Kofax, has spent four decades evolving alongside automation technologies. Today, the company applies large language models and agentic workflows to transform unstructured data into decision-ready insights across finance, logistics, banking, and insurance. It is a reminder that the future of AI may be less about replacing people and more about removing friction so humans can do the work they were actually hired to do. So as AI investment continues to grow and pressure for returns intensifies, the question becomes harder to ignore. Are we chasing the headlines, or are we building systems that quietly deliver value where it counts?
Useful Links
Connect with Adam Field
Learn more about Tungsten Automation
Upcoming Events
How do you build a $30 million ARR business with just three people and a fleet of AI agents doing the heavy lifting? In this episode of Tech Talks Daily, I connected with Amos Joseph, CEO of Swan AI. From the moment we joked about AI notetakers silently observing our conversation, it was clear this discussion would go beyond surface-level automation talk. Amos is attempting something bold. He is building what he calls an autonomous business, one designed to scale with intelligence rather than headcount. Amos has already built and exited two B2B startups using the traditional growth-at-all-costs model. Raise early, hire fast, expand the vision, chase valuation. This time, he is rewriting that script entirely. Swan AI is built around ARR per employee, human-AI collaboration, and what he describes as scaling employees rather than scaling the org chart. With more than 200 customers and only three founders, Swan is already testing whether AI agents can run real go-to-market operations autonomously. We explored why over 90 percent of AI implementations fail and why grassroots experimentation consistently outperforms executive mandates. Amos argues that companies looking outward for AI solutions before understanding their internal bottlenecks are simply scaling chaos. The organizations that succeed start with process clarity, define what humans should do versus what should be automated, and then allow AI to execute within that structure. It is a powerful reminder that becoming AI-native has less to do with tools and more to do with operational self-awareness. We also unpacked the difference between automation and agentic AI. Traditional automation follows deterministic steps coded in advance. Agentic AI shifts decision-making power to the model itself. The AI decides what to do next, introducing statistical reasoning rather than predefined logic. That shift in agency changes everything about how workflows operate and how leaders think about control. 
Perhaps most fascinating is how Swan generates pipeline entirely through LinkedIn. No paid ads. No outbound. Amos has built an AI-driven engine that creates content, monitors engagement, qualifies prospects, and nurtures relationships at scale. It is an experiment in trust-based distribution powered by agents, not marketing budgets. This conversation reframes what growth can look like in an AI-native world. If scaling no longer equals hiring, and if every employee becomes a manager of AI agents, what does leadership look like next? How do founders build organizations that amplify human zones of genius rather than bury them under coordination overhead? If you are questioning long-held assumptions about team size, growth, and AI adoption, this episode will give you plenty to think about.
Is Bitcoin still just a digital store of value, or is it quietly evolving into the financial engine of a new on-chain economy? In this episode of Tech Talks Daily, I sat down with Callan Sarre, Co-Founder of Threshold Labs, to explore what happens when the world's most recognized crypto asset stops sitting idle and starts becoming programmable capital. We recorded against the backdrop of a sharp market correction that wiped out value across crypto and traditional assets alike, making for a timely and honest conversation about volatility, maturity, and why Bitcoin's next chapter may be defined by utility rather than price speculation. Callan explains how the rise of ETFs and institutional flows is reshaping ownership, while decentralized infrastructure is working to ensure users can still access the asset's underlying power. At the heart of our discussion is tBTC, a trust-minimized bridge that moves native Bitcoin into DeFi without handing control to centralized custodians. Callan breaks down how Threshold's decentralized custody model works in practice and why removing single points of failure matters in a post-FTX world. We also explore the behavioral barriers that have kept long-term holders from putting their BTC to work, the real risks behind Bitcoin yield strategies, and the infrastructure required to make these tools accessible to a broader audience through familiar Web2-style experiences. The conversation also takes a global turn as we look at why Asia is accelerating Bitcoin innovation, how regulation is driving institutional adoption in Western markets, and what the shift from DAO-led governance to a lab execution model reveals about the realities of building at scale. Looking ahead five years, Callan paints a picture of an integrated on-chain financial system where Bitcoin can be borrowed against, deployed, and settled instantly across shared liquidity rails, while still preserving the principles that made it attractive in the first place. 
So if Bitcoin becomes productive capital and the majority of financial activity moves on-chain, what does that mean for traditional finance, for long-term holders, and for the next wave of builders? And are we ready for a world where the most secure monetary asset also becomes the most composable?
What does it really take to move AI from experimentation into something enterprises can trust, scale, and rely on every day? In this episode of Tech Talks Daily, I'm joined by Rob Lay, CTO and Solutions Engineering Director for Cisco UK and Ireland, recorded in the run-up to Cisco Live EMEA in Amsterdam. As agentic AI dominates conference agendas on both sides of the Atlantic, this conversation steps away from model hype. It focuses on the less glamorous, but far more decisive layer underneath it all: infrastructure. Rob explains why the biggest constraint on scaling AI agents in production is no longer imagination or ambition, but the readiness of the environments those agents run on. We talk about how legacy technical debt, latency, fragmented networks, and disconnected security tools can quietly undermine AI investments long before leaders see any return. As organizations move out of pilot mode and into real execution, those cracks become impossible to ignore. A big part of the discussion centers on why AI changes the relationship between network, compute, and security teams. Traditional silos struggle to keep up as autonomous systems make decisions at machine speed. Rob shares how Cisco is approaching this shift through tighter integration across the stack, with security designed directly into the network rather than bolted on later. When AI agents act independently, routing everything through centralized chokepoints does not hold up. We also explore how operational complexity is evolving. Tool sprawl is already overwhelming many IT leaders, and agent sprawl is clearly coming next. Rob outlines Cisco's platform strategy, including how agent-driven operations, human oversight, and context-aware automation are shaping a new approach to day-to-day resilience. 
This leads into a wider conversation about digital resilience as a business issue, where visibility, assurance, and learning from incidents matter more than static continuity plans that only get tested once a year. For European leaders in particular, data sovereignty and control remain at the forefront. Rob explains how Cisco is responding with flexible deployment models, local data residency options, and air-gapped environments that support AI innovation without forcing customers into a single rigid operating model. We close by looking at where enterprises are actually seeing value today, where expectations are still running ahead of reality, and what leaders attending Cisco Live should really be listening to as announcements roll in. If you are responsible for infrastructure, security, or technology strategy in an AI-driven organization, this conversation offers a grounded view of what needs to be ready before agents can truly deliver on their promise. As AI-powered systems start to move faster than most roadmaps anticipated, are you confident the foundations underneath them are ready to keep up, and what would you change if you were starting that journey today?
Useful Links
Connect with Rob Lay
Cisco Live
Follow Cisco on LinkedIn
In this episode of Tech Talks Daily, I sat down with Jinsook Han, Chief Agentic AI Officer at Genpact, to unpack one of the most misunderstood shifts in enterprise AI right now. Many organizations feel confident about the value AI can deliver, yet only a small fraction are able to move beyond pilots and into autonomous operations that actually scale. Genpact's Autonomy By Design research puts hard data behind that gap, and Jinsook explains why optimism often races ahead of readiness. We explore why agentic AI changes the rules entirely. When AI systems begin to act, decide, and adapt on behalf of the business, familiar operating models start to strain. Jinsook makes a compelling case that agentic AI cannot be treated like another software rollout. It demands a rethink of data, governance, roles, and even how teams define work itself. The shift from tools to teammates alters expectations for people across the organization, from frontline operators to the C-suite, and exposes just how unprepared many companies still are. Governance is a major theme throughout the conversation, but not in the way most leaders expect. Rather than slowing progress, Jinsook argues that governance must become part of how work happens every day. She shares how Genpact approaches agent certification, maturity, and oversight, using vivid analogies to explain why quality and alignment matter more than simply deploying large numbers of agents. We also dig into why many governance models fail, especially when they rely on committees instead of lived understanding. Upskilling sits at the heart of this transformation. Jinsook walks through how Genpact is training more than 130,000 employees for an agentic future, starting with executives themselves. The focus is not on abstract learning, but on proving that today's work looks different from yesterday's. 
Observability, explainability, and responsible AI are woven into this approach, with command centers designed to monitor both agent performance and health, turning early signals into opportunities rather than panic. This conversation goes well beyond hype. It is about readiness, responsibility, and the reality of building autonomous systems that still depend on human judgment. As organizations rush toward agentic AI, are they truly prepared to change how decisions are made, how people work, and how accountability is defined, or are they still treating AI as a faster hammer rather than a new kind of teammate?
Useful Links
Connect with Jinsook Han
Learn More about Genpact
What happens when leaders are confident about AI, but the people expected to use it are not ready? In this episode of Tech Talks Daily, I sat down with Caroline Grant from Slalom Consulting to explore one of the most persistent tensions in enterprise AI adoption right now. Boards and executives are spending more, moving faster, and expecting returns sooner than ever, yet many organizations are struggling to translate that ambition into outcomes that scale. Caroline brings fresh insight from Slalom's latest research into how leadership, culture, and workforce readiness are shaping what actually happens next. We unpack a clear shift in ownership for AI transformation, with CTOs and CDOs increasingly leading organizational redesign rather than HR. That change reflects how deeply AI now cuts across technology, operations, and business models, but it also introduces new risks. Caroline explains why sidelining people teams can create blind spots around skills, incentives, and trust, especially as roles evolve and uncertainty grows inside the workforce. The result is what Slalom describes as a growing AI disconnect between executive optimism and day-to-day reality. Despite the noise around job losses, the data tells a more nuanced story. Many organizations are creating new AI-related roles at pace, yet almost all are facing skills gaps that threaten progress. We talk about why reskilling at scale is now unavoidable, how unclear career paths fuel employee distrust, and why focusing only on technical capability misses the human side of adoption. Caroline also challenges assumptions about skill priorities, warning that deprioritizing empathy, communication, and change leadership could undermine effective human-AI collaboration. We also dig into ROI expectations, with most UK executives now expecting returns within two years. Caroline shares why that ambition is achievable, where it breaks down, and why so many organizations remain stuck in pilot mode.
From governance and decision rights to culture and leadership behavior, this conversation goes beyond tools and platforms to examine what separates experimentation from fundamental transformation. As AI becomes a test of leadership as much as technology, how are you closing the gap between vision and execution within your organization, and are you building a workforce that can keep pace with change rather than resist it?
Connect With Caroline Grant from Slalom Consulting
The Great AI Disconnect: Slalom's Insights Survey
Learn More About Slalom
Is the browser quietly becoming the most powerful and dangerous interface in modern work? In this episode of Tech Talks Daily, I sat down with Karim Toubba, CEO of LastPass, to unpack a shift that many people feel every day but rarely stop to question. The browser is no longer just a window to the internet. It has become the place where work happens, where SaaS lives, and increasingly, where humans and AI agents meet data, credentials, and decisions. From AI-native browsers to prompt-based navigation and headless agents acting on our behalf, the way we access information is changing fast, and so are the risks. Karim shares why this moment feels different from earlier waves like SaaS adoption or remote work. Today, more than ever, productivity, identity, and security collide inside the browser. Shadow AI is spreading faster than most organizations can track, personal accounts are being used to access powerful AI tools, and sensitive data is being uploaded with little visibility or control. At the same time, attackers have noticed that the browser has become the soft underbelly of the enterprise, with a growing share of malware and breaches originating there. We also explore the rise of agentic AI and what happens when software, not people, starts logging into systems. When an agent books travel, pulls data, or completes workflows on a user's behalf, traditional authentication and access models start to break down. Karim explains why identity, visibility, and control must evolve together, and why secure browser extensions are emerging as a practical foundation for this next phase of computing. The conversation goes deep into what users do not see when AI browsers ask for access to email, calendars, and internal apps, and why convenience often masks long-term exposure. Throughout the discussion, Karim brings a grounded perspective shaped by decades in cybersecurity, from risk-based vulnerability management to enterprise threat intelligence. 
Rather than pushing fear, he focuses on realistic steps organizations and individuals can take, from understanding what data is being shared, to treating security teams as partners, to using tools that bring passwords, passkeys, and authentication into one trusted place as browsing evolves. As AI reshapes how we search, work, and make decisions, the question is no longer whether the browser matters. It is whether we are ready for it to act as the front door to both our productivity and our risk. So, are you securing your browser for the future you are already using today?
Connect with Karim Toubba
LastPass Threat Intelligence, Mitigation, and Escalation (TIME) team page
Phish Bowl Podcast
What really happens when AI helps teams write code faster, but everything else in the delivery process starts to slow down? In this episode of Tech Talks Daily, I'm joined once again by returning guest and friend of the show, Martin Reynolds, Field CTO at Harness. It has been two years since we last spoke, and a lot has changed since then. Martin has relocated from London to North Carolina, gaining back hours of his working week. Still, the bigger shift has been in how AI is reshaping software delivery inside modern enterprises. Our conversation centers on what Martin calls the AI velocity paradox. Development teams are producing more code at speed, often thanks to AI coding agents, yet testing, security, governance, and release processes are struggling to keep up. The result is a growing gap between how fast software is written and how safely it can be delivered. Martin shares research showing how this imbalance is already leading to production incidents, hidden vulnerabilities, and mounting technical debt. We also dig into why this AI-driven transition feels different from previous waves, such as cloud, mobile, or DevOps. Many of the same concerns around security, trust, and control still exist, but this time, everything is happening far faster. Martin explains why AI works best as a human amplifier, strengthening good engineering practices while exposing weak ones sooner than ever before. A significant theme in the episode is visibility. From shadow AI usage to expanding attack surfaces, Martin outlines why security teams are finding it harder to see where AI is being used and how data is flowing through systems. Rather than slowing teams down, he argues that the answer lies in embedding governance directly into delivery pipelines, making security automatic rather than an afterthought. We also explore the rise of agentic AI in testing, quality assurance, and security, where specialized agents act like virtual teammates. 
When well-designed, these agents help developers stay focused while improving reliability and resilience throughout the lifecycle. If you are responsible for engineering, platform, or security teams, this episode offers a grounded look at how to balance speed with responsibility in an AI-native world. As AI becomes part of every stage of software delivery, are your processes designed to safely absorb that change, or are they quietly becoming the bottleneck?
Useful Links
Learn More About Harness
The State of AI in Engineering
The State of AI Application Security
EngineeringX
Follow Harness on LinkedIn
Connect With Martin Reynolds
Thanks to our sponsors, Alcor, for supporting the show.
In this episode of Tech Talks Daily, I'm joined by Josh Haas, co-founder and co-CEO of Bubble, to unpack why the next phase of software creation is already taking shape. We talk about how the early excitement around AI-powered code generation delivered fast demos and instant gratification, but often fell apart when teams tried to turn those experiments into durable products that could grow with a business. Josh takes us back to Bubble's origins in 2012, long before AI hype cycles and trend-driven development. At the time, the idea was simple but ambitious: give more people the ability to build genuine software without spending months learning traditional programming. That early focus on visual development now feels timely again, especially as builders wrestle with the limits of black-box AI tools that hide logic until something breaks. We spend time on where vibe coding struggles in practice. Josh explains why speed alone is never enough once customers, payments, and sensitive data are involved. As he points out, most product requirements only surface after users arrive, and those edge cases are exactly where opaque AI-generated code can become risky. If you cannot see how your system works, you cannot truly own it, secure it, or fix it when something goes wrong. The conversation also digs into Bubble's hybrid approach, blending AI agents with visual development. Rather than asking builders to trust an AI unquestioningly, Bubble's model emphasizes clarity, auditability, and shared responsibility between humans and machines. Josh explains how visual logic makes software behavior explicit, helping teams understand rules, permissions, and workflows before they cause real-world problems. I learn how this mindset has helped Bubble-powered apps process over $1.1 billion in payments every year, a level of scale that leaves no room for guesswork.
We also explore Bubble AI Agent, where conversational AI meets visual editing, and why transparency and control matter more than flashy demos. From governance and rollback logs to builder accountability, this episode looks at what it actually takes to build software that survives beyond the first launch. If you are building with AI or thinking about how software development is changing, this episode offers a grounded perspective on what comes after the hype fades. As AI tools become more powerful, the real question is whether they help you understand your product better over time, or slowly disconnect you from it. Which path should builders choose right now? Useful Links Connect with Josh Haas Learn More About Bubble Thanks to our sponsors, Alcor, for supporting the show.
How do you turn a developer-first product into a growth engine without losing trust, clarity, or focus along the way? In this episode of Tech Talks Daily, I'm joined by Sanjay Sarathy, VP of Developer Experience and Self Service at Cloudinary, for a grounded and thoughtful conversation about product-led growth when developers sit at the center of the story. Sanjay operates at a rare intersection. He leads Cloudinary's high-volume self-service motion while also caring for the developer community that fuels adoption, advocacy, and long-term loyalty. That dual perspective, part business, part builder, shapes everything we discuss. Our conversation picks up on a theme I have been exploring across recent episodes. When technical work is explained clearly, whether that is security, performance, or reliability, it stops being background noise and starts supporting growth. Sanjay shares how Cloudinary approached this from day one, starting with founders who were developers themselves and carried a deep respect for developer trust into the company's DNA. Documentation that reflects reality, platforms that behave exactly as promised, and support that shows up early rather than as an afterthought all play a part. What stood out to me was how early Cloudinary invested in technical support, even before many traditional growth motions were in place. That decision shaped a self-service experience that still feels human at scale. With thousands of developer sign-ups every day and millions of developers using the platform, Sanjay explains how trust compounds into referrals, word of mouth, and sustained adoption. We also dig into developer advocacy and why community is rarely a single thing. Developers gather around frameworks, tools, workflows, and shared problems, and Cloudinary has learned to meet them where they already are rather than forcing them into a single branded space. 
From React and Next.js users to enterprise advisory boards, feedback loops become part of the product itself. As AI reshapes how software is built and developer tools become more crowded, Sanjay offers a clear-eyed view on what separates companies that grow steadily from those that burn bright and stall. Profitability, experimentation with intent, and the discipline to double down on what works all feature heavily in his thinking. It is a conversation rooted in experience rather than theory. If you care about product-led growth, developer trust, or building platforms that scale without losing their soul, this episode offers plenty to think about. As always, I would love to hear your perspective too. How do you see developer communities shaping the next phase of product growth, and where do you think companies still get it wrong?
Why do today's most powerful AI systems still struggle to explain their decisions, repeat the same mistakes, and undermine trust at the very moment we are asking them to take on more responsibility? In this episode of Tech Talks Daily, I'm joined by Artur d'Avila Garcez, Professor of Computer Science at City St George's, University of London, and one of the early pioneers of neurosymbolic AI. Our conversation cuts through the noise around ever-larger language models and focuses on a deeper question many leaders are now grappling with. If scale alone cannot deliver reliability, accountability, or genuine reasoning, what is missing from today's AI systems? Artur explains neurosymbolic AI in clear, practical terms as the integration of neural learning with symbolic reasoning. Deep learning excels at pattern recognition across language, images, and sensor data, but it struggles with planning, causality, and guarantees. Symbolic AI, by contrast, offers logic, rules, and explanations, yet falters when faced with messy, unstructured data. Neurosymbolic AI aims to bring these two worlds together, allowing systems to learn from data while reasoning with knowledge, producing AI that can justify decisions and avoid repeating known errors. We explore why simply adding more parameters and data has failed to solve hallucinations, brittleness, and trust issues. Artur shares how neurosymbolic approaches introduce what he describes as software assurances, ways to reduce the chance of critical errors by design rather than trial and error. From self-driving cars to finance and healthcare, he explains why combining learned behavior with explicit rules mirrors how high-stakes systems already operate in the real world. A major part of our discussion centers on explainability and accountability. Artur introduces the neurosymbolic cycle, sometimes called the NeSy cycle, which translates knowledge into neural networks and extracts knowledge back out again.
This two-way process opens the door to inspection, validation, and responsibility, shifting AI away from opaque black boxes toward systems that can be questioned, audited, and trusted. We also discuss why scaling neurosymbolic AI looks very different from scaling deep learning, with an emphasis on knowledge reuse, efficiency, and model compression rather than ever-growing compute demands. Looking ahead, from domain-specific deployments already happening today to longer-term questions around energy use, sustainability, and regulation, Artur offers a grounded view on where this field is heading and what signals leaders should watch for as neurosymbolic AI moves from research into real systems. If you care about building AI that is reliable, explainable, and trustworthy, this conversation offers a refreshing and necessary perspective. As the race toward more capable AI continues, are we finally ready to admit that reasoning, not just scale, may decide what comes next, and what kind of AI do we actually want to live with? Useful Links Neurosymbolic AI (NeSy) Association website Artur's personal webpage at City St George's, University of London Co-authored book titled "Neural-Symbolic Learning Systems" The article about neurosymbolic AI and the road to AGI The Accountability in AI article Reasoning in Neurosymbolic AI Neurosymbolic Deep Learning Semantics
Why does healthcare keep investing in new technology while so many clinicians feel buried under paperwork and admin work that has nothing to do with patient care? In this episode of Tech Talks Daily, I'm joined by Dr. Rihan Javid, psychiatrist, former attorney, and co-founder and president of Edge. Our conversation cuts straight into an issue that rarely gets the attention it deserves, the quiet toll that administrative overload takes on doctors, care teams, and ultimately patients. Nearly half of physicians now link burnout to paperwork rather than clinical work, and Rihan explains why this problem keeps slipping past leadership discussions, even as budgets for digital tools continue to rise. Drawing on his experience inside hospitals and clinics, Rihan shares how operational design shapes outcomes in ways many healthcare leaders underestimate. We talk about why short-term staffing fixes often create new problems down the line, and how practices that invest in stable, well-trained remote administrative teams see real improvements. That includes faster billing cycles, fewer errors, and more time back for clinicians who want to focus on care rather than forms. What stood out for me was his framing of workforce infrastructure as a performance driver rather than a compliance box to tick. We also dig into how hybrid operations are becoming the default model. Local clinicians working alongside remote admin teams, supported by AI-assisted workflows, are now common across healthcare. Rihan is clear that while automation and AI can remove friction and cost, human oversight still matters deeply in high-compliance environments. Trust, accuracy, and patient confidence depend on knowing where automation fits and where human judgment must stay firmly in place. Another part of the discussion that stuck with me was Rihan's idea that stability is emerging as a better success signal than raw cost savings. 
High turnover may look efficient on paper, but it quietly limits a clinic's ability to grow, retain knowledge, and improve patient outcomes. We unpack why consistent administrative support can influence revenue cycles, satisfaction, and long-term resilience in ways traditional metrics often miss. If you're a healthcare leader, operator, or technologist trying to understand how AI, remote teams, and smarter operations can work together without losing trust or care quality, this conversation offers plenty to reflect on. As healthcare systems rethink how work gets done behind the scenes, what would it look like if stability and clinician well-being were treated as core performance measures rather than afterthoughts, and how might that change the future of care? Useful Links Connect with Dr. Rihan Javid Edge Health Rinova AI Thanks to our sponsors, Alcor, for supporting the show.
Why do small business leaders keep buying more software yet still feel like they are drowning in logins, dashboards, and unfinished work? In this episode of Tech Talks Daily, I sit down with Jesse Lipson, founder and CEO of Levitate, to unpack a frustration I hear from business owners almost daily. After years of being pitched yet another tool, many leaders now spend hours each week troubleshooting software instead of serving customers. Jesse brings a grounded perspective shaped by decades of building SaaS companies, including bootstrapping ShareFile before its acquisition by Citrix. What stood out to me immediately was how clearly he articulates where the current software model has broken down for small businesses. We talk about why adding more apps has not translated into better outcomes, especially for teams without dedicated specialists in marketing, finance, or sales. Jesse explains how traditional software often solves only part of the problem, leaving owners to become accidental experts in accounting, marketing strategy, or customer communications just to make the tools usable. From there, our conversation shifts toward what he believes will actually matter as AI adoption matures. Rather than chasing full automation or shiny new dashboards, Jesse argues that the real opportunity lies in blending intelligence with human guidance, allowing AI to work quietly behind the scenes while people remain the face of authentic relationships. A big part of our discussion centers on trust and connection in an AI-saturated world. Jesse shares why customers have become incredibly good at spotting automated communication and why relationship-based businesses cannot afford to lose the human element. We explore how AI can act as a second brain, helping business owners remember details, follow up at the right moments, and show up more thoughtfully, without crossing the line into impersonal automation that turns customers away.
His examples, from marketing emails to customer support, make it clear that technology should support better relationships rather than replace them. We also look ahead to what small businesses should realistically focus on as AI evolves. Jesse offers practical guidance on getting started, from everyday use of conversational AI, to building internal documentation that allows systems to work more effectively, and eventually moving toward agent-based workflows that can take on real operational tasks. Throughout the conversation, he keeps returning to the same idea, that AI works best when it helps people become the kind of business leaders they already want to be, more present, more consistent, and more human. If you are a founder, operator, or small business leader feeling overwhelmed by tools that promise productivity but deliver friction, this episode offers a refreshing reset. As AI becomes more capable and more embedded in daily work, the real question is not how many systems you deploy, but whether they help you build stronger, more genuine relationships, so how are you choosing to use AI to support the human side of your business rather than bury it? Useful Links Connect with Jesse Lipson Connect with Jesse on X Learn more about Levitate
What happens when power, rather than compute, becomes the limiting factor for AI, robotics, and industrial automation? In this episode of Tech Talks Daily, I'm joined by Ramesh Narasimhan from Nyobolt to unpack a challenge that is quietly reshaping modern infrastructure. As AI training and inference workloads grow more dynamic, power demand is no longer predictable or steady. It can spike and drop in milliseconds, creating stress on systems that were never designed for this level of volatility. We talk about why data center operators, automation leaders, and industrial firms are being forced to rethink how energy is delivered, managed, and scaled. Our conversation moves beyond AI headlines and into the less visible constraints holding progress back. Ramesh explains how automation growth, particularly in robotics and autonomous mobile robot fleets, has exposed hidden inefficiencies. Charging downtime, thermal limits, and oversized systems are eroding productivity in warehouses and factories that aim to run around the clock. Instead of expanding physical footprints or adding redundant capacity, many operators are questioning whether the energy layer itself has become outdated. One of the themes that stood out for me is how energy has shifted from a background utility to a board-level concern. Power density, resilience, and cycle life are now discussed with the same urgency as compute performance or sensor accuracy. Ramesh shares why executives across logistics, automotive, advanced manufacturing, and AI infrastructure are starting to see energy strategy as a direct driver of uptime, cost control, and competitive advantage. We also explore the industry-wide push toward high-power, high-uptime operations. As businesses demand systems that can stay online continuously, the pressure is on energy technologies to respond faster, charge quicker, and occupy less space. 
This raises difficult questions about oversizing infrastructure for rare peak loads versus designing smarter systems that can flex in real time without waste. If you are building or operating AI clusters, robotics platforms, or industrial automation at scale, this episode offers a clear-eyed look at why energy systems may be the next major bottleneck and opportunity. As power becomes inseparable from performance, how ready is your organization to treat energy as a strategic asset rather than an afterthought?
What happens when artificial intelligence starts accelerating cyberattacks faster than most organizations can test, fix, and respond? In this fast-tracked episode of Tech Talks Daily, I sat down with Sonali Shah, CEO of Cobalt, to unpack what real-world penetration testing data is revealing about the current state of enterprise security. With more than two decades in cybersecurity and a background that spans finance, engineering, product, and strategy, Sonali brings a grounded, operator-level view of where security teams are keeping up and where they are quietly falling behind. Our conversation centers on what happens when AI moves from an experiment to an attack surface. Sonali explains how threat actors are already using the same AI-enabled tools as defenders to automate reconnaissance, identify vulnerabilities, and speed up exploitation. We discuss why this is no longer theoretical, referencing findings from companies like Anthropic, including examples where models such as Claude have demonstrated both power and unpredictability. The takeaway is sobering but balanced. AI can automate a large share of the work, but human expertise still plays a defining role, both for attackers and defenders. We also dig into Cobalt's latest State of Pentesting data, including why median remediation times for serious vulnerabilities have improved while overall closure rates remain stubbornly low. Sonali breaks down why large enterprises struggle more than smaller organizations, how legacy systems slow progress, and why generative AI applications currently show some of the highest risk with some of the lowest fix rates. As more companies rush to deploy AI agents into production, this gap becomes harder to ignore. One of the strongest themes in this episode is the shift from point-in-time testing to continuous, programmatic risk reduction. 
Sonali explains what effective continuous pentesting looks like in practice, why automation alone creates noise and friction, and how human-led testing helps teams move from assumptions to evidence. We also address a persistent confidence gap, where leaders believe their security posture is strong, even when testing shows otherwise. We close by tackling one of the biggest myths in cybersecurity: the idea that security is ever finished. It is a constant process of preparation, testing, learning, and improvement. The organizations that perform best accept this reality and build security into daily operations rather than treating it as a one-off task. So as AI continues to accelerate both innovation and attacks, how confident are you that your security program is keeping pace, and what would continuous testing change inside your organization? I would love to hear your thoughts. Useful Links Connect with Sonali Shah Learn more about Cobalt Check out the Cobalt Learning Center State of Pentesting Report Thanks to our sponsors, Alcor, for supporting the show.
What happens when AI stops talking and starts working, and who really owns the value it creates? In this episode of Tech Talks Daily, I'm joined by Sina Yamani, founder and CEO of Action Model, for a conversation that cuts straight to one of the biggest questions hanging over the future of artificial intelligence. As AI systems learn to see screens, click buttons, and complete tasks the way humans do, power and wealth are concentrating fast. Sina argues that this shift is happening far quicker than most people realize, and that the current ownership model leaves everyday users with little say and even less upside. Sina shares the thinking behind Action Model, a community-owned approach to autonomous AI that challenges the idea that automation must sit in the hands of a few giant firms. We unpack the concept of Large Action Models, AI systems trained to perform real online workflows rather than generate text, and why this next phase of AI demands a very different kind of training data. Instead of scraping the internet in the background, Action Model invites users to contribute actively, rewarding them for helping train systems that can navigate software, dashboards, and tools just as a human worker would. We also explore ActionFi, the platform's outcome-based reward layer, and why Sina believes attention-based incentives have quietly broken trust across Web3. Rather than paying for likes or impressions, ActionFi focuses on verifying real actions across the open web, even when no APIs or integrations exist. That raises obvious questions around security and privacy. This conversation does not shy away from the uncomfortable parts. We talk openly about job displacement, the economic reality facing businesses, and why automation is unlikely to slow down. Sina argues that resisting change is futile, but shaping who benefits from it remains possible. 
He also reflects on lessons from his earlier fintech exit and how movements grow when people feel they are pushing back against an unfair system. By the end of the episode, we look ahead to a future where much of today's computer-based work disappears and ask what success and failure might look like for a community-owned AI model operating at scale. If AI is going to run more of the internet on our behalf, should the people training it have a stake in what it becomes, and would you trust an AI ecosystem owned by its users rather than a handful of billionaires? Useful Links Connect with Sina Yamani on LinkedIn or X Learn more about the Action Model Follow on X Learn more about the Action Model browser extension Check out the whitelabel integration docs Join their Waitlist Join their Discord community Thanks to our sponsors, Alcor, for supporting the show.
What does it really take to remove decades of technical debt without breaking the systems that still keep the business running? In this episode of Tech Talks Daily, I sit down with Pegasystems leaders Dan Kasun, Head of Global Partner Ecosystem, and John Higgins, Chief of Client and Partner Success, to unpack why legacy modernization has reached a breaking point, and why AI is forcing enterprises to rethink how software is designed, sold, and delivered. Our conversation goes beyond surface-level AI promises and gets into the practical reality of transformation, partner economics, and what actually delivers measurable outcomes. We explore how Pega's AI-powered Blueprint is changing the entry point to enterprise-grade workflows, turning what used to be long, expensive discovery phases into fast, collaborative design moments that business and technology teams can engage with together. Dan and John explain why the old "wrap and renew" approach to legacy systems is quietly compounding technical debt, and why reimagining workflows from the ground up is becoming essential for organizations that want to move toward agentic automation with confidence. The discussion also dives into Pega's deep collaboration with Amazon Web Services, including how tools like AWS Transform and Blueprint work together to accelerate modernization at scale. We talk candidly about the evolving role of partners, why the idea of partners as an extension of a sales force is outdated, and how marketplaces are reshaping buying, building, and operating enterprise software. Along the way, we tackle some uncomfortable truths about AI hype, technical debt, and why adding another layer of technology rarely fixes the real problem. This is an episode for anyone grappling with legacy systems, skeptical of quick-fix AI strategies, or rethinking how partner ecosystems need to operate in a world where speed, clarity, and accountability matter more than ever. 
As enterprises move toward multi-vendor, agent-driven environments, are we finally ready to retire legacy thinking along with legacy systems, or are we still finding new ways to delay the inevitable? Useful Links Connect with Dan Kasun Connect with John Higgins Learn more about Pega Blueprint Thanks to our sponsors, Alcor, for supporting the show.
What does it really take to move AI from proof-of-concept to something that delivers value at scale? In this episode of Tech Talks Daily, I'm joined by Simon Pettit, Area Vice President for the UK and Ireland at UiPath, for a grounded conversation about what is actually happening inside enterprises as AI and automation move beyond experimentation. Simon brings a refreshingly practical perspective shaped by an unconventional career path that spans the Royal Navy, nearly two decades at NetApp, and more than seven years at UiPath. We talk about why the UK and Ireland remain a strategic region for global technology adoption, how London continues to play a central role for companies expanding into Europe, and why AI momentum in the region is very real despite the broader economic noise. A big part of our discussion focuses on why so many organizations are stuck in pilot mode. Simon explains how hype, fragmented experimentation, and poor qualification of use cases often slow progress, while successful teams take a very different approach. He shares real examples of automation already delivering measurable outcomes, from long-running public sector programs to newer agent-driven workflows that are now moving into production after clear ROI validation. We also explore where the next wave of challenges is emerging. As agentic AI becomes easier for anyone to create, Simon draws a direct parallel to the early days of cloud computing and VM sprawl. Visibility, orchestration, and cost control are becoming just as important as innovation itself. Without them, organizations risk losing control of workflows, spend, and accountability as agents multiply across the business. Looking ahead, Simon outlines why AI success will depend on ecosystems rather than single platforms. Partnerships, vertical solutions, and the ability to swap technologies as the market evolves will shape how enterprises scale responsibly. 
From automation in software testing to cross-functional demand coming from HR, finance, and operations, this conversation captures where AI is delivering today and where the real work still lies. If you're trying to separate AI momentum from AI noise, this episode offers a clear, experience-led view of what it takes to turn potential into progress. What would need to change inside your organization to move from pilots to production with confidence? Useful Links Learn more about Simon Pettit Connect with UiPath Follow on LinkedIn Thanks to our sponsors, Alcor, for supporting the show.
What happens when speed, scale, and convenience start to erode trust in the images brands rely on to tell their story? In this episode of Tech Talks Daily, I spoke with Dr. Rebecca Swift, Senior Vice President of Creative at Getty Images, about a growing problem hiding in plain sight, the rise of low-quality, generic, AI-generated visuals and the quiet damage they are doing to brand credibility. Rebecca brings a rare perspective to this conversation, leading a global creative team responsible for shaping how visual culture is produced, analyzed, and trusted at scale. We explore the idea of AI "sloppification," a term that captures what happens when generative tools are used because they are cheap, fast, and available, rather than because they serve a clear creative purpose. Rebecca explains how the flood of mass-produced AI imagery is making brands look interchangeable, stripping visuals of meaning, craft, and originality. When everything starts to look the same, audiences stop looking altogether, or worse, stop trusting what they see. A central theme in our discussion is transparency. Research shows that the majority of consumers want to know whether an image has been altered or created using AI, and Rebecca explains why this shift matters. For the first time, audiences are actively judging content based on how it was made, not just how it looks. We talk about why some brands misread this moment, mistaking AI usage for innovation, only to face backlash when consumers feel misled or talked down to. Rebecca also unpacks the legal and ethical risks many companies overlook in the rush to adopt generative tools. From copyright exposure to the use of non-consented training data, she outlines why commercially safe AI matters, especially for enterprises that trade on trust. We discuss how Getty Images approaches AI differently, with consented datasets, creator compensation, and strict controls designed to protect both brands and the creative community. 
The conversation goes beyond risk and into opportunity. Rebecca makes a strong case for why authenticity, real people, and human-made imagery are becoming more valuable, not less, in an AI-saturated world. We explore why video, photography, and behind-the-scenes storytelling are regaining importance, and why audiences are drawn to evidence of craft, effort, and intent. As generative AI becomes impossible to ignore, this episode asks a harder question. Are brands using AI as a thoughtful tool to support creativity, or are they trading long-term trust for short-term convenience, and will audiences continue to forgive that choice? Useful Links Connect with Dr. Rebecca Swift on LinkedIn VisualGPS Creative Trends Follow on Instagram and LinkedIn Thanks to our sponsors, Alcor, for supporting the show.
What actually happens when a company loses control of its own voice in a world full of channels, platforms, and constant noise? In this episode of Tech Talks Daily, I sat down with Joshua Altman, founder of beltway.media, to unpack what corporate communication really means in 2026 and why it has quietly become one of the most misunderstood leadership functions inside modern organizations. Joshua describes his work as a fractional chief communications officer, a role that sits above individual campaigns, tools, or channels and focuses instead on perception, trust, and consistency across everything a company says and does. Our conversation starts by challenging the assumption that communication is something you "turn on" when a product launches or a crisis hits. Joshua explains why corporate communication is not project-based and not owned by marketing alone. It touches internal updates, investor messaging, brand signals, packaging, email, social platforms, and even the tools teams choose to use every day. If it communicates with internal or external audiences and shapes how the company is perceived, it belongs in the communications function. When that function is missing or fragmented, confusion and noise tend to fill the gap. We also explored why communication has arguably become harder, not easier, despite the explosion of collaboration tools. Email was meant to simplify work, then Slack was meant to replace email, and now AI assistants are transcribing every meeting and surfacing more content than anyone can realistically process. Joshua makes a strong case for simplicity, clarity, and focus, arguing that organizations need to pick channels intentionally and use them well rather than spreading messages everywhere and hoping something lands. Technology naturally plays a big role in the discussion. 
From the shift away from tape-based media and physical workflows to the accessibility of live global collaboration and affordable computing power, Joshua reflects on how dramatically the workplace has changed since he started his career in video news production. He also shares a grounded view on AI, where it adds real value in speeding up research and reducing busywork, and where human judgment and storytelling still matter most. Toward the end of the conversation, we get into ROI, a question every leader eventually asks. Joshua offers a practical way to think about it, starting with the simple fact that founders, operators, and technical leaders get time back when they no longer have to manage communications themselves. From there, alignment, clarity, and consistency compound over time, even if the impact is not always visible in a single metric. As organizations look ahead and try to make sense of AI, platform shifts, and ever-shorter attention spans, are we investing enough thought into how our companies actually communicate, or are we still mistaking volume for clarity? Useful Links Connect with Joshua Altman Learn more about beltway.media Thanks to our sponsors, Alcor, for supporting the show.
What if your AI systems could explain why something will happen before it does, rather than simply reacting after the damage is done? In this episode of Tech Talks Daily, I sat down with Zubair Magrey, co-founder and CEO of Ergodic AI, to unpack a different way of thinking about artificial intelligence, one that focuses on understanding how complex systems actually behave. Zubair's journey begins in aerospace engineering at Rolls-Royce, moves through a decade of large-scale enterprise AI programs at Accenture, and ultimately leads to building Ergodic, a company developing what he describes as world models for enterprise decision making. World models are often mentioned in research circles, but rarely explained in a way that business leaders can connect to real operational decisions. In our conversation, Zubair bridges that gap clearly. Instead of training AI to spot patterns in past data and assume the future will look the same, world-model AI focuses on cause and effect. It builds a structured representation of how an organization works, how different parts interact, and how actions ripple through the system over time. The result is an AI approach that can simulate outcomes, test scenarios, and help teams understand the consequences of decisions before they commit to them. We explored why this matters so much as organizations move toward agentic AI, where systems are expected to recommend or even execute actions autonomously. Without an understanding of constraints, dependencies, and system dynamics, those agents can easily produce confident but unrealistic recommendations. Zubair explains how Ergodic uses ideas from physics and systems theory to respect real-world limits like capacity, time, inventory, and causality, and why ignoring those principles leads to fragile AI deployments that struggle under pressure. The conversation also gets practical. 
Zubair shares how world-model simulations are being used in supply chain, manufacturing, automotive, and CPG environments to detect early risks, anticipate disruptions, and evaluate trade-offs before problems cascade across customers and regions. We discuss why waiting for perfect data often stalls AI adoption, how Ergodic's data-agnostic approach works alongside existing systems, and what it takes to deliver ROI that teams actually trust and use. Finally, we step back and look at the organizational side of AI adoption. As AI becomes embedded into daily workflows, cultural change, experimentation, and trust become just as important as models and metrics. Zubair offers a grounded view on how leaders can prepare their teams for faster cycles of change without losing confidence or control. As enterprises look ahead to a future shaped by autonomous systems and real-time decision making, are we building AI that truly understands how our organizations work, or are we still guessing based on the past, and what would it take to change that? Useful Links Connect with Zubair Magrey Learn more about Ergodic AI Thanks to our sponsors, Alcor, for supporting the show.
What does it actually take to build trust with developers when your product sits quietly inside thousands of other products, often invisible to the people using it every day? In this episode of Tech Talks Daily, I sat down with Ondřej Chrastina, Developer Relations at CKEditor, to unpack a career shaped by hands-on experience, curiosity, and a deep respect for developer time. Ondřej's story starts in QA and software testing, moves through development and platform work, and eventually lands in developer relations. What makes his perspective compelling is that none of these roles felt disconnected. Each one sharpened his understanding of real developer friction, the kind you only notice when you have lived with a product day in and day out. We talked about what changes when you move from monolithic platforms to API-first services, and why developer relations looks very different depending on whether your audience is an application developer, a data engineer, or an integrator working under tight delivery pressure. Ondřej shared how his time at Kentico, Kontent.ai, and Ataccama shaped his approach to tooling, documentation, and examples. For him, theory rarely lands. Showing something that works, even in a small or imperfect way, tends to earn attention and respect far faster. At CKEditor, that thinking becomes even more interesting. The editor is everywhere, yet rarely recognized. It lives inside SaaS platforms, internal tools, CRMs, and content systems, quietly doing its job. We explored how developer experience matters even more when the product itself fades into the background, and why long-term maintenance, support, and predictability often outweigh short-term feature excitement. Ondřej also explained why building instead of buying an editor is rarely as simple as teams expect, especially when standards, security, and future updates enter the picture. We also got into the human side of developer relations. 
Balancing credibility with business goals, staying useful rather than loud, and acting as a bridge between engineering, product, marketing, and the outside world. Ondřej was refreshingly honest about the role ego can play, and why staying close to real usage is the fastest way to keep yourself grounded. If you care about developer experience, internal tooling, or how invisible infrastructure shapes modern software, this conversation offers plenty to reflect on. What have you seen work, or fail, when it comes to earning developer trust, and where do you think developer relations still gets misunderstood? Useful Links Connect with Ondřej Chrastina Learn more about CKEditor Thanks to our sponsors, Alcor, for supporting the show.
If artificial intelligence is meant to earn trust anywhere, should banking be the place where it proves itself first? In this episode of Tech Talks Daily, I'm joined by Ravi Nemalikanti, Chief Product and Technology Officer at Abrigo, for a grounded conversation about what responsible AI actually looks like when the consequences are real. Abrigo works with more than 2,500 banks and credit unions across the United States, many of them community institutions where every decision affects local businesses, families, and entire regional economies. That reality makes this discussion feel refreshingly practical rather than theoretical. We talk about why financial services has become one of the toughest proving grounds for AI, and why that is a good thing. Ravi explains why concepts like transparency, explainability, and auditability are not optional add-ons in banking, but table stakes. From fraud detection and lending decisions to compliance and portfolio risk, every model has to stand up to regulatory, ethical, and operational scrutiny. A false positive or an opaque decision is not just a technical issue, it can damage trust, disrupt livelihoods, and undermine confidence in an institution. A big focus of the conversation is how AI assistants are already changing day-to-day banking work, largely behind the scenes. Rather than flashy chatbots, Ravi describes assistants embedded directly into lending, anti-money laundering, and compliance workflows. These systems summarize complex documents, surface anomalies, and create consistent narratives that free human experts to focus on judgment, context, and relationships. What surprised me most was how often customers value consistency and clarity over raw speed or automation. We also explore what other industries can learn from community banks, particularly their modular, measured approach to adoption. 
With limited budgets and decades-old core systems, these institutions innovate cautiously, prioritizing low-risk, high-return use cases and strong governance from day one. Ravi shares why explainable AI must speak the language of bankers and regulators, not data scientists, and why showing the "why" behind a decision is essential to keeping humans firmly in control. As we look toward 2026 and beyond, the conversation turns to where AI can genuinely support better outcomes in lending and credit risk without sidelining human judgment. Ravi is clear that fully autonomous decisioning still has a long way to go in high-stakes environments, and that the future is far more about partnership than replacement. AI can surface patterns, speed up insight, and flag risks early, but people remain essential for context, empathy, and final accountability. If you're trying to cut through the AI noise and understand how trust, governance, and real-world impact intersect, this episode offers a rare look at how responsible AI is actually being built and deployed today. And once you've listened, I'd love to hear your perspective. Where do you see AI earning trust, and where does it still have something to prove?
What really happens after the startup advice runs out and founders are left facing decisions no pitch deck ever prepared them for? In this episode of Tech Talks Daily, I sit down with Vijay Rajendran, a founder, venture capitalist, UC Berkeley instructor, and author of The Funding Framework, to discuss the realities of company building that rarely appear on social feeds or investor blogs. Vijay has spent years working alongside founders at the sharpest end of growth, from early fundraising conversations through to the personal and leadership shifts that scaling demands. That experience shapes a conversation that feels refreshingly honest, thoughtful, and grounded in lived reality. We explore why building something people actually want sounds simple in theory yet proves brutally difficult in practice. Vijay explains how timing, learning velocity, and the willingness to adapt often matter more than stubborn vision, and why many founders misunderstand what momentum really looks like. From there, the discussion moves into investor relationships, not as transactional events, but as long-term partnerships that require founders to shift their mindset from defense to evaluation. The emotional and psychological dynamics of fundraising come into focus, especially the moments when founders underestimate how much power they actually have in shaping those relationships. A big part of this conversation centers on leadership identity. Vijay breaks down the messy transition from being the "chief everything officer" to becoming a true chief executive, and why the most overlooked stage in that journey is learning how to enable others. We talk about the point where founders become the bottleneck, often without realizing it, and why this tends to surface as teams grow and decisions start happening outside the founder's direct line of sight. The plateau many companies hit around scale becomes less mysterious when viewed through this lens. 
We also challenge some of the most popular startup advice circulating online today, particularly around fundraising volume, pitching styles, and the idea that persistence alone guarantees outcomes. Vijay shares why treating fundraising like enterprise sales, focusing on alignment over volume, and listening more than pitching often leads to better results. The conversation closes with practical reflections on personal growth, co-founder dynamics, and how leaders can regain clarity during periods of pressure without stepping away from responsibility. If you are building a company, leading a team, or questioning whether you are evolving as fast as your business demands, this episode will likely hit closer to home than you expect. And once you've listened, I'd love to hear what resonated most with you and the leadership questions you're still sitting with after the conversation. Useful Links Connect with Vijay Rajendran The Funding Framework Startup Pitch Deck Thanks to our sponsors, Alcor, for supporting the show.