Fed up with tech hype? Looking for a tech podcast where you can learn from tech leaders and startup stories about how technology is transforming businesses and reshaping industries? In this daily tech podcast, Neil interviews tech leaders, CEOs, entrepreneurs, futurists, technologists, thought lead…
The Tech Blog Writer Podcast is a must-listen for anyone interested in the intersection of technology and various industries. Hosted by Neil Hughes, this podcast features interviews with a wide range of guests, including visionary entrepreneurs and industry experts. Neil has a remarkable talent for breaking down complex topics into easily understandable discussions, making it accessible to listeners from all backgrounds. One of the best aspects of this podcast is the diversity of guests, as they come from different industries and share their cutting-edge technology solutions. It provides a great source of inspiration and knowledge for staying up to date with the latest advancements in tech.
The worst aspect of The Tech Blog Writer Podcast is that sometimes the discussions can feel a bit rushed due to the time constraints of each episode. With so many interesting guests and topics to cover, it would be great if there was more time for in-depth conversations. Additionally, while Neil does an excellent job at selecting diverse guests, occasionally it would be beneficial to have more representation from underrepresented communities in tech.
In conclusion, The Tech Blog Writer Podcast is an excellent resource for those looking to stay informed about the latest tech advancements while learning from visionary entrepreneurs across various industries. Neil's ability to break down complex topics and his engaging interviewing style make this podcast a valuable source of inspiration and knowledge. Despite some minor flaws, it remains a must-listen for anyone interested in staying up-to-date with cutting-edge technology solutions and developments.
Two years after our last conversation, Raj Koneru, CEO and Founder of Kore.ai, returns to discuss how the world of AI has changed and how much of it still needs to. When we first spoke, conversational AI was promising. Now it is powering over a billion interactions every day for companies like LG, Coca-Cola, and Blue Cross Blue Shield. Yet Raj argues that the next real breakthrough will not come from novelty, but from accessibility. In this episode, Raj explains why the future of AI depends on open collaboration rather than vendor lock-in. Kore.ai's partnerships with Microsoft, AWS, and G42's Inception in the UAE reflect a commitment to interoperability and shared innovation. He offers a rare look into what happens “above the line,” where enterprises actually design and deploy AI agents, compared to the massive “below the line” investments driving the hardware, cloud, and model layers of AI. For Raj, platforms like Kore.ai act as the bridge, translating technical potential into business outcomes. We also explore what true democratization of AI looks like in practice. Raj believes no-code platforms are key to giving both large and small businesses the power to build their own agents without deep technical skills. He discusses the challenges of scaling responsibly, managing latency, ensuring governance, and keeping AI secure and transparent. From the shift toward on-device AI in smartphones to the lessons learned from running one of the world's largest enterprise AI platforms, his perspective blends realism with optimism. This conversation is a reminder that progress in AI will not be defined by who owns the biggest model but by who makes the technology usable, ethical, and open to everyone. Raj's message is simple but powerful: read widely, question everything, and collaborate boldly.
In this episode, Mike Baker, Vice President and Global CISO at DXC Technology, says the cyber industry has been focusing on the wrong side of AI. He believes too many companies use it only to block threats instead of studying how criminals use it to scale phishing, bypass defenses, and deploy adaptive malware. Attackers are learning faster than ever, and security teams must catch up. Mike argues that defenders need to think differently and use AI as both protection and opportunity. He shares how DXC is already doing this. The company has brought autonomous AI agents into its security operations through a partnership with 7AI. These agents process alerts that used to require hours of human effort. The result is faster detection, less burnout, and more time for analysts to investigate real threats. By cutting manual work by more than eighty percent, DXC has shown how AI can make cybersecurity teams stronger, not smaller. Zero Trust remains a core part of DXC's strategy. Mike calls it a journey that never ends. It needs cultural change, constant learning, and leadership that keeps security invisible to end users. AI now plays a role here too, improving identity checks and spotting access issues in real time. Yet, he reminds us, AI still needs people in the loop for oversight and judgment. We also talk about supply chain risks. Too many companies still treat risk assessments as one-time tasks. Mike pushes for continuous monitoring and close collaboration with suppliers. He closes the conversation on a hopeful note. AI will not replace people in cybersecurity, he says. It will make their work more meaningful and more effective if used with care and common sense.
What happens when the future of teamwork collides with the power of AI? That's the question at the heart of this episode as Tiffany from Atlassian joins me from Barcelona during Team 25, where Atlassian is showcasing how AI-powered collaboration is redefining how work gets done. We talk about how Atlassian's mission to unleash the potential of every team is coming to life through its bold decisions, from sunsetting data center products to expanding its multi-cloud partnerships with Google. Tiffany offers a front-row view of how Atlassian's evolving cloud platform is designed to help customers work smarter while enabling secure, scalable innovation across some of the world's most complex enterprises. The conversation also uncovers the thinking behind the teamwork graph, Atlassian's powerful data intelligence layer that connects billions of work objects to create truly personalized AI experiences. Tiffany shares how companies like Royal Caribbean and Mercedes-Benz are already seeing measurable performance gains and how AI is becoming a real teammate that unifies knowledge, connects tools, and drives better outcomes. We discuss what it means to build a “system of work,” why flexibility and context matter, and how Atlassian's open approach allows teams to build custom systems tailored to their own culture and workflows. Beyond the technology, this is a story about continuous learning, adaptability, and human-centered progress. Tiffany's reflections on learning from the toughest customers, embracing change, and reimagining the browser as an active workspace reveal how Atlassian is blending AI with empathy and purpose. As AI becomes inseparable from teamwork, what steps will you take to unleash what your team can do next? I'd love to hear your thoughts.
As AI tools race into every corner of software development, a simple question keeps coming back to me. Will AI replace human testers, or will it force us to rethink what great testing looks like in the first place? In today's conversation, I talk with Santiago Komadina Geffroy, a Software Engineer at Jalasoft and an educator with Jala University, about what changes, what stays, and what teams should do next. Santiago shares how his day job and teaching intersect. He points to a gap he sees often. Engineers are experimenting with large language models without fully understanding how they work, which leads to overconfidence and avoidable rework. He argues for clearer interaction patterns between tools and people. Think less about magic prompts and more about protocols, context sharing, and agent-to-agent collaboration. That shift frees testers to do the thinking work that AI still struggles with, from exploratory testing and usability judgment to spotting the weird edge cases that only show up when real humans use real products. We also get into bias and ethics. AI is only as fair as the data it learns from, and that matters in healthcare, finance, and hiring, where a mistake can carry life-changing consequences. Santiago calls for stronger education around data quality, authorship, privacy, and environmental impact, not as a side note but as part of how engineers are trained. He believes governance helps teams move faster with fewer regrets when they take AI into production. Security sits in the mix too. Many AI tools need deep system access. If compromised, they can distort results or leak sensitive information. Santiago is candid about the limits of any single safeguard. He recommends a culture of shared responsibility where engineers understand when to call in security specialists and how to design workflows that keep humans in the loop for consequential decisions. We close with what Jalasoft has learned from building with AI inside a nearshore model in South America. More thinking time. Smaller, controllable scopes. Clear lines between routine automation and human judgment. The headline is simple. AI will change testing. Human testers will remain at the heart of quality.
Here's the thing. Most of us still picture a hotel lobby with a counter, a queue, and someone typing furiously while we wait after a long flight. In this episode, I sit with Richard Valtr, founder of Mews, to ask whether that scene is quietly fading. Backed by Tiger Global, Goldman Sachs, and Battery Ventures, Mews recently raised 75 million dollars to scale an AI-powered platform that already processes more than 10 billion dollars in payments each year. Richard argues the real bottleneck in hospitality isn't software. It's mindset. If hotels rethought workflows around guests rather than systems, the front desk would feel less like a checkpoint and more like a welcome. Richard shares the origin story of building for hoteliers as well as guests, and why the property management system should function like a central nervous system. He explains how automation handles the repetitive pieces of check-in so staff can actually look people in the eye and start a conversation. That's the promise of AI here. Not gimmicks, but orchestration across bookings, payments, inventory, and service so the boring parts disappear into the background and the human parts come forward. We also talk about underused tech. Richard uses a memorable comparison for many hotel platforms that have Ferrari-level capability but get driven like Volvos. The data is there. The intent to serve is there. What's missing is the leadership confidence to rewire the stack, measure outcomes, and keep pushing. When that happens, hotels stop thinking only in terms of rooms and start monetizing the full journey. Daybeds, coworking passes, last-mile upgrades, spa time after back-to-back meetings. AI can surface the right offer at the right moment without turning the experience into a sales pitch. By the end, Richard paints a picture of hospitality where screens fade, transactions happen on the guest's time, and every interaction feels more personal precisely because the admin has been taken out of the way. If you want a grounded view of how AI will change hotels without stripping away the reason we love staying in them, this conversation is a helpful place to start.
When a company quietly builds world-class storage and virtualization software for twenty years, it usually means they have been too busy solving real problems to shout about it. That is what makes euroNAS and its founder, Tvrtko Fritz, such an interesting story. In this episode, I reconnect with Tvrtko after meeting him on the IT Press Tour in Amsterdam to learn how his company evolved from “NAS for the masses” into a trusted enterprise alternative in a market filled with bigger names. Tvrtko shares how euroNAS began with a simple idea that administrators should not have to battle complex infrastructure to keep systems running. Over time, that belief shaped a complete platform covering hyper-converged virtualization, Ceph-based storage, and instant backup and recovery. He recalls the story of a dentist who lost a full day of work waiting for a slow restore, which inspired euroNAS to create instant recovery that restores in seconds rather than hours. We also discuss how their intuitive graphical interface has turned Ceph from a daunting project that once took a week to set up into something that can be configured in twenty minutes. That change has opened advanced storage to universities, managed service providers, and enterprises handling petabyte-scale workloads. We also tackle a topic that many in IT are thinking about right now: VMware. With licensing changes frustrating customers, Tvrtko explains how euroNAS has become the quiet plan B for many organizations seeking stability and control. Its perpetual per-node licensing model removes the pressure of forced subscriptions, while tools such as the VM import wizard make migration faster and less painful. What stands out most is that Tvrtko still takes part in customer support himself, using real conversations to guide product development and keep the company close to the people who depend on it. Looking ahead, Tvrtko outlines how euroNAS is growing through partnerships with major hardware vendors and through its expanding role in AI infrastructure, where demand for scalable storage continues to rise. The conversation highlights the value of engineering-led companies that build with care, focus on reliability, and give customers genuine ownership of their systems. If you want to understand what practical innovation looks like in enterprise storage, this episode will remind you why simplicity still wins.
AI hype has been loud for three years, but most leaders still tell me the real work begins after the demo. That was the starting point for my conversation with Christina Ellwood, co-founder of AI Realized, a community built to help enterprises move from pilots to production with less noise and more results. Christina has a calm, practical way of explaining why progress has accelerated from a tiny fraction of companies in production to roughly one in five this year, and why many of the remaining blockers have little to do with model choice and everything to do with people, policy, and permission to ship. We talk about the messy middle between a proof of concept and a live service that customers can rely on. According to Christina, the most complex problems are organizational. Teams need upskilling, guardrails, and clear deployment guidelines to ensure effective execution. Legal and brand risk create hesitation. Boards want more substantial evidence and better controls. That is where leadership shows up in a very human way. The skill she hears most often from successful program leads is humility. No one knows everything here, and the leaders who admit that, invite challenge, and keep learning are the ones getting to value without creating chaos. I loved her point that cross-organisational leadership is fast becoming the hidden superpower as AI connects systems and workflows that used to sit in separate silos. We also look forward to the 2025 AI Realized Summit, scheduled for November 5 in San Francisco. Attendance is intentionally capped at 500 to maintain high-quality conversation and genuine networking. Expect Fortune 2000 use cases across multiple industries, a healthy mix of predictive and generative work, and practical talk on small language models, multi-model strategies, and running models inside your security perimeter. Eric Siegel will keynote on combining predictive analytics with generative techniques, and you will hear from executives at companies including Amazon, Audible, Red Hat, and Zscaler. Christina highlights one example from Fandom that combines predictive ad targeting with generative tools to enhance brand safety and suitability, a trend I expect to see repeated throughout the day. If you are leading AI programs and need fewer slogans and more proof, this episode will feel like a deep breath. We explore how to move faster while staying responsible, why smaller and multi-model setups are gaining traction, and how to build confidence with your board without overpromising.
Finance leaders know the struggle of managing endless spreadsheets, juggling data from every corner of the business, and trying to plan for a world that changes by the hour. In this episode, I talk with Julio Martínez, Co-Founder and CEO of Abacum, about how his team is helping finance professionals move from reactive reporting to confident, real-time decision making. Abacum was recently named the fastest growing tech company in Spain by Deloitte after increasing revenue by 6,733 percent in just four years. Julio shares the story behind that growth and explains how finance teams are transforming from back-office operators into true strategic partners. He describes how Abacum's platform helps CFOs and FP&A teams create accurate forecasts, automate manual work, and build scenario models that answer “what if” questions in minutes instead of days. We also talk about the role of AI in finance and why current large language models are not yet reliable enough for quantitative use cases. Julio discusses the need for precision, the importance of a human in the loop, and how new hybrid approaches are shaping the future of financial planning. From Barcelona to New York, his journey reflects the global rise of data-driven finance and the growing strength of Spain's startup ecosystem. Julio also leaves listeners with a thoughtful recommendation, Meditations by Marcus Aurelius, a book that continues to inspire him to stay grounded amid rapid change. If you want to understand how technology is redefining financial planning and how strong foundations can fuel extraordinary growth, this conversation with Julio offers a rare look inside the engine of one of Europe's fastest-rising tech companies.
What happens when a CTO and a CIO of a global tech company sit down together to talk about AI? That's the starting point of today's episode, where I'm joined by Jeremy Ung, CTO at Blackline, and Sumit Johar, the company's CIO. Rather than chasing the hype, we focus on what AI really means for executive decision making, governance, and business outcomes. Both leaders open up about how their partnership is blurring the traditional lines between product and IT, and why the board is demanding answers on topics that once sat deep in the technology stack. Jeremy and Sumit explain why AI is not just another SaaS subscription and why expectations have changed so dramatically. For decades, technology was seen as predictable, a rules-based engine that followed instructions without error. AI feels different because it speaks, reasons, and sometimes makes mistakes. That human-like experience is what excites employees, but it is also what unsettles them. This is where education and governance come in, helping teams learn how to question, verify, and trace AI outputs before they make critical decisions. We also explore how AI agents are beginning to work across tools like SharePoint and email, raising new compliance and security questions that CIOs and CTOs must answer together. The conversation turns to AI sprawl, a problem that mirrors the SaaS explosion of a decade ago. With new AI tools emerging every week, enterprises risk overlapping investments and fragmented initiatives. Sumit shares how Blackline uses two governance councils to keep projects aligned. One is dedicated to risk, pulling in voices from legal, security, and privacy. The other is focused on transformation, evaluating whether requests for new AI capabilities make sense, or whether they duplicate what already exists. The signal that sprawl is taking root, he says, is when requests for tools suddenly jump from a few each month to a dozen. We also tackle the build versus buy dilemma. Budgets haven't magically increased just because AI is hot. Jeremy argues that building only makes sense when it reinforces a company's core advantage. Everything else should be bought, integrated, and kept flexible so that organizations can pivot as the AI landscape changes. Both leaders stress that trust, auditability, and value delivery must sit at the center of every investment decision.
Zeta Global's CTO, Chris Monberg, talks about building AI that helps brands grow with repeatable, scalable programs without losing the spark that makes a brand feel human. Zeta's promise is simple to say and hard to do. Help marketers deliver better results with less waste by pairing strong data, clear identity, and practical AI inside the Zeta Marketing Platform. What stood out first was Chris's view of design as a contact sport. He hires builders who live in the work, and he still enjoys rolling up his sleeves himself. That mindset shows up in how Zeta approaches AI for marketing. Rather than shouting for the next click, he wants systems that perceive intent and context. He described an early lesson from retail floors in Seattle. The best experience came from people who noticed a customer's posture and pace before speaking. Empathetic design translates that awareness into algorithms that understand latent signals and respond with care, not noise. We also dug into a tension many leaders feel. Automation is exciting, but nobody wants generic content. Chris answered with a practical frame. Give marketers a way to create a personal “super agent” that learns from their choices, their brand voice, and the paths they take through the platform. Offload the repetitive chores, keep creative control, and grow pride of ownership. That pride matters because it breeds adoption. When teams feel the system reflects them, they keep using it and keep improving it. Another thread was trust. In Chris's words, the market still underestimates what these tools can do, partly because users are unsure where the value comes from. Zeta is leaning into transparency so teams can see how decisions are made and how results tie back to their inputs. Data and identity are the moat, but privacy and compliance are the foundation. He was candid about the weekly grind of meeting new regulatory needs region by region. That operational discipline shapes how Zeta decides to build, buy, or partner. Acquisitions must make sense on day one and integrate fast, with people as the primary asset. Chris also spoke directly to younger builders who feel stuck. There are no shortcuts. The only way through is work, curiosity, and a willingness to learn in public. He sees small teams pushing new protocols and patterns forward, and he wants more marketers and technologists to join that frontier with clear eyes and a bias for doing. We closed on culture. Zeta Live in New York brings sports and tech onto the same stage, and there is a reason. When the wider world pays attention, ideas travel further. If you care about marketing that respects customers and still moves the needle, this episode will give you a practical blueprint. It is about AI that makes room for people, systems that earn trust, and a product leader who still enjoys getting a little grease under his nails.
I invited Michael Reitblat, CEO and founder of Forter, to unpack a reality many retailers are living with every day. Fraud is no longer a side issue. It shapes conversion rates, customer loyalty, and the bottom line. Michael argues that if you remove the fear of fraud, you unlock growth. That sounds bold, but his lens is practical. Replace guesswork with instant, consistent decisions and you improve both security and the checkout experience. Here's the thing. False declines feel like fraud in disguise. When good customers get blocked, they do not return. Michael explains how Forter uses real-time signals to say yes or no within the transaction, without adding friction. The promise is simple. If a buyer is genuine, let them through. If it is fraud, stop it and cover the chargeback. It is a clean model that puts accountability on the platform, not the merchant. We also talk about what happens when AI agents start buying on our behalf. If software is placing orders, refunding items, or filing disputes, identity and intent become fluid. Michael walks through how trust platforms need to reason about behavior across accounts, devices, and sessions. The goal is confidence at the moment of purchase without slowing anyone down. Michael shares how Forter's scope has expanded from blocking bad actors to enabling smart, business-wide decisions about customers. That means recognizing loyal buyers even if they shop across regions and brands, and spotting synthetic identities that mimic human patterns. It also means measuring success by approvals and lifetime value, not only by stopped attacks. Let me explain why this matters. Retailers are caught between two pains. Ease up and you invite chargebacks. Tighten controls and you lose revenue from good customers. Michael's point is that trust should be a growth lever. If the system is confident, the checkout stays smooth on web and mobile. If the system is unsure, it can ask for the least painful extra step rather than send a blanket decline. We close with practical guidance for leaders. Treat trust as a product. Give teams shared visibility into decisions. Align incentives so fraud, payments, product, and marketing are working from the same truth. Michael's vision is a world where anyone can transact with ease because fraud has been priced out of the experience. That is a conversation worth having, and one retailers can act on today.
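For anyone who wants to see the shape of that idea in code, here is a minimal sketch of a confidence-based checkout policy: approve the clearly genuine, decline the clearly fraudulent, and reserve extra friction for the uncertain middle. It is not Forter's model or API; the signals, weights, and thresholds below are hypothetical, chosen only to illustrate the approve / step-up / decline pattern Michael describes.

```python
from dataclasses import dataclass

@dataclass
class RiskSignals:
    # Hypothetical inputs a trust platform might score in real time.
    device_seen_before: bool
    shipping_matches_billing: bool
    velocity_last_hour: int      # orders from this identity in the past hour
    account_age_days: int

def risk_score(s: RiskSignals) -> float:
    """Toy scoring function: higher means more likely to be fraud."""
    score = 0.0
    if not s.device_seen_before:
        score += 0.3
    if not s.shipping_matches_billing:
        score += 0.25
    if s.velocity_last_hour > 3:
        score += 0.3
    if s.account_age_days < 7:
        score += 0.15
    return min(score, 1.0)

def decide_checkout(s: RiskSignals) -> str:
    """Three-way policy: approve the confident good, decline the confident bad,
    and reserve the least painful extra step for the uncertain middle instead
    of issuing blanket declines."""
    score = risk_score(s)
    if score < 0.3:
        return "approve"   # genuine buyer: let them through without friction
    if score > 0.7:
        return "decline"   # confident fraud: stop it and absorb the chargeback risk
    return "step_up"       # unsure: ask for one lightweight extra check

if __name__ == "__main__":
    returning = RiskSignals(True, True, 1, 400)
    suspicious = RiskSignals(False, False, 6, 1)
    print(decide_checkout(returning))   # approve
    print(decide_checkout(suspicious))  # decline
```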
Here's the thing. “Smart” has been the buzzword for years, but Richard Leurig argues we're on the cusp of something bolder. In our conversation, the Accruent president drew a clear line between buildings filled with connected systems and buildings that can sense, decide, and act without a person staring at a dashboard all day. Richard shared a retail story that sticks. By wiring refrigeration units with sensors and training models on billions of telemetry points, his team can spot failures 48 to 72 hours before lettuce wilts or milk spoils. That time window turns panic calls at 3 a.m. into planned daytime fixes. It cuts waste, protects revenue, and keeps customers from walking into empty shelves. The bigger idea is a shift from many panes of glass to no pane of glass. Instead of asking people to wrangle alerts, AI agents coordinate HVAC, security, and maintenance, then dispatch the right technician with the right part only when one is truly needed. That is the road to self-healing facilities.
Practicalities that matter now
Let me explain why this resonates across industries. Whether you run a hospital, a university, a factory, or a grocery chain, you're wrestling with aging infrastructure and a short supply of skilled workers. Richard sees the same pattern everywhere. Teams need guidance at the point of work, not another report. Natural language agents that answer plain questions and walk users through a task are winning hearts because they remove friction. Return-to-office adds another layer. Hybrid work has made space usage lumpy. Richard outlined how linking lease data, occupancy, and booking behavior helps leaders decide what to close, reshape, or scale. It also changes floor plans. When people do come in, they want project rooms and collaboration zones, not endless rows of cubicles. Retrofit is the sleeper story. You don't need a skyline of brand-new towers to get smarter. Low-cost sensors and targeted integrations are making older buildings more responsive than most people expect. That opens the door for progress without nine-figure capex.
Energy, sustainability, and proof
Boards want less energy spend and real emissions progress. The quickest wins are often hiding in plain sight. Richard walked through HVAC control that follows people, sunlight, and weather rather than fixed schedules. Lights that turn off when a room is empty are yesterday's news. Cooling only where teams are actually working is today's play. He also flagged a coming wave on factory floors. Many legacy motors and line components quietly draw more power than they should. Clip-on sensors can spot out-of-tolerance behavior so maintenance can fix the energy hog instead of replacing an entire line. That is the kind of operational change that lowers bills and supports sustainability targets with data, not slogans. Richard's timeline is refreshingly near term. He believes a large slice of the built environment will show real autonomy in three to five years. Not theory. Not demos. Everyday operations that quietly handle themselves until a human is truly required. If this conversation sparks an idea for your sites, stores, labs, or campuses, I want to hear how you're approaching it. What feels possible this quarter, and what still feels out of reach?
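To make the telemetry idea a little more concrete, here is a minimal sketch of the kind of drift detection that can flag a refrigeration unit before it fails outright. It is not Accruent's implementation; the rolling z-score, window size, and sample data are hypothetical stand-ins for what a production system would learn from billions of telemetry points.

```python
from statistics import mean, stdev

def drift_alerts(readings, window=12, z_threshold=3.0):
    """Flag readings that sit well above the recent baseline.

    readings: list of (hour, temperature_celsius) samples from one unit.
    A sustained upward drift often precedes outright failure, which is the
    window that turns a 3 a.m. emergency into a planned daytime fix.
    """
    alerts = []
    for i in range(window, len(readings)):
        baseline = [temp for _, temp in readings[i - window:i]]
        mu, sigma = mean(baseline), stdev(baseline)
        hour, temp = readings[i]
        if sigma > 0 and (temp - mu) / sigma > z_threshold:
            alerts.append((hour, temp))
    return alerts

if __name__ == "__main__":
    # Hypothetical hourly case temperatures: steady around 3 C, then drifting up.
    data = [(h, 3.0 + 0.1 * (h % 3)) for h in range(24)]
    data += [(24 + h, 3.2 + 0.4 * h) for h in range(6)]  # compressor starting to struggle
    for hour, temp in drift_alerts(data):
        print(f"hour {hour}: {temp:.1f} C above baseline, schedule maintenance")
```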
What if the biggest weakness in cybersecurity isn't a missing tool, but a cultural blind spot? That's the perspective of Dan Jones, Senior Security Advisor at Tanium, who joined me on Tech Talks Daily to share why he believes cybersecurity is fundamentally a people problem dressed up as a technology problem. Dan brings nearly three decades of experience in cyber operations, including leading cyber defence strategy for the UK Ministry of Defence. His career has shown him that technology alone doesn't secure organisations—it's the people at the front line, their leadership, and their ability to make the right decisions under pressure. He argues that while new tools flood the market every year, the make-or-break factor remains the same: how teams are led, supported, and empowered. In our conversation, Dan explains why leadership is often the overlooked part of cybersecurity, how culture shapes security outcomes, and why automation should be embraced not as a threat to jobs but as a way to give people time back for higher-value decision making. He shares examples from both military and enterprise contexts, showing how organisations succeed or fail based not on what tools they buy, but on how well they bring their people along for the journey. We also dig into one of today's hottest debates: the role of AI in cybersecurity. While many fear AI will displace jobs, Dan insists those fears are rooted in culture, not reality. He draws parallels to past industrial shifts, making the case that automation and orchestration are stepping stones that prepare teams for an AI-powered future—one where human judgment still sits firmly at the centre. This is a timely reminder for every leader and practitioner that cybersecurity is about more than firewalls and code. It's about trust, training, and people working together with the right tools at the right time. And yes, it's also about taking five minutes to brew a proper cup of tea—a lesson Dan believes says a lot about leadership and reflection. If you've ever wondered whether your organisation is focusing too much on tools and not enough on culture, this episode will make you stop and think. ********* Visit the Sponsor of Tech Talks Network: Land your first job in tech in 6 months as a Software QA Engineering Bootcamp with Careerist https://crst.co/OGCLA
Some interviews stick because they take a noisy topic and bring it back to reality. This was one of them. I spoke with Erin Gajdalo, CEO of Pluralsight, about what it actually takes to upskill a workforce in an AI era that seems to change by the week. We compared boardroom intent with day-to-day practice, and Erin was refreshingly clear about both. Pluralsight began more than twenty years ago in classrooms, moved online as the market shifted, and now supports Fortune 500 teams with expert-led courses, hands-on labs, and the admin tools leaders need to measure progress at scale. The thread running through the whole story is simple: people learn by doing, and companies get value when that learning maps to real work. We talked about AI in her own workflow first. Erin uses it to draft presentations, crunch data, and speed up research, then pushes that mindset across the company through focused sprints where every department experiments and reports back. That culture piece matters. Pluralsight's latest research found that 61 percent of respondents still think using generative AI is “lazy,” which drives employees to adopt tools in the shadows and exposes the business to avoidable risk. Her answer is clear guidance, safe environments to practice, and permission to test without fear of failure. The payoff shows up in real examples. One financial services firm raised prompt engineering efficiency by 20 percent and saved 1,600 hours in three months by pairing assessments with prescriptive learning paths and hands-on practice. We also explored the fear that keeps people quiet. Layoff headlines travel faster than case studies, and that skews the mood inside many teams. Erin makes a straightforward case. Treat AI as an assistant that improves standard and repetitive tasks, protect the business with clear policies, then invest in education for everyone, not only engineers. Close the confidence gap with data. Baseline skills, prescribe learning, measure proficiency, and tie improvements to actual tasks. When leaders show their own work and give teams room to try things, adoption follows. The conversation finished on the future. Technical skills will keep evolving, but the standout advantage will be a willingness to learn and the soft skills that carry ideas from prototype to production. Erin also shared a personal goal that resonated with me. She would love a private breakfast with Serena Williams to talk about Serena Ventures and backing founders from underrepresented groups. It fit the theme of the episode. Talent is everywhere. Opportunity appears when someone opens a door and stays long enough to help you through it. If you want the full story, including how Pluralsight is updating its platform for scale and how leaders can reduce “shadow AI” without slowing innovation, you can find their research and resources at Pluralsight.com. ********* Visit the Sponsor of Tech Talks Network: Land your first job in tech in 6 months as a Software QA Engineering Bootcamp with Careerist https://crst.co/OGCLA
Here's the thing. We have had brilliant ideas in Web3 for years, along with better tooling and plenty of enthusiasm, yet adoption still feels slower than it should be. In my conversation with Maciej Baj, founder of t3rn, we got under the skin of why that is and what it might take to change the pace. His starting point is simple to state and hard to deliver at scale: make cross-chain interactions feel seamless for users and predictable for developers. If you can do that, the door opens to practical products rather than experiments that only the bravest try. Maciej describes t3rn as a universal execution layer for cross-chain smart contracts, and the phrase matters because it changes how we think about interoperability. Instead of stitching together a mess of bridges and oracles, t3rn lets a contract access state and data across multiple chains from one place. Today it is mapped to the EVM for broad compatibility, but the design is chain agnostic by intent. That choice is less about tribal loyalties and more about meeting developers where they already build while keeping the door open to other ecosystems as the market evolves. Trust shows up in the details, and atomic execution is one of those details that changes behavior. If a multi-chain transaction cannot complete in full, it reverts. No half-finished transfers. No manual recovery adventures. This mirrors what smart contracts already offer on a single chain, which means developers can reason about outcomes without inventing fresh playbooks for every hop. It also reassures users, who care less about the plumbing and more about knowing that funds either arrive or return. Cost matters too. t3rn has been engineered for cost-efficient token movement across chains, which sounds mundane until you price a complex strategy that touches multiple venues. Lower friction makes new use cases economical. Maciej outlined a few that caught my eye. Trading algorithms that read and act on signals from multiple chains without duct tape. Simpler asset movement across ecosystems that do not share a wallet culture or UX conventions. Agent-driven executors that can watch for arbitrage or rebalance a portfolio without constant human oversight. The theme is the same throughout. Reduce the number of hoops and you increase the number of people willing to try something new. We also looked ahead. t3rn is preparing an integration with hyperliquid and rolling out a builder program to widen the ecosystem on top of its execution layer. An SDK is on the way so the community can help bring in new chains faster, rather than waiting for a core team to do all the heavy lifting. There is a governance track forming as well, aimed at giving the community more say in integrations and priorities. None of this guarantees success, but it signals a path from protocol to platform. I left the conversation with a clearer view of why interoperability still matters in 2025. The multi-chain world is not going away. Users move between ecosystems. Developers deploy to several environments at once. Liquidity, identity, and logic already live in many places. A universal execution layer that is reliable, cost aware, and easy to build on is the kind of boring-sounding foundation that ends up changing behavior. ********* Visit the Sponsor of Tech Talks Network: Land your first job in tech in 6 months as a Software QA Engineering Bootcamp with Careerist https://crst.co/OGCLA
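For the technically curious, here is a minimal sketch of the “complete in full or revert” behaviour Maciej describes, written as plain Python rather than t3rn's actual SDK or contract interfaces. `ChainClient`, `submit`, and `revert` are hypothetical placeholders; the point is the pattern, where every step lands or every completed step is rolled back.

```python
class ChainClient:
    """Hypothetical stand-in for a per-chain execution client."""
    def __init__(self, name: str, fail: bool = False):
        self.name, self.fail = name, fail
        self.committed = []

    def submit(self, action: str) -> str:
        if self.fail:
            raise RuntimeError(f"{self.name}: execution failed")
        self.committed.append(action)
        return f"{self.name}:{action}"

    def revert(self, receipt: str) -> None:
        # Undo a previously committed step with a compensating action.
        self.committed.pop()
        print(f"reverted {receipt}")

def execute_atomic(steps):
    """Run (client, action) steps in order; if any step fails, roll back the
    ones that already committed so the user never ends up with a
    half-finished multi-chain transfer."""
    receipts = []
    try:
        for client, action in steps:
            receipts.append((client, client.submit(action)))
        return [receipt for _, receipt in receipts]
    except RuntimeError as err:
        for client, receipt in reversed(receipts):
            client.revert(receipt)
        raise RuntimeError(f"atomic execution aborted: {err}") from None

if __name__ == "__main__":
    eth, arb = ChainClient("ethereum"), ChainClient("arbitrum", fail=True)
    try:
        execute_atomic([(eth, "lock 10 TOKEN"), (arb, "mint 10 TOKEN")])
    except RuntimeError as err:
        print(err)            # the lock on ethereum was reverted
    print(eth.committed)      # [] -- nothing half-finished
```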
When we think about what separates winning traders from those who struggle, we usually picture strategies, indicators, or a bit of insider know-how. But what if the biggest edge has been sitting on your desk all along? In this episode, I sit down with Eddie Z, also known as Russ Hazelcorn, the founder of EZ Trading Computers and EZBreakouts. With more than 37 years of experience as a trader, stockbroker, technologist, and educator, Eddie has built his career around one mission: helping traders cut through noise, avoid expensive mistakes, and get the tools they need to stay competitive in a fast-moving market. Eddie breaks down the specs that actually matter when building a trading setup, from RAM to CPUs to data feeds, and exposes which so-called “upgrades” are nothing more than overpriced fluff. We also dig into the rise of AI-powered trading platforms and bots, and what traders can do today to prepare their machines for the next wave. As Eddie points out, a lagging system or a missed feed isn't just an inconvenience—it can be the difference between a profitable trade and a costly loss. Beyond the hardware, we explore the broader picture. Rising tariffs and global supply chain disruptions are already reshaping the way traders access technology, and Eddie shares practical steps to avoid being caught short. He also explains why many experienced traders overlook their machines as a “secret weapon” and how quick, targeted fixes can transform reliability and performance in under an hour. This conversation goes deeper than specs and gadgets. Eddie opens up about the philosophy behind the EZ-Factor, his unique approach that blends decades of Wall Street expertise with cutting-edge technology to simplify trading and help people succeed. We talk about his ventures, including EZ Trading Computers, trusted by over 12,000 traders, and EZBreakouts, which delivers actionable daily and weekly picks backed by years of experience. For traders looking to level up—whether you're just starting out or managing multiple screens in a professional setting—this episode is packed with insights that can help you sharpen your edge. Eddie's perspective is clear: the right machine, the right mindset, and the right knowledge can make trading not only more profitable, but, as he likes to put it, as “EZ” as possible. ********* Visit the Sponsor of Tech Talks Network: Land your first job in tech in 6 months as a Software QA Engineering Bootcamp with Careerist https://crst.co/OGCLA
Most conversations about AI are still caught up in the spectacle. We see demos, marvel at copilots, and argue about the latest big model. But what happens when you strip away the hype and focus on AI that simply works? That is exactly the perspective Olga Lagunova brings to this episode. As Chief Product and Technology Officer at GoTo, she has one goal in mind: make AI useful, practical, and almost invisible. Olga believes the real test of AI is whether it integrates seamlessly into workflows. In her view, the most powerful AI is the kind that feels almost boring because it is just part of how work gets done. During our conversation she explains how GoTo is embedding AI into its platform so that small and midsize businesses can benefit without needing data scientists on staff or large budgets to experiment. We explore the difference between AI for SMBs and AI for enterprises, and why simplicity and trust matter more than shiny features. Our discussion also goes deeper into agentic AI, where tools are no longer just assistants but are taking on tasks in the background. Olga highlights how GoTo balances this shift with guardrails, governance, and human-in-the-loop oversight to ensure that efficiency never comes at the cost of security. We also unpack the classic build versus buy dilemma, why shadow AI is becoming a real risk for companies, and how leaders can measure ROI in a way that proves value both immediately and over time. If you are tired of the hype and want to understand how AI is quietly reshaping the backbone of business operations, this episode with Olga Lagunova will give you a grounded and forward-looking perspective.
I wanted this conversation to do two things at once. First, ground the hype in real practice. Second, show how a small country can punch well above its weight by connecting industry, academia, and government with purpose. With Chantelle Kiernan from IDA Ireland and Stephen Flannagan from Eli Lilly and Company, we explored what digital transformation really looks like on the factory floor in Ireland, why talent is the engine behind it, and how cross-sector collaboration is turning ideas into measurable outcomes. Ireland's manufacturing base employs hundreds of thousands and fuels exports, yet what stands out is the shared mindset. The shift toward Industry 5.0 puts people at the center while using digital, disruptive, and sustainable technologies to rethink production. Eli Lilly's experience shows how a digital-first culture changes everything. New sites start paperless by default. Established plants raise their game through micro-learning, data-driven problem solving, and champions who model the behavior. The message is simple. Technology only sticks when people see clear value and have the skills to act on it.
From pilots to site-wide change
Here's the thing. The strongest wins come from a strategic, site-wide approach rather than isolated pilots. Maturity assessments across pharma sites in Ireland revealed common patterns, shared bottlenecks, and repeatable opportunities. That insight helps teams justify investment, sharpen ROI arguments, and accelerate adoption without slowing production. Reinvestment in legacy facilities becomes a long-term advantage when you connect equipment, data, and people with a clear plan. This is where Ireland's ecosystem shows its class. Purpose-built centers like Digital Manufacturing Ireland, NIBRT, IMR, and I-FORM give teams a place to test before they invest. Indigenous tech SMEs sit at the same table as global pharma leaders and large tech firms, which means collaboration moves faster. When 50 percent or more of new R&D projects cite academic partnerships, you know something healthy is happening.
Skills, STEM, and the mindset shift
Upskilling came through as the decisive enabler. IDA Ireland supports companies with skills needs analysis and access to training. Universities co-create relevant courses. Micro-credentials and immersive apprenticeships build confidence on the shop floor. Stephen's point about micro-learning hit home. People learn best when they can apply knowledge to a problem they care about, right now. That keeps momentum high and spreads digital competence across teams without waiting on giant projects. Barriers still exist. Defining ROI, coping with regulatory complexity, and balancing change with daily production are real challenges. Culture is the swing factor. Leaders who set the tone, create space for experiments, and reward progress see faster results. GenAI is already shifting attitudes by improving personal productivity, which naturally opens minds to operational use cases like predictive maintenance, knowledge capture, and quality improvements.
What comes next
If the last decade was about connecting machines, the next decade will be about connecting knowledge. Expect smarter, greener, and more multidisciplinary manufacturing. AI will sit alongside advanced materials and sustainable design. The most resilient sites will combine agile infrastructure with strong learning cultures, so they can absorb change rather than resist it. Ireland's model of collaboration gives a useful signal.
When industry, government, and academia align around shared outcomes, the runway gets longer and the takeoff gets smoother. This episode is about the practical choices that make transformation real. Strategic assessments. Shared R&D spaces. Cohorts of digital champions. And a relentless commitment to skills. It is a story of steady progress that scales, and a reminder that the future belongs to teams who can learn faster together. ********* Visit the Sponsor of Tech Talks Network: Land your first job in tech in 6 months as a Software QA Engineering Bootcamp with Careerist https://crst.co/OGCLA
What does it really mean to future-proof financial data? That's the question at the heart of my conversation with George Rosenberger, General Manager of NYFIX at Broadridge. George has spent his career moving through every corner of the capital markets, from trading desks to broker-dealers, and now into the software side where he oversees order routing, post-trade matching, and the adoption of new AI tools. His perspective is uniquely positioned between the history of financial markets and their rapidly accelerating future. This discussion takes inspiration from Broadridge's fifth annual Digital Transformation and Next Gen Technology study, which collected insights from more than five hundred technology and operations leaders across financial services. The survey highlights both the progress and the pressure points facing the industry. Forty-one percent of leaders still cite data security as a major hurdle, and while cloud, AI, and cybersecurity dominate the technology stack, a third of firms still lack security built into their core systems. George explains why this gap persists, how legacy platforms complicate modernization, and what steps firms can take to extract value from old infrastructure while preparing for what's next. We also explore the irony that many organizations overestimate their digital maturity. Generative AI adoption has surged from forty to seventy-two percent in a year, but governance, compliance, and data quality concerns remain. George stresses the importance of measuring outcomes, not just intentions, and shares how Broadridge is approaching AI responsibly through initiatives like its Algo Copilot, which helps traders make sharper decisions. If you're curious about how financial services can strengthen cybersecurity, reduce technical debt, and rethink data strategy as a true engine of innovation, this episode offers both a candid reality check and a roadmap. The speed of change is staggering, but with the right strategy, leaders can build resilience and stay ahead in a digital-first world.
What does it take to deliver personalized financial guidance to more than 140 million people every single day? That is the question I put to Wan Agus, Head of Engineering at Intuit Credit Karma, in this episode of Tech Talks Daily. Most of us open the Credit Karma app to check our credit score, look at a loan option, or browse for a better credit card. What we rarely consider is the technology running behind the curtain. Wan revealed that his teams are powering more than 60 billion daily AI predictions to understand members' needs, protect their privacy, and guide them toward the right financial choices. He explained why accuracy is everything in fintech. A misplaced recommendation can mean more than a poor customer experience; it can damage someone's credit score and hold back their progress. Our conversation also looked at what happened after Intuit acquired Credit Karma. Two very different tech stacks had to be brought together, and identity systems had to be unified so members could move seamlessly between Credit Karma and products like TurboTax. Wan compared the process to playing two complex board games at once, where success depends on strategy and collaboration. We also explored how Credit Karma is blending traditional AI with generative AI. From early chatbot experiments to today's Wallet Analyzer and Tax Advisor, Wan shared how his teams decide when to push forward with new tools and when to slow down to ensure safety and trust. He also gave us a glimpse into the future, where agent-to-agent technology could bring open banking-style transparency to the U.S. So how do you scale personalization without losing trust? And what can every business leader learn from Credit Karma's balance between speed, culture, and responsibility? I would love to hear your thoughts after listening.
AI is quickly moving from boardroom buzzword to boardroom headache. Enterprises are waking up to the fact that bringing large language models in-house is not just about performance or cost, but about control, accountability, and trust. In this episode of Tech Talks Daily, I sit down with Octavian Tanase, Chief Product Officer at Hitachi Vantara, to unpack what this shift really means for business and technology leaders. Octavian explains why governance has become the defining challenge of the AI era. Companies are under pressure not only to innovate but also to meet new regulatory demands and maintain trust with customers. That requires more than patching together tools or hoping for transparency from public AI providers. It means creating governance frameworks that deliver traceability, auditability, and explainability as standard practice, not as afterthoughts. We explore why vector databases may need something like a time-machine capability to document when and how information is added, giving enterprises a provable audit trail. This level of accountability supports both internal oversight and external compliance, turning abstract AI ethics debates into real operational requirements. Our conversation also turns to the role of infrastructure. Hitachi Vantara's VSP One, with its tagline “One Data Platform, No Limits,” has been built to simplify data complexity across block, file, and object storage while providing a unified foundation for AI workloads. Octavian shares how this unified approach helps enterprises run compliant, explainable, and efficient AI across hybrid environments that span both on-premises and the cloud. This isn't just a story about technology, but about the future of trust in digital business. If AI remains a black box, its value will always be limited. If it becomes explainable, traceable, and accountable, it can transform not only efficiency but also relationships with customers, regulators, and partners. So, how can leaders strike the right balance between governance and innovation without slowing down progress? Octavian leaves listeners with a forward-looking perspective on what the next few years of enterprise AI will demand, and why those who build on strong governance today may end up with the most resilient advantage tomorrow. ********* Visit the Sponsor of Tech Talks Network: Land your first job in tech in 6 months as a Software QA Engineering Bootcamp with Careerist https://crst.co/OGCLA
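To make the “time-machine” idea less abstract, here is a minimal sketch of a hash-chained audit log that records when and how entries reach a vector store. It is not Hitachi Vantara's implementation; the class and field names are hypothetical, and a real system would also capture embeddings, model versions, and access controls.

```python
import hashlib, json, time

class AuditedVectorStore:
    """Toy vector store wrapper that keeps an append-only, hash-chained log of
    every insertion, so you can later prove when a document was added and that
    the history has not been rewritten."""
    def __init__(self):
        self.vectors = {}   # doc_id -> embedding
        self.log = []       # hash-chained audit entries

    def add(self, doc_id: str, embedding: list, source: str) -> dict:
        prev_hash = self.log[-1]["entry_hash"] if self.log else "genesis"
        entry = {
            "doc_id": doc_id,
            "source": source,
            "added_at": time.time(),
            "prev_hash": prev_hash,
        }
        entry["entry_hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self.vectors[doc_id] = embedding
        self.log.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute the chain to confirm no audit entry was altered."""
        prev = "genesis"
        for entry in self.log:
            body = {k: v for k, v in entry.items() if k != "entry_hash"}
            if body["prev_hash"] != prev:
                return False
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if recomputed != entry["entry_hash"]:
                return False
            prev = entry["entry_hash"]
        return True

if __name__ == "__main__":
    store = AuditedVectorStore()
    store.add("policy-001", [0.12, 0.48, 0.05], source="hr/handbook.pdf")
    store.add("policy-002", [0.33, 0.10, 0.71], source="legal/gdpr-notes.md")
    print("audit trail intact:", store.verify())   # True
```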
What if the way we store data is shaping the planet's future? That thought has been on my mind ever since attending the IT Press Tour in Amsterdam, where I first connected with today's guest. With global data creation forecast to hit 510 zettabytes by 2030, and data centers already consuming staggering amounts of power, the conversation is no longer about whether change is needed but about how we approach it. Joining me on the podcast is Nicholas Stavrinou, co-founder of CompressionX, a company rethinking lossless compression. Nicholas shares how a mathematical paradox in a university notebook grew into a technology that promises faster, cheaper, and more sustainable data storage. His story takes us from leather-bound journals and napkin sketches to a working product that is already helping users cut their digital footprints by more than 90 percent. In our discussion, Nicholas explains why compression deserves a seat at the sustainability table, especially as AI and enterprise workloads generate unprecedented volumes of cold data that simply sit idle in storage. We talk about the real costs of data growth, from spiraling cloud bills to the hidden environmental toll of cooling data centers, and we explore whether smarter compression could give businesses an edge while also reducing emissions. Nicholas and his team are also taking this message beyond theory. After the IT Press Tour, they are heading to Big Data LDN at Olympia London, where Compression X will be presenting in the Data for Good theatre at 2:40pm on Wednesday, September 24, and welcoming visitors at stand G58. It's a reminder that sustainable infrastructure isn't just about grand new facilities or green energy projects; sometimes it starts with rethinking something as humble as a file format. As you listen, ask yourself: could compression be one of the simplest yet most overlooked ways to make digital life more efficient, affordable, and sustainable? ********* Visit the Sponsor of Tech Talks Network: Land your first job in tech in 6 months as a Software QA Engineering Bootcamp with Careerist https://crst.co/OGCLA
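CompressionX's own algorithm is proprietary, but the arithmetic behind any footprint-reduction claim is easy to explore. The sketch below uses Python's built-in zlib as a generic lossless baseline to show how compression ratios are measured and verified; the sample data is made up, and real savings depend entirely on how compressible the data is.

```python
import os
import zlib

def compression_ratio(data: bytes, level: int = 9) -> float:
    """Return the fraction of the original size removed by lossless compression.
    Round-trips the data to confirm nothing is lost."""
    compressed = zlib.compress(data, level)
    assert zlib.decompress(compressed) == data, "lossless round trip must be exact"
    return 1 - len(compressed) / len(data)

if __name__ == "__main__":
    # Repetitive "cold" data such as logs compresses extremely well;
    # random bytes (a stand-in for already-compressed media) barely shrink.
    logs = b"2030-01-01 INFO sensor=42 status=OK temp=3.1C\n" * 20_000
    noise = os.urandom(len(logs))
    print(f"log-like data: {compression_ratio(logs):.1%} smaller")
    print(f"random data:   {compression_ratio(noise):.1%} smaller")
```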
I recorded this episode at Barracuda TechSummit25 in Alpbach, Austria, a mountain village that looks like a postcard and hosts some of the most grounded security conversations you will hear all year. My guest is Richard Flanders, Commercial Director at Aura Technology, a managed service provider on the south coast of England that supports public sector organisations and tightly regulated commercial clients. Richard arrived as part of Barracuda's Partner Advisory Board, which means he spends as much time feeding customer reality back into product teams as he does comparing notes with peers in the hallway. We talk through his first TechSummit experience and why the event's focus on hands-on engineering matters for MSPs who live in the weeds of configuration, policy, and response. Richard shares early thoughts on Barracuda's secure edge service and the continued maturation of XDR, but the heart of our chat is the pressure he sees on customers. Compliance is no longer a side quest. ISO 27001, Cyber Essentials Plus, supply chain reporting, and new European rules are shaping budgets and expectations. Boards want proof. Auditors want evidence. Buyers want to know a supplier chose fit-for-purpose tools. That makes documentation, contracts, and the ability to show your working as important as the tech itself. We also get into the human side. In a world that loves point solutions, many teams are tired of alert noise and tool sprawl. Richard explains why a single, coherent view helps his engineers move faster and train better, and why MSPs are leaning into prevention-focused workflows rather than waiting for the next fire. He is candid about the conversations no one enjoys, like end-of-life systems that keep a legacy app alive, and the need for tougher stances when risk sits outside an acceptable boundary. AI comes up too, without the hype. Aura is hiring a Head of AI and Automation, standing up a private AI platform, and committing to ship a handful of small, useful apps for customers in the year ahead. The lens is productivity and safety, with an emphasis on teaching teams how to question outputs and rethink everyday tasks. Add in security awareness training, phishing simulations, and tabletop exercises, and you start to see a culture shift from annual tick-boxes to regular, lived practice. There is a lovely moment of serendipity in here as well. Richard's first conversation on day one was with another partner from Pune, the same city where Aura runs its network operations. They swapped ideas on automation and integration that might never have surfaced on a video call. That is the value of getting people in a room together, especially when the room happens to be carved into the side of a mountain. If you work with an MSP, this episode will help you ask better questions. If you are an MSP, you will recognise the balance Richard describes. Pick the right controls for the risks you actually face. Prove what you do. Keep training. And give your teams a single place to see what matters, so the next incident stays small. ********* Visit the Sponsor of Tech Talks Network: Land your first job in tech in 6 months as a Software QA Engineering Bootcamp with Careerist https://crst.co/OGCLA
I recorded this conversation at Barracuda TechSummit25 in Alpbach, Austria, where the mountains feel close enough to touch and the discussions get very real very quickly. My guests are Adam Khan, VP of Global Security Operations at Barracuda XDR, and Eric Russo, Director of SOC Defensive Security. Together they run the teams that watch, interpret, and act when attacks move across email, identity, network, cloud, and endpoints. Their keynote used the language of sport to make sense of modern defense, and it worked. You will hear why football tactics map cleanly to security, how roles and formations translate to controls and playbooks, and why a strong back line matters when the opposition moves the ball quickly. Here is the thing that stood out for me. Integrated defense is not a slogan. When Adam and Eric talk about Extended Detection and Response, they are describing a practical way to join signals, add context, and trigger action without waiting for a human to click through ten consoles. XDR gives analysts one source of truth, connects events that would otherwise sit in separate tools, and shortens the time between a suspicious signal and an action that contains it. That is how you turn alert fatigue into something manageable, and it is how small teams hold their own against fast, multi-step attacks. The analogies make it easier to picture. In football, a defense tracks runners, closes passing lanes, and communicates constantly. In security, that means correlating identity with network flows and endpoint behavior, then deciding who picks up the threat and how to press. The Home Alone reference takes it further. Imagine Kevin's improvised defenses as point tools scattered around a house. Now add a single screen that shows every door, every window, and which trap fires next. That is the plain-English version of XDR that anyone can understand. We also unpack real incidents that their teams have faced, without naming names. You will hear how attackers chain steps across layers, and how automated responses isolate systems, lock accounts, and cut off command and control before damage spreads. The lesson is simple. Visibility gives you options. Automation buys you time. People make the right calls when they can see the whole pitch. If you work in security, this episode gives you a clear view of what good looks like. If you are a business leader, it offers a way to measure progress that goes beyond tool counts and budget lines. And if you enjoy a metaphor that lands, football and Home Alone might be the clearest explanation of XDR you will hear all year.
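For readers who like to see the mechanics behind the metaphor, here is a minimal sketch of the kind of cross-signal correlation the episode describes: joining identity, endpoint, and network events for the same user and host inside a short time window, then escalating when the chain spans multiple sources. This is a generic Python illustration, not Barracuda's XDR logic, and every event name, field, and threshold in it is invented.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical events from separate tools, already normalised to a common shape.
events = [
    {"source": "identity", "user": "j.doe", "host": "laptop-17", "signal": "impossible_travel", "time": datetime(2025, 9, 1, 9, 2)},
    {"source": "endpoint", "user": "j.doe", "host": "laptop-17", "signal": "new_admin_tool",     "time": datetime(2025, 9, 1, 9, 10)},
    {"source": "network",  "user": "j.doe", "host": "laptop-17", "signal": "c2_beacon",          "time": datetime(2025, 9, 1, 9, 14)},
]

WINDOW = timedelta(minutes=30)

def correlate(events):
    """Group events by (user, host) and flag chains that span several sources in a short window."""
    chains = defaultdict(list)
    for e in sorted(events, key=lambda e: e["time"]):
        chains[(e["user"], e["host"])].append(e)
    incidents = []
    for entity, chain in chains.items():
        sources = {e["source"] for e in chain}
        duration = chain[-1]["time"] - chain[0]["time"]
        if len(sources) >= 3 and duration <= WINDOW:
            incidents.append({"entity": entity, "signals": [e["signal"] for e in chain]})
    return incidents

for incident in correlate(events):
    # A real platform would contain automatically here: isolate the host, lock the account.
    print("Contain:", incident)
```

In production the final step would trigger containment rather than a print statement, which is the "automation buys you time" point Adam and Eric make in the episode.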
I recorded this conversation in Alpbach, Austria, a village that looks like a postcard and hosts a very serious tech gathering. TechSummit25 is Barracuda's deeply technical event, and it shows. The rooms are packed with solution architects, product managers, and engineers comparing notes with customers who run these systems every day. It is the kind of environment where product direction and real-world pain points meet over a coffee, then head straight into a lab to test an idea. My guest today is Neal Bradbury, Chief Product Officer at Barracuda, who leads engineering, product management, and the operations teams that keep services running around the clock. Fresh from a session titled “Secured today, secured tomorrow,” Neal breaks down what that promise means in practice. We explore why Barracuda is doubling down on a platform approach with Barracuda One, how a single dashboard helps teams see posture and value in one place, and why consolidation matters when alerts and tools pile up faster than teams can respond. We also talk about the balance between immediate protection and longer-term planning. Neal explains how quarterly releases and shared services underpin the roadmap, how zero trust network access moves from theory to deployment as VPNs fade, and how managed vulnerability services help organizations find risks they did not know they had. He shares why service providers are shifting toward vCIO and vCISO models, how value reporting answers the board's simplest question about where the budget goes, and why response time is the measure that keeps coming up in every conversation.
Secured today, secured tomorrow
The headline theme is simple enough. Know where you stand right now, then set a clear plan for the next year. Barracuda One aims to cut noise and show whether tools are configured properly. The same view rolls up alerts across email, network, and application security, and for MSPs it stretches across all customers. That single source of truth is designed to reduce swivel-chair work and make decisions faster. We dig into the reality of tool sprawl and alert fatigue. A recent study Barracuda commissioned points to teams carrying too many point solutions, with slower responses and misconfigurations as the cost. Neal's answer is convergence without ignoring specialist depth. Product groups keep shipping, while shared AI and threat protection services raise the floor across the portfolio. That approach feeds directly into XDR, where integrations with tenants, firewalls, and endpoints help shrink the gap between detection and action. AI sits in the background of all of this. Neal describes it as a reckless intern that needs guardrails. In practice that means setup wizards that cut deployment time, incident response that can pull a bad message from twenty tenants in one sweep, and ML-driven triggers that fire automated remediation when patterns line up. The aim is clear. Let machines handle the routine work at machine speed, so people can focus on decisions and the weird edge cases attackers love to try.
What listeners will take away
If you run security day to day, you will hear practical direction rather than slogans. Consolidated dashboards exist to show posture, not just counts. Value reporting exists to explain outcomes to a board, not to pad a slide deck. Managed services rise in importance because many organizations need strategy as much as tools, and that includes smaller enterprises that outsource large parts of their stack.
For leaders planning the next quarter, the emphasis on zero trust and managed vulnerability services will stand out. For operators, the XDR and SOAR focus is about shaving minutes down to seconds, connecting identity with network and endpoint events, and giving analysts room to breathe. And for anyone curious about how product roadmaps form, conferences like this one offer a candid loop between feedback and action that you rarely see in a press release. By the time we wrap, Alpbach's quiet streets feel like an unlikely place to discuss ransomware, posture, and platform design. Yet that contrast makes the conversation land even harder. Secure today, plan for tomorrow, and give your team the visibility to do both.
In this episode of Tech Talks Daily, I'm joined by Todd Grabowski from Johnson Controls to unpack the physics, products, and design choices shaping the next generation of data center cooling. It's a practical conversation that moves from chips and compressors to water, power, and land constraints, and what it really takes to keep modern infrastructure reliable at scale. Todd brings three decades of experience to the table and a front-row view of how Johnson Controls and the York brand have kept their focus on energy efficiency, reliability, and sustainability for more than a century. That longevity matters when the market is moving fast. He explains why cooling now sits alongside power as the defining constraint for data centers, and why roughly forty percent of a facility's energy can be spent on cooling rather than computation. If you lead technology, finance, or facilities, that single number should focus the mind. Todd walks through Johnson Controls' YVAM platform and the York magnetic bearing centrifugal compressor at its core, with real numbers on what that means in practice. Consuming around forty percent less energy than typical cooling devices of the past five years and operating in ambient conditions up to fifty-five degrees Celsius, it is designed for the reality of hotter climates and denser loads. The naval pedigree of the driveline is a nice twist, since it was originally built for quiet and high-reliability conditions long before hyperscale data centers needed the same. Sustainability threads through the entire discussion. Todd lays out how the company holds itself to internal targets while engineering solutions that reduce customer resource use. We talk about closed-loop designs that do not consume water, careful refrigerant choices with ultra-low global warming potential, and product footprints that consider carbon impact from the start. It is a useful reminder that sustainability is a systems problem, not a single feature on a spec sheet. I was especially interested in the three resources Todd says every modern cooling strategy must balance. Land, because you need somewhere to reject heat. Power, because every watt pulled into cooling is a watt not used for compute. Water, because many regions are already under stress and consumption cannot be the answer. Good design weighs these factors against the climate, the workload profile, and the operational model, then standardizes wherever possible so the same unit can run efficiently in Scandinavia or Dubai without special tweaks. We also dig into what AI means internally for Johnson Controls. It is showing up in manufacturing lines, speeding up design cycles, and improving the fidelity of compressor and heat transfer models. That translates into quicker time to market and more confidence in performance envelopes. On the market side, Todd is clear that demand has not softened. If anything, efficiencies tend to unlock more use cases, and the net effect is more workloads and continued pressure on facilities to cool them well. If your team is wrestling with when to adopt liquid cooling, how to reduce PUE through smarter chiller choices, or how to plan for climate variability across a global footprint, this episode offers an honest, grounded view from someone who has shipped the hardware and lived with its trade-offs. It also doubles as a quiet celebration of engineering craft. The kind that rarely makes headlines, yet underpins everything we build in the AI age. 
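To make that cooling figure concrete, here is a back-of-the-envelope sketch using the standard Power Usage Effectiveness formula, where PUE is total facility power divided by IT load. The numbers are illustrative assumptions only and are not drawn from Johnson Controls, the YVAM platform, or any specific facility.

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness: total facility power divided by IT equipment power."""
    return total_facility_kw / it_load_kw

# Illustrative numbers only: a 10 MW IT load with cooling and overhead on top.
it_load = 10_000.0         # kW of compute, storage, and networking
cooling = 6_000.0          # kW spent on chillers, pumps, and fans
other_overhead = 700.0     # kW of lighting, power distribution losses, and so on

total = it_load + cooling + other_overhead
print(f"PUE = {pue(total, it_load):.2f}")                  # ~1.67
print(f"Cooling share of total = {cooling / total:.0%}")   # ~36%

# Cutting cooling energy by 40 percent, the order of improvement discussed in the episode,
# pulls the ratio down and frees power that can go back to compute.
better_total = it_load + cooling * 0.6 + other_overhead
print(f"PUE with 40% less cooling energy = {pue(better_total, it_load):.2f}")  # ~1.43
```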
********* Visit the Sponsor of Tech Talks Network: Land your first job in tech in 6 months as a Software QA Engineering Bootcamp with Careerist https://crst.co/OGCLA
I've spent years talking about endpoint security, yet printers rarely enter the conversation. Today, that blind spot takes center stage. I'm joined by Jim LaRoe, CEO of Symphion, to unpack why printers now represent one of the most exposed corners of the enterprise and what can be done about it. Jim's team protects fleets that range from a few hundred devices to tens of thousands, and the picture he paints is stark. In many organizations, printers make up 20 to 30 percent of endpoints, and almost all of them are left in a factory default state. That means open ports, default passwords, and little to no monitoring. Pair that with the sensitive data printers receive, process, and store, plus the privileged connections they hold to email and file servers, and you start to see why attackers love them. We trace Symphion's path from its configuration management roots in 1999 to a pivot in 2015 when a major printer manufacturer invited the company behind the curtain. What they found was a parallel universe to mainstream IT. Brand silos, disparate operating systems, and a culture that treated printers as cost items rather than connected computers. Add in the human factor, where technicians reset devices to factory defaults after service as second nature, and you have a recipe for recurring vulnerabilities that never make it into a SOC dashboard. Jim explains how Symphion's Print Fleet Cybersecurity as a Service tackles this mess with cross-brand software, professional operations, and proven processes delivered for a simple per-device price. The model is designed to remove operational burden from IT teams. Automated daily monitoring detects drift, same-day remediation resets hardened controls, and comprehensive reporting supports regulatory needs in sectors like healthcare where compliance is non-negotiable. The goal is steady cyber hygiene for printers that mirrors what enterprises already expect for servers and PCs, without cobbling together multiple vendor tools, licenses, and extra headcount to operate them. We also talk about the hidden costs of DIY printer security. Licensing multiple management platforms for different brands, training staff who already have full plates, and outages caused by misconfigurations all add up. Jim shares real-world perspectives from organizations that tried to patch together a solution before calling in help. The pattern is familiar. Costs creep. Vulnerabilities reappear. Incidents push the topic onto the CISO's agenda. Symphion's pitch is straightforward. Treat print fleets like any other class of critical infrastructure in the enterprise, and measure outcomes in risk reduction, time saved, and fewer surprises. If you are commuting while listening and now hearing alarm bells, you are not alone. Think about the printers scattered across your offices and clinics. Consider the data that passes through them every day. Then picture an attacker who finds default credentials in minutes and uses a printer to move across your network. Tune in for a fast, practical look at a risk hiding in plain sight, and learn how Symphion's Print Fleet Cybersecurity as a Service can help you close a gap that attackers know too well. ********* Visit the Sponsor of Tech Talks Network: Land your first job in tech in 6 months as a Software QA Engineering Bootcamp with Careerist https://crst.co/OGCLA
Marketing teams used to have a simple enough job: follow the click, count the conversions, and shift the budget accordingly. But that world is gone. GDPR, iOS restrictions, and browser-level changes have left most attribution models broken or unreliable. So what now? In this episode, I sat down with Fredrik Skansen, CEO of Funnel, to unpack how marketing intelligence actually works in a world where data is partial, journeys are fragmented, and the old models don't hold. Since founding Funnel in 2014, Fredrik has grown the company into a platform that supports over 2,600 brands and handles reporting on more than 80 billion dollars in annual digital spend. That scale gives him a front-row seat to the questions every CMO and CFO is asking right now. Fredrik explains why last-click attribution didn't just become inaccurate. It became misleading. With tracking capabilities stripped down and user signals disappearing, the industry has had to move toward modeled attribution and real-time optimisation. That only works if your data is clean, aligned, and ready for analysis. Funnel's platform helps structure campaigns upfront, pull data into a unified model, apply intelligence, push learnings back into the platforms, and produce reporting that makes sense to the wider business. This isn't about dashboards. It's about decisions. We also talk about budget mix. Performance channels may feel safe, but Fredrik points out they are also getting more expensive. When teams bring brand and mid-funnel activity back into the measurement framework, the picture often changes. He shares how Swedish retailer Gina Tricot grew from 100 million to 300 million dollars in three years, in part by shifting spend to brand and driving demand earlier in the customer journey. That move only felt safe because the data supported it. AI adds another layer. With tools like Perplexity reshaping search behavior and the web shifting from links to answers, click-throughs are drying up. But it's not the end of visibility. Content still matters. So does structure. The difference is that now your reader might be an AI model, not a human. That requires a rethink in how brands approach discoverability, authority, and engagement. What makes Funnel interesting is that it doesn't stop at analytics. The platform feeds insight back into action, reducing waste and creating tighter loops between teams. It also works for agencies, which is why groups like Havas use it across 40 offices through a global agreement. If you're tired of attribution theatre and want to understand what marketing measurement looks like when it's built for reality, this episode gives you a clear, usable view. Listen in, then tell me which decision you're still guessing on. Because marketing can be measured. Just not the way it used to be. ********* Visit the Sponsor of Tech Talks Network: Land your first job in tech in 6 months as a Software QA Engineering Bootcamp with Careerist https://crst.co/OGCLA
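As a rough illustration of why last-click credit and modeled attribution tell different stories, here is a toy comparison in Python. The journey, the channel names, and the 40/20/40 position-based weights are all invented for the example and are not Funnel's methodology.

```python
from collections import defaultdict

# A made-up customer journey: the ordered touchpoints that preceded one conversion.
journey = ["paid_social_video", "influencer_post", "email", "branded_search_click"]

def last_click(journey):
    """All of the credit goes to the final touchpoint."""
    return {journey[-1]: 1.0}

def position_based(journey, first=0.4, last=0.4):
    """A simple modeled alternative: 40/20/40 split across first, middle, and last touches."""
    credit = defaultdict(float)
    credit[journey[0]] += first
    credit[journey[-1]] += last
    middle = journey[1:-1]
    if middle:
        for touch in middle:
            credit[touch] += (1.0 - first - last) / len(middle)
    return dict(credit)

print(last_click(journey))       # branded search takes 100 percent of the credit
print(position_based(journey))   # brand and mid-funnel touches reappear in the picture
```

Even this crude model shows the shift Fredrik describes: once upper-funnel activity gets any credit at all, the budget conversation changes.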
I invited Egil Østhus to unpack a simple idea that tends to get lost in release day pressure. DevOps gets code to production quickly, but users experience features, not pipelines. Egil is the founder of Unleash, an open source feature management platform with close to 30 million downloads, and he argues that the next step is FeatureOps. It is a mindset and a set of practices that separate deployment from release, so teams can place code in production, light it up for a small cohort, learn, and only then scale out with confidence. Here is the thing. Controlled rollouts, clear telemetry, and fast rollback reduce risk without slowing teams down. Egil explains how FeatureOps connects engineering effort to business outcomes through gradual exposure, full stack experimentation, and what he calls surgical rollback. Instead of ripping out an entire release when one part misbehaves, teams can disable the offending capability and keep the rest of the value in place. It sounds straightforward because it is, and that is the point. Less drama, more learning, better results. We also talk about culture. When releases repeatedly disappoint, trust between product and engineering frays. Egil shares examples where Unleash helped a hardware and software company move from blame to shared ownership by making rollout plans visible and collaborative. Another client, an ERP vendor, discovered that early feedback from a small group of users allowed them to ship a leaner version that met the need without months of extra scope. That is how FeatureOps saves money and tempers expectations while still delighting customers. AI enters the story too. Code is shipping faster, but reliability can wobble when autogenerated changes move through pipelines. Egil sees feature management as a practical control plane for this new reality. Feature flags provide a real time safety net and, if needed, a kill switch for AI powered functionality. Teams can keep experimenting while protecting users and brand equity. If you want to move beyond release day roulette, this episode offers a practical playbook. We cover privacy first design, open source flexibility, and why metadata from FeatureOps will help leaders study how their organizations truly build. To learn more, visit getunleash.io or search for Unleash in your favorite tool, then tell me how you plan to measure your next rollout's impact. ********* Visit the Sponsor of Tech Talks Network: Land your first job in tech in 6 months as a Software QA Engineering Bootcamp with Careerist https://crst.co/OGCLA
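For readers who want to picture the deployment-versus-release split, here is a minimal feature-flag sketch. It uses a plain in-memory dictionary rather than the actual Unleash SDK, and the flag names, rollout percentages, and bucketing scheme are illustrative assumptions.

```python
import hashlib

# A hypothetical in-memory flag store; in practice this state would come from a
# feature management service such as Unleash rather than a local dict.
FLAGS = {
    "new_checkout": {"enabled": True, "rollout_percent": 10},  # deployed, released to 10% of users
    "ai_summary":   {"enabled": False, "rollout_percent": 0},  # deployed but dark: a kill switch
}

def is_enabled(flag_name: str, user_id: str) -> bool:
    """Deterministically bucket a user so the same person always gets the same experience."""
    flag = FLAGS.get(flag_name)
    if not flag or not flag["enabled"]:
        return False
    bucket = int(hashlib.sha256(f"{flag_name}:{user_id}".encode()).hexdigest(), 16) % 100
    return bucket < flag["rollout_percent"]

def checkout(user_id: str) -> str:
    if is_enabled("new_checkout", user_id):
        return "new checkout flow"     # the released cohort
    return "existing checkout flow"    # everyone else, running the same deployed build

# Surgical rollback is a configuration change, not a redeploy:
FLAGS["new_checkout"]["enabled"] = False
```

The last line is the point Egil makes in the episode: turning one misbehaving capability off does not require ripping out the rest of the release.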
I invited Atalia Horenshtien to unpack a topic many leaders are wrestling with right now. Everyone is talking about AI agents, yet most teams are still living with rule-based bots, brittle scripts, and a fair bit of anxiety about handing decisions to software. Atalia has lived through the full arc, from early machine learning and automated pipelines to today's agent frameworks inside large enterprises. She is an AI and data strategist, a former data scientist and software engineer, and has just joined Hakoda, an IBM company, to help global brands move from experiments to outcomes. The timing matters. She starts on the 18th, and this conversation captures how she thinks about responsible progress at exactly the moment she steps into that new role. Here's the thing. Words like autonomy sound glamorous until an agent faces a messy real-world task. Atalia draws a clear line between scripted bots and agents with goals, memory, and the ability to learn from feedback. Her advice is refreshingly grounded. Start internal where you can observe behavior. Put human-in-the-loop review where it counts. Use role-based access rather than feeding an LLM everything you own. Build an observability layer so you can see what the model did, why it did it, and what it cost. We also get into measurements that matter. Time saved, cycle time reduction, adoption, before and after comparisons, and a sober look at LLM costs against any reduction in FTE hours. She shares how custom cost tracking for agents prevents surprises, and why version one should ship even if it is imperfect. Culture shows up as a recurring theme. Leaders need to talk openly about reskilling, coach managers through change, and invite teams to be co-creators. Her story about Hakoda's internal AI Lab is a good example. What began as an engineer's idea for ETL schema matching grew into agent-powered tools that won a CIO 100 award and now help deliver faster, better outcomes for clients. There are lighter moments too. Atalia explains how she taught an ex-NFL player the basics of time series forecasting using football tactics. Then she takes us behind the scenes with McLaren Racing, where data and strategy collide on the F1 circuit, and admits she has become a committed fan because of that work. If you want a practical playbook for moving from shiny demos to dependable agents, this episode will help you think clearly about scope, safeguards, and speed. Connect with Atalia on LinkedIn, explore Hakoda's work at hakoda.io, and then tell me how you plan to measure your first agent's value. ********* Visit the Sponsor of Tech Talks Network: Land your first job in tech in 6 months as a Software QA Engineering Bootcamp with Careerist https://crst.co/OGCLA
Here's the thing. Connecting thousands of devices is the easy part. Keeping them resilient and secure as you grow is where the real work lives. In this episode, I sit down with Iain Davidson, Senior Product Manager at Wireless Logic, to unpack what happens when connectivity, security, and operations meet in the real world. Wireless Logic connects a new IoT device every 18 seconds, with more than 18 million active subscriptions across 165 countries and partnerships with over 750 mobile networks. That reach brings hard lessons about where projects stall, where breaches begin, and how to build systems that can take a hit without taking your business offline. Iain lays out a simple idea that more teams need to hear. Resilience and security have to scale at the same pace as your device rollouts. He explains why fallback connectivity, private networking, and an IoT-optimised mobile core such as Conexa set the ground rules, but the real differentiator is visibility. If you cannot see what your fleet is doing in near real time, you are guessing. We talk through Wireless Logic's agentless anomaly and threat detection that runs in the mobile core, creating behavioural baselines and flagging malware events, backdoors, and suspicious traffic before small issues become outages. It is an early warning layer for fleets that often live beyond the traditional IT perimeter. We also get honest about risk. Iain shares why one in three breaches now involve an IoT device and why detection can still take months. Ransomware demands grab headlines, but the quiet damage shows up in recovery costs, truck rolls, and trust lost with customers. Then there is compliance. With new rules tightening in Europe and beyond, scaling without protection does not only invite attackers. It can keep you out of the market. Iain's message is clear. Bake security in from day one through defend, detect, react practices, supply chain checks, secure boot and firmware integrity, OTA updates, and the discipline to rehearse incident playbooks so people know what to do when alarms sound. What if you already shipped devices without all of that in place? We cover that too. From migrating SIMs into secure private networks to quarantining suspect endpoints and turning on core-level detection without adding agents, there are practical ways to raise your posture without ripping and replacing hardware. Automation helps, especially at global scale, but people still make the judgment calls. Train your teams, run simulations, and give both humans and digital systems clear rules for when to block, when to escalate, and when to restore from backup. I left this conversation with a simple takeaway. Growth is only real if it is durable. If you are rolling out EV chargers, medical devices, cameras, industrial sensors, or anything that talks to the network, this episode gives you a working playbook for scaling with confidence. Connect with Iain on LinkedIn, explore the IoT security resources at WirelessLogic.com, or reach the team at hello@wirelesslogic.com. ********* Visit the Sponsor of Tech Talks Network: Land your first job in tech in 6 months as a Software QA Engineering Bootcamp with Careerist https://crst.co/OGCLA
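To show the shape of the behavioural-baseline idea Iain describes, here is a toy anomaly check in Python. The traffic figures, the z-score test, and the threshold are invented for illustration and are far simpler than what runs in a real mobile core.

```python
import statistics

# Invented daily data volumes (MB) for one device over two weeks: its behavioural baseline.
baseline_mb = [4.8, 5.1, 4.9, 5.0, 5.3, 4.7, 5.2, 5.0, 4.9, 5.1, 5.0, 4.8, 5.2, 5.1]

def is_anomalous(today_mb: float, history: list[float], z_threshold: float = 4.0) -> bool:
    """Flag traffic that sits far outside the device's own historical pattern."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history) or 1e-9  # avoid division by zero on flat histories
    z_score = (today_mb - mean) / stdev
    return abs(z_score) > z_threshold

print(is_anomalous(5.2, baseline_mb))    # False: normal telemetry
print(is_anomalous(180.0, baseline_mb))  # True: possible exfiltration or malware beaconing
```

The value of doing this in the network core, as the episode describes, is that no agent has to be installed on devices that were never designed to run one.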
We talk a lot about AI as if it can fix broken systems. But what happens when the underlying data is too messy, too slow, or too disconnected to support anything useful? That's the problem Manish Sood, founder and CTO of Reltio, has spent the last decade working to solve. Reltio is not your average data company. It sits behind some of the world's most recognisable brands, helping names like L'Oréal, Pfizer, HP, and CarMax modernise how they manage and activate data across the business. What they all share is a recognition that outdated systems and disconnected records don't just slow down insights. They actively block innovation. Manish breaks this down with uncommon clarity. He calls it “data debt”—the invisible burden of stale, incomplete, or fragmented information that quietly kills speed and adds risk. It's not just a technical problem. It's a leadership challenge, especially as businesses adopt generative AI tools that rely on clean, contextual data to function reliably. We explore how real-time intelligence is changing the way companies operate across customer experience, fraud detection, and supply chain resilience. Manish shares examples from enterprise clients who have moved from legacy systems to unified platforms, and how that shift enabled smarter decision-making at scale. From personalised retail offers to proactive healthcare outreach, the stories point to one common truth: if the data isn't trusted, the AI cannot be either. There's also a new role emerging inside many companies—the data steward as AI enabler. These are the people ensuring that data isn't just stored, but shaped. Human-guided, explainable, traceable. That clarity is key to responsible AI, especially in sectors where compliance and reputation are tightly linked. Manish also explains how Reltio's platform helps businesses protect against AI vulnerabilities by enabling resilient data pipelines, consistent governance, and real-time monitoring. In a world where data is created and used simultaneously, batch syncing is not enough. Real-time pipelines give companies the confidence to experiment with AI without falling into chaos. If your business is chasing innovation without cleaning up its data layer first, this conversation is a wake-up call. Manish shows that the future of AI isn't about who builds the best model. It's about who feeds it the best foundation. ********* Visit the Sponsor of Tech Talks Network: Land your first job in tech in 6 months as a Software QA Engineering Bootcamp with Careerist https://crst.co/OGCLA
Here's the thing. Most enterprise AI pitches talk about scale and speed. Fewer talk about trust, tone, and culture. In this conversation with Inflection AI's Amit Manjhi and Shruti Prakash, I explore a different path for enterprise AI, one that combines emotional intelligence with analytical horsepower, enabling teams to ask more informed questions of their data and receive answers that are grounded in context. Amit's story sets the pace. He is a three-time founder, a YC alum, and a CS PhD who has solved complex problems across mobile, ad tech, and data. Shruti complements that arc with a product lens shaped by real operational trenches, from clean rooms to grocery retail analytics. Together, they built BoostKPI during the pandemic, transforming natural language into actionable insights, and then joined Inflection AI to help refocus the company on achieving enterprise outcomes. Their shared north star is simple to say yet tricky to execute. Make data analysis conversational, accurate, and emotionally aware so people actually use it. We unpack Inflection's shift from Pi's consumer roots to privacy-first enterprise tools. That history matters because it gives the team a head start on EQ. When you combine a deep well of human-to-AI conversations with modern LLMs, you get systems that explain, probe, and adapt rather than dump charts and call it a day. Shruti breaks down what dialogue with data looks like in practice. Think back-and-forth exchanges that move from "what happened" to "why it happened," then on to "where else this pattern appears" and "what to do next," all grounded in an organization's language and values. Amit takes us under the hood on deployment choices and ownership. If a customer wants on-prem or VPC, they get it. If they're going to fine-tune models to their vernacular, they can. The model, the insights, and the guardrails remain in the customer's control. I enjoyed the honesty around adoption. Chasing AGI makes headlines, but it rarely helps a merchandising manager spot an early drop in lifetime value or a CX lead understand churn risk before quarter end. The duo keeps the conversation grounded in everyday questions that drive numbers and reduce meetings. They describe a path where EQ and IQ come together to form what Shruti calls contextual intelligence, and where brands can trust AI agents to assist without losing ownership or voice. If you care about making data useful to more people, and you want AI that sounds like your company rather than a generic assistant, this one is for you. We cover startup lessons, the reality of cofounding as a couple during lockdowns, and how Inflection is working with large enterprises to bring conversational analysis to real workloads. It is a grounded look at where enterprise AI is heading, and a timely reminder that technology should elevate humans, not replace them. ********* Visit the Sponsor of Tech Talks Network: Land your first job in tech in 6 months as a Software QA Engineering Bootcamp with Careerist https://crst.co/OGCLA
Here's the thing. Payments only look simple from the outside. In this Tech Talks Daily episode, I sit down with Roberto “Reks” Kafati, CEO and co-founder of DEUNA, to unpack how a scrappy one-click checkout idea grew into an intelligent payments infrastructure that now touches a large slice of Mexico's online economy. Reks explains why Latin America's high decline rates aren't just an inconvenience but a growth killer, and how DEUNA's early focus on orchestration and checkout opened the door to something bigger. When a region routinely sees more than four out of ten online transactions knocked back, the bar for reliability sits in a different place. That practical problem set the stage for what came next.
Athena, Real-Time Decisions, and 638 Signals per Transaction
DEUNA's pivot point came when merchants asked a fair question. With all this payment data flying through the system, what should we do with it? The answer is Athena, DEUNA's AI-powered layer that watches every transaction and feeds merchants real-time insight, routing choices, and suggested actions. It is not another dashboard you promise to check and then ignore by Friday. It is a reasoning engine that sits on top of 638 data points per transaction and turns mess into movement. That is how you recover revenue without punishing good customers with extra friction, how you avoid surprise fees from networks, and how you protect recurring revenue when a processor wobbles. Reks walks us through results that speak plainly. Ramped merchants saw conversion lift from the original one-click experience. The infrastructure tier recovers meaningful GMV and trims fees. Enterprise clients report double-digit ROI and stick around for the compounding effect.
Building Through Adversity and Betting on the Right Layer
What resonated most was the human story behind the metrics. DEUNA was born in the first months of the pandemic, shaped by the shock that hit real-world businesses when revenue fell off a cliff and marketplaces became a lifeline with strings attached. Reks shares an unvarnished look at a tough 2023, the kind of year founders rarely talk about on record. Revenues dipped, deals went sideways, life got complicated. The team chose resilience and doubled down on a two-year vision. That bet is paying off. Over the past twenty-four months the company has grown at a pace that would bend a chart, and the focus has shifted from commoditizing orchestration to productizing intelligence. Put simply, earn trust at checkout, then make the data work for the merchant in real time.
Agentic Commerce, US Expansion, and What Comes Next
We also look forward. If chat interfaces begin to mediate more buying decisions, merchants will need infrastructure that can think, not just connect endpoints. That is the territory DEUNA calls intelligent infrastructure, and it is where Athena operates every day. The company is now in active conversations with major US retailers, confident after winning head-to-head enterprise evaluations. Reks frames the opportunity without hype. If you can see acceptance trends by processor, by country, by card type, and act in the moment, you keep customers, protect margins, and avoid death by a thousand false declines. If you cannot, competitors will gladly welcome your frustrated shoppers. If you care about the real mechanics of growth, this conversation is for you. We talk conversion lift, recovered revenue, and the gritty bits of building a payments company that merchants actually rely on.
We also talk about the days that test your resolve and the tenth day that reminds you why you started.
I sat down with Leo de Araujo, Head of Global Business Innovation at Syntax Systems, to unpack a problem every SAP team knows too well. Years of enhancements and quick fixes leave you with custom code that nobody wants to document, a maze of SharePoint folders, and hard questions whenever S/4HANA comes up. What does this program do? What breaks if we change that field? Do we have three versions of the same thing? Leo's answer is Syntax AI CodeGenie, an agentic AI solution with a built-in chatbot that finally treats documentation and code understanding as a living part of the system, not an afterthought. Here's the thing. CodeGenie automates the creation and upkeep of custom code documentation, then lets you ask plain-language questions about function and business value. Instead of hunting through 40-page PDFs, teams can ask, “Do we already upload sales orders from Excel?” or “What depends on this BAdI?” and get an instant explanation. That changes migration planning. You can see what to keep, what to retire, and where standard capabilities or new extensions make more sense, which shortens the path to S/4HANA Cloud and helps you stay on a clean core. We also talk about how this is delivered. CodeGenie runs on SAP Business Technology Platform, connects through standard APIs, and avoids intrusive add-ons. It is compatible with SAP S/4HANA, S/4HANA Cloud Private Edition through RISE with SAP, and on-premises ECC. Security comes first, with tenant isolation for each customer and no custom code shared externally or used for AI model training. The result is a setup that respects enterprise guardrails while still giving developers and architects fast answers. Clean core gets a plain explanation in this episode. Build outside the application with published APIs, keep upgrades predictable, and innovate at the edge where you can move quickly. CodeGenie gives you the visibility to make that real, surfacing what you actually run today and how it ties to outcomes, so you can design a migration roadmap that fits the business rather than guessing from stale documents. Leo also previews the Gen AI Starter Pack, launching September 9. It bundles a managed, model-flexible platform with workshops, use-case ideation, and initial builds, so teams can move from curiosity to working solutions without locking themselves into a single provider. Paired with CodeGenie and Syntax's development accelerators, the Starter Pack points toward something SAP leaders have wanted for years, a practical way to shift from in-core customizations to clean-core extensions with much less friction. If you are planning S/4HANA, balancing hybrid and multi-cloud realities, or simply tired of tribal knowledge around critical programs, this conversation is for you. We get specific about how CodeGenie works, where it saves time and cost, and how Syntax is shaping a playbook for AI that helps teams deliver results they can trust. ********* Visit the Sponsor of Tech Talks Network: Land your first job in tech in 6 months as a Software QA Engineering Bootcamp with Careerist https://crst.co/OGCLA
Drew Allen, CEO of Grace Technologies, shares real stories from the floor, the ideas shaping safer plants, and why culture matters more than slogans. Drew's background stretches from a family line linked to Samuel Morse to teenage years in China to global business development at 3M. That range shows up in how he leads. He listens, he moves fast, and he expects teams to work on things that matter. In his world that means saving electricians from shocks and arc flash while helping manufacturers modernize without losing their soul. Grace started with mechanical and analog products, then took the hard road into fully digital systems. The shift took time and patience. Today their platform brings sensors, AI, and cloud tooling into maintenance and safety. The example that stuck with me is a proximity band for electricians. It lights, beeps, and vibrates as a worker approaches live voltage. At TriCity, that band prevented three near misses in a three month pilot. A fourth incident still ended in a hospital visit and a costly outage because the worker left the band in his car. Another apprentice nearly placed a hand on a live bus bar until the band told him something was wrong. These moments remind you that technology can change a day and a life. Drew's take on culture is refreshingly direct. Values are not a poster. They are a filter for who you hire. He looks for customer obsession, ownership, curiosity, and candid communication. Then he pairs that with high expectations and real care. Autonomy comes with accountability. Impact matters. If someone does not want to work on meaningful problems, this is not their place. It sounds firm. It also explains why the company keeps earning top workplace recognition while raising the bar on performance. We also talked about Maple Studios, the startup incubator Drew launched in Davenport, Iowa. He sees gaps in the industrial ecosystem. Fewer big exits. Slow adoption cycles. Founders stuck inside large companies. Maple gives them tools, space, and hard feedback so they can iterate faster and build things factories will actually deploy. His advice is simple. Ship, learn, and repeat. Do customer reviews early. Expect a thousand small gotchas. Move through them rather than pretending they will not appear. Looking ahead, Drew expects robotics to accelerate for a very practical reason. Companies cannot find enough people. Dangerous work will be automated. He imagines maintenance tasks shifting toward humanoid robots, with machines designed so robotic agents can service them. He also references GM's self healing language to point at a coming blend of sensing, prediction, and automated repair. On AI, he shares Satya Nadella's challenge. Measure productivity and GDP impact rather than hype. The promise is there. The scoreboard will tell the story. If you work in industrial tech, this conversation lands close to home. You will hear how to bring digital tools into legacy environments, how to design for safety from the start, and how to keep teams motivated without losing kindness. You will also catch an open invitation. Drew wants to partner with builders who care about this space. If that is you, reach out to him on LinkedIn or visit graceport.com. And if you are curious about the band that vibrates before a bad day begins, this episode is a good place to start. ********* Visit the Sponsor of Tech Talks Network: Land your first job in tech in 6 months as a Software QA Engineering Bootcamp with Careerist https://crst.co/OGCLA
I spoke with Kalai Ramea at a timely moment. We recorded this conversation during a heatwave in the UK, which made her work at Planette AI feel very real. Kalai calls herself an all-purpose scientist, with a path that runs through California climate policy, Xerox PARC, and now a startup focused on the forecast window that most people ignore. Not tomorrow's weather. Not far-off climate scenarios. The space in between. Two weeks to two months out, where decisions get made and money is on the line. Kalai explains Planette AI's idea of scientific AI in plain words. Instead of learning from yesterday's weather patterns and hoping the future looks the same, their models learn physics from earth system simulations. Ocean meets atmosphere, energy moves, and the model learns those relationships directly. That matters in a warming world where history is a shaky guide. It also shortens time to insight. Traditional models can take weeks to run. If the output arrives after the risky period has passed, it is trivia. Planette AI is building for speed and usefulness. The value shows up in places you can picture. Event planners deciding whether to green-light a festival. Airlines shaping schedules and staffing. Farmers choosing when to plant and irrigate. Insurers pricing risk without leaning only on the past. Kalai shared a telling backcast of Bonnaroo in Tennessee, where flooding forced a last-minute cancellation. Their system showed heavy-rain signals weeks ahead. That kind of lead time changes outcomes, budgets, and stress levels.
From Jargon To Decisions
What I appreciate most about this story is the focus on access. Too many forecasts live in papers that only specialists read. Kalai and team are working to strip away jargon and deliver answers people can act on. Will it rain enough to trigger a payout? Will a heat threshold be crossed? Will the next month bring the kind of wind that matters for grid operations? The delivery matters as much as the math. NetCDF files might work for researchers, but a map, a simple number, or a chat interface is what users reach for when time is short. There is also a financial thread running through this work. Climate risk now shapes crop insurance, carbon programs, and balance sheets. Parametric insurance is growing because it is simple. Set a threshold. If it hits, the policy pays. Better medium-range signals make those products fairer and more useful. Kalai describes Planette AI's role as a baseline layer others can build on, a kind of AWS for climate intelligence. That framing fits. No single company will build every app in this space. A reliable core makes the rest possible. Kalai's path ties it all together. Policy taught her how decisions get made. PARC sharpened her instincts for practical AI. Planette AI is the result. If you care about planning beyond next week, this episode will give you a new way to think about forecasts and the tools that power them. I will add the blog link Kalai shared in the show notes. In the meantime, if you are in agriculture, travel, energy, or insurance, ask yourself a simple question. What would you change if you had a trustworthy signal three to eight weeks ahead? ********* Visit the Sponsor of Tech Talks Network: Land your first job in tech in 6 months as a Software QA Engineering Bootcamp with Careerist https://crst.co/OGCLA
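The parametric insurance logic Kalai mentions is simple enough to sketch in a few lines. This is a generic illustration with an invented rainfall threshold and payout, not a Planette AI product or a real policy.

```python
from dataclasses import dataclass

@dataclass
class ParametricPolicy:
    """Pays a fixed amount when a measured index crosses a pre-agreed threshold."""
    metric: str        # e.g. cumulative rainfall over the event window, in mm
    threshold: float
    payout: float

def settle(policy: ParametricPolicy, observed_value: float) -> float:
    # No loss adjusters and no claims forms: the trigger is the observation itself.
    return policy.payout if observed_value >= policy.threshold else 0.0

# Invented example: a festival organiser covered against a washout weekend.
flood_cover = ParametricPolicy(metric="rainfall_mm", threshold=120.0, payout=250_000.0)
print(settle(flood_cover, observed_value=95.0))   # 0.0 - below the trigger
print(settle(flood_cover, observed_value=140.0))  # 250000.0 - threshold hit, policy pays
```

Everything interesting lives upstream of this function: the fairness of the product depends on how well the threshold is set, which is exactly where better medium-range forecasts help.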
During the IT Press Tour, I had the pleasure of speaking with Weimo Liu, CEO and co-founder of PuppyGraph, and hearing firsthand how his team is rethinking graph technology for the enterprise. In this episode of Tech Talks Daily, Weimo joins me to share the story behind PuppyGraph's “zero ETL” approach, which lets organizations query their existing data as a graph without ever moving or duplicating it. We discuss why graph databases, despite their promise, have struggled with mainstream adoption, often because of complex pipelines and heavy infrastructure requirements. Weimo explains how PuppyGraph borrows from his time at TigerGraph and Google's F1 engine to build something new: a distributed query engine that maps tables into a logical graph and delivers subsecond performance on massive datasets. That shift opens the door for use cases in cybersecurity, fraud detection, and AI-driven applications where latency and accuracy matter most. We also unpack the developer experience. Instead of rewriting schemas or reloading data every time requirements change, PuppyGraph allows teams to define nodes and edges directly from existing tables. That design lowers the barrier for SQL-focused teams and accelerates time to value. Weimo even touches on the role of graph in reducing AI hallucinations, showing how structured relationships can make enterprise AI systems more reliable. What struck me most in our conversation is how PuppyGraph's playful branding belies its serious engineering depth. Behind the “puppy” name lies a distributed engine built to scale with today's data volumes, backed by strong early adoption and a team that listens closely to customer needs. Whether you're exploring graph for cybersecurity, AI chatbots, or supply chain analytics, this discussion offers a glimpse of how the next generation of graph tech might finally break free from its niche and go mainstream. ********* Visit the Sponsor of Tech Talks Network: Land your first job in tech in 6 months as a Software QA Engineering Bootcamp with Careerist https://crst.co/OGCLA
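To give a feel for the zero ETL idea, here is a hypothetical sketch of declaring nodes and edges over existing tables and answering a one-hop traversal with a join, so no data is copied or moved. The schema format, table names, and helper function are illustrative assumptions, not PuppyGraph's actual configuration syntax or query engine.

```python
# A hypothetical mapping: existing relational tables are declared as graph elements,
# so traversals can be planned over them without duplicating any data.
graph_schema = {
    "vertices": [
        {"label": "Account", "table": "accounts", "id": "account_id"},
        {"label": "Device",  "table": "devices",  "id": "device_id"},
    ],
    "edges": [
        {"label": "LOGGED_IN_FROM", "table": "logins",
         "from": ("Account", "account_id"), "to": ("Device", "device_id")},
    ],
}

def edge_to_sql(edge: dict, schema: dict) -> str:
    """Show how one hop of a graph traversal can be answered by a join over the source tables."""
    src = next(v for v in schema["vertices"] if v["label"] == edge["from"][0])
    dst = next(v for v in schema["vertices"] if v["label"] == edge["to"][0])
    return (
        f"SELECT a.*, d.* FROM {src['table']} a "
        f"JOIN {edge['table']} e ON a.{src['id']} = e.{edge['from'][1]} "
        f"JOIN {dst['table']} d ON e.{edge['to'][1]} = d.{dst['id']}"
    )

print(edge_to_sql(graph_schema["edges"][0], graph_schema))
```

The appeal for SQL-focused teams is visible even in this toy: the graph is a view over tables they already have, so changing the model means editing the mapping rather than reloading data.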
When I invited Or Eshed, CEO and co-founder of LayerX Security, onto Tech Talks Daily, I wanted to challenge a blind spot most teams carry into work each day. We talk about phishing, ransomware, and endpoint controls, yet we skip the place where employees actually live online. The browser. That quiet tab bar has become the front door to identities, payments, SaaS, and now AI. Or calls it a different operating system in its own right, and once you hear his examples of how extensions can intercept cookies, mimic logins, or even meddle with AI chats, the penny drops fast. Here's the thing. Blocking extensions across the board no longer fits how people work. Developers, marketers, sales teams, and support agents all lean on extensions for real productivity gains. Or's argument is simple. If the business depends on extensions, security has to meet people where they are with continuous, risk-based controls inside the browser itself. That means assessing code, permissions, ownership changes, and live behaviors, not relying on a static allow list that grows and grows while attackers slip through the cracks. We also unpack Extensionpedia, LayerX's free resource that lets anyone look up the risk profile of a specific extension. It is part education, part early warning system, and it serves a wider mission to raise the floor for everyone. Or shares how a technology alliance with Google has helped the team analyze extensions at serious scale, and why better data beats clever slogans in a space where signals change hour by hour.
Malicious Extensions, AI Shortcuts, And The Culture Shift Security Needs
One of the standout moments is a real-world story that starts at home and ends inside a corporate network. A spouse installs a screen-recording extension on a personal device, the browser profile syncs at work, and suddenly corporate credentials and sensitive sessions are mirrored to an untrusted machine. No shadowy APT needed. Just everyday sync doing exactly what it was designed to do. It is messy, human, and exactly why policy needs to be paired with continuous visibility in the browser. We explore the gray zone where productivity tools collide with privacy. Password managers, VPN helpers, and AI-everywhere extensions promise convenience, yet they can scrape data across SaaS apps or sync credentials in ways security leaders never intended. Or's advice is refreshingly pragmatic. Assume extensions are staying. Instrument the browser, score risk in real time, and adapt access based on what an extension actually does, not what it claims on a store page. Looking ahead, Or sees the browser taking an even bigger role as email, SaaS, and AI agents converge in one place. With AI companies building their own browsers, the last mile of user interaction gets denser, faster, and more valuable to protect. If 99 percent of enterprise users already run at least one extension, the task is clear. Know which ones are in play, understand how they behave, and keep policy dynamic. If this conversation sparks a rethink of your own approach, check your extensions in Extensionpedia, and then consider what modern, in-browser controls would look like in your environment. After this episode, you may never look at that tidy row of icons the same way again. ********* Visit the Sponsor of Tech Talks Network: Land your first job in tech in 6 months as a Software QA Engineering Bootcamp with Careerist https://crst.co/OGCLA
I invited Laura Desmond, CEO of Smartly, to make sense of what feels like the biggest shake-up in marketing since the mobile era. She has led through every cycle I can remember, from the early internet to the rise of social, and she sees AI changing the rules faster than any previous wave. Across our conversation we unpack how AI is rewriting creative work, buying, and measurement, while forcing brands to rebuild trust with clear rules on data, models, and creator rights. Here's the thing. Attention is shorter, and the thumb moves fast. Most people give an ad about two seconds, and video is taking over the feed. Laura expects video to account for three quarters of digital ads by 2026, which tracks with what I am seeing across every platform. Smartly is betting on that shift with tools that turn Shorts or TikToks into personalized CTV spots, and bring CTV signal back into social. The goal is simple to say and hard to pull off. Show every person something that feels made for them, then learn from the response and improve the next piece of creative in near real time. We also talk about why the ground is moving under search. A growing number of people, especially younger users, skip the front page of Google and ask an AI assistant instead. That changes how discovery works, how queries appear, and where ad products live. Laura thinks we are heading toward campaigns that cut across search, social, retail media, and CTV as one flowing video-first effort, with creative and media stitched together by software rather than teams tossing files over the wall. Results matter, and Laura shared two proof points I kept coming back to. Smartly's platform has been validated by PwC for a 13 percent ROI lift across clients. The same study confirmed time savings that add up to 42 minutes a day for hands-on users. That reclaimed time funds the work that actually moves the needle, like faster A/B tests, sharper creative decisions, and better budget moves across channels. We also dig into conversational ads. In a recent test with Boots, Smartly's format delivered roughly four times the return on investment versus business as usual, which speaks to how fast query-style interactions are shaping expectations. Trust sits in the middle of all this. Laura is clear that responsible AI is table stakes. Brands need controls to tune or override generated assets, clarity on data sources and model choice, and a stance on creator rights before any content goes live. Her view of AI is creative first. Automate the tedious parts. Keep people in charge of taste, tone, and brand. Use the feedback loop to learn faster, not to replace the team. We close on where this all leads. Expect brand experiences that blur physical and digital without losing the human spark. Stadiums full, stores buzzing, and at the same time richer virtual touchpoints, snackable video, and one-to-one conversations that feel helpful rather than creepy. If this is your world, Laura is hosting Smartly's ADVANCE on September 17 in Brooklyn, and it looks set to be a real working session for marketers who want results, not theater. You can find details here: https://bit.ly/4fRgWEE. Tune in if you want a candid, practical map for where creative, media, and AI are heading next, and how to measure what matters while keeping your brand worthy of trust. ********* Visit the Sponsor of Tech Talks Network: Land your first job in tech in 6 months as a Software QA Engineering Bootcamp with Careerist https://crst.co/OGCLA
What does it take to map the oceans when most of the world's seabed remains unseen and unmapped? That's the question I explored with Mike Liddell from Fugro, a company using technology to reveal what lies beneath the waves. In our conversation, Mike explained why surveying the ocean is like “working in heavy fog on a roller coaster” and how traditional tools like light and radio signals are useless underwater. Instead, sonar, robotics, and increasingly AI are stepping in to make sense of this hidden world. Mike described the huge scale of the challenge, from mapping areas larger than major cities to supporting offshore wind farms that power our clean energy transition. With labour shortages and younger generations less willing to spend months at sea, Fugro is shifting to remote operations centres and uncrewed surface vessels. These new approaches not only widen the talent pool but also cut fuel use dramatically—by as much as 95 percent compared to older ships. What really struck me was the pace of change. A few years ago, offshore vessels struggled with internet speeds reminiscent of dial-up modems. Today, satellite systems like Starlink make real-time collaboration between sea and shore possible. Add in AI that can process data at the edge and make instant decisions about where and how to collect information, and you begin to see how marine surveying is entering a new era. This episode is a glimpse into that frontier and into how technology is reshaping the way we understand and care for our blue planet. ********* Visit the Sponsor of Tech Talks Network: Land your first job in tech in 6 months as a Software QA Engineering Bootcamp with Careerist https://crst.co/OGCLA
Here's the thing. Generative AI for visuals has shifted from a party trick to everyday craftwork, and few people sit closer to that shift than Ofir Bibi, VP of Research at Lightricks. In this conversation, I wanted to understand how a company famous for Facetune, Photoleap, and Videoleap is building for a future where creators expect speed, control, and choice without a headache. What I found was a story about building core technology that serves real creative workflows, not the other way around. Ofir traces Lightricks' journey from clever on-device tricks that made small screens feel powerful to today's foundation models running in the cloud. The constant thread is usability. Making complex editing feel simple requires smart decisions in the background, and that mindset has shaped everything from their early mobile apps to LTX Studio, the company's multi-model creative platform. Across the last three years, generative features moved from novelty to necessity, and that reality forced a bigger question: when do you stop stitching together other people's models and start crafting your own? That question led to LTXV, an open-source video generation model designed for speed, efficiency, and control. Ofir explains why Lightricks built it from scratch and why they shared the weights and trainer with the community. The result is a fast feedback loop where researchers, developers, and even competitors try ideas on a model that runs on consumer-grade hardware and can generate clips faster than they can be watched. The new LTXV 2B Distilled build continues that push toward quicker iteration and creator-friendly control, including arbitrary frame conditioning that suits animation and keyframe-driven workflows. We also talk about the changing data diet for training. Quantity is out. Quality and preparation matter. Licensed, high-aesthetic datasets and tighter curation produce models that understand prompts, motion, and physics with fewer weird edges. That discipline shows up in the product too. LTX Studio blends Lightricks tech with options from partners like Google's Veo and Black Forest Labs' Flux, then steers users toward the right model for the job through thoughtful UI. If you want the sharpest single shot, you can choose it. If you want fast, iterative tweaks for storytelling, LTXV is front and center. Looking ahead, Ofir sees a near future where models become broader and more multimodal, while creators and enterprises ask for local and on-prem options that keep data closer to home. That makes efficiency a feature, not a footnote. If you care about the craft of making, not just the spectacle, this episode offers a grounded view of how AI can actually serve creators. It left me convinced that speed and control are the real differentiators, and that open source can be a very practical way to get both. ********* Visit the Sponsor of Tech Talks Network: Land your first job in tech in 6 months as a Software QA Engineering Bootcamp with Careerist https://crst.co/OGCLA
Artificial intelligence is no longer confined to experiments in labs or one-off pilot projects. For many enterprises, it is becoming the backbone of how they operate, innovate, and compete. But as companies race to deploy AI, the biggest challenge is not whether the technology works, but whether the foundations exist to scale it safely and effectively. In this episode of Tech Talks Daily, I'm joined by Christian Buckner, Senior Vice President of Data and AI Platform at Altair, a company known for combining rocket science with data science. Christian unpacks the concept of an AI Fabric, a framework that harmonizes enterprise data and embeds AI directly into a universal model. Rather than scattered tools and isolated projects, the AI Fabric acts as a living system of intelligence, helping organizations move faster, make better decisions, and unlock new kinds of automation. We talk about how global enterprises from automotive suppliers to petrochemical giants are already using Altair's technology to improve safety, optimize production, and cut costs. Christian shares examples including a transportation company that boosted revenue by $50 million in its first year of AI-driven dynamic pricing and a healthcare provider that saved $17 million in analysis time using knowledge graphs for drug discovery. The conversation also explores the hype and the risks around AI agents. While it is easy to spin up a proof of concept with a Python library, Christian explains why real enterprise impact requires governance, monitoring, and infrastructure to make agents trustworthy and sustainable. He likens it to building HR systems for AI, where agents need onboarding, oversight, and performance evaluation to operate alongside humans. We also touch on Altair's acquisition by Siemens and what this means for the future of industrial AI. By integrating Altair's data and AI expertise with Siemens' deep industrial systems, enterprises can add intelligence without ripping out existing infrastructure. The result is not about replacing workers but enabling them to become what Christian calls “10x employees,” augmented by AI tools and agents that multiply their effectiveness. For anyone curious about how AI will change product design, operations, and enterprise decision-making, this episode offers a rare inside look at the technology foundations being built today. You can learn more at altair.com/ai-fabric.
What if meetings stopped draining your time and instead became engines for action? That's the question driving Christoph Fleischmann, CEO of Arthur AI, and the conversation in today's episode of Tech Talks Daily. Christoph has spent his career at the intersection of human potential and technology, and now he's leading a company that wants to change how enterprises actually get work done. Arthur AI isn't another tool to add to the stack. It's a digital co-worker, an intelligent presence that joins meetings, captures knowledge, and keeps teams aligned across time zones and formats. Whether in XR spaces, on the web, or through conversational interfaces, Arthur AI blends real-time and asynchronous collaboration. The aim is to replace endless, inefficient meetings with something more dynamic: an environment where humans and AI collaborate side by side to deliver outcomes. This conversation goes beyond theory. Christoph shares how Fortune 500 companies are already using Arthur AI to align global strategies, manage complex transformations, and modernize learning and development programs. He explains how their platform is built on enterprise-grade security and a flexible, LLM-agnostic architecture, critical foundations for companies wary of vendor lock-in or compliance risks. We also touch on the cultural shift of inviting AI to take a real seat at the table. From interviewing and project management to knowledge sharing, Arthur AI represents a new category of work experience, one where digital co-workers support people rather than replace them. For leaders tired of meetings that go nowhere and knowledge trapped in silos, this episode offers a glimpse of what smarter, faster collaboration looks like at scale. Could the blueprint for the future of digital work already be here?
Here's the thing. Most enterprise AI talk today starts with chatbots and ends with glossy demos. Meanwhile, the data that actually runs a business lives in rows, columns, and time stamps. That gap is where my conversation with Vanja Josifovski, CEO of Kumo.ai, really comes alive. Vanja has spent two and a half decades helping companies turn data into decisions, from research roles at Yahoo and Google to steering product and engineering at Pinterest through its IPO and later leading Airbnb Homes. He's now building Kumo.ai to answer an old question with a new approach: how do you get accurate, production-grade predictions from relational data without spending months crafting a bespoke model for each use case? Vanja explains why structured business data has been underserved for years. Images and text behave nicely compared to the messy reality of multiple tables, mixed data types, and event histories. Traditional teams anticipate a prediction need, then kick off a long feature engineering and modeling process. Kumo's Relational Foundation Model, or RFM, flips that script. Pre-trained on a large mix of public and synthetic data warehouses, it delivers task-agnostic, zero-shot predictions for problems like churn and fraud. That means you can ask the model questions directly of your data and get useful answers fast, then fine-tune for another 15 to 20 percent uplift when you're ready to squeeze more from your full dataset. What stood out for me is how Kumo removes the grind of manual feature creation. Vanja draws a clear parallel to computer vision's shift years ago, when teams stopped handcrafting edge detectors and started learning from raw pixels. By learning directly from raw tables, Kumo taps the entirety of the data rather than a bundle of human-crafted summaries. The payoff shows up in the numbers customers care about, with double-digit improvements against mature, well-defended baselines and the kind of time savings that change roadmaps. One customer built sixty models in two weeks, a job that would typically span a year or more. We also explore how this fits with the LLM moment. Vanja doesn't position RFM as a replacement for language models. He frames it as a complement that fills an accuracy gap on tabular data where LLMs often drift. Think of RFM as part of an agentic toolbox: when an agent needs a reliable prediction from enterprise data, it can call Kumo instead of generating code, training a fresh model, or bluffing an answer. That design extends to the realities of production as well. Kumo's fine-tuning and serving stack is built for high-QPS environments, the kind you see in recommendations and ad tech, where cost and latency matter. The training story is another thread you'll hear in this episode. The team began with public datasets, then leaned into synthetic data to cover scenarios that are hard to source in the wild. Synthetic generation gives them better control over distribution shifts and edge cases, which speeds iteration and makes the foundation model more broadly capable upon arrival. If you care about measurable outcomes, this episode shows why CFOs pay attention when RFM lands. Vanja shares examples where a 20 to 30 percent lift translates into hundreds of thousands of additional monthly active users and direct revenue impact. That kind of improvement isn't theory. It's the difference between a model that nudges a metric and a model that moves it.
By the end, you'll have a clear picture of what Kumo.ai is building, why relational data warrants its own foundation model, and how enterprises can move from wishful thinking to practical wins. Curious to try it yourself? Vanja also points to a sandbox where teams can load data and ask predictive questions within a notebook, then compare results against in-house models. If your AI plans keep stalling on tabular reality, this conversation offers a way forward that's fast, accurate, and designed for the systems you already run.
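To make the feature-engineering contrast concrete, here is a minimal sketch of the traditional tabular workflow described above: join relational tables, hand-craft aggregate features, and train a bespoke model for one question such as churn. The table names, columns, and choice of LogisticRegression are illustrative assumptions for this sketch only, not Kumo's API or method.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Two toy relational tables: a customer table and an event-history table.
customers = pd.DataFrame({
    "customer_id": [1, 2, 3, 4],
    "plan": ["basic", "pro", "basic", "pro"],
    "churned": [0, 1, 0, 1],
})
events = pd.DataFrame({
    "customer_id": [1, 1, 2, 3, 3, 3, 4],
    "amount": [10.0, 12.5, 3.0, 8.0, 9.5, 7.0, 2.5],
})

# Manual feature engineering: collapse each customer's event history into
# hand-crafted aggregates before any model ever sees the data.
aggregates = (
    events.groupby("customer_id")["amount"]
    .agg(event_count="count", avg_amount="mean")
    .reset_index()
)
training = customers.merge(aggregates, on="customer_id", how="left").fillna(0)
training["is_pro"] = (training["plan"] == "pro").astype(int)

# A bespoke model trained for this one prediction task only.
features = training[["event_count", "avg_amount", "is_pro"]]
model = LogisticRegression().fit(features, training["churned"])
print(model.predict_proba(features)[:, 1])  # per-customer churn probabilities
```

Every new question repeats that aggregation-and-training loop, which is the grind a pre-trained, zero-shot model over raw tables is meant to remove.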
When VMware Cloud Foundation 9.0 launched in June, it marked more than just another release. It was the clearest signal yet that Broadcom is betting big on the modern private cloud. In this episode of Tech Talks Daily, I sat down with Prashanth Shenoy, who leads marketing and learning for the VCF division at Broadcom, to discuss what the launch means for enterprises and how those themes are playing out live at VMware Explore in Las Vegas. Prashanth shares how VCF 9.0 was designed to help enterprises operate private clouds with the same simplicity and scale as public hyperscalers, while keeping sovereignty, security, and cost predictability front and center. He explains why this release is more than an infrastructure update. It's a shift toward a workload-agnostic, developer-centric platform where virtual machines, containers, and AI workloads can run side by side with a consistent operational experience. We also unpack Broadcom's headline announcements at the show. From making VCF an AI-native platform to embedding private AI services directly into the foundation, the message is clear: the AI pilots of the past are moving into production, and Broadcom wants VCF to be the default home for enterprise AI. Another major theme is cyber compliance at scale, with VCF now offering continuous enforcement, rapid ransomware recovery, and advanced security services that address today's board-level concerns. But perhaps the biggest takeaway is the momentum. Nine of the top ten Fortune companies are now running on VCF, more than 100 million cores have been licensed, and dozens of enterprises—from global giants to mid-sized insurers—are on stage at VMware Explore sharing their adoption stories. The so-called “cloud reset” that Prashanth has written about is not just theory. Companies are rethinking their cloud strategies, seeking cost transparency, avoiding waste, and building resilient, AI-ready private clouds. This conversation highlights how Broadcom is doubling down on VCF with a singular focus, a massive R&D commitment, and a clear vision of where private cloud is headed. If you want to understand why private AI, developer services, and cyber resilience are now central to enterprise strategy, this is a conversation worth hearing.
At VMware Explore in Las Vegas, the buzz wasn't just about generative AI, but about where and how it should run. My guest is Tasha Drew, Director of Engineering for the AI team in the VMware Cloud Foundation division at Broadcom, who has been at the center of this conversation. Fresh off the main stage, where she helped debut VMware's new Private AI Services and Intelligent Assist for VMware Cloud Foundation, Tasha joins me to unpack what these announcements mean for enterprises grappling with privacy, cost, and integration challenges. Tasha explains why private AI is resonating so strongly in 2025, outlining the three pillars that define it: protecting sensitive intellectual property, managing regulated or high-value data, and ensuring role-based control of fine-tuned models. She shares how organizations often start their AI journey in the public cloud, but as experimentation turns to production, cost pressures, data compliance, and proximity to data drive them toward private AI. We also dive into VMware's own evolution toward building an AI-native private cloud platform. Tasha highlights the journey from deep learning VMs and Jupyter notebooks to full AI platform services that empower IT teams to deliver models efficiently, save money, and accelerate deployment of retrieval-augmented generation (RAG) applications. She introduces Intelligent Assist for VMware Cloud Foundation, an AI-powered guide that helps teams navigate complex deployments with context-aware support and step-by-step instructions. Beyond the technology, Tasha reflects on the broader ecosystem shifts, from partnerships with NVIDIA and AMD to the role of Model Context Protocol (MCP) in breaking down integration barriers between enterprise systems. She believes MCP represents a turning point, enabling seamless workflows between platforms that historically lacked incentive to work together. This conversation captures a pivotal moment where private AI is moving from theory into enterprise adoption. For leaders weighing their next move, Tasha provides both the strategic framing and the technical insight to understand why private AI has become one of the most talked-about forces shaping enterprise IT today.
Factories don't usually make headlines at tech conferences, but what Audi is doing inside its production labs is anything but ordinary. At VMware Explore in Las Vegas, I sat down with Dr. Henning Löser, Head of the Audi Production Lab, to talk about how the automaker is reinventing its factory floor with a software-first mindset. Henning leads a small team he jokingly calls “the nerds of production,” but their work is changing how cars are built. Instead of replacing entire lines for every new piece of technology, Audi has found a way to bring the speed and flexibility of IT into the world of industrial automation. The result is Edge Cloud 4 Production, a system that takes virtualization technology normally reserved for data centers and applies it directly to manufacturing. In our conversation, Henning explained why virtual PLCs may be one of the biggest breakthroughs yet. They look invisible to workers on the line but give maintenance teams new transparency and resilience. We explored how replacing thousands of industrial PCs with centralized, virtualized workloads not only reduces downtime but also cuts energy use and simplifies updates. And yes, we even discussed the day a beaver chewed through one of Audi's fiber optic cables and how redundancy kept production running without a hitch. This episode is about more than smart factories. It's about how an industry known for heavy machinery is learning to think like the cloud. From scalability and sustainability to predictive maintenance and AI-ready infrastructure, Audi is showing how the car of the future starts with the factory of the future. If you've ever wondered how emerging technologies like virtualization and private cloud are reshaping the shop floor, this is a story you'll want to hear.
In this episode of Tech Talks Daily, I speak with Jane Ostler from Kantar, the world's leading marketing data and analytics company, whose clients include Google, Diageo, AB InBev, Unilever, and Kraft Heinz. Jane brings clarity to a debate often clouded by headlines, explaining why AI should be seen as a creative sparring partner, not a rival. She outlines how Kantar is helping brands balance efficiency with inspiration, and why the best marketing in the years ahead will come from humans and machines working together. We explore Kantar's research into how marketers really feel about AI adoption, uncovering why so many projects stall in pilot phase, and what steps can help teams move from experimentation to execution. Jane also discusses the importance of data quality as the foundation of effective AI, drawing comparisons to the early days of GDPR when oversight and governance first became front of mind. From Coca-Cola's AI-assisted Christmas ads to predictive analytics that help brands allocate budgets with greater confidence, Jane shares examples of where AI is already shaping marketing in ways that might surprise you. She also highlights the importance of cultural nuance in AI-driven campaigns across 90-plus markets, and why transparency, explainability, and human oversight are vital for earning consumer trust. Whether you're a CMO weighing AI strategy, a brand manager experimenting with new tools, or someone curious about how the biggest advertisers are reshaping their playbooks, this conversation with Jane Ostler offers both inspiration and practical guidance. It's about rethinking AI not as the end of creativity, but as the beginning of a new partnership between data, machines, and human imagination.
The enterprise network is under pressure like never before. Hybrid environments, cloud migrations, edge deployments, and the sudden surge in AI workloads have made it increasingly difficult to keep application connectivity secure and reliable. The old model of device-by-device, rule-based network management can't keep up with today's hyperconnected, API-driven world. In this episode of Tech Talks Daily, I sit down with Kyle Wickert, Field Chief Technology Officer at AlgoSec, to discuss the future of network management in the age of platformization. With more than a decade at AlgoSec and years of hands-on experience working with some of the world's largest enterprises, Kyle brings an unfiltered view of the challenges and opportunities that IT leaders are facing right now. We talk about why enterprises are rapidly shifting to platform-based models to simplify network security, but also why that strategy can start to break down when dealing with multi-vendor environments. Kyle explains the fragmentation across cloud, on-prem, and edge infrastructure that keeps CIOs awake at night, and why spreadsheets and manual change processes are still far too common in 2025. He also shares why visibility, intent-based policies, and policy automation are becoming non-negotiable in reducing risk and friction. Kyle doesn't just talk theory. He shares a real-world case study of a European financial institution that automated policy provisioning across firewalls and cloud infrastructure, integrated it with CI/CD pipelines, and reduced its change rejection rate from 25% to 4%. It's a compelling example of how the right approach to network management can deliver measurable improvements in agility, security, and business satisfaction.
AI is rapidly becoming part of the healthcare system, powering everything from diagnostic tools and medical devices to patient monitoring and hospital operations. But while the potential is extraordinary, the risks are equally stark. Many hospitals are adopting AI without the safeguards needed to protect patient safety, leaving critical systems exposed to threats that most in the sector have never faced before. In this episode of Tech Talks Daily, I speak with Ty Greenhalgh, Healthcare Industry Principal at Claroty, about why healthcare's AI rush could come at a dangerous cost if security does not keep pace. Ty explains how novel threats like adversarial prompts, model poisoning, and decision manipulation could compromise clinical systems in ways that are very different from traditional cyberattacks. These are not just theoretical scenarios. AI-driven misinformation or manipulated diagnostics could directly impact patient care. We explore why the first step for hospitals is building a clear AI asset inventory. Too many organizations are rolling out AI models without knowing where they are deployed, how they interact with other systems, or what risks they introduce. Ty draws parallels with the hasty adoption of electronic health records, which created unforeseen security gaps that still haunt the industry today. With regulatory frameworks such as the EU's AI Act and emerging UK rules approaching, Ty stresses that hospitals cannot afford to wait for legislation. Immediate action is needed to implement risk frameworks, strengthen vendor accountability, and integrate real-time monitoring of AI alongside legacy devices. Only then can healthcare organizations gain the trust and resilience needed to safely embrace the benefits of AI. This is a timely conversation for leaders across healthcare and cybersecurity. The sector is on the edge of an AI revolution, but the choices made now will determine whether that revolution strengthens patient care or undermines it. You can learn more about Claroty's approach to securing healthcare technology at claroty.com.