The Tech Blog Writer Podcast


Fed up with tech hype? Looking for a tech podcast where you can learn from tech leaders and startup stories about how technology is transforming businesses and reshaping industries? In this daily tech podcast, Neil interviews tech leaders, CEOs, entrepreneurs, futurists, technologists, thought lead…

Neil C. Hughes


    • Latest episode: Sep 16, 2025
    • New episodes: daily
    • Average duration: 27m
    • Episodes: 3,292

    Rated 5 from 156 ratings. Listeners of The Tech Blog Writer Podcast who love the show mention: neil asks, bram, neil hughes, neil does a great, neil's podcast, charismatic host, insightful and engaging, tech topics, love tuning, great tech, engaging podcast, tech industry, emerging, tech podcast, startups, founder, best tech, predictions, technology, innovative.


    Ivy Insights

    The Tech Blog Writer Podcast is a must-listen for anyone interested in the intersection of technology and various industries. Hosted by Neil Hughes, this podcast features interviews with a wide range of guests, including visionary entrepreneurs and industry experts. Neil has a remarkable talent for breaking down complex topics into easily understandable discussions, making it accessible to listeners from all backgrounds. One of the best aspects of this podcast is the diversity of guests, as they come from different industries and share their cutting-edge technology solutions. It provides a great source of inspiration and knowledge for staying up to date with the latest advancements in tech.

    The worst aspect of The Tech Blog Writer Podcast is that sometimes the discussions can feel a bit rushed due to the time constraints of each episode. With so many interesting guests and topics to cover, it would be great if there were more time for in-depth conversations. Additionally, while Neil does an excellent job of selecting diverse guests, occasionally it would be beneficial to have more representation from underrepresented communities in tech.

    In conclusion, The Tech Blog Writer Podcast is an excellent resource for those looking to stay informed about the latest tech advancements while learning from visionary entrepreneurs across various industries. Neil's ability to break down complex topics and his engaging interviewing style make this podcast a valuable source of inspiration and knowledge. Despite some minor flaws, it remains a must-listen for anyone interested in staying up to date with cutting-edge technology solutions and developments.




    Latest episodes from The Tech Blog Writer Podcast

    3422: Meet Symphion and the Print Fleet Cybersecurity as a Service

    Sep 16, 2025 · 21:57


    I've spent years talking about endpoint security, yet printers rarely enter the conversation. Today, that blind spot takes center stage. I'm joined by Jim LaRoe, CEO of Symphion, to unpack why printers now represent one of the most exposed corners of the enterprise and what can be done about it. Jim's team protects fleets that range from a few hundred devices to tens of thousands, and the picture he paints is stark. In many organizations, printers make up 20 to 30 percent of endpoints, and almost all of them are left in a factory default state. That means open ports, default passwords, and little to no monitoring. Pair that with the sensitive data printers receive, process, and store, plus the privileged connections they hold to email and file servers, and you start to see why attackers love them. We trace Symphion's path from a configuration management roots story in 1999 to a pivot in 2015 when a major printer manufacturer invited the company behind the curtain. What they found was a parallel universe to mainstream IT. Brand silos, disparate operating systems, and a culture that treated printers as cost items rather than connected computers. Add in the human factor, where technicians reset devices to factory defaults after service as second nature, and you have a recipe for recurring vulnerabilities that never make it into a SOC dashboard. Jim explains how Symphion's Print Fleet Cybersecurity as a Service tackles this mess with cross-brand software, professional operations, and proven processes delivered for a simple per-device price. The model is designed to remove operational burden from IT teams. Automated daily monitoring detects drift, same-day remediation resets hardened controls, and comprehensive reporting supports regulatory needs in sectors like healthcare where compliance is non-negotiable. The goal is steady cyber hygiene for printers that mirrors what enterprises already expect for servers and PCs, without cobbling together multiple vendor tools, licenses, and extra headcount to operate them. We also talk about the hidden costs of DIY printer security. Licensing multiple management platforms for different brands, training staff who already have full plates, and outages caused by misconfigurations all add up. Jim shares real-world perspectives from organizations that tried to patch together a solution before calling in help. The pattern is familiar. Costs creep. Vulnerabilities reappear. Incidents push the topic onto the CISO's agenda. Symphion's pitch is straightforward. Treat print fleets like any other class of critical infrastructure in the enterprise, and measure outcomes in risk reduction, time saved, and fewer surprises. If you are commuting while listening and now hearing alarm bells, you are not alone. Think about the printers scattered across your offices and clinics. Consider the data that passes through them every day. Then picture an attacker who finds default credentials in minutes and uses a printer to move across your network.  Tune in for a fast, practical look at a risk hiding in plain sight, and learn how Symphion's Print Fleet Cybersecurity as a Service can help you close a gap that attackers know too well. ********* Visit the Sponsor of Tech Talks Network: Land your first job  in tech in 6 months as a Software QA Engineering Bootcamp with Careerist https://crst.co/OGCLA  
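    To make the exposure concrete, here is a minimal Python sketch of the kind of factory-default check the conversation describes. It is not Symphion's tooling; the printer hostnames and the port list are illustrative assumptions only.

```python
# Minimal sketch (not Symphion's tooling): flag printers that still answer on
# TCP ports commonly left open in a factory-default state. Hostnames are made up.
import socket

DEFAULT_TCP_PORTS = {23: "telnet", 80: "web admin", 515: "LPD", 9100: "raw print"}

def open_ports(host: str, timeout: float = 1.0) -> list[str]:
    """Return the default-exposure ports that accept a TCP connection."""
    found = []
    for port, label in DEFAULT_TCP_PORTS.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            if sock.connect_ex((host, port)) == 0:  # 0 means the port accepted the connection
                found.append(f"{port}/{label}")
    return found

if __name__ == "__main__":
    # hypothetical fleet inventory; a real check would pull this from asset management
    for printer in ["printer-01.example.internal", "printer-02.example.internal"]:
        exposed = open_ports(printer)
        if exposed:
            print(f"{printer}: still exposing {', '.join(exposed)}")
```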

    Marketing Intelligence After Cookies: How Funnel Turns Data Into Decisions

    Sep 15, 2025 · 30:29


    Marketing teams used to have a simple enough job: follow the click, count the conversions, and shift the budget accordingly. But that world is gone. GDPR, iOS restrictions, and browser-level changes have left most attribution models broken or unreliable. So what now? In this episode, I sat down with Fredrik Skansen, CEO of Funnel, to unpack how marketing intelligence actually works in a world where data is partial, journeys are fragmented, and the old models don't hold. Since founding Funnel in 2014, Fredrik has grown the company into a platform that supports over 2,600 brands and handles reporting on more than 80 billion dollars in annual digital spend. That scale gives him a front-row seat to the questions every CMO and CFO are asking right now. Fredrik explains why last-click attribution didn't just become inaccurate. It became misleading. With tracking capabilities stripped down and user signals disappearing, the industry has had to move toward modeled attribution and real-time optimisation. That only works if your data is clean, aligned, and ready for analysis. Funnel's platform helps structure campaigns upfront, pull data into a unified model, apply intelligence, push learnings back into the platforms, and produce reporting that makes sense to the wider business. This isn't about dashboards. It's about decisions. We also talk about budget mix. Performance channels may feel safe, but Fredrik points out they are also getting more expensive. When teams bring brand and mid-funnel activity back into the measurement framework, the picture often changes. He shares how Swedish retailer Gina Tricot grew from 100 million to 300 million dollars in three years, in part by shifting spend to brand and driving demand earlier in the customer journey. That move only felt safe because the data supported it. AI adds another layer. With tools like Perplexity reshaping search behavior and the web shifting from links to answers, click-throughs are drying up. But it's not the end of visibility. Content still matters. So does structure. The difference is that now your reader might be an AI model, not a human. That requires a rethink in how brands approach discoverability, authority, and engagement. What makes Funnel interesting is that it doesn't stop at analytics. The platform feeds insight back into action, reducing waste and creating tighter loops between teams. It also works for agencies, which is why groups like Havas use it across 40 offices through a global agreement. If you're tired of attribution theatre and want to understand what marketing measurement looks like when it's built for reality, this episode gives you a clear, usable view. Listen in, then tell me which decision you're still guessing on. Because marketing can be measured. Just not the way it used to be. ********* Visit the Sponsor of Tech Talks Network: Land your first job  in tech in 6 months as a Software QA Engineering Bootcamp with Careerist https://crst.co/OGCLA    
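    To see why last-click and modeled attribution can tell very different stories, here is a toy Python comparison. It is not Funnel's methodology; the journeys, revenue figures, and the 40/20/40 position-based weights are invented for illustration.

```python
# Toy illustration of why last-click and modeled attribution disagree.
# Not Funnel's methodology; journeys, revenue, and weights are invented.
from collections import defaultdict

journeys = [
    (["brand_video", "paid_social", "paid_search"], 120.0),  # (touchpoints, revenue)
    (["paid_social", "email", "paid_search"], 80.0),
    (["brand_video", "email"], 60.0),
]

def last_click(journeys):
    credit = defaultdict(float)
    for touches, revenue in journeys:
        credit[touches[-1]] += revenue            # all credit to the final touch
    return dict(credit)

def position_based(journeys, first=0.4, last=0.4):
    credit = defaultdict(float)
    for touches, revenue in journeys:
        if len(touches) == 1:
            credit[touches[0]] += revenue
            continue
        credit[touches[0]] += first * revenue     # reward demand creation
        credit[touches[-1]] += last * revenue     # reward conversion
        middle = touches[1:-1]
        remainder = (1 - first - last) * revenue
        if middle:
            for t in middle:                      # split the rest across mid-funnel touches
                credit[t] += remainder / len(middle)
        else:                                     # two-touch journey: split the remainder evenly
            credit[touches[0]] += remainder / 2
            credit[touches[-1]] += remainder / 2
    return dict(credit)

print("last click:", last_click(journeys))        # ignores brand and mid-funnel entirely
print("position based:", position_based(journeys))
```

    The point of the toy is the gap between the two outputs: the channels that create demand earn nothing under last click, which is exactly the distortion Fredrik describes.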

    Why FeatureOps Might Be the Future of Software Delivery

    Sep 14, 2025 · 49:23


    I invited Egil Østhus to unpack a simple idea that tends to get lost in release day pressure. DevOps gets code to production quickly, but users experience features, not pipelines. Egil is the founder of Unleash, an open source feature management platform with close to 30 million downloads, and he argues that the next step is FeatureOps. It is a mindset and a set of practices that separate deployment from release, so teams can place code in production, light it up for a small cohort, learn, and only then scale out with confidence. Here is the thing. Controlled rollouts, clear telemetry, and fast rollback reduce risk without slowing teams down. Egil explains how FeatureOps connects engineering effort to business outcomes through gradual exposure, full stack experimentation, and what he calls surgical rollback. Instead of ripping out an entire release when one part misbehaves, teams can disable the offending capability and keep the rest of the value in place. It sounds straightforward because it is, and that is the point. Less drama, more learning, better results. We also talk about culture. When releases repeatedly disappoint, trust between product and engineering frays. Egil shares examples where Unleash helped a hardware and software company move from blame to shared ownership by making rollout plans visible and collaborative. Another client, an ERP vendor, discovered that early feedback from a small group of users allowed them to ship a leaner version that met the need without months of extra scope. That is how FeatureOps saves money and tempers expectations while still delighting customers. AI enters the story too. Code is shipping faster, but reliability can wobble when autogenerated changes move through pipelines. Egil sees feature management as a practical control plane for this new reality. Feature flags provide a real time safety net and, if needed, a kill switch for AI powered functionality. Teams can keep experimenting while protecting users and brand equity. If you want to move beyond release day roulette, this episode offers a practical playbook. We cover privacy first design, open source flexibility, and why metadata from FeatureOps will help leaders study how their organizations truly build. To learn more, visit getunleash.io or search for Unleash in your favorite tool, then tell me how you plan to measure your next rollout's impact. ********* Visit the Sponsor of Tech Talks Network: Land your first job  in tech in 6 months as a Software QA Engineering Bootcamp with Careerist https://crst.co/OGCLA      
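    To picture the deploy-versus-release split in code, here is a minimal, hand-rolled Python sketch of gradual exposure plus a kill switch. It is not the Unleash SDK; the flag names, cohort percentages, and user IDs are assumptions made up for the example.

```python
# Minimal, hand-rolled sketch of the deploy/release split described above.
# Not the Unleash SDK; flag names, cohort sizes, and users are invented.
import hashlib

FLAGS = {
    # code is deployed dark; rollout_percent controls exposure, enabled=False is the kill switch
    "new-checkout": {"enabled": True, "rollout_percent": 5},
    "ai-summaries": {"enabled": False, "rollout_percent": 50},  # surgically disabled capability
}

def is_enabled(flag: str, user_id: str) -> bool:
    """Gradual exposure: hash the user into a stable bucket from 0 to 99."""
    config = FLAGS.get(flag)
    if not config or not config["enabled"]:
        return False
    bucket = int(hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest(), 16) % 100
    return bucket < config["rollout_percent"]

def render_checkout(user_id: str) -> str:
    if is_enabled("new-checkout", user_id):
        return "new checkout flow"       # shipped to production, released to a 5% cohort
    return "current checkout flow"       # everyone else keeps the stable path

print(render_checkout("user-42"))
```

    In a real rollout the flag state would live in a feature management service rather than in code, which is the gap platforms like Unleash are built to fill.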

    From Bots To Agents: Building Trustworthy Autonomy With Hakkōda, an IBM Company

    Sep 13, 2025 · 25:49


    I invited Atalia Horenshtien to unpack a topic many leaders are wrestling with right now. Everyone is talking about AI agents, yet most teams are still living with rule based bots, brittle scripts, and a fair bit of anxiety about handing decisions to software. Atalia has lived through the full arc, from early machine learning and automated pipelines to today's agent frameworks inside large enterprises. She is an AI and data strategist, a former data scientist and software engineer, and has just joined Hakoda, an IBM company, to help global brands move from experiments to outcomes. The timing matters. She starts on the 18th, and this conversation captures how she thinks about responsible progress at exactly the moment she steps into that new role. Here's the thing. Words like autonomy sound glamorous until an agent faces a messy real world task. Atalia draws a clear line between scripted bots and agents with goals, memory, and the ability to learn from feedback. Her advice is refreshingly grounded. Start internal where you can observe behavior. Put human in the loop review where it counts. Use role based access rather than feeding an LLM everything you own. Build an observability layer so you can see what the model did, why it did it, and what it cost. We also get into measurements that matter. Time saved, cycle time reduction, adoption, before and after comparisons, and a sober look at LLM costs against any reduction in FTE hours. She shares how custom cost tracking for agents prevents surprises, and why version one should ship even if it is imperfect. Culture shows up as a recurring theme. Leaders need to talk openly about reskilling, coach managers through change, and invite teams to be co creators. Her story about Hakoda's internal AI Lab is a good example. What began as an engineer's idea for ETL schema matching grew into agent powered tools that won a CIO 100 award and now help deliver faster, better outcomes for clients. There are lighter moments too. Atalia explains how she taught an ex NFL player the basics of time series forecasting using football tactics. Then she takes us behind the scenes with McLaren Racing, where data and strategy collide on the F1 circuit, and admits she has become a committed fan because of that work. If you want a practical playbook for moving from shiny demos to dependable agents, this episode will help you think clearly about scope, safeguards, and speed. Connect with Atalia on LinkedIn, explore Hakoda's work at hakoda.io, and then tell me how you plan to measure your first agent's value. ********* Visit the Sponsor of Tech Talks Network: Land your first job  in tech in 6 months as a Software QA Engineering Bootcamp with Careerist https://crst.co/OGCLA  

    3418: Scaling IoT Security with Real Time Visibility at Wireless Logic

    Sep 12, 2025 · 35:33


    Here's the thing. Connecting thousands of devices is the easy part. Keeping them resilient and secure as you grow is where the real work lives. In this episode, I sit down with Iain Davidson, Senior Product Manager at Wireless Logic, to unpack what happens when connectivity, security, and operations meet in the real world. Wireless Logic connects a new IoT device every 18 seconds, with more than 18 million active subscriptions across 165 countries and partnerships with over 750 mobile networks. That reach brings hard lessons about where projects stall, where breaches begin, and how to build systems that can take a hit without taking your business offline. Iain lays out a simple idea that more teams need to hear. Resilience and security have to scale at the same pace as your device rollouts. He explains why fallback connectivity, private networking, and an IoT-optimised mobile core such as Conexa set the ground rules, but the real differentiator is visibility. If you cannot see what your fleet is doing in near real time, you are guessing. We talk through Wireless Logic's agentless anomaly and threat detection that runs in the mobile core, creating behavioural baselines and flagging malware events, backdoors, and suspicious traffic before small issues become outages. It is an early warning layer for fleets that often live beyond the traditional IT perimeter. We also get honest about risk. Iain shares why one in three breaches now involve an IoT device and why detection can still take months. Ransomware demands grab headlines, but the quiet damage shows up in recovery costs, truck rolls, and trust lost with customers. Then there is compliance. With new rules tightening in Europe and beyond, scaling without protection does not only invite attackers. It can keep you out of the market. Iain's message is clear. Bake security in from day one through defend, detect, react practices, supply chain checks, secure boot and firmware integrity, OTA updates, and the discipline to rehearse incident playbooks so people know what to do when alarms sound. What if you already shipped devices without all of that in place? We cover that too. From migrating SIMs into secure private networks to quarantining suspect endpoints and turning on core-level detection without adding agents, there are practical ways to raise your posture without ripping and replacing hardware. Automation helps, especially at global scale, but people still make the judgment calls. Train your teams, run simulations, and give both humans and digital systems clear rules for when to block, when to escalate, and when to restore from backup. I left this conversation with a simple takeaway. Growth is only real if it is durable. If you are rolling out EV chargers, medical devices, cameras, industrial sensors, or anything that talks to the network, this episode gives you a working playbook for scaling with confidence. Connect with Iain on LinkedIn, explore the IoT security resources at WirelessLogic.com, or reach the team at hello@wirelesslogic.com. ********* Visit the Sponsor of Tech Talks Network: Land your first job  in tech in 6 months as a Software QA Engineering Bootcamp with Careerist https://crst.co/OGCLA
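    For a concrete picture of the behavioural-baseline idea, here is a small Python sketch. It is not Wireless Logic's Conexa implementation; the device IDs, traffic figures, and threshold are invented.

```python
# Sketch of the behavioural-baseline idea described above, not Wireless Logic's
# implementation. Device IDs, traffic figures, and the threshold are invented.
from statistics import mean, stdev

# daily upstream bytes per device over the last two weeks (the baseline window)
baseline = {
    "ev-charger-001": [5_200, 4_900, 5_100, 5_300, 5_000, 4_800, 5_150,
                       5_050, 4_950, 5_250, 5_100, 5_000, 4_900, 5_200],
}

def is_anomalous(device_id: str, todays_bytes: int, z_threshold: float = 4.0) -> bool:
    """Flag traffic that sits far outside the device's own historical behaviour."""
    history = baseline[device_id]
    mu, sigma = mean(history), stdev(history)
    z = (todays_bytes - mu) / sigma if sigma else 0.0
    return abs(z) > z_threshold

# a compromised charger suddenly exfiltrating data stands out against its own baseline
print(is_anomalous("ev-charger-001", 480_000))  # True: investigate or quarantine
print(is_anomalous("ev-charger-001", 5_400))    # False: within normal behaviour
```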

    Can You Trust Your AI if You Can't Trust Your Data? Reltio Weighs In

    Sep 11, 2025 · 27:09


    We talk a lot about AI as if it can fix broken systems. But what happens when the underlying data is too messy, too slow, or too disconnected to support anything useful? That's the problem Manish Sood, founder and CTO of Reltio, has spent the last decade working to solve. Reltio is not your average data company. It sits behind some of the world's most recognisable brands, helping names like L'Oréal, Pfizer, HP, and CarMax modernise how they manage and activate data across the business. What they all share is a recognition that outdated systems and disconnected records don't just slow down insights. They actively block innovation. Manish breaks this down with uncommon clarity. He calls it “data debt”—the invisible burden of stale, incomplete, or fragmented information that quietly kills speed and adds risk. It's not just a technical problem. It's a leadership challenge, especially as businesses adopt generative AI tools that rely on clean, contextual data to function reliably. We explore how real-time intelligence is changing the way companies operate across customer experience, fraud detection, and supply chain resilience. Manish shares examples from enterprise clients who have moved from legacy systems to unified platforms, and how that shift enabled smarter decision-making at scale. From personalised retail offers to proactive healthcare outreach, the stories point to one common truth: if the data isn't trusted, the AI cannot be either. There's also a new role emerging inside many companies—the data steward as AI enabler. These are the people ensuring that data isn't just stored, but shaped. Human-guided, explainable, traceable. That clarity is key to responsible AI, especially in sectors where compliance and reputation are tightly linked. Manish also explains how Reltio's platform helps businesses protect against AI vulnerabilities by enabling resilient data pipelines, consistent governance, and real-time monitoring. In a world where data is created and used simultaneously, batch syncing is not enough. Real-time pipelines give companies the confidence to experiment with AI without falling into chaos. If your business is chasing innovation without cleaning up its data layer first, this conversation is a wake-up call. Manish shows that the future of AI isn't about who builds the best model. It's about who feeds it the best foundation. ********* Visit the Sponsor of Tech Talks Network: Land your first job  in tech in 6 months as a Software QA Engineering Bootcamp with Careerist https://crst.co/OGCLA    

    3417: Inflection AI and the Rise of Contextual Intelligence

    Sep 11, 2025 · 32:10


    Here's the thing. Most enterprise AI pitches talk about scale and speed. Fewer talk about trust, tone, and culture. In this conversation with Inflection AI's Amit Manjhi and Shruti Prakash, I explore a different path for enterprise AI, one that combines emotional intelligence with analytical horsepower, enabling teams to ask more informed questions of their data and receive answers that are grounded in context. Amit's story sets the pace. He is a three-time founder, a YC alum, and a CS PhD who has solved complex problems across mobile, ad tech, and data. Shruti complements that arc with a product lens shaped by real operational trenches, from clean rooms to grocery retail analytics.  Together, they built BoostKPI during the pandemic, transforming natural language into actionable insights, and then joined Inflection AI to help refocus the company on achieving enterprise outcomes. Their shared north star is simple to say yet tricky to execute. Make data analysis conversational, accurate, and emotionally aware so people actually use it. We unpack Inflection's shift from Pi's consumer roots to privacy-first enterprise tools. That history matters because it gives the team a head start on EQ. When you combine a deep well of human-to-AI conversations with modern LLMs, you get systems that explain, probe, and adapt rather than dump charts and call it a day.  Shruti breaks down what dialogue with data looks like in practice. Think back-and-forth exchanges that move from "what happened" to "why it happened," then on to "where else this pattern appears" and "what to do next," all grounded in an organization's language and values. Amit takes us under the hood on deployment choices and ownership. If a customer wants on-prem or VPC, they get it. If they're going to fine-tune models to their vernacular, they can. The model, the insights, and the guardrails remain in the customer's control. I enjoyed the honesty around adoption. Chasing AGI makes headlines, but it rarely helps a merchandising manager spot an early drop in lifetime value or a CX lead understand churn risk before quarter end. The duo keeps the conversation grounded in everyday questions that drive numbers and reduce meetings. They describe a path where EQ and IQ come together to form what Shruti calls contextual intelligence, and where brands can trust AI agents to assist without losing ownership or voice. If you care about making data useful to more people, and you want AI that sounds like your company rather than a generic assistant, this one is for you. We cover startup lessons, the reality of cofounding as a couple during lockdowns, and how Inflection is working with large enterprises to bring conversational analysis to real workloads. It is a grounded look at where enterprise AI is heading, and a timely reminder that technology should elevate humans, not replace them. ********* Visit the Sponsor of Tech Talks Network: Land your first job  in tech in 6 months as a Software QA Engineering Bootcamp with Careerist https://crst.co/OGCLA

    3416: DEUNA: From One-Click Checkout to Intelligent Payments Infrastructure

    Sep 10, 2025 · 35:12


    Here's the thing. Payments only look simple from the outside. In this Tech Talks Daily episode, I sit down with Roberto “Reks” Kafati, CEO and co-founder of DEUNA, to unpack how a scrappy one-click checkout idea grew into an intelligent payments infrastructure that now touches a large slice of Mexico's online economy. Reks explains why Latin America's high decline rates aren't just an inconvenience but a growth killer, and how DEUNA's early focus on orchestration and checkout opened the door to something bigger. When a region routinely sees more than four out of ten online transactions knocked back, the bar for reliability sits in a different place. That practical problem set the stage for what came next.

    Athena, Real-Time Decisions, and 638 Signals per Transaction

    DEUNA's pivot point came when merchants asked a fair question. With all this payment data flying through the system, what should we do with it? The answer is Athena, DEUNA's AI-powered layer that watches every transaction and feeds merchants real-time insight, routing choices, and suggested actions. It is not another dashboard you promise to check and then ignore by Friday. It is a reasoning engine that sits on top of 638 data points per transaction and turns mess into movement. That is how you recover revenue without punishing good customers with extra friction, how you avoid surprise fees from networks, and how you protect recurring revenue when a processor wobbles. Reks walks us through results that speak plainly. Ramped merchants saw conversion lift from the original one-click experience. The infrastructure tier recovers meaningful GMV and trims fees. Enterprise clients report double-digit ROI and stick around for the compounding effect.

    Building Through Adversity and Betting on the Right Layer

    What resonated most was the human story behind the metrics. DEUNA was born in the first months of the pandemic, shaped by the shock that hit real-world businesses when revenue fell off a cliff and marketplaces became a lifeline with strings attached. Reks shares an unvarnished look at a tough 2023, the kind of year founders rarely talk about on record. Revenues dipped, deals went sideways, life got complicated. The team chose resilience and doubled down on a two-year vision. That bet is paying off. Over the past twenty-four months the company has grown at a pace that would bend a chart, and the focus has shifted from commoditizing orchestration to productizing intelligence. Put simply, earn trust at checkout, then make the data work for the merchant in real time.

    Agentic Commerce, US Expansion, and What Comes Next

    We also look forward. If chat interfaces begin to mediate more buying decisions, merchants will need infrastructure that can think, not just connect endpoints. That is the territory DEUNA calls intelligent infrastructure, and it is where Athena operates every day. The company is now in active conversations with major US retailers, confident after winning head-to-head enterprise evaluations. Reks frames the opportunity without hype. If you can see acceptance trends by processor, by country, by card type, and act in the moment, you keep customers, protect margins, and avoid death by a thousand false declines. If you cannot, competitors will gladly welcome your frustrated shoppers. If you care about the real mechanics of growth, this conversation is for you. We talk conversion lift, recovered revenue, and the gritty bits of building a payments company that merchants actually rely on.
We also talk about the days that test your resolve and the tenth day that reminds you why you started.

    3415: Secure GenAI for SAP: Syntax Systems CodeGenie on BTP

    Sep 9, 2025 · 25:31


    I sat down with Leo de Araujo, Head of Global Business Innovation at Syntax Systems, to unpack a problem every SAP team knows too well. Years of enhancements and quick fixes leave you with custom code that nobody wants to document, a maze of SharePoint folders, and hard questions whenever S/4HANA comes up. What does this program do. What breaks if we change that field. Do we have three versions of the same thing. Leo's answer is Syntax AI CodeGenie, an agentic AI solution with a built-in chatbot that finally treats documentation and code understanding as a living part of the system, not an afterthought. Here's the thing. CodeGenie automates the creation and upkeep of custom code documentation, then lets you ask plain-language questions about function and business value. Instead of hunting through 40-page PDFs, teams can ask, “Do we already upload sales orders from Excel,” or “What depends on this BAdI,” and get an instant explanation. That changes migration planning. You can see what to keep, what to retire, and where standard capabilities or new extensions make more sense, which shortens the path to S/4HANA Cloud and helps you stay on a clean core. We also talk about how this is delivered. CodeGenie runs on SAP Business Technology Platform, connects through standard APIs, and avoids intrusive add-ons. It is compatible with SAP S/4HANA, S/4HANA Cloud Private Edition through RISE with SAP, and on-premises ECC. Security comes first, with tenant isolation for each customer and no custom code shared externally or used for AI model training. The result is a setup that respects enterprise guardrails while still giving developers and architects fast answers. Clean core gets a plain explanation in this episode. Build outside the application with published APIs, keep upgrades predictable, and innovate at the edge where you can move quickly. CodeGenie gives you the visibility to make that real, surfacing what you actually run today and how it ties to outcomes, so you can design a migration roadmap that fits the business rather than guessing from stale documents. Leo also previews the Gen AI Starter Pack, launching September 9. It bundles a managed, model-flexible platform with workshops, use-case ideation, and initial builds, so teams can move from curiosity to working solutions without locking themselves into a single provider. Paired with CodeGenie and Syntax's development accelerators, the Starter Pack points toward something SAP leaders have wanted for years, a practical way to shift from in-core customizations to clean-core extensions with much less friction. If you are planning S/4HANA, balancing hybrid and multi-cloud realities, or simply tired of tribal knowledge around critical programs, this conversation is for you. We get specific about how CodeGenie works, where it saves time and cost, and how Syntax is shaping a playbook for AI that helps teams deliver results they can trust. ********* Visit the Sponsor of Tech Talks Network: Land your first job  in tech in 6 months as a Software QA Engineering Bootcamp with Careerist https://crst.co/OGCLA

    3414: Self-Healing Machines and Robotics with Grace Technologies

    Sep 8, 2025 · 27:14


    Drew Allen, CEO of Grace Technologies, shares real stories from the floor, the ideas shaping safer plants, and why culture matters more than slogans. Drew's background stretches from a family line linked to Samuel Morse to teenage years in China to global business development at 3M. That range shows up in how he leads. He listens, he moves fast, and he expects teams to work on things that matter. In his world that means saving electricians from shocks and arc flash while helping manufacturers modernize without losing their soul. Grace started with mechanical and analog products, then took the hard road into fully digital systems. The shift took time and patience. Today their platform brings sensors, AI, and cloud tooling into maintenance and safety. The example that stuck with me is a proximity band for electricians. It lights, beeps, and vibrates as a worker approaches live voltage. At TriCity, that band prevented three near misses in a three month pilot. A fourth incident still ended in a hospital visit and a costly outage because the worker left the band in his car. Another apprentice nearly placed a hand on a live bus bar until the band told him something was wrong. These moments remind you that technology can change a day and a life. Drew's take on culture is refreshingly direct. Values are not a poster. They are a filter for who you hire. He looks for customer obsession, ownership, curiosity, and candid communication. Then he pairs that with high expectations and real care. Autonomy comes with accountability. Impact matters. If someone does not want to work on meaningful problems, this is not their place. It sounds firm. It also explains why the company keeps earning top workplace recognition while raising the bar on performance. We also talked about Maple Studios, the startup incubator Drew launched in Davenport, Iowa. He sees gaps in the industrial ecosystem. Fewer big exits. Slow adoption cycles. Founders stuck inside large companies. Maple gives them tools, space, and hard feedback so they can iterate faster and build things factories will actually deploy. His advice is simple. Ship, learn, and repeat. Do customer reviews early. Expect a thousand small gotchas. Move through them rather than pretending they will not appear. Looking ahead, Drew expects robotics to accelerate for a very practical reason. Companies cannot find enough people. Dangerous work will be automated. He imagines maintenance tasks shifting toward humanoid robots, with machines designed so robotic agents can service them. He also references GM's self healing language to point at a coming blend of sensing, prediction, and automated repair.  On AI, he shares Satya Nadella's challenge. Measure productivity and GDP impact rather than hype. The promise is there. The scoreboard will tell the story. If you work in industrial tech, this conversation lands close to home. You will hear how to bring digital tools into legacy environments, how to design for safety from the start, and how to keep teams motivated without losing kindness. You will also catch an open invitation. Drew wants to partner with builders who care about this space. If that is you, reach out to him on LinkedIn or visit graceport.com. And if you are curious about the band that vibrates before a bad day begins, this episode is a good place to start. ********* Visit the Sponsor of Tech Talks Network: Land your first job  in tech in 6 months as a Software QA Engineering Bootcamp with Careerist https://crst.co/OGCLA

    3413: Why Medium-Range Forecasts Could Save Millions: Lessons from Planette AI

    Sep 7, 2025 · 25:08


    I spoke with Kalai Ramea at a timely moment. We recorded this conversation during a heatwave in the UK, which made her work at Planette AI feel very real. Kalai calls herself an all-purpose scientist, with a path that runs through California climate policy, Xerox PARC, and now a startup focused on the forecast window that most people ignore. Not tomorrow's weather. Not far-off climate scenarios. The space in between. Two weeks to two months out, where decisions get made and money is on the line. Kalai explains Planette AI's idea of scientific AI in plain words. Instead of learning from yesterday's weather patterns and hoping the future looks the same, their models learn physics from earth system simulations. Ocean meets atmosphere, energy moves, and the model learns those relationships directly. That matters in a warming world where history is a shaky guide. It also shortens time to insight. Traditional models can take weeks to run. If the output arrives after the risky period has passed, it is trivia. Planette AI is building for speed and usefulness.

    The value shows up in places you can picture. Event planners deciding whether to green-light a festival. Airlines shaping schedules and staffing. Farmers choosing when to plant and irrigate. Insurers pricing risk without leaning only on the past. Kalai shared a telling backcast of Bonnaroo in Tennessee, where flooding forced a last-minute cancellation. Their system showed heavy-rain signals weeks ahead. That kind of lead time changes outcomes, budgets, and stress levels.

    From Jargon To Decisions

    What I appreciate most about this story is the focus on access. Too many forecasts live in papers that only specialists read. Kalai and team are working to strip away jargon and deliver answers people can act on. Will it rain enough to trigger a payout. Will a heat threshold be crossed. Will the next month bring the kind of wind that matters for grid operations. The delivery matters as much as the math. NetCDF files might work for researchers, but a map, a simple number, or a chat interface is what users reach for when time is short. There is also a financial thread running through this work. Climate risk now shapes crop insurance, carbon programs, and balance sheets. Parametric insurance is growing because it is simple. Set a threshold. If it hits, the policy pays. Better medium-range signals make those products fairer and more useful. Kalai describes Planette AI's role as a baseline layer others can build on, a kind of AWS for climate intelligence. That framing fits. No single company will build every app in this space. A reliable core makes the rest possible.

    Kalai's path ties it all together. Policy taught her how decisions get made. PARC sharpened her instincts for practical AI. Planette AI is the result. If you care about planning beyond next week, this episode will give you a new way to think about forecasts and the tools that power them. I will add the blog link Kalai shared in the show notes. In the meantime, if you are in agriculture, travel, energy, or insurance, ask yourself a simple question. What would you change if you had a trustworthy signal three to eight weeks ahead. ********* Visit the Sponsor of Tech Talks Network: Land your first job in tech in 6 months as a Software QA Engineering Bootcamp with Careerist https://crst.co/OGCLA
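    The parametric mechanism Kalai describes fits in a few lines of Python. This is a generic sketch, not Planette AI's product; the metric, threshold, and payout values are hypothetical.

```python
# The parametric logic described above, reduced to a few lines.
# Thresholds, payout amounts, and the forecast value are hypothetical.
from dataclasses import dataclass

@dataclass
class ParametricPolicy:
    metric: str        # e.g. cumulative rainfall over the covered window
    threshold: float   # trigger level agreed up front
    payout: float      # fixed amount paid if the trigger is hit

    def settles(self, observed: float) -> float:
        """No loss adjusters, no negotiation: hit the trigger, get the payout."""
        return self.payout if observed >= self.threshold else 0.0

policy = ParametricPolicy(metric="rainfall_mm_30d", threshold=250.0, payout=50_000.0)
print(policy.settles(observed=310.0))  # 50000.0 - threshold crossed, policy pays
print(policy.settles(observed=180.0))  # 0.0 - below the trigger, no payout
```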

    3412: PuppyGraph at the IT Press Tour: Graph Power Without the Pain

    Sep 6, 2025 · 21:59


    During the IT Press Tour, I had the pleasure of speaking with Weimo Liu, CEO and co-founder of PuppyGraph, and hearing firsthand how his team is rethinking graph technology for the enterprise. In this episode of Tech Talks Daily, Weimo joins me to share the story behind PuppyGraph's “zero ETL” approach, which lets organizations query their existing data as a graph without ever moving or duplicating it. We discuss why graph databases, despite their promise, have struggled with mainstream adoption, often because of complex pipelines and heavy infrastructure requirements. Weimo explains how PuppyGraph borrows from his time at TigerGraph and Google's F1 engine to build something new: a distributed query engine that maps tables into a logical graph and delivers subsecond performance on massive datasets. That shift opens the door for use cases in cybersecurity, fraud detection, and AI-driven applications where latency and accuracy matter most. We also unpack the developer experience. Instead of rewriting schemas or reloading data every time requirements change, PuppyGraph allows teams to define nodes and edges directly from existing tables. That design lowers the barrier for SQL-focused teams and accelerates time to value. Weimo even touches on the role of graph in reducing AI hallucinations, showing how structured relationships can make enterprise AI systems more reliable. What struck me most in our conversation is how PuppyGraph's playful branding belies its serious engineering depth. Behind the “puppy” name lies a distributed engine built to scale with today's data volumes, backed by strong early adoption and a team that listens closely to customer needs. Whether you're exploring graph for cybersecurity, AI chatbots, or supply chain analytics, this discussion offers a glimpse of how the next generation of graph tech might finally break free from its niche and go mainstream. ********* Visit the Sponsor of Tech Talks Network: Land your first job  in tech in 6 months as a Software QA Engineering Bootcamp with Careerist https://crst.co/OGCLA
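    For readers who want to picture the tables-as-a-graph idea, here is a small, generic Python sketch. It is not PuppyGraph's engine or schema syntax, and the account and transfer rows are invented; the point is only how rows that already exist in a warehouse can be read as nodes and edges and traversed, rather than reshaped and reloaded.

```python
# Illustrative only: the general pattern of treating existing tables as a graph,
# not PuppyGraph's engine or schema syntax. Table contents are invented.
from collections import defaultdict, deque

# rows as they might already sit in a warehouse - no reshaping, no new data store
accounts  = [{"id": "a1"}, {"id": "a2"}, {"id": "a3"}, {"id": "a4"}]
transfers = [  # each row doubles as an edge: src -> dst
    {"src": "a1", "dst": "a2", "amount": 900},
    {"src": "a2", "dst": "a3", "amount": 850},
    {"src": "a3", "dst": "a4", "amount": 800},
]

# "schema mapping": account rows become nodes, transfer rows become edges
edges = defaultdict(list)
for row in transfers:
    edges[row["src"]].append(row["dst"])

def reachable_within(start: str, hops: int) -> set[str]:
    """Multi-hop traversal - the kind of question that gets painful as SQL joins."""
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == hops:
            continue
        for nxt in edges[node]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, depth + 1))
    return seen - {start}

# where could funds from a flagged account end up within three hops?
print(sorted(reachable_within("a1", hops=3)))  # ['a2', 'a3', 'a4']
```

    The hard part, and the part the episode actually covers, is doing this kind of traversal at subsecond latency over warehouse-scale data without copying it; the sketch only shows the mapping concept.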

    3411: Why The Browser Is The New Security Perimeter

    Sep 6, 2025 · 27:16


    When I invited Or Eshed, CEO and co-founder of LayerX Security, onto Tech Talks Daily, I wanted to challenge a blind spot most teams carry into work each day. We talk about phishing, ransomware, and endpoint controls, yet we skip the place where employees actually live online. The browser. That quiet tab bar has become the front door to identities, payments, SaaS, and now AI. Or calls it a different operating system in its own right, and once you hear his examples of how extensions can intercept cookies, mimic logins, or even meddle with AI chats, the penny drops fast.

    Here's the thing. Blocking extensions across the board no longer fits how people work. Developers, marketers, sales teams, and support agents all lean on extensions for real productivity gains. Or's argument is simple. If the business depends on extensions, security has to meet people where they are with continuous, risk-based controls inside the browser itself. That means assessing code, permissions, ownership changes, and live behaviors, not relying on a static allow list that grows and grows while attackers slip through the cracks.

    We also unpack Extensionpedia, LayerX's free resource that lets anyone look up the risk profile of a specific extension. It is part education, part early warning system, and it serves a wider mission to raise the floor for everyone. Or shares how a technology alliance with Google has helped the team analyze extensions at serious scale, and why better data beats clever slogans in a space where signals change hour by hour.

    Malicious Extensions, AI Shortcuts, And The Culture Shift Security Needs

    One of the standout moments is a real-world story that starts at home and ends inside a corporate network. A spouse installs a screen-recording extension on a personal device, the browser profile syncs at work, and suddenly corporate credentials and sensitive sessions are mirrored to an untrusted machine. No shadowy APT needed. Just everyday sync doing exactly what it was designed to do. It is messy, human, and exactly why policy needs to be paired with continuous visibility in the browser.

    We explore the gray zone where productivity tools collide with privacy. Password managers, VPN helpers, and AI-everywhere extensions promise convenience, yet they can scrape data across SaaS apps or sync credentials in ways security leaders never intended. Or's advice is refreshingly pragmatic. Assume extensions are staying. Instrument the browser, score risk in real time, and adapt access based on what an extension actually does, not what it claims on a store page.

    Looking ahead, Or sees the browser taking an even bigger role as email, SaaS, and AI agents converge in one place. With AI companies building their own browsers, the last mile of user interaction gets denser, faster, and more valuable to protect. If 99 percent of enterprise users already run at least one extension, the task is clear. Know which ones are in play, understand how they behave, and keep policy dynamic. If this conversation sparks a rethink of your own approach, check your extensions in Extensionpedia, and then consider what modern, in-browser controls would look like in your environment. After this episode, you may never look at that tidy row of icons the same way again. ********* Visit the Sponsor of Tech Talks Network: Land your first job in tech in 6 months as a Software QA Engineering Bootcamp with Careerist https://crst.co/OGCLA

    3410: Smartly CEO Laura Desmond on how AI is Rewriting the Rules of AdTech

    Sep 5, 2025 · 37:24


    I invited Laura Desmond, CEO of Smartly, to make sense of what feels like the biggest shake-up in marketing since the mobile era. She has led through every cycle I can remember, from the early internet to the rise of social, and she sees AI changing the rules faster than any previous wave. Across our conversation we unpack how AI is rewriting creative work, buying, and measurement, while forcing brands to rebuild trust with clear rules on data, models, and creator rights. Here's the thing. Attention is shorter, and the thumb moves fast. Most people give an ad about two seconds, and video is taking over the feed. Laura expects video to account for three quarters of digital ads by 2026, which tracks with what I am seeing across every platform. Smartly is betting on that shift with tools that turn Shorts or TikToks into personalized CTV spots, and bring CTV signal back into social. The goal is simple to say and hard to pull off. Show every person something that feels made for them, then learn from the response and improve the next piece of creative in near real time. We also talk about why the ground is moving under search. A growing number of people, especially younger users, skip the front page of Google and ask an AI assistant instead. That changes how discovery works, how queries appear, and where ad products live. Laura thinks we are heading toward campaigns that cut across search, social, retail media, and CTV as one flowing video-first effort, with creative and media stitched together by software rather than teams tossing files over the wall. Results matter, and Laura shared two proof points I kept coming back to. Smartly's platform has been validated by PwC for a 13 percent ROI lift across clients. The same study confirmed time savings that add up to 42 minutes a day for hands-on users. That reclaimed time funds the work that actually moves the needle, like faster A/B tests, sharper creative decisions, and better budget moves across channels. We also dig into conversational ads. In a recent test with Boots, Smartly's format delivered roughly four times the return on investment versus business as usual, which speaks to how fast query-style interactions are shaping expectations. Trust sits in the middle of all this. Laura is clear that responsible AI is table stakes. Brands need controls to tune or override generated assets, clarity on data sources and model choice, and a stance on creator rights before any content goes live. Her view of AI is creative first. Automate the tedious parts. Keep people in charge of taste, tone, and brand. Use the feedback loop to learn faster, not to replace the team. We close on where this all leads. Expect brand experiences that blur physical and digital without losing the human spark. Stadiums full, stores buzzing, and at the same time richer virtual touchpoints, snackable video, and one-to-one conversations that feel helpful rather than creepy. If this is your world, Laura is hosting Smartly's ADVANCE on September 17 in Brooklyn, and it looks set to be a real working session for marketers who want results, not theater. You can find details here: https://bit.ly/4fRgWEE. Tune in if you want a candid, practical map for where creative, media, and AI are heading next, and how to measure what matters while keeping your brand worthy of trust. ********* Visit the Sponsor of Tech Talks Network: Land your first job  in tech in 6 months as a Software QA Engineering Bootcamp with Careerist https://crst.co/OGCLA

    How Fugro is Using Tech to Chart the Ocean Floor

    Sep 4, 2025 · 29:35


    What does it take to map the oceans when most of the world's seabed remains unseen and unmapped? That's the question I explored with Mike Liddell from Fugro, a company using technology to reveal what lies beneath the waves.  In our conversation, Mike explained why surveying the ocean is like “working in heavy fog on a roller coaster” and how traditional tools like light and radio signals are useless underwater. Instead, sonar, robotics, and increasingly AI are stepping in to make sense of this hidden world. Mike described the huge scale of the challenge, from mapping areas larger than major cities to supporting offshore wind farms that power our clean energy transition. With labour shortages and younger generations less willing to spend months at sea, Fugro is shifting to remote operations centres and uncrewed surface vessels. These new approaches not only widen the talent pool but also cut fuel use dramatically—by as much as 95 percent compared to older ships.  What really struck me was the pace of change. A few years ago, offshore vessels struggled with internet speeds reminiscent of dial-up modems.  Today, satellite systems like Starlink make real-time collaboration between sea and shore possible. Add in AI that can process data at the edge and make instant decisions about where and how to collect information, and you begin to see how marine surveying is entering a new era. This episode is a glimpse into that frontier and into how technology is reshaping the way we understand and care for our blue planet.   ********* Visit the Sponsor of Tech Talks Network: Land your first job  in tech in 6 months as a Software QA Engineering Bootcamp with Careerist https://crst.co/OGCLA      

    3408: Lightricks, Open Source Video, and the Race for Faster Creativity

    Sep 3, 2025 · 28:30


    Here's the thing. Generative AI for visuals has shifted from a party trick to everyday craftwork, and few people sit closer to that shift than Ofir Bibi, VP of Research at Lightricks.  In this conversation, I wanted to understand how a company famous for Facetune, Photoleap, and Videoleap is building for a future where creators expect speed, control, and choice without a headache. What I found was a story about building core technology that serves real creative workflows, not the other way around. Ofir traces Lightricks' journey from clever on-device tricks that made small screens feel powerful to today's foundation models running in the cloud. The constant thread is usability. Making complex editing feel simple requires smart decisions in the background, and that mindset has shaped everything from their early mobile apps to LTX Studio, the company's multi-model creative platform. Across the last three years, generative features moved from novelty to necessity, and that reality forced a bigger question: when do you stop stitching together other people's models and start crafting your own? That question led to LTXV, an open-source video generation model designed for speed, efficiency, and control. Ofir explains why Lightricks built it from scratch and why they shared the weights and trainer with the community. The result is a fast feedback loop where researchers, developers, and even competitors try ideas on a model that runs on consumer-grade hardware and can generate clips faster than they can be watched. The new LTXV 2B Distilled build continues that push toward quicker iteration and creator-friendly control, including arbitrary frame conditioning that suits animation and keyframe-driven workflows. We also talk about the changing data diet for training. Quantity is out. Quality and preparation matter. Licensed, high-aesthetic datasets and tighter curation produce models that understand prompts, motion, and physics with fewer weird edges. That discipline shows up in the product too. LTX Studio blends Lightricks tech with options from partners like Google's Veo and Black Forest Labs' Flux, then steers users toward the right model for the job through thoughtful UI. If you want the sharpest single shot, you can choose it. If you want fast, iterative tweaks for storytelling, LTXV is front and center. Looking ahead, Ofir sees a near future where models become broader and more multimodal, while creators and enterprises ask for local and on-prem options that keep data closer to home. That makes efficiency a feature, not a footnote. If you care about the craft of making, not just the spectacle, this episode offers a grounded view of how AI can actually serve creators. It left me convinced that speed and control are the real differentiators, and that open source can be a very practical way to get both. ********* Visit the Sponsor of Tech Talks Network: Land your first job  in tech in 6 months as a Software QA Engineering Bootcamp with Careerist https://crst.co/OGCLA

    How Altair's AI Fabric Helps Businesses Move Beyond Pilot Projects

    Sep 2, 2025 · 25:51


    Artificial intelligence is no longer confined to experiments in labs or one-off pilot projects. For many enterprises, it is becoming the backbone of how they operate, innovate, and compete. But as companies race to deploy AI, the biggest challenge is not whether the technology works, but whether the foundations exist to scale it safely and effectively. In this episode of Tech Talks Daily, I'm joined by Christian Buckner, Senior Vice President of Data and AI Platform at Altair, a company known for combining rocket science with data science. Christian unpacks the concept of an AI Fabric, a framework that harmonizes enterprise data and embeds AI directly into a universal model. Rather than scattered tools and isolated projects, the AI Fabric acts as a living system of intelligence, helping organizations move faster, make better decisions, and unlock new kinds of automation. We talk about how global enterprises from automotive suppliers to petrochemical giants are already using Altair's technology to improve safety, optimize production, and cut costs. Christian shares examples including a transportation company that boosted revenue by $50 million in its first year of AI-driven dynamic pricing and a healthcare provider that saved $17 million in analysis time using knowledge graphs for drug discovery. The conversation also explores the hype and the risks around AI agents. While it is easy to spin up a proof of concept with a Python library, Christian explains why real enterprise impact requires governance, monitoring, and infrastructure to make agents trustworthy and sustainable. He likens it to building HR systems for AI, where agents need onboarding, oversight, and performance evaluation to operate alongside humans. We also touch on Altair's acquisition by Siemens and what this means for the future of industrial AI. By integrating Altair's data and AI expertise with Siemens' deep industrial systems, enterprises can add intelligence without ripping out existing infrastructure. The result is not about replacing workers but enabling them to become what Christian calls “10x employees,” augmented by AI tools and agents that multiply their effectiveness. For anyone curious about how AI will change product design, operations, and enterprise decision-making, this episode offers a rare inside look at the technology foundations being built today. You can learn more at altair.com/ai-fabric. ********* Visit the Sponsor of Tech Talks Network: Land your first job  in tech in 6 months as a Software QA Engineering Bootcamp with Careerist https://crst.co/OGCLA

    Arthur AI and the Future of Digital Co-Workers

    Sep 1, 2025 · 27:33


    What if meetings stopped draining your time and instead became engines for action? That's the question driving Christoph Fleischmann, CEO of Arthur AI, and the conversation in today's episode of Tech Talks Daily. Christoph has spent his career at the intersection of human potential and technology, and now he's leading a company that wants to change how enterprises actually get work done. Arthur AI isn't another tool to add to the stack. It's a digital co-worker—an intelligent presence that joins meetings, captures knowledge, and keeps teams aligned across time zones and formats. Whether in XR spaces, on the web, or through conversational interfaces, Arthur AI blends real-time and asynchronous collaboration. The aim is to replace endless, inefficient meetings with something more dynamic: an environment where humans and AI collaborate side by side to deliver outcomes. This conversation goes beyond theory. Christoph shares how Fortune 500 companies are already using Arthur AI to align global strategies, manage complex transformations, and modernize learning and development programs. He explains how their platform is built on enterprise-grade security and a flexible, LLM-agnostic architecture—critical foundations for companies wary of vendor lock-in or compliance risks. We also touch on the cultural shift of inviting AI to take a real seat at the table. From interviewing and project management to knowledge sharing, Arthur AI represents a new category of work experience, one where digital co-workers support people rather than replace them. For leaders tired of meetings that go nowhere and knowledge trapped in silos, this episode offers a glimpse of what smarter, faster collaboration looks like at scale. Could the blueprint for the future of digital work already be here? ********* Visit the Sponsor of Tech Talks Network: Land your first job  in tech in 6 months as a Software QA Engineering Bootcamp with Careerist https://crst.co/OGCLA

    From Pinterest and Airbnb to Kuma.ai: Reinventing Enterprise AI

    Play Episode Listen Later Aug 31, 2025 26:45


    Here's the thing. Most enterprise AI talk today starts with chatbots and ends with glossy demos. Meanwhile, the data that actually runs a business lives in rows, columns, and time stamps. That gap is where my conversation with Vanja Josifovski, CEO of Kuma.ai, really comes alive. Vanja has spent two and a half decades helping companies turn data into decisions, from research roles at Yahoo and Google to steering product and engineering at Pinterest through its IPO and later leading Airbnb Homes. He's now building Kuma.ai to answer an old question with a new approach: how do you get accurate, production-grade predictions from relational data without spending months crafting a bespoke model for each use case? Vanja explains why structured business data has been underserved for years. Images and text behave nicely compared to the messy reality of multiple tables, mixed data types, and event histories. Traditional teams anticipate a prediction need, then kick off a long feature engineering and modeling process. Kuma's Relational Foundation Model, or RFM, flips that script. Pre-trained on a large mix of public and synthetic data warehouses, it delivers task-agnostic, zero-shot predictions for problems like churn and fraud. That means you can ask the model questions directly of your data and get useful answers fast, then fine-tune for another 15 to 20 percent uplift when you're ready to squeeze more from your full dataset. What stood out for me is how Kuma removes the grind of manual feature creation. Vanja draws a clear parallel to computer vision's shift years ago, when teams stopped handcrafting edge detectors and started learning from raw pixels. By learning directly from raw tables, Kuma taps the entirety of the data rather than a bundle of human-crafted summaries. The payoff shows up in the numbers customers care about, with double-digit improvements against mature, well-defended baselines and the kind of time savings that change roadmaps. One customer built sixty models in two weeks, a job that would typically span a year or more. We also explore how this fits with the LLM moment. Vanja doesn't position RFM as a replacement for language models. He frames it as a complement that fills an accuracy gap on tabular data where LLMs often drift. Think of RFM as part of an agentic toolbox: when an agent needs a reliable prediction from enterprise data, it can call Kuma instead of generating code, training a fresh model, or bluffing an answer. That design extends to the realities of production as well. Kuma's fine-tuning and serving stack is built for high-QPS environments, the kind you see in recommendations and ad tech, where cost and latency matter. The training story is another thread you'll hear in this episode. The team began with public datasets, then leaned into synthetic data to cover scenarios that are hard to source in the wild. Synthetic generation gives them better control over distribution shifts and edge cases, which speeds iteration and makes the foundation model more broadly capable upon arrival. If you care about measurable outcomes, this episode shows why CFOs pay attention when RFM lands. Vanja shares examples where a 20 to 30 percent lift translates into hundreds of thousands of additional monthly active users and direct revenue impact. That kind of improvement isn't theory. It's the difference between a model that nudges a metric and a model that moves it. 
By the end, you'll have a clear picture of what Kuma.ai is building, why relational data warrants its own foundation model, and how enterprises can move from wishful thinking to practical wins. Curious to try it yourself? Vanja also points to a sandbox where teams can load data and ask predictive questions within a notebook, then compare results against in-house models. If your AI plans keep stalling on tabular reality, this conversation offers a way forward that's fast, accurate, and designed for the systems you already run.
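
    To make the contrast concrete, here is a minimal sketch, using pandas and scikit-learn, of the manual feature-engineering grind Vanja says teams go through today: hand-crafting per-customer aggregates from an event table before training a churn model. The tables, columns, and model choice are invented for illustration, and nothing here reflects Kuma's actual API; the point is that the RFM approach described above is claimed to learn from the raw tables directly and skip this step.

```python
# A minimal sketch of the manual feature-engineering workflow discussed above:
# hand-crafted per-customer aggregates from an event table, then a churn model.
# All table and column names are hypothetical.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

# Toy event history: one row per purchase.
events = pd.DataFrame({
    "customer_id": [1, 1, 2, 2, 2, 3],
    "amount":      [20.0, 35.0, 5.0, 7.5, 6.0, 120.0],
    "days_ago":    [3, 40, 2, 10, 15, 200],
})
labels = pd.DataFrame({"customer_id": [1, 2, 3], "churned": [0, 0, 1]})

# The "grind": every feature below is a human design decision.
features = events.groupby("customer_id").agg(
    n_orders=("amount", "size"),
    total_spend=("amount", "sum"),
    avg_spend=("amount", "mean"),
    days_since_last=("days_ago", "min"),
).reset_index()

train = features.merge(labels, on="customer_id")
X, y = train.drop(columns=["customer_id", "churned"]), train["churned"]

model = GradientBoostingClassifier().fit(X, y)
print(model.predict_proba(X)[:, 1])  # churn probabilities for the toy data
```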

    VMware Explore 2025: Broadcom Showcases the Next Chapter of VCF Innovation

    Play Episode Listen Later Aug 30, 2025 24:10


    When VMware Cloud Foundation 9.0 launched in June, it marked more than just another release. It was the clearest signal yet that Broadcom is betting big on the modern private cloud. In this episode of Tech Talks Daily, I sat down with Prashanth Shenoy, who leads marketing and learning for the VCF division at Broadcom, to discuss what the launch means for enterprises and how those themes are playing out live at VMware Explore in Las Vegas. Prashanth shares how VCF 9.0 was designed to help enterprises operate private clouds with the same simplicity and scale as public hyperscalers, while keeping sovereignty, security, and cost predictability front and center. He explains why this release is more than an infrastructure update. It's a shift toward a workload-agnostic, developer-centric platform where virtual machines, containers, and AI workloads can run side by side with a consistent operational experience. We also unpack Broadcom's headline announcements at the show. From making VCF an AI-native platform to embedding private AI services directly into the foundation, the message is clear: the AI pilots of the past are moving into production, and Broadcom wants VCF to be the default home for enterprise AI. Another major theme is cyber compliance at scale, with VCF now offering continuous enforcement, rapid ransomware recovery, and advanced security services that address today's board-level concerns. But perhaps the biggest takeaway is the momentum. Nine of the top ten Fortune companies are now running on VCF, more than 100 million cores have been licensed, and dozens of enterprises—from global giants to mid-sized insurers—are on stage at VMware Explore sharing their adoption stories. The so-called “cloud reset” that Prashanth has written about is not just theory. Companies are rethinking their cloud strategies, seeking cost transparency, avoiding waste, and building resilient, AI-ready private clouds. This conversation highlights how Broadcom is doubling down on VCF with a singular focus, a massive R&D commitment, and a clear vision of where private cloud is headed. If you want to understand why private AI, developer services, and cyber resilience are now central to enterprise strategy, this is a conversation worth hearing.

    Private AI Takes Center Stage at VMware Explore with Broadcom's Tasha Drew

    Play Episode Listen Later Aug 30, 2025 19:22


    At VMware Explore in Las Vegas, the buzz wasn't just about generative AI, but about where and how it should run. My guest is Tasha Drew, Director of Engineering for the AI team in the VMware Cloud Foundation division at Broadcom, who has been at the center of this conversation. Fresh off the main stage, where she helped debut VMware's new Private AI Services and Intelligent Assist for VMware Cloud Foundation, Tasha joins me to unpack what these announcements mean for enterprises grappling with privacy, cost, and integration challenges. Tasha explains why private AI is resonating so strongly in 2025, outlining the three pillars that define it: protecting sensitive intellectual property, managing regulated or high-value data, and ensuring role-based control of fine-tuned models. She shares how organizations often start their AI journey in the public cloud, but as experimentation turns to production, cost pressures, data compliance, and proximity to data drive them toward private AI. We also dive into VMware's own evolution toward building an AI-native private cloud platform. Tasha highlights the journey from deep learning VMs and Jupyter notebooks to full AI platform services that empower IT teams to deliver models efficiently, save money, and accelerate deployment of retrieval-augmented generation (RAG) applications. She introduces Intelligent Assist for VMware Cloud Foundation, an AI-powered guide that helps teams navigate complex deployments with context-aware support and step-by-step instructions. Beyond the technology, Tasha reflects on the broader ecosystem shifts, from partnerships with NVIDIA and AMD to the role of Model Context Protocol (MCP) in breaking down integration barriers between enterprise systems. She believes MCP represents a turning point, enabling seamless workflows between platforms that historically lacked incentive to work together. This conversation captures a pivotal moment where private AI is moving from theory into enterprise adoption. For leaders weighing their next move, Tasha provides both the strategic framing and the technical insight to understand why private AI has become one of the most talked-about forces shaping enterprise IT today.

    Inside Audi's Smart Factory Vision at VMware Explore

    Play Episode Listen Later Aug 29, 2025 28:30


    Factories don't usually make headlines at tech conferences, but what Audi is doing inside its production labs is anything but ordinary. At VMware Explore in Las Vegas, I sat down with Dr. Henning Löser, Head of the Audi Production Lab, to talk about how the automaker is reinventing its factory floor with a software-first mindset. Henning leads a small team he jokingly calls “the nerds of production,” but their work is changing how cars are built. Instead of replacing entire lines for every new piece of technology, Audi has found a way to bring the speed and flexibility of IT into the world of industrial automation. The result is Edge Cloud 4 Production, a system that takes virtualization technology normally reserved for data centers and applies it directly to manufacturing. In our conversation, Henning explained why virtual PLCs may be one of the biggest breakthroughs yet. They look invisible to workers on the line but give maintenance teams new transparency and resilience. We explored how replacing thousands of industrial PCs with centralized, virtualized workloads not only reduces downtime but also cuts energy use and simplifies updates. And yes, we even discussed the day a beaver chewed through one of Audi's fiber optic cables and how redundancy kept production running without a hitch. This episode is about more than smart factories. It's about how an industry known for heavy machinery is learning to think like the cloud. From scalability and sustainability to predictive maintenance and AI-ready infrastructure, Audi is showing how the car of the future starts with the factory of the future. If you've ever wondered how emerging technologies like virtualization and private cloud are reshaping the shop floor, this is a story you'll want to hear.

    Fear vs FOMO: Kantar's View on AI Adoption in Marketing

    Play Episode Listen Later Aug 28, 2025 27:32


    In this episode of Tech Talks Daily, I speak with Jane Ostler from Kantar, the world's leading marketing data and analytics company, whose clients include Google, Diageo, AB InBev, Unilever, and Kraft Heinz. Jane brings clarity to a debate often clouded by headlines, explaining why AI should be seen as a creative sparring partner, not a rival. She outlines how Kantar is helping brands balance efficiency with inspiration, and why the best marketing in the years ahead will come from humans and machines working together. We explore Kantar's research into how marketers really feel about AI adoption, uncovering why so many projects stall in pilot phase, and what steps can help teams move from experimentation to execution. Jane also discusses the importance of data quality as the foundation of effective AI, drawing comparisons to the early days of GDPR when oversight and governance first became front of mind. From Coca-Cola's AI-assisted Christmas ads to predictive analytics that help brands allocate budgets with greater confidence, Jane shares examples of where AI is already shaping marketing in ways that might surprise you. She also highlights the importance of cultural nuance in AI-driven campaigns across 90-plus markets, and why transparency, explainability, and human oversight are vital for earning consumer trust. Whether you're a CMO weighing AI strategy, a brand manager experimenting with new tools, or someone curious about how the biggest advertisers are reshaping their playbooks, this conversation with Jane Ostler offers both inspiration and practical guidance. It's about rethinking AI not as the end of creativity, but as the beginning of a new partnership between data, machines, and human imagination.

    AlgoSec on AI, Automation, and the Next Era of Network Management

    Play Episode Listen Later Aug 27, 2025 22:30


    The enterprise network is under pressure like never before. Hybrid environments, cloud migrations, edge deployments, and the sudden surge in AI workloads have made it increasingly difficult to keep application connectivity secure and reliable. The old model of device-by-device, rule-based network management can't keep up with today's hyperconnected, API-driven world. In this episode of Tech Talks Daily, I sit down with Kyle Wickert, Field Chief Technology Officer at AlgoSec, to discuss the future of network management in the age of platformization. With more than a decade at AlgoSec and years of hands-on experience working with some of the world's largest enterprises, Kyle brings an unfiltered view of the challenges and opportunities that IT leaders are facing right now. We talk about why enterprises are rapidly shifting to platform-based models to simplify network security, but also why that strategy can start to break down when dealing with multi-vendor environments. Kyle explains the fragmentation across cloud, on-prem, and edge infrastructure that keeps CIOs awake at night, and why spreadsheets and manual change processes are still far too common in 2025. He also shares why visibility, intent-based policies, and policy automation are becoming non-negotiable in reducing risk and friction. Kyle doesn't just talk theory. He shares a real-world case study of a European financial institution that automated policy provisioning across firewalls and cloud infrastructure, integrated it with CI/CD pipelines, and reduced its change rejection rate from 25% to 4%. It's a compelling example of how the right approach to network management can deliver measurable improvements in agility, security, and business satisfaction.  
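
    To make the idea of intent-based policy automation more tangible, here is a hypothetical sketch of the kind of gate a CI/CD pipeline could run before a firewall change is provisioned: each proposed rule is checked against declared business intent, and anything outside that intent is rejected before it reaches production. The intent model, change format, and function names are invented for illustration and are not AlgoSec's API.

```python
# Hypothetical sketch of an intent-based policy gate a CI/CD pipeline could run
# before a firewall change is provisioned. The intent rules, change format, and
# function names are invented for illustration; they are not AlgoSec's API.
from dataclasses import dataclass

@dataclass(frozen=True)
class Intent:
    """Business intent: which application tiers may talk, and on which ports."""
    source_tier: str
    dest_tier: str
    allowed_ports: frozenset

INTENTS = [
    Intent("web", "app", frozenset({443, 8443})),
    Intent("app", "db", frozenset({5432})),
]

def violates_intent(change: dict) -> bool:
    """Return True if a proposed rule is not covered by any declared intent."""
    for intent in INTENTS:
        if (change["source_tier"] == intent.source_tier
                and change["dest_tier"] == intent.dest_tier
                and change["port"] in intent.allowed_ports):
            return False
    return True

proposed_changes = [
    {"source_tier": "web", "dest_tier": "app", "port": 443},
    {"source_tier": "web", "dest_tier": "db", "port": 5432},  # bypasses the app tier
]

rejected = [c for c in proposed_changes if violates_intent(c)]
if rejected:
    print(f"Blocking {len(rejected)} change(s) that fall outside declared intent:", rejected)
else:
    print("All proposed changes match declared intent; safe to provision.")
```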

    Claroty on Combating Model Poisoning and Adversarial Prompts

    Play Episode Listen Later Aug 26, 2025 35:29


    AI is rapidly becoming part of the healthcare system, powering everything from diagnostic tools and medical devices to patient monitoring and hospital operations. But while the potential is extraordinary, the risks are equally stark. Many hospitals are adopting AI without the safeguards needed to protect patient safety, leaving critical systems exposed to threats that most in the sector have never faced before. In this episode of Tech Talks Daily, I speak with Ty Greenhalgh, Healthcare Industry Principal at Claroty, about why healthcare's AI rush could come at a dangerous cost if security does not keep pace. Ty explains how novel threats like adversarial prompts, model poisoning, and decision manipulation could compromise clinical systems in ways that are very different from traditional cyberattacks. These are not just theoretical scenarios. AI-driven misinformation or manipulated diagnostics could directly impact patient care. We explore why the first step for hospitals is building a clear AI asset inventory. Too many organizations are rolling out AI models without knowing where they are deployed, how they interact with other systems, or what risks they introduce. Ty draws parallels with the hasty adoption of electronic health records, which created unforeseen security gaps that still haunt the industry today. With regulatory frameworks like the EU's AI Act and the UK's forthcoming AI rules approaching, Ty stresses that hospitals cannot afford to wait for legislation. Immediate action is needed to implement risk frameworks, strengthen vendor accountability, and integrate real-time monitoring of AI alongside legacy devices. Only then can healthcare organizations gain the trust and resilience needed to safely embrace the benefits of AI. This is a timely conversation for leaders across healthcare and cybersecurity. The sector is on the edge of an AI revolution, but the choices made now will determine whether that revolution strengthens patient care or undermines it. You can learn more about Claroty's approach to securing healthcare technology at claroty.com.

    From Deliverables to Outcomes: Emergn's New Playbook for Digital Success

    Play Episode Listen Later Aug 25, 2025 22:19


    In November, Alex Adamopoulos, CEO of Emergn, joined me on Tech Talks Daily to talk about transformation fatigue and why so many well-intentioned change programs leave people drained rather than inspired. This time, he's back with a sharper question: if traditional transformation is broken, what actually works? His answer is refreshingly direct. Product thinking is strategic thinking, and it belongs everywhere in the enterprise, not just in product teams. In our conversation, Alex explains why HR, finance, and even legal teams now need product strategy skills as much as engineers or designers. He introduces Praxis, Emergn's newly launched platform that rebrands its long-standing VFQ approach and now embeds product thinking across entire organizations. With its AI-powered coach Stella, Praxis is designed to support continuous learning while helping teams make better day-to-day decisions. We also discuss why outcomes, not deliverables, have become the true measure of digital success. Alex likens it to leaders constantly returning to their boards like entrepreneurs on Shark Tank, demonstrating incremental value before securing the next round of support. This shift in accountability changes how teams plan, learn, and invest. Another essential thread is the link between burnout and broken transformation models. Alex recently co-authored a paper with Harvard professor Amy Edmondson on “Breaking the Failure Cycle,” and he shares how adopting a product mindset can help organizations move past fatigue by focusing on outcomes, embracing uncertainty, and avoiding the endless reinvention trap. Whether you're in a global enterprise grappling with AI adoption or a smaller company rethinking strategy, this episode is a reminder that transformation is not a program but a continuous practice. Product thinking offers a practical path forward, one that makes strategy executable, measurable, and, most importantly, sustainable.

    3397: From Wallets to Agents: The Next Chapter for Magic Labs

    Play Episode Listen Later Aug 24, 2025 40:24


    In this episode of Tech Talks Daily, Neil sits down with Sean Li, co-founder and CEO of Magic Labs, to explore the intersection of crypto wallets, artificial intelligence, and the future of autonomous finance. Sean shares how Magic Labs has already onboarded over 50 million crypto wallets by pioneering simple login methods using email and SMS. Now, with Magic Newton, Sean is pushing into new territory where AI agents could securely manage digital assets on our behalf. From AI "concierges" executing investment strategies to cryptographic policy engines enforcing trust and control, the vision is clear: a financial internet where humans set the intent and machines handle the execution. We discuss the challenges of today's fragmented crypto experience, how smart wallets and AI could abstract away complexity, and why Sean believes everyone with an email address will eventually have multiple agents acting on their behalf. You'll also hear why he compares this shift to building a digital institution or constitution for autonomous finance. Whether you're a developer, investor, or simply crypto-curious, this episode offers a fascinating look at where Web3, AI, and programmable trust may be heading next.

    3396: How Boson Protocol Is Creating the Infrastructure for AI-Driven Trade

    Play Episode Listen Later Aug 24, 2025 36:33


    In this episode of Tech Talks Daily, I'm joined for the third time by Justin Banon, the founder of Boson Protocol. A lot has changed since his last appearance. What started as a bold idea to decentralize e-commerce has now evolved into an ambitious, AI-first infrastructure aiming to redefine how we buy, sell, and interact with value itself. Justin walks us through the evolution of Boson. It began as a system for peer-to-peer digital and physical commerce, aiming to remove intermediaries like Amazon from the process. The next step was Fermion, a protocol designed specifically for high-value assets such as luxury goods, fine art, and real estate. Now, Boson is launching the Metasystem, a full-scale framework designed for what Justin calls “agentic commerce,” where AI agents transact on behalf of users. We talk about what that future looks like. Imagine an AI that not only shops for you but negotiates, verifies, and settles transactions securely. Justin predicts that in just a few years, these agents will outnumber human buyers and sellers by orders of magnitude. Boson's mission is to build the decentralized rails for that world while avoiding the centralization traps of today's tech giants. One particularly fascinating moment is our discussion of the Dolce & Gabbana glass suit. Purchased during the last bull run, this million-dollar piece of digital-physical fashion was not just fractionalized through Fermion but transformed into an AI persona called Dolce Lorien. This character now leads a gamified community campaign, rewarding participants with fractional ownership. It's luxury meets sci-fi, wrapped in Web3 narrative. We also dig into what decentralized infrastructure really means for legacy brands. With Fermion, companies can reclaim the secondary market, verify resales, reconnect with past buyers, and turn their customer base into a true community. This isn't just resale with a twist. It's a new type of programmable loyalty system. Justin shares how AI doesn't just enable commerce. It also supports his own personal growth. He reveals how he uses tools like ChatGPT and Speechify to create custom audio courses for niche subjects, walking through the woods while absorbing AI-generated masterclass-level insights. For those tired of seeing tech platforms control value creation while users are left with little ownership, this episode offers a glimpse of a different future. One where AI works directly for you. One where commerce is flexible, open, and fair. One where the infrastructure is built to stay that way.

    3395: Communication Skills Every Tech Leader Needs

    Play Episode Listen Later Aug 23, 2025 28:38


    What do you do when your technical brilliance doesn't translate into clear, compelling communication? That's where Salvatore Manzi comes in. With a background in business communication and a career spent coaching leaders across tech, finance, and global policy, Salvatore helps bold thinkers bring clarity and connection to even the most complex ideas. In this episode, we talk about the communication challenges that many technical leaders face—especially when speaking to non-technical stakeholders. Whether you're leading a product team, presenting in a high-stakes board meeting, or trying to assert your voice in a fast-paced discussion, Salvatore offers practical strategies that go far beyond public speaking tips. His insights are grounded in two decades of real-world experience, from coaching TEDx speakers and United Nations delegates to guiding SVPs through business model pivots and helping raise hundreds of millions in funding. We explore how data-driven leaders can use storytelling without compromising accuracy, how to manage energy and presence when nerves kick in, and what leadership looks like when you lean more analytical than charismatic. Salvatore shares a refreshing take on body language, framing it not as performance but as a tool for genuine alignment between your message and your intention. We also dig into the nuance of communicating uncertainty—how to be honest and credible without losing influence. For anyone in a technical role looking to lead more effectively, this episode is packed with insight. Salvatore's mission is to help people speak with clarity and lead with authenticity. After this conversation, you may just find yourself doing both a little differently.

    3394: How Pet Media Group Is Using Tech to Protect Animals and Buyers

    Play Episode Listen Later Aug 22, 2025 23:58


    The global pet industry has long been riddled with problems. From low-welfare breeding practices to online scams, the darker side of pet rehoming often goes unchecked. But what if there was a way to combine animal protection with a sustainable, profitable business model? In this episode of Tech Talks Daily, I speak with Axel Lagercrantz, co-founder and CEO of Pet Media Group, the company behind platforms like Pets4Homes in the UK and Lancaster Puppies in the US. Axel shares the story of how two friends with backgrounds in finance and tech came together to rethink what ethical pet ownership and commerce should look like. Since 2018, PMG has been working to remove anonymity and reduce fraud across pet marketplaces by embedding ethical standards directly into their platform's infrastructure. We explore how PMG uses custom-built AI to scan tens of thousands of images every day for signs of mistreatment, as well as to flag suspicious documentation and chat messages. Axel explains why ID verification, device fingerprinting, and real-time fraud detection are essential to maintaining user trust, especially in a high-emotion, high-value market like pets. He also talks through the company's expansion model, which focuses on acquiring local leaders and embedding PMG's standards from the ground up. With operations now spanning six countries and a 50 percent EBITDA margin, PMG's approach proves that protecting animals and scaling a business are not mutually exclusive goals. What stands out most is Axel's clarity of purpose. PMG isn't trying to digitize pet sales for convenience alone. The mission is to create a global infrastructure that prioritizes the welfare of animals and builds lasting trust between buyers and responsible breeders. If you care about technology that delivers real-world impact, this conversation will change how you think about one of the most overlooked parts of the digital economy.

    3393: The Raiinmaker Model: Making Technology Work for the Many, Not the Few

    Play Episode Listen Later Aug 21, 2025 30:25


    As AI tools become increasingly embedded in the workplace, the future of work hangs in a delicate balance between automation and opportunity. In this episode, J.D. Seraphine, founder and CEO of Raiinmaker, joins Neil to challenge the dominant narrative of AI-driven job loss. Instead, he offers a bold vision—one where technology enhances human creativity, not replaces it. J.D. shares how Raiinmaker is merging Web3 and AI to build platforms that reward human contribution, decentralize value, and give people a voice in shaping the future of artificial intelligence. We explore what it means to create ethical, human-led AI systems, how businesses can support young workers through upskilling and mentorship, and why the biggest challenge may not be employee readiness—but leadership inertia. From entry-level displacement to data ethics, from complementary currencies to AI-generated video training, this conversation goes far beyond the hype. J.D. also speaks candidly about the real risks ahead—alongside the unprecedented potential for a new kind of economic empowerment.

    3392: Rebuilding the Internet: The Web3 Foundation's Long Game

    Play Episode Listen Later Aug 20, 2025 25:47


    David Hawig never set out to work in blockchain. He began his career in health tech, drawn to the potential of scaling impactful solutions. But it was the promise of a more transparent and user-controlled internet that ultimately led him to the Web3 Foundation, where he now serves as Director of Ecosystem Development and Investor Relations. In our conversation on the Tech Talks Daily Podcast, David shares why Web3 is not just a trend or buzzword, but a complete rethink of how digital systems should function. He describes a world where users own their data, can verify transactions without relying on central authorities, and can move between services without friction. To David, the Web3 movement is not about hype or speculation. It's about enabling a future where the internet is built on truth, not trust. At the heart of the Web3 Foundation's work is Polkadot, a protocol designed to solve the scalability and interoperability challenges that plague many early blockchain networks. David explains that Polkadot's sharded infrastructure allows workloads to be split across participants in a trustless way. This setup not only enables more transactions but positions the ecosystem to support millions of users as mainstream adoption accelerates. That acceleration, he argues, is already happening. Thanks to new regulations like the Clarity Act and Genius Act in the US, enterprise adoption is becoming a reality. Where Web3 once attracted only startups and crypto-native communities, today major companies, including sports brands and entertainment groups, are actively building in the space. Unlike earlier efforts that felt more like PR stunts, these companies now see tangible benefits. Web3 can deliver faster, cheaper, and more flexible digital services, available 24/7, with no lock-in to single vendors. David is especially passionate about removing the barriers that have made Web3 feel intimidating to the average user. While early blockchain projects often demanded technical knowledge and wallet key management, he sees a future where users interact with Web3 products as easily as they would a mobile app. Behind the scenes, cryptography and decentralization are doing the heavy lifting, but from the user's perspective, the experience is seamless. One area David is particularly excited about is decentralized storage. As more businesses realize the risks of relying on centralized cloud giants, alternatives are emerging that offer both cost advantages and greater control. He sees this as a critical part of the broader shift toward self-sovereign infrastructure. When asked if large corporations truly understand the scale of the disruption ahead, David is cautious but optimistic. Many established players, he says, still underestimate how quickly network effects can take hold. Once enough users and companies move into Web3 ecosystems, the old models will no longer be competitive. Whether it's financial services, social media, or identity management, the shift toward user-owned infrastructure will be difficult to reverse. Looking ahead to 2026 and beyond, David points to the upcoming Jam upgrade as a major milestone. This next evolution of the Polkadot network is designed to dramatically improve scalability, supporting not just crypto-native transactions but also broader use cases in gaming, ticketing, payments, and more. The goal is clear: create a robust, low-cost, interoperable infrastructure capable of supporting millions of users across different networks and applications. 
Before signing off, David leaves listeners with a recommendation. He suggests reading Man's Search for Meaning by Viktor Frankl, a book he returns to often when reflecting on motivation and purpose. It's a fitting choice for someone working at the edge of one of the most transformative shifts in modern tech. The Web3 Foundation is not just funding protocols or building tools. It's laying the groundwork for a future where the internet belongs to everyone. And if David's predictions are right, that future may arrive faster than we think.

    3391: What Wikidata Reveals About the Good Side of the Internet

    Play Episode Listen Later Aug 19, 2025 19:09


    When most people think of Wikipedia, they picture an endless scroll of human-readable pages. But there's another side to this ecosystem, one designed not just for people but also for machines. It's called Wikidata, and if you haven't heard of it, that's exactly why this conversation matters. In this episode of Tech Talks Daily, I'm joined by Lydia Pintscher, Wikidata Portfolio Manager at Wikimedia Deutschland, for a deep look into how structured, open data is quietly powering civic tech, cultural preservation, and knowledge equity across the globe. Wikidata is the backbone that helps turn static knowledge into something living, adaptable, and scalable. With over 117 million items, 1.65 billion semantic statements, and more than 2.34 billion edits, it's become one of the largest collaborative datasets in the world. But it's not just the size that makes it impressive. It's what people are doing with it. Lydia shares how volunteers and developers are building tools for everything from investigative journalism to public libraries, all without needing deep pockets or proprietary infrastructure. This isn't big tech. It's a global, grassroots movement making open data work for the public good. We explore how tools like Toolforge and the Wikidata Query Service lower the barrier to entry, allowing civil society groups to build sophisticated applications that would otherwise be out of reach. Whether it's helping connect citizens to government services or preserving disappearing languages, the use cases are multiplying fast. Lydia also reflects on how Wikidata fosters a sense of purpose for contributors, offering a rare example of what many call the good internet, where collaboration outweighs competition and building something meaningful beats chasing virality. If you're curious about where open knowledge is headed, how structured data can be a force for social impact, or why Wikidata might be the most important project you've never fully explored, this episode offers a window into a future where machines help humans build something better, together.
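
    To show how low the barrier to entry really is, here is a minimal sketch that sends a short SPARQL query to the public Wikidata Query Service from Python. The https://query.wikidata.org/sparql endpoint and the P31 (instance of) and P36 (capital) identifiers are real Wikidata conventions; the query itself is just an example, and error handling and rate-limit etiquette are kept to a bare minimum.

```python
# Minimal sketch: querying the public Wikidata Query Service (SPARQL endpoint)
# with the requests library. Error handling and pagination are omitted.
import requests

ENDPOINT = "https://query.wikidata.org/sparql"

QUERY = """
SELECT ?country ?countryLabel ?capitalLabel WHERE {
  ?country wdt:P31 wd:Q6256 ;        # instance of: country
           wdt:P36 ?capital .        # capital
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
LIMIT 5
"""

def run_query(query: str) -> list[dict]:
    """Send a SPARQL query and return the result bindings as a list of dicts."""
    response = requests.get(
        ENDPOINT,
        params={"query": query, "format": "json"},
        headers={"User-Agent": "tech-talks-daily-example/0.1"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["results"]["bindings"]

if __name__ == "__main__":
    for row in run_query(QUERY):
        print(row["countryLabel"]["value"], "->", row["capitalLabel"]["value"])
```

    Swapping in a different query is all it takes to start building the kind of civic tech and research tools Lydia describes, without any proprietary infrastructure.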

    3390: How Veritone Is Making AI Work for Media, Government, and Beyond

    Play Episode Listen Later Aug 18, 2025 33:49


    What does it take to make AI actually work at enterprise scale? In this episode of Tech Talks Daily, I'm joined by Ryan Steelberg, CEO of Veritone, to unpack the very real challenges and opportunities that come with bringing artificial intelligence into complex industries like media, law enforcement, and government. Ryan has been building AI solutions long before it became headline news. He co-founded Veritone in 2014 with a mission to solve the unstructured data problem. Think audio, video, and other media that doesn't fit neatly into rows and columns. Now, Veritone isn't just talking about AI. It's powering more than 3,000 clients across sectors with real applications that drive measurable ROI. We get into what it means to work with unstructured data at scale, and how Veritone processed over 58 million hours of media in 2024 alone. Ryan explains why traditional enterprises struggle to operationalize AI and how Veritone has evolved from a platform company into a creator of end-user applications designed for impact. This includes ad optimization for ESPN and video redaction for law enforcement. What's especially compelling is Veritone's growing footprint in high-barrier sectors like federal defense and law enforcement. Ryan talks through what it took to become a prime contractor for the U.S. Air Force and Defense Logistics Agency, how Veritone adapted its stack for secure deployments, and why the key to adoption in these sectors isn't just tech. It's trust and proximity to mission-critical outcomes. We also discuss the company's push into the $17 billion global training data market through the Veritone Data Repository. Ryan shares why VDR is gaining traction fast and how it positions the company as a key partner for the next generation of AI models. This isn't a story of hype or futuristic promises. It's a grounded look at what it really takes to scale AI in some of the most demanding enterprise environments. Whether you're deep in the AI world or still figuring out where your business fits, Ryan's perspective is honest, strategic, and full of lessons you can apply right now.

    3389: Plaza Finance and the Rise of Programmable Derivatives

    Play Episode Listen Later Aug 18, 2025 27:40


    Is the future of finance programmable? In this episode of Tech Talks Daily, I'm joined by Ryan Galvankar, founder of Plaza Finance, to explore how programmable derivatives and on-chain bonds are reshaping the way we think about risk, leverage, and asset management in crypto. Plaza Finance isn't just another DeFi protocol. Since launching its core system on Base in April 2025, the team has introduced two standout tokens: bondETH and levETH. These programmable derivative tokens split Ethereum into customizable risk profiles, allowing both institutional and retail investors to choose their exposure with precision. Under Ryan's leadership, Plaza has already attracted over 600,000 testnet users, $2 million in early deposits, and $2.5 million in pre-seed funding. Ryan explains why this matters. Today's DeFi tools tend to sit at either extreme—ultra-safe or wildly speculative. Plaza aims to bridge that gap with products that reflect the diversity of traditional finance, yet remain liquid and composable across Ethereum's ecosystem. He also shares why Ethereum staking plays a central role, and how integrations with lending platforms and cross-chain bridges will make these tokens even more powerful. But Ryan doesn't stop at structured investing. He also launched OptFun, an ultra-short-term options trading platform that hit nearly $1 billion in trading volume within a month. It appeals to crypto natives seeking high-volatility speculation in a fair, on-chain environment. Together, these two platforms reflect what Ryan sees as a split in the future of DeFi: one track of thoughtful, long-term financial products that institutions will trust, and another track of high-speed, high-risk experimentation for users who live for volatility. He also gives his take on the rise of tokenized real-world assets, AI-generated portfolios, and what it will take to truly decentralize the next generation of finance. Whether you're deep into DeFi or still watching from the sidelines, this conversation sheds light on a pivotal moment where programmable finance is becoming real, and access to tailored financial tools is expanding faster than ever.
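
    As a purely conceptual aid, here is a toy numerical illustration of the general idea of splitting one asset's exposure into a bond-like claim and a leveraged claim: the senior side is paid a fixed target first, and the junior side absorbs whatever upside or downside remains. The numbers and the payout rule are invented for illustration and do not represent Plaza's actual token mechanics or smart contract logic.

```python
# Toy illustration of the general "split one asset into two risk profiles" idea:
# a senior, bond-like claim targets a fixed payout while a junior, leveraged
# claim absorbs the remaining upside and downside. The payout rule and numbers
# are invented for illustration; this is not Plaza's actual token mechanism.

def split_payout(pool_value: float, senior_notional: float, senior_coupon: float):
    """Return (senior_payout, junior_payout) for one settlement period."""
    senior_target = senior_notional * (1.0 + senior_coupon)
    senior_payout = min(pool_value, senior_target)      # paid first, capped
    junior_payout = max(pool_value - senior_payout, 0)  # residual claim
    return senior_payout, junior_payout

# Pool starts with 100 ETH of value; 60 is earmarked for the senior side at 5%.
for scenario, pool in [("ETH flat", 100.0), ("ETH +30%", 130.0), ("ETH -30%", 70.0)]:
    senior, junior = split_payout(pool, senior_notional=60.0, senior_coupon=0.05)
    print(f"{scenario:9s} senior={senior:6.1f}  junior={junior:6.1f}")
```

    Running the three scenarios shows the junior claim swinging far more, in percentage terms, than the underlying pool, which is the basic trade-off between a steadier return and amplified exposure that products in this category aim to offer.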

    3388: The Case for Digital IDs in Recruitment, Renting, and Beyond

    Play Episode Listen Later Aug 17, 2025 48:20


    What if proving your right to work, rent, or even open a bank account didn't require digging out your passport or waiting weeks for manual verification? In this episode of Tech Talks Daily, I'm joined by Hamraj Gulamali, Head of Legal and Compliance at Zinc, to talk about how digital identities are changing the way we think about employment, data ownership, and trust. Hamraj brings a unique perspective to the conversation. A barrister-turned-startup leader, he now sits at the centre of one of the UK's most forward-thinking background check platforms. Zinc's mission is simple but ambitious: to make employment identity verification faster, safer, and fairer. But as Hamraj points out, this shift isn't just about convenience. It's about control. We discuss how digital IDs are already helping automate background checks, reduce onboarding delays, and improve accuracy in high-volume hiring environments. Hamraj explains why placing ownership of work identity back in the hands of individuals has the potential to unlock wider applications—from renting a home to streamlining banking processes. But there's also friction. Some industries face structural and cultural resistance. Others fear the cost of digital transformation or worry about public trust. Hamraj unpacks why regulators have both helped and hindered progress. He shares his view on the UK's Trust Framework, the Online Safety Act, and why transparency, retention policies, and robust cybersecurity are essential for building public confidence. As we look ahead, Hamraj offers a clear vision for how verified digital IDs could simplify everyday tasks, slash manual admin, and reduce the compliance burden across multiple sectors. This episode cuts through the noise to offer a calm, informed, and practical take on one of the most overlooked shifts in how we live and work. Whether you're in HR, tech, compliance, or just tired of carrying your passport across town on your first day of work, this conversation will get you thinking about what needs to happen next.

    3387: How Tableau's Srinivas Chippagiri Thinks About Responsible AI and Cloud Systems

    Play Episode Listen Later Aug 17, 2025 31:41


    What does it take to build intelligent systems that are not only AI-powered but also secure, scalable, and grounded in real-world needs? In this episode of Tech Talks Daily, I speak with Srinivas Chippagiri, a senior technology leader and author of Building Intelligent Systems with AI and Cloud Technologies. With over a decade of experience spanning Wipro, GE Healthcare, Siemens, and now Tableau at Salesforce, Srinivas offers a practical view into how AI and cloud infrastructure are evolving together. We explore how AI is changing cloud-native development through predictive maintenance, automated DevOps pipelines, and developer co-pilots. But this is not just about technology. Srinivas highlights why responsible AI needs to be part of every system design, sharing examples from his own research into anomaly detection, fuzzy logic, and explainable models that support trust in regulated industries. The conversation also covers the rise of hybrid and edge computing, the real challenges of data fragmentation and compute costs, and how teams are adapting with new skills like prompt engineering and model observability. Srinivas gives a thoughtful view on what ethical AI deployment looks like in practice, from bias audits to AI governance boards. For those looking to break into this space, his advice is refreshingly clear. Start with small, end-to-end projects. Learn by doing. Contribute to open-source communities. And stay curious. Whether you're scaling AI systems, building a career in cloud tech, or just trying to keep pace with fast-moving trends, this episode offers a grounded and insightful guide to where things are heading next. Srinivas's book is available on Amazon under Building Intelligent Systems with AI and Cloud Technologies, and you can connect with him on LinkedIn to continue the conversation.

    3386: Kinsta on Why 93% of Customers Still Prefer Human Support

    Play Episode Listen Later Aug 16, 2025 26:11


    AI is finding its way into almost every corner of customer service, but is it really what customers want? According to Kinsta's recent survey, the answer is overwhelmingly no. An incredible 93 percent of respondents said they would rather speak to a human than an AI chatbot when they need support. Nearly half even said they would cancel a service if it relied solely on AI-driven support. In this episode, Roger Williams, Community Manager at Kinsta, breaks down the story behind those numbers and explains why his team continues to invest heavily in human-first support. Kinsta has built its reputation on 24/7 access to real engineers who understand the complexities of WordPress hosting, resulting in a 98 percent customer satisfaction score and consistently high ratings on platforms like G2 and Trustpilot. Roger shares how Kinsta blends human expertise with AI tools in a way that enhances, rather than replaces, the customer experience. He talks about the role empathy plays in building trust, the importance of empowering support staff to own conversations, and why support should be viewed as an opportunity to strengthen relationships rather than just a cost to be reduced. We also discuss the growing perception that AI in support is more about saving money than improving service, how business leaders can avoid falling into that trap, and what the future of open web hosting could look like as AI capabilities mature. Whether you are in tech leadership, run a customer-facing team, or simply value a good support experience, this conversation offers insights that go beyond the AI hype and focus on what truly matters to customers.

    3385: How Vasion Balances Short-Term Wins with Long-Term Vision

    Play Episode Listen Later Aug 16, 2025 21:19


    In a business climate shaped by rapid technological disruption, shifting geopolitical landscapes, and evolving customer expectations, strategy cannot remain static. JD Carter, Chief Strategy Officer at Vasion, believes the key to success lies in constantly aligning vision with execution while adapting to market realities in real time. In this conversation, JD shares how his role involves continuously monitoring external signals such as technology shifts, regulatory changes, and economic pressures, then translating those insights into operational action. We explore his approach to looking beyond the company vision by breaking it down into achievable missions that link long-term goals with day-to-day work across cross-functional teams. A major focus of the discussion is AI readiness and how organisations can move beyond hype to real impact. JD outlines a five-stage process for becoming AI-ready, starting with digitising and centralising documents and data, followed by cleaning and structuring information for use with large language models. He explains how automating repetitive workflows, modernising infrastructure, and ensuring interoperability across systems such as ERP and HR platforms create the foundation for orchestrated automation at scale. Governance and access controls complete the picture, ensuring that AI deployment meets both internal and regulatory standards. We also look at how Vasion balances short-term market needs with a long-term platform vision. JD describes how leveraging the company's market-leading print infrastructure products supports current growth, while investing in R&D drives the development of a multi-product SaaS platform designed to integrate AI across the enterprise. A customer-first mindset shapes every decision, from pricing to product development, with continuous engagement through advisory boards, surveys, and direct conversations ensuring that partner strategies align with customer priorities. To illustrate these principles in action, JD shares how Vasion responded to market demand for system-generated print job support by developing a SaaS-based output automation product. This pivot addressed a gap created by ERP and EMR vendors moving customers to the cloud and positioned Vasion at the start of its multi-product journey. We discuss the signals leaders should watch to keep strategy relevant, including shifts in customer behaviour, the pace of technology adoption, and internal friction that may indicate misalignment. JD likens strategy to a GPS system, with vision as the destination and constant recalibration required to navigate roadblocks and changing conditions. The conversation closes with a focus on embedding strategic agility into company culture. JD explains Vasion's Missions of Aspirational Performance system, which connects corporate values directly to execution by breaking down strategic goals into work that individuals can see contributing to the bigger picture. He also shares his personal three-pronged approach to continuous learning, combining formal education, informal learning, and mentor-driven guidance. This episode offers practical insight for leaders navigating uncertainty, balancing present-day demands with future opportunity, and embedding adaptability into the DNA of their organisations.

    3384: MariaDB's Roadmap for Cloud, AI, and Performance Leadership

    Play Episode Listen Later Aug 15, 2025 27:03


    MariaDB is a name with deep roots in the open-source database world, but in 2025 it is showing the energy and ambition of a company on the rise. Taken private in 2024 and backed by K1 Investment Management, MariaDB is doubling down on innovation while positioning itself as a strong alternative to MySQL and Oracle. At a time when many organisations are frustrated with Oracle's pricing and MySQL's cloud-first pivot, MariaDB is finding new opportunities by combining open-source freedom with enterprise-grade reliability. In this conversation, I sit down with Vikas Mathur, Chief Product Officer at MariaDB, to explore how the company is capitalising on these market shifts. Vikas shares the thinking behind MariaDB's renewed focus, explains how the platform delivers similar features to Oracle at up to 80 percent lower total cost of ownership, and details how recent innovations are opening the door to new workloads and use cases. One of the most significant developments is the launch of Vector Search in January 2023. This feature is built directly into InnoDB, eliminating the need for separate vector databases and delivering two to three times the performance of pgvector. With hardware acceleration on both x86 and IBM Power architectures, and native connectors for leading AI frameworks such as LlamaIndex, LangChain and Spring AI, MariaDB is making it easier for developers to integrate AI capabilities without complex custom work. Vikas explains how MariaDB's pluggable storage engine architecture allows users to match the right engine to the right workload. InnoDB handles balanced transactional workloads, MyRocks is optimised for heavy writes, ColumnStore supports analytical queries, and Mroonga enables text search. With native JSON support and more than forty functions for manipulating semi-structured data, MariaDB can also remove the need for separate document databases. This flexibility underpins the company's vision of one database for infinite possibilities. The discussion also examines how MariaDB manages the balance between its open-source community and enterprise customers. Community adoption provides early feedback on new features and helps drive rapid improvement, while enterprise customers benefit from production support, advanced security, high availability and disaster recovery capabilities such as Galera-based synchronous replication and the MaxScale proxy. We look ahead to how MariaDB plans to expand its managed cloud services, including DBaaS and serverless options, and how the company is working on a “RAG in a box” approach to simplify retrieval-augmented generation for DBAs. Vikas also shares his perspective on market trends, from the shift away from embedded AI and traditional machine learning features toward LLM-powered applications, to the growing number of companies moving from NoSQL back to SQL for scalability and long-term maintainability. This is a deep dive into the strategy, technology and market forces shaping MariaDB's next chapter. It will be of interest to database architects, AI engineers, and technology leaders looking for insight into how an open-source veteran is reinventing itself for the AI era while challenging the biggest names in the industry.
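
    As a small illustration of the pluggable-engine idea discussed above, here is a sketch that uses the official mariadb Python connector to assign a different storage engine to each workload. InnoDB, MyRocks, and ColumnStore are real MariaDB storage engines, but the schema, credentials, and the assumption that every engine plugin is installed on your server are illustrative only.

```python
# Minimal sketch of the "right engine for the right workload" idea using the
# official mariadb Python connector. The schema and credentials are invented,
# and each ENGINE clause assumes that plugin is installed on your server.
import mariadb

conn = mariadb.connect(
    user="app", password="secret", host="127.0.0.1", port=3306, database="demo"
)
cur = conn.cursor()

# Balanced transactional table: InnoDB (the default engine).
cur.execute("""
    CREATE TABLE IF NOT EXISTS orders (
        id BIGINT PRIMARY KEY,
        customer_id BIGINT,
        total DECIMAL(10, 2)
    ) ENGINE=InnoDB
""")

# Write-heavy event log: MyRocks.
cur.execute("""
    CREATE TABLE IF NOT EXISTS click_events (
        ts DATETIME,
        user_id BIGINT,
        url VARCHAR(2048)
    ) ENGINE=MyRocks
""")

# Analytical reporting table: ColumnStore.
cur.execute("""
    CREATE TABLE IF NOT EXISTS daily_revenue (
        day DATE,
        region VARCHAR(64),
        revenue DECIMAL(14, 2)
    ) ENGINE=ColumnStore
""")

conn.commit()
conn.close()
```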

    3383: Denodo Technologies on Balancing AI Innovation with Data Integrity

    Play Episode Listen Later Aug 14, 2025 33:39


    In this episode, I am joined by Charles Southwood, Regional GM of Denodo Technologies, a company recognised globally for its leadership in data management. With revenues of $288M and a customer base that includes Hitachi, Informa, Engie, and Walmart, Denodo sits at the heart of how enterprises access, trust, and act on their data. Charles brings over 35 years of experience in the tech industry, offering both a long-term view of how the data landscape has evolved and sharp insights into the challenges businesses face today. Our conversation begins with a pressing issue for any organisation exploring generative AI: data reliability. With many AI models trained on vast amounts of internet content, there is a real risk of false information creeping into business outputs. Charles explains why mitigating hallucinations and inaccuracies is essential not just for technical quality, but for protecting brand reputation and avoiding costly missteps. We explore alternative approaches that allow enterprises to benefit from AI innovation while maintaining data integrity and control. We also examine the broader enterprise pressures AI has created. The promise of reduced IT costs and improved agility is enticing, but how much of this is achievable today and how much is inflated by hype? Charles shares why he believes 2025 is a tipping point for achieving true business agility through data virtualisation, and what a virtualised data layer can deliver for teams across IT, marketing, and beyond. Along the way, Charles reflects on the industry shifts that caught him most by surprise, the decisions he would make differently with the benefit of hindsight, and the one golden rule he would share with younger tech professionals starting their careers now. This is a conversation for anyone who wants to understand how to harness AI and advanced data integration without falling into the traps of poor data quality, overblown expectations, or short-term thinking.

    3382: Akamai CTO on Why Real AI Assistants Are Still Far from Iron Man's JARVIS

    Play Episode Listen Later Aug 13, 2025 36:03


    In the movies, Tony Stark's JARVIS is the ultimate AI assistant, managing schedules, running simulations, controlling environments, and anticipating needs before they are voiced. In reality, today's AI agents are still far from that vision. Despite 2025 being heralded as the year of agentic AI, the first offerings from major players have been underwhelming. They can perform tasks, but they remain a long way from the seamless, hyper-intelligent assistants we imagined. In this episode, Dr. Robert “Bobby” Blumofe, CTO at Akamai Technologies, joins me to explore what is really holding AI assistants back and what it will take to build one as capable as a top human executive assistant. Bobby argues that the leap forward will not come from chasing ever-larger models but from optimization, efficiency, and integrating AI with the right tools, infrastructure, and processes. He believes that breakthroughs in model efficiency, like those seen in DeepSeek, could make capable agents affordable and viable for everyday use. We break down the spectrum of AI agents from simple, task-specific helpers to the fully autonomous, general-purpose vision of JARVIS. Bobby shares why many of the most valuable enterprise applications will come from the middle ground, where agents are semi-autonomous, task-focused, and integrated with other systems for reliability. He also explains why smaller, specialized models often outperform “ask me anything” LLMs for specific business use cases, reducing cost, latency, and security risks. The conversation covers Akamai's role in enabling low-latency, scalable AI at the edge, the importance of combining neural and symbolic AI to achieve reliable reasoning and planning, and a realistic five-to-seven-year timeline for assistants that can rival the best human EAs. We also look at the technical, social, and business challenges ahead, from over-reliance on LLMs to the ethics of deploying highly capable agents at scale. This is a grounded, forward-looking discussion on the future of AI assistants, where they are today, why they have fallen short, and the practical steps needed to turn fiction into reality.

    3381: Cognizant AI CTO on Open Source, AI Trust, and the Road Ahead

    Play Episode Listen Later Aug 12, 2025 32:47


    Multi-agent AI systems are moving from theory to enterprise reality, and in this episode, Babak Hodjat, CTO of AI at Cognizant, explains why he believes they represent the future of business. Speaking from his AI R&D lab in San Francisco, Babak outlines how his team is both deploying these systems inside Cognizant and helping clients build their own, breaking down organisational silos, coordinating processes, and improving decision-making. We discuss how the arrival of optimized, open-source models like DeepSeek is accelerating AI democratization, making it viable to run capable agents on far smaller infrastructure. Babak explains why this is not a “Sputnik moment” for AI, but a powerful industry correction that opens the door for more granular, cost-effective agents. This shift is enabling richer, more scalable multi-agent networks, with agents that can not only perform autonomous tasks but also communicate and collaborate with each other. Babak also introduces Cognizant's open-sourced Neuro-AI Network platform, designed to be model and cloud agnostic while supporting large-scale coordination between agents. By separating the opaque AI model from the fully controllable code layer, the platform builds in safeguards for trust, data handling, and access control, addressing one of the most pressing challenges in AI adoption. Looking ahead, Babak predicts rapid growth in multi-agent ecosystems, including inter-company agent communication and even self-evolving agent networks. He also stresses the importance of open-source collaboration in AI's next chapter, warning that closed, proprietary approaches risk slowing innovation and eroding trust. This is a deep yet accessible conversation for anyone curious about how enterprise AI is evolving from single, monolithic models to flexible, distributed systems that can adapt and scale. You can learn more about Cognizant's AI work at cognizant.com and follow Babak's insights on LinkedIn.
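
    The separation Babak describes, an opaque model wrapped in a fully controllable code layer, is easier to picture with a generic sketch. In the example below the model call is a stub, and the access-control rules, redaction step, and function names are invented for illustration; none of this is the Neuro-AI Network API.

```python
# Generic sketch of the pattern described above: keep the model opaque, but wrap
# it in a code layer the enterprise fully controls, enforcing access control and
# data handling before and after every call. The model call is a stub.
import re

ROLE_PERMISSIONS = {"analyst": {"sales_summary"}, "admin": {"sales_summary", "hr_records"}}

def opaque_model(prompt: str) -> str:
    """Stand-in for whatever LLM the agent wraps (open source or hosted)."""
    return f"[model answer to: {prompt!r}]"

def redact(text: str) -> str:
    """Strip anything that looks like an email address before it reaches the model."""
    return re.sub(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b", "[redacted]", text)

def agent_call(role: str, task: str, prompt: str) -> str:
    # Access control lives in code the enterprise can inspect and change.
    if task not in ROLE_PERMISSIONS.get(role, set()):
        raise PermissionError(f"role {role!r} may not run task {task!r}")
    answer = opaque_model(redact(prompt))
    # Post-call checks could log, filter, or route the answer to another agent.
    return answer

print(agent_call("analyst", "sales_summary", "Summarise Q3 pipeline for ana@example.com"))
```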

    3380: EY Technology Pulse: The Shift From LLMs to Agentic AI in Enterprise Adoption

    Play Episode Listen Later Aug 11, 2025 23:08


    AI adoption in the tech sector has reached a tipping point, and the latest EY Technology Pulse poll offers a fascinating look at just how quickly things are moving. In this episode, I'm joined by Ken Englund, Partner at EY, to unpack the survey's findings and what they reveal about the next two years of AI in enterprise. Conducted with 500 senior tech executives from companies with more than 5,000 employees, the poll focuses on AI agents, autonomous deployments, and the shifting priorities that are shaping investment strategies.

    Ken shares why half of the respondents expect more than 50% of their AI deployments to be fully autonomous within 24 months, and why optimism around AI's potential remains high. He also explains a notable shift away from last year's heavy focus on large language models toward applied AI and agentic systems designed to handle real-world business workflows. While 92% of tech leaders plan to increase AI spending in 2026, the investment isn't solely about technology; it's about competitiveness, customer experience, and strategic alignment.

    One of the more surprising takeaways is the human side of the equation. Despite headlines predicting widespread job losses, only 9% of respondents anticipate layoffs in the next six months, down from 20% last year. Instead, companies are balancing upskilling existing staff with bringing in new AI talent, with roles emerging in areas like MLOps, AI product management, and forward-deployed engineering.

    Ken and I also dive into the growing weight of governance, data privacy, and security. With autonomous agents becoming more embedded in core operations, boards, CFOs, and audit teams are taking a closer look at trust frameworks without wanting to slow innovation. The conversation highlights a critical inflection point: most companies are still in the “automating existing workflows” phase, but the real breakthroughs will come when AI enables entirely new business models.

    This episode is a snapshot of a sector in rapid evolution, where enthusiasm is tempered by lessons learned, and where the road from pilot to production is still full of challenges. You can read the full EY Technology Pulse poll here.

    3379: How MachineQ, a Comcast Company, is Powering the Future of IoT

    Play Episode Listen Later Aug 10, 2025 28:03


    In this episode of Tech Talks Daily, I speak with Steve Corbesero, Jr., Senior Director of Products and Solutions at MachineQ, a Comcast Company, about the findings from their new Lab of the Future survey. Conducted with Censuswide, the study gathered insights from more than 400 U.S.-based lab professionals, revealing both the opportunities and persistent pain points shaping modern laboratory operations.

    Steve unpacks why nearly 60% of labs still face unplanned downtime from equipment failures, missed calibration schedules, and asset location issues. We discuss how manual monitoring remains the norm for over half of labs surveyed, and why 14% have no monitoring system in place at all. Despite these challenges, there's a clear appetite for change, with 85% planning to adopt IoT solutions and 87% intending to integrate AI and machine learning into their workflows over the next two years.

    Our conversation explores the operational impact of disconnected systems, the risks of relying on spreadsheets, and the role of real-time monitoring in preventing costly disruptions. Steve shares practical examples of how IoT and AI are already helping labs shift from reactive to proactive management, from anomaly detection that reduces alert fatigue to intelligent data summaries that give lab managers actionable insights without hours of manual analysis.

    We also talk about the barriers holding some labs back from adoption, including pilot purgatory, budget constraints, and integration challenges. Steve offers his perspective on how future-ready labs will differentiate themselves by embracing connected infrastructure, unifying data, and embedding AI into both scientific and operational workflows. If you want to understand what's really happening inside modern labs, and how emerging technology can transform efficiency, productivity, and innovation, this episode offers a clear, data-backed view of the road ahead.

    3378: Inside QuantumScape's Mission to Deliver Safer, Faster-Charging EV Batteries

    Play Episode Listen Later Aug 10, 2025 24:02


    Electric vehicles promise a cleaner future, but battery performance remains one of the biggest bottlenecks. In this episode, Tim Holme, co-founder and CTO of QuantumScape, takes us inside the company's mission to build a new generation of solid-state lithium-metal batteries that could change the game for EVs and beyond.

    Tim explains why the industry has been stuck with incremental improvements to lithium-ion for decades and why replacing the graphite anode with lithium metal could unlock longer range, under-15-minute charging, and improved safety. He shares how QuantumScape's ceramic separator prevents the dendrite formation that has held back lithium-metal designs, and why this innovation can make batteries both more energy-dense and safer, even under extreme conditions.

    We also discuss the company's recent “COBRA” manufacturing breakthrough, which has increased separator production speed by roughly 200 times. This leap is key to scaling production for partners like Volkswagen's PowerCo and Murata. Tim outlines how QuantumScape is approaching commercialization through a capital-light licensing model, avoiding the pitfalls that have caused other U.S. battery innovations to be commercialized overseas.

    Beyond electric vehicles, Tim sees untapped potential for solid-state batteries in grid-scale storage and at EV charging stations, where they could buffer demand and reduce grid strain. He also reflects on the global battery race, why careful partner selection is essential for protecting IP, and how the U.S. can maintain leadership in next-generation energy storage. Whether you are interested in battery chemistry, clean tech innovation, or the business of scaling breakthrough hardware, Tim offers a rare look at the science, strategy, and partnerships shaping the future of energy storage.

    3377: SentiLink on How Deepfakes and AI Are Supercharging Identity Fraud

    Play Episode Listen Later Aug 9, 2025 34:25


    Tax season scams are nothing new, but David Maimon is tracking a worrying evolution. As head of Fraud Insights at SentiLink and a professor of fraud intelligence at Georgia State University, David has been studying how organised crime groups are now blending stolen identities with generative AI and deepfake technology to outpace traditional security measures. In this conversation, he explains how identities from some of the least likely victims, including death row inmates, are being exploited to open neobank accounts, set up fake businesses, and run sophisticated bust-out schemes with a low risk of detection.

    David breaks down how these operations work, from creating synthetic identities using stolen Social Security numbers to manufacturing convincing documents and faces that can pass liveness checks. He reveals the telltale signs his team uncovered, such as shared physical addresses, legacy email domains, and consistent digital fingerprints that point to coordinated fraud rings. With tools like DeepFaceLive, Avatarify, and cloned voices now being deployed to bypass authentication, he warns that the gap between criminal innovation and institutional defences can be as wide as 7 to 12 months.

    We also explore why financial institutions struggle to detect these scams early, and why layered verification, combining real-time checks with historical identity analysis, is essential. David shares the threats on the horizon, from increasingly realistic AI-generated images to voice cloning attacks, and stresses the need for both technological solutions and public awareness to slow the momentum of these schemes.

    Whether you work in banking, cybersecurity, or simply want to protect your own identity, this episode offers a rare look inside the tactics, tools, and vulnerabilities shaping the next wave of financial fraud. And yes, there is still time at the end for a great book recommendation and a classic Tom Petty track.

    3376: From Hoodies to Suits: Kadena's Blueprint for Digital Asset Adoption

    Play Episode Listen Later Aug 9, 2025 23:55


    In this episode of Tech Talks Daily, I speak with Annelise Osborne, Chief Business Officer at Kadena, former Wall Street executive, and author of From Hoodies to Suits: Innovating Digital Assets for Traditional Finance. Annelise brings more than two decades of leadership experience in finance, real estate, and digital assets, and she has spent her career bridging the worlds of institutional finance and emerging blockchain technology.

    Our conversation begins with her personal journey from commercial mortgage-backed securities to the frontlines of blockchain innovation. She shares how a regulatory task force invitation led her to deep-dive into ICOs back in 2017, sparking her fascination with the potential of blockchain to modernize financial markets. That curiosity eventually became a career shift, culminating in her work at Propellr, ARCA Labs, and now Kadena.

    We explore the key themes of her book, which seeks to break down the language barriers between “hoodies” (tech innovators) and “suits” (traditional finance professionals). Annelise makes the case for clear regulation, continuous education, and interoperability as the three essential pillars for mainstream blockchain adoption. She's candid about the misconceptions that persist in traditional finance and explains why payment solutions, tokenization, and back-office efficiencies are the practical entry points for institutions.

    Annelise also gives a behind-the-scenes look at Kadena's blockchain infrastructure, from its proof-of-work security model and low gas fees to its plans for EVM compatibility and Web2-like user experiences. She highlights real-world use cases in sports fan engagement, product lifecycle tracking, and on-chain lending, illustrating how DeFi and TradFi can merge under regulated frameworks.

    Throughout the discussion, one theme is constant: the importance of finding the right partners, speaking each other's language, and staying curious in a fast-changing industry. Whether you're a finance professional, a blockchain developer, or simply interested in where digital assets are headed, Annelise's insights offer a grounded yet forward-looking perspective on the future of finance.

    3375: How Dropbox is Helping to End Digital Clutter For Good

    Play Episode Listen Later Aug 8, 2025 29:08


    In this episode of Tech Talks Daily, I chat with Andy Wilson, Senior Director at Dropbox, to explore how AI is quietly but fundamentally changing how we work. Andy offers a deep dive into Dropbox Dash, an AI-powered search and knowledge management tool designed to cut through digital noise and reclaim the one thing most professionals are running out of: time.

    We unpack how Dash is solving the growing problem of content sprawl by integrating with over 60 SaaS platforms, offering AI-driven answers instead of just links. Andy describes it best: the internet lets you search all of human knowledge, but good luck finding that one deck from last quarter in your company's Google Drive. Dash aims to fix that. And it's already doing so for teams like McLaren F1, who use the platform to coordinate high-stakes race weekends with precision and security.

    We also discuss how Dropbox is adopting AI internally, from the acquisition of Reclaim.ai to enhancing virtual-first collaboration with intelligent scheduling and governance tools. For Dropbox, AI isn't a novelty. It's becoming the backbone of how distributed teams work better together. Andy likens it to having a personalized chief of staff, helping you prioritize and organize, and even reminding you when to take a break.

    Our conversation also touches on Andy's creative roots at the BBC and Aardman, and how those experiences shaped his product thinking today. He shares real examples of how Dropbox supports content creators and knowledge workers alike, including feedback tools like Dropbox Replay and user-generated storytelling workflows powered by simple file request features.

    If you're wrestling with digital overload, managing hybrid teams, or just wondering how AI can actually help without taking over, this episode is for you. Andy paints a grounded, human picture of the future of work, one where AI supports creativity and restores clarity, not chaos.

    3374: Concentrix CPO on AI, Trust, and the Next Generation of CX

    Play Episode Listen Later Aug 7, 2025 39:54


    Customer service used to be seen as a cost center. Something to manage, streamline, and, if possible, outsource or automate. But what happens when AI shifts that narrative entirely? In this episode of Tech Talks Daily, I sat down with Ryan Peterson, Chief Product Officer at Concentrix, to unpack how customer service is being reimagined as a revenue-driving engine. With 430,000 advisors handling millions of calls for brands across every major industry, Concentrix isn't theorizing about the future. They're building it.

    Ryan shares how blending human empathy with AI efficiency is creating faster resolutions, stronger loyalty, and in some cases, serious bottom-line results. One example saw upsell revenue rise from $500,000 to $1.6 million per month simply by supporting human agents with smart, context-aware AI assistants.

    We also talk about the evolution of the agent's role. Far from replacing jobs, AI is creating a new class of specialists called agent engineers. These are people responsible for maintaining, optimizing, and guiding AI systems that now work alongside human teams. This shift is opening doors for deeper personalization, real-time translation, and richer customer engagement across channels and geographies.

    Ryan also gives us a behind-the-scenes look at Concentrix's award-winning iX Hello platform, which was recently named Intelligent Personal Assistant of the Year. Supporting over 7,000 AI agents and offering robust security and integration capabilities, the platform is designed for scale and resilience. It's not just handling simple FAQs. From legal contract analysis to proactive service interventions, these AI agents are transforming how enterprise support works.

    We close with a forward-looking conversation about hyper-personalization, ethical AI integration, and the long-term role of trust in CX strategy. Ryan's optimism is clear, but it's grounded in metrics, use cases, and a deep understanding of what businesses actually need to deliver meaningful customer experiences. If you're curious about what customer service looks like when AI and humans collaborate effectively, or what it takes to move from handling complaints to driving conversion, this is the conversation you'll want to hear.

    3373: From Developer to CEO: Kirill Skrygan on Leading JetBrains into the AI Era

    Play Episode Listen Later Aug 6, 2025 25:09


    If you've ever used IntelliJ IDEA, PyCharm, or Rider, you've probably already felt the impact of JetBrains. But what happens when one of the world's most trusted software development toolmakers starts asking whether developers should completely rethink their identity? In this episode of Tech Talks Daily, I caught up with Kirill Skrygan, the CEO of JetBrains. Kirill's story is something many developers will admire. He joined JetBrains as a junior developer back in 2010 and steadily worked his way through the ranks. Along the way, he helped launch new products, led remote development during the pandemic, and is now steering the company into the AI era.

    This isn't just about adding AI features to tools. Kirill is challenging long-held assumptions about what it means to be a software engineer. He believes we're entering a new chapter where non-technical creators will build their own tools, and where proactive AI agents will help maintain and even update code automatically. It's a bold vision, but one grounded in practical experience and responsibility. JetBrains is used by over 11 million developers, including 88 of the Fortune Global Top 100, so the stakes are high.

    We explored how AI is lowering the barrier to software creation, what that means for traditional developers, and why Kirill believes the shift won't devalue their skills but rather evolve them. He shared insights into JetBrains' own AI agent, Junie, which is already being used in production environments. He also talked about Kineto, their no-code platform designed for a new generation of creators, including those who want to build apps without a computer science background.

    There's also an honest discussion around the friction points of this new wave, especially when non-engineers build tools that later need to be secured and maintained. Kirill didn't shy away from the complexity of that challenge, but he's optimistic that the industry can solve it. Toward the end of the conversation, we reflected on JetBrains' role as an independent company that doesn't chase hype or rely on VC funding. That independence gives them the freedom to build what developers actually need, rather than what might look good on a press release.

    This episode is a thoughtful look at the future of software development, leadership from inside the codebase, and the evolving relationship between humans and AI in one of tech's most foundational professions. If you're a developer, CTO, product manager, or simply curious about how AI is changing the craft of coding, this one's worth your time.
