In this episode of Scouting for Growth, Sabine VanderLinden welcomes industry veteran Karl Grandl, now of Miss Moneypenny Technologies, for a wide-ranging conversation on the real transformation underway in financial services and insurance. Sabine sets the stage by emphasizing that digitization is no longer enough—true change means re-architecting operating models for velocity, intelligence, and trust at scale. Together, they explore the pitfalls of strategic complacency, the opportunities created by European regulation, and the immense potential of intelligence layers and wallet technology to redefine how institutions interact with customers. The discussion moves from strategic leadership to practical use cases—from frictionless onboarding and claims to agentic customer experiences—offering a roadmap for both incumbents and challenger firms looking to thrive in the era of real-time risk and embedded governance.

KEY TAKEAWAYS

Reflecting on my conversation with Karl Grandl, what became clear is that transformation in financial services isn't just about digitizing legacy systems—it's about fundamentally re-architecting the industry. For decades, institutions like banks and insurers were built for stability, but the pace of change and customer expectations today demand real-time, intelligent, and seamless experiences. Simply layering new digital tools over old processes leads to fragmentation, not progress. We're stepping into the era of frontier firms: organizations powered by intelligence, human-agent collaboration, and embedded governance. As Karl emphasized, automation by itself doesn't mean autonomy or intelligence. Instead, success hinges on evolving operating models and creating trust at scale. Regulatory changes, particularly in Europe—such as the EU AI Act and the introduction of digital identity wallets—are not burdens, but strategic advantages.
They force discipline, drive infrastructure modernization, and create opportunities to offer frictionless experiences for 450 million citizens. Karl's insight into customer experience "activation layers" resonated deeply. True transformation is about orchestrating intelligent touchpoints so insurance feels invisible and effortless, yet highly trustworthy, especially at moments of service or claim. This approach preserves the value of brokers and advisors, enhancing their roles as strategic risk partners instead of replacing them. Finally, leadership, not technology, is at the heart of transformation. The ability to articulate a clear vision and quickly demonstrate value is what distinguishes the winners. Real-time governance, compliance by design, and empathetic human engagement are becoming essential to build—and keep—customer trust. The challenge for every executive now is not just to optimize yesterday's operations but to actively build tomorrow's intelligence layer. The frontier is being defined now, and it begins with a leadership mindset ready for structural redesign and velocity.

BEST MOMENTS

"Automation is not autonomy, efficiency is not intelligence, and digital channels without orchestration create digital fragmentation."

"European regulation is our unfair advantage. It's not just about discipline, it's about infrastructure."

"You have to evolve—from transaction intermediary into a strategic risk advisor, augmented by intelligence that handles routine so you can focus on relationships, empathy, and judgment."

"Governance is about to become the most strategic capability. When compliance agents and financial AI are embedded in every workflow, governance shifts from retrospective reporting to real-time intervention."

"The frontier firm is not defined by how much AI it deploys; it is defined by how intelligently it integrates risk, compliance, capital, and customer experience."
— Sabine VanderLinden

ABOUT THE GUEST

Karl Grandl is often dubbed an "insurance dinosaur," with over 30 years in the industry spanning Swiss Life, GetSafe, WeFox, and now Miss Moneypenny Technologies. His experience spans product development, distribution, and embedded insurance, as well as scaling tech-driven aggregators across markets. At Miss Moneypenny, Karl is spearheading the integration of wallet technology and intelligence layers, focusing on frictionless customer interaction and embedding trust and compliance by design. An advocate for regulation as a strategic advantage and transformation as a leadership imperative, Karl is a sought-after voice for both legacy insurers and challenger MGAs looking to build tomorrow's intelligence-driven operating models. Connect with him via LinkedIn or at upcoming events such as InsurTech Week and InsurTech Insights in London.

ABOUT THE HOST

Sabine VanderLinden is a corporate strategist turned entrepreneur and the CEO of Alchemy Crew Ventures. She leads venture-client labs that help Fortune 500 companies adopt and scale cutting-edge technologies from global tech ventures. A builder of accelerators, investor, and co-editor of the bestseller The INSURTECH Book, Sabine is known for asking the uncomfortable questions—about AI governance, risk, and trust. On Scouting for Growth, she decodes how real growth happens—where capital, collaboration, and courage meet. If this episode sparked your thinking, follow Sabine VanderLinden on LinkedIn, Twitter, and Instagram for more insights. And if you're interested in sponsoring the podcast, reach out to the team at hello@alchemycrew.ventures
Human Oversight, Transparency & What It Means for Global AI Regulation

In this episode, Todd (COO & CISO) and Nate (Director of Cybersecurity) discuss South Korea's new AI Basic Act and what it signals for AI regulation globally. They outline the law's focus on guardrails such as mandatory disclosure when AI is used, clear labeling of AI-generated content, human oversight in high-impact areas (like healthcare and critical infrastructure), and explainability around how AI-driven decisions are made. The conversation frames South Korea's approach as a middle ground between the EU's stricter governance (the EU AI Act) and the US's current emphasis on innovation and some AI deregulation, while noting the US has limited election-related rules around deepfakes. They discuss real-world risks like AI errors in allergen data, misleading or harmful AI content, and business impacts including compliance pressure, the importance of human validation, and maintaining an inventory of where AI is used to address "shadow AI." They also note that South Korea's fines appear relatively small compared to EU-style penalties, and that broader impact will depend on enforcement and how regulations evolve over the next several years.

00:00 AI Regulation Kickoff
00:26 South Korea AI Act Basics
01:17 Transparency and Labeling Rules
02:22 Elections and Deepfakes
03:21 Real World Risk Examples
04:39 Global Approaches EU US Korea
05:41 How Global Rules Shift
07:20 US Policy Tensions and Lawsuits
09:00 Business Compliance and Fines
10:10 Explainable AI in Banking
11:16 Regulated Marketing Disclosures
13:23 Shadow AI and Inventories
14:33 Explainable AI Expectations
14:46 Disclosure and AI Skepticism
15:50 AI in Customer Delivery
16:29 Code Risk and Accountability
17:18 Hallucinations in Compliance
18:03 Human in the Loop
18:59 Leadership and Validation
19:50 High Stakes Decisions
20:59 Regulation and Enforcement
21:34 Wrap Up and Next Episode
There is a question that sounds almost embarrassingly simple. After a vulnerability is discovered in a piece of widely used software — something like Log4Shell, which shook the security world and left hundreds of thousands of organizations exposed overnight — the question organizations scrambled to answer was this: where is this code, and what does it touch? Most couldn't answer it. Not the Fortune 500 companies. Not the government agencies. Not the critical infrastructure operators. Not the hospitals or the banks or the utilities. They had built and bought mountains of software over years and decades, and when the moment came to understand what was actually inside it, they were effectively blind. That gap is exactly what Daniel Bardenstein set out to close when he co-founded Manifest Cyber in 2023. And in a conversation on ITSPmagazine's Brand Highlight series, he made a case for technology transparency that is hard to argue with — not because it's technically complex, but because the analogy he draws is so strikingly obvious once you hear it. "If you want to buy a house, you get to go inside the house, do the home inspection," he said. "You want to buy food from the grocery store — you can look at the ingredients. Even our clothes tell you what they're made of, how to care for them, and where they're from." But software? The technology running hospital MRI machines, weapon systems, financial infrastructure, water delivery? No transparency required. No ingredient label. No inspection rights. Just trust. That trust, as Log4Shell demonstrated, is a vulnerability in itself. Bardenstein came to this problem with credentials that few founders in the space can claim. Before starting Manifest, he spent four and a half years in the US government leading large-scale cyber programs and serving as technology strategy lead at CISA — the Cybersecurity and Infrastructure Security Agency. 
He saw firsthand how defenders are perpetually at a disadvantage, operating without the basic visibility they need to do their jobs. His mission became building the tools to change that. The problem, he's quick to point out, has not improved in the years since Log4Shell. Software supply chain attacks have multiplied — XZ Utils, NPM Polyfill, and others following the same pattern: trusted software becomes the attack vector, and it spreads fast. Meanwhile, most security teams are still operating with SCA tools that generate noisy, overwhelming alerts and vendor risk programs built on Excel spreadsheets and questionnaires rather than actual empirical data about the security of what they're buying. "Security teams have a false sense of security," Bardenstein said. The gap between what organizations think they know and what they actually know about their software supply chains remains dangerously wide. Manifest Cyber addresses this across the full lifecycle. For organizations that build software, the platform maps every open source dependency, assesses it for risk, and ensures developers can write more secure code without losing velocity. For organizations that buy software — which is everyone — it finds risks before procurement, then continuously monitors every third party component so that when something breaks, they know the blast radius in seconds, not weeks. The timing matters. Regulation is catching up to the problem. The EU AI Act, the Cyber Resilience Act, and a growing body of global policy are beginning to demand exactly the kind of software supply chain transparency that Manifest is built to provide. Organizations that wait to build this capability will find themselves scrambling to comply — those that build it in now will have it as a competitive advantage. The ingredient label for software has always been missing. Manifest Cyber is writing it. 
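The "ingredient label" idea maps directly onto software bills of materials (SBOMs). As a minimal illustration only — this is not Manifest Cyber's product or API, and the SBOM below is a made-up two-component inventory — here is how an organization holding a CycloneDX-style SBOM could answer "where is this code?" for a component like log4j-core:

```python
# Hypothetical sketch: answering "where is this code, and what does it touch?"
# using an SBOM. The SBOM content here is invented for illustration; real
# SBOMs are emitted by build tooling in formats such as CycloneDX or SPDX.
import json

def find_component(sbom: dict, name: str) -> list[dict]:
    """Return every component entry in a CycloneDX-style SBOM matching `name`."""
    return [c for c in sbom.get("components", []) if c.get("name") == name]

# Toy SBOM for a single application.
sbom = json.loads("""
{
  "bomFormat": "CycloneDX",
  "components": [
    {"name": "log4j-core", "version": "2.14.1",
     "purl": "pkg:maven/org.apache.logging.log4j/log4j-core@2.14.1"},
    {"name": "jackson-databind", "version": "2.13.0",
     "purl": "pkg:maven/com.fasterxml.jackson.core/jackson-databind@2.13.0"}
  ]
}
""")

hits = find_component(sbom, "log4j-core")
for c in hits:
    print(f"{c['name']} {c['version']} -> {c['purl']}")
```

With SBOMs collected for every built and bought application, the same lookup run across the whole inventory is what turns a Log4Shell-style scramble into a query.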
________________________________________________________________ Marco Ciappelli interviews Daniel Bardenstein, CEO & Co-Founder of Manifest Cyber, for ITSPmagazine's Brand Highlight series. HOST Marco Ciappelli — Co-Founder & CMO, ITSPmagazine | Journalist, Writer & Branding Advisor
On this episode of The Cybersecurity Defenders Podcast, we speak with Chris Cochran, Field CISO & Vice President of AI Security at SANS Institute, about how to navigate the future of AI risk and security strategy.

Chris works at the intersection of cyber defense, AI safety, and emerging risk, where the threats are converging and the playbooks are still being written. His career has taken him from the Marine Corps to NSA, U.S. Cyber Command, the U.S. House of Representatives, Mandiant, and Netflix. Across every role, one throughline: understanding adversaries, building high-trust teams, and translating complex problems into strategies leaders can act on.

Today, Chris advises organizations, governments, and research institutions on AI governance, agentic threat preparedness, and unifying safety and security into a single discipline. He contributes to global standards efforts including the EU AI Act (via OWASP AI) and leads executive education on cybersecurity and AI strategy at SANS.

Support our show by sharing your favorite episodes with a friend, subscribe, and give us a rating or leave a comment on your podcast platform. This podcast is brought to you by LimaCharlie, maker of the SecOps Cloud Platform, infrastructure for SecOps where everything is built API first. Scale with confidence as your business grows. Start today for free at limacharlie.io
Is that real, or did an AI make it? In 2026, this question is becoming harder and harder to answer. Microsoft has now presented a new plan: an entire system of invisible watermarks, cryptographic metadata, and forensic fingerprints is meant to prove whether an image, a text, or a video came from an AI. Sounds good, but there is a catch: the whole thing only works if everyone else plays along. In this episode, we look at how Microsoft's blueprint is built technically, what Article 50 of the EU AI Act requires, and why the real challenge is not the technology but the question of whether an entire ecosystem can agree on common rules.
Innovation comes in many areas, and compliance professionals need not only to be ready for it but also to embrace it. Join Tom Fox, the Voice of Compliance, as he visits with top innovative minds, thinkers, and creators in the award-winning Innovation in Compliance podcast. In this episode, host Tom Fox welcomes Tim Khamzin, Founder & CEO of Vivox AI, to discuss building explainable, trusted AI agents for financial crime compliance teams. Tim describes his background in banking operations automation, including large-scale digital transformation and the development of compliance products, and explains how large language models since 2023–2024 enable the automation of unstructured compliance work without extensive model training. He outlines key challenges in AML/KYC operations—15% of bank headcount tied to compliance, heavy manual repetitive investigations across multiple systems, and cultural resistance to adopting technology. Tim emphasizes “explainability” through consistent, repeatable investigations with audit logs and screenshots that mirror human workflows, and “trust” through transparency, compliant vendor choices, and clear communication of limitations. Tim introduces Vivox compliance analyst, “Rachel,” a platform of collaborating agents that supports onboarding, customer due diligence, and false-positive reduction, improved via structured human feedback (thumbs up/down) to learn firm-specific standards. He explains how Vivox stays aligned with evolving regulations by engaging with bodies such as the UK FCA and tracking frameworks such as the EU AI Act and Singapore guidance, with a focus on auditability and explainability. Tim predicts most compliance work will shift to AI agents, with humans handling complex cases and a new role of “compliance engineer” emerging to configure and evaluate agents, alongside industry consolidation and operating-system-style vendor platforms. 
Key highlights:
From Banking Automation to Founding Vivox AI: The Opportunity in LLMs
What's Broken Today: Manual Investigations, Backlogs, and Culture Gaps
Explainable + Trusted AI: Audit Trails, Screenshots, and Transparency
Regulators' Top AI Concerns: Black Box, Bias, and 99% Accuracy
Inside 'Rachel': The AI Compliance Analyst & Human-in-the-Loop Feedback
The Future: Compliance Engineers, Agent "Operating Systems," and Consolidation

Resources:
Tim Khamzin on LinkedIn
Vivox AI

Innovation in Compliance was recently honored as the Number 4 podcast in Risk Management by 1,000,000 Podcasts.
Most companies struggle to modernize their data infrastructure without risking compliance breaches or massive overhauls—that is, until now. Matt Zoltow reveals how his startup turned the world of regulated industries upside down with a platform that keeps data in its own private cloud, yet delivers seamless AI-powered insights in real time. If your organization faces strict data sovereignty rules but still craves agility and security, this episode is your game-changer.

In this eye-opening conversation, Matt shares the origin story of IntelliPaaS and how a narrow focus on regulated enterprises helped pioneer a revolutionary approach to data integration. You'll discover the key principles that allow global companies—automotive giants, government agencies, and beverage producers—to connect 170 sites across Europe without ever compromising their sovereignty. Matt breaks down how their platform enables enterprises to maintain full control of data while integrating AI, automating supply chains, and ensuring compliance—no matter where they operate.

We break down how legacy systems and point-to-point integrations threaten to spiral out of control, creating unmanageable complexity and security risks. Matt shares concrete strategies for avoiding these pitfalls, emphasizing cloud-agnostic, flexible architectures that adapt to changing regulations like the EU AI Act or Korea's new AI law. You'll learn practical tactics for building trust with regulators, mastering regional data standards, and ensuring your organization stays ahead in the race for secure, compliant data innovation.

Failing to modernize your data architecture risks not just security breaches but missed opportunities in AI-driven decision making and supply chain optimization. This episode offers a blueprint to turn compliance constraints into a source of competitive advantage—empowering you to harness data without sacrificing control or flexibility.
Perfect for executives, founders, and IT leaders in regulated industries, this is essential listening to future-proof your data infrastructure.

Matt Zoltow is the CTO of IntelliPaaS, a leader in secure, compliant data integration solutions for highly regulated sectors. With a background spanning Germany, New Zealand, and Asia, Matt's entrepreneurial journey is defined by doing what's impossible—building platforms trusted globally despite complex regulatory environments. His insights combine deep industry expertise with real-world success stories, making this episode a must-listen for anyone aiming to lead with confidence in a data-driven world.
http://www.five1.de/podcast/ai-data-security

Everyone is talking about AI use cases. About agents. About automation. Hardly anyone talks seriously about what happens in the background: data security. In this episode, I speak with Christian Bühler and Joshua Zielinski about why security in the AI context is not a fringe IT topic but a strategic obligation. Because AI is nothing other than a data pipeline: what you put in comes back out amplified. And that is exactly where the risks lie.

We talk about:
• Shadow AI and unintentional data leaks by employees
• Why agents with too many privileges are more dangerous than hackers in hoodies
• Manipulated knowledge bases and false AI outputs
• Denial-of-wallet attacks and hidden cost risks
• Why the EU AI Act is more guardrail than brake on innovation
• And why governance has to be rethought in the AI era

The central message: security does not slow you down. It makes you capable of acting. If you introduce AI without thinking about privileges, data sources, and monitoring, you multiply your mistakes. If you set it up cleanly, AI becomes a competitive advantage. And that is exactly what this episode is about.

⏱️ Timestamps
00:00 – Recap: security assessment & reactions
01:20 – Why AI security is suddenly a C-suite issue
03:00 – Hype vs. regulation: where do companies stand?
04:30 – The imperfect user: the biggest security risk
06:00 – Shadow AI and data leaks
08:00 – Enterprise solutions vs. public AI
10:00 – Least privilege: why agents need limits
12:00 – On-prem vs. cloud: the reality for mid-sized companies
14:00 – EU AI Act: brake or guardrail?
17:00 – Manipulation of AI data as a real risk
20:00 – Governance in the AI era
23:00 – Typical security gaps in AI use cases
26:00 – Predictive maintenance & practical examples
29:00 – Monitoring, logging & knowledge bases
32:00 – AI workflows & inherited permissions
34:00 – White paper: the five most common AI security risks
35:30 – Conclusion: security as an enabler
In this episode, Janine Schäfer (specialist attorney for employment law) and Jens Grosser (attorney & business mediator) pick apart a typical everyday scene that can suddenly become highly risky: AI "just for fun" – alter a picture, have a quick laugh, done? This is exactly where it gets interesting, because a supposed joke can quickly turn into a real problem. From there, they move straight on to the next big question in the workplace: when AI tools are introduced, clear training and ground rules are needed. And this is exactly where the works council is called upon, together with the employer, to properly clarify content, process, and responsibilities. The two also discuss which passages of the EU AI Act are cited in connection with the training obligation – and why "just delete it" is often easier said than done in an AI context.

Topics of the episode:
AI image editing "for fun": why manipulated images can quickly become a problem (personality rights/data protection)
"If it's online, anything goes?" – typical misconceptions and real risks in everyday working life
Training obligation for AI: what the podcast explains about the EU AI Act (including the articles/terms mentioned)
AI competence: the minimum set of content discussed (opportunities, risks, informed use)
The practical question of deletion: what happens to images/data once they end up in tools or on the net?
The works council as a driver: its mandate under §80 BetrVG and how to remind the employer of its obligations
Putting it into practice: a training concept plus possible rules via works agreement (e.g. §87/§88 BetrVG)
Seminar recommendation: Mitbestimmung bei Künstlicher Intelligenz Teil 1: https://www.waf-seminar.de/538
This is a real estate podcast. But there is no part of the modern world that is insulated from advances in AI, as well as the negative consequences of AI. Today we're talking about the EU AI Act, and what it can mean for real estate investors who own property in Europe, especially if you use “smart” building tech like security cameras, access control, tenant screening, or building automation. I've long held the opinion that European legislation can often serve as a canary in the coal mine for what might happen elsewhere in the world. So we are looking at Europe's AI Act in that context. ------------**Real Estate Espresso Podcast:** Spotify: [The Real Estate Espresso Podcast](https://open.spotify.com/show/3GvtwRmTq4r3es8cbw8jW0?si=c75ea506a6694ef1) iTunes: [The Real Estate Espresso Podcast](https://podcasts.apple.com/ca/podcast/the-real-estate-espresso-podcast/id1340482613) Website: [www.victorjm.com](http://www.victorjm.com) LinkedIn: [Victor Menasce](http://www.linkedin.com/in/vmenasce) YouTube: [The Real Estate Espresso Podcast](http://www.youtube.com/@victorjmenasce6734) Facebook: [www.facebook.com/realestateespresso](http://www.facebook.com/realestateespresso) Email: [podcast@victorjm.com](mailto:podcast@victorjm.com) **Y Street Capital:** Website: [www.ystreetcapital.com](http://www.ystreetcapital.com) Facebook: [www.facebook.com/YStreetCapital](https://www.facebook.com/YStreetCapital) Instagram: [@ystreetcapital](http://www.instagram.com/ystreetcapital)
Learnovate, a leading global future of work and learning research hub in Trinity College Dublin, is leading a new Community of Practice for AI implementers and practitioners involved in teaching and learning. The Responsible AI for Learning (RAIL) initiative will allow practitioners to share knowledge, interpret guidelines, and comply with AI regulations. Learnovate is leading the RAIL initiative, which is made up of professionals from all four education domains, including schools, higher education, vocational education and training, and professional education, as well as representatives from the Department of Education, teaching unions, and other sectors. RAIL was formed in November last year when more than 50 professionals in the education sector came together in Trinity College Dublin to discuss the need for a collective interpretation of the AI Advisory Council's guidelines on the use of AI in education. There was also agreement at the meeting on the need for a facility to share knowledge, discuss the opportunities and risks accompanying the use of AI in education, and support each other in complying with the EU AI Act. RAIL will host its inaugural meeting on February 24 2026. The one-hour event is one of three virtual meetings set to take place this year, with a fourth in-person event to follow in November. Those wishing to attend the free event can register at www.learnovatecentre.org/events The February 24 meeting will be led by Dr Gill Ferrell, Executive Director for Europe of 1EdTech, a global organisation promoting and supporting education standards and protocols for K-12 through to higher education and professional education. She will deliver a presentation to the event entitled, 'A European and Global Perspective on AI in Education: Opportunity, Risk, and a Vision for the Future'. Dr Ferrell's expertise is in understanding, managing and guiding the use of technology in learning. 
She has held senior roles with Jisc, the agency that manages shared services for education institutions and provides advice and guidance to UK education, and has published research in curriculum, student data, social media, assessment and feedback, and design of learning spaces. She has also worked with Universities and Colleges Information Systems Association (UCISA) and European University Information Systems Association (EUNIS). The Community of Practice will be chaired in 2026 by Jonathan Dempsey, Commercial Lead for Diotima, an AI-enabled platform for formative assessment and feedback. Diotima supports teaching practice using responsible AI to provide learners with feedback, leading to more and better assessments and improved learning outcomes for students, and a more manageable workload for teachers. In 2025, Diotima received €500,000 in funding from the Enterprise Ireland Commercialisation Fund, which helps third-level researchers to translate their research into innovative and commercially viable products, services and companies. Diotima partnered with Learnovate in February last year and will spin out of Trinity College Dublin as a company in 2026. The Learnovate Centre at Trinity College Dublin is a leading global future of work and learning research hub funded by Enterprise Ireland and IDA Ireland. Learnovate Managing Director Nessa McEniff said: "Learnovate is delighted to lead the formation of Responsible AI for Learning, a new Community of Practice. The group was formed following the publication of the guidelines on the use of AI in education by the AI Advisory Council. Rather than try to interpret those guidelines in a silo, implementers and practitioners came together to establish a collective interpretation, share knowledge, and ensure compliance with AI regulations. We look forward to the inaugural virtual meeting of RAIL on February 24 2026, the first of four planned for 2026, including one in-person meeting in November." 
RAIL Chair and Diotima Commercial Lead Jonathan Dempsey said: "Everyone involved in schools, highe...
Using AI effectively: why strategy, tactics, and operations belong together

Many people talk about AI, tools, and quick automation. But the real question is a different one: why do some entrepreneurs and consultants manage to achieve real impact with AI, while others stay stuck in tinkering? That is exactly what this episode is about. It is not about the next tool, but about a clear model that connects operational execution, tactical clarity, and strategic direction.

Torsten Körting on LinkedIn: https://www.linkedin.com/in/torstenkoerting/

The three levels of effective AI work: operational, tactical, strategic

Effectiveness does not happen by chance. It happens when three levels work cleanly together. On the operational level, things get done: projects, processes, concrete use cases. The tactical level is the thinking space in between. Here, processes, products, customer experiences, and internal friction are reflected on. From this, sensible AI use cases emerge – not the other way around. The strategic level, finally, answers the decisive question of why. This is about business models, AI-first approaches, and visions of the future. Strategy sets the direction, tactics translates it into initiatives, operations makes it real. If one of these levels is missing, AI remains either theory or busywork.

AI is no cure-all: context, responsibility, and limits matter

AI can do a lot, but it is no miracle cure. It hallucinates, it makes mistakes, it needs context. That is exactly why it is dangerous to integrate AI into automated processes without reflection. Quality comes not from using a tool, but from clear questions, clean data, and critical thinking. On top of that come flanking topics that are often ignored: corporate culture, ethics, change, data protection, GDPR, compliance, and the EU AI Act. These aspects are non-negotiable. Anyone using AI seriously needs clarity here, not interpretation. Responsibility is not a nice-to-have but a prerequisite for sustainable use.

Why operational execution alone is not enough

Many companies start with AI on the operational level: a tool here, an automation there. That is strongly reminiscent of earlier automation waves, only faster. Ideas are cheap; execution is demanding. Without tactical evaluation and strategic embedding, it never gets past experimentation. Real impact only emerges when use cases are prioritized, thought through as business cases, and handed over cleanly. Projects need structure, responsibility, and people in the company who act as multipliers. AI unfolds its value when it is thought through strategically, tactically, and operationally at the same time: as a partner, not a toy.

Conclusion: out of the sandbox, into real impact

The era of the AI playground is over. Today it is about growth, about benefit, about measurable ROI. Anyone who wants to use AI effectively needs more than tools: a clear model, clean levels, responsibility, and the power to execute. The decisive question is not what AI can do, but where it can be used meaningfully, and why. That is exactly where entrepreneurial effectiveness with AI begins.

More from the Koertings ...

The KI-Café ... every Wednesday (>350 participants) from 08:30 to 10:00 ... online via Zoom ... free of charge, and not for nothing. Every Wednesday at 08:30, the KI-Café opens its online doors ... we solve AI use cases live on stage ... moderate expert panels on special topics (e.g. AI in recruiting ... AI in quality assurance ... AI in project management ... and much more) ... put new developments in the AI world into perspective and give an outlook ... invite experts on special topics ... and sometimes go deep and work through specific areas very concretely ... all for your progress. Sign up for free ... www.koerting-institute.com/ki-cafe/

Mit jedem Prompt ein WOW! ... for the self-employed and entrepreneurs. A clear guide for entrepreneurs, the self-employed, and decision-makers who want not just to understand artificial intelligence but to use it effectively. This book shows you how to identify relevant AI use cases and use AI as a genuine sparring partner to make them reality. Practical, with real examples, and fully geared toward implementation. The book is a gift; only shipping costs of €9.95 apply. Perfect for beginners and advanced users who want to realize their potential with AI. Get the book in your mailbox ... https://koerting-institute.com/shop/buch-mit-jedem-prompt-ein-wow/

The KI-Lounge ... our community for getting started with AI (>2800 members). The KI-Lounge is a community for everyone who wants to learn more about generative AI and put it to use. Members receive exclusive monthly AI updates, expert interviews, talks from the KI-Speaker-Slam, KI-Café recordings, and a three-hour ChatGPT course. Exchange ideas with more than 2800 AI enthusiasts, ask questions, and get going. Initiated by Torsten & Birgit Koerting, the KI-Lounge offers orientation and inspiration for getting started in the AI revolution. This is where the exchange happens ... www.koerting-institute.com/ki-lounge/

Start working with us 1:1. If you want to work with us directly and integrate AI into your business, book an appointment for a personal conversation. Together we will find answers to your questions and figure out how we can support you. Book your appointment with us now ... www.koerting-institute.com/termin/

More impulses, Netflix style ... If you are looking for more exciting impulses for your self-employment, head over to our impulses page now and let the many exciting impulses sink in. Pure inspiration ... www.koerting-institute.com/impulse/

The Koertings in your ears ... If you enjoyed this podcast episode, listen to more informative and exciting episodes ... you will find over 440 episodes here ... www.koerting-institute.com/podcast/

We look forward to accompanying you on your journey!
AI has officially moved from experimentation to execution—and regulation is racing to catch up.In this episode of Reimagining Cyber, Tyler Moffitt is joined by Matt Aldridge to unpack what the rapidly evolving AI regulatory landscape means for security teams, businesses, and managed service providers heading into 2026.From the EU AI Act and GDPR to California's CPRA and emerging rules around automated decision-making, they explore how governments are trying to balance innovation with safety, privacy, and accountability. The conversation dives into the real-world security implications of agentic AI, autonomous decision-making, biased training data, and the growing risks of AI systems operating with minimal oversight.Whether you're an enterprise security leader, an SMB, or an MSP supporting multiple customers, this episode breaks down why AI regulation is no longer a future concern—and what practical steps organizations should be taking now to reduce risk, protect data, and responsibly govern AI adoption.As featured on Million Podcasts' Best 100 Cybersecurity Podcasts Top 50 Chief Information Security Officer CISO Podcasts Top 70 Security Hacking Podcasts This list is the most comprehensive ranking of Cyber Security Podcasts online and we are honoured to feature amongst the best! Follow or subscribe to the show on your preferred podcast platform.Share the show with others in the cybersecurity world.Get in touch via reimaginingcyber@gmail.com
WHAT RESPONSIBLE AI MEANS FOR RECRUITERS? What is Responsible AI for Talent Acquisition Teams? is a practical, straight-talking podcast for recruiters who are already using AI—and now need to make sure they're using it properly. AI is no longer a future experiment in hiring. It's embedded in sourcing, screening, assessment, and workforce planning. The real question facing TA leaders today is not whether to use AI, but how to use it in a way that stands up to governance scrutiny, fairness expectations, and growing legal risk. Regulators, candidates, and internal stakeholders are all paying closer attention—and the margin for error is shrinking. This podcast explores the reality behind “responsible AI” in talent acquisition, cutting through vague principles and focusing on what recruiters actually need to know. We'll examine why so many organisations still lack formal AI governance, why confidence in bias reduction remains low, and what that means for teams deploying AI at scale. Drawing on 2024–2025 data and real-world TA use cases, the discussion will unpack the tension between automation, efficiency, and human accountability. Key areas covered include: • What responsible AI really means in a TA context • Governance frameworks recruiters should understand—even if legal owns them • Bias, fairness, and explainability in screening and assessment tools • Legal and regulatory risk, including emerging obligations under the EU AI Act and employment law • The role of recruiters as AI operators, not just end users • How to balance speed, cost savings, and candidate trust • What “human-in-the-loop” looks like in practice Listeners will learn how to evaluate their current AI stack, ask better questions of vendors, reduce risk exposure, and build hiring processes that are efficient, defensible, and fair. We're with Martyn Redstone, Head of Responsible AI & Industry Engagement (Warden.AI) & friends on Wednesday 4th February, 12pm GMT. 
Register by clicking on the green button (save my spot) and follow the channel here (recommended) Ep359 is sponsored by Oleeo AI is now used by 62% of companies for hiring, but rapid efficiency shouldn't come at the expense of fairness. Oleeo and Aptitude Research's new report highlights a major gap: only 20% of employers have fully established AI governance frameworks, which can lead to unintended bias. To keep things fair and compliant, 85% of recruiters demand final decision-making authority. Download 'Setting the Standard for Responsible AI: A Guide For Modern Recruiters' today to build a transparent, human-led strategy that uses AI responsibly.
In dieser Folge hören wir Aboubacar (Abou) Diallo und Ranjan Vitt vom globalen SAP Risk & Compliance Bereich über AI Governance und Standards in der Praxis. Die Experten erklären SAPs Governance und Responsible AI-Ansatz, der Ethik, Security und Compliance über den gesamten AI-Lebenszyklus integriert – basierend auf Standards wie ISO 42001 oder NIST AI RMF. Die jüngste ISO 42001-Zertifizierung von SAP für Lösungen wie Joule und AI Core wird ebenso erwähnt wie die Relevanz des EU AI Acts. Wir schauen auf die Relevanz für SAP Kunden, zum Beispiel bezüglich Transparenz, Sicherheit und Vertrauen in KI Lösungen. Kontroverse Meinungen zum Thema und Empfehlungen für weitere Vertiefung runden das Ganze ab. Wie immer freuen wir uns über Feedback und wenn Ihr den Podcast in Eurem Netzwerk teilt.LinksLinkedIn Abou Diallo: https://www.linkedin.com/in/aboubacar-dialloLinkedIn Ranjan Vitt: https://www.linkedin.com/in/ranjan-vittVerantwortungsvolle KI auf sap.com: https://www.sap.com/germany/products/artificial-intelligence/ai-ethics.htmlSAP Trust Center | ISO 42001 Zertifizierung und Statement of Applicability (SoA): https://www.sap.com/about/trust-center/certification-compliance/compliance-finder.html?sort=latest_desc&search=iso%2042001Everyday AI Podcasts: https://www.youreverydayai.com/
Send us a textResponsibility breaks where AI moves fastest, and that's exactly where we go today. Grant sits down with Daniel Ikem—strategic operator at the intersection of emerging technology, intellectual property, and public policy—to unpack how shadow AI, data limits, and legal gray zones collide inside modern organizations. From boardrooms pushing Copilot to teams quietly pasting prompts into other models, we trace how governance cracks form and why documentation, auditability, and accountability must evolve as quickly as the tools.Daniel shares firsthand insights from big-tech partnerships and from founding the Diverse IP Alliance, where he's helping HBCU and underrepresented students build fluency in AI and IP. We examine the core challenges leaders face: capturing tacit knowledge that models can't see, preventing biased historical data from influencing outcomes, and defining ownership of outputs when proprietary data mixes with external systems. We also tackle the jagged frontier of agentic AI—who's liable when autonomy kicks in—and the geopolitical reality that makes “slow down” easier to say than to implement.You'll walk away with pragmatic steps to act now: set clear policies on approved models and data access, capture critical processes that were never written down, design human-in-the-loop review for high-impact decisions, and build a living risk register that survives model updates. We compare U.S. uncertainty with GDPR and the EU AI Act to show where global benchmarks can guide you before domestic rules arrive. Above all, we make the case that governance is not just compliance—it's strategy, trust, and long-term resilience.If you care about AI governance, IP risk, bias, and building a talent pipeline that reflects the communities your systems will serve, this one's for you. 
Subscribe, share with a colleague who's wrestling with AI policy, and leave a review with your top governance question so we can tackle it next.Thanks for tuning in to this episode of Follow The Brand! We hope you enjoyed learning about the latest trends and strategies in Personal Branding, Business and Career Development, Financial Empowerment, Technology Innovation, and Executive Presence. To keep up with the latest insights and updates, visit 5starbdm.com. And don't miss Grant McGaugh's new book, First Light — a powerful guide to igniting your purpose and building a BRAVE brand that stands out in a changing world. - https://5starbdm.com/brave-masterclass/ See you next time on Follow The Brand!
Get featured on the show by leaving us a Voice Mail: https://bit.ly/MIPVM This episode with Michael Plettner explores how organisations move from AI curiosity to practical, business focused implementation. You will learn why user adoption and leadership support matter, how teams shift from generic Copilot use to targeted process improvement, and how the EU AI Act is pushing companies toward more mature and responsible AI practices.
On this special edition of The Data Day podcast, Ropes & Gray partner Rohan Massey—leader of the firm's data, privacy & cybersecurity practice and managing partner of the London office—is joined by counsel Edward Machin and associates Catherine Keeling and Suzie Wilson to celebrate the 19th World Data Protection Day and explore the evolving landscape of data, privacy, and cybersecurity regulation across the UK and EU in 2026. The discussion covers headline-making cybersecurity breaches, new compliance obligations under DORA and the Cyber Resilience Act, and the anticipated UK cyber bill. The panel also examines the regulatory outlook for AI, including key dates for the EU AI Act and the potential direction of the UK's AI Bill. Rounding out the conversation, the team highlights upcoming changes in digital regulation, such as the UK's Data (Use and Access) Act, the EU Data Act, and the Digital Omnibus package.
Recorded live at Cloud Connections, the Cloud Communications Alliance event in Delray Beach, Doug Green, Publisher of Technology Reseller News, spoke with Bill Placke, Co-Founder & President, Americas at SecurePII, about one of the most pressing challenges facing AI-driven communications today: how to scale AI while complying with global data privacy regulations—and how that challenge can become a competitive advantage. Placke explains that SecurePII was formed to address a growing structural problem in AI adoption. While organizations are eager to deploy AI and train large language models, regulatory uncertainty around personally identifiable information (PII) has stalled progress. Citing industry research showing that more than 60 percent of AI initiatives have been paused due to data privacy concerns, Placke argues that governance policies alone are not enough. Instead, SecurePII takes an architectural approach. At the core of SecurePII's solution is data minimization at the point of ingestion. The company's technology prevents sensitive information—such as credit card numbers, names, addresses, or social security numbers—from ever entering enterprise systems. SecurePII's existing PCI-focused offering already removes cardholder data from call flows, keeping organizations out of PCI scope entirely. The same approach is now being extended to broader categories of PII, enabling AI systems to operate and train on clean data streams that are free from regulated information. Placke emphasizes that this upstream architectural design fundamentally changes the compliance equation. Regulators and plaintiff attorneys, he notes, care about outcomes—not intent. If sensitive data never enters the system, compliance scope, audit costs, breach exposure, and regulatory risk are dramatically reduced. “Downstream controls don't scale with AI—architecture does,” Placke says, positioning data minimization as a foundation for both trust and growth. 
The discussion also highlights the role of consent and customer trust in an AI-enabled world. Rather than asking customers to consent to broad data use, SecurePII enables enterprises to clearly state that sensitive information is neither seen nor stored, while still allowing AI to learn from outcomes and sentiment. This approach removes what Placke calls the “creepy factor” associated with AI and personal data, while aligning with emerging frameworks such as the EU AI Act and long-standing NIST guidance. For MSPs, UCaaS providers, and channel partners, Placke frames compliance not as a cost center but as a revenue opportunity. By embedding privacy-preserving architectures into voice, AI, and communications solutions, service providers can differentiate themselves as trusted advisors—helping customers deploy AI safely, reduce regulatory exposure, and accelerate adoption. To learn more about SecurePII and its privacy-first AI architecture, visit https://www.securepii.cloud/.
Mike Oaten is the Founder and CEO of TIKOS, working on building AI assurance, explainability, and trustworthy AI infrastructure, helping organizations test, monitor, and govern AI models and systems to make them transparent, fair, robust, and compliant with emerging regulations.Cracking the Black Box: Real-Time Neuron Monitoring & Causality Traces // MLOps Podcast #358 with Mike Oaten, Founder and CEO of TIKOSJoin the Community: https://go.mlops.community/YTJoinInGet the newsletter: https://go.mlops.community/YTNewsletter// AbstractAs AI models move into high-stakes environments like Defence and Financial Services, standard input/output testing, evals, and monitoring are becoming dangerously insufficient. To achieve true compliance, MLOps teams need to access and analyse the internal reasoning of their models to achieve compliance with the EU AI Act, NIST AI RMF, and other requirements.In this session, Mike introduces the company's patent-pending AI assurance technology that moves beyond statistical proxies. He will break down the architecture of the Synapses Logger, a patent-pending technology that embeds directly into the neural activation flow to capture weights, activations, and activation paths in real-time.// BioMike Oaten serves as the CEO of TIKOS, leading the company's mission to progress trustworthy AI through unique, high-performance AI model assurance technology. A seasoned technical and data entrepreneur, Mike brings experience from successfully co-founding and exiting two previous data science startups: Riskopy Inc. (acquired by Nasdaq-listed Coupa Software in 2017) and Regulation Technologies Limited (acquired by mnAi Data Solutions in 2022).Mike's expertise spans data, analytics, and ML product and governance leadership. 
At TIKOS, Mike leads a VC-backed team developing technology to test and monitor deep-learning models in high-stakes environments, such as defence and financial services, so they comply with the stringent new laws and regulations.// Related LinksWebsite: https://tikos.tech/LLM guardrails: https://medium.com/tikos-tech/your-llm-output-is-confidently-wrong-heres-how-to-fix-it-08194fdf92b9Model Bias: https://medium.com/tikos-tech/from-hints-to-hard-evidence-finally-how-to-find-and-fix-model-bias-in-dnns-2553b072fd83Model Robustness: https://medium.com/tikos-tech/tikos-spots-neural-network-weaknesses-before-they-fail-the-iris-dataset-b079265c04daGPU Optimisation: https://medium.com/tikos-tech/400x-performance-a-lightweight-open-source-python-cuda-utility-to-break-vram-barriers-d545e5b6492fHyperbolic GPU Cloud: app.hyperbolic.ai.Coding Agents Conference: https://luma.com/codingagents~~~~~~~~ ✌️Connect With Us ✌️ ~~~~~~~Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExploreJoin our Slack community [https://go.mlops.community/slack]Follow us on X/Twitter [@mlopscommunity](https://x.com/mlopscommunity) or [LinkedIn](https://go.mlops.community/linkedin)] Sign up for the next meetup: [https://go.mlops.community/register]MLOps Swag/Merch: [https://shop.mlops.community/]Connect with Demetrios on LinkedIn: /dpbrinkmConnect with Mike on LinkedIn: /mike-oaten/Timestamps:[00:00] Regulations as Opportunity[00:25] Regulation Compliance Fun[02:49] AI Act Layers Explained[05:19] Observability in Systems vs ML[09:05] Risk Transfer in AI[11:26] LLMs and Model Approval[14:53] LLMs in Finance[17:17] Hyperbolic GPU Cloud Ad[18:16] Stakeholder Alignment and Tech[22:20] AI in Regulated Environments[28:55] Autonomous Boat Regulations[34:20] Data Compliance Mapping[39:11] Data Capture Strategy[41:13] EU AI Act Insights[44:52] Wrap up[45:45] Join the Coding Agents Conference!
A conversation with Jody Elliott, Head of IT Risk and Sustainability at National Grid, on embedding responsible AI and sustainability at scale. In this episode of the ServiceNow Executive Circle Podcast, Jody Elliott, Head of IT Risk and Sustainability at National Grid, explores how sustainability, risk, and AI can work together to create real business value. Operating across critical national infrastructure in the UK and US, Jody shares why “good green operations are just good operations” and why sustainability and responsible AI must be embedded by design, not treated as standalone initiatives. The conversation covers: How IT sustainability, e-waste policy, emissions reduction, and regulatory compliance can drive efficiency, trust, and innovation Practical, real-world AI use cases, including how generative AI can improve compliance and risk oversight by analysing large volumes of unstructured data The importance of AI literacy, human-in-the-loop governance, and strong control frameworks How organisations should prepare for emerging regulation, including the EU AI Act and CSRD What’s next—from rising energy demand and supply chain risk to the next wave of responsible technology Tune in now for a practical, real-world look at how responsible AI and sustainability can drive efficiency, trust, and innovation at scale. If you’ve got an idea for a topic, would like to propose a guest for the show or discuss any of the points raised in this episode with a ServiceNow representative, just send an email to executivecircleuki@servicenow.com And if you are not already an EXECUTIVE CIRCLE member and would like to learn more about our exclusive membership and all the benefits it brings, please visit. See omnystudio.com/listener for privacy information.
Get featured on the show by leaving us a Voice Mail: https://bit.ly/MIPVM John Rood shares how organisations can unlock real value from AI by balancing innovation, governance, and compliance. Learn why robust frameworks, practical training, and a bottom-up approach are key to sustainable AI adoption and risk management.
What happens when decades of clinical research experience collide with a regulatory environment that is changing faster than ever? In this episode of Tech Talks Daily, I sat down with Dr Werner Engelbrecht, Senior Director of Strategy at Veeva Systems, for a wide-ranging conversation that explores how life sciences organizations across Europe are responding to mounting regulatory pressure, rapid advances in AI, and growing expectations around transparency and patient trust. Werner brings a rare perspective to this discussion. His career spans clinical research, pharmaceutical development, health authorities, and technology strategy, shaped by firsthand experience as an investigator and later as a senior industry leader. That background gives him a grounded, practical view of what is actually changing inside pharma and biotech organizations, beyond the headlines around AI Acts, data rules, and compliance frameworks. We talk openly about why regulations such as GDPR, the EU AI Act, and ACT-EU are creating real pressure for organizations that are already operating in highly controlled environments. But rather than framing compliance as a blocker, Werner explains why this moment presents an opening for better collaboration, stronger data foundations, and more consistent ways of working across internal teams. According to him, the real challenge is less about technology and more about how companies manage data quality, align processes, and break down silos that slow everything from trial setup to regulatory response times. Our conversation also digs into where AI is genuinely making progress today in life sciences and where caution still matters. Werner shares why drug discovery and non-patient-facing use cases are moving faster, while areas like trial execution and real-world patient data still demand stronger evidence, cleaner datasets, and clearer governance. 
His perspective cuts through hype and focuses on what is realistic in an industry where patient safety remains the defining responsibility. We also explore patient recruitment, decentralized trials, and the growing complexity of diseases themselves. Advances in genomics and diagnostics are reshaping how trials are designed, which in turn raises questions about access to electronic health records, data harmonization across Europe, and the safeguards regulators care about most. Werner connects these dots in a way that highlights both the operational strain and the long-term upside. Toward the end, we look ahead at emerging technologies such as blockchain and connected devices, and how they could strengthen data integrity, monitoring, and regulatory confidence over time. It is a thoughtful discussion that reflects both optimism and realism, rooted in lived experience rather than theory. If you are working anywhere near clinical research, regulatory affairs, or digital transformation in life sciences, this episode offers a clear-eyed view of where the industry stands today and where it may be heading next. How should organizations turn regulation into momentum instead of resistance, and what will it take to earn lasting trust from patients, partners, and regulators alike? Useful Links Connect with Dr Werner Engelbrecht Learn more about Veeva Systems Viva Summit Europe and Viva Summit USA Follow on LinkedIn Thanks to our sponsors, Alcor, for supporting the show.
Episode 144Happy New Year! This is one of my favorite episodes of the year — for the fourth time, Nathan Benaich and I did our yearly roundup of AI news and advancements, including selections from this year's State of AI Report.If you've stuck around and continue to listen, I'm really thankful you're here. I love hearing from you.You can find Nathan and Air Street Press here on Substack and on Twitter, LinkedIn, and his personal site. Check out his writing at press.airstreet.com.Find me on Twitter (or LinkedIn if you want…) for updates on new episodes, and reach me at editor@thegradient.pub for feedback, ideas, guest suggestions.Outline* (00:00) Intro* (00:44) Air Street Capital and Nathan world* Nathan's path from cancer research and bioinformatics to AI investing* The “evergreen thesis” of AI from niche to ubiquitous* Portfolio highlights: Eleven Labs, Synthesia, Crusoe* (03:44) Geographic flexibility: Europe vs. the US* Why SF isn't always the best place for original decisions* Industry diversity in New York vs. 
San Francisco* The Munich Security Conference and Europe's defense pivot* Playing macro games from a European vantage point* (07:55) VC investment styles and the “solo GP” approach* Taste as the determinant of investments* SF as a momentum game with small information asymmetry* Portfolio diversity: defense (Delian), embodied AI (Syriact), protein engineering* Finding entrepreneurs who “can't do anything else”* (10:44) State of AI progress in 2025* Momentous progress in writing, research, computer use, image, and video* We're in the “instruction manual” phase* The scale of investment: private markets, public markets, and nation states* (13:21) Range of outcomes and what “going bad” looks like* Today's systems are genuinely useful—worst case is a valuation problem* Financialization of AI buildouts and GPUs* (14:55) DeepSeek and China closing the capability gap* Seven-month lag analysis (Epoch AI)* Benchmark skepticism and consumer preferences (”Coca-Cola vs. Pepsi”)* Hedonic adaptation: humans reset expectations extremely quickly* Bifurcation of model companies toward specific product bets* (18:29) Export controls and the “evolutionary pressure” argument* Selective pressure breeds innovation* Chinese companies rushing to public markets (Minimax, ZAI)* (21:30) Reasoning models and test-time compute* Chain of thought faithfulness questions* Monitorability tax: does observability reduce quality?* User confusion about when models should “think”* AI for science: literature agents, hypothesis generation* (23:53) Chain of thought interpretability and safety* Anthropomorphization concerns* Alignment faking and self-preservation behaviors* Cybersecurity as a bigger risk than existential risk* Models as payloads injected into critical systems* (27:26) Commercial traction and AI adoption data* Ramp data: 44% of US businesses paying for AI (up from 5% in early 2023)* Average contract values up to $530K from $39K* State of AI survey: 92% report productivity gains* The “slow 
takeoff” consensus and human inertia* Use cases: meeting notes, content generation, brainstorming, coding, financial analysis* (32:53) The industrial era of AI* Stargate and XAI data centers* Energy infrastructure: gas turbines and grid investment* Labs need to own models, data, compute, and power* Poolside's approach to owning infrastructure* (35:40) Venture capital in the age of massive GPU capex* The GP lives in the present, the entrepreneur in the future, the LP in the past* Generality vs. specialism narratives* “Two or 20”: management fees vs. carried interest* Scaling funds to match entrepreneur ambitions* (40:10) NVIDIA challengers and returns analysis* Chinese challengers: 6x return vs. 26x on NVIDIA* US challengers: 2x return vs. 12x on NVIDIA* Grok acquired for $20B; Samba Nova markdown to $1.6B* “The tide is lifting all boats”—demand exceeds supply* (44:06) The hardware lottery and architecture convergence* Transformer dominance and custom ASICs making a comeback* NVIDIA still 90–95% of published AI research* (45:49) AI regulation: Trump agenda and the EU AI Act* Domain-specific regulators vs. 
blanket AI policy* State-level experimentation creates stochasticity* EU AI Act: “born before GPT-4, takes effect in a world shaped by GPT-7”* Only three EU member states compliant by late 2025* (50:14) Sovereign AI: what it really means* True sovereignty requires energy, compute, data, talent, chip design, and manufacturing* The US is sovereign; the UK by itself is not* Form alliances or become world-class at one level of the stack* ASML and the Netherlands as an example* (52:33) Open weight safety and containment* Three paths: model-based safeguards, scaffolding/ecosystem, procedural/governance* “Pandora's box is open”—containment on distribution, not weights* Leak risk: the most vulnerable link is often human* Developer–policymaker communication and regulator upskilling* (55:43) China's AI safety approach* Matt Sheehan's work on Chinese AI regulation* Safety summits and China's participation* New Chinese policies: minor modes, mental health intervention, data governance* UK's rebrand from “safety” to “security” institutes* (58:34) Prior predictions and patterns* Hits on regulatory/political areas; misses on semiconductor consolidation, AI video games* (59:43) 2026 Predictions* A Chinese lab overtaking US on frontier (likely ZAI or DeepSeek, on scientific reasoning)* Data center NIMBYism influencing midterm politics* (01:01:01) ClosingLinks and ResourcesNathan / Air Street Capital* Air Street Capital* State of AI Report 2025* Air Street Press — essays, analysis, and the Guide to AI newsletter* Nathan on Substack* Nathan on Twitter/X* Nathan on LinkedInFrom Air Street Press (mentioned in episode)* Is the EU AI Act Actually Useful? 
— by Max Cutler and Nathan Benaich* China Has No Place at the UK AI Safety Summit (2023) — by Alex Chalmers and Nathan BenaichResearch & Analysis* Epoch AI: Chinese AI Models Lag US by 7 Months — the analysis referenced on the US-China capability gap* Sara Hooker: The Hardware Lottery — the essay on how hardware determines which research ideas succeed* Matt Sheehan: China's AI Regulations and How They Get Made — Carnegie EndowmentCompanies Mentioned* Eleven Labs — AI voice synthesis (Air Street portfolio)* Synthesia — AI video generation (Air Street portfolio)* Crusoe — clean compute infrastructure (Air Street portfolio)* Poolside — AI for code (Air Street portfolio)* DeepSeek — Chinese AI lab* Minimax — Chinese AI company* ASML — semiconductor equipmentOther Resources* Search Engine Podcast: Data Centers (Part 1 & 2) — PJ Vogt's two-part series on XAI data centers and the AI financing boom* RAAIS Foundation — Nathan's AI research and education charity Get full access to The Gradient at thegradientpub.substack.com/subscribe
We're back! Explore the future of artificial intelligence in the year 2026 with Patrick McGarry, Federal Chief Data Officer at ServiceNow, and Dr. Jupiter Bakakeu, Lead Generative AI Technologist at Alteryx. This milestone 200th episode examines the critical shift from AI as an answer machine to AI as an autonomous work agent capable of executing tasks independently. Learn about the four characteristics of AI agents (perceive, reflect, act, learn), discover which tasks organizations should and shouldn't delegate to AI, and understand why modernization, trust, and governance matter more than model selection. Panelists:Patrick McGarry, Federal Chief Data Officer @ ServiceNow - LinkedInJupiter Bakakeu, Lead Generative AI Technologist @ Alteryx - LinkedInJoshua Burkhow, Chief Evangelist @ Alteryx - @JoshuaB, LinkedInShow notes: ServiceNowAlteryxData.world"Beyond the Algorithm" by Patrick McGarry (upcoming publication) Interested in sharing your feedback with the Alter Everything team? Take our feedback survey here!This episode was produced by Cecilia Murray, Mike Cusic, and Matt Rotundo. Special thanks to Andy Uttley for the theme music.
In a world where climate change is reshaping the way we grow, transport, and consume the things we rely on, understanding the first mile of supply chains has never been more critical. That's the stage where over 60 per cent of risks arise, yet it remains the hardest to measure and manage. In a recent episode of Tech Transform, Trisha Pillay sits down with Jonathan Horn, co-founder and CEO of Treefera, to explore how artificial intelligence is providing clarity, actionable insights, and sustainable solutions for this complex ecosystem.The First Mile and Climate PressuresHorn's perspective comes from a mix of experience: growing up on a farm, studying physics, and working in investment banking. That combination gives him a lens on both the natural systems that underpin agriculture and the data-driven tools that help manage risk.Extreme weather patterns like droughts, heavy rainfall, and hurricanes are putting pressure on crops such as cocoa, coffee, wheat, and soy. The consequences ripple outward: production costs rise, commodity prices fluctuate, and supply chains become less predictable. A simple example illustrates this clearly: certain chocolate biscuits in the UK have moved from being chocolate-filled to chocolate-flavoured, reflecting disruptions in cocoa production in West Africa caused by extreme weather and disease. These changes are not isolated; they affect global markets and everyday products.Turning Data into Actionable InsightsAI can help make sense of the complexity. Treefera, for instance, combines satellite imagery, sensor data, and other datasets to provide insights on crop yields, supply risks, and climate impacts. Horn describes it like a car dashboard: “You don't need to know every technical detail to understand what's happening and act accordingly.”The value of AI lies not in flashy algorithms but in its ability to translate raw data into practical decision-making tools. 
By analysing multiple signals, from weather events to agricultural output, AI can highlight trends, flag potential disruptions, and support planning for traders, insurers, or supply chain managers. The goal is clarity and action, not simply more information.

Data, Regulation, and Responsible Use
Alongside operational complexity, organisations face questions about data governance. Emerging regulations such as the EU AI Act aim to ensure AI is used responsibly, and companies need to maintain control over proprietary information while leveraging technology effectively. Horn stresses the importance of frugal, transparent AI applications that produce meaningful insights without unnecessary complexity. In practice, this means balancing innovation with compliance: using AI to understand risks, improve planning, and support sustainability without overstating its capabilities or creating new vulnerabilities. The conversation underlines a key point: the impact of AI is most tangible when it's applied thoughtfully, in service of real-world decisions.

In short, AI is helping organisations navigate the increasingly unpredictable intersection of climate, risk, and supply chain complexity. The first mile, long a blind spot, is becoming visible not through hype or marketing claims, but through practical, data-driven insight that helps people respond to the world as it is, not as we wish it to be.

Takeaways
- AI can significantly improve the management of supply chains.
- Climate change is causing more extreme weather patterns, affecting agriculture.
- Data sovereignty is crucial for companies to maintain...
**Advertisement** Artificial intelligence is considered the most important technology of our time. Whether it gives cause for fear or hope will depend on how well we succeed in making AI available safely and responsibly.
Artificial Intelligence (AI) is no longer a futuristic concept in life sciences—it's here, transforming drug safety and pharmacovigilance at an unprecedented pace. But as the industry embraces automation and advanced analytics, one truth stands firm: innovation without governance is a risk no one can afford. In this episode of The Top Line, we explore how AI is reshaping drug safety while governance sets the guardrails for ethical, compliant, and sustainable adoption. Our featured guest, Marie Flanagan, Regulatory and AI Governance Lead at IQVIA Safety Technologies, dives deep into why governance isn't just a checkbox—it's the backbone of responsible AI deployment in healthcare.

Key Themes You'll Discover:

The Dual Imperative: Innovation and Oversight
AI promises speed, accuracy, and scalability in pharmacovigilance, from case intake to signal detection. Yet, without robust governance frameworks, these benefits can quickly turn into liabilities. Learn why embedding governance into AI design from day one is critical—and why retrofitting controls later is a recipe for risk.

Shared Responsibility Across Stakeholders
Governance isn't the job of one team. Flanagan explains how compliance, technology, business units, and regulators must collaborate to ensure AI systems are ethically designed, technically validated, and transparent. Discover how this multi-layered approach builds trust and resilience in a rapidly evolving regulatory landscape.

Principles That Matter: Human Oversight, Fairness, Accountability
From mitigating bias to ensuring explainability, guiding principles are more than buzzwords—they're operational necessities. We unpack how these principles translate into practical steps, including continuous monitoring, feedback loops, and adaptive controls that evolve with technology.

Compliance as an Enabler, Not a Barrier
Traditionally seen as a brake on innovation, compliance teams can become catalysts for safe progress.
Hear how organizations are reframing compliance roles to support innovation while safeguarding patient safety and regulatory integrity.

Global Regulatory Context
With frameworks like the EU AI Act and U.S. federal memoranda shaping expectations, companies must navigate a complex web of international standards. Flanagan shares insights on harmonizing governance strategies across jurisdictions without slowing down innovation.

Practical Playbook for Life Sciences
From zero-touch case processing to agentic AI in signal workflows, we spotlight real-world use cases that demonstrate how governance and technology can coexist—and thrive. Learn how IQVIA's Vigilance Platform integrates governance into every layer of AI-driven pharmacovigilance.

Why This Matters Now
AI adoption in pharma is accelerating, with 85% of top companies prioritizing AI as a strategic imperative. But speed without structure can lead to compliance gaps, reputational damage, and patient risk. This episode equips you with the insights to innovate responsibly, ensuring your AI journey is both ambitious and anchored in governance.

Tune in for actionable strategies, expert perspectives, and a candid conversation on the future of drug safety in the AI era. Whether you're a regulatory leader, technology strategist, or business decision-maker, this episode will help you balance the promise of AI with the principles that protect patients and preserve trust.
Traditional vulnerability management is simple: find the flaw, patch it, and verify the fix. But what happens when the "asset" is a neural network that has learned something ethically wrong? In this episode, Sapna Paul (Senior Manager at Dayforce) explains why there are no "Patch Tuesdays" for AI models. Sapna breaks down the three critical layers of AI vulnerability management: protecting production models, securing the data layer against poisoning, and monitoring model behavior for technically correct but ethically flawed outcomes. We discuss how to update your risk register to speak the language of business and the essential skills security professionals need to survive in an AI-first world. The conversation also covers practical ways to use AI within your security team to combat alert fatigue, the importance of explainability tools like SHAP and LIME, and how to align with frameworks like the NIST AI RMF and the EU AI Act.

Guest Socials - Sapna's LinkedIn
Podcast Twitter - @CloudSecPod
If you want to watch videos of this LIVE STREAMED episode and past episodes, check out our other Cloud Security social channels: Cloud Security Podcast - YouTube, Cloud Security Newsletter. If you are interested in AI Security, you can check out our sister podcast, AI Security Podcast.

Questions asked:
(00:00) Introduction
(02:00) Who is Sapna Paul?
(02:40) What is Vulnerability Management in the Age of AI?
(05:00) Defining the New Asset: Neural Networks & Models
(07:00) The 3 Layers of AI Vulnerability (Production, Data, Behavior)
(10:20) Updating the Risk Register for AI Business Risks
(13:30) Compliance vs. Innovation: Preventing AI from Going Rogue
(18:20) Using AI to Solve Vulnerability Alert Fatigue
(23:00) Skills Required for Future VM Professionals
(25:40) Measuring AI Adoption in Security Teams
(29:20) Key Frameworks: NIST AI RMF & EU AI Act
(31:30) Tools for AI Security: Counterfit, SHAP, and LIME
(33:30) Where to Start: Learning & Persona-Based Prompts
(38:30) Fun Questions: Painting, Mentoring, and Vegan Ramen
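The explainability tools mentioned in the episode, SHAP and LIME, both attribute a model's prediction to individual input features. The core idea behind LIME can be shown in a self-contained sketch: perturb the inputs around one instance, query the black-box model, and fit a locally weighted linear surrogate whose coefficients act as local feature importances. This is a from-scratch illustration of the technique, not the `lime` library's API; the black-box function and all names are invented for the example.

```python
import numpy as np

# Toy "black box": prediction depends strongly on feature 0, weakly on feature 1.
def black_box(X):
    return 3.0 * X[:, 0] + 0.1 * X[:, 1] ** 2

def lime_style_explanation(predict_fn, x, n_samples=2000, kernel_width=0.75, seed=0):
    """Fit a locally weighted linear surrogate around instance x.

    Returns one coefficient per feature: the local importance of that
    feature for the black-box prediction near x.
    """
    rng = np.random.default_rng(seed)
    # 1. Perturb the instance with Gaussian noise and query the model.
    Z = x + rng.normal(scale=0.5, size=(n_samples, x.size))
    y = predict_fn(Z)
    # 2. Weight samples by proximity to x (RBF kernel).
    d = np.linalg.norm(Z - x, axis=1)
    w = np.exp(-(d ** 2) / kernel_width ** 2)
    # 3. Weighted least squares: scale rows by sqrt(weights), add intercept column.
    A = np.hstack([Z - x, np.ones((n_samples, 1))]) * np.sqrt(w)[:, None]
    b = y * np.sqrt(w)
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coef[:-1]  # drop the intercept

x0 = np.array([1.0, 2.0])
importances = lime_style_explanation(black_box, x0)
# Feature 0 dominates the local explanation (coefficient near 3.0),
# matching the model's strong linear dependence on it.
```

Production tools add pieces omitted here (interpretable binary representations, feature selection, categorical handling), but the perturb-query-fit loop is the essence of the method the episode refers to.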
This interview was recorded for the GOTO Book Club.
http://gotopia.tech/bookclub
Check out more here: https://gotopia.tech/episodes/409

Dr. Larysa Visengeriyeva - Author of "The AI Engineer's Guide to Surviving the EU AI Act" & Independent Consultant for EU AI Act Engineering
Barbara Lampl - Behavioral Mathematician at empathic business by Barbara Lampl

RESOURCES
Larysa
https://x.com/visenger
https://bsky.app/profile/visenger.bsky.social
https://github.com/visenger
https://www.linkedin.com/in/larysavisenger
Barbara
https://x.com/BarbaraLampl
https://www.linkedin.com/in/barbaralampl
https://barbara-lampl.tumblr.com
Links
https://ml-ops.org
https://github.com/visenger/awesome-mlops
https://eur-lex.europa.eu/eli/reg/2024/1689/oj/eng
https://machinelearningcanvas.com
https://louisdorard.gumroad.com/l/mlcanvas
https://ml-ops.org/content/crisp-ml

DESCRIPTION
Barbara Lampl interviews Larysa Visengeriyeva, software engineer and "godmother of MLOps", about her new book on AI engineering and compliance. What starts as a discussion about the EU AI Act quickly reveals a deeper truth: the real challenge isn't regulatory compliance - it's fundamental engineering practices. Larysa argues that quality AI systems require robust MLOps, comprehensive documentation, and proper data governance, whether regulation mandates it or not. Drawing from frameworks like CRISP-ML and the Machine Learning Canvas, the book provides practical checklists and methodologies for taking AI projects from prototype to production.
Written partially in Ukraine during wartime, this "battle-tested" guide addresses the gap between technical and non-technical stakeholders, offering a common language for building sustainable AI systems.

RECOMMENDED BOOKS
Larysa Visengeriyeva • The AI Engineer's Guide to Surviving the EU AI Act • https://amzn.to/42SKOuU
Lakshmanan, Robinson & Munn • Machine Learning Design Patterns • https://amzn.to/4ox4Eos
Phil Winder • Reinforcement Learning • https://amzn.to/3t1S1VZ
Diana Montalion • Learning Systems Thinking • https://amzn.to/3ZpycdJ
Bernd Rücker • Practical Process Automation • https://amzn.to/3cs3BSH
Lauren Maffeo • Designing Data Governance from the Ground Up • https://amzn.to/3QhIlnV
Katharine Jarmul • Practical Data Privacy • https://amzn.to/46XPrns
Zhamak Dehghani • Data Mesh • https://amzn.to/3tTCwAC
Kate Stanley & Mickael Maison • Kafka Connect • https://amzn.to/40Jq5Jz

Bluesky, Twitter, Instagram, LinkedIn, Facebook

CHANNEL MEMBERSHIP BONUS
Join this channel to get early access to videos & other perks: https://www.youtube.com/channel/UCs_tLP3AiwYKwdUHpltJPuA/join
Looking for a unique learning experience? Attend the next GOTO conference near you! Get your ticket: gotopia.tech
SUBSCRIBE TO OUR YOUTUBE CHANNEL - new videos posted daily!
Professor Alex 'Sandy' Pentland, one of the most renowned computational scientists in the world, joins Vasant Dhar in Episode 102 of Brave New World to discuss the state and development of human-centric AI.

Useful Resources: 1. Alex 'Sandy' Pentland. 2. Stanford Research Institute. 3. MIT Media Lab. 4. Distributed Computing, Blockchain. 5. Nature Magazine, Nature Machine Intelligence. 6. The Hard Problem Of Consciousness. 7. Shared Wisdom: Cultural Evolution In The Age Of AI: Alex Pentland. 8. Brave New World Episode 101: Deepak Chopra On Consciousness and Reality. 9. Digital Dharma: How AI Can Elevate Spiritual Intelligence and Personal Well-Being - Deepak Chopra. 10. Awakening: The Path to Freedom and Enlightenment - Deepak Chopra. 11. Sharing The Wisdom Of Time: Pope Francis. 12. UN, Sustainable Development Goals. 13. Jonathan Haidt. 14. Brave New World Episode 08: Jonathan Haidt, How Social Media Threatens Society. 15. Daniel Kahneman, Behavioural Economics. 16. Brave New World Episode 21: Daniel Kahneman, How Noise Hampers Judgement. 17. Loyal Agents. 18. Loyal Agents Consumer Reports. 19. EU AI Act. 20. Duty Of Care. 21. Internet Engineering Task Force. 22. World Trade Organisation.

Check out Vasant Dhar's newsletter on Substack. The subscription is free! Order Vasant Dhar's new book, Thinking With Machines.
In an era dominated by AI-powered security tools and cloud-native architectures, are traditional Web Application Firewalls still relevant? Join us as we speak with Felipe Zipitria, co-leader of the OWASP Core Rule Set (CRS) project. Felipe has been at the forefront of open-source security, leading the development of one of the world's most widely deployed WAF rule sets, trusted by organizations globally to protect their web applications. Felipe explains why WAFs remain a critical layer in modern defense-in-depth strategies. We'll explore what makes OWASP CRS the go-to choice for security teams, dive into the project's current innovations, and discuss how traditional rule-based security is evolving to work alongside — not against — AI. Segment Resources: github.com/coreruleset/coreruleset coreruleset.org The future of CycloneDX is defined by modularity, API-first design, and deeper contextual insight, enabling transparency that is not just comprehensive, but actionable. At its heart is the Transparency Exchange API, which delivers a normalized, format-agnostic model for sharing SBOMs, attestations, risks, and more across the software supply chain. As genAI transforms every sector of modern business, the security community faces a question: how do we protect systems we can't fully see or understand? In this fireside chat, Aruneesh Salhotra, Project Lead for OWASP AIBOM and Co-Lead of OWASP AI Exchange, discusses two groundbreaking initiatives that are reshaping how organizations approach AI security and supply chain transparency. OWASP AI Exchange has emerged as the go-to single resource for AI security and privacy, providing over 200 pages of practical advice on protecting AI and data-centric systems from threats. Through its official liaison partnership with CEN/CENELEC, the project has contributed 70 pages to ISO/IEC 27090 and 40 pages to the EU AI Act security standard, with the project achieving OWASP Flagship status in March 2025.
Meanwhile, the OWASP AIBOM Project is establishing a comprehensive framework to provide transparency into how AI models are built, trained, and deployed, extending OWASP's mission of making security visible to the rapidly evolving AI ecosystem. This conversation explores how these complementary initiatives are addressing real-world challenges—from prompt injection and data poisoning to model provenance and supply chain risks—while actively shaping international standards and regulatory frameworks. We'll discuss concrete achievements, lessons learned from global collaboration, and the ambitious roadmap ahead as these projects continue to mature and expand their impact across the AI security landscape. Segment Resources: https://owasp.org/www-project-aibom/ https://www.linkedin.com/posts/aruneeshsalhotra_owasp-ai-aisecurity-activity-7364649799800766465-DJGM/ https://www.youtube.com/@OWASPAIBOM https://www.youtube.com/@RobvanderVeer-ex3gj https://owaspai.org/ Agentic AI introduces unique and complex security challenges that render traditional risk management frameworks insufficient. In this keynote, Ken Huang, CEO of Distributedapps.ai and a key contributor to AI security standards, outlines a new approach to manage these emerging threats. The session will present a practical strategy that integrates the NIST AI Risk Management Framework with specialized tools to address the full lifecycle of Agentic AI. Segment Resources: aivss.owasp.org https://kenhuangus.substack.com/p/owasp-aivss-the-new-framework-for https://cloudsecurityalliance.org/blog/2025/02/06/agentic-ai-threat-modeling-framework-maestro This interview is sponsored by the OWASP GenAI Security Project. Visit https://securityweekly.com/owaspappsec to watch all of CyberRisk TV's interviews from the OWASP 2025 Global AppSec Conference! Visit https://www.securityweekly.com/asw for all the latest episodes! Show Notes: https://securityweekly.com/asw-363
In this episode, Etienne Nichols sits down with Michelle Wu, Founder and CEO of Nyquist AI and one of the top 100 women in AI, to discuss the transformative state of artificial intelligence within the MedTech regulatory and quality space. Reflecting on her recent personal experience as a surgical patient, Michelle shares a unique perspective on the critical importance of the devices and quality systems that keep the industry running.

The conversation dives deep into the "Great Rewiring" of the medical device industry. Michelle outlines how we have moved past the initial phase of AI skepticism and "AI fatigue" into a period of hyper-acceleration. With the introduction of the FDA's ELSA and the implementation of the EU AI Act, the industry has reached a point where AI is no longer a side project but a fundamental requirement for operational longevity.

Finally, the episode provides a roadmap for both organizations and individual contributors. Michelle introduces her "Holy Trinity" framework for AI implementation—Data, Workflow, and Agents—and explains why the next two years will be defined by the "Invisible Colleague" or AI copilot. For junior professionals, the message is clear: knowledge is now a commodity, and the real value lies in the ability to ask high-quality, strategic questions.

Key Timestamps
00:00 – Introduction and Michelle Wu's background in MedTech and AI.
03:45 – A founder's perspective: Michelle's personal experience in the OR seeing her clients' devices.
08:12 – The 2025 Inflection Point: FDA ELSA, EU AI Act, and global AI expectations.
11:50 – From billable hours to value-based output: How AI is disrupting the consulting business model.
15:35 – Micro-timestamp: 2026 Predictions.
The shift toward universal AI Copilots and Agents for every MedTech role.
18:22 – The Holy Trinity of AI: Breaking down Data Layers, Workflow Automation, and AI Agents.
22:10 – Case Study: How a top-tier MedTech company automated 17,000 quality and regulatory tasks.
27:45 – The 56.8% Salary Premium: Why AI literacy is the most important skill for young RAQA professionals.
31:15 – Shifting from memorization to "Clarity of Mind" and high-quality inquiry.

Quotes
"Knowledge is a commodity now. Previously, regulatory consultants or professionals stood out by their knowledge. Now, with AI leveling the field, the capability lies in those who can ask high-quality questions." - Michelle Wu, Nyquist AI

Takeaways
AI Literacy is a Financial Multiplier: LinkedIn data shows that non-engineering knowledge workers with AI literacy can command a salary premium of up to 56.8%.
The 80/20 Rule of Automation: Approximately 80% of current RAQA tasks are tedious, manual, or administrative. Successful teams are using AI to automate that 80%, allowing humans to focus on the 20% that is strategic and high-value.
The Three-Layer AI Strategy: To effectively implement AI, companies should look at the Data Layer (intelligence), the Workflow Layer (automation of specific tasks), and the Agent Layer (autonomous "employees").
Value-Based Billing: As AI reduces the time required for regulatory submissions and gap analyses, the industry is moving away from the "billable hour" toward pricing based on the value and quality of the output.

References
Nyquist AI: Michelle Wu's platform specializing in global regulatory intelligence and AI-driven workflow automation for MedTech.
FDA ELSA: The...
Alexandru Voica, Head of Corporate Affairs and Policy at Synthesia, discusses how the world's largest enterprise AI video platform has approached trust and safety from day one. He explains Synthesia's "three C's" framework—consent, control, and collaboration: never creating digital replicas without explicit permission, moderating every video before rendering, and engaging with policymakers to shape practical regulation. Voica acknowledges these safeguards have cost some business, but argues that for enterprise sales, trust is competitively essential. The company's content moderation has evolved from simple keyword detection to sophisticated LLM-based analysis, recently withstanding a rigorous public red team test organized by NIST and Humane Intelligence. Voica criticizes the EU AI Act's approach of regulating how AI systems are built rather than focusing on harmful outcomes, noting that smaller models can now match frontier capabilities while evading compute-threshold regulations. He points to the UK's outcome-focused approach—like criminalizing non-consensual deepfake pornography—as more effective. On adoption, Voica argues that AI companies should submit to rigorous third-party audits using ISO standards rather than publishing philosophical position papers—the thesis of his essay "Audits, Not Essays." The conversation closes personally: growing up in 1990s Romania with rare access to English tutoring, Voica sees AI-powered personalized education as a transformative opportunity to democratize learning. Alexandru Voica is the Head of Corporate Affairs and Policy at Synthesia, the UK's largest generative AI company and the world's leading AI video platform. He has worked in the technology industry for over 15 years, holding public affairs and engineering roles at Meta, NetEase, Ocado, and Arm. Voica holds an MSc in Computer Science from the Sant'Anna School of Advanced Studies and serves as an advisor to MBZUAI, the world's first AI university. 
Transcript
Audits, Not Essays: How to Win Trust for Enterprise AI (Transformer)
Synthesia's Content Moderation Systems Withstand Rigorous NIST, Humane Intelligence Red Team Test (Synthesia)
Computerspeak Newsletter
On November 19, the European Commission unveiled two major omnibus packages as part of its European Data Union Strategy. One package proposes several changes to the EU General Data Protection Regulation, while the other proposes significant changes to the recently minted EU AI Act, including a proposed delay to the regulation of so-called high-risk AI systems. Laura Caroli was a lead negotiator and policy advisor to AI Act co-rapporteur Brando Benifei and was immersed in the high-stakes negotiations leading to the AI regulation. She is also a former senior fellow at the Center for Strategic and International Studies, but recently moved back to Brussels during a time of major complexity in the EU. IAPP Editorial Director Jedidiah Bracy caught up with Caroli to discuss her views on the proposed changes to the AI Act in the omnibus package and how she thinks the negotiations will play out. Here's what she had to say.
AI is evolving faster than most organizations can keep up with, and the truth is, very few companies are prepared for what's coming in 2026. In this episode of Reimagining Cyber, Rob Aragao speaks with Ken Johnston, VP of Data, Analytics and AI at Envorso, about the uncomfortable reality: autonomous AI systems are accelerating, regulations are tightening, and most businesses have no idea how much risk they're carrying.

Ken explains why companies have fallen behind, how "AI governance debt" has quietly piled up, and why leaders must take action now before the EU AI Act and Colorado's 2026 regulation bring real financial consequences. From AI bias and data provenance to agentic AI guardrails, observability, audits, and model versioning, Ken lays out the essential steps organizations must take to catch up before it's too late.

It's 5 years since Reimagining Cyber began. Thanks to all of our loyal listeners! As featured on Million Podcasts' Best 100 Cybersecurity Podcasts, Top 50 Chief Information Security Officer CISO Podcasts, and Top 70 Security Hacking Podcasts. This list is the most comprehensive ranking of Cyber Security Podcasts online and we are honoured to feature amongst the best!

Follow or subscribe to the show on your preferred podcast platform. Share the show with others in the cybersecurity world. Get in touch via reimaginingcyber@gmail.com
On Wednesday, November 19, 2025, the European Commission unveiled its Digital Omnibus Package, which is split into two proposals: a proposed Regulation on simplification for AI rules, and a proposed Regulation on simplification of the digital legislation. Today we are reviewing the first, AI-related block with Oliver Patel, who is AI Governance Lead at the global pharma and biotech company AstraZeneca, where he helps implement and scale AI governance worldwide. He also advises governments and international policymakers as a Member of the OECD's Expert Group on AI Risk and Accountability.

References:
* Oliver Patel, "Fundamentals of AI Governance" (now available for pre-order)
* Enterprise AI Governance, a newsletter by Oliver Patel
* Oliver Patel on LinkedIn
* Oliver Patel: How could the EU AI Act change?
* EU proposal for a Regulation on simplification for AI rules (EU Commission, covered today)
* EU proposal for a Regulation on simplification of the digital legislation (EU Commission, not covered today)
* Europe's digital sovereignty: from doctrine to delivery (Politico)

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit www.mastersofprivacy.com/subscribe
This week, Andreas Munk Holm sits down with Jack Leeney, co-founder of 7GC, the transatlantic growth fund bridging Silicon Valley and Europe and a backer of AI giants like Anthropic, alongside European rising stars Poolside and Fluidstack.

From IPOs at Morgan Stanley to running Telefónica's US venture arm and now operating a dual-continental fund, Jack shares how 7GC reads the AI supercycle, why infrastructure and platforms win first, and what Europe must fix to unlock the next wave of venture liquidity.
Join host Martin Quibell (Marv) and a panel of industry experts as they dive deep into the impact of artificial intelligence on podcasting. From ethical debates to hands-on tools, discover how AI is shaping the future of audio and video content creation.

Guests:
● Benjamin Field (Deep Fusion Films)
● William Corbin (Inception Point AI)
● John McDermott & Mark Francis (Caloroga Shark Media)

Timestamps
00:00 – Introduction
00:42 – Meet the Guests
01:45 – The State of AI in Podcasting
03:45 – Transparency, Ethics & the EU AI Act
06:00 – Nuance: How AI Is Used (Descript, Shorten Word Gaps, Remove Retakes)
08:45 – AI & Niche Content: Economic Realities
12:00 – Human Craft vs. AI Automation
15:00 – Job Evolution: Prompt Authors & QC
18:00 – Quality Control & Remastering
21:00 – Volume, Scale, and Audience
24:00 – AI Co-Hosts & Experiments (Virtually Parkinson, AI Voices)
27:00 – AI in Video & Visuals (HeyGen, Weaver)
30:00 – Responsibility & Transparency
33:00 – The Future of AI in Media
46:59 – Guest Contact Info & Closing

Tools & Platforms Mentioned
● Descript: Shorten word gaps, remove retakes, AI voice, scriptwriting, editing
● HeyGen: AI video avatars for podcast visuals
● Weaver (Deep Fusion Films): AI-driven video editing and archive integration
● Verbal: AI transcription and translation
● AI Voices: For narration, co-hosting, and accessibility
● Other references: Spotify, Amazon, Wikipedia, TikTok, Apple Podcasts, Google Programmatic Ads

Contact the Guests:
- William Corbin: william@inceptionpoint.ai | LinkedIn
- John McDermott: john@caloroga.com | LinkedIn
- Benjamin Field: benjamin.field@deepfusionfilms.com | LinkedIn
- Mark Francis: mark@caloroga.com | LinkedIn | caloroga.com
- Marv: themarvzone.org

Like, comment, and subscribe for more deep dives into the future of podcasting and media!
#Podcasting #AI #ArtificialIntelligence #Descript #HeyGen #PodcastTools #Ethics #MediaInnovation
Finding it difficult to navigate the changing landscape of data protection? In this episode of the DMI podcast, host Will Francis speaks with Steven Roberts, Group Head of Marketing at Griffith College, Chartered Director, certified Data Protection Officer, and long-time marketing leader. Steven demystifies GDPR, AI governance, and the rapidly evolving regulatory environment that marketers must now navigate. Steven explains how GDPR enforcement has matured, why AI has created a new layer of complexity, and how businesses can balance innovation with compliance. He breaks down the EU AI Act, its risk-based structure, and its implications for organizations inside and outside the EU. Steven also shares practical guidance for building internal AI policies, tackling "shadow AI," reducing data breach risks, and supporting teams with training and clear governance. For an even deeper look into how businesses can ensure data protection compliance, check out Steven's book, Data Protection for Business: Compliance, Governance, Reputation and Trust.

Steven's Top 3 Tips
Build data protection into projects from the start, using tools like Data Protection Impact Assessments to uncover risks early.
Invest in regular staff training to avoid common mistakes caused by human error.
Balance compliance with business performance by setting clear policies, understanding your risk appetite, and iterating your AI governance over time.

The Ahead of the Game podcast is brought to you by the Digital Marketing Institute and is available on YouTube, Apple Podcasts, Spotify, and all other podcast platforms. And if you enjoyed this episode please leave a review so others can find us. If you have other feedback for us or would like to be a guest on the show, email the podcast team!

Timestamps
01:29 – AI's impact on GDPR & the explosion of new global privacy laws
03:26 – Is GDPR the global gold standard?
05:04 – GDPR enforcement today: Who gets fined and why
07:09 – Cultural attitudes toward data: EU vs. US
08:51 – The EU AI Act explained: Risk tiers, guardrails & human oversight
10:48 – What businesses must do: DPIAs, fundamental rights assessments & more
13:38 – Shadow AI, risk appetite & internal governance challenges
17:10 – Should you upload company data to ChatGPT?
20:40 – How the AI Act affects countries outside the EU
24:47 – Will privacy improve over time?
28:45 – What teams can do now: Tools, processes & data audits
33:49 – Data enrichment tools: targeting vs. legality
36:47 – Will anyone actually check your data practices?
40:06 – Steven's top tips for navigating GDPR & AI
Walk the floor at Web Summit without leaving your headphones. We sit down with Jo Smets, founder of BluePanda and president of the Portuguese Belgian Luxembourg Chamber of Commerce, to unpack how nearshoring and AI are reshaping CRM, marketing, and team delivery across Europe.

We start with clarity on nearshoring: why time zone, culture, and communication speed beat cost alone, and how that proximity pays off when you're wiring AI into daily work. Jo shares how BluePanda applies AI beyond demos (recruitment, performance, and operations), then translates those lessons into client outcomes. We compare adoption patterns across startups and corporates, call out the real blocker (end-to-end process automation), and map the role of global networks like BBN for keeping pace with tools and trends.

The conversation pivots to trust and governance: practical ways to protect data, when on-prem makes sense, and how to use EU AI Act guidance without stalling innovation. We explore the marketing shift from SEO to GEO, the idea of "AI-proof" websites, and the move toward dynamic, persona-aware content that renders at load. Jo offers a simple path to progress (pick one process, pilot, measure, educate) while keeping empathy at the core as managers start leading both humans and AI agents. Along the way, we spotlight how chambers and communities connect ecosystems across borders, turning events into learning loops and real partnerships.

Looking to modernize without losing your team's identity? You'll leave with a plan for small wins, a lens for tool curation, and a sharper view of where marketing is headed next. If this resonated, subscribe, share it with a colleague who's wrestling with AI adoption, and drop a review to help others find the show.

This episode was recorded in the official podcast booth at Web Summit (Lisbon) on November 12, 2025.
Check the video footage, read the blog article and show notes here: https://webdrie.net/why-european-teams-win-with-nearshoring-and-practical-ai/
In this episode of Alexa's Input (AI) Podcast, host Alexa Griffith sits down with Liana Tomescu, founder of SonnyLabs and host of the AI Hacks podcast. Dive into the world of AI security and compliance as Liana shares her journey from Microsoft to founding her own company. Discover the challenges and opportunities in making AI applications secure and compliant, and learn about the latest in AI regulations, including the EU AI Act. Whether you're an AI enthusiast or a tech professional, this episode offers valuable insights into the evolving landscape of AI technology.

Links
SonnyLabs Website: https://sonnylabs.ai/
SonnyLabs LinkedIn: https://www.linkedin.com/company/sonnylabs-ai/
Liana's LinkedIn: https://www.linkedin.com/in/liana-anca-tomescu/

Alexa's Links
LinkTree: https://linktr.ee/alexagriffith
Alexa's Input YouTube Channel: https://www.youtube.com/@alexa_griffith
Website: https://alexagriffith.com/
LinkedIn: https://www.linkedin.com/in/alexa-griffith/
Substack: https://alexasinput.substack.com/

Keywords
AI security, compliance, female founder, SonnyLabs, EU AI Act, cybersecurity, prompt injection, AI agents, technology innovation, startup journey

Chapters
00:00 Introduction to Liana Tomescu and SonnyLabs
02:53 The Journey of a Female Founder in Tech
05:49 From Microsoft to Startup: The Transition
09:04 Exploring AI Security and Compliance
11:41 The Role of Curiosity in Entrepreneurship
14:52 Understanding SonnyLabs and Its Mission
17:52 The Importance of Community and Networking
20:42 MCP: Model Context Protocol Explained
23:54 Security Risks in AI and MCP Servers
27:03 The Future of AI Security and Compliance
38:25 Understanding Prompt Injection Risks
45:34 The Shadow AI Phenomenon
45:48 Navigating the EU AI Act
52:28 Banned and High-Risk AI Practices
01:00:43 Implementing AI Security Measures
01:17:28 Exploring AI Security Training
Explore how leaders and coaches can adopt AI without losing the human core, turning compliance and ethics into everyday practice rather than a side office. Colin Cosgrove shares a practical arc for AI readiness, concrete use cases, and a clear view of risk, trust, and governance.

• journey from big-tech compliance to leadership coaching
• why AI changes the leadership environment and decision pace
• making compliance human: transparency, explainability, consent
• AI literacy across every function, not just data teams
• the AI leader archetype arc for mindset and readiness
• practical augmentation: before, during, after coaching sessions
• three risks: reputational, relational, regulatory
• leader as coach: trust, questions, and human skills
• EU AI Act overview and risk-based obligations
• governance, accountability, and cross-

Reach out to Colin on LinkedIn and check out his website: Movizimo.com.

Support the show

BelemLeaders, your organization's trusted partner for leader and team development. Visit our website to connect: belemleaders.org or book a discovery call today! belem.as.me/discovery

Until next time, keep doing great things!
In this episode of The Digital Executive, host Brian Thomas welcomes Yakir Golan, CEO and Co-founder of Kovrr, a global leader in cyber and AI risk quantification. Drawing from his early career in Israeli intelligence and later roles in software, hardware, and product management, Yakir explains how his background shaped his holistic approach to understanding complex, interconnected risk systems.Yakir breaks down why quantifying AI and cyber risk—rather than relying on subjective, color-coded scoring—is becoming essential for enterprise leaders, boards, and regulators. He explains how Kovrr's new AI Risk Assessment and Quantification module helps organizations model real financial exposure, understand high-impact “tail risks,” and align security, GRC, and finance teams around a shared, objective language.Looking ahead, Yakir discusses how global regulation, including the EU AI Act, is accelerating the need for measurable, defensible risk management. He outlines a future where AI risk quantification becomes a board-level expectation and a foundation for resilient, responsible innovation. Through Kovrr's mission, Yakir aims to equip enterprises with the same level of intelligence-driven decision making once reserved for national security—now applied to the rapidly evolving digital risk landscape.If you liked what you heard today, please leave us a review - Apple or Spotify.See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
Oliver Patel has built a sizeable online following for his social media posts and Substack about enterprise AI governance, using clever acronyms and visual frameworks to distill down insights based on his experience at AstraZeneca, a major global pharmaceutical company. In this episode, he details his career journey from academic theory to government policy and now practical application, and offers insights for those new to the field. He argues that effective enterprise AI governance requires being pragmatic and picking your battles, since the role isn't to stop AI adoption but to enable organizations to adopt it safely and responsibly at speed and scale. He notes that core pillars of modern AI governance, such as AI literacy, risk classification, and maintaining an AI inventory, are incorporated into the EU AI Act and thus essential for compliance. Looking forward, Patel identifies AI democratization, or how to govern AI when everyone in the workforce can use and build it, as the biggest hurdle, and offers thoughts about how enterprises can respond.

Oliver Patel is the Head of Enterprise AI Governance at AstraZeneca. Before moving into the corporate sector, he worked for the UK government as Head of Inbound Data Flows, where he focused on data policy and international data transfers, and was a researcher at University College London. He serves as an IAPP Faculty Member and a member of the OECD's Expert Group on AI Risk. His forthcoming book, Fundamentals of AI Governance, will be released in early 2026.

Transcript
Enterprise AI Governance Substack
Top 10 Challenges for AI Governance Leaders in 2025 (Part 1)
Fundamentals of AI Governance book page
Get featured on the show by leaving us a Voice Mail: https://bit.ly/MIPVM