POPULARITY
AI has the power to scale innovation at breakneck speed—but without a steering wheel, it can scale risk just as fast. Enter ISO/IEC 42001:2023, the world's first international standard for Artificial Intelligence Management Systems (AIMS). As organizations move from AI experimentation to full-scale production, this standard provides the essential framework for deploying AI that is not only powerful but also responsible, secure, and ethical. In this episode, we simplify the complexities of AI governance. We explore how to manage unique AI risks like algorithmic bias, model drift, and opaque decision-making using the proven "Plan-Do-Check-Act" (PDCA) approach. Whether you are a business leader, a developer, or a compliance officer, learn how to turn high-level ethics into operational reality.
This Financial Crime Weekly Special Episode looks ahead to 2026, a year defined by localisation and divergence in global financial crime regulation. From the EU's AMLA rollout and the US Corporate Transparency Act compliance cliff, to the UK's aggressive enforcement of the new “Failure to Prevent Fraud” offence, the episode explores how jurisdictions are reshaping rules to meet domestic priorities. With insights into sanctions reform, fraud liability shifts, capital markets changes, and the operational resilience demands of DORA, the UK's Critical Third Parties regime, and the EU AI Act, this horizon scan highlights the strategic risks and compliance imperatives that will shape the year ahead.
In an era dominated by AI-powered security tools and cloud-native architectures, are traditional Web Application Firewalls still relevant? Join us as we speak with Felipe Zipitria, co-leader of the OWASP Core Rule Set (CRS) project. Felipe has been at the forefront of open-source security, leading the development of one of the world's most widely deployed WAF rule sets, trusted by organizations globally to protect their web applications. Felipe explains why WAFs remain a critical layer in modern defense-in-depth strategies. We'll explore what makes OWASP CRS the go-to choice for security teams, dive into the project's current innovations, and discuss how traditional rule-based security is evolving to work alongside — not against — AI.
Segment Resources: github.com/coreruleset/coreruleset | coreruleset.org

The future of CycloneDX is defined by modularity, API-first design, and deeper contextual insight, enabling transparency that is not just comprehensive but actionable. At its heart is the Transparency Exchange API, which delivers a normalized, format-agnostic model for sharing SBOMs, attestations, risks, and more across the software supply chain.

As genAI transforms every sector of modern business, the security community faces a question: how do we protect systems we can't fully see or understand? In this fireside chat, Aruneesh Salhotra, Project Lead for OWASP AIBOM and Co-Lead of OWASP AI Exchange, discusses two groundbreaking initiatives that are reshaping how organizations approach AI security and supply chain transparency. OWASP AI Exchange has emerged as the go-to single resource for AI security and privacy, providing over 200 pages of practical advice on protecting AI and data-centric systems from threats. Through its official liaison partnership with CEN/CENELEC, the project has contributed 70 pages to ISO/IEC 27090 and 40 pages to the EU AI Act security standard, and it achieved OWASP Flagship project status in March 2025. Meanwhile, the OWASP AIBOM Project is establishing a comprehensive framework to provide transparency into how AI models are built, trained, and deployed, extending OWASP's mission of making security visible to the rapidly evolving AI ecosystem. This conversation explores how these complementary initiatives are addressing real-world challenges—from prompt injection and data poisoning to model provenance and supply chain risks—while actively shaping international standards and regulatory frameworks. We'll discuss concrete achievements, lessons learned from global collaboration, and the ambitious roadmap ahead as these projects continue to mature and expand their impact across the AI security landscape.
Segment Resources: https://owasp.org/www-project-aibom/ | https://www.linkedin.com/posts/aruneeshsalhotra_owasp-ai-aisecurity-activity-7364649799800766465-DJGM/ | https://www.youtube.com/@OWASPAIBOM | https://www.youtube.com/@RobvanderVeer-ex3gj | https://owaspai.org/

Agentic AI introduces unique and complex security challenges that render traditional risk management frameworks insufficient. In this keynote, Ken Huang, CEO of Distributedapps.ai and a key contributor to AI security standards, outlines a new approach to managing these emerging threats. The session presents a practical strategy that integrates the NIST AI Risk Management Framework with specialized tools to address the full lifecycle of Agentic AI.
Segment Resources: aivss.owasp.org | https://kenhuangus.substack.com/p/owasp-aivss-the-new-framework-for | https://cloudsecurityalliance.org/blog/2025/02/06/agentic-ai-threat-modeling-framework-maestro
This interview is sponsored by the OWASP GenAI Security Project. Visit https://securityweekly.com/owaspappsec to watch all of CyberRisk TV's interviews from the OWASP 2025 Global AppSec Conference! Visit https://www.securityweekly.com/asw for all the latest episodes! Show Notes: https://securityweekly.com/asw-363
In this episode, Etienne Nichols sits down with Michelle Wu, Founder and CEO of Nyquist AI and one of the top 100 women in AI, to discuss the transformative state of artificial intelligence within the MedTech regulatory and quality space. Reflecting on her recent personal experience as a surgical patient, Michelle shares a unique perspective on the critical importance of the devices and quality systems that keep the industry running.

The conversation dives deep into the "Great Rewiring" of the medical device industry. Michelle outlines how we have moved past the initial phase of AI skepticism and "AI fatigue" into a period of hyper-acceleration. With the introduction of the FDA's ELSA and the implementation of the EU AI Act, the industry has reached a point where AI is no longer a side project but a fundamental requirement for operational longevity.

Finally, the episode provides a roadmap for both organizations and individual contributors. Michelle introduces her "Holy Trinity" framework for AI implementation—Data, Workflow, and Agents—and explains why the next two years will be defined by the "Invisible Colleague," or AI copilot. For junior professionals, the message is clear: knowledge is now a commodity, and the real value lies in the ability to ask high-quality, strategic questions.

Key Timestamps
00:00 – Introduction and Michelle Wu's background in MedTech and AI.
03:45 – A founder's perspective: Michelle's personal experience in the OR seeing her clients' devices.
08:12 – The 2025 Inflection Point: FDA ELSA, EU AI Act, and global AI expectations.
11:50 – From billable hours to value-based output: How AI is disrupting the consulting business model.
15:35 – Micro-timestamp: 2026 Predictions. The shift toward universal AI Copilots and Agents for every MedTech role.
18:22 – The Holy Trinity of AI: Breaking down Data Layers, Workflow Automation, and AI Agents.
22:10 – Case Study: How a top-tier MedTech company automated 17,000 quality and regulatory tasks.
27:45 – The 56.8% Salary Premium: Why AI literacy is the most important skill for young RAQA professionals.
31:15 – Shifting from memorization to "Clarity of Mind" and high-quality inquiry.

Quotes
"Knowledge is a commodity now. Previously, regulatory consultants or professionals stood out by their knowledge. Now, with AI leveling the field, the capability lies in those who can ask high-quality questions." - Michelle Wu, Nyquist AI

Takeaways
AI Literacy is a Financial Multiplier: LinkedIn data shows that non-engineering knowledge workers with AI literacy can command a salary premium of up to 56.8%.
The 80/20 Rule of Automation: Approximately 80% of current RAQA tasks are tedious, manual, or administrative. Successful teams are using AI to automate that 80%, allowing humans to focus on the 20% that is strategic and high-value.
The Three-Layer AI Strategy: To effectively implement AI, companies should look at the Data Layer (intelligence), the Workflow Layer (automation of specific tasks), and the Agent Layer (autonomous "employees").
Value-Based Billing: As AI reduces the time required for regulatory submissions and gap analyses, the industry is moving away from the "billable hour" toward pricing based on the value and quality of the output.

References
Nyquist AI: Michelle Wu's platform specializing in global regulatory intelligence and AI-driven workflow automation for MedTech.
FDA ELSA: The...
Welcome to the last AI and Automation Update of the year, just before Christmas! The past few weeks have been super stressful, but I am delighted to talk with Thomas today about the rapid developments in AI. This year has been truly wild, and today we look back once more and give an outlook on what lies ahead. A big topic we address right at the start is the question of whether the AI hype is over. My thesis: yes, it is! We are no longer talking about hype but about industrialization. AI is now genuinely being used and is finding applications everywhere, from the private sphere to banks and shops. It is no longer about amazement, but about efficiency and productivity.

We also talk about regulation through the EU AI Act, which took effect this year. It is not just a new set of rules; it also brings a training obligation for employees who use AI. Understanding how these systems work, their ethical foundations, and their risks is essential; keyword: prompt injection! At the same time, we ask how the regulation will evolve and whether it inhibits innovation.

In the insurance world, we see that claims management is the low-hanging fruit for AI applications. Many insurers already use AI here to process unstructured data and optimize processes. But there is so much more potential in other areas, and I wonder why the industry is not bolder about exploring them.

A central topic for the coming year, in my view, is agentic AI. We have moved from simple chat via complex prompts to AI agents that can independently break down tasks, draw up plans, and enter into dialogue with us. This will fundamentally change the way we work, and I give an example from software development to make it tangible.

Finally, we discuss that many of the challenges we are now experiencing with AI, such as mindset, willingness to experiment, and adapting quickly to new technologies, are not really new. They recall earlier trends like agility or design thinking, but this time the speed and technological complexity are much higher.

Links in this episode:
Jonas Piela's homepage
Jonas Piela's LinkedIn profile
Thomas Fröhlich's LinkedIn profile
Whitepaper: KI verantwortungsvoll einsetzen. The only thing riskier than AI is ignoring it. Download the Thoughtworks whitepaper now and use AI responsibly.
Note (a word of advertising on my own behalf): Buy my new book now (co-produced with Prof. Dr. Johanna Bath): "Die perfekte Employee Journey & Experience" (shipping in October 2025): Springer: https://link.springer.com/book/9783662714195 Amazon: https://bit.ly/44aajaP Thalia: https://www.thalia.de/shop/home/artikeldetails/A1074960417 This book presents the most important elements of the employee journey, from pre-boarding to offboarding, and explains how those responsible in companies can create a successful employee experience and anchor it sustainably.

My guests: Marcel Rütten, Robindro Ullah, Marcus Merheim, Stefan Dietz, Stefan Scheller, Jan Hawliczek and Dominik Becker

Topic: As every year, in this Christmas special (podcast episode 432) we talked about each guest's personal highlights of 2025 and their individual outlooks for 2026. Many thanks to my guests for the exciting insights and prospects. Enjoy listening!

Highlights of 2025:
AI: a lot was tried out and realigned, with noticeably broader use of AI in HR and recruiting (more use of AI agents)
Compliance with the EU AI Act and the associated data protection issues were of particular importance (and will remain so)
More resilience has arrived in recruiting

Outlook for 2026:
Agility and faster adaptation in HR and recruiting remain necessary
Transformation through AI must be driven much more strongly by leaders
AI needs a good data basis to deliver good results, and assessing those results will become a real skill set of the future
A central question could be: how will humans and AI work together in the future, and how will the work be divided?
HR can shape and drive the shift toward an AI mindset
AI and governance: the EU AI Act remains a major topic
HR will have to fight harder for budgets than in 2025
People management (employee retention, etc.) will become more important
Pay transparency will be a challenge for many companies

#Recruiting #KI #EUAIAct #Resilienz #HRdata #Peoplemanagement #Gehaltstransparenz #GainTalentsPodcast

Links Hans-Heinz Wisotzky:
Website: https://www.gaintalents.com/podcast and https://www.gaintalents.com/blog
Podcast: https://www.gaintalents.com/podcast
Books: New (available everywhere now): Die perfekte Employee Journey und Experience https://link.springer.com/book/9783662714195
First book: Die perfekte Candidate Journey und Experience https://www.gaintalents.com/buch-die-perfekte-candidate-journey-und-experience
LinkedIn https://www.linkedin.com/in/hansheinzwisotzky/
LinkedIn https://www.linkedin.com/company/gaintalents
XING https://www.xing.com/profile/HansHeinz_Wisotzky/cv
Facebook https://www.facebook.com/GainTalents
Instagram https://www.instagram.com/gain.talents/
Youtube https://bit.ly/2GnWMFg
"I always say, you can't learn how to swim if you don't jump into the water. And it's so important for people to be able to jump into the water and to really test it out."
Alexandru Voica, Head of Corporate Affairs and Policy at Synthesia, discusses how the world's largest enterprise AI video platform has approached trust and safety from day one. He explains Synthesia's "three C's" framework—consent, control, and collaboration: never creating digital replicas without explicit permission, moderating every video before rendering, and engaging with policymakers to shape practical regulation. Voica acknowledges these safeguards have cost some business, but argues that for enterprise sales, trust is competitively essential. The company's content moderation has evolved from simple keyword detection to sophisticated LLM-based analysis, recently withstanding a rigorous public red team test organized by NIST and Humane Intelligence.

Voica criticizes the EU AI Act's approach of regulating how AI systems are built rather than focusing on harmful outcomes, noting that smaller models can now match frontier capabilities while evading compute-threshold regulations. He points to the UK's outcome-focused approach—like criminalizing non-consensual deepfake pornography—as more effective. On adoption, Voica argues that AI companies should submit to rigorous third-party audits using ISO standards rather than publishing philosophical position papers—the thesis of his essay "Audits, Not Essays." The conversation closes personally: growing up in 1990s Romania with rare access to English tutoring, Voica sees AI-powered personalized education as a transformative opportunity to democratize learning.

Alexandru Voica is the Head of Corporate Affairs and Policy at Synthesia, the UK's largest generative AI company and the world's leading AI video platform. He has worked in the technology industry for over 15 years, holding public affairs and engineering roles at Meta, NetEase, Ocado, and Arm. Voica holds an MSc in Computer Science from the Sant'Anna School of Advanced Studies and serves as an advisor to MBZUAI, the world's first AI university.

Transcript
Audits, Not Essays: How to Win Trust for Enterprise AI (Transformer)
Synthesia's Content Moderation Systems Withstand Rigorous NIST, Humane Intelligence Red Team Test (Synthesia)
Computerspeak Newsletter
On November 19, the European Commission unveiled two major omnibus packages as part of its European Data Union Strategy. One package proposes several changes to the EU General Data Protection Regulation, while the other proposes significant changes to the recently minted EU AI Act, including a proposed delay to the regulation of so-called high-risk AI systems. Laura Caroli was a lead negotiator and policy advisor to AI Act co-rapporteur Brando Benifei and was immersed in the high-stakes negotiations leading to the AI regulation. She is also a former senior fellow at the Center for Strategic and International Studies, but recently moved back to Brussels during a time of major complexity in the EU. IAPP Editorial Director Jedidiah Bracy caught up with Caroli to discuss her views on the proposed changes to the AI Act in the omnibus package and how she thinks the negotiations will play out. Here's what she had to say.
In Episode 267 of The Data Diva Talks Privacy Podcast, Debbie Reynolds, The Data Diva, talks with Federico Marengo, Associate Partner at White Label Consultancy in Italy. They explore the accelerating intersection of privacy, artificial intelligence, and governance, and discuss how organizations can build practical, responsible AI programs that align with existing privacy and security frameworks. Federico explains why AI governance cannot exist in a vacuum and must be integrated with the policies, controls, and operational practices companies already use.

The conversation delves into the challenges organizations face in adopting AI responsibly, including understanding the requirements of the EU AI Act, right-sizing compliance expectations for organizations of different scales, and developing programs that allow innovation while managing risk. Federico highlights the importance of educating leadership about where AI regulations actually apply, since many businesses overestimate their obligations, and he explains why clarity around high-risk systems is essential for reducing unnecessary fear and confusion.

Debbie and Federico also discuss future trends for global AI and privacy governance, including whether companies will eventually adopt unified enterprise frameworks rather than fragmented jurisdiction-specific practices. They explore how organizations can upskill their teams, embed governance into product development, and normalize AI as part of standard technology operations. Federico shares his vision for a world where professionals collaborate to advance best practices and help organizations embrace AI with confidence rather than hesitation.

Become an insider: join Data Diva Confidential for data strategy and data privacy insights delivered to your inbox.
Artificial intelligence is rapidly transforming the pharmaceutical and life sciences sector — but innovation in this field comes with some of the highest regulatory, ethical, and governance expectations. In this episode of Legal Leaders Insights from Diritto al Digitale, Giulio Coraggio of DLA Piper speaks with Oliver Patel, Head of Enterprise AI Governance at AstraZeneca, about how AI governance is implemented in practice within a global pharmaceutical company.

The conversation covers:
What enterprise AI governance looks like in the life sciences sector
How to balance AI innovation with privacy, intellectual property, and compliance
The concrete implications of the EU AI Act for pharmaceutical companies
Practical governance approaches to enable responsible and scalable AI

This episode is particularly relevant for legal professionals, compliance teams, in-house counsel, data leaders, and executives working in highly regulated industries. Diritto al Digitale is the podcast where law, technology, and digital regulation intersect with real business challenges.
AI is evolving faster than most organizations can keep up with — and the truth is, very few companies are prepared for what's coming in 2026. In this episode of Reimagining Cyber, Rob Aragao speaks with Ken Johnston, VP of Data, Analytics and AI at Envorso, about the uncomfortable reality: autonomous AI systems are accelerating, regulations are tightening, and most businesses have no idea how much risk they're carrying. Ken explains why companies have fallen behind, how "AI governance debt" has quietly piled up, and why leaders must take action now before the EU AI Act and Colorado's 2026 regulation bring real financial consequences. From AI bias and data provenance to agentic AI guardrails, observability, audits, and model versioning, Ken lays out the essential steps organizations must take to catch up before it's too late.

It's 5 years since Reimagining Cyber began. Thanks to all of our loyal listeners! As featured on Million Podcasts' Best 100 Cybersecurity Podcasts, Top 50 Chief Information Security Officer CISO Podcasts, and Top 70 Security Hacking Podcasts. This list is the most comprehensive ranking of Cyber Security Podcasts online and we are honoured to feature amongst the best!

Follow or subscribe to the show on your preferred podcast platform. Share the show with others in the cybersecurity world. Get in touch via reimaginingcyber@gmail.com
AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store
Welcome back to AI Unraveled, your daily strategic briefing on the business impact of artificial intelligence. Today, we are flipping the script on the most boring word in tech: governance. We are diving into the "Compliance Cost Cliff," a new reality where the ability to control your AI is not just a legal shield but the primary engine of your velocity. We'll look at how AI hallucinations cost businesses $67 billion this year alone, why the EU AI Act is actually a springboard for global dominance, and how giants like JPMorgan and Mayo Clinic are building "Trust Moats" to leave their competitors in the dust.

1. The Strategic Inversion: From Brake to Engine. The narrative of "move fast and break things" is dead. We have reached the Compliance Cost Cliff, where the financial and reputational risks of ungoverned AI far outweigh the friction of implementing it. Organizations that treat governance as infrastructure are unlocking high-risk, high-reward use cases that remain inaccessible to less disciplined competitors.

2. The "Trust Moat" Theory. In a market flooded with AI-generated noise and deepfakes, verified reality is the only scarce resource. Sales friction: governance-first companies bypass lengthy procurement security questionnaires, winning deals in the "silent" phase of the buying cycle. Pricing power: verified, auditable AI outputs command a premium. An AI that cites its sources is a professional tool; one that doesn't is a liability.

3. The Economics of Failure. The hallucination bill: in 2024, AI hallucinations cost businesses $67.4 billion in direct losses, legal sanctions, and operational remediation. Regulatory hammers: the EU AI Act introduces fines of up to 7% of global turnover, a penalty structure that can erase a year's worth of profitability for major firms.

4. Sector Deep Dives: The First Movers. Finance (JPMorgan Chase): often misread for initially banning ChatGPT, JPMC used the pause to build the LLM Suite, a governed platform that handles data privacy and model risk centrally. This infrastructure now allows them to deploy tools like Connect Coach safely while competitors struggle with compliance. Healthcare (Mayo Clinic): Mayo's "Deploy" platform acts as governance middleware. Insurance (AXA): with SecureGPT, AXA positions itself as a governance auditor, refusing to insure companies that cannot prove their AI safety standards, effectively monetizing governance.

5. The Technical Architecture of Compliance. Governance must be encoded into the software itself: auditable RAG and immutable audit logs.

6. Future Outlook: Agentic AI & Liability. As we move toward agentic AI (systems that take action, not just chat), the liability shifts entirely to the deployer. The only defense against an agent that executes a bad trade or deletes a file is a robust, documented governance history.

Keywords: AI Governance, Compliance Cost Cliff, Trust Moat, EU AI Act, Agentic AI, Hallucination Costs, JPMorgan LLM Suite, Mayo Clinic Deploy, Auditable RAG, Vector DB Audit Logs
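To make item 5's "auditable RAG" and "immutable audit logs" concrete, here is a minimal illustrative sketch in Python: every generated answer is logged together with the source documents that grounded it, and entries are hash-chained so any after-the-fact edit is detectable. The schema and chaining scheme are our own assumptions for illustration, not the actual design of JPMorgan's LLM Suite, Mayo's Deploy, or any vendor's product.

```python
import hashlib
import json
import time

class AuditableRAGLog:
    """Append-only, hash-chained log of RAG answers and their cited sources.

    Illustrative only: the entry fields and chaining scheme are assumptions,
    not any real platform's schema.
    """

    def __init__(self):
        self.entries = []
        self._prev_hash = "0" * 64  # genesis value

    def record(self, query: str, answer: str, source_ids: list[str]) -> dict:
        entry = {
            "ts": time.time(),
            "query": query,
            "answer": answer,
            "sources": source_ids,      # which documents grounded the answer
            "prev_hash": self._prev_hash,
        }
        # Hash the entry together with the previous hash: editing any past
        # entry breaks every later hash, making the log tamper-evident.
        payload = json.dumps(entry, sort_keys=True).encode()
        entry["hash"] = hashlib.sha256(payload).hexdigest()
        self._prev_hash = entry["hash"]
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute the chain; False means the history was altered."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if e["prev_hash"] != prev or digest != e["hash"]:
                return False
            prev = e["hash"]
        return True

log = AuditableRAGLog()
log.record("What is our refund policy?", "Refunds within 30 days.", ["policy-doc-12"])
print(log.verify())  # True until any past entry is modified
```

Verification recomputes the chain from the genesis value; a single altered field anywhere in history changes every subsequent hash, which is what makes such a log audit-grade rather than just a text file.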
On Wednesday, November 19, 2025, the European Commission unveiled its Digital Omnibus Package, which was split into two proposals: a proposed Regulation on simplification for AI rules, and a proposed Regulation on simplification of the digital legislation. We tackle the first one today, reviewing the AI-related block with Oliver Patel, who is AI Governance Lead at the global pharma and biotech company AstraZeneca, where he helps implement and scale AI governance worldwide. He also advises governments and international policymakers as a Member of the OECD's Expert Group on AI Risk and Accountability.

References:
* Oliver Patel, "Fundamentals of AI Governance" (now available for pre-order)
* Enterprise AI Governance, a newsletter by Oliver Patel
* Oliver Patel on LinkedIn
* Oliver Patel: How could the EU AI Act change?
* EU proposal for a Regulation on simplification for AI rules (EU Commission, covered today)
* EU proposal for a Regulation on simplification of the digital legislation (EU Commission, not covered today)
* Europe's digital sovereignty: from doctrine to delivery (Politico)

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit www.mastersofprivacy.com/subscribe
We talk with legal expert Liane Colonna (Stockholm University) about the EU 'AI Act' and what it means for the use of AI in education. To what extent can we rely on regulation to enforce safer and more beneficial forms of AI use in education? Accompanying reference >>> Colonna, L. (2025). Artificial Intelligence in Education (AIED): Towards More Effective Regulation. European Journal of Risk Regulation, doi:10.1017/err.2025.10039
AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store
Special Edition: The Billion-Dollar Decision (December 05, 2025)

Today's episode is a deep dive into the strategic shift from "renting" AI to "owning" it. We explore the 2025 playbook for shifting from API wrappers to sovereign AI assets.

Key Topics & Insights
This episode kicks off with a little red-carpet flair – Littler's Stephan Swinkels returns from the 2025 European Executive Employer conference in London to share the inside scoop. Hosts Nicole LeFave and Claire Deason get the unfiltered download – straight from the source as they dive into the findings from Littler's 2025 European Employer Survey Report, spotlighting workplace trends shaking up Europe – from pay transparency and the EU AI Act to IE&D and return to work policies. Whether you're navigating new regulations, planning ahead, or trying to make sense of how EU directives intersect with local implementation, this conversation bridges the U.S. patchwork of state and local laws with the European landscape – offering practical insights and fresh perspectives to help employers stay ahead in a rapidly evolving environment. https://www.littler.com/news-analysis/podcast/littler-lounge-european-employer-edition-policy-shifts-workplace-solutions
This week, Andreas Munk Holm sits down with Jack Leeney, co-founder of 7GC, the transatlantic growth fund bridging Silicon Valley and Europe and a backer of AI giants like Anthropic, alongside European rising stars Poolside and Fluidstack.

From IPOs at Morgan Stanley to running Telefónica's US venture arm and now operating a dual-continental fund, Jack shares how 7GC reads the AI supercycle, why infrastructure and platforms win first, and what Europe must fix to unlock the next wave of venture liquidity.
Join host Martin Quibell (Marv) and a panel of industry experts as they dive deep into the impact of artificial intelligence on podcasting. From ethical debates to hands-on tools, discover how AI is shaping the future of audio and video content creation. Guests: ● Benjamin Field (Deep Fusion Films) ● William Corbin (Inception Point AI) ● John McDermott & Mark Francis (Caloroga Shark Media) Timestamps 00:00 – Introduction 00:42 – Meet the Guests 01:45 – The State of AI in Podcasting 03:45 – Transparency, Ethics & the EU AI Act 06:00 – Nuance: How AI Is Used (Descript, Shorten Word Gaps, Remove Retakes) 08:45 – AI & Niche Content: Economic Realities 12:00 – Human Craft vs. AI Automation 15:00 – Job Evolution: Prompt Authors & QC 18:00 – Quality Control & Remastering 21:00 – Volume, Scale, and Audience 24:00 – AI Co-Hosts & Experiments (Virtually Parkinson, AI Voices) 27:00 – AI in Video & Visuals (HeyGen, Weaver) 30:00 – Responsibility & Transparency 33:00 – The Future of AI in Media 46:59 – Guest Contact Info & Closing Tools & Platforms Mentioned ● Descript: Shorten word gaps, remove retakes, AI voice, scriptwriting, editing ● HeyGen: AI video avatars for podcast visuals ● Weaver (Deep Fusion Films): AI-driven video editing and archive integration ● Verbal: AI transcription and translation ● AI Voices: For narration, co-hosting, and accessibility ● Other references: Spotify, Amazon, Wikipedia, TikTok, Apple Podcasts, Google Programmatic Ads Contact the Guests: - William Corbin: william@inceptionpoint.ai | LinkedIn - John McDermott: john@caloroga.com | LinkedIn - Benjamin Field: benjamin.field@deepfusionfilms.com | LinkedIn - Mark Francis: mark@caloroga.com | LinkedIn | caloroga.com - Marv: themarvzone.org Like, comment, and subscribe for more deep dives into the future of podcasting and media! #Podcasting #AI #ArtificialIntelligence #Descript #HeyGen #PodcastTools #Ethics #MediaInnovation
Finding it difficult to navigate the changing landscape of data protection? In this episode of the DMI podcast, host Will Francis speaks with Steven Roberts, Group Head of Marketing at Griffith College, Chartered Director, certified Data Protection Officer, and long-time marketing leader. Steven demystifies GDPR, AI governance, and the rapidly evolving regulatory environment that marketers must now navigate.

Steven explains how GDPR enforcement has matured, why AI has created a new layer of complexity, and how businesses can balance innovation with compliance. He breaks down the EU AI Act, its risk-based structure, and its implications for organizations inside and outside the EU. Steven also shares practical guidance for building internal AI policies, tackling "shadow AI," reducing data breach risks, and supporting teams with training and clear governance. For an even deeper look into how businesses can ensure data protection compliance, check out Steven's book, Data Protection for Business: Compliance, Governance, Reputation and Trust.

Steven's Top 3 Tips
1. Build data protection into projects from the start, using tools like Data Protection Impact Assessments to uncover risks early.
2. Invest in regular staff training to avoid common mistakes caused by human error.
3. Balance compliance with business performance by setting clear policies, understanding your risk appetite, and iterating your AI governance over time.

The Ahead of the Game podcast is brought to you by the Digital Marketing Institute and is available on YouTube, Apple Podcasts, Spotify, and all other podcast platforms. And if you enjoyed this episode please leave a review so others can find us. If you have other feedback or would like to be a guest on the show, email the podcast team!

Timestamps
01:29 – AI's impact on GDPR & the explosion of new global privacy laws
03:26 – Is GDPR the global gold standard?
05:04 – GDPR enforcement today: Who gets fined and why
07:09 – Cultural attitudes toward data: EU vs. US
08:51 – The EU AI Act explained: Risk tiers, guardrails & human oversight
10:48 – What businesses must do: DPIAs, fundamental rights assessments & more
13:38 – Shadow AI, risk appetite & internal governance challenges
17:10 – Should you upload company data to ChatGPT?
20:40 – How the AI Act affects countries outside the EU
24:47 – Will privacy improve over time?
28:45 – What teams can do now: Tools, processes & data audits
33:49 – Data enrichment tools: targeting vs. legality
36:47 – Will anyone actually check your data practices?
40:06 – Steven's top tips for navigating GDPR & AI
Walk the floor at Web Summit without leaving your headphones. We sit down with Jo Smets, founder of BluePanda and president of the Portuguese Belgian Luxembourg Chamber of Commerce, to unpack how nearshoring and AI are reshaping CRM, marketing, and team delivery across Europe.

We start with clarity on nearshoring: why time zone, culture, and communication speed beat cost alone, and how that proximity pays off when you're wiring AI into daily work. Jo shares how BluePanda applies AI beyond demos—recruitment, performance, and operations—then translates those lessons into client outcomes. We compare adoption patterns across startups and corporates, call out the real blocker (end-to-end process automation), and map the role of global networks like BBN for keeping pace with tools and trends.

The conversation pivots to trust and governance: practical ways to protect data, when on-prem makes sense, and how to use EU AI Act guidance without stalling innovation. We explore the marketing shift from SEO to GEO, the idea of "AI-proof" websites, and the move toward dynamic, persona-aware content that renders at load. Jo offers a simple path to progress—pick one process, pilot, measure, educate—while keeping empathy at the core as managers start leading both humans and AI agents. Along the way, we spotlight how chambers and communities connect ecosystems across borders, turning events into learning loops and real partnerships.

Looking to modernize without losing your team's identity? You'll leave with a plan for small wins, a lens for tool curation, and a sharper view of where marketing is headed next. If this resonated, subscribe, share it with a colleague who's wrestling with AI adoption, and drop a review to help others find the show.

This episode was recorded in the official podcast booth at Web Summit (Lisbon) on November 12, 2025. Check the video footage, read the blog article and show notes here: https://webdrie.net/why-european-teams-win-with-nearshoring-and-practical-ai/
In the new episode of AI in Finance, what has become the norm in recent months happens once again: the pace of the AI industry keeps accelerating while regulation, infrastructure, and use cases try to keep up. Sascha and Maik have brought along so much news that it could easily have filled a three-hour episode. It is a whirlwind tour across Europe, Silicon Valley, Big Tech, new models, new tools, and the question of how close we actually are to real, everyday AI.
Speaker: Professor Lilian Edwards, Emeritus Professor of Law, Innovation & Society, Newcastle Law School

Biography: Lilian Edwards is a leading academic in the field of Internet law. She has taught information technology law, e-commerce law, privacy law and Internet law at undergraduate and postgraduate level since 1996 and has been involved with law and artificial intelligence (AI) since 1985. She is now Emerita Professor at Newcastle and Honorary Professor at CREATe, University of Glasgow, which she helped co-found. She is the editor and major author of Law, Policy and the Internet, one of the leading textbooks in the field of Internet law (Hart, 2018; new edition forthcoming with Urquhart and Goanta, 2026). She won the Future of Privacy Forum award in 2019 for best paper ("Slave to the Algorithm", with Michael Veale) and the award for best non-technical paper at FAccT in 2020, on automated hiring. In 2004 she won the Barbara Wellberry Memorial Prize for work on online privacy, where she invented the notion of data trusts, a concept which ten years later was proposed in EU legislation. She is a former fellow of the Alan Turing Institute on Law and AI, and the Institute for the Future of Work. Edwards has consulted for, inter alia, the EU Commission, the OECD, and WIPO.

Abstract: The right to an explanation is having another moment. Well after the heyday of 2016-2018, when scholars tussled over whether the GDPR (in either art 22 or arts 13-15) conferred a right to explanation, the CJEU case of Dun & Bradstreet has finally confirmed its existence, and the Platform Work Directive has wholesale revamped art 22 in its Algorithmic Management chapter. Most recently the EU AI Act added its own Frankenstein-like right to an explanation of AI systems (art 86). None of these provisions, however, pins down what the essence of the explanation should be, given the many notions that can be invoked here: a faithful description of source code or training data; an account that enables challenge or contestation; a "plausible" description that may be appealing in a behaviouralist sense but might be actually misleading when operationalised, e.g. to generate a medical course of treatment. Agarwal et al argue that the tendency of UI designers, regulators and judges alike to lean towards the plausibility end may be unsuited to large language models, which represent far more of a black box in size and optimisation than conventional machine learning, and which are trained to present encouraging but not always accurate accounts of their workings. Yet this is also the direction of travel taken by CJEU Dun & Bradstreet, above. This paper argues that explanations of large model outputs may present novel challenges needing thoughtful legal mandates.

For more information (and to download slides) see: https://www.cipil.law.cam.ac.uk/seminars-and-events/cipil-seminars
AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store
Welcome to a Special Episode of AI Unraveled: The Cost of Data Gravity: Solving the Hybrid AI Deployment Nightmare.

We are tackling the silent budget killer in enterprise AI: data gravity. You have petabytes of proprietary data—the "mass" that attracts apps and services—but moving it to the cloud for inference is becoming a financial and regulatory nightmare. We break down why the cloud-first strategy is failing for heavy data, the hidden tax of egress fees, and the new architectural playbook for 2025.

Source: https://www.linkedin.com/pulse/cost-data-gravity-solving-hybrid-ai-deployment-nightmare-djamgatech-ic42c

Strategic Pillars & Topics
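As a rough illustration of why egress fees become the "silent budget killer" the episode describes, here is a back-of-the-envelope cost model. The per-gigabyte price and data volume are invented assumptions for the sketch, not figures from the episode or from any provider's price list.

```python
# Back-of-the-envelope model of the "egress tax" behind data gravity.
# All prices and volumes below are illustrative assumptions.

GB_PER_TB = 1_000  # decimal convention

def monthly_egress_cost(tb_moved: float, price_per_gb: float = 0.09) -> float:
    """Cost of pulling a given volume of data out of a cloud region per month."""
    return tb_moved * GB_PER_TB * price_per_gb

# Scenario: an inference pipeline in another environment re-reads 200 TB of
# proprietary data every month.
egress = monthly_egress_cost(200)   # 200 * 1000 * $0.09 = $18,000 / month
yearly = egress * 12                # $216,000 / year

print(f"Monthly egress: ${egress:,.0f}")
print(f"Yearly egress:  ${yearly:,.0f}")

# The hybrid alternative the episode points toward: move the model to the
# data (on-prem or colocated inference), so the recurring transfer cost
# approaches zero at the price of owning the serving infrastructure.
```

The arithmetic is trivial on purpose: the point is that egress is a recurring, volume-proportional tax, which is why architectures that keep inference next to heavy data can dominate cloud-first designs once data volumes grow.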
In this episode, we start by discussing Greg's trip to India and the upcoming India AI Impact Summit in February 2026 (00:29). We then unpack the Trump Administration's draft executive order to preempt state AI laws (07:46) and break down the European Commission's new “digital omnibus” package, including proposed adjustments to the AI Act and broader regulatory simplification efforts (17:51). Finally, we discuss Anthropic's report on a China-backed “highly sophisticated cyber espionage campaign” using Claude and the mixed reactions from cybersecurity and AI policy experts (37:37).
In this episode of Alexa's Input (AI) Podcast, host Alexa Griffith sits down with Liana Tomescu, founder of Sonny Labs and host of the AI Hacks podcast. Dive into the world of AI security and compliance as Liana shares her journey from Microsoft to founding her own company. Discover the challenges and opportunities in making AI applications secure and compliant, and learn about the latest in AI regulations, including the EU AI Act. Whether you're an AI enthusiast or a tech professional, this episode offers valuable insights into the evolving landscape of AI technology.

Links
SonnyLabs Website: https://sonnylabs.ai/
SonnyLabs LinkedIn: https://www.linkedin.com/company/sonnylabs-ai/
Liana's LinkedIn: https://www.linkedin.com/in/liana-anca-tomescu/

Alexa's Links
LinkTree: https://linktr.ee/alexagriffith
Alexa's Input YouTube Channel: https://www.youtube.com/@alexa_griffith
Website: https://alexagriffith.com/
LinkedIn: https://www.linkedin.com/in/alexa-griffith/
Substack: https://alexasinput.substack.com/

Keywords
AI security, compliance, female founder, Sonny Labs, EU AI Act, cybersecurity, prompt injection, AI agents, technology innovation, startup journey

Chapters
00:00 Introduction to Liana Tomescu and Sonny Labs
02:53 The Journey of a Female Founder in Tech
05:49 From Microsoft to Startup: The Transition
09:04 Exploring AI Security and Compliance
11:41 The Role of Curiosity in Entrepreneurship
14:52 Understanding Sonny Labs and Its Mission
17:52 The Importance of Community and Networking
20:42 MCP: Model Context Protocol Explained
23:54 Security Risks in AI and MCP Servers
27:03 The Future of AI Security and Compliance
38:25 Understanding Prompt Injection Risks
45:34 The Shadow AI Phenomenon
45:48 Navigating the EU AI Act
52:28 Banned and High-Risk AI Practices
01:00:43 Implementing AI Security Measures
01:17:28 Exploring AI Security Training
René Fergen, CEO of Jupus, holds a German law degree himself and deliberately traded the classic legal career for the digitalization of the legal industry. His company Jupus has developed an AI secretariat that helps law firms automate all non-legal tasks, from the first client contact through to invoicing. The backdrop is a massive skills shortage: while the number of lawyers in Germany keeps rising, the number of newly trained legal assistants has fallen by more than 80% over the past 30 years. In conversation with Christoph Burseg, René Fergen talks about disruption in one of the oldest industries in the world, the challenge of developing AI software in the highly sensitive legal environment, and why law firms no longer have any alternative to digitalization if they want to stay competitive. In this episode you will learn: - Which tasks the Jupus software takes over to relieve lawyers and their staff. - That around 165,000 lawyers practice in Germany, and how the industry's staff shortage affects access to justice. - How Jupus analyzes and structures unstructured documents (e.g. 10,000 pages of contracts or correspondence) in fractions of a second, a task that would otherwise take days. - What the Jupus phone AI can do and how it deals with callers ranging from judges to salespeople. - Why the Jupus team has grown to more than 60 people in less than three years and has raised over 8 million euros in capital. - Why the CEO of Jupus considers the EU AI Act, from a society-wide perspective, a "catastrophe" for Europe's competitiveness and innovation. Christoph on LinkedIn: https://www.linkedin.com/in/christophburseg Contact us via Instagram: https://www.instagram.com/vodafonebusinessde/
Explore how leaders and coaches can adopt AI without losing the human core, turning compliance and ethics into everyday practice rather than a side office. Colin Cosgrove shares a practical arc for AI readiness, concrete use cases, and a clear view of risk, trust, and governance.
• journey from big-tech compliance to leadership coaching
• why AI changes the leadership environment and decision pace
• making compliance human: transparency, explainability, consent
• AI literacy across every function, not just data teams
• the AI leader archetype arc for mindset and readiness
• practical augmentation: before, during, after coaching sessions
• three risks: reputational, relational, regulatory
• leader as coach: trust, questions, and human skills
• EU AI Act overview and risk-based obligations
• governance, accountability, and cross-…
Reach out to Colin on LinkedIn and check out his website: Movizimo.com.
Support the show
BelemLeaders – your organization's trusted partner for leader and team development. Visit our website to connect: belemleaders.org, or book a discovery call today: belem.as.me/discovery
Until next time, keep doing great things!
In this episode of The Digital Executive, host Brian Thomas welcomes Yakir Golan, CEO and Co-founder of Kovrr, a global leader in cyber and AI risk quantification. Drawing from his early career in Israeli intelligence and later roles in software, hardware, and product management, Yakir explains how his background shaped his holistic approach to understanding complex, interconnected risk systems. Yakir breaks down why quantifying AI and cyber risk—rather than relying on subjective, color-coded scoring—is becoming essential for enterprise leaders, boards, and regulators. He explains how Kovrr's new AI Risk Assessment and Quantification module helps organizations model real financial exposure, understand high-impact “tail risks,” and align security, GRC, and finance teams around a shared, objective language. Looking ahead, Yakir discusses how global regulation, including the EU AI Act, is accelerating the need for measurable, defensible risk management. He outlines a future where AI risk quantification becomes a board-level expectation and a foundation for resilient, responsible innovation. Through Kovrr's mission, Yakir aims to equip enterprises with the same level of intelligence-driven decision making once reserved for national security—now applied to the rapidly evolving digital risk landscape. If you liked what you heard today, please leave us a review on Apple or Spotify. See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
Guest article by Paul Dongha, co-author of Governing the Machine: How to navigate the risks of AI and unlock its true potential.
Artificial Intelligence (AI) has moved beyond the realm of IT; it is now the defining strategic challenge for every modern organisation. The global rush to adopt AI is shifting from a sprint for innovation to a race for survival. Yet as businesses scramble to deploy powerful systems, from predictive analytics to generative AI, they risk unleashing a wave of unintended consequences that could cripple them. That warning sits at the heart of Governing the Machine: How to navigate the risks of AI and unlock its true potential, a timely new guide for business leaders.
Governing the Machine
The authors, Dr Paul Dongha, Ray Eitel-Porter, and Miriam Vogel, argue that the drive to embrace AI must be matched by an equally urgent determination to govern it. Drawing on extensive experience advising global boardrooms, they cut through technical jargon to focus on the organisational realities of AI risk. Their step-by-step approach shows how companies can build responsible AI capability, adopting new systems effectively without waiting for perfect regulation or fully mature technology. That wait-and-see strategy, they warn, is a losing one: delay risks irrelevance, while reckless deployment invites legal and reputational harm. The evidence is already visible in a growing list of AI failures, from discriminatory algorithms in public services to generative models fabricating news or infringing intellectual property. These are not abstract technical flaws but concrete business risks with real-world consequences.
Whose problem is it anyway?
According to the authors, it is everyone's. The book forcefully argues that AI governance cannot be siloed within the technology department. It demands a cross-enterprise approach: active leadership from the C-suite, with Legal counsel, Human Resources, Privacy, and Information Security teams as well as frontline staff all playing a part. Rather than just sounding the alarm, the book provides a practical framework for action. It guides readers through the steps of building a robust AI governance programme, including defining clear principles and policies, establishing accountability, and implementing crucial checkpoints. A core part of this framework is a clear-eyed look at the nine key risks organisations must manage: accuracy, fairness and bias, explainability, accountability, privacy, security, intellectual property, safety, and the impact on the workforce and environment. Each risk area is explained, and numerous controls that mitigate and manage these risks are listed, with ample references for the interested reader to follow up.
Organisations should carefully consider implementing a Governance, Risk and Compliance (GRC) system, which brings together all key aspects of AI governance. GRC systems are available both from large tech companies and from specialist vendors. A GRC system ties together all key components of AI governance, giving management a single view of their deployed AI systems and a window into every stage of AI governance for systems under development. The book is populated with numerous case studies and interviews with senior executives from some of the largest and best-known organisations in the world that are grappling with AI risk management. The authors also navigate the complex and rapidly evolving global regulatory landscape.
With the European Union implementing its comprehensive AI Act and the United States advancing a fragmented patchwork of state and federal rules, a strong, adaptable internal governance system is presented as the only viable path forward. The EU AI Act, now in force with staggered compliance deadlines over the coming two years, requires all organisations operating within the EU to implement risk-mitigation controls and evidence their compliance. A key date is 2 August 2026, by which time all 'high-risk' AI systems must meet the Act's requirements.
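To make this concrete, here is a minimal sketch of what a GRC-style 'single view' might look like in code: an inventory in which each AI system is scored against the book's nine risk areas and linked to its mitigating controls. All class, field, and control names, the rating scale, and the sample entry are illustrative assumptions for this sketch, not anything prescribed by the authors or by any particular GRC product.

from dataclasses import dataclass, field

# The nine risk areas named above.
RISK_AREAS = [
    "accuracy", "fairness_and_bias", "explainability", "accountability",
    "privacy", "security", "intellectual_property", "safety",
    "workforce_and_environment",
]

@dataclass
class AISystemRecord:
    """One entry in a hypothetical GRC-style AI inventory."""
    name: str
    lifecycle_stage: str          # e.g. "development" or "deployed"
    owner: str                    # accountable business owner
    risk_ratings: dict = field(default_factory=dict)  # area -> "low"/"medium"/"high"
    controls: dict = field(default_factory=dict)      # area -> list of mitigations

    def open_gaps(self):
        # Risk areas rated medium or high that have no documented control yet.
        return [area for area, level in self.risk_ratings.items()
                if level in ("medium", "high") and not self.controls.get(area)]

# Management's single view across deployed and in-development systems.
inventory = [
    AISystemRecord(
        name="claims-triage-model",
        lifecycle_stage="deployed",
        owner="Head of Claims",
        risk_ratings={"fairness_and_bias": "high", "explainability": "medium"},
        controls={"fairness_and_bias": ["quarterly disparate-impact audit"]},
    ),
]

for record in inventory:
    print(record.name, "unmitigated risk areas:", record.open_gaps())

Run as-is, this flags "explainability" as an open gap for the sample system, which is exactly the kind of oversight a GRC view is meant to surface.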
In this special episode tied to Accountantsdag 2025, themed Reality Check, Vitamine A once again dives into the impact of artificial intelligence on the accounting profession. Three guests, three perspectives, and one big question: what does AI mean for the profession, the organisation, and the human being behind the accountant.
Mona de Boer (PwC, Responsible AI) explains how AI has become a daily reality and why organisations must now decide which values they stand for. She discusses the significance of the EU AI Act and the rise of AI assurance as a new domain within trust in technology, stressing that the accountant is not losing ground but gaining importance.
Nart Wielaard takes the audience through the concept of the Zero Person Company, an experimental organisation run by agents instead of people. The experiment shows that AI cannot copy a human, but that processes can be designed in a fundamentally different way. The accountant plays a role there as coach, supervisor, and quality guardian of AI-driven processes.
With Marjan Heemskerk, the focus shifts to the daily practice of entrepreneurs. She sees AI taking over basic questions, but above all creating room for an accountant who interprets, thinks along, and provides context. Soft skills become crucial. The challenge for firms is to deploy AI responsibly, bring employees along, and at the same time resist the temptation of shortcuts.
The episode ends with a reality check that is both technological and human. AI changes a great deal, but the foundation of the accounting profession still stands: trust, independence, and the ability to interpret reality.
Vitamine A has covered AI before. Esther Kox, Hakan Koçak, and Nart Wielaard will also speak at Accountantsdag 2025, on 19 November 2025.
Accountantsdag 2025: http://www.accountantsdag.nl
Vitamine A #63 | AI als assistent, niet als autoriteit... In gesprek met Esther Kox
Vitamine A #62 | AI op kantoor: Twijfelen of toepassen? Met Hakan Koçak
Vitamine A #43 | Betrouwbare AI en verantwoording. Hoe doe je dat? Met Mona de Boer (PwC)
Vitamine A #34 | Wat betekent AI voor accountants die op zoek zijn naar waarheid?
Oliver Patel has built a sizeable online following for his social media posts and Substack about enterprise AI governance, using clever acronyms and visual frameworks to distill insights from his experience at AstraZeneca, a major global pharmaceutical company. In this episode, he details his career journey from academic theory to government policy and now practical application, and offers insights for those new to the field. He argues that effective enterprise AI governance requires being pragmatic and picking your battles, since the role isn't to stop AI adoption but to enable organizations to adopt it safely and responsibly at speed and scale. He notes that core pillars of modern AI governance, such as AI literacy, risk classification, and maintaining an AI inventory, are incorporated into the EU AI Act and thus essential for compliance. Looking forward, Patel identifies AI democratization—how to govern AI when everyone in the workforce can use and build it—as the biggest hurdle, and offers thoughts on how enterprises can respond. Oliver Patel is the Head of Enterprise AI Governance at AstraZeneca. Before moving into the corporate sector, he worked for the UK government as Head of Inbound Data Flows, where he focused on data policy and international data transfers, and was a researcher at University College London. He serves as an IAPP Faculty Member and a member of the OECD's Expert Group on AI Risk. His forthcoming book, Fundamentals of AI Governance, will be released in early 2026.
Transcript
Enterprise AI Governance Substack
Top 10 Challenges for AI Governance Leaders in 2025 (Part 1)
Fundamentals of AI Governance book page
Over seven years since its introduction, the GDPR continues to evolve as new technologies, court rulings and regulatory guidance reshape how organisations handle personal data. In this episode, we bring you insights from our recent webinar, where experts unpacked the latest developments in GDPR and global data protection. With the EU AI Act now in force, shifting cross-border data frameworks, and regulators issuing record fines, compliance has never been more complex — or more crucial. Tune in to learn:
• What recent GDPR fines reveal about regulator priorities
• How to navigate overlaps between AI regulation and data protection rules
• Best practices for managing EU–UK–US data transfers after new adequacy decisions
• How to address emerging risks around biometrics, children's data, and AI profiling
• Real-world case studies showing how organisations are adapting to change
This episode is a must-listen for data protection officers, compliance professionals and legal teams looking to strengthen governance, maintain trust, and stay ahead in a fast-moving regulatory landscape.
We dug into the quarterly reports from Mol and OTP, looking closely at the latest numbers that can give investors a handle on whether to think about buying or selling. Nagy Viktor, lead analyst at Portfolio, discussed the topic. The second half of the show focused on the EU AI Act: the European Commission would partially postpone the entry into force of the world's strictest AI regulation, after intense pressure on Brussels from the United States and the big technology companies. We also asked Petrányi Dóra, managing director responsible for CMS's Central and Eastern European region, about the background to the decision and the obligations Hungarian companies may face under the AI Act. Main segments:
Intro − (00:00)
Mol and OTP have reported: to buy or not to buy? − (02:26)
EU AI Act: a reprieve for Big Tech − (14:15)
Capital markets outlook − (25:44)
Image source: Getty Images
See omnystudio.com/listener for privacy information.
Join host Bobby Brill as he sits down with ServiceNow's AI legal and governance experts to break down the complex world of AI regulations. Andrea LaFountain (Director of AI Legal), Ken Miller (Senior Director of Product Legal), and Navdeep Gill (Staff Senior Product Manager, Responsible AI) explain how organizations can navigate the growing landscape of AI compliance. In this episode, you'll learn about three major regulatory approaches: the risk-based EU AI Act, Colorado's algorithmic discrimination law, and the NIST voluntary framework. The experts discuss practical strategies for complying with multiple regulations simultaneously, using the EU AI Act as a baseline and measuring the delta for new requirements.
Key topics covered:
- Why proactive compliance matters before regulations fully take effect
- How AI Control Tower helps discover and manage AI systems across your enterprise
- The exponential math behind AI compliance (vendors, employees, third parties)
- Setting up governance policies for high-risk AI use cases
- Timeline for major compliance deadlines (Colorado June 2026, EU August 2026)
- The real costs of waiting for your first violation
Whether you're managing AI deployment, working in compliance, or trying to understand the regulatory landscape, this episode provides actionable insights on building responsible AI governance infrastructure.
Guests: Andrea LaFountain - Director, AI Legal; Ken Miller - Senior Director, Product Legal; Navdeep Gill - Staff Senior Product Manager, Responsible AI
Host: Bobby Brill
Chapters:
00:00 Introduction to AI and Regulations
00:45 Meet the Experts
01:52 Overview of Key AI Regulations
03:03 Compliance Strategies for AI Regulations
07:33 ServiceNow's AI Control Tower
14:02 Challenges and Risks in AI Governance
16:04 Future of AI Regulations
18:34 Conclusion and Final Thoughts
See omnystudio.com/listener for privacy information.
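As a rough sketch of the "EU AI Act as baseline, measure the delta" strategy described above, a compliance team could model each regime as a set of required controls and diff it against the baseline. The control names and regime contents below are simplified placeholders for illustration, not the actual requirements of any regulation.

# Controls the organisation already operates for its EU AI Act baseline
# (names are illustrative placeholders).
eu_ai_act_baseline = {
    "ai_system_inventory", "risk_classification", "human_oversight",
    "technical_documentation", "post_market_monitoring",
}

# What each additional regime is assumed to require in this sketch.
other_regimes = {
    "colorado_ai_law": {
        "risk_classification", "impact_assessment",
        "algorithmic_discrimination_review", "consumer_notice",
    },
    "nist_ai_rmf": {
        "ai_system_inventory", "risk_classification", "measurement_plan",
    },
}

# The delta: requirements not already covered by the baseline.
for regime, required in other_regimes.items():
    delta = sorted(required - eu_ai_act_baseline)
    print(f"{regime}: {len(delta)} additional controls -> {delta}")

On these made-up inputs the script reports three extra controls for the Colorado law and one for the NIST framework; the point of the approach is that each new regulation only costs the team its delta, not a compliance programme built from scratch.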
Get featured on the show by leaving us a Voice Mail: https://bit.ly/MIPVM
In this podcast, discover how best to navigate California's new employment AI regulations, which recently went into effect on October 1st. The speaker explains how using Automated Decision Systems (a category that includes AI) to make employment decisions can violate California law if those tools are found to discriminate against employees or applicants, directly or indirectly, on the basis of protected characteristics such as race, age, and gender. The discussion also covers other recent AI regulations taking shape around the world, such as the EU AI Act. Moderator: Adam Wehler, Director of eDiscovery and Litigation Technology, Smith Anderson. Speaker: Kassi Burns, Senior Attorney, Trial and Global Disputes, King & Spalding.
Kevin Werbach speaks with Caroline Louveaux, Chief Privacy, AI, and Data Responsibility Officer at Mastercard, about what it means to make trust mission critical in the age of artificial intelligence. Caroline shares how Mastercard built its AI governance program long before the current AI boom, grounding it in the company's Data and Technology Responsibility Principles. She explains how privacy-by-design practices evolved into a single global AI governance framework aligned with the EU AI Act, the NIST AI Risk Management Framework, and related standards. The conversation explores how Mastercard balances innovation speed with risk management, automates low-risk assessments, and maintains executive oversight through its AI Governance Council. Caroline also discusses the company's work on agentic commerce, where autonomous AI agents can initiate payments, and why trust, certification, and transparency are essential for such systems to succeed. Caroline unpacks what it takes for a global organization to innovate responsibly — from cross-functional governance and "tone from the top" to partnerships like the Data & Trust Alliance and efforts to harmonize global standards. Caroline emphasizes that responsible AI is a shared responsibility and that companies that can "innovate fast, at scale, but also do so responsibly" will be the ones that thrive. Caroline Louveaux leads Mastercard's global privacy and data responsibility strategy. She has been instrumental in building Mastercard's AI governance framework and shaping global policy discussions on data and technology. She serves on the board of the International Association of Privacy Professionals (IAPP), the WEF Task Force on Data Intermediaries, the ENISA Working Group on AI Cybersecurity, and the IEEE AI Systems Risk and Impact Executive Committee, among other activities.
Transcript
How Mastercard Uses AI Strategically: A Case Study (Forbes, 2024)
Lessons From a Pioneer: Mastercard's Experience of AI Governance (IMD, 2023)
As AI Agents Gain Autonomy, Trust Becomes the New Currency. Mastercard Wants to Power Both. (Business Insider, July 2025)
This episode is sponsored by Deel. Ensure fair, consistent reviews with Deel's calibration template. Deel's free Performance Calibration Template helps HR teams and managers run more equitable, structured reviews. Use it to align evaluations with business goals, reduce bias in ratings, and ensure every performance conversation is fair, consistent, and grounded in shared standards. Download now: www.deel.com/nickday
In this episode of the HR L&D Podcast, host Nick Day explores how HR can use AI to become more strategic and more human. The conversation covers where AI truly fits in HR, what changes with the EU AI Act, and how leaders can turn time saved on admin into culture, capability, and impact.
You will hear practical frameworks, including a simple "4Ps plus 2" model for HR AI, human-in-the-loop hiring, guardrails to reduce hallucinations, and a clear view on when AI must be 100 percent accurate. The discussion also outlines a modern HR operating model with always-on self-service, plus policy steps for ethical, explainable AI.
Whether you are an HR leader, CEO, or L&D professional, this conversation will help you move from pilots to scaled adoption and build an AI-ready organization. Expect actionable steps to improve employee experience, strengthen compliance, and unlock productivity and performance across your teams.
100X Book on Amazon: https://www.amazon.com/dp/B0D41BP5XT
Nick Day's LinkedIn: https://www.linkedin.com/in/nickday/
Find your ideal candidate with our job vacancy system: https://jgarecruitment.ck.page/919cf6b9ea
Sign up to the HR L&D Newsletter - https://jgarecruitment.ck.page/23e7b153e7
00:00 Intro & Preview
02:25 What HR Is For
03:54 Why HR + AI Now
06:19 AI as Augmentation
07:43 HR AI Framework & Use Cases
10:14 Guardrails: Hallucinations & Accuracy
12:45 Guardrails: Bias & Human in the Loop
16:58 Recruiting with AI
21:01 EU AI Act for HR
25:16 HR Team of the Future
25:56 New HR Operating Model
31:54 Tools for Culture Change
35:35 Rethink Processes
As financial services accelerate their digital transformations, AI is reshaping how institutions identify, assess, and manage risk. But with that transformation comes an equally complex web of systemic risks, regulatory challenges, and questions about accountability. In this episode of the AI in Business podcast, host Matthew DeMello, Head of Content at Emerj, speaks with Miriam Fernandez, Director in the Analytical Innovation Team specializing in AI research at S&P Global Ratings, and Sudeep Kesh, Chief Innovation Officer at S&P Global Ratings. Together, they unpack how generative AI, agentic systems, and regulatory oversight are evolving within one of the most interconnected sectors of the global economy. The conversation explores how AI is amplifying both efficiency and exposure across financial ecosystems — from the promise of multimodal data integration in risk management to the growing challenge of concentration and contagion risks in increasingly digital markets. Miriam and Sudeep discuss how regulators are responding through risk-based frameworks such as the EU AI Act and DORA, and how the private sector is taking a larger role in ensuring transparency, compliance, and trust. Want to share your AI adoption story with executive peers? Click emerj.com/e2 for more information and to be a potential future guest on Emerj's flagship ‘AI in Business' podcast! If you've enjoyed or benefited from some of the insights of this episode, consider leaving us a five-star review on Apple Podcasts, and let us know what you learned, found helpful, or liked most about this show!
In this episode of the Fit4Privacy Podcast, host Punit Bhatia explores the EU AI Act: why it matters, what it requires, and how it impacts your business, even outside the EU. You will also hear about the Act's risk-based approach, the four categories of AI systems (unacceptable, high, limited, and minimal risk), and the penalties for non-compliance, which can be as high as 7% of global turnover or €35 million, whichever is higher. Just like GDPR, the EU AI Act has global reach—so if your company offers AI-based products or services to EU citizens, it applies to you. Listen in to understand the requirements and discover how to turn AI compliance into an opportunity for building trust, demonstrating responsibility, and staying ahead of the competition.
KEY CONVERSATION
00:00:00 Introduction to the EU AI Act
00:01:22 Why the EU AI Act Matters to Your Business
00:03:40 Risk Categories Under the EU AI Act
00:04:52 Key Timelines and Provisions
00:06:07 Compliance Requirements
00:07:09 Leveraging the EU AI Act for Competitive Advantage
00:08:38 Conclusion and Contact Information
ABOUT HOST
Punit Bhatia is one of the leading privacy experts who works independently and has worked with professionals in over 30 countries. Punit works with business and privacy leaders to create an organization culture with high privacy awareness and compliance as a business priority. Selectively, Punit is open to mentoring and coaching professionals. Punit is the author of the books "Be Ready for GDPR" (rated the best GDPR book), "AI & Privacy – How to Find Balance", "Intro To GDPR", and "Be an Effective DPO". Punit is a global speaker who has spoken at over 30 global events. Punit is the creator and host of the FIT4PRIVACY Podcast, which has been featured among the top GDPR and privacy podcasts. As a person, Punit is an avid thinker who believes in thinking, believing, and acting in line with one's values to have joy in life. He has developed a philosophy named "ABC for joy of life", which he passionately shares. Punit is based in Belgium, the heart of Europe.
RESOURCES
Websites: www.fit4privacy.com, www.punitbhatia.com
Podcast: https://www.fit4privacy.com/podcast
Blog: https://www.fit4privacy.com/blog
YouTube: http://youtube.com/fit4privacy
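For a back-of-the-envelope feel for the penalty ceiling mentioned above, the Act's top tier (for prohibited practices) works out to the higher of €35 million or 7% of worldwide annual turnover. Here is a minimal sketch; the turnover figure is a made-up example.

def max_administrative_fine(global_turnover_eur: float) -> float:
    # Top-tier ceiling under the EU AI Act: the higher of EUR 35 million
    # or 7% of total worldwide annual turnover.
    return max(35_000_000, 0.07 * global_turnover_eur)

# Worked example: a company with EUR 2 billion in global turnover.
turnover = 2_000_000_000
print(f"Maximum exposure: EUR {max_administrative_fine(turnover):,.0f}")
# 7% of 2 billion is 140 million, which exceeds the 35 million floor,
# so the ceiling here is EUR 140,000,000.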
“The Future of Life Institute has been working on AI governance-related issues for the last decade. We're already over 10 years old, and our mission is to steer very powerful technology away from large-scale harm and toward very beneficial outcomes. You could think about any kind of extreme risks from AI, all the way to existential or extinction risk, the worst kinds of risks and the benefits. You can think about any kind of large benefits that humans could achieve from technology, all the way through to utopia, right? Utopia is the biggest benefit you can get from technology. Historically, that has meant we have focused on climate change, for example, and the impact of climate change. We have also focused on bio-related risks, pandemics and nuclear security issues. If things go well, we will be able to avoid these really bad downsides in terms of existential risk, extinction risks, mass surveillance, and really disturbing futures. We can avoid that very harmful side of AI or technology, and we can achieve some of the benefits.” Today, we take a closer look at the future of artificial intelligence and the policies that determine its place in our societies. Risto Uuk is Head of EU Policy and Research at the Future of Life Institute in Brussels, and a philosopher and researcher at KU Leuven, where he studies the systemic risks posed by AI. He has worked with the World Economic Forum, the European Commission, and leading thinkers like Stuart Russell and Daniel Susskind. He also runs one of the most widely read newsletters on the EU AI Act. As this technology is transforming economies, politics, and human life itself, we'll talk about the promises and dangers of AI, how Europe is trying to regulate it, and what it means to build safeguards for a technology that may be more powerful than anything we've seen before.
Episode Website: www.creativeprocess.info/pod
Instagram: @creativeprocesspodcast