Podcasts about the EU AI Act

  • 389 podcasts
  • 668 episodes
  • 35m avg duration
  • 5 weekly new episodes
  • Latest: Nov 30, 2025



Best podcasts about the EU AI Act

Latest podcast episodes about the EU AI Act

Pods Like Us
AI in Podcasting: Humans, Voices, Ethics

Pods Like Us

Play Episode Listen Later Nov 30, 2025 79:25


Join host Martin Quibell (Marv) and a panel of industry experts as they dive deep into the impact of artificial intelligence on podcasting. From ethical debates to hands-on tools, discover how AI is shaping the future of audio and video content creation.

Guests:
● Benjamin Field (Deep Fusion Films)
● William Corbin (Inception Point AI)
● John McDermott & Mark Francis (Caloroga Shark Media)

Timestamps:
00:00 – Introduction
00:42 – Meet the Guests
01:45 – The State of AI in Podcasting
03:45 – Transparency, Ethics & the EU AI Act
06:00 – Nuance: How AI Is Used (Descript, Shorten Word Gaps, Remove Retakes)
08:45 – AI & Niche Content: Economic Realities
12:00 – Human Craft vs. AI Automation
15:00 – Job Evolution: Prompt Authors & QC
18:00 – Quality Control & Remastering
21:00 – Volume, Scale, and Audience
24:00 – AI Co-Hosts & Experiments (Virtually Parkinson, AI Voices)
27:00 – AI in Video & Visuals (HeyGen, Weaver)
30:00 – Responsibility & Transparency
33:00 – The Future of AI in Media
46:59 – Guest Contact Info & Closing

Tools & Platforms Mentioned:
● Descript: shorten word gaps, remove retakes, AI voice, scriptwriting, editing
● HeyGen: AI video avatars for podcast visuals
● Weaver (Deep Fusion Films): AI-driven video editing and archive integration
● Verbal: AI transcription and translation
● AI voices: for narration, co-hosting, and accessibility
● Other references: Spotify, Amazon, Wikipedia, TikTok, Apple Podcasts, Google

Contact the Guests:
- William Corbin: william@inceptionpoint.ai | LinkedIn
- John McDermott: john@caloroga.com | LinkedIn
- Benjamin Field: benjamin.field@deepfusionfilms.com | LinkedIn
- Mark Francis: mark@caloroga.com | LinkedIn | caloroga.com
- Marv: themarvzone.org

Like, comment, and subscribe for more deep dives into the future of podcasting and media!

#Podcasting #AI #ArtificialIntelligence #Descript #HeyGen #PodcastTools #Ethics #MediaInnovation

Ahead of the Game
GDPR and AI Regulation for Marketers

Ahead of the Game

Play Episode Listen Later Nov 28, 2025 52:55


Finding it difficult to navigate the changing landscape of data protection? In this episode of the DMI podcast, host Will Francis speaks with Steven Roberts, Group Head of Marketing at Griffith College, Chartered Director, certified Data Protection Officer, and long-time marketing leader. Steven demystifies GDPR, AI governance, and the rapidly evolving regulatory environment that marketers must now navigate. He explains how GDPR enforcement has matured, why AI has created a new layer of complexity, and how businesses can balance innovation with compliance. He breaks down the EU AI Act, its risk-based structure, and its implications for organizations inside and outside the EU. Steven also shares practical guidance for building internal AI policies, tackling "shadow AI," reducing data breach risks, and supporting teams with training and clear governance. For an even deeper look into how businesses can ensure data protection compliance, check out Steven's book, Data Protection for Business: Compliance, Governance, Reputation and Trust.

Steven's Top 3 Tips:
- Build data protection into projects from the start, using tools like Data Protection Impact Assessments to uncover risks early.
- Invest in regular staff training to avoid common mistakes caused by human error.
- Balance compliance with business performance by setting clear policies, understanding your risk appetite, and iterating your AI governance over time.

The Ahead of the Game podcast is brought to you by the Digital Marketing Institute and is available on YouTube, Apple Podcasts, Spotify, and all other podcast platforms. If you enjoyed this episode, please leave a review so others can find us. If you have other feedback or would like to be a guest on the show, email the podcast team!

Timestamps:
01:29 – AI's impact on GDPR & the explosion of new global privacy laws
03:26 – Is GDPR the global gold standard?
05:04 – GDPR enforcement today: who gets fined and why
07:09 – Cultural attitudes toward data: EU vs. US
08:51 – The EU AI Act explained: risk tiers, guardrails & human oversight
10:48 – What businesses must do: DPIAs, fundamental rights assessments & more
13:38 – Shadow AI, risk appetite & internal governance challenges
17:10 – Should you upload company data to ChatGPT?
20:40 – How the AI Act affects countries outside the EU
24:47 – Will privacy improve over time?
28:45 – What teams can do now: tools, processes & data audits
33:49 – Data enrichment tools: targeting vs. legality
36:47 – Will anyone actually check your data practices?
40:06 – Steven's top tips for navigating GDPR & AI

Web3 CMO Stories
Nearshoring Meets AI | S5 E49

Web3 CMO Stories

Play Episode Listen Later Nov 25, 2025 19:18 Transcription Available


Walk the floor at Web Summit without leaving your headphones. We sit down with Jo Smets, founder of BluePanda and president of the Portuguese Belgian Luxembourg Chamber of Commerce, to unpack how nearshoring and AI are reshaping CRM, marketing, and team delivery across Europe.

We start with clarity on nearshoring: why time zone, culture, and communication speed beat cost alone, and how that proximity pays off when you're wiring AI into daily work. Jo shares how BluePanda applies AI beyond demos (recruitment, performance, and operations), then translates those lessons into client outcomes. We compare adoption patterns across startups and corporates, call out the real blocker (end-to-end process automation), and map the role of global networks like BBN for keeping pace with tools and trends.

The conversation pivots to trust and governance: practical ways to protect data, when on-prem makes sense, and how to use EU AI Act guidance without stalling innovation. We explore the marketing shift from SEO to GEO, the idea of "AI-proof" websites, and the move toward dynamic, persona-aware content that renders at load. Jo offers a simple path to progress (pick one process, pilot, measure, educate) while keeping empathy at the core as managers start leading both humans and AI agents. Along the way, we spotlight how chambers and communities connect ecosystems across borders, turning events into learning loops and real partnerships.

Looking to modernize without losing your team's identity? You'll leave with a plan for small wins, a lens for tool curation, and a sharper view of where marketing is headed next. If this resonated, subscribe, share it with a colleague who's wrestling with AI adoption, and drop a review to help others find the show.

This episode was recorded in the official podcast booth at Web Summit (Lisbon) on November 12, 2025. Check the video footage, read the blog article and show notes here: https://webdrie.net/why-european-teams-win-with-nearshoring-and-practical-ai/

AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store

Welcome to a special episode of AI Unraveled: The Cost of Data Gravity: Solving the Hybrid AI Deployment Nightmare.

We are tackling the silent budget killer in enterprise AI: data gravity. You have petabytes of proprietary data, the "mass" that attracts apps and services, but moving it to the cloud for inference is becoming a financial and regulatory nightmare. We break down why the cloud-first strategy is failing for heavy data, the hidden tax of egress fees, and the new architectural playbook for 2025.

Source: https://www.linkedin.com/pulse/cost-data-gravity-solving-hybrid-ai-deployment-nightmare-djamgatech-ic42c

The AI Policy Podcast
Trump's Draft AI Preemption Order, EU AI Act Delays, and Anthropic's Cyberattack Report

The AI Policy Podcast

Play Episode Listen Later Nov 21, 2025 54:26


In this episode, we start by discussing Greg's trip to India and the upcoming India AI Impact Summit in February 2026 (00:29). We then unpack the Trump Administration's draft executive order to preempt state AI laws (07:46) and break down the European Commission's new "digital omnibus" package, including proposed adjustments to the AI Act and broader regulatory simplification efforts (17:51). Finally, we discuss Anthropic's report on a China-backed "highly sophisticated cyber espionage campaign" using Claude, and the mixed reactions from cybersecurity and AI policy experts (37:37).

AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store

Welcome to AI Unraveled (November 20, 2025): your daily strategic briefing on the business impact of AI.

Today's highlights: Saudi Arabia signs landmark AI deals with xAI and Nvidia; Europe scales back crucial AI and privacy laws; Anthropic courts Microsoft and Nvidia to break free from AWS; and Google's Gemini 3 climbs leaderboards, reinforcing its path toward AGI.

Alexa's Input (AI)
Shift Left Your AI Security with SonnyLabs Founder Liana Tomescu

Alexa's Input (AI)

Play Episode Listen Later Nov 17, 2025 64:23


In this episode of Alexa's Input (AI) Podcast, host Alexa Griffith sits down with Liana Tomescu, founder of SonnyLabs and host of the AI Hacks podcast. Dive into the world of AI security and compliance as Liana shares her journey from Microsoft to founding her own company. Discover the challenges and opportunities in making AI applications secure and compliant, and learn about the latest in AI regulations, including the EU AI Act. Whether you're an AI enthusiast or a tech professional, this episode offers valuable insights into the evolving landscape of AI technology.

Links:
SonnyLabs website: https://sonnylabs.ai/
SonnyLabs LinkedIn: https://www.linkedin.com/company/sonnylabs-ai/
Liana's LinkedIn: https://www.linkedin.com/in/liana-anca-tomescu/

Alexa's links:
LinkTree: https://linktr.ee/alexagriffith
Alexa's Input YouTube channel: https://www.youtube.com/@alexa_griffith
Website: https://alexagriffith.com/
LinkedIn: https://www.linkedin.com/in/alexa-griffith/
Substack: https://alexasinput.substack.com/

Keywords: AI security, compliance, female founder, SonnyLabs, EU AI Act, cybersecurity, prompt injection, AI agents, technology innovation, startup journey

Chapters:
00:00 Introduction to Liana Tomescu and SonnyLabs
02:53 The Journey of a Female Founder in Tech
05:49 From Microsoft to Startup: The Transition
09:04 Exploring AI Security and Compliance
11:41 The Role of Curiosity in Entrepreneurship
14:52 Understanding SonnyLabs and Its Mission
17:52 The Importance of Community and Networking
20:42 MCP: Model Context Protocol Explained
23:54 Security Risks in AI and MCP Servers
27:03 The Future of AI Security and Compliance
38:25 Understanding Prompt Injection Risks
45:34 The Shadow AI Phenomenon
45:48 Navigating the EU AI Act
52:28 Banned and High-Risk AI Practices
01:00:43 Implementing AI Security Measures
01:17:28 Exploring AI Security Training

Digitale Vorreiter - Vodafone Business Cases
Die Legal Tech Revolution: Wie das KI-Sekretariat von Jupus zur Lösung des Fachkräftemangels in Kanzleien beiträgt

Digitale Vorreiter - Vodafone Business Cases

Play Episode Listen Later Nov 17, 2025 39:58 Transcription Available


René Fergen, CEO of Jupus, is himself a qualified lawyer (Diplom-Jurist) who deliberately traded the classic legal career for digitalizing the legal industry. His company Jupus has developed an AI secretariat that helps law firms automate all non-legal tasks, from the first client contact through to invoicing. The backdrop is a massive skills shortage: while the number of lawyers in Germany keeps rising, the number of newly trained legal assistants (Rechtsanwaltsfachangestellte) has fallen by more than 80% over the last 30 years. In conversation with Christoph Burseg, René Fergen discusses disruption in one of the world's oldest industries, the challenge of building AI software for the highly sensitive legal environment, and why law firms no longer have an alternative to digitalization if they want to remain competitive.

In this episode you will learn:
- Which tasks the Jupus software takes over to relieve lawyers and their staff.
- That around 165,000 lawyers practice in Germany, and how the industry's staffing shortage affects access to justice.
- How Jupus analyzes and structures unstructured documents (e.g. 10,000 pages of contracts or correspondence) in fractions of a second, a task that would otherwise take days.
- What the Jupus phone AI can do and how it handles callers ranging from judges to salespeople.
- Why the Jupus team has grown to over 60 people in less than three years and raised over 8 million euros in capital.
- Why the CEO of Jupus regards the EU AI Act, from a society-wide perspective, as a "catastrophe" for Europe's competitiveness and innovation.

Christoph on LinkedIn: https://www.linkedin.com/in/christophburseg
Contact us via Instagram: https://www.instagram.com/vodafonebusinessde/

Conversations For Leaders & Teams
E89. Responsible AI for the Modern Leader & Coach w/Colin Cosgrove

Conversations For Leaders & Teams

Play Episode Listen Later Nov 15, 2025 34:36


Explore how leaders and coaches can adopt AI without losing the human core, turning compliance and ethics into everyday practice rather than a side office. Colin Cosgrove shares a practical arc for AI readiness, concrete use cases, and a clear view of risk, trust, and governance.

• journey from big-tech compliance to leadership coaching
• why AI changes the leadership environment and decision pace
• making compliance human: transparency, explainability, consent
• AI literacy across every function, not just data teams
• the AI leader archetype arc for mindset and readiness
• practical augmentation: before, during, after coaching sessions
• three risks: reputational, relational, regulatory
• leader as coach: trust, questions, and human skills
• EU AI Act overview and risk-based obligations
• governance, accountability, and cross-

Reach out to Colin on LinkedIn and check out his website: Movizimo.com.

BelemLeaders: your organization's trusted partner for leader and team development. Visit our website to connect: belemleaders.org or book a discovery call today! belem.as.me/discovery

Until next time, keep doing great things!

The Digital Executive
Quantifying AI Risk: Yakir Golan on Turning Cyber Threats Into Business Intelligence | Ep 1145

The Digital Executive

Play Episode Listen Later Nov 14, 2025 15:20


In this episode of The Digital Executive, host Brian Thomas welcomes Yakir Golan, CEO and co-founder of Kovrr, a global leader in cyber and AI risk quantification. Drawing from his early career in Israeli intelligence and later roles in software, hardware, and product management, Yakir explains how his background shaped his holistic approach to understanding complex, interconnected risk systems.

Yakir breaks down why quantifying AI and cyber risk, rather than relying on subjective, color-coded scoring, is becoming essential for enterprise leaders, boards, and regulators. He explains how Kovrr's new AI Risk Assessment and Quantification module helps organizations model real financial exposure, understand high-impact "tail risks," and align security, GRC, and finance teams around a shared, objective language.

Looking ahead, Yakir discusses how global regulation, including the EU AI Act, is accelerating the need for measurable, defensible risk management. He outlines a future where AI risk quantification becomes a board-level expectation and a foundation for resilient, responsible innovation. Through Kovrr's mission, Yakir aims to equip enterprises with the same level of intelligence-driven decision making once reserved for national security, now applied to the rapidly evolving digital risk landscape.

If you liked what you heard today, please leave us a review on Apple or Spotify. See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

Irish Tech News Audio Articles
Governing AI in the Age of Risk

Irish Tech News Audio Articles

Play Episode Listen Later Nov 14, 2025 6:59


Guest article by Paul Dongha, co-author of Governing the Machine: How to navigate the risks of AI and unlock its true potential. Artificial Intelligence (AI) has moved beyond the realm of IT; it is now the defining strategic challenge for every modern organisation. The global rush to adopt AI is shifting from a sprint for innovation to a race for survival. Yet as businesses scramble to deploy powerful systems, from predictive analytics to generative AI, they risk unleashing a wave of unintended consequences that could cripple them. That warning sits at the heart of Governing the Machine: How to navigate the risks of AI and unlock its true potential, a timely new guide for business leaders.

Governing the Machine

The authors, Dr Paul Dongha, Ray Eitel-Porter, and Miriam Vogel, argue that the drive to embrace AI must be matched by an equally urgent determination to govern it. Drawing on extensive experience advising global boardrooms, they cut through technical jargon to focus on the organisational realities of AI risk. Their step-by-step approach shows how companies can build responsible AI capability, adopting new systems effectively without waiting for perfect regulation or fully mature technology. That wait-and-see strategy, they warn, is a losing one: delay risks irrelevance, while reckless deployment invites legal and reputational harm. The evidence is already visible in a growing list of AI failures, from discriminatory algorithms in public services to generative models fabricating news or infringing intellectual property. These are not abstract technical flaws but concrete business risks with real-world consequences.

Whose problem is it anyway?

According to the authors, it is everyone's. The book forcefully argues that AI governance cannot be siloed within the technology department. It demands a cross-enterprise approach, with active leadership from the C-suite, legal counsel, Human Resources, privacy and information security teams, and frontline staff alike. Rather than just sounding the alarm, the book provides a practical framework for action. It guides readers through the steps of building a robust AI governance programme, including defining clear principles and policies, establishing accountability, and implementing crucial checkpoints. A core part of this framework is a clear-eyed look at the nine key risks organisations must manage: accuracy, fairness and bias, explainability, accountability, privacy, security, intellectual property, safety, and the impact on the workforce and environment. Each risk area is explained, and numerous controls that mitigate and manage these risks are listed, with ample references to allow the interested reader to follow up.

Organisations should carefully consider implementing a Governance, Risk and Compliance (GRC) system, which brings together all key aspects of AI governance. GRC systems are available both from large tech companies and from specialist vendors. A GRC system ties together all key components of AI governance, providing management with a single view of their deployed AI systems and a window into all stages of AI governance for systems under development. The book is populated with numerous case studies and interviews with senior executives from some of the largest and best-known organisations in the world that are grappling with AI risk management. The authors also navigate the complex and rapidly evolving global regulatory landscape. With the European Union implementing its comprehensive AI Act and the United States advancing a fragmented patchwork of state and federal rules, a strong, adaptable internal governance system is presented as the only viable path forward.

The EU AI Act, now in force with staggered compliance deadlines over the coming two years, requires all organisations operating within the EU to implement risk mitigation controls with evidence of compliance. A key date is August 2nd 2026, by which time all 'Hig...

Vitamine A | De podcast voor accountants
Vitamine A #66 | Reality Check: overleeft de accountant de AI-revolutie?

Vitamine A | De podcast voor accountants

Play Episode Listen Later Nov 14, 2025 38:28


In this special episode, tied to Accountantsdag 2025 and its theme Reality Check, Vitamine A again dives into the impact of artificial intelligence on the accounting profession. Three guests, three perspectives, and one big question: what does AI mean for the profession, the organisation, and the human being behind the accountant?

Mona de Boer (PwC, Responsible AI) describes how AI has become an everyday reality and why organisations must now decide which values they stand for. She discusses the significance of the EU AI Act and the rise of AI assurance as a new domain within trust in technology, stressing that the accountant is not losing ground but gaining importance.

Nart Wielaard takes the audience through the concept of the Zero Person Company, an experimental organisation run on agents instead of people. The experiment shows that AI cannot copy a human, but that processes can be designed in a fundamentally different way. The accountant plays a role there as coach, supervisor, and quality guardian of AI-driven processes.

With Marjan Heemskerk, the focus shifts to the daily practice of entrepreneurs. She sees AI taking over basic questions, but above all creating room for an accountant who interprets, thinks along, and provides context. Soft skills become crucial. The challenge for firms is to deploy AI responsibly, bring employees along, and at the same time resist the temptation of shortcuts.

The episode ends with a reality check that is both technological and human. AI changes a great deal, but the foundation of the accounting profession still stands: trust, independence, and the ability to interpret reality.

Vitamine A has covered AI before. Esther Kox, Hakan Koçak, and Nart Wielaard also speak at Accountantsdag 2025, on November 19, 2025.
Accountantsdag 2025: http://www.accountantsdag.nl
Vitamine A #63 | AI als assistent, niet als autoriteit... In gesprek met Esther Kox
Vitamine A #62 | AI op kantoor: Twijfelen of toepassen? Met Hakan Koçak
Vitamine A #43 | Betrouwbare AI en verantwoording. Hoe doe je dat? Met Mona de Boer (PwC)
Vitamine A #34 | Wat betekent AI voor accountants die op zoek zijn naar waarheid?

The Road to Accountable AI
Oliver Patel: Sharing Frameworks for AI Governance

The Road to Accountable AI

Play Episode Listen Later Nov 13, 2025 36:03


Oliver Patel has built a sizeable online following for his social media posts and Substack about enterprise AI governance, using clever acronyms and visual frameworks to distill insights from his experience at AstraZeneca, a major global pharmaceutical company. In this episode, he details his career journey from academic theory to government policy and now practical application, and offers insights for those new to the field. He argues that effective enterprise AI governance requires being pragmatic and picking your battles, since the role isn't to stop AI adoption but to enable organizations to adopt it safely and responsibly at speed and scale. He notes that core pillars of modern AI governance, such as AI literacy, risk classification, and maintaining an AI inventory, are incorporated into the EU AI Act and thus essential for compliance. Looking forward, Patel identifies AI democratization, governing AI when everyone in the workforce can use and build it, as the biggest hurdle, and offers thoughts about how enterprises can respond.

Oliver Patel is the Head of Enterprise AI Governance at AstraZeneca. Before moving into the corporate sector, he worked for the UK government as Head of Inbound Data Flows, where he focused on data policy and international data transfers, and was a researcher at University College London. He serves as an IAPP Faculty Member and a member of the OECD's Expert Group on AI Risk. His forthcoming book, Fundamentals of AI Governance, will be released in early 2026.

Links:
- Transcript
- Enterprise AI Governance Substack
- Top 10 Challenges for AI Governance Leaders in 2025 (Part 1)
- Fundamentals of AI Governance book page

Jorge Borges
A IA nas escolas irlandesas | Guia oficial

Jorge Borges

Play Episode Listen Later Nov 13, 2025 12:00


The guide, titled "Guidance on Artificial Intelligence in Schools", aims to help school leaders and teachers develop an understanding of AI in learning and school management, while also identifying the challenges and risks associated with its use. Specifically, the text establishes an ethical framework and principles for responsible use, addressing concerns such as privacy, security, and the need for human oversight. In addition, the guide offers an "AI Roadmap for Schools" built on a 4P approach (Purpose, Planning, Policies and Practice) and details current applications of AI in teacher preparation, teaching, and school leadership. The document is intended to be a "living document" that will be reviewed and updated regularly given the emerging nature of the technology, and it also mentions monitoring of the EU AI Act and collaboration with the State Examinations Commission (SEC).

VinciWorks
GDPR - What's next for data protection and compliance

VinciWorks

Play Episode Listen Later Nov 12, 2025 55:46


Over seven years since its introduction, the GDPR continues to evolve as new technologies, court rulings and regulatory guidance reshape how organisations handle personal data. In this episode, we bring you insights from our recent webinar, where experts unpacked the latest developments in GDPR and global data protection. With the EU AI Act now in force, shifting cross-border data frameworks, and regulators issuing record fines, compliance has never been more complex, or more crucial.

Tune in to learn:
- What recent GDPR fines reveal about regulator priorities
- How to navigate overlaps between AI regulation and data protection rules
- Best practices for managing EU–UK–US data transfers after new adequacy decisions
- How to address emerging risks around biometrics, children's data, and AI profiling
- Real-world case studies showing how organisations are adapting to change

This episode is a must-listen for data protection officers, compliance professionals and legal teams looking to strengthen governance, maintain trust, and stay ahead in a fast-moving regulatory landscape.

Zebras & Unicorns
AI Talk 51: Schulden für AI | XPeng-Roboter | Deep Dive AI Act & DSGVO | Gamma | Openmaind

Zebras & Unicorns

Play Episode Listen Later Nov 12, 2025 51:45


Diritto al Digitale
Legitimate Interest to Save AI? Europe's Bold GDPR Reform

Diritto al Digitale

Play Episode Listen Later Nov 11, 2025 8:18


The European Commission is preparing to codify "legitimate interest" as a lawful basis for AI training, a reform that could become the most significant update to the GDPR since 2018. In this episode, Giulio Coraggio, Technology & Data Lawyer at DLA Piper, explores how this proposal could reshape the legal foundations of AI development, bridging the gap between data protection and innovation.

The Value Pricing Podcast
Why You Can't Analyse Client Data (Unless You Do THIS)

The Value Pricing Podcast

Play Episode Listen Later Nov 10, 2025 34:45


Can you safely analyse client data with ChatGPT? Only if you do THIS one critical thing. We reveal a powerful use case for financial analysis, done in seconds. But most accountants are breaking the rules without realising it. GDPR, client consent, data security: it's all covered. Plus, the surprising fix that makes everything compliant. This could change how you use AI forever.

The latest episode of the Value Pricing Podcast is now available: Why You Can't Analyse Client Data (Unless You Do THIS)

In today's episode you will learn:
- How to analyse financial data with ChatGPT, safely and fast
- Why free and Plus plans break GDPR rules
- The #1 mistake accountants make with client data
- What the EU AI Act means for your firm
- How to get client consent and stay compliant
- A powerful workaround using local AI on your computer

Don't miss out on the essential AI insights every accountant needs to stay compliant, save time, and deliver smarter client reports. Listen now!

Portfolio Checklist
Mikor érdemes betárazni a magyar csúcsrészvényekből? Jelentett az OTP és a Mol

Portfolio Checklist

Play Episode Listen Later Nov 7, 2025 30:07


We dug into the quarterly reports from Mol and OTP and the latest figures behind them, which can give investors a handle on whether to think about buying or selling. Viktor Nagy, lead analyst at Portfolio, discussed the topic. The second half of the show focused on the EU AI Act: the European Commission would partially postpone the entry into force of the world's strictest AI regulation, after intense pressure on Brussels from the United States and big tech companies. We asked Dóra Petrányi, CMS managing director for the Central and Eastern European region, about the background to the decision and the obligations Hungarian companies may face under the AI Act.

Main segments:
Intro − (00:00)
Mol and OTP have reported: to buy or not to buy? − (02:26)
EU AI Act: a reprieve for Big Tech − (14:15)
Capital markets outlook − (25:44)

Image source: Getty Images
See omnystudio.com/listener for privacy information.

ServiceNow Podcasts
AI Regulations Explained: EU AI Act, Colorado Law, and NIST Framework

ServiceNow Podcasts

Play Episode Listen Later Nov 6, 2025 19:02


Join host Bobby Brill as he sits down with ServiceNow's AI legal and governance experts to break down the complex world of AI regulations. Andrea LaFountain (Director of AI Legal), Ken Miller (Senior Director of Product Legal), and Navdeep Gill (Staff Senior Product Manager, Responsible AI) explain how organizations can navigate the growing landscape of AI compliance.

In this episode, you'll learn about three major regulatory approaches: the risk-based EU AI Act, Colorado's algorithmic discrimination law, and the voluntary NIST framework. The experts discuss practical strategies for complying with multiple regulations simultaneously, using the EU AI Act as a baseline and measuring the delta for new requirements.

Key topics covered:
- Why proactive compliance matters before regulations fully take effect
- How AI Control Tower helps discover and manage AI systems across your enterprise
- The exponential math behind AI compliance (vendors, employees, third parties)
- Setting up governance policies for high-risk AI use cases
- Timeline for major compliance deadlines (Colorado June 2026, EU August 2026)
- The real costs of waiting for your first violation

Whether you're managing AI deployment, working in compliance, or trying to understand the regulatory landscape, this episode provides actionable insights on building responsible AI governance infrastructure.

Guests: Andrea LaFountain, Director, AI Legal; Ken Miller, Senior Director, Product Legal; Navdeep Gill, Staff Senior Product Manager, Responsible AI
Host: Bobby Brill

Chapters:
00:00 Introduction to AI and Regulations
00:45 Meet the Experts
01:52 Overview of Key AI Regulations
03:03 Compliance Strategies for AI Regulations
07:33 ServiceNow's AI Control Tower
14:02 Challenges and Risks in AI Governance
16:04 Future of AI Regulations
18:34 Conclusion and Final Thoughts

Irish Tech News Audio Articles
Irish Leaders Face The Cyber Stress Test, Amid Rising Talent, Training And Tech Supply Chain Disruptions

Irish Tech News Audio Articles

Play Episode Listen Later Nov 6, 2025 5:51


A majority of Irish organisations have enhanced cybersecurity measures in recent months, yet under-investment in key areas of training and compliance, ongoing talent shortages, and AI-powered cyber threats continue to be areas of concern for Irish cyber leaders. That's according to EY Ireland's inaugural Cyber Leaders Index, which surveyed 165 of Ireland's senior cyber leaders with a particular focus on the corporate, health and life sciences, and government sectors. 83% of Irish cyber leaders report enhancing cybersecurity measures over the past six months, with nearly a third (32%) noting an increase in budgets, while two thirds (67%) report investment holding steady. However, more than 70% of cyber leaders report difficulties securing budget for staff cyber awareness training, and 43% cited challenges in securing budget for hiring and retaining skilled personnel, which remains a key challenge. Nearly half (48%) of cyber leaders identified AI and data security as a top priority for the year ahead, and many organisations are adapting their practices in response to the EU AI Act. Yet 44% say they face challenges securing budget for AI-related security initiatives, suggesting that investment is not keeping pace with strategic intent. This may reflect internal competition for AI budgets rather than reluctance to invest in cybersecurity, and embedding cybersecurity into AI efforts positions the function as a driver of growth and advantage. Almost seven in ten (68%) respondents said that protecting against supply chain and vendor-related threats is a top priority within their cybersecurity programmes; however, only 4% identify third-party vendor risk as one of their main concerns.
Compliance with relevant regulations and data privacy laws such as NIS2 was cited as a priority by 39% of respondents, while the EU AI Act is also having an impact: nearly half (47%) of the leaders surveyed stated they have updated their data handling and monitoring practices, and four in ten (39%) have updated their data protection impact assessment systems. Puneet Kukreja, Technology Consulting Partner and Head of Cyber at EY Ireland, said: "In an AI-driven world where algorithms and code are reshaping both attacks and defences, cyber risk is no longer something to eliminate, it must be managed with precision. This shift demands that cyber leaders evolve from engineers and managers to architects of trust, with a seat and a voice at the top table where strategic decisions are made and budgets are shaped. Cyber threats are escalating, with major breaches reported almost every week, and it's clear that defences are only as strong as their weakest point. Yet investment is not always going where it matters most, with gaps in staff training and talent retention remaining areas of concern." Carol Murphy, Consulting Partner and Head of Markets at EY Ireland, said: "Irish organisations are strengthening their cyber resilience, with most reporting enhanced defences and stable or increased budgets. The challenge now is to direct that investment towards people and partnerships, ensuring teams are trained, supported and equipped to manage the growing demands of compliance and third-party risk. Organisations must prioritise the continuous training and wellbeing of their cyber teams, recognising that resilience depends as much on people as it does on technology."

Burnout Risk As Cyber Threats Remain A Top Concern
Burnout and fatigue amongst cyber leaders have been identified as growing resilience risks for Irish organisations, with 37% of those surveyed reporting concern about gaps in their organisation's cyber risk coverage.
More than one in four (26%) of respondents reported negative impacts on their mental health. Puneet Kukreja said: "Our research shows that stress is fast becoming a hidden cyber risk for organisations. Cyber risk is constant, and that unrelenting pressure is taking a toll on the people who defend against it. Burnout does...

Outgrow's Marketer of the Month
Snippet: AI Caramba! CEO Matthew Blakemore Warns That Strict EU AI Rules May Push Innovation to Looser Markets Before Tools Enter the EU.

Outgrow's Marketer of the Month

Play Episode Listen Later Nov 5, 2025 0:57


Künstliche Intelligenz
Klaus Müller: Why We Should Be Less Afraid of AI Regulation

Künstliche Intelligenz

Play Episode Listen Later Nov 5, 2025 36:52 Transcription Available


The president of the Bundesnetzagentur will soon also be responsible for the EU AI Act, and he warns against often unnecessary alarm: "There is a massive discrepancy between self-perception and what the AI Act actually requires."

GainTalents - Expertenwissen zu Recruiting, Gewinnung und Entwicklung von Talenten und Führungskräften
#425 How AI Is Used in Aptitude Diagnostics Today – with Prof. Dr. Florian Feltes

GainTalents - Expertenwissen zu Recruiting, Gewinnung und Entwicklung von Talenten und Führungskräften

Play Episode Listen Later Nov 4, 2025 46:14


Note (a word of self-promotion): My new book (co-authored with Prof. Dr. Johanna Bath), "Die perfekte Employee Journey & Experience" (published October 2025), is available now: Springer: https://link.springer.com/book/9783662714195 Amazon: https://bit.ly/44aajaP Thalia: https://www.thalia.de/shop/home/artikeldetails/A1074960417 This book presents the most important elements of the employee journey, from pre-boarding to offboarding, and explains how those responsible in companies can realize a successful employee experience and anchor it sustainably.

My guest: Prof. Dr. Florian Feltes (CEO & co-founder of Zortify, founding professor of Digital HR and Leadership at the XU Exponential University of Applied Sciences in Potsdam). Prof. Dr. Florian Feltes is CEO & co-founder of Zortify, a multiple award-winning company that is setting new standards in personnel selection, team development, and leadership with AI-supported HR diagnostics. His mission: to enable HR professionals to make better, and at the same time more human, decisions in recruiting and people development using data-based insights. Alongside his work as an entrepreneur, Florian is founding professor of Digital HR and Leadership at the XU Exponential University of Applied Sciences in Potsdam. The connection between business and science runs like a common thread through his work, particularly in diagnostics, where scientific rigor and practical application are inseparably linked. With his book "Revolution? Ja, bitte. Wenn Old-School-Führung auf New-Work-Leadership trifft" he has shown impressively how much leadership culture must change in order to truly live transformation, diversity, and empowerment. In today's podcast episode, I discuss with Prof. Dr. Florian Feltes what influence AI-supported aptitude diagnostics can have on this.

Topic: In GainTalents podcast episode 425, I spoke with Prof. Dr. Florian Feltes about AI in aptitude diagnostics. As co-founder of Zortify, Florian offers an exciting solution in this area, and we discussed the Zortify method and other aspects of AI in aptitude diagnostics. Many thanks to Florian for this excellent conversation and for the very good insights on the topic.

What can modern, AI-supported aptitude diagnostics offer today?
- fairer and more efficient than common methods
- fully compliant with the EU AI Act
- self-assessment via an online questionnaire (multiple-choice questions as well as questions answered in free text)
- only data actively provided by participants are used (no other data are available to or analyzed by the AI)
- evaluated are: Big Five personality traits; entrepreneurial capital (optimism, resilience, self-efficacy, agility mindset); counterproductive behavioral tendencies (dark triad: impulsivity, tactical-manipulative behavior, self-centeredness)
- what is measured, and how, is decisive: social desirability bias is neutralized by combining fixed-choice questions with free-text answers
- ideally, in recruiting, diagnostics come first and the in-depth interview follows

#KI #AI #HRTech #PeopleAnalytics #Eignungsdiagnostik #DataDrivenHR #Recruiting #Talententwicklung #NewLeadership #CandidateExperience #EmployeeExperience #HRInnovation #GainTalentspodcast

Links Prof. Dr. Florian Feltes:
LinkedIn: https://www.linkedin.com/in/florianfeltes/
Website: https://zortify.com/de/
Harvard Business Manager article: https://hubs.ly/Q03QJdnM0
OMR Reviews Zortify: https://omr.com/de/reviews/product/zortify

Links Hans-Heinz Wisotzky:
Website: https://www.gaintalents.com/podcast and https://www.gaintalents.com/blog
Podcast: https://www.gaintalents.com/podcast
Books: New (available since October 2025): Die perfekte Employee Journey und Experience https://link.springer.com/book/9783662714195
First book: Die perfekte Candidate Journey und Experience https://www.gaintalents.com/buch-die-perfekte-candidate-journey-und-experience
LinkedIn https://www.linkedin.com/in/hansheinzwisotzky/
LinkedIn https://www.linkedin.com/company/gaintalents
XING https://www.xing.com/profile/HansHeinz_Wisotzky/cv
Facebook https://www.facebook.com/GainTalents
Instagram https://www.instagram.com/gain.talents/
Youtube https://bit.ly/2GnWMFg

Diritto al Digitale
When AI Becomes “Medical”: Legal Responsibilities at the Frontier of Psychological Support Technologies

Diritto al Digitale

Play Episode Listen Later Nov 4, 2025 11:04


AI solutions for psychological support are crossing into the territory of medical devices — raising complex legal questions about qualification, liability, data protection, and regulatory oversight. In this episode, Giulio Coraggio, head of the Italian Intellectual Property & Technology department of the global law firm DLA Piper, explores how law, technology, and ethics intersect when artificial intelligence steps into the domain of mental health, also addressing qualification as a medical device and the impact of the EU AI Act.

It's Cyber Up North
Episode 35: AI and Cyber Security — How the EU AI Act is Changing the Game

It's Cyber Up North

Play Episode Listen Later Nov 4, 2025 24:57


In this episode of It's Cyber Up North, we are live from CyberFest 2025 exploring the hot topic of AI and cyber security, and how the EU AI Act is changing the game. Jon Holden (CyberNorth) is joined by Lucy Batley, owner of Traction Industries and an absolute powerhouse recognised on the AI 100 UK List 2025, which celebrates individuals and organisations driving responsible, ethical and impactful use of Artificial Intelligence across the UK. Tune in as the duo take a deep dive into AI and cyber.

AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store
⚖️ AI Liability, Litigation and Proactive Governance: Preparing for the Legal Risk Landscape

AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store

Play Episode Listen Later Nov 3, 2025 22:07


Welcome to AI Unraveled, your daily briefing on the real-world business impact of AI. Tune in at https://podcasts.apple.com/us/podcast/ai-liability-litigation-and-proactive-governance/id1684415169?i=1000735013941
Today, we pivot from deployment to defense. The autonomous capabilities of generative AI—from hallucinating content to designing novel drugs—have created a legal risk landscape that challenges every traditional doctrine of corporate liability, from foreseeability to product liability. This is no longer an ethics debate; it's a litigation ticking clock. In this essential special episode, we dissect the burgeoning regulatory dichotomy: the comprehensive, risk-based approach of the EU AI Act versus the fragmented, litigation-led system of the United States. We will analyze the central conflict of copyright law and the high-stakes lawsuits over training data, and we will equip you to defend against the threat of defamation by hallucination. But first, a crucial message for the enterprise builders:

Microsoft Business Applications Podcast
Copilot Success Starts with Clean Data

Microsoft Business Applications Podcast

Play Episode Listen Later Nov 2, 2025 33:44 Transcription Available


ILTA
#0134: (JIT ) ILTA Just-In-Time: What You Need to Know About New Regulations Governing AI in HR

ILTA

Play Episode Listen Later Oct 28, 2025 26:41


In this podcast, discover how best to navigate California's new employment AI regulations, which went into effect on October 1st. The speaker highlights how the use of Automated Decision Systems, which include AI, in employment decisions can directly violate California law if these tools are found to discriminate against employees or applicants, directly or indirectly, on the basis of protected characteristics such as race, age, and gender. She also highlights other recent AI regulations taking effect around the world, such as the EU AI Act.
Moderator: Adam Wehler, Director of eDiscovery and Litigation Technology, Smith Anderson
Speaker: Kassi Burns, Senior Attorney, Trial and Global Disputes, King & Spalding

THE Presentations Japan Series by Dale Carnegie Training Tokyo, Japan

Before you build slides, get crystal clear on who you're speaking to and why you're speaking at all. From internal All-Hands to industry chambers and benkyōkai study groups in Japan, the purpose drives the structure, the tone, and the proof you choose.

What's the real purpose of a business presentation?
Your presentation exists to create a specific outcome for a specific audience—choose the outcome first. Whether you need to inform, convince, persuade to action, or entertain enough to keep attention, the purpose becomes your design brief. In 2025's attention-scarce workplace—Tokyo to Sydney to New York—audiences bring "Era of Cynicism" energy, so clarity of intent is non-negotiable. Choose the one primary verb your talk must deliver (inform/convince/persuade/entertain) and align evidence, tone, and timing to that verb for executives, SMEs, and multinationals alike. Use decision criteria (see checklist below) before you touch PowerPoint or Keynote.
Do now: Write "The purpose of this talk is to ___ for ___ by ___." Tape it above your keyboard.

How do I define my audience before I write a single slide?
Profile the room first; the content follows. Map role seniority (board/C-suite vs. managers), cultural context (Japan vs. US/Europe norms), and decision horizon (today vs. next quarter). In Japan, executives prefer evidence chains and respect for hierarchy; in US tech startups, crisp bottom lines and next steps often win. For internal Town Halls, keep jargon minimal and tie metrics to team impact; for external industry forums, cite research, case studies, and trend lines from recognisable entities (Dale Carnegie, Toyota, Rakuten). Once you know the level, you can calibrate depth, vocabulary, and the "so what" that matters to them. Skip this step and you'll either drown them in detail or sound vague.
Do now: Write three bullets: "They care about…," "They already know…," "They must decide…".

Inform, convince, persuade, or entertain—how do I choose?
Pick one dominant mode and let the others support it. Inform for internal/industry updates rich in stats, expert opinion, and research (think "Top Five Trends 2025" with case studies). Limit the "data dump"—gold in the main talk, silver/bronze in Q&A. Convince/Impress when credibility is on the line; your delivery quality now represents the whole organisation. Persuade/Inspire when behaviour must change—leaders need this most. Entertain doesn't mean stand-up; it means energy, story beats, and occasional humour you've tested. Across APAC, Europe, and the US, the balance shifts by culture and sector (B2B vs. consumer), but the discipline—one primary purpose—does not.
Do now: Circle the mode that matches your outcome; design every section to serve it.

How do I stop the "data dump" and choose the right evidence?
Curate like a prosecutor: fewer exhibits, stronger case. Open with a bold answer, then prove it with 2–3 high-leverage data points (trend, benchmark, case). Anchor time ("post-pandemic," "as of 2025") and entities (Nikkei index moves, METI guidance, EU AI Act, industry frameworks) to help AI search and humans connect dots. Keep detailed tables for the appendix or Q&A; in the main flow, show only what advances your single purpose. This approach works for multinationals reporting quarterly KPIs and for SMEs pitching a new budget. Variant phrases (metrics, numbers, stats, proof, evidence) boost retrievability without breaking flow.
Do now: Delete one slide for every two you keep—then rehearse the proof path out loud.

How do leaders actually inspire action in 2025?
Pair delivery excellence with relevance—then make the ask unmistakable. Inspiration is practical when urgency, consequence, and agency meet. Churchill's seven-word charge—"Never, ever ever ever ever give up"—worked because context (1941 Europe), clarity, and cadence aligned; your 2025 equivalent might be "Ship it safely this sprint" or "Call every lapsed client this week." In Japan's post-2023 labour reforms, tie actions to work-style realities; in US/Europe, link to quarterly OKRs and risk controls. Leaders at firms like Toyota and Rakuten model the ask, specify the first step, and remove friction. Finish with a one-page action checklist and a deadline.
Do now: State the concrete next action, owner, and timebox—then say it again at the close.

What's the right design order—openings first or last?
Design the closes first (Close #1 and Close #2), build the body, then craft the opening last. The close is the destination; design it before you chart the route. Create two closes: the "time-rich" version and a "compressed" version in case you run short. Build the body to earn those closes with evidence and examples. Only then write your opening—short, audience-hooked, and purpose-aligned. This reverse-engineering avoids rambling intros and ensures your opener previews exactly what you'll deliver. It's a proven workflow for internal All-Hands, marketing spend reviews, and external keynotes alike.
Do now: Write Close #1 and Close #2 in full sentences before touching the first slide.

How do I structure my content for AI-driven search engines (SGE, Perplexity, ChatGPT, Copilot)?
Lead with answer-first headings, dense entities, and time anchors in each section. Use conversational query subheads ("How do I…?"), open with a bold one-to-two-sentence answer, then a tight paragraph with comparisons (Japan vs. US/Europe), sectors (B2B vs. consumer), and named organisations. End with a mini-summary or "Do now." Keep sections 120–150 words. Add synonyms (metrics/numbers/KPIs) and timeframe tags ("as of 2025"). This GEO pattern boosts retrievability while staying human. Use it for transcripts, blogs, and
Do now: Convert your next talk into six answer-first sections using this exact template.

Quick checklist (decision criteria)
- Audience level, culture, and decision horizon defined
- Single dominant purpose chosen
- Gold evidence only in-flow; silver/bronze parked for Q&A
- Two closes drafted; opening written last
- Clear call-to-action with owner + deadline

Conclusion
Choose your purpose, curate your proof, and architect your flow backwards from the close. Do that, and you'll inform, convince, and—when needed—inspire action, whether you're presenting in Tokyo, Sydney, or Seattle.

Dr. Greg Story, Ph.D. in Japanese Decision-Making, is President of Dale Carnegie Tokyo Training and Adjunct Professor at Griffith University. He is a two-time winner of the Dale Carnegie "One Carnegie Award" (2018, 2021) and recipient of the Griffith University Business School Outstanding Alumnus Award (2012). A Dale Carnegie Master Trainer, Greg delivers globally across leadership, communication, sales, and presentation programs. He is the author of best-sellers Japan Business Mastery, Japan Sales Mastery, and Japan Presentations Mastery, plus Japan Leadership Mastery and How to Stop Wasting Money on Training; Japanese editions include ザ営業, プレゼンの達人, and 現代版「人を動かす」リーダー. He publishes daily insights and hosts multiple podcasts and YouTube shows for executives succeeding in Japan.
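The 120–150-word GEO rule above is easy to check mechanically. A minimal sketch, assuming sections are a question-style heading followed by body text; the function names and the sample document are illustrative, not a tool mentioned in the article:

```python
# Minimal sketch: flag GEO sections whose body falls outside the
# 120-150 word target discussed above. A "section" here is a heading
# line ending in "?" followed by its body lines. Names are illustrative.

def section_lengths(text):
    """Map each question-style heading to the word count of its body."""
    sections = {}
    heading = None
    body = []
    for line in text.splitlines():
        line = line.strip()
        if line.endswith("?"):          # treat query-style lines as headings
            if heading is not None:
                sections[heading] = len(" ".join(body).split())
            heading, body = line, []
        elif line:
            body.append(line)
    if heading is not None:
        sections[heading] = len(" ".join(body).split())
    return sections

def out_of_range(sections, low=120, high=150):
    """Return headings whose body misses the target word range."""
    return [h for h, n in sections.items() if not low <= n <= high]

doc = """How do I open?
""" + " ".join(["word"] * 130) + """
How do I close?
too short"""

print(out_of_range(section_lengths(doc)))  # -> ['How do I close?']
```

Run over a draft transcript or blog post, this kind of check turns the "keep sections 120–150 words" guideline into a quick pre-publish lint step.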

Outgrow's Marketer of the Month
Snippet: Matthew Blakemore, CEO at AI Caramba!, highlights a pressing challenge with the EU AI Act.

Outgrow's Marketer of the Month

Play Episode Listen Later Oct 24, 2025 0:49


⚖️ The EU AI Act's Biggest Hurdle: Regulating AI That's Already in Use
Matthew Blakemore, CEO at AI Caramba!, highlights a pressing challenge with the EU AI Act. While the framework does a strong job of classifying AI projects into risk categories, it faces a dilemma with tools that are already in widespread public use. Many existing systems, some of which likely fall into high-risk categories, have already been trained and adopted by millions. The question becomes: should they be withdrawn, despite their popularity, or adapted under new rules? Listen to the full podcast now: https://bit.ly/40GZ9bw #AI #ArtificialIntelligence #AIRegulation #TechPolicy #AICompliance #AIInnovation #AITransformation

The Road to Accountable AI
Caroline Louveaux: Trust is Mission Critical

The Road to Accountable AI

Play Episode Listen Later Oct 23, 2025 33:13


Kevin Werbach speaks with Caroline Louveaux, Chief Privacy, AI, and Data Responsibility Officer at Mastercard, about what it means to make trust mission critical in the age of artificial intelligence. Caroline shares how Mastercard built its AI governance program long before the current AI boom, grounding it in the company's "Data and Technology Responsibility Principles". She explains how privacy-by-design practices evolved into a single global AI governance framework aligned with the EU AI Act, the NIST AI Risk Management Framework, and other standards. The conversation explores how Mastercard balances innovation speed with risk management, automates low-risk assessments, and maintains executive oversight through its AI Governance Council. Caroline also discusses the company's work on agentic commerce, where autonomous AI agents can initiate payments, and why trust, certification, and transparency are essential for such systems to succeed. Caroline unpacks what it takes for a global organization to innovate responsibly, from cross-functional governance and "tone from the top" to partnerships like the Data & Trust Alliance and efforts to harmonize global standards. She emphasizes that responsible AI is a shared responsibility and that companies that can "innovate fast, at scale, but also do so responsibly" will be the ones that thrive. Caroline Louveaux leads Mastercard's global privacy and data responsibility strategy. She has been instrumental in building Mastercard's AI governance framework and shaping global policy discussions on data and technology. She serves on the board of the International Association of Privacy Professionals (IAPP), the WEF Task Force on Data Intermediaries, the ENISA Working Group on AI Cybersecurity, and the IEEE AI Systems Risk and Impact Executive Committee, among other activities.
Transcript
How Mastercard Uses AI Strategically: A Case Study (Forbes, 2024)
Lessons From a Pioneer: Mastercard's Experience of AI Governance (IMD, 2023)
As AI Agents Gain Autonomy, Trust Becomes the New Currency. Mastercard Wants to Power Both. (Business Insider, July 2025)

The HR L&D Podcast
How AI is Making HR More Human with Daniel Strode

The HR L&D Podcast

Play Episode Listen Later Oct 21, 2025 42:25


This episode is sponsored by Deel. Ensure fair, consistent reviews with Deel's calibration template. Deel's free Performance Calibration Template helps HR teams and managers run more equitable, structured reviews. Use it to align evaluations with business goals, reduce bias in ratings, and ensure every performance conversation is fair, consistent, and grounded in shared standards. Download now: www.deel.com/nickday
In this episode of the HR L&D Podcast, host Nick Day explores how HR can use AI to become more strategic and more human. The conversation covers where AI truly fits in HR, what changes with the EU AI Act, and how leaders can turn time saved on admin into culture, capability, and impact. You will hear practical frameworks, including a simple "4Ps plus 2" model for HR AI, human-in-the-loop hiring, guardrails to reduce hallucinations, and a clear view on when AI must be 100 percent accurate. The discussion also outlines a modern HR operating model with always-on self-service, plus policy steps for ethical, explainable AI. Whether you are an HR leader, CEO, or L&D professional, this conversation will help you move from pilots to scaled adoption and build an AI-ready organization. Expect actionable steps to improve employee experience, strengthen compliance, and unlock productivity and performance across your teams.
100X Book on Amazon: https://www.amazon.com/dp/B0D41BP5XT
Nick Day's LinkedIn: https://www.linkedin.com/in/nickday/
Find your ideal candidate with our job vacancy system: https://jgarecruitment.ck.page/919cf6b9ea
Sign up to the HR L&D Newsletter - https://jgarecruitment.ck.page/23e7b153e7
00:00 Intro & Preview
02:25 What HR Is For
03:54 Why HR + AI Now
06:19 AI as Augmentation
07:43 HR AI Framework & Use Cases
10:14 Guardrails: Hallucinations & Accuracy
12:45 Guardrails: Bias & Human in the Loop
16:58 Recruiting with AI
21:01 EU AI Act for HR
25:16 HR Team of the Future
25:56 New HR Operating Model
31:54 Tools for Culture Change
35:35 Rethink Processes

anseo's podcast
Guidance on Artificial Intelligence in Schools

anseo's podcast

Play Episode Listen Later Oct 21, 2025 13:20


It was too obvious not to do it. Let AI summarise the Department of Education's guidance. Sure, while I'm at it, I may as well use AI to create the show notes:

Explore the safe, ethical, and responsible use of AI for primary educators and school leaders. We share practical examples, such as how a second class teacher can use Generative AI (GenAI) to create curriculum-aligned math activities, or how a fifth class teacher uses GenAI for visual support in Irish lessons. Learn strategies for integrating AI, including the essential 4P framework (Purpose, Planning, Policies, Practice). Remember to maintain human oversight and review all AI outputs for accuracy and bias. Resources like the DALI4US project support data literacy for primary teachers.

AI in Banking Podcast
The Role of AI in Risk Management and Compliance - with Miriam Fernandez and Sudeep Kesh at S&P Global Ratings

AI in Banking Podcast

Play Episode Listen Later Oct 20, 2025 28:49


As financial services accelerate their digital transformations, AI is reshaping how institutions identify, assess, and manage risk. But with that transformation comes an equally complex web of systemic risks, regulatory challenges, and questions about accountability. In this episode of the AI in Business podcast, host Matthew DeMello, Head of Content at Emerj, speaks with Miriam Fernandez, Director in the Analytical Innovation Team specializing in AI research at S&P Global Ratings, and Sudeep Kesh, Chief Innovation Officer at S&P Global Ratings. Together, they unpack how generative AI, agentic systems, and regulatory oversight are evolving within one of the most interconnected sectors of the global economy. The conversation explores how AI is amplifying both efficiency and exposure across financial ecosystems — from the promise of multimodal data integration in risk management to the growing challenge of concentration and contagion risks in increasingly digital markets. Miriam and Sudeep discuss how regulators are responding through risk-based frameworks such as the EU AI Act and DORA, and how the private sector is taking a larger role in ensuring transparency, compliance, and trust. Want to share your AI adoption story with executive peers? Click emerj.com/e2 for more information and to be a potential future guest on Emerj's flagship ‘AI in Business' podcast! If you've enjoyed or benefited from some of the insights of this episode, consider leaving us a five-star review on Apple Podcasts, and let us know what you learned, found helpful, or liked most about this show!

The FIT4PRIVACY Podcast - For those who care about privacy
Why does EU AI Act matter in your business?

The FIT4PRIVACY Podcast - For those who care about privacy

Play Episode Listen Later Oct 16, 2025 1:59


In this episode of the Fit4Privacy Podcast, host Punit Bhatia explores the EU AI Act: why it matters, what it requires, and how it impacts your business, even outside the EU. You will also hear about the Act's risk-based approach, the four categories of AI systems (unacceptable, high, limited, and minimal risk), and the penalties for non-compliance, which can be as high as 7% of global turnover or €35 million.

Just like GDPR, the EU AI Act has global reach, so if your company offers AI-based products or services to EU citizens, it applies to you. Listen in to understand the requirements and discover how to turn AI compliance into an opportunity for building trust, demonstrating responsibility, and staying ahead of the competition.

KEY CONVERSATION
00:00:00 Introduction to the EU AI Act
00:01:22 Why the EU AI Act Matters to Your Business
00:03:40 Risk Categories Under the EU AI Act
00:04:52 Key Timelines and Provisions
00:06:07 Compliance Requirements
00:07:09 Leveraging the EU AI Act for Competitive Advantage
00:08:38 Conclusion and Contact Information

ABOUT HOST
Punit Bhatia is one of the leading privacy experts who works independently and has worked with professionals in over 30 countries. Punit works with business and privacy leaders to create an organization culture with high privacy awareness and compliance as a business priority. Selectively, Punit is open to mentor and coach professionals.

Punit is the author of the books "Be Ready for GDPR", which was rated the best GDPR book, "AI & Privacy – How to Find Balance", "Intro To GDPR", and "Be an Effective DPO". Punit is a global speaker who has spoken at over 30 global events. Punit is the creator and host of the FIT4PRIVACY Podcast, which has been featured among top GDPR and privacy podcasts.

As a person, Punit is an avid thinker and believes in thinking, believing, and acting in line with one's values to have joy in life. He has developed a philosophy named 'ABC for joy of life', which he passionately shares. Punit is based in Belgium, the heart of Europe.

RESOURCES
Websites: www.fit4privacy.com, www.punitbhatia.com
Podcast: https://www.fit4privacy.com/podcast
Blog: https://www.fit4privacy.com/blog
YouTube: http://youtube.com/fit4privacy

Social Justice & Activism · The Creative Process
Will AI Lead to a More Fair Society, Or Just Widen Inequities? - RISTO UUK Head of EU Policy & Research, FUTURE OF LIFE INSTITUTE

Social Justice & Activism · The Creative Process

Play Episode Listen Later Oct 14, 2025 62:34


“The Future of Life Institute has been working on AI governance-related issues for the last decade. We're already over 10 years old, and our mission is to steer very powerful technology away from large-scale harm and toward very beneficial outcomes. You could think about any kind of extreme risks from AI, all the way to existential or extinction risk, the worst kinds of risks and the benefits. You can think about any kind of large benefits that humans could achieve from technology, all the way through to utopia, right? Utopia is the biggest benefit you can get from technology. Historically, that has meant we have focused on climate change, for example, and the impact of climate change. We have also focused on bio-related risks, pandemics and nuclear security issues. If things go well, we will be able to avoid these really bad downsides in terms of existential risk, extinction risks, mass surveillance, and really disturbing futures. We can avoid that very harmful side of AI or technology, and we can achieve some of the benefits.”

Today, we take a closer look at the future of artificial intelligence and the policies that determine its place in our societies. Risto Uuk is Head of EU Policy and Research at the Future of Life Institute in Brussels, and a philosopher and researcher at KU Leuven, where he studies the systemic risks posed by AI. He has worked with the World Economic Forum, the European Commission, and leading thinkers like Stuart Russell and Daniel Susskind. He also runs one of the most widely read newsletters on the EU AI Act. As this technology is transforming economies, politics, and human life itself, we'll talk about the promises and dangers of AI, how Europe is trying to regulate it, and what it means to build safeguards for a technology that may be more powerful than anything we've seen before.

Episode Website: www.creativeprocess.info/pod
Instagram: @creativeprocesspodcast

Tech, Innovation & Society - The Creative Process
AI & The Future of Life with RISTO UUK, Head of EU Policy & Research, FUTURE OF LIFE INSTITUTE

Tech, Innovation & Society - The Creative Process

Play Episode Listen Later Oct 14, 2025 62:34


“The Future of Life Institute has been working on AI governance-related issues for the last decade. We're already over 10 years old, and our mission is to steer very powerful technology away from large-scale harm and toward very beneficial outcomes. You could think about any kind of extreme risks from AI, all the way to existential or extinction risk, the worst kinds of risks and the benefits. You can think about any kind of large benefits that humans could achieve from technology, all the way through to utopia, right? Utopia is the biggest benefit you can get from technology. Historically, that has meant we have focused on climate change, for example, and the impact of climate change. We have also focused on bio-related risks, pandemics and nuclear security issues. If things go well, we will be able to avoid these really bad downsides in terms of existential risk, extinction risks, mass surveillance, and really disturbing futures. We can avoid that very harmful side of AI or technology, and we can achieve some of the benefits.”

Today, we take a closer look at the future of artificial intelligence and the policies that determine its place in our societies. Risto Uuk is Head of EU Policy and Research at the Future of Life Institute in Brussels, and a philosopher and researcher at KU Leuven, where he studies the systemic risks posed by AI. He has worked with the World Economic Forum, the European Commission, and leading thinkers like Stuart Russell and Daniel Susskind. He also runs one of the most widely read newsletters on the EU AI Act. As this technology is transforming economies, politics, and human life itself, we'll talk about the promises and dangers of AI, how Europe is trying to regulate it, and what it means to build safeguards for a technology that may be more powerful than anything we've seen before.

Episode Website: www.creativeprocess.info/pod
Instagram: @creativeprocesspodcast

Outgrow's Marketer of the Month
Snippet: Matthew Blakemore CEO at AI Caramba! on AI Regulation & Industry Motives

Outgrow's Marketer of the Month

Play Episode Listen Later Oct 6, 2025 0:43


Matthew Blakemore, CEO at AI Caramba!, reflects on the recent debates around AI regulation with a healthy dose of scepticism. He highlights how companies like OpenAI, having already secured a competitive advantage through their vast training data, may now be pushing for regulation not purely from an ethical standpoint, but as a way to protect their lead and limit competition.

He acknowledges the importance of frameworks like the upcoming EU AI Act, which will play a critical role in shaping how AI is built and deployed in the future. The real test is ensuring safety and accountability without stifling innovation and fair competition.

Listen to the full podcast now: https://bit.ly/40GZ9bw

#AI #EUAIAct #MatthewBlakemore #TechRegulation #ArtificialIntelligence #Outgrow

Education · The Creative Process
AI & The Future of Life with RISTO UUK, Head of EU Policy & Research, FUTURE OF LIFE INSTITUTE

Education · The Creative Process

Play Episode Listen Later Oct 2, 2025 62:34


“The Future of Life Institute has been working on AI governance-related issues for the last decade. We're already over 10 years old, and our mission is to steer very powerful technology away from large-scale harm and toward very beneficial outcomes. You could think about any kind of extreme risks from AI, all the way to existential or extinction risk, the worst kinds of risks and the benefits. You can think about any kind of large benefits that humans could achieve from technology, all the way through to utopia, right? Utopia is the biggest benefit you can get from technology. Historically, that has meant we have focused on climate change, for example, and the impact of climate change. We have also focused on bio-related risks, pandemics and nuclear security issues. If things go well, we will be able to avoid these really bad downsides in terms of existential risk, extinction risks, mass surveillance, and really disturbing futures. We can avoid that very harmful side of AI or technology, and we can achieve some of the benefits.”

Today, we take a closer look at the future of artificial intelligence and the policies that determine its place in our societies. Risto Uuk is Head of EU Policy and Research at the Future of Life Institute in Brussels, and a philosopher and researcher at KU Leuven, where he studies the systemic risks posed by AI. He has worked with the World Economic Forum, the European Commission, and leading thinkers like Stuart Russell and Daniel Susskind. He also runs one of the most widely read newsletters on the EU AI Act. As this technology is transforming economies, politics, and human life itself, we'll talk about the promises and dangers of AI, how Europe is trying to regulate it, and what it means to build safeguards for a technology that may be more powerful than anything we've seen before.

Episode Website: www.creativeprocess.info/pod
Instagram: @creativeprocesspodcast

Business of Tech
Navigating AI Governance: Trust, Accountability, and the Future of Responsible Tech

Business of Tech

Play Episode Listen Later Sep 27, 2025 45:54


Art Kleiner, co-author of "The AI Dilemma" and Principal at Kleiner Powell International, discusses the complexities of AI governance, trust, and accountability in the context of modern technology. He emphasizes the importance of being intentional about risk when deploying AI products, particularly large language models, which can inadvertently perpetuate biases and misinformation. Kleiner shares a compelling example of a Chinese AI system that failed to generate accurate images based on user requests, illustrating the inherent biases present in AI systems. He stresses the need for organizations to be aware of the human effects and unintended consequences of AI deployment.

For managed service providers (MSPs) and IT leaders, Kleiner highlights the significance of compliance and oversight in the development process of AI systems. He references the EU AI Act, which mandates a "human in the loop" approach to ensure accountability and effectiveness in AI applications. This requirement encourages organizations to conduct thorough testing and evaluation of AI systems in real-world contexts, ensuring that they meet the needs of users and mitigate potential risks. Kleiner notes that small businesses, in particular, must be vigilant about the implications of AI for their operations and customer interactions.

The conversation also delves into the challenges of achieving measurable ROI from AI projects, with studies indicating that a significant percentage of these initiatives fail to deliver tangible business value. Kleiner advocates for scenario planning as a tool to navigate the uncertainties of AI implementation, encouraging organizations to explore various future scenarios and their potential impacts. By understanding the different ways AI can affect productivity, business growth, and risk management, companies can better position themselves to leverage AI effectively.

Finally, Kleiner urges leaders to prepare for multiple AI futures by staying informed about emerging technologies and their implications for their businesses. He emphasizes the need for organizations to build trust with their customers by using AI responsibly and transparently. By focusing on creating value and avoiding the pitfalls of "enshittification," businesses can foster stronger relationships with their clients and enhance their overall service offerings. The discussion underscores the critical role of human insight and ethical considerations in the evolving landscape of AI technology.

All our Sponsors: https://businessof.tech/sponsors/

Do you want the show on your podcast app or the written versions of the stories? Subscribe to the Business of Tech: https://www.businessof.tech/subscribe/

Looking for a link from the stories? The entire script of the show, with links to articles, is posted in each story on https://www.businessof.tech/

Support the show on Patreon: https://patreon.com/mspradio/

Want to be a guest on Business of Tech: Daily 10-Minute IT Services Insights? Send Dave Sobel a message on PodMatch, here: https://www.podmatch.com/hostdetailpreview/businessoftech

Want our stuff? Cool Merch? Wear "Why Do We Care?" - Visit https://mspradio.myspreadshop.com

Follow us on:
LinkedIn: https://www.linkedin.com/company/28908079/
YouTube: https://youtube.com/mspradio/
Facebook: https://www.facebook.com/mspradionews/
Instagram: https://www.instagram.com/mspradio/
TikTok: https://www.tiktok.com/@businessoftech
Bluesky: https://bsky.app/profile/businessof.tech

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

The FIT4PRIVACY Podcast - For those who care about privacy
Govern and Manage AI to Create Trust with Mark Thomas and Punit Bhatia in the FIT4PRIVACY Podcast E147 S06

The FIT4PRIVACY Podcast - For those who care about privacy

Play Episode Listen Later Sep 11, 2025 32:46


Do you want to use AI without losing trust? What frameworks help build trust and manage AI responsibly? Can we really create trust while using AI?

In this episode of the FIT4PRIVACY Podcast, host Punit Bhatia and digital trust expert Mark Thomas explain how to govern and manage AI in ways that build real trust with customers, partners, and society.

This episode breaks down what it means to use AI responsibly and how strong governance can help avoid risks. You'll also learn about key frameworks like ISO 42001, the EU AI Act, and the World Economic Forum's Digital Trust Framework, and how they can guide your AI practices.

Mark and Punit also talk about how organizational culture, company size, and leadership affect how AI is used, and how trust is built (or lost). They discuss real-world tips for making AI part of your existing business systems, and how to make decisions that are fair, explainable, and trustworthy.

EUVC
E574 | Paul Morgenthaler, CommerzVentures: Why CVC Works Best with a Single LP

EUVC

Play Episode Listen Later Sep 10, 2025 43:08


In this episode, Andreas Munk Holm and Jeppe Høier sit down with Paul Morgenthaler, Partner at CommerzVentures, to unpack the inner workings of a single-LP CVC and how strategic structure can drive long-term VC success. Paul shares insights from over a decade of fintech investing, offering a rare look into how one of Europe's leading corporate venture arms thinks about climate, compliance, and the coming wave of agentic AI in financial services.They explore what it takes to make a single-LP model work, how GenAI is reshaping fintech workflows, and why European regulation may be a global feature, not a bug.

AI in Banking Podcast
How International AI Safety Efforts Are Shaping the Future of Governance - with Charleyne Biondi of Moody's

AI in Banking Podcast

Play Episode Listen Later Sep 8, 2025 25:44


Today's guest on the ‘AI in Financial Services' podcast is Charleyne Biondi, Associate Vice President of Moody's Ratings in the Digital Economy Team. Charleyne returns to the program to share her perspective on the rapidly evolving landscape of AI regulation, comparing the EU AI Act, the US sector-specific approach, and emerging international frameworks. She outlines how regulatory divergence is shaping adoption, trust, and compliance costs for companies operating globally. Charleyne also emphasizes the risks of regulatory fragmentation in the US, where state-level laws often impose requirements as stringent as Europe's.

Want to share your AI adoption story with executive peers? Click emerj.com/e2 for more information and to be a potential future guest on Emerj's flagship ‘AI in Business' podcast!

If you've enjoyed or benefited from some of the insights of this episode, consider leaving us a five-star review on Apple Podcasts, and let us know what you learned, found helpful, or liked most about this show!

The Data Chronicles
AI Issue Spotting: What is AI?

The Data Chronicles

Play Episode Listen Later Sep 8, 2025 29:02


In this episode of Data Chronicles, host Scott Loughlin explores the challenges of defining artificial intelligence (AI) under emerging laws, including under the EU AI Act. He is joined by Hogan Lovells partner Etienne Drouard and senior associate Olga Kurochkina to discuss the difficulties in drawing clear lines around what qualifies as AI, the importance of that definition under the EU AI Act for both developers and users, and the broader landscape of AI regulation. The conversation highlights the importance of distinguishing AI from automation, the compliance obligations that follow, and the ways AI legislation continues to evolve.

SHIFT
Architect of the EU AI Act Expresses Concerns

SHIFT

Play Episode Listen Later Sep 3, 2025 17:43


EU AI Act architect and lead author Gabriele Mazzini shares his experience drafting the law. He also talks about his concerns with implementation and its potential impact on European competitiveness, and how that led him to quit his job, in the latest installment of our oral history project.

This episode was recorded at TEDAI in Vienna and originally ran in 2024.

We Meet:
MIT Media Lab Research Affiliate & MIT Connection Science Fellow Gabriele Mazzini

Credits:
This episode of SHIFT was produced by Jennifer Strong and Emma Cillekens, and it was mixed by Garret Lang, with original music from him and Jacob Gorski. Art by Meg Marco.