Podcasts about the EU AI Act

  • 395 podcasts
  • 687 episodes
  • 35m average duration
  • 5 weekly new episodes
  • Latest episode: Dec 18, 2025



Best podcasts about the EU AI Act

Latest podcast episodes about the EU AI Act

The Road to Accountable AI
Alexandru Voica: Responsible AI Video


Play Episode Listen Later Dec 18, 2025 38:23


Alexandru Voica, Head of Corporate Affairs and Policy at Synthesia, discusses how the world's largest enterprise AI video platform has approached trust and safety from day one. He explains Synthesia's "three C's" framework—consent, control, and collaboration: never creating digital replicas without explicit permission, moderating every video before rendering, and engaging with policymakers to shape practical regulation. Voica acknowledges these safeguards have cost some business, but argues that for enterprise sales, trust is competitively essential. The company's content moderation has evolved from simple keyword detection to sophisticated LLM-based analysis, recently withstanding a rigorous public red-team test organized by NIST and Humane Intelligence.

Voica criticizes the EU AI Act's approach of regulating how AI systems are built rather than focusing on harmful outcomes, noting that smaller models can now match frontier capabilities while evading compute-threshold regulations. He points to the UK's outcome-focused approach—like criminalizing non-consensual deepfake pornography—as more effective. On adoption, Voica argues that AI companies should submit to rigorous third-party audits using ISO standards rather than publishing philosophical position papers—the thesis of his essay "Audits, Not Essays." The conversation closes on a personal note: growing up in 1990s Romania with rare access to English tutoring, Voica sees AI-powered personalized education as a transformative opportunity to democratize learning.

Alexandru Voica is the Head of Corporate Affairs and Policy at Synthesia, the UK's largest generative AI company and the world's leading AI video platform. He has worked in the technology industry for over 15 years, holding public affairs and engineering roles at Meta, NetEase, Ocado, and Arm. Voica holds an MSc in Computer Science from the Sant'Anna School of Advanced Studies and serves as an advisor to MBZUAI, the world's first AI university.
Links:
  • Transcript
  • Audits, Not Essays: How to Win Trust for Enterprise AI (Transformer)
  • Synthesia's Content Moderation Systems Withstand Rigorous NIST, Humane Intelligence Red Team Test (Synthesia)
  • Computerspeak Newsletter

The Privacy Advisor Podcast
Former AI Act negotiator Laura Caroli on the proposed EU Digital Omnibus for AI


Play Episode Listen Later Dec 17, 2025 49:18


On November 19, the European Commission unveiled two major omnibus packages as part of its European Data Union Strategy. One package proposes several changes to the EU General Data Protection Regulation, while the other proposes significant changes to the recently minted EU AI Act, including a proposed delay to the regulation of so-called high-risk AI systems.

Laura Caroli was a lead negotiator and policy advisor to AI Act co-rapporteur Brando Benifei and was immersed in the high-stakes negotiations leading to the AI regulation. She is also a former senior fellow at the Center for Strategic and International Studies, but recently moved back to Brussels during a time of major complexity in the EU.

IAPP Editorial Director Jedidiah Bracy caught up with Caroli to discuss her views on the proposed changes to the AI Act in the omnibus package and how she thinks the negotiations will play out. Here's what she had to say.

The Data Diva E267 - Federico Marengo and Debbie Reynolds

"The Data Diva" Talks Privacy Podcast

Play Episode Listen Later Dec 16, 2025 39:29 Transcription Available


In Episode 267 of The Data Diva Talks Privacy Podcast, Debbie Reynolds, The Data Diva, talks with Federico Marengo, Associate Partner at White Label Consultancy in Italy. They explore the accelerating intersection of privacy, artificial intelligence, and governance, and discuss how organizations can build practical, responsible AI programs that align with existing privacy and security frameworks. Federico explains why AI governance cannot exist in a vacuum and must be integrated with the policies, controls, and operational practices companies already use.

The conversation delves into the challenges organizations face in adopting AI responsibly, including understanding the requirements of the EU AI Act, right-sizing compliance expectations for organizations of different scales, and developing programs that allow innovation while managing risk. Federico highlights the importance of educating leadership about where AI regulations actually apply, since many businesses overestimate their obligations, and he explains why clarity around high-risk systems is essential for reducing unnecessary fear and confusion.

Debbie and Federico also discuss future trends for global AI and privacy governance, including whether companies will eventually adopt unified enterprise frameworks rather than fragmented jurisdiction-specific practices. They explore how organizations can upskill their teams, embed governance into product development, and normalize AI as part of standard technology operations. Federico shares his vision for a world where professionals collaborate to advance best practices and help organizations embrace AI with confidence rather than hesitation.

Become an insider: join Data Diva Confidential for data strategy and data privacy insights delivered to your inbox.

Diritto al Digitale
AI Governance: Innovation, Privacy and the EU AI Act with Oliver Patel of AstraZeneca


Play Episode Listen Later Dec 16, 2025 18:29


Artificial intelligence is rapidly transforming the pharmaceutical and life sciences sector — but innovation in this field comes with some of the highest regulatory, ethical, and governance expectations.

In this episode of Legal Leaders Insights from Diritto al Digitale, Giulio Coraggio of DLA Piper speaks with Oliver Patel, Head of Enterprise AI Governance at AstraZeneca, about how AI governance is implemented in practice within a global pharmaceutical company.

The conversation covers:
  • What enterprise AI governance looks like in the life sciences sector
  • How to balance AI innovation with privacy, intellectual property, and compliance
  • The concrete implications of the EU AI Act for pharmaceutical companies
  • Practical governance approaches to enable responsible and scalable AI

This episode is particularly relevant for legal professionals, compliance teams, in-house counsel, data leaders, and executives working in highly regulated industries. Diritto al Digitale is the podcast where law, technology, and digital regulation intersect with real business challenges.

Reimagining Cyber
AI Compliance: New Rules, But Are You Ready?


Play Episode Listen Later Dec 10, 2025 18:58


AI is evolving faster than most organizations can keep up with — and the truth is, very few companies are prepared for what's coming in 2026. In this episode of Reimagining Cyber, Rob Aragao speaks with Ken Johnston, VP of Data, Analytics and AI at Envorso, about the uncomfortable reality: autonomous AI systems are accelerating, regulations are tightening, and most businesses have no idea how much risk they're carrying.

Ken explains why companies have fallen behind, how "AI governance debt" has quietly piled up, and why leaders must take action now before the EU AI Act and Colorado's 2026 regulation bring real financial consequences. From AI bias and data provenance to agentic AI guardrails, observability, audits, and model versioning, Ken lays out the essential steps organizations must take to catch up before it's too late.

It's five years since Reimagining Cyber began — thanks to all of our loyal listeners! As featured on Million Podcasts' Best 100 Cybersecurity Podcasts, Top 50 Chief Information Security Officer (CISO) Podcasts, and Top 70 Security Hacking Podcasts. This list is the most comprehensive ranking of cybersecurity podcasts online, and we are honoured to feature amongst the best! Follow or subscribe to the show on your preferred podcast platform, share the show with others in the cybersecurity world, and get in touch via reimaginingcyber@gmail.com.

AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store

Welcome back to AI Unraveled, your daily strategic briefing on the business impact of artificial intelligence. Today, we are flipping the script on the most boring word in tech: governance. We are diving into the "Compliance Cost Cliff" — a new reality where the ability to control your AI is not just a legal shield, but the primary engine of your velocity. We'll look at how AI hallucinations cost businesses $67 billion this year alone, why the EU AI Act is actually a springboard for global dominance, and how giants like JPMorgan and Mayo Clinic are building "Trust Moats" to leave their competitors in the dust.

1. The Strategic Inversion: From Brake to Engine. The narrative of "move fast and break things" is dead. We have reached the Compliance Cost Cliff, where the financial and reputational risks of ungoverned AI far outweigh the friction of implementing it. Organizations that treat governance as infrastructure are unlocking high-risk, high-reward use cases that remain inaccessible to less disciplined competitors.

2. The "Trust Moat" Theory. In a market flooded with AI-generated noise and deepfakes, verified reality is the only scarce resource.
  • Sales friction: Governance-first companies bypass lengthy procurement security questionnaires, winning deals in the "silent" phase of the buying cycle.
  • Pricing power: Verified, auditable AI outputs command a premium. An AI that cites its sources is a professional tool; one that doesn't is a liability.

3. The Economics of Failure.
  • The hallucination bill: In 2024, AI hallucinations cost businesses $67.4 billion in direct losses, legal sanctions, and operational remediation.
  • Regulatory hammers: The EU AI Act introduces fines of up to 7% of global turnover — a penalty structure that can erase a year's worth of profitability for major firms.

4. Sector Deep Dives: The First Movers.
  • Finance (JPMorgan Chase): Though misread for initially banning ChatGPT, JPMC used the pause to build the LLM Suite — a governed platform that handles data privacy and model risk centrally. This infrastructure now allows them to deploy tools like Connect Coach safely while competitors struggle with compliance.
  • Healthcare (Mayo Clinic): Mayo's "Deploy" platform acts as governance middleware.
  • Insurance (AXA): With SecureGPT, AXA positions itself as a governance auditor, refusing to insure companies that cannot prove their AI safety standards — effectively monetizing governance.

5. The Technical Architecture of Compliance. Governance must be encoded into the software itself: auditable RAG and immutable audit logs.

6. Future Outlook: Agentic AI & Liability. As we move toward agentic AI (systems that take action, not just chat), liability shifts entirely to the deployer. The only defense against an agent that executes a bad trade or deletes a file is a robust, documented governance history.

Keywords: AI Governance, Compliance Cost Cliff, Trust Moat, EU AI Act, Agentic AI, Hallucination Costs, JPMorgan LLM Suite, Mayo Clinic Deploy, Auditable RAG, Vector DB Audit Logs

The Connector.
The Connector Podcast - DFS Digital Finance Summit - Fixing Europe's Fragmented Credit Data For Real Financial Mobility


Play Episode Listen Later Dec 8, 2025 18:00 Transcription Available


We explore how Europe's fragmented credit data blocks fair lending for people who move across borders, and how Mifundo connects bureaus and banks to make financial identities portable. We share progress to 17 countries, the mindset shift lenders need, and how to use AI under EU high-risk rules without black boxes.

  • Fragmentation of European credit data and its impact on borrowers
  • The difference between cross-border credit and financial mobility
  • Historical and regulatory reasons for national silos
  • Access barriers to registries and legal limitations
  • Mindset and cultural inertia within banks and supervisors
  • Mifundo's coverage of 17 countries and 70% of the population
  • Bank partnerships across Spain, Greece, Romania, Scandinavia and the Baltics
  • Ecosystem goal serving bureaus, banks and consumers
  • AI usage with transparency under the EU AI Act
  • The practical endgame: seamless lending regardless of borders

Please find me on LinkedIn; you can contact me there or via email as well. To connect and keep up to date with all the latest, head over to www.jointheconnector.com or hit subscribe via your podcast streaming platform. Thank you for tuning into our podcast about global trends in the FinTech industry. Check out our podcast channel and learn more about The Connector. Follow us on LinkedIn.

Cheers,
Koen Vanderhoydonk
koen.vanderhoydonk@jointheconnector.com

#FinTech #RegTech #Scaleup #WealthTech

Masters of Privacy
Oliver Patel: How the Digital Omnibus affects the EU AI Act


Play Episode Listen Later Dec 7, 2025 30:04


On Wednesday, November 19, 2025, the European Commission unveiled its Digital Omnibus Package, which was split into two proposals: a proposed Regulation on simplification for AI rules, and a proposed Regulation on simplification of the digital legislation. We will tackle the first one today.

We are reviewing that AI-related block with Oliver Patel, who is AI Governance Lead at the global pharma and biotech company AstraZeneca, where he helps implement and scale AI governance worldwide. He also advises governments and international policymakers as a Member of the OECD's Expert Group on AI Risk and Accountability.

References:
  • Oliver Patel, "Fundamentals of AI Governance" (now available for pre-order)
  • Enterprise AI Governance, a newsletter by Oliver Patel
  • Oliver Patel on LinkedIn
  • Oliver Patel: How could the EU AI Act change?
  • EU proposal for a Regulation on simplification for AI rules (EU Commission, covered today)
  • EU proposal for a Regulation on simplification of the digital legislation (EU Commission, not covered today)
  • Europe's digital sovereignty: from doctrine to delivery (Politico)

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit www.mastersofprivacy.com/subscribe

Education Technology Society
Better AI in education … is regulation the answer?


Play Episode Listen Later Dec 5, 2025 25:53


We talk with legal expert Liane Colonna (Stockholm University) about the EU AI Act and what it means for the use of AI in education. To what extent can we rely on regulation to enforce safer and more beneficial forms of AI use in education?

Accompanying reference: Colonna, L. (2025). Artificial Intelligence in Education (AIED): Towards More Effective Regulation. European Journal of Risk Regulation. doi:10.1017/err.2025.10039

AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store
⚖️The Billion-Dollar Decision—Building Your AI Moat vs. Buying Off-the-Shelf


Play Episode Listen Later Dec 5, 2025 16:32


Special Edition: The Billion-Dollar Decision (December 05, 2025)

Today's episode is a deep dive into the strategic shift from "renting" AI to "owning" it. We explore the 2025 playbook for shifting from API wrappers to sovereign AI assets.

Key Topics & Insights

Littler Labor & Employment Podcast
217 - Littler Lounge: European Employer Edition – From Policy Shifts to Workplace Solutions


Play Episode Listen Later Dec 4, 2025 23:54


This episode kicks off with a little red-carpet flair as Littler's Stephan Swinkels returns from the 2025 European Executive Employer conference in London to share the inside scoop. Hosts Nicole LeFave and Claire Deason get the unfiltered download straight from the source as they dive into the findings from Littler's 2025 European Employer Survey Report, spotlighting workplace trends shaking up Europe, from pay transparency and the EU AI Act to IE&D and return-to-work policies. Whether you're navigating new regulations, planning ahead, or trying to make sense of how EU directives intersect with local implementation, this conversation bridges the U.S. patchwork of state and local laws with the European landscape, offering practical insights and fresh perspectives to help employers stay ahead in a rapidly evolving environment. https://www.littler.com/news-analysis/podcast/littler-lounge-european-employer-edition-policy-shifts-workplace-solutions

EUVC
E661 | Jack Leeney, 7GC: The AI Supercycle, IPO Windows & Europe's Missing M&A Flywheel


Play Episode Listen Later Dec 2, 2025 46:43


This week, Andreas Munk Holm sits down with Jack Leeney, co-founder of 7GC, the transatlantic growth fund bridging Silicon Valley and Europe and a backer of AI giants like Anthropic, alongside European rising stars Poolside and Fluidstack.

From IPOs at Morgan Stanley to running Telefónica's US venture arm and now operating a dual-continental fund, Jack shares how 7GC reads the AI supercycle, why infrastructure and platforms win first, and what Europe must fix to unlock the next wave of venture liquidity.

Medical Device Insights
How ROCHE Handles EU Digital Regulation


Play Episode Listen Later Dec 2, 2025 33:21


The "Head of Legal Digital & IT" at ROCHE reports how a large corporation
  • handles EU digital regulation,
  • stays on an equal footing with regulators,
  • what matters in implementation, and
  • what criticism and hopes he has regarding these requirements.
Christian Johner examines the same questions through the lens of someone with insight into many other companies.

Pods Like Us
AI in Podcasting: Humans, Voices, Ethics


Play Episode Listen Later Nov 30, 2025 79:25


Join host Martin Quibell (Marv) and a panel of industry experts as they dive deep into the impact of artificial intelligence on podcasting. From ethical debates to hands-on tools, discover how AI is shaping the future of audio and video content creation.

Guests:
● Benjamin Field (Deep Fusion Films)
● William Corbin (Inception Point AI)
● John McDermott & Mark Francis (Caloroga Shark Media)

Timestamps:
00:00 – Introduction
00:42 – Meet the Guests
01:45 – The State of AI in Podcasting
03:45 – Transparency, Ethics & the EU AI Act
06:00 – Nuance: How AI Is Used (Descript, Shorten Word Gaps, Remove Retakes)
08:45 – AI & Niche Content: Economic Realities
12:00 – Human Craft vs. AI Automation
15:00 – Job Evolution: Prompt Authors & QC
18:00 – Quality Control & Remastering
21:00 – Volume, Scale, and Audience
24:00 – AI Co-Hosts & Experiments (Virtually Parkinson, AI Voices)
27:00 – AI in Video & Visuals (HeyGen, Weaver)
30:00 – Responsibility & Transparency
33:00 – The Future of AI in Media
46:59 – Guest Contact Info & Closing

Tools & Platforms Mentioned:
● Descript: shorten word gaps, remove retakes, AI voice, scriptwriting, editing
● HeyGen: AI video avatars for podcast visuals
● Weaver (Deep Fusion Films): AI-driven video editing and archive integration
● Verbal: AI transcription and translation
● AI voices: for narration, co-hosting, and accessibility
● Other references: Spotify, Amazon, Wikipedia, TikTok, Apple Podcasts, Google
● Programmatic ads

Contact the Guests:
- William Corbin: william@inceptionpoint.ai | LinkedIn
- John McDermott: john@caloroga.com | LinkedIn
- Benjamin Field: benjamin.field@deepfusionfilms.com | LinkedIn
- Mark Francis: mark@caloroga.com | LinkedIn | caloroga.com
- Marv: themarvzone.org

Like, comment, and subscribe for more deep dives into the future of podcasting and media! #Podcasting #AI #ArtificialIntelligence #Descript #HeyGen #PodcastTools #Ethics #MediaInnovation

The Connector.
The Connector Podcast - FinanceX #18 - 2026 Outlook


Play Episode Listen Later Nov 30, 2025 16:38 Transcription Available


The ground under finance moved, and most people only felt a rumble. We spent 2025 translating messy tech estates into DORA-ready living registers, turning spreadsheets into real-time risk maps, and discovering that fintech isn't a sidecar anymore — it's the engine. When a major processor or open banking provider hiccups, payments across entire regions stall. That's why regulators accelerated, why critical third parties now face continuous oversight, and why instant payments became Europe's quiet new normal.

We walk through the practical realities of this shift: how DPM 4.0 and XBRL CSV forced banks and fintechs into a shared language; how SCT Inst mandated 24/7/365 settlement and price parity; and how compliance stopped being a box to tick and started acting like telemetry you can steer with. Then we pivot to AI, where the real gap isn't enthusiasm — it's insurance and accountability. Traditional policies didn't imagine self-learning systems that fail without a hack or a human mistake. Enter AI assurance: controlled testing, stress simulation, and continuous scoring that translate governance into measurable evidence aligned to the EU AI Act's high-risk rules hitting in 2026.

Of course, intelligent agents need rails they can actually use. That's where DeFi's programmable architecture, stablecoins like USDC and PYUSD, and agent payment protocols meet internal policy engines to build compliant, verifiable machine transactions. Alongside, we show how teams killed the Excel grind by automating customer reports that cut churn and DSO, and by issuing immutable premium reports for boards and regulators. Beyond the big hubs, APAC's vLEI momentum, India's privacy advantage, and Latvia's capital-efficient scale point to a broader acceleration powered by standards and verifiable data.

The takeaway is simple and demanding: the winners in 2026 will treat compliance as a product feature, build AI-literate operations, and interoperate across cards, account-to-account, and stablecoin rails. Real-time is here, rules are written, and execution is the frontier. If AI is about to run finance at machine speed, who should own the proof of continuous resilience? Subscribe, share, and tell us your view — because the answer will define the next decade.

Thank you for tuning into our podcast about global trends in the FinTech industry. Check out our podcast channel and learn more about The Connector. Follow us on LinkedIn.

Cheers,
Koen Vanderhoydonk
koen.vanderhoydonk@jointheconnector.com

#FinTech #RegTech #Scaleup #WealthTech

Ahead of the Game
GDPR and AI Regulation for Marketers


Play Episode Listen Later Nov 28, 2025 52:55


Finding it difficult to navigate the changing landscape of data protection? In this episode of the DMI podcast, host Will Francis speaks with Steven Roberts, Group Head of Marketing at Griffith College, Chartered Director, certified Data Protection Officer, and long-time marketing leader. Steven demystifies GDPR, AI governance, and the rapidly evolving regulatory environment that marketers must now navigate. He explains how GDPR enforcement has matured, why AI has created a new layer of complexity, and how businesses can balance innovation with compliance. He breaks down the EU AI Act, its risk-based structure, and its implications for organizations inside and outside the EU. Steven also shares practical guidance for building internal AI policies, tackling "shadow AI," reducing data breach risks, and supporting teams with training and clear governance. For an even deeper look into how businesses can ensure data protection compliance, check out Steven's book, Data Protection for Business: Compliance, Governance, Reputation and Trust.

Steven's Top 3 Tips:
1. Build data protection into projects from the start, using tools like Data Protection Impact Assessments to uncover risks early.
2. Invest in regular staff training to avoid common mistakes caused by human error.
3. Balance compliance with business performance by setting clear policies, understanding your risk appetite, and iterating your AI governance over time.

The Ahead of the Game podcast is brought to you by the Digital Marketing Institute and is available on YouTube, Apple Podcasts, Spotify, and all other podcast platforms. If you enjoyed this episode, please leave a review so others can find us. If you have feedback or would like to be a guest on the show, email the podcast team!

Timestamps:
01:29 – AI's impact on GDPR & the explosion of new global privacy laws
03:26 – Is GDPR the global gold standard?
05:04 – GDPR enforcement today: who gets fined and why
07:09 – Cultural attitudes toward data: EU vs. US
08:51 – The EU AI Act explained: risk tiers, guardrails & human oversight
10:48 – What businesses must do: DPIAs, fundamental rights assessments & more
13:38 – Shadow AI, risk appetite & internal governance challenges
17:10 – Should you upload company data to ChatGPT?
20:40 – How the AI Act affects countries outside the EU
24:47 – Will privacy improve over time?
28:45 – What teams can do now: tools, processes & data audits
33:49 – Data enrichment tools: targeting vs. legality
36:47 – Will anyone actually check your data practices?
40:06 – Steven's top tips for navigating GDPR & AI

AI en Onderwijs
AI and assessment at the Universiteit van Amsterdam


Play Episode Listen Later Nov 28, 2025 38:06


In this episode of AI en Onderwijs, Thijs Wesselink and Peter Loonen talk with Merel Pompe, educational advisor at the Universiteit van Amsterdam, about AI and assessment. She outlines how the UvA has shifted from "AI prohibited" to controlled permission, with its own UvA AI chat environment, personas (custom GPTs), and clear agreements per faculty, program, and course. We examine what AI means for assessment: why grading with AI is not permitted under the EU AI Act, how validity and reliability come under pressure in essays and theses, and why many programs must go "back to the drawing board" with learning objectives, assessment formats, and constructive alignment. Merel shares concrete examples such as the two-lane approach (what do you assess strictly on campus, and where may AI actually participate?) and the assessment network with ambassadors within the UvA. You'll hear what other education sectors can learn from this approach, the role AI literacy and privacy play, and where the biggest concerns now lie: time, workload, and the risk that students earn credits without the intended level of mastery.

Web3 CMO Stories
Nearshoring Meets AI | S5 E49


Play Episode Listen Later Nov 25, 2025 19:18 Transcription Available


Walk the floor at Web Summit without leaving your headphones. We sit down with Jo Smets, founder of BluePanda and president of the Portuguese Belgian Luxembourg Chamber of Commerce, to unpack how nearshoring and AI are reshaping CRM, marketing, and team delivery across Europe.

We start with clarity on nearshoring: why time zone, culture, and communication speed beat cost alone, and how that proximity pays off when you're wiring AI into daily work. Jo shares how BluePanda applies AI beyond demos — in recruitment, performance, and operations — then translates those lessons into client outcomes. We compare adoption patterns across startups and corporates, call out the real blocker (end-to-end process automation), and map the role of global networks like BBN for keeping pace with tools and trends.

The conversation pivots to trust and governance: practical ways to protect data, when on-prem makes sense, and how to use EU AI Act guidance without stalling innovation. We explore the marketing shift from SEO to GEO, the idea of "AI-proof" websites, and the move toward dynamic, persona-aware content that renders at load. Jo offers a simple path to progress — pick one process, pilot, measure, educate — while keeping empathy at the core as managers start leading both humans and AI agents. Along the way, we spotlight how chambers and communities connect ecosystems across borders, turning events into learning loops and real partnerships.

Looking to modernize without losing your team's identity? You'll leave with a plan for small wins, a lens for tool curation, and a sharper view of where marketing is headed next. If this resonated, subscribe, share it with a colleague who's wrestling with AI adoption, and drop a review to help others find the show.

This episode was recorded in the official podcast booth at Web Summit (Lisbon) on November 12, 2025. Check out the video footage, blog article, and show notes here: https://webdrie.net/why-european-teams-win-with-nearshoring-and-practical-ai/

KPMG in Ireland
Inside Insurance - Special Focus on the EU AI Act - Episode 15


Play Episode Listen Later Nov 25, 2025 14:10


We're back with a brand-new episode of Inside Insurance, and in this episode we take a deep dive into one of the most transformative regulatory developments in recent years: the EU AI Act. Joining us for this insightful conversation is Sean Redmond, Director at KPMG and leader of our AI Risk and Regulatory team, alongside Jean Rea, Partner at KPMG. Together, they unpack the implications of this landmark legislation and what it means for insurers.

Paymentandbanking FinTech Podcast
Episode 20_25: AI in Finance: OpenAI, Google, and Anthropic Deliver While Europe Readjusts


Play Episode Listen Later Nov 24, 2025 61:26


In the new episode of AI in Finance, what has become the norm in recent months happens again: the pace of the AI industry keeps accelerating, while regulation, infrastructure, and use cases try to keep up. Sascha and Maik bring so much news that it could easily have filled a three-hour episode. It's a whirlwind tour across Europe, Silicon Valley, Big Tech, new models, new tools, and the question of how close we actually are to real, everyday AI.

Cambridge Law: Public Lectures from the Faculty of Law
Faithful or Traitor? The Right of Explanation in a Generative AI World: CIPIL Evening Seminar


Play Episode Listen Later Nov 24, 2025 49:02


Speaker: Professor Lilian Edwards, Emeritus Professor of Law, Innovation & Society, Newcastle Law School Biography: Lilian Edwards is a leading academic in the field of Internet law. She has taught information technology law, e-commerce law, privacy law and Internet law at undergraduate and postgraduate level since 1996 and been involved with law and artificial intelligence (AI) since 1985. She is now Emerita Professor at Newcastle and Honorary Professor at CREAte, University of Glasgow, which she helped co-found. She is the editor and major author of Law, Policy and the Internet, one of the leading textbooks in the field of Internet law (Hart, 2018, new edition forthcoming with Urquhart and Goanta, 2026). She won the Future of Privacy Forum award in 2019 for best paper ("Slave to the Algorithm" with Michael Veale) and the award for best non-technical paper at FAccT in 2020, on automated hiring. In 2004 she won the Barbara Wellberry Memorial Prize in 2004 for work on online privacy where she invented the notion of data trusts, a concept which ten years later has been proposed in EU legislation. She is a former fellow of the Alan Turing Institute on Law and AI, and the Institute for the Future of Work. Edwards has consulted for inter alia the EU Commission, the OECD, and WIPO.Abstract: The right to an explanation is having another moment. Well after the heyday of 2016-2018 when scholars tussled over whether the GDPR ( in either art 22 or arts 13-15) conferred a right to explanation, the CJEU case of Dun and Bradstreet has finally confirmed its existence, and the Platform Work Directive has wholesale revamped art 22 in its Algorithmic Management chapter. 
Most recently the EU AI Act added its own Frankenstein-like right to an explanation of AI systems (art 86). None of these provisions, however, pins down what the essence of the explanation should be, given the many notions that can be invoked here: a faithful description of source code or training data; an account that enables challenge or contestation; a "plausible" description that may be appealing in a behaviouralist sense but might actually be misleading when operationalised, e.g. to generate a medical course of treatment. Agarwal et al argue that the tendency of UI designers, regulators, and judges alike to lean towards the plausibility end may be unsuited to large language models, which represent far more of a black box in size and optimisation than conventional machine learning, and which are trained to present encouraging but not always accurate accounts of their workings. Yet this is also the direction of travel taken by CJEU Dun & Bradstreet, above. This paper argues that explanations of large model outputs may present novel challenges needing thoughtful legal mandates. For more information (and to download slides) see: https://www.cipil.law.cam.ac.uk/seminars-and-events/cipil-seminars

AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store

Welcome to a Special Episode of AI Unraveled: The Cost of Data Gravity: Solving the Hybrid AI Deployment Nightmare. We are tackling the silent budget killer in enterprise AI: Data Gravity. You have petabytes of proprietary data—the "mass" that attracts apps and services—but moving it to the cloud for inference is becoming a financial and regulatory nightmare. We break down why the cloud-first strategy is failing for heavy data, the hidden tax of egress fees, and the new architectural playbook for 2025. Source: https://www.linkedin.com/pulse/cost-data-gravity-solving-hybrid-ai-deployment-nightmare-djamgatech-ic42c Strategic Pillars & Topics

The AI Policy Podcast
Trump's Draft AI Preemption Order, EU AI Act Delays, and Anthropic's Cyberattack Report

The AI Policy Podcast

Play Episode Listen Later Nov 21, 2025 54:26


In this episode, we start by discussing Greg's trip to India and the upcoming India AI Impact Summit in February 2026 (00:29). We then unpack the Trump Administration's draft executive order to preempt state AI laws (07:46) and break down the European Commission's new "digital omnibus" package, including proposed adjustments to the AI Act and broader regulatory simplification efforts (17:51). Finally, we discuss Anthropic's report on a China-backed "highly sophisticated cyber espionage campaign" using Claude and the mixed reactions from cybersecurity and AI policy experts (37:37).

AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store

Welcome to AI Unraveled (November 20, 2025): Your daily strategic briefing on the business impact of AI. Today's Highlights: Saudi Arabia signs landmark AI deals with xAI and Nvidia; Europe scales back crucial AI and privacy laws; Anthropic courts Microsoft and Nvidia to break free from AWS; and Google's Gemini 3 climbs leaderboards, reinforcing its path toward AGI. Strategic Pillars & Topics:

Alexa's Input (AI)
Shift Left Your AI Security with SonnyLabs Founder Liana Tomescu

Alexa's Input (AI)

Play Episode Listen Later Nov 17, 2025 64:23


In this episode of Alexa's Input (AI) Podcast, host Alexa Griffith sits down with Liana Tomescu, founder of SonnyLabs and host of the AI Hacks podcast. Dive into the world of AI security and compliance as Liana shares her journey from Microsoft to founding her own company. Discover the challenges and opportunities in making AI applications secure and compliant, and learn about the latest in AI regulations, including the EU AI Act. Whether you're an AI enthusiast or a tech professional, this episode offers valuable insights into the evolving landscape of AI technology.
Links: SonnyLabs Website: https://sonnylabs.ai/ SonnyLabs LinkedIn: https://www.linkedin.com/company/sonnylabs-ai/ Liana's LinkedIn: https://www.linkedin.com/in/liana-anca-tomescu/
Alexa's Links: LinkTree: https://linktr.ee/alexagriffith Alexa's Input YouTube Channel: https://www.youtube.com/@alexa_griffith Website: https://alexagriffith.com/ LinkedIn: https://www.linkedin.com/in/alexa-griffith/ Substack: https://alexasinput.substack.com/
Keywords: AI security, compliance, female founder, SonnyLabs, EU AI Act, cybersecurity, prompt injection, AI agents, technology innovation, startup journey
Chapters: 00:00 Introduction to Liana Tomescu and SonnyLabs 02:53 The Journey of a Female Founder in Tech 05:49 From Microsoft to Startup: The Transition 09:04 Exploring AI Security and Compliance 11:41 The Role of Curiosity in Entrepreneurship 14:52 Understanding SonnyLabs and Its Mission 17:52 The Importance of Community and Networking 20:42 MCP: Model Context Protocol Explained 23:54 Security Risks in AI and MCP Servers 27:03 The Future of AI Security and Compliance 38:25 Understanding Prompt Injection Risks 45:34 The Shadow AI Phenomenon 45:48 Navigating the EU AI Act 52:28 Banned and High-Risk AI Practices 01:00:43 Implementing AI Security Measures 01:17:28 Exploring AI Security Training

Digitale Vorreiter - Vodafone Business Cases
The Legal Tech Revolution: How Jupus's AI Secretariat Helps Solve the Skills Shortage in Law Firms

Digitale Vorreiter - Vodafone Business Cases

Play Episode Listen Later Nov 17, 2025 39:58 Transcription Available


René Fergen, CEO of Jupus, is himself a fully qualified lawyer (Diplom-Jurist) who deliberately traded the classic legal career for the digitalization of the legal industry. His company Jupus has developed an AI secretariat that helps law firms automate all non-legal tasks, from the first client contact to invoicing. The backdrop is a massive skills shortage: while the number of lawyers in Germany is rising, the number of newly trained paralegals (Rechtsanwaltsfachangestellte) has fallen by more than 80% over the last 30 years. In conversation with Christoph Burseg, René Fergen talks about disruption in one of the world's oldest industries, the challenge of developing AI software in the highly sensitive legal environment, and why digitalization is no longer optional for law firms that want to remain competitive. In this episode you will learn: - Which tasks the Jupus software takes over to relieve lawyers and their staff. - That around 165,000 lawyers practice in Germany and how the industry's staffing shortage affects access to justice. - How Jupus analyzes and structures unstructured documents (e.g., 10,000 pages of contracts or correspondence) in fractions of a second, a task that would otherwise take days. - What Jupus's telephone AI can do and how it handles callers ranging from judges to salespeople. - Why the Jupus team has grown to more than 60 people in under three years and raised more than 8 million euros in capital. - Why the CEO of Jupus regards the EU AI Act, from a broader societal perspective, as a "catastrophe" for Europe's competitiveness and innovation. Christoph on LinkedIn: https://www.linkedin.com/in/christophburseg Contact us via Instagram: https://www.instagram.com/vodafonebusinessde/

Conversations For Leaders & Teams
E89. Responsible AI for the Modern Leader & Coach w/Colin Cosgrove

Conversations For Leaders & Teams

Play Episode Listen Later Nov 15, 2025 34:36


Explore how leaders and coaches can adopt AI without losing the human core, turning compliance and ethics into everyday practice rather than a side office. Colin Cosgrove shares a practical arc for AI readiness, concrete use cases, and a clear view of risk, trust, and governance.
• journey from big-tech compliance to leadership coaching • why AI changes the leadership environment and decision pace • making compliance human: transparency, explainability, consent • AI literacy across every function, not just data teams • the AI leader archetype arc for mindset and readiness • practical augmentation: before, during, after coaching sessions • three risks: reputational, relational, regulatory • leader as coach: trust, questions, and human skills • EU AI Act overview and risk-based obligations • governance, accountability, and cross-
Reach out to Colin on LinkedIn and check out his website: Movizimo.com. Support the show
BelemLeaders–Your organization's trusted partner for leader and team development. Visit our website to connect: belemleaders.org or book a discovery call today! belem.as.me/discovery
Until next time, keep doing great things!

The Digital Executive
Quantifying AI Risk: Yakir Golan on Turning Cyber Threats Into Business Intelligence | Ep 1145

The Digital Executive

Play Episode Listen Later Nov 14, 2025 15:20


In this episode of The Digital Executive, host Brian Thomas welcomes Yakir Golan, CEO and Co-founder of Kovrr, a global leader in cyber and AI risk quantification. Drawing from his early career in Israeli intelligence and later roles in software, hardware, and product management, Yakir explains how his background shaped his holistic approach to understanding complex, interconnected risk systems. Yakir breaks down why quantifying AI and cyber risk—rather than relying on subjective, color-coded scoring—is becoming essential for enterprise leaders, boards, and regulators. He explains how Kovrr's new AI Risk Assessment and Quantification module helps organizations model real financial exposure, understand high-impact "tail risks," and align security, GRC, and finance teams around a shared, objective language. Looking ahead, Yakir discusses how global regulation, including the EU AI Act, is accelerating the need for measurable, defensible risk management. He outlines a future where AI risk quantification becomes a board-level expectation and a foundation for resilient, responsible innovation. Through Kovrr's mission, Yakir aims to equip enterprises with the same level of intelligence-driven decision making once reserved for national security—now applied to the rapidly evolving digital risk landscape. If you liked what you heard today, please leave us a review - Apple or Spotify. See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

Irish Tech News Audio Articles
Governing AI in the Age of Risk

Irish Tech News Audio Articles

Play Episode Listen Later Nov 14, 2025 6:59


Guest article by Paul Dongha, co-author of Governing the Machine: How to navigate the risks of AI and unlock its true potential. Artificial Intelligence (AI) has moved beyond the realm of IT; it is now the defining strategic challenge for every modern organisation. The global rush to adopt AI is shifting from a sprint for innovation to a race for survival. Yet as businesses scramble to deploy powerful systems, from predictive analytics to generative AI, they risk unleashing a wave of unintended consequences that could cripple them. That warning sits at the heart of Governing the Machine: How to navigate the risks of AI and unlock its true potential, a timely new guide for business leaders. Governing the Machine The authors, Dr Paul Dongha, Ray Eitel-Porter, and Miriam Vogel, argue that the drive to embrace AI must be matched by an equally urgent determination to govern it. Drawing on extensive experience advising global boardrooms, they cut through technical jargon to focus on the organisational realities of AI risk. Their step-by-step approach shows how companies can build responsible AI capability, adopting new systems effectively without waiting for perfect regulation or fully mature technology. That wait-and-see strategy, they warn, is a losing one: delay risks irrelevance, while reckless deployment invites legal and reputational harm. The evidence is already visible in a growing list of AI failures, from discriminatory algorithms in public services to generative models fabricating news or infringing intellectual property. These are not abstract technical flaws but concrete business risks with real-world consequences. Whose problem is it anyway? According to the authors, it is everyone's. The book forcefully argues that AI governance cannot be siloed within the technology department. 
It demands a cross-enterprise approach, requiring active leadership from the C-suite, legal counsel, Human Resources, and privacy and information security teams, as well as frontline staff. Rather than just sounding the alarm, the book provides a practical framework for action. It guides readers through the steps of building a robust AI governance programme. This includes defining clear principles and policies, establishing accountability, and implementing crucial checkpoints. A core part of this framework is a clear-eyed look at the nine key risks organisations must manage: accuracy, fairness and bias, explainability, accountability, privacy, security, intellectual property, safety, and the impact on the workforce and environment. Each risk area is explained, and numerous controls that mitigate and manage these risks are listed, with ample references to allow the interested reader to follow up. Organisations should carefully consider implementing a Governance, Risk and Compliance (GRC) system, which brings together all key aspects of AI governance. GRC systems are available both from large tech companies and from specialist vendors. A GRC system ties together all key components of AI governance, providing management with a single view of their deployed AI systems and a window into all stages of AI governance for systems under development. The book is populated with numerous case studies and interviews with senior executives from some of the largest and best-known organisations in the world that are grappling with AI risk management. The authors also navigate the complex and rapidly evolving global regulatory landscape. With the European Union implementing its comprehensive AI Act and the United States advancing a fragmented patchwork of state and federal rules, a strong, adaptable internal governance system is presented as the only viable path forward. 
The EU AI Act, now in force with staggered compliance deadlines over the coming two years, requires all organisations operating within the EU to implement risk mitigation controls with evidence of compliance. A key date is August 2nd 2026, by which time all 'Hig...

Vitamine A | De podcast voor accountants
Vitamine A #66 | Reality Check: will the accountant survive the AI revolution?

Vitamine A | De podcast voor accountants

Play Episode Listen Later Nov 14, 2025 38:28


In this special episode tied to Accountantsdag 2025, with its theme Reality Check, Vitamine A again dives into the impact of artificial intelligence on the accounting profession. Three guests, three perspectives, and one big question: what does AI mean for the profession, the organisation, and the person behind the accountant? Mona de Boer (PwC, Responsible AI) describes how AI has become a daily reality and why organisations must now decide which values they stand for. She discusses the significance of the EU AI Act and the rise of AI assurance as a new domain within trust in technology, and stresses that the accountant is not losing ground but gaining importance. Nart Wielaard takes the audience through the concept of the Zero Person Company, an experimental organisation run by agents instead of people. The experiment shows that AI cannot copy a human being, but that processes can be designed in a fundamentally different way. The accountant plays a role in this as coach, supervisor, and quality guardian of AI-driven processes. With Marjan Heemskerk the focus shifts to the daily practice of entrepreneurs. She sees AI taking over basic questions, but above all creating room for an accountant who interprets, thinks along, and provides context. Soft skills become crucial. The challenge for firms is to deploy AI responsibly, bring employees along, and at the same time resist the temptation of shortcuts. The episode ends with a reality check that is both technological and human. AI changes a great deal, but the foundation of the accounting profession remains intact: trust, independence, and the ability to interpret reality. Vitamine A has covered AI before. Esther Kox, Hakan Koçak, and Nart Wielaard will also speak at Accountantsdag 2025, on 19 November 2025. Accountantsdag 2025: http://www.accountantsdag.nl 
Vitamine A #63 | AI as assistant, not as authority... In conversation with Esther Kox. Vitamine A #62 | AI at the office: hesitate or apply? With Hakan Koçak. Vitamine A #43 | Trustworthy AI and accountability: how do you do that? With Mona de Boer (PwC). Vitamine A #34 | What does AI mean for accountants in search of truth? 

The Road to Accountable AI
Oliver Patel: Sharing Frameworks for AI Governance

The Road to Accountable AI

Play Episode Listen Later Nov 13, 2025 36:03


Oliver Patel has built a sizeable online following for his social media posts and Substack about enterprise AI governance, using clever acronyms and visual frameworks to distill insights from his experience at AstraZeneca, a major global pharmaceutical company. In this episode, he details his career journey from academic theory to government policy and now practical application, and offers insights for those new to the field. He argues that effective enterprise AI governance requires being pragmatic and picking your battles, since the role isn't to stop AI adoption but to enable organizations to adopt it safely and responsibly at speed and scale. He notes that core pillars of modern AI governance, such as AI literacy, risk classification, and maintaining an AI inventory, are incorporated into the EU AI Act and thus essential for compliance. Looking forward, Patel identifies AI democratization—how to govern AI when everyone in the workforce can use and build it—as the biggest hurdle, and offers thoughts about how enterprises can respond. Oliver Patel is the Head of Enterprise AI Governance at AstraZeneca. Before moving into the corporate sector, he worked for the UK government as Head of Inbound Data Flows, where he focused on data policy and international data transfers, and was a researcher at University College London. He serves as an IAPP Faculty Member and a member of the OECD's Expert Group on AI Risk. His forthcoming book, Fundamentals of AI Governance, will be released in early 2026.
Transcript
Enterprise AI Governance Substack
Top 10 Challenges for AI Governance Leaders in 2025 (Part 1)
Fundamentals of AI Governance book page

VinciWorks
GDPR - What's next for data protection and compliance

VinciWorks

Play Episode Listen Later Nov 12, 2025 55:46


Over seven years since its introduction, the GDPR continues to evolve as new technologies, court rulings and regulatory guidance reshape how organisations handle personal data. In this episode, we bring you insights from our recent webinar, where experts unpacked the latest developments in GDPR and global data protection. With the EU AI Act now in force, shifting cross-border data frameworks, and regulators issuing record fines, compliance has never been more complex — or more crucial. Tune in to learn:
What recent GDPR fines reveal about regulator priorities
How to navigate overlaps between AI regulation and data protection rules
Best practices for managing EU–UK–US data transfers after new adequacy decisions
How to address emerging risks around biometrics, children's data, and AI profiling
Real-world case studies showing how organisations are adapting to change
This episode is a must-listen for data protection officers, compliance professionals and legal teams looking to strengthen governance, maintain trust, and stay ahead in a fast-moving regulatory landscape.

The Value Pricing Podcast
Why You Can't Analyse Client Data (Unless You Do THIS)

The Value Pricing Podcast

Play Episode Listen Later Nov 10, 2025 34:45


Can you safely analyse client data with ChatGPT? Only if you do THIS one critical thing. We reveal a powerful use case for financial analysis—done in seconds. But most accountants are breaking the rules without realising it. GDPR, client consent, data security—it's all covered. Plus, the surprising fix that makes everything compliant. This could change how you use AI forever. The latest episode of the Value Pricing Podcast is now available: Why You Can't Analyse Client Data (Unless You Do THIS). In today's episode you will learn:
How to analyse financial data with ChatGPT—safely and fast
Why free and Plus plans break GDPR rules
The #1 mistake accountants make with client data
What the EU AI Act means for your firm
How to get client consent and stay compliant
A powerful workaround using local AI on your computer
Don't miss out on the essential AI insights every accountant needs to stay compliant, save time, and deliver smarter client reports. Listen now! 

Portfolio Checklist
When is it worth stocking up on Hungarian blue-chip stocks? OTP and Mol have reported

Portfolio Checklist

Play Episode Listen Later Nov 7, 2025 30:07


We dug into Mol's and OTP's quarterly reports and the latest figures, which can give investors a handle on whether to think about buying or selling. Viktor Nagy, lead analyst at Portfolio, discussed the topic. The second half of the show focused on the EU AI Act: the European Commission would partly postpone the entry into force of the world's strictest AI regulation, after the United States and big tech companies put intense pressure on Brussels. We also asked Dóra Petrányi, managing director for the Central and Eastern European region at CMS, about the background to the decision and the obligations Hungarian companies may face under the AI Act. Main segments: Intro − (00:00) Mol and OTP have reported: to buy or not to buy? − (02:26) EU AI Act: a reprieve for Big Tech − (14:15) Capital markets outlook − (25:44) Image source: Getty Images. See omnystudio.com/listener for privacy information.

ServiceNow Podcasts
AI Regulations Explained: EU AI Act, Colorado Law, and NIST Framework

ServiceNow Podcasts

Play Episode Listen Later Nov 6, 2025 19:02


Join host Bobby Brill as he sits down with ServiceNow's AI legal and governance experts to break down the complex world of AI regulations. Andrea LaFountain (Director of AI Legal), Ken Miller (Senior Director of Product Legal), and Navdeep Gill (Staff Senior Product Manager, Responsible AI) explain how organizations can navigate the growing landscape of AI compliance. In this episode, you'll learn about three major regulatory approaches: the risk-based EU AI Act, Colorado's algorithmic discrimination law, and the NIST voluntary framework. The experts discuss practical strategies for complying with multiple regulations simultaneously, using the EU AI Act as a baseline and measuring the delta for new requirements. Key topics covered:
- Why proactive compliance matters before regulations fully take effect
- How AI Control Tower helps discover and manage AI systems across your enterprise
- The exponential math behind AI compliance (vendors, employees, third parties)
- Setting up governance policies for high-risk AI use cases
- Timeline for major compliance deadlines (Colorado June 2026, EU August 2026)
- The real costs of waiting for your first violation
Whether you're managing AI deployment, working in compliance, or trying to understand the regulatory landscape, this episode provides actionable insights on building responsible AI governance infrastructure. Guests: Andrea LaFountain - Director, AI Legal; Ken Miller - Senior Director, Product Legal; Navdeep Gill - Staff Senior Product Manager, Responsible AI. Host: Bobby Brill. Chapters: 00:00 Introduction to AI and Regulations 00:45 Meet the Experts 01:52 Overview of Key AI Regulations 03:03 Compliance Strategies for AI Regulations 07:33 ServiceNow's AI Control Tower 14:02 Challenges and Risks in AI Governance 16:04 Future of AI Regulations 18:34 Conclusion and Final Thoughts. See omnystudio.com/listener for privacy information.

ServiceNow TechBytes
AI Regulations Explained: EU AI Act, Colorado Law, and NIST Framework

ServiceNow TechBytes

Play Episode Listen Later Nov 6, 2025 19:02


Join host Bobby Brill as he sits down with ServiceNow's AI legal and governance experts to break down the complex world of AI regulations. Andrea LaFountain (Director of AI Legal), Ken Miller (Senior Director of Product Legal), and Navdeep Gill (Staff Senior Product Manager, Responsible AI) explain how organizations can navigate the growing landscape of AI compliance. In this episode, you'll learn about three major regulatory approaches: the risk-based EU AI Act, Colorado's algorithmic discrimination law, and the NIST voluntary framework. The experts discuss practical strategies for complying with multiple regulations simultaneously, using the EU AI Act as a baseline and measuring the delta for new requirements. Key topics covered:
- Why proactive compliance matters before regulations fully take effect
- How AI Control Tower helps discover and manage AI systems across your enterprise
- The exponential math behind AI compliance (vendors, employees, third parties)
- Setting up governance policies for high-risk AI use cases
- Timeline for major compliance deadlines (Colorado June 2026, EU August 2026)
- The real costs of waiting for your first violation
Whether you're managing AI deployment, working in compliance, or trying to understand the regulatory landscape, this episode provides actionable insights on building responsible AI governance infrastructure. Guests: Andrea LaFountain - Director, AI Legal; Ken Miller - Senior Director, Product Legal; Navdeep Gill - Staff Senior Product Manager, Responsible AI. Host: Bobby Brill. Chapters: 00:00 Introduction to AI and Regulations 00:45 Meet the Experts 01:52 Overview of Key AI Regulations 03:03 Compliance Strategies for AI Regulations 07:33 ServiceNow's AI Control Tower 14:02 Challenges and Risks in AI Governance 16:04 Future of AI Regulations 18:34 Conclusion and Final Thoughts. See omnystudio.com/listener for privacy information.

Irish Tech News Audio Articles
Irish Leaders Face The Cyber Stress Test, Amid Rising Talent, Training And Tech Supply Chain Disruptions

Irish Tech News Audio Articles

Play Episode Listen Later Nov 6, 2025 5:51


A majority of Irish organisations have enhanced cybersecurity measures in recent months, yet under-investment in the key areas of training and compliance, ongoing talent shortages, and AI-powered cyber threats continue to be areas of concern for Irish cyber leaders. That's according to EY Ireland's inaugural Cyber Leaders Index, which surveyed 165 of Ireland's senior cyber leaders with a particular focus on the corporate, health and life sciences and government sectors. 83% of Irish cyber leaders report enhancing cybersecurity measures over the past six months, with nearly a third (32%) noting an increase in budgets, while two thirds (67%) report investment holding steady. However, more than 70% of cyber leaders report difficulties securing budget for staff cyber awareness training. 43% cited difficulty securing budget for hiring and retaining skilled personnel, which remains a key concern for cyber leaders. Nearly half (48%) of cyber leaders identified AI and data security as a top priority for the year ahead, and many organisations are adapting their practices in response to the EU AI Act. Yet 44% say they face challenges securing budget for AI-related security initiatives, suggesting that investment is not keeping pace with strategic intent. This may reflect internal competition for AI budgets, rather than reluctance to invest in cybersecurity, and embedding cybersecurity into AI efforts positions the function as a driver of growth and advantage. Almost seven in ten (68%) of respondents said that protecting against supply chain and vendor-related threats is a top priority within their cybersecurity programmes; however, only 4% identify third-party vendor risk as one of their main concerns. 
Compliance with relevant regulations and data privacy laws such as NIS2 was cited as a priority by 39% of respondents, while the EU AI Act is also having an impact with nearly half (47%) of the leaders surveyed stating they have updated their data handling and monitoring practices and four in ten (39%) having updated their data protection impact assessment systems. Puneet Kukreja, Technology Consulting Partner and Head of Cyber at EY Ireland said: "In an AI-driven world where algorithms and code are reshaping both attacks and defences, cyber risk is no longer something to eliminate, it must be managed with precision. This shift demands that cyber leaders evolve from engineers and managers to architects of trust, with a seat and a voice at the top table where strategic decisions are made and budgets are shaped. Cyber threats are escalating, with major breaches reported almost every week, and it's clear that defences are only as strong as their weakest point. Yet investment is not always going where it matters most, with gaps in staff training and talent retention remaining areas of concern." Carol Murphy, Consulting Partner and Head of Markets at EY Ireland said: "Irish organisations are strengthening their cyber resilience, with most reporting enhanced defences and stable or increased budgets. The challenge now is to direct that investment towards people and partnerships, ensuring teams are trained, supported and equipped to manage the growing demands of compliance and third-party risk. Organisations must prioritise the continuous training and wellbeing of their cyber teams, recognising that resilience depends as much on people as it does on technology." Burnout Risk As Cyber Threats Remain A Top Concern Burnout and fatigue amongst cyber leaders have been identified as growing resilience risks for Irish organisations, with 37% of those surveyed reporting concern about the gaps in their organisation's cyber risk coverage. 
More than one in four (26%) of respondents reported negative impacts on their mental health. Puneet Kukreja said: "Our research shows that stress is fast becoming a hidden cyber risk for organisations. Cyber risk is constant, and that unrelenting pressure is taking a toll on the people who defend against it. Burnout does...

Outgrow's Marketer of the Month
Snippet: AI Caramba! CEO Matthew Blakemore Warns That Strict EU AI Rules May Push Innovation to Looser Markets Before Tools Enter the EU.

Outgrow's Marketer of the Month

Play Episode Listen Later Nov 5, 2025 0:57


Microsoft Business Applications Podcast
Copilot Success Starts with Clean Data

Microsoft Business Applications Podcast

Play Episode Listen Later Nov 2, 2025 33:44 Transcription Available


ILTA
#0134: (JIT ) ILTA Just-In-Time: What You Need to Know About New Regulations Governing AI in HR

ILTA

Play Episode Listen Later Oct 28, 2025 26:41


In this podcast, discover how best to navigate California's new employment AI regulations that recently went into effect on October 1st. The speaker highlighted how the use of Automated Decision Systems, which include AI, in making employment decisions can directly violate California law if these tools are found to discriminate against employees or applicants, either directly or indirectly, on the basis of protected characteristics such as race, age, and gender. In addition, they highlighted other recent AI regulations taking shape around the world, such as the EU AI Act. Moderator: Adam Wehler, Director of eDiscovery and Litigation Technology, Smith Anderson. Speaker: Kassi Burns, Senior Attorney, Trial and Global Disputes, King & Spalding

Outgrow's Marketer of the Month
Snippet: Matthew Blakemore, CEO at AI Caramba!, highlights a pressing challenge with the EU AI Act.

Outgrow's Marketer of the Month

Play Episode Listen Later Oct 24, 2025 0:49


⚖️ The EU AI Act's Biggest Hurdle: Regulating AI That's Already in Use

Matthew Blakemore, CEO at AI Caramba!, highlights a pressing challenge with the EU AI Act. While the framework does a strong job of classifying AI projects into risk categories, it faces a dilemma with tools that are already in widespread public use. Many existing systems, some of which likely fall into high-risk categories, have already been trained and adopted by millions. The question becomes: should they be withdrawn, despite their popularity, or adapted under new rules?

Listen to the full podcast now: https://bit.ly/40GZ9bw

#AI #ArtificialIntelligence #AIRegulation #TechPolicy #AICompliance #AIInnovation #AITransformation

The Road to Accountable AI
Caroline Louveaux: Trust is Mission Critical

The Road to Accountable AI

Play Episode Listen Later Oct 23, 2025 33:13


Kevin Werbach speaks with Caroline Louveaux, Chief Privacy, AI, and Data Responsibility Officer at Mastercard, about what it means to make trust mission critical in the age of artificial intelligence. Caroline shares how Mastercard built its AI governance program long before the current AI boom, grounding it in the company's Data and Technology Responsibility Principles. She explains how privacy-by-design practices evolved into a single global AI governance framework aligned with the EU AI Act, the NIST AI Risk Management Framework, and other standards. The conversation explores how Mastercard balances innovation speed with risk management, automates low-risk assessments, and maintains executive oversight through its AI Governance Council. Caroline also discusses the company's work on agentic commerce, where autonomous AI agents can initiate payments, and why trust, certification, and transparency are essential for such systems to succeed. Caroline unpacks what it takes for a global organization to innovate responsibly, from cross-functional governance and "tone from the top" to partnerships like the Data & Trust Alliance and efforts to harmonize global standards. Caroline emphasizes that responsible AI is a shared responsibility and that companies that can "innovate fast, at scale, but also do so responsibly" will be the ones that thrive. Caroline Louveaux leads Mastercard's global privacy and data responsibility strategy. She has been instrumental in building Mastercard's AI governance framework and shaping global policy discussions on data and technology. She serves on the board of the International Association of Privacy Professionals (IAPP), the WEF Task Force on Data Intermediaries, the ENISA Working Group on AI Cybersecurity, and the IEEE AI Systems Risk and Impact Executive Committee, among other activities.
Transcript
How Mastercard Uses AI Strategically: A Case Study (Forbes, 2024)
Lessons From a Pioneer: Mastercard's Experience of AI Governance (IMD, 2023)
As AI Agents Gain Autonomy, Trust Becomes the New Currency. Mastercard Wants to Power Both. (Business Insider, July 2025)

The HR L&D Podcast
How AI is Making HR More Human with Daniel Strode

The HR L&D Podcast

Play Episode Listen Later Oct 21, 2025 42:25


This episode is sponsored by Deel. Ensure fair, consistent reviews with Deel's calibration template. Deel's free Performance Calibration Template helps HR teams and managers run more equitable, structured reviews. Use it to align evaluations with business goals, reduce bias in ratings, and ensure every performance conversation is fair, consistent, and grounded in shared standards. Download now: www.deel.com/nickday

In this episode of the HR L&D Podcast, host Nick Day explores how HR can use AI to become more strategic and more human. The conversation covers where AI truly fits in HR, what changes with the EU AI Act, and how leaders can turn time saved on admin into culture, capability, and impact.

You will hear practical frameworks, including a simple "4Ps plus 2" model for HR AI, human-in-the-loop hiring, guardrails to reduce hallucinations, and a clear view on when AI must be 100 percent accurate. The discussion also outlines a modern HR operating model with always-on self-service, plus policy steps for ethical, explainable AI.

Whether you are an HR leader, CEO, or L&D professional, this conversation will help you move from pilots to scaled adoption and build an AI-ready organization. Expect actionable steps to improve employee experience, strengthen compliance, and unlock productivity and performance across your teams.

100X Book on Amazon: https://www.amazon.com/dp/B0D41BP5XT
Nick Day's LinkedIn: https://www.linkedin.com/in/nickday/
Find your ideal candidate with our job vacancy system: https://jgarecruitment.ck.page/919cf6b9ea
Sign up to the HR L&D Newsletter: https://jgarecruitment.ck.page/23e7b153e7

00:00 Intro & Preview
02:25 What HR Is For
03:54 Why HR + AI Now
06:19 AI as Augmentation
07:43 HR AI Framework & Use Cases
10:14 Guardrails: Hallucinations & Accuracy
12:45 Guardrails: Bias & Human in the Loop
16:58 Recruiting with AI
21:01 EU AI Act for HR
25:16 HR Team of the Future
25:56 New HR Operating Model
31:54 Tools for Culture Change
35:35 Rethink Processes

AI in Banking Podcast
The Role of AI in Risk Management and Compliance - with Miriam Fernandez and Sudeep Kesh at S&P Global Ratings

AI in Banking Podcast

Play Episode Listen Later Oct 20, 2025 28:49


As financial services accelerate their digital transformations, AI is reshaping how institutions identify, assess, and manage risk. But with that transformation comes an equally complex web of systemic risks, regulatory challenges, and questions about accountability. In this episode of the AI in Business podcast, host Matthew DeMello, Head of Content at Emerj, speaks with Miriam Fernandez, Director in the Analytical Innovation Team specializing in AI research at S&P Global Ratings, and Sudeep Kesh, Chief Innovation Officer at S&P Global Ratings. Together, they unpack how generative AI, agentic systems, and regulatory oversight are evolving within one of the most interconnected sectors of the global economy. The conversation explores how AI is amplifying both efficiency and exposure across financial ecosystems — from the promise of multimodal data integration in risk management to the growing challenge of concentration and contagion risks in increasingly digital markets. Miriam and Sudeep discuss how regulators are responding through risk-based frameworks such as the EU AI Act and DORA, and how the private sector is taking a larger role in ensuring transparency, compliance, and trust. Want to share your AI adoption story with executive peers? Click emerj.com/e2 for more information and to be a potential future guest on Emerj's flagship ‘AI in Business' podcast! If you've enjoyed or benefited from some of the insights of this episode, consider leaving us a five-star review on Apple Podcasts, and let us know what you learned, found helpful, or liked most about this show!

The FIT4PRIVACY Podcast - For those who care about privacy
Why does EU AI Act matter in your business?

The FIT4PRIVACY Podcast - For those who care about privacy

Play Episode Listen Later Oct 16, 2025 1:59


In this episode of the Fit4Privacy Podcast, host Punit Bhatia explores the EU AI Act: why it matters, what it requires, and how it impacts your business, even outside the EU. You will also hear about the Act's risk-based approach, the four categories of AI systems (unacceptable, high, limited, and minimal risk), and the penalties for non-compliance, which can be as high as 7% of global turnover or €35 million.

Just like the GDPR, the EU AI Act has global reach, so if your company offers AI-based products or services to EU citizens, it applies to you. Listen in to understand the requirements and discover how to turn AI compliance into an opportunity for building trust, demonstrating responsibility, and staying ahead of the competition.

KEY CONVERSATION
00:00:00 Introduction to the EU AI Act
00:01:22 Why the EU AI Act Matters to Your Business
00:03:40 Risk Categories Under the EU AI Act
00:04:52 Key Timelines and Provisions
00:06:07 Compliance Requirements
00:07:09 Leveraging the EU AI Act for Competitive Advantage
00:08:38 Conclusion and Contact Information

ABOUT HOST
Punit Bhatia is one of the leading privacy experts, who works independently and has worked with professionals in over 30 countries. Punit works with business and privacy leaders to create an organizational culture with high privacy awareness and compliance as a business priority. Selectively, Punit is open to mentoring and coaching professionals.

Punit is the author of the books "Be Ready for GDPR" (rated the best GDPR book), "AI & Privacy – How to Find Balance", "Intro To GDPR", and "Be an Effective DPO". Punit is a global speaker who has spoken at over 30 global events. Punit is the creator and host of the FIT4PRIVACY Podcast, which has been featured amongst the top GDPR and privacy podcasts.

As a person, Punit is an avid thinker and believes in thinking, believing, and acting in line with one's values to have joy in life. He has developed a philosophy named "ABC for joy of life", which he passionately shares. Punit is based out of Belgium, the heart of Europe.

RESOURCES
Websites: www.fit4privacy.com, www.punitbhatia.com
Podcast: https://www.fit4privacy.com/podcast
Blog: https://www.fit4privacy.com/blog
YouTube: http://youtube.com/fit4privacy
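The risk-based approach described in this episode can be sketched as a small lookup table. This is an illustrative simplification, not legal guidance: the four tier names and the penalty ceiling (the higher of €35 million or 7% of global turnover) follow the episode's summary of the Act, while the example systems and obligations are paraphrased assumptions:

```python
# Minimal sketch of the EU AI Act's four-tier risk classification as
# summarized in the episode. The example systems and obligations in the
# strings below are simplified illustrations, not the Act's exact wording.

RISK_TIERS = {
    "unacceptable": "prohibited outright (e.g. social scoring)",
    "high":         "strict obligations: risk management, human oversight, logging",
    "limited":      "transparency duties (e.g. disclosing chatbots and deepfakes)",
    "minimal":      "no specific obligations beyond existing law",
}

def max_fine(global_turnover_eur: float) -> float:
    """Upper bound on fines for the most serious violations:
    the higher of EUR 35 million or 7% of worldwide annual turnover."""
    return max(35_000_000, 0.07 * global_turnover_eur)

print(RISK_TIERS["limited"])
print(f"Max fine at EUR 1B turnover: {max_fine(1e9):,.0f}")  # 70,000,000
```

The `max(...)` in `max_fine` captures why the Act's penalties bite for companies of any size: a small firm faces the €35 million floor, while a large firm's exposure scales with turnover.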

Social Justice & Activism · The Creative Process
Will AI Lead to a More Fair Society, Or Just Widen Inequities? - RISTO UUK Head of EU Policy & Research, FUTURE OF LIFE INSTITUTE

Social Justice & Activism · The Creative Process

Play Episode Listen Later Oct 14, 2025 62:34


“The Future of Life Institute has been working on AI governance-related issues for the last decade. We're already over 10 years old, and our mission is to steer very powerful technology away from large-scale harm and toward very beneficial outcomes. You could think about any kind of extreme risks from AI, all the way to existential or extinction risk, the worst kinds of risks and the benefits. You can think about any kind of large benefits that humans could achieve from technology, all the way through to utopia, right? Utopia is the biggest benefit you can get from technology. Historically, that has meant we have focused on climate change, for example, and the impact of climate change. We have also focused on bio-related risks, pandemics and nuclear security issues. If things go well, we will be able to avoid these really bad downsides in terms of existential risk, extinction risks, mass surveillance, and really disturbing futures. We can avoid that very harmful side of AI or technology, and we can achieve some of the benefits.”

Today, we take a closer look at the future of artificial intelligence and the policies that determine its place in our societies. Risto Uuk is Head of EU Policy and Research at the Future of Life Institute in Brussels, and a philosopher and researcher at KU Leuven, where he studies the systemic risks posed by AI. He has worked with the World Economic Forum, the European Commission, and leading thinkers like Stuart Russell and Daniel Susskind. He also runs one of the most widely read newsletters on the EU AI Act. As this technology is transforming economies, politics, and human life itself, we'll talk about the promises and dangers of AI, how Europe is trying to regulate it, and what it means to build safeguards for a technology that may be more powerful than anything we've seen before.

Episode Website: www.creativeprocess.info/pod
Instagram: @creativeprocesspodcast

Tech, Innovation & Society - The Creative Process
AI & The Future of Life with RISTO UUK, Head of EU Policy & Research, FUTURE OF LIFE INSTITUTE

Tech, Innovation & Society - The Creative Process

Play Episode Listen Later Oct 14, 2025 62:34


“The Future of Life Institute has been working on AI governance-related issues for the last decade. We're already over 10 years old, and our mission is to steer very powerful technology away from large-scale harm and toward very beneficial outcomes. You could think about any kind of extreme risks from AI, all the way to existential or extinction risk, the worst kinds of risks and the benefits. You can think about any kind of large benefits that humans could achieve from technology, all the way through to utopia, right? Utopia is the biggest benefit you can get from technology. Historically, that has meant we have focused on climate change, for example, and the impact of climate change. We have also focused on bio-related risks, pandemics and nuclear security issues. If things go well, we will be able to avoid these really bad downsides in terms of existential risk, extinction risks, mass surveillance, and really disturbing futures. We can avoid that very harmful side of AI or technology, and we can achieve some of the benefits.”

Today, we take a closer look at the future of artificial intelligence and the policies that determine its place in our societies. Risto Uuk is Head of EU Policy and Research at the Future of Life Institute in Brussels, and a philosopher and researcher at KU Leuven, where he studies the systemic risks posed by AI. He has worked with the World Economic Forum, the European Commission, and leading thinkers like Stuart Russell and Daniel Susskind. He also runs one of the most widely read newsletters on the EU AI Act. As this technology is transforming economies, politics, and human life itself, we'll talk about the promises and dangers of AI, how Europe is trying to regulate it, and what it means to build safeguards for a technology that may be more powerful than anything we've seen before.

Episode Website: www.creativeprocess.info/pod
Instagram: @creativeprocesspodcast

Education · The Creative Process
AI & The Future of Life with RISTO UUK, Head of EU Policy & Research, FUTURE OF LIFE INSTITUTE

Education · The Creative Process

Play Episode Listen Later Oct 2, 2025 62:34


“The Future of Life Institute has been working on AI governance-related issues for the last decade. We're already over 10 years old, and our mission is to steer very powerful technology away from large-scale harm and toward very beneficial outcomes. You could think about any kind of extreme risks from AI, all the way to existential or extinction risk, the worst kinds of risks and the benefits. You can think about any kind of large benefits that humans could achieve from technology, all the way through to utopia, right? Utopia is the biggest benefit you can get from technology. Historically, that has meant we have focused on climate change, for example, and the impact of climate change. We have also focused on bio-related risks, pandemics and nuclear security issues. If things go well, we will be able to avoid these really bad downsides in terms of existential risk, extinction risks, mass surveillance, and really disturbing futures. We can avoid that very harmful side of AI or technology, and we can achieve some of the benefits.”

Today, we take a closer look at the future of artificial intelligence and the policies that determine its place in our societies. Risto Uuk is Head of EU Policy and Research at the Future of Life Institute in Brussels, and a philosopher and researcher at KU Leuven, where he studies the systemic risks posed by AI. He has worked with the World Economic Forum, the European Commission, and leading thinkers like Stuart Russell and Daniel Susskind. He also runs one of the most widely read newsletters on the EU AI Act. As this technology is transforming economies, politics, and human life itself, we'll talk about the promises and dangers of AI, how Europe is trying to regulate it, and what it means to build safeguards for a technology that may be more powerful than anything we've seen before.

Episode Website: www.creativeprocess.info/pod
Instagram: @creativeprocesspodcast

Business of Tech
Navigating AI Governance: Trust, Accountability, and the Future of Responsible Tech

Business of Tech

Play Episode Listen Later Sep 27, 2025 45:54


Art Kleiner, co-author of "The AI Dilemma" and Principal at Kleiner Powell International, discusses the complexities of AI governance, trust, and accountability in the context of modern technology. He emphasizes the importance of being intentional about risk when deploying AI products, particularly large language models, which can inadvertently perpetuate biases and misinformation. Kleiner shares a compelling example of a Chinese AI system that failed to generate accurate images based on user requests, illustrating the inherent biases present in AI systems. He stresses the need for organizations to be aware of the human effects and unintended consequences of AI deployment.For managed service providers (MSPs) and IT leaders, Kleiner highlights the significance of compliance and oversight in the development process of AI systems. He references the EU AI Act, which mandates a "human in the loop" approach to ensure accountability and effectiveness in AI applications. This requirement encourages organizations to conduct thorough testing and evaluation of AI systems in real-world contexts, ensuring that they meet the needs of users and mitigate potential risks. Kleiner notes that small businesses, in particular, must be vigilant about the implications of AI on their operations and customer interactions.The conversation also delves into the challenges of achieving measurable ROI from AI projects, with studies indicating that a significant percentage of these initiatives fail to deliver tangible business value. Kleiner advocates for scenario planning as a tool to navigate the uncertainties of AI implementation, encouraging organizations to explore various future scenarios and their potential impacts. 
By understanding the different ways AI can affect productivity, business growth, and risk management, companies can better position themselves to leverage AI effectively.

Finally, Kleiner urges leaders to prepare for multiple AI futures by staying informed about emerging technologies and their implications for their businesses. He emphasizes the need for organizations to build trust with their customers by using AI responsibly and transparently. By focusing on creating value and avoiding the pitfalls of "enshittification," businesses can foster stronger relationships with their clients and enhance their overall service offerings. The discussion underscores the critical role of human insight and ethical considerations in the evolving landscape of AI technology.

All our Sponsors: https://businessof.tech/sponsors/
Do you want the show on your podcast app or the written versions of the stories? Subscribe to the Business of Tech: https://www.businessof.tech/subscribe/
Looking for a link from the stories? The entire script of the show, with links to articles, is posted in each story on https://www.businessof.tech/
Support the show on Patreon: https://patreon.com/mspradio/
Want to be a guest on Business of Tech: Daily 10-Minute IT Services Insights? Send Dave Sobel a message on PodMatch: https://www.podmatch.com/hostdetailpreview/businessoftech
Want our stuff? Cool Merch? Wear "Why Do We Care?" - Visit https://mspradio.myspreadshop.com
Follow us on:
LinkedIn: https://www.linkedin.com/company/28908079/
YouTube: https://youtube.com/mspradio/
Facebook: https://www.facebook.com/mspradionews/
Instagram: https://www.instagram.com/mspradio/
TikTok: https://www.tiktok.com/@businessoftech
Bluesky: https://bsky.app/profile/businessof.tech

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.