Podcasts about the EU AI Act

  • 406 PODCASTS
  • 709 EPISODES
  • 35m AVG DURATION
  • 5 WEEKLY NEW EPISODES
  • Jan 30, 2026 LATEST

POPULARITY (trend chart, 2019–2026)


Best podcasts about the EU AI Act

Show all podcasts related to the EU AI Act

Latest podcast episodes about the EU AI Act

Follow The Brand Podcast
You Can't Govern What You Can't See: The Shadow AI Crisis with Daniel Ikem

Jan 30, 2026 · 43:03 · Transcription Available


Responsibility breaks where AI moves fastest, and that's exactly where we go today. Grant sits down with Daniel Ikem—strategic operator at the intersection of emerging technology, intellectual property, and public policy—to unpack how shadow AI, data limits, and legal gray zones collide inside modern organizations. From boardrooms pushing Copilot to teams quietly pasting prompts into other models, we trace how governance cracks form and why documentation, auditability, and accountability must evolve as quickly as the tools.

Daniel shares firsthand insights from big-tech partnerships and from founding the Diverse IP Alliance, where he's helping HBCU and underrepresented students build fluency in AI and IP. We examine the core challenges leaders face: capturing tacit knowledge that models can't see, preventing biased historical data from influencing outcomes, and defining ownership of outputs when proprietary data mixes with external systems. We also tackle the jagged frontier of agentic AI—who's liable when autonomy kicks in—and the geopolitical reality that makes "slow down" easier to say than to implement.

You'll walk away with pragmatic steps to act now: set clear policies on approved models and data access, capture critical processes that were never written down, design human-in-the-loop review for high-impact decisions, and build a living risk register that survives model updates (one illustrative structure follows this entry). We compare U.S. uncertainty with GDPR and the EU AI Act to show where global benchmarks can guide you before domestic rules arrive. Above all, we make the case that governance is not just compliance—it's strategy, trust, and long-term resilience.

If you care about AI governance, IP risk, bias, and building a talent pipeline that reflects the communities your systems will serve, this one's for you. Subscribe, share with a colleague who's wrestling with AI policy, and leave a review with your top governance question so we can tackle it next.

Thanks for tuning in to this episode of Follow The Brand! We hope you enjoyed learning about the latest trends and strategies in Personal Branding, Business and Career Development, Financial Empowerment, Technology Innovation, and Executive Presence. To keep up with the latest insights and updates, visit 5starbdm.com. And don't miss Grant McGaugh's new book, First Light—a powerful guide to igniting your purpose and building a BRAVE brand that stands out in a changing world: https://5starbdm.com/brave-masterclass/ See you next time on Follow The Brand!
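To make the "living risk register" advice concrete, here is one illustrative way to structure it in Python, with each entry tied to a model version so a model update forces an explicit re-review. All field names and sample values are assumptions for illustration, not anything prescribed in the episode.

```python
# Illustrative "living risk register": entries are pinned to a model
# version, so updating the model surfaces stale reviews automatically.
from dataclasses import dataclass
from datetime import date

@dataclass
class RiskEntry:
    risk: str            # e.g. "biased outputs in hiring screens"
    model: str           # approved model this entry applies to
    model_version: str   # review is re-triggered when this changes
    mitigation: str      # e.g. "human-in-the-loop review"
    owner: str
    last_reviewed: date
    status: str = "open"

register = [
    RiskEntry(
        risk="prompts pasted into unapproved external models",
        model="internal-copilot", model_version="3.1",
        mitigation="approved-model policy + egress monitoring",
        owner="AI governance lead", last_reviewed=date(2026, 1, 30),
    )
]

def entries_needing_review(register, deployed_versions: dict):
    # Flag entries whose model has moved past the reviewed version.
    return [e for e in register
            if deployed_versions.get(e.model) != e.model_version]

print(entries_needing_review(register, {"internal-copilot": "3.2"}))
```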

Microsoft Business Applications Podcast
AI Adoption That Actually Works

Jan 28, 2026 · 22:45 · Transcription Available


Get featured on the show by leaving us a Voice Mail: https://bit.ly/MIPVM

This episode with Michael Plettner explores how organisations move from AI curiosity to practical, business-focused implementation. You will learn why user adoption and leadership support matter, how teams shift from generic Copilot use to targeted process improvement, and how the EU AI Act is pushing companies toward more mature and responsible AI practices.

Ropes & Gray Podcasts
The Data Day: World Data Protection Day & Regulatory Insights for 2026

Jan 28, 2026 · 17:30


On this special edition of The Data Day podcast, Ropes & Gray partner Rohan Massey—leader of the firm's data, privacy & cybersecurity practice and managing partner of the London office—is joined by counsel Edward Machin and associates Catherine Keeling and Suzie Wilson to celebrate the 19th World Data Protection Day and explore the evolving landscape of data, privacy, and cybersecurity regulation across the UK and EU in 2026. The discussion covers headline-making cybersecurity breaches, new compliance obligations under DORA and the Cyber Resilience Act, and the anticipated UK cyber bill. The panel also examines the regulatory outlook for AI, including key dates for the EU AI Act and the potential direction of the UK's AI Bill. Rounding out the conversation, the team highlights upcoming changes in digital regulation, such as the UK's Data (Use and Access) Act, the EU Data Act, and the Digital Omnibus package.

Telecom Reseller
SecurePII: Turning AI Compliance into a Revenue Opportunity, Podcast

Jan 27, 2026


Recorded live at Cloud Connections, the Cloud Communications Alliance event in Delray Beach, Doug Green, Publisher of Technology Reseller News, spoke with Bill Placke, Co-Founder & President, Americas at SecurePII, about one of the most pressing challenges facing AI-driven communications today: how to scale AI while complying with global data privacy regulations—and how that challenge can become a competitive advantage.

Placke explains that SecurePII was formed to address a growing structural problem in AI adoption. While organizations are eager to deploy AI and train large language models, regulatory uncertainty around personally identifiable information (PII) has stalled progress. Citing industry research showing that more than 60 percent of AI initiatives have been paused due to data privacy concerns, Placke argues that governance policies alone are not enough. Instead, SecurePII takes an architectural approach.

At the core of SecurePII's solution is data minimization at the point of ingestion. The company's technology prevents sensitive information—such as credit card numbers, names, addresses, or social security numbers—from ever entering enterprise systems. SecurePII's existing PCI-focused offering already removes cardholder data from call flows, keeping organizations out of PCI scope entirely. The same approach is now being extended to broader categories of PII, enabling AI systems to operate and train on clean data streams that are free from regulated information (a generic sketch of ingestion-time redaction follows this entry).

Placke emphasizes that this upstream architectural design fundamentally changes the compliance equation. Regulators and plaintiff attorneys, he notes, care about outcomes—not intent. If sensitive data never enters the system, compliance scope, audit costs, breach exposure, and regulatory risk are dramatically reduced. "Downstream controls don't scale with AI—architecture does," Placke says, positioning data minimization as a foundation for both trust and growth.

The discussion also highlights the role of consent and customer trust in an AI-enabled world. Rather than asking customers to consent to broad data use, SecurePII enables enterprises to clearly state that sensitive information is neither seen nor stored, while still allowing AI to learn from outcomes and sentiment. This approach removes what Placke calls the "creepy factor" associated with AI and personal data, while aligning with emerging frameworks such as the EU AI Act and long-standing NIST guidance.

For MSPs, UCaaS providers, and channel partners, Placke frames compliance not as a cost center but as a revenue opportunity. By embedding privacy-preserving architectures into voice, AI, and communications solutions, service providers can differentiate themselves as trusted advisors—helping customers deploy AI safely, reduce regulatory exposure, and accelerate adoption.

To learn more about SecurePII and its privacy-first AI architecture, visit https://www.securepii.cloud/.
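SecurePII's implementation is proprietary, but the general ingestion-time redaction pattern Placke describes can be sketched in a few lines: scrub regulated patterns before text ever reaches downstream storage or a model. The patterns and the `scrub` helper below are illustrative assumptions, not SecurePII's API.

```python
import re

# Illustrative ingestion-time scrubber: redact common PII patterns before
# data enters downstream systems (storage, analytics, LLM pipelines).
PII_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scrub(text: str) -> str:
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

print(scrub("Reach me at j.doe@example.com, card 4111 1111 1111 1111"))
# -> Reach me at [EMAIL REDACTED], card [CARD REDACTED]
```

In production this regex layer would be only a first pass; names and addresses typically need NER-based detection rather than fixed patterns.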

MLOps.community
Cracking the Black Box: Real-Time Neuron Monitoring & Causality Traces

Jan 27, 2026 · 47:25


Mike Oaten is the Founder and CEO of TIKOS, working on building AI assurance, explainability, and trustworthy AI infrastructure, helping organizations test, monitor, and govern AI models and systems to make them transparent, fair, robust, and compliant with emerging regulations.

Cracking the Black Box: Real-Time Neuron Monitoring & Causality Traces // MLOps Podcast #358 with Mike Oaten, Founder and CEO of TIKOS

Join the Community: https://go.mlops.community/YTJoinIn
Get the newsletter: https://go.mlops.community/YTNewsletter

// Abstract
As AI models move into high-stakes environments like Defence and Financial Services, standard input/output testing, evals, and monitoring are becoming dangerously insufficient. To demonstrate compliance with the EU AI Act, NIST AI RMF, and other requirements, MLOps teams need to access and analyse the internal reasoning of their models.

In this session, Mike introduces the company's patent-pending AI assurance technology that moves beyond statistical proxies. He breaks down the architecture of the Synapses Logger, which embeds directly into the neural activation flow to capture weights, activations, and activation paths in real time (a generic sketch of hook-based activation capture follows this entry).

// Bio
Mike Oaten serves as the CEO of TIKOS, leading the company's mission to progress trustworthy AI through unique, high-performance AI model assurance technology. A seasoned technical and data entrepreneur, Mike brings experience from successfully co-founding and exiting two previous data science startups: Riskopy Inc. (acquired by Nasdaq-listed Coupa Software in 2017) and Regulation Technologies Limited (acquired by mnAi Data Solutions in 2022). Mike's expertise spans data, analytics, and ML product and governance leadership. At TIKOS, Mike leads a VC-backed team developing technology to test and monitor deep-learning models in high-stakes environments, such as defence and financial services, so they comply with stringent new laws and regulations.

// Related Links
Website: https://tikos.tech/
LLM guardrails: https://medium.com/tikos-tech/your-llm-output-is-confidently-wrong-heres-how-to-fix-it-08194fdf92b9
Model bias: https://medium.com/tikos-tech/from-hints-to-hard-evidence-finally-how-to-find-and-fix-model-bias-in-dnns-2553b072fd83
Model robustness: https://medium.com/tikos-tech/tikos-spots-neural-network-weaknesses-before-they-fail-the-iris-dataset-b079265c04da
GPU optimisation: https://medium.com/tikos-tech/400x-performance-a-lightweight-open-source-python-cuda-utility-to-break-vram-barriers-d545e5b6492f
Hyperbolic GPU Cloud: app.hyperbolic.ai
Coding Agents Conference: https://luma.com/codingagents

~~~~~~~~ ✌️ Connect With Us ✌️ ~~~~~~~
Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
Join our Slack community: https://go.mlops.community/slack
Follow us on X/Twitter @mlopscommunity (https://x.com/mlopscommunity) or LinkedIn (https://go.mlops.community/linkedin)
Sign up for the next meetup: https://go.mlops.community/register
MLOps swag/merch: https://shop.mlops.community/
Connect with Demetrios on LinkedIn: /dpbrinkm
Connect with Mike on LinkedIn: /mike-oaten/

Timestamps:
[00:00] Regulations as Opportunity
[00:25] Regulation Compliance Fun
[02:49] AI Act Layers Explained
[05:19] Observability in Systems vs ML
[09:05] Risk Transfer in AI
[11:26] LLMs and Model Approval
[14:53] LLMs in Finance
[17:17] Hyperbolic GPU Cloud Ad
[18:16] Stakeholder Alignment and Tech
[22:20] AI in Regulated Environments
[28:55] Autonomous Boat Regulations
[34:20] Data Compliance Mapping
[39:11] Data Capture Strategy
[41:13] EU AI Act Insights
[44:52] Wrap up
[45:45] Join the Coding Agents Conference!
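The Synapses Logger itself is patent-pending and not public, but capturing per-layer activations in real time is a standard technique in PyTorch via forward hooks. A minimal sketch of that generic approach (the toy model and layer choice are placeholders, not TIKOS's architecture):

```python
import torch
import torch.nn as nn

activations: dict[str, torch.Tensor] = {}

def make_hook(name: str):
    def hook(module, inputs, output):
        # Detach so logging never interferes with gradients or training.
        activations[name] = output.detach().cpu()
    return hook

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
for name, module in model.named_modules():
    if isinstance(module, nn.Linear):
        module.register_forward_hook(make_hook(name))

_ = model(torch.randn(1, 4))
for name, tensor in activations.items():
    print(name, tensor.shape)  # per-layer activations captured at inference
```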

Microsoft Business Applications Podcast
Governance That Accelerates Innovation

Jan 25, 2026 · 32:12 · Transcription Available


Get featured on the show by leaving us a Voice Mail: https://bit.ly/MIPVM

John Rood shares how organisations can unlock real value from AI by balancing innovation, governance, and compliance. Learn why robust frameworks, practical training, and a bottom-up approach are key to sustainable AI adoption and risk management.

The Tech Blog Writer Podcast
3562: Veeva Systems on AI and the Future of Clinical Trials

Jan 22, 2026 · 28:22


What happens when decades of clinical research experience collide with a regulatory environment that is changing faster than ever? In this episode of Tech Talks Daily, I sat down with Dr Werner Engelbrecht, Senior Director of Strategy at Veeva Systems, for a wide-ranging conversation that explores how life sciences organizations across Europe are responding to mounting regulatory pressure, rapid advances in AI, and growing expectations around transparency and patient trust.

Werner brings a rare perspective to this discussion. His career spans clinical research, pharmaceutical development, health authorities, and technology strategy, shaped by firsthand experience as an investigator and later as a senior industry leader. That background gives him a grounded, practical view of what is actually changing inside pharma and biotech organizations, beyond the headlines around AI Acts, data rules, and compliance frameworks.

We talk openly about why regulations such as GDPR, the EU AI Act, and ACT-EU are creating real pressure for organizations that are already operating in highly controlled environments. But rather than framing compliance as a blocker, Werner explains why this moment presents an opening for better collaboration, stronger data foundations, and more consistent ways of working across internal teams. According to him, the real challenge is less about technology and more about how companies manage data quality, align processes, and break down silos that slow everything from trial setup to regulatory response times.

Our conversation also digs into where AI is genuinely making progress today in life sciences and where caution still matters. Werner shares why drug discovery and non-patient-facing use cases are moving faster, while areas like trial execution and real-world patient data still demand stronger evidence, cleaner datasets, and clearer governance. His perspective cuts through hype and focuses on what is realistic in an industry where patient safety remains the defining responsibility.

We also explore patient recruitment, decentralized trials, and the growing complexity of diseases themselves. Advances in genomics and diagnostics are reshaping how trials are designed, which in turn raises questions about access to electronic health records, data harmonization across Europe, and the safeguards regulators care about most. Werner connects these dots in a way that highlights both the operational strain and the long-term upside.

Toward the end, we look ahead at emerging technologies such as blockchain and connected devices, and how they could strengthen data integrity, monitoring, and regulatory confidence over time. It is a thoughtful discussion that reflects both optimism and realism, rooted in lived experience rather than theory. If you are working anywhere near clinical research, regulatory affairs, or digital transformation in life sciences, this episode offers a clear-eyed view of where the industry stands today and where it may be heading next. How should organizations turn regulation into momentum instead of resistance, and what will it take to earn lasting trust from patients, partners, and regulators alike?

Useful Links:
Connect with Dr Werner Engelbrecht
Learn more about Veeva Systems
Veeva Summit Europe and Veeva Summit USA
Follow on LinkedIn

Thanks to our sponsors, Alcor, for supporting the show.

The Gradient Podcast
2025 in AI, with Nathan Benaich

Jan 22, 2026 · 61:15


Episode 144

Happy New Year! This is one of my favorite episodes of the year — for the fourth time, Nathan Benaich and I did our yearly roundup of AI news and advancements, including selections from this year's State of AI Report. If you've stuck around and continue to listen, I'm really thankful you're here. I love hearing from you.

You can find Nathan and Air Street Press here on Substack and on Twitter, LinkedIn, and his personal site. Check out his writing at press.airstreet.com. Find me on Twitter (or LinkedIn if you want…) for updates on new episodes, and reach me at editor@thegradient.pub for feedback, ideas, guest suggestions.

Outline
* (00:00) Intro
* (00:44) Air Street Capital and Nathan world
  * Nathan's path from cancer research and bioinformatics to AI investing
  * The "evergreen thesis" of AI from niche to ubiquitous
  * Portfolio highlights: Eleven Labs, Synthesia, Crusoe
* (03:44) Geographic flexibility: Europe vs. the US
  * Why SF isn't always the best place for original decisions
  * Industry diversity in New York vs. San Francisco
  * The Munich Security Conference and Europe's defense pivot
  * Playing macro games from a European vantage point
* (07:55) VC investment styles and the "solo GP" approach
  * Taste as the determinant of investments
  * SF as a momentum game with small information asymmetry
  * Portfolio diversity: defense (Delian), embodied AI (Syriact), protein engineering
  * Finding entrepreneurs who "can't do anything else"
* (10:44) State of AI progress in 2025
  * Momentous progress in writing, research, computer use, image, and video
  * We're in the "instruction manual" phase
  * The scale of investment: private markets, public markets, and nation states
* (13:21) Range of outcomes and what "going bad" looks like
  * Today's systems are genuinely useful—worst case is a valuation problem
  * Financialization of AI buildouts and GPUs
* (14:55) DeepSeek and China closing the capability gap
  * Seven-month lag analysis (Epoch AI)
  * Benchmark skepticism and consumer preferences ("Coca-Cola vs. Pepsi")
  * Hedonic adaptation: humans reset expectations extremely quickly
  * Bifurcation of model companies toward specific product bets
* (18:29) Export controls and the "evolutionary pressure" argument
  * Selective pressure breeds innovation
  * Chinese companies rushing to public markets (Minimax, ZAI)
* (21:30) Reasoning models and test-time compute
  * Chain of thought faithfulness questions
  * Monitorability tax: does observability reduce quality?
  * User confusion about when models should "think"
  * AI for science: literature agents, hypothesis generation
* (23:53) Chain of thought interpretability and safety
  * Anthropomorphization concerns
  * Alignment faking and self-preservation behaviors
  * Cybersecurity as a bigger risk than existential risk
  * Models as payloads injected into critical systems
* (27:26) Commercial traction and AI adoption data
  * Ramp data: 44% of US businesses paying for AI (up from 5% in early 2023)
  * Average contract values up to $530K from $39K
  * State of AI survey: 92% report productivity gains
  * The "slow takeoff" consensus and human inertia
  * Use cases: meeting notes, content generation, brainstorming, coding, financial analysis
* (32:53) The industrial era of AI
  * Stargate and XAI data centers
  * Energy infrastructure: gas turbines and grid investment
  * Labs need to own models, data, compute, and power
  * Poolside's approach to owning infrastructure
* (35:40) Venture capital in the age of massive GPU capex
  * The GP lives in the present, the entrepreneur in the future, the LP in the past
  * Generality vs. specialism narratives
  * "Two or 20": management fees vs. carried interest
  * Scaling funds to match entrepreneur ambitions
* (40:10) NVIDIA challengers and returns analysis
  * Chinese challengers: 6x return vs. 26x on NVIDIA
  * US challengers: 2x return vs. 12x on NVIDIA
  * Grok acquired for $20B; Samba Nova markdown to $1.6B
  * "The tide is lifting all boats"—demand exceeds supply
* (44:06) The hardware lottery and architecture convergence
  * Transformer dominance and custom ASICs making a comeback
  * NVIDIA still 90–95% of published AI research
* (45:49) AI regulation: Trump agenda and the EU AI Act
  * Domain-specific regulators vs. blanket AI policy
  * State-level experimentation creates stochasticity
  * EU AI Act: "born before GPT-4, takes effect in a world shaped by GPT-7"
  * Only three EU member states compliant by late 2025
* (50:14) Sovereign AI: what it really means
  * True sovereignty requires energy, compute, data, talent, chip design, and manufacturing
  * The US is sovereign; the UK by itself is not
  * Form alliances or become world-class at one level of the stack
  * ASML and the Netherlands as an example
* (52:33) Open weight safety and containment
  * Three paths: model-based safeguards, scaffolding/ecosystem, procedural/governance
  * "Pandora's box is open"—containment on distribution, not weights
  * Leak risk: the most vulnerable link is often human
  * Developer–policymaker communication and regulator upskilling
* (55:43) China's AI safety approach
  * Matt Sheehan's work on Chinese AI regulation
  * Safety summits and China's participation
  * New Chinese policies: minor modes, mental health intervention, data governance
  * UK's rebrand from "safety" to "security" institutes
* (58:34) Prior predictions and patterns
  * Hits on regulatory/political areas; misses on semiconductor consolidation, AI video games
* (59:43) 2026 Predictions
  * A Chinese lab overtaking US on frontier (likely ZAI or DeepSeek, on scientific reasoning)
  * Data center NIMBYism influencing midterm politics
* (01:01:01) Closing

Links and Resources

Nathan / Air Street Capital
* Air Street Capital
* State of AI Report 2025
* Air Street Press — essays, analysis, and the Guide to AI newsletter
* Nathan on Substack
* Nathan on Twitter/X
* Nathan on LinkedIn

From Air Street Press (mentioned in episode)
* Is the EU AI Act Actually Useful? — by Max Cutler and Nathan Benaich
* China Has No Place at the UK AI Safety Summit (2023) — by Alex Chalmers and Nathan Benaich

Research & Analysis
* Epoch AI: Chinese AI Models Lag US by 7 Months — the analysis referenced on the US-China capability gap
* Sara Hooker: The Hardware Lottery — the essay on how hardware determines which research ideas succeed
* Matt Sheehan: China's AI Regulations and How They Get Made — Carnegie Endowment

Companies Mentioned
* Eleven Labs — AI voice synthesis (Air Street portfolio)
* Synthesia — AI video generation (Air Street portfolio)
* Crusoe — clean compute infrastructure (Air Street portfolio)
* Poolside — AI for code (Air Street portfolio)
* DeepSeek — Chinese AI lab
* Minimax — Chinese AI company
* ASML — semiconductor equipment

Other Resources
* Search Engine Podcast: Data Centers (Part 1 & 2) — PJ Vogt's two-part series on XAI data centers and the AI financing boom
* RAAIS Foundation — Nathan's AI research and education charity

Get full access to The Gradient at thegradientpub.substack.com/subscribe

Alter Everything
200: What Does AI Look Like in 2026

Jan 21, 2026 · 68:43


We're back! Explore the future of artificial intelligence in the year 2026 with Patrick McGarry, Federal Chief Data Officer at ServiceNow, and Dr. Jupiter Bakakeu, Lead Generative AI Technologist at Alteryx. This milestone 200th episode examines the critical shift from AI as an answer machine to AI as an autonomous work agent capable of executing tasks independently. Learn about the four characteristics of AI agents (perceive, reflect, act, learn — a toy sketch of this loop follows this entry), discover which tasks organizations should and shouldn't delegate to AI, and understand why modernization, trust, and governance matter more than model selection.

Panelists:
* Patrick McGarry, Federal Chief Data Officer @ ServiceNow - LinkedIn
* Jupiter Bakakeu, Lead Generative AI Technologist @ Alteryx - LinkedIn
* Joshua Burkhow, Chief Evangelist @ Alteryx - @JoshuaB, LinkedIn

Show notes:
* ServiceNow
* Alteryx
* Data.world
* "Beyond the Algorithm" by Patrick McGarry (upcoming publication)

Interested in sharing your feedback with the Alter Everything team? Take our feedback survey here!

This episode was produced by Cecilia Murray, Mike Cusic, and Matt Rotundo. Special thanks to Andy Uttley for the theme music.
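The perceive-reflect-act-learn loop the panel describes can be shown as plain control flow. Every function body below is a placeholder assumption; the point is only the loop structure that distinguishes an agent from a one-shot answer machine.

```python
# Toy agent loop: perceive -> reflect -> act -> learn.
class ToyAgent:
    def __init__(self):
        self.memory = []

    def perceive(self, environment: dict) -> dict:
        return {"open_tickets": environment.get("open_tickets", [])}

    def reflect(self, observation: dict) -> str:
        # Decide whether there is a task worth acting on autonomously.
        return "triage" if observation["open_tickets"] else "idle"

    def act(self, decision: str, observation: dict) -> dict:
        if decision == "triage":
            return {"handled": observation["open_tickets"][:1]}
        return {"handled": []}

    def learn(self, outcome: dict) -> None:
        self.memory.append(outcome)  # outcomes feed future decisions

agent = ToyAgent()
obs = agent.perceive({"open_tickets": ["reset password"]})
outcome = agent.act(agent.reflect(obs), obs)
agent.learn(outcome)
print(outcome, len(agent.memory))
```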

Cloud Security Podcast
AI Vulnerability Management: Why You Can't Patch a Neural Network

Jan 13, 2026 · 41:20


Traditional vulnerability management is simple: find the flaw, patch it, and verify the fix. But what happens when the "asset" is a neural network that has learned something ethically wrong? In this episode, Sapna Paul (Senior Manager at Dayforce) explains why there are no "Patch Tuesdays" for AI models.

Sapna breaks down the three critical layers of AI vulnerability management: protecting production models, securing the data layer against poisoning, and monitoring model behavior for technically correct but ethically flawed outcomes. We discuss how to update your risk register to speak the language of business and the essential skills security professionals need to survive in an AI-first world.

The conversation also covers practical ways to use AI within your security team to combat alert fatigue, the importance of explainability tools like SHAP and LIME (a minimal SHAP example follows this entry), and how to align with frameworks like the NIST AI RMF and the EU AI Act.

Guest Socials: Sapna's LinkedIn
Podcast Twitter: @CloudSecPod

If you want to watch videos of this LIVE STREAMED episode and past episodes, check out our other Cloud Security social channels:
* Cloud Security Podcast - YouTube
* Cloud Security Newsletter

If you are interested in AI Security, you can check out our sister podcast - AI Security Podcast

Questions asked:
(00:00) Introduction
(02:00) Who is Sapna Paul?
(02:40) What is Vulnerability Management in the Age of AI?
(05:00) Defining the New Asset: Neural Networks & Models
(07:00) The 3 Layers of AI Vulnerability (Production, Data, Behavior)
(10:20) Updating the Risk Register for AI Business Risks
(13:30) Compliance vs. Innovation: Preventing AI from Going Rogue
(18:20) Using AI to Solve Vulnerability Alert Fatigue
(23:00) Skills Required for Future VM Professionals
(25:40) Measuring AI Adoption in Security Teams
(29:20) Key Frameworks: NIST AI RMF & EU AI Act
(31:30) Tools for AI Security: Counterfit, SHAP, and LIME
(33:30) Where to Start: Learning & Persona-Based Prompts
(38:30) Fun Questions: Painting, Mentoring, and Vegan Ramen
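For listeners who want to try the explainability tooling mentioned, here is a minimal SHAP example on a stand-in scikit-learn model. The dataset, model, and variable names are illustrative, not anything from the episode.

```python
# pip install shap scikit-learn
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

# TreeExplainer attributes each prediction to individual input features:
# the kind of per-decision evidence auditors and risk registers ask for.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:5])
print(shap_values[0])  # feature contributions for the first prediction
```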

Brave New World -- hosted by Vasant Dhar
Ep 102 - Alex 'Sandy' Pentland on Humanizing Technology

Jan 8, 2026 · 54:29


Professor Alex 'Sandy' Pentland, one of the most renowned computational scientists in the world, joins Vasant Dhar in Episode 102 of Brave New World to discuss the state and development of human-centric AI.

Useful Resources:
1. Alex 'Sandy' Pentland.
2. Stanford Research Institute.
3. MIT Media Lab.
4. Distributed Computing, Blockchain.
5. Nature Magazine, Nature Machine Intelligence.
6. The Hard Problem Of Consciousness.
7. Shared Wisdom: Cultural Evolution In The Age Of AI: Alex Pentland.
8. Brave New World Episode 101: Deepak Chopra On Consciousness and Reality.
9. Digital Dharma: How AI Can Elevate Spiritual Intelligence and Personal Well-Being - Deepak Chopra.
10. Awakening: The Path to Freedom and Enlightenment - Deepak Chopra.
11. Sharing The Wisdom Of Time: Pope Francis.
12. UN, Sustainable Development Goals.
13. Jonathan Haidt.
14. Brave New World Episode 08: Jonathan Haidt, How Social Media Threatens Society.
15. Daniel Kahneman, Behavioural Economics.
16. Brave New World Episode 21: Daniel Kahneman, How Noise Hampers Judgement.
17. Loyal Agents.
18. Loyal Agents Consumer Reports.
19. EU AI Act.
20. Duty Of Care.
21. Internet Engineering Task Force.
22. World Trade Organisation.

Check out Vasant Dhar's newsletter on Substack. The subscription is free! Order Vasant Dhar's new book, Thinking With Machines.

Financial Crime Weekly Podcast
Financial Crime Weekly Special Episode: 2026 Horizon Scan

Jan 1, 2026 · 12:47


This Financial Crime Weekly Special Episode looks ahead to 2026, a year defined by localisation and divergence in global financial crime regulation. From the EU's AMLA rollout and the US Corporate Transparency Act compliance cliff, to the UK's aggressive enforcement of the new “Failure to Prevent Fraud” offence, the episode explores how jurisdictions are reshaping rules to meet domestic priorities. With insights into sanctions reform, fraud liability shifts, capital markets changes, and the operational resilience demands of DORA, the UK's Critical Third Parties regime, and the EU AI Act, this horizon scan highlights the strategic risks and compliance imperatives that will shape the year ahead.

InfosecTrain
ISO/IEC 42001: The Global Blueprint for AI Governance

Jan 1, 2026 · 43:25


AI has the power to scale innovation at breakneck speed—but without a steering wheel, it can scale risk just as fast. Enter ISO/IEC 42001:2023, the world's first international standard for Artificial Intelligence Management Systems (AIMS). As organizations move from AI experimentation to full-scale production, this standard provides the essential framework for deploying AI that is not only powerful but also responsible, secure, and ethical.

In this episode, we simplify the complexities of AI governance. We explore how to manage unique AI risks like algorithmic bias, model drift, and opaque decision-making using the proven "Plan-Do-Check-Act" (PDCA) approach. Whether you are a business leader, a developer, or a compliance officer, learn how to turn high-level ethics into operational reality.

Paul's Security Weekly
AI-Era AppSec: Transparency, Trust, and Risk Beyond the Firewall - Felipe Zipitria, Steve Springett, Aruneesh Salhotra, Ken Huang - ASW #363

Dec 30, 2025 · 66:43


In an era dominated by AI-powered security tools and cloud-native architectures, are traditional Web Application Firewalls still relevant? Join us as we speak with Felipe Zipitria, co-leader of the OWASP Core Rule Set (CRS) project. Felipe has been at the forefront of open-source security, leading the development of one of the world's most widely deployed WAF rule sets, trusted by organizations globally to protect their web applications. Felipe explains why WAFs remain a critical layer in modern defense-in-depth strategies. We'll explore what makes OWASP CRS the go-to choice for security teams, dive into the project's current innovations, and discuss how traditional rule-based security is evolving to work alongside — not against — AI.
Segment Resources: github.com/coreruleset/coreruleset, coreruleset.org

The future of CycloneDX is defined by modularity, API-first design, and deeper contextual insight, enabling transparency that is not just comprehensive, but actionable. At its heart is the Transparency Exchange API, which delivers a normalized, format-agnostic model for sharing SBOMs, attestations, risks, and more across the software supply chain (a minimal sketch of a CycloneDX-style BOM follows this entry).

As genAI transforms every sector of modern business, the security community faces a question: how do we protect systems we can't fully see or understand? In this fireside chat, Aruneesh Salhotra, Project Lead for OWASP AIBOM and Co-Lead of OWASP AI Exchange, discusses two groundbreaking initiatives that are reshaping how organizations approach AI security and supply chain transparency. OWASP AI Exchange has emerged as the go-to single resource for AI security and privacy, providing over 200 pages of practical advice on protecting AI and data-centric systems from threats. Through its official liaison partnership with CEN/CENELEC, the project has contributed 70 pages to ISO/IEC 27090 and 40 pages to the EU AI Act security standard, and achieved OWASP Flagship project status in March 2025. Meanwhile, the OWASP AIBOM Project is establishing a comprehensive framework to provide transparency into how AI models are built, trained, and deployed, extending OWASP's mission of making security visible to the rapidly evolving AI ecosystem. This conversation explores how these complementary initiatives are addressing real-world challenges—from prompt injection and data poisoning to model provenance and supply chain risks—while actively shaping international standards and regulatory frameworks. We'll discuss concrete achievements, lessons learned from global collaboration, and the ambitious roadmap ahead as these projects continue to mature and expand their impact across the AI security landscape.
Segment Resources:
* https://owasp.org/www-project-aibom/
* https://www.linkedin.com/posts/aruneeshsalhotra_owasp-ai-aisecurity-activity-7364649799800766465-DJGM/
* https://www.youtube.com/@OWASPAIBOM
* https://www.youtube.com/@RobvanderVeer-ex3gj
* https://owaspai.org/

Agentic AI introduces unique and complex security challenges that render traditional risk management frameworks insufficient. In this keynote, Ken Huang, CEO of Distributedapps.ai and a key contributor to AI security standards, outlines a new approach to manage these emerging threats. The session will present a practical strategy that integrates the NIST AI Risk Management Framework with specialized tools to address the full lifecycle of Agentic AI.
Segment Resources:
* aivss.owasp.org
* https://kenhuangus.substack.com/p/owasp-aivss-the-new-framework-for
* https://cloudsecurityalliance.org/blog/2025/02/06/agentic-ai-threat-modeling-framework-maestro

This interview is sponsored by the OWASP GenAI Security Project. Visit https://securityweekly.com/owaspappsec to watch all of CyberRisk TV's interviews from the OWASP 2025 Global AppSec Conference!

Visit https://www.securityweekly.com/asw for all the latest episodes!

Show Notes: https://securityweekly.com/asw-363
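CycloneDX BOMs are plain JSON documents, so the general shape is easy to show by hand. The component values below are illustrative only; the full ML-BOM/AIBOM schema (model cards, training-data provenance, attestations) is defined by the CycloneDX specification.

```python
import json

# Hand-written skeleton of a CycloneDX-style BOM with one ML component.
bom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.6",
    "version": 1,
    "components": [
        {
            "type": "machine-learning-model",
            "name": "example-classifier",  # hypothetical model name
            "version": "2.3.0",
            "description": "Illustrative entry; a real AIBOM would attach "
                           "model-card metadata, datasets, and attestations.",
        }
    ],
}
print(json.dumps(bom, indent=2))
```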

Global Medical Device Podcast powered by Greenlight Guru
#439: MedTech AI Trends 2025: Scaling Regulatory Intelligence with Michelle Wu

Dec 29, 2025 · 42:41


In this episode, Etienne Nichols sits down with Michelle Wu, Founder and CEO of Nyquist AI and one of the top 100 women in AI, to discuss the transformative state of artificial intelligence within the MedTech regulatory and quality space. Reflecting on her recent personal experience as a surgical patient, Michelle shares a unique perspective on the critical importance of the devices and quality systems that keep the industry running.

The conversation dives deep into the "Great Rewiring" of the medical device industry. Michelle outlines how we have moved past the initial phase of AI skepticism and "AI fatigue" into a period of hyper-acceleration. With the introduction of the FDA's ELSA and the implementation of the EU AI Act, the industry has reached a point where AI is no longer a side project but a fundamental requirement for operational longevity.

Finally, the episode provides a roadmap for both organizations and individual contributors. Michelle introduces her "Holy Trinity" framework for AI implementation—Data, Workflow, and Agents—and explains why the next two years will be defined by the "Invisible Colleague" or AI copilot. For junior professionals, the message is clear: knowledge is now a commodity, and the real value lies in the ability to ask high-quality, strategic questions.

Key Timestamps
00:00 – Introduction and Michelle Wu's background in MedTech and AI.
03:45 – A founder's perspective: Michelle's personal experience in the OR seeing her clients' devices.
08:12 – The 2025 Inflection Point: FDA ELSA, EU AI Act, and global AI expectations.
11:50 – From billable hours to value-based output: How AI is disrupting the consulting business model.
15:35 – 2026 Predictions: The shift toward universal AI Copilots and Agents for every MedTech role.
18:22 – The Holy Trinity of AI: Breaking down Data Layers, Workflow Automation, and AI Agents.
22:10 – Case Study: How a top-tier MedTech company automated 17,000 quality and regulatory tasks.
27:45 – The 56.8% Salary Premium: Why AI literacy is the most important skill for young RAQA professionals.
31:15 – Shifting from memorization to "Clarity of Mind" and high-quality inquiry.

Quotes
"Knowledge is a commodity now. Previously, regulatory consultants or professionals stood out by their knowledge. Now, with AI leveling the field, the capability lies in those who can ask high-quality questions." - Michelle Wu, Nyquist AI

Takeaways
* AI Literacy is a Financial Multiplier: LinkedIn data shows that non-engineering knowledge workers with AI literacy can command a salary premium of up to 56.8%.
* The 80/20 Rule of Automation: Approximately 80% of current RAQA tasks are tedious, manual, or administrative. Successful teams are using AI to automate that 80%, allowing humans to focus on the 20% that is strategic and high-value.
* The Three-Layer AI Strategy: To effectively implement AI, companies should look at the Data Layer (intelligence), the Workflow Layer (automation of specific tasks), and the Agent Layer (autonomous "employees").
* Value-Based Billing: As AI reduces the time required for regulatory submissions and gap analyses, the industry is moving away from the "billable hour" toward pricing based on the value and quality of the output.

References
* Nyquist AI: Michelle Wu's platform specializing in global regulatory intelligence and AI-driven workflow automation for MedTech.
* FDA ELSA: The...

The Road to Accountable AI
Alexandru Voica: Responsible AI Video

Dec 18, 2025 · 38:23


Alexandru Voica, Head of Corporate Affairs and Policy at Synthesia, discusses how the world's largest enterprise AI video platform has approached trust and safety from day one. He explains Synthesia's "three C's" framework—consent, control, and collaboration: never creating digital replicas without explicit permission, moderating every video before rendering, and engaging with policymakers to shape practical regulation. Voica acknowledges these safeguards have cost some business, but argues that for enterprise sales, trust is competitively essential. The company's content moderation has evolved from simple keyword detection to sophisticated LLM-based analysis, recently withstanding a rigorous public red team test organized by NIST and Humane Intelligence.

Voica criticizes the EU AI Act's approach of regulating how AI systems are built rather than focusing on harmful outcomes, noting that smaller models can now match frontier capabilities while evading compute-threshold regulations. He points to the UK's outcome-focused approach—like criminalizing non-consensual deepfake pornography—as more effective. On adoption, Voica argues that AI companies should submit to rigorous third-party audits using ISO standards rather than publishing philosophical position papers—the thesis of his essay "Audits, Not Essays." The conversation closes personally: growing up in 1990s Romania with rare access to English tutoring, Voica sees AI-powered personalized education as a transformative opportunity to democratize learning.

Alexandru Voica is the Head of Corporate Affairs and Policy at Synthesia, the UK's largest generative AI company and the world's leading AI video platform. He has worked in the technology industry for over 15 years, holding public affairs and engineering roles at Meta, NetEase, Ocado, and Arm. Voica holds an MSc in Computer Science from the Sant'Anna School of Advanced Studies and serves as an advisor to MBZUAI, the world's first AI university.

* Transcript
* Audits, Not Essays: How to Win Trust for Enterprise AI (Transformer)
* Synthesia's Content Moderation Systems Withstand Rigorous NIST, Humane Intelligence Red Team Test (Synthesia)
* Computerspeak Newsletter

The Privacy Advisor Podcast
Former AI Act negotiator Laura Caroli on the proposed EU Digital Omnibus for AI

Dec 17, 2025 · 49:18


On November 19, the European Commission unveiled two major omnibus packages as part of its European Data Union Strategy. One package proposes several changes to the EU General Data Protection Regulation, while the other proposes significant changes to the recently minted EU AI Act, including a proposed delay to the regulation of so-called high-risk AI systems.

Laura Caroli was a lead negotiator and policy advisor to AI Act co-rapporteur Brando Benifei and was immersed in the high-stakes negotiations leading to the AI regulation. She is also a former senior fellow at the Center for Strategic and International Studies, but recently moved back to Brussels during a time of major complexity in the EU.

IAPP Editorial Director Jedidiah Bracy caught up with Caroli to discuss her views on the proposed changes to the AI Act in the omnibus package and how she thinks the negotiations will play out. Here's what she had to say.

The Data Diva E267 - Federico Marengo and Debbie Reynolds

"The Data Diva" Talks Privacy Podcast

Dec 16, 2025 · 39:29 · Transcription Available


In Episode 267 of The Data Diva Talks Privacy Podcast, Debbie Reynolds, The Data Diva, talks with Federico Marengo, Associate Partner at White Label Consultancy in Italy. They explore the accelerating intersection of privacy, artificial intelligence, and governance, and discuss how organizations can build practical, responsible AI programs that align with existing privacy and security frameworks. Federico explains why AI governance cannot exist in a vacuum and must be integrated with the policies, controls, and operational practices companies already use.

The conversation delves into the challenges organizations face in adopting AI responsibly, including understanding the requirements of the EU AI Act, right-sizing compliance expectations for organizations of different scales, and developing programs that allow innovation while managing risk. Federico highlights the importance of educating leadership about where AI regulations actually apply, since many businesses overestimate their obligations, and he explains why clarity around high-risk systems is essential for reducing unnecessary fear and confusion.

Debbie and Federico also discuss future trends for global AI and privacy governance, including whether companies will eventually adopt unified enterprise frameworks rather than fragmented jurisdiction-specific practices. They explore how organizations can upskill their teams, embed governance into product development, and normalize AI as part of standard technology operations. Federico shares his vision for a world where professionals collaborate to advance best practices and help organizations embrace AI with confidence rather than hesitation.

Support the show. Become an insider: join Data Diva Confidential for data strategy and data privacy insights delivered to your inbox.

Reimagining Cyber
AI Compliance : New Rules, But Are You Ready?

Dec 10, 2025 · 18:58


AI is evolving faster than most organizations can keep up with — and the truth is, very few companies are prepared for what's coming in 2026. In this episode of Reimagining Cyber, Rob Aragao speaks with Ken Johnston, VP of Data, Analytics and AI at Envorso, about the uncomfortable reality: autonomous AI systems are accelerating, regulations are tightening, and most businesses have no idea how much risk they're carrying.

Ken explains why companies have fallen behind, how "AI governance debt" has quietly piled up, and why leaders must take action now before the EU AI Act and Colorado's 2026 regulation bring real financial consequences. From AI bias and data provenance to agentic AI guardrails, observability, audits, and model versioning — Ken lays out the essential steps organizations must take to catch up before it's too late.

It's 5 years since Reimagining Cyber began. Thanks to all of our loyal listeners! As featured on Million Podcasts':
* Best 100 Cybersecurity Podcasts
* Top 50 Chief Information Security Officer CISO Podcasts
* Top 70 Security Hacking Podcasts
This list is the most comprehensive ranking of Cyber Security Podcasts online and we are honoured to feature amongst the best!

Follow or subscribe to the show on your preferred podcast platform. Share the show with others in the cybersecurity world. Get in touch via reimaginingcyber@gmail.com

AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store

Welcome back to AI Unraveled, your daily strategic briefing on the business impact of artificial intelligence. Today, we are flipping the script on the most boring word in tech: governance. We are diving into the "Compliance Cost Cliff" — a new reality where the ability to control your AI is not just a legal shield, but the primary engine of your velocity. We'll look at how AI hallucinations cost businesses $67 billion this year alone, why the EU AI Act is actually a springboard for global dominance, and how giants like JPMorgan and Mayo Clinic are building "Trust Moats" to leave their competitors in the dust.

1. The Strategic Inversion: From Brake to Engine. The narrative of "move fast and break things" is dead. We have reached the Compliance Cost Cliff, where the financial and reputational risks of ungoverned AI far outweigh the friction of implementing it. Organizations that treat governance as infrastructure are unlocking high-risk, high-reward use cases that remain inaccessible to less disciplined competitors.

2. The "Trust Moat" Theory. In a market flooded with AI-generated noise and deepfakes, verified reality is the only scarce resource. Sales friction: governance-first companies bypass lengthy procurement security questionnaires, winning deals in the "silent" phase of the buying cycle. Pricing power: verified, auditable AI outputs command a premium. An AI that cites its sources is a professional tool; one that doesn't is a liability.

3. The Economics of Failure. The hallucination bill: in 2024, AI hallucinations cost businesses $67.4 billion in direct losses, legal sanctions, and operational remediation. Regulatory hammers: the EU AI Act introduces fines of up to 7% of global turnover — a penalty structure that can erase a year's worth of profitability for major firms.

4. Sector Deep Dives: The First Movers. Finance (JPMorgan Chase): widely misread when it initially banned ChatGPT, JPMC used the pause to build the LLM Suite — a governed platform that handles data privacy and model risk centrally. This infrastructure now allows it to deploy tools like Connect Coach safely while competitors struggle with compliance. Healthcare (Mayo Clinic): Mayo's "Deploy" platform acts as governance middleware. Insurance (AXA): with SecureGPT, AXA positions itself as a governance auditor, refusing to insure companies that cannot prove their AI safety standards — effectively monetizing governance.

5. The Technical Architecture of Compliance. Governance must be encoded into the software itself: auditable RAG and immutable audit logs.

6. Future Outlook: Agentic AI & Liability. As we move toward agentic AI (systems that take action, not just chat), the liability shifts entirely to the deployer. The only defense against an agent that executes a bad trade or deletes a file is a robust, documented governance history.

Keywords: AI Governance, Compliance Cost Cliff, Trust Moat, EU AI Act, Agentic AI, Hallucination Costs, JPMorgan LLM Suite, Mayo Clinic Deploy, Auditable RAG, Vector DB Audit Logs
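The "immutable audit logs" idea in point 5 is easy to make concrete. Below is a minimal, hypothetical Python sketch (ours, not the show's; class and field names are invented) of a hash-chained, append-only log: each record stores the hash of the previous record, so any retroactive edit breaks the chain and is caught on verification.

import hashlib
import json
import time

class AuditLog:
    """Append-only log of AI outputs; tampering is detectable via hash chaining."""
    GENESIS = "0" * 64

    def __init__(self):
        self.records = []

    def append(self, model_id: str, prompt: str, output: str, sources: list) -> dict:
        prev_hash = self.records[-1]["hash"] if self.records else self.GENESIS
        body = {
            "ts": time.time(),
            "model_id": model_id,   # which model version produced the output
            "prompt": prompt,
            "output": output,
            "sources": sources,     # retrieved documents, for auditable RAG
            "prev_hash": prev_hash,
        }
        # Hash the record body (which includes the previous record's hash).
        body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.records.append(body)
        return body

    def verify(self) -> bool:
        """Recompute every hash in order; False means the log was altered."""
        prev = self.GENESIS
        for rec in self.records:
            body = {k: v for k, v in rec.items() if k != "hash"}
            if rec["prev_hash"] != prev:
                return False
            if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != rec["hash"]:
                return False
            prev = rec["hash"]
        return True

log = AuditLog()
log.append("model-2025-11", "Summarize policy X", "Policy X requires ...", ["doc_123"])
assert log.verify()

In production this would live in write-once storage with the chain head anchored externally, but the core property (any edit invalidates every later record) is already visible in the sketch.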

Masters of Privacy
Oliver Patel: How the Digital Omnibus affects the EU AI Act

Masters of Privacy

Play Episode Listen Later Dec 7, 2025 30:04


On Wednesday, November 19, 2025, the European Commission unveiled its Digital Omnibus Package, which was split into two proposals: a proposed Regulation on simplification for AI rules, and a proposed Regulation on simplification of the digital legislation. Today we tackle the first of these, reviewing the AI-related block with Oliver Patel, who is AI Governance Lead at the global pharma and biotech company AstraZeneca, where he helps implement and scale AI governance worldwide. He also advises governments and international policymakers as a Member of the OECD's Expert Group on AI Risk and Accountability.

References:
* Oliver Patel, "Fundamentals of AI Governance" (now available for pre-order)
* Enterprise AI Governance, a newsletter by Oliver Patel
* Oliver Patel on LinkedIn
* Oliver Patel: How could the EU AI Act change?
* EU proposal for a Regulation on simplification for AI rules (EU Commission, covered today)
* EU proposal for a Regulation on simplification of the digital legislation (EU Commission, not covered today)
* Europe's digital sovereignty: from doctrine to delivery (Politico)

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit www.mastersofprivacy.com/subscribe

Education Technology Society
Better AI in education … is regulation the answer?

Education Technology Society

Play Episode Listen Later Dec 5, 2025 25:53


We talk with legal expert Liane Colonna (Stockholm University) about the EU 'AI Act' and what it means for the use of AI in education. To what extent can we rely on regulation to enforce safer and more beneficial forms of AI use in education? Accompanying reference: Colonna, L. (2025). Artificial Intelligence in Education (AIED): Towards More Effective Regulation. European Journal of Risk Regulation. doi:10.1017/err.2025.10039

AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store
⚖️The Billion-Dollar Decision—Building Your AI Moat vs. Buying Off-the-Shelf

AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store

Play Episode Listen Later Dec 5, 2025 16:32


Special Edition: The Billion-Dollar Decision (December 05, 2025). Today's episode is a deep dive into the strategic shift from "renting" AI to "owning" it. We explore the 2025 playbook for shifting from API wrappers to sovereign AI assets.

Key Topics & Insights

Littler Labor & Employment Podcast
217 - Littler Lounge: European Employer Edition – From Policy Shifts to Workplace Solutions

Littler Labor & Employment Podcast

Play Episode Listen Later Dec 4, 2025 23:54


This episode kicks off with a little red-carpet flair – Littler's Stephan Swinkels returns from the 2025 European Executive Employer conference in London to share the inside scoop. Hosts Nicole LeFave and Claire Deason get the unfiltered download – straight from the source as they dive into the findings from Littler's 2025 European Employer Survey Report, spotlighting workplace trends shaking up Europe – from pay transparency and the EU AI Act to IE&D and return to work policies. Whether you're navigating new regulations, planning ahead, or trying to make sense of how EU directives intersect with local implementation, this conversation bridges the U.S. patchwork of state and local laws with the European landscape – offering practical insights and fresh perspectives to help employers stay ahead in a rapidly evolving environment. https://www.littler.com/news-analysis/podcast/littler-lounge-european-employer-edition-policy-shifts-workplace-solutions

EUVC
E661 | Jack Leeney, 7GC: The AI Supercycle, IPO Windows & Europe's Missing M&A Flywheel

EUVC

Play Episode Listen Later Dec 2, 2025 46:43


This week, Andreas Munk Holm sits down with Jack Leeney, co-founder of 7GC, the transatlantic growth fund bridging Silicon Valley and Europe and a backer of AI giants like Anthropic, alongside European rising stars Poolside and Fluidstack.

From IPOs at Morgan Stanley to running Telefónica's US venture arm and now operating a dual-continental fund, Jack shares how 7GC reads the AI supercycle, why infrastructure and platforms win first, and what Europe must fix to unlock the next wave of venture liquidity.

Pods Like Us
AI in Podcasting: Humans, Voices, Ethics

Pods Like Us

Play Episode Listen Later Nov 30, 2025 79:25


Join host Martin Quibell (Marv) and a panel of industry experts as they dive deep into the impact of artificial intelligence on podcasting. From ethical debates to hands-on tools, discover how AI is shaping the future of audio and video content creation.

Guests:
● Benjamin Field (Deep Fusion Films)
● William Corbin (Inception Point AI)
● John McDermott & Mark Francis (Caloroga Shark Media)

Timestamps:
00:00 – Introduction
00:42 – Meet the Guests
01:45 – The State of AI in Podcasting
03:45 – Transparency, Ethics & the EU AI Act
06:00 – Nuance: How AI Is Used (Descript, Shorten Word Gaps, Remove Retakes)
08:45 – AI & Niche Content: Economic Realities
12:00 – Human Craft vs. AI Automation
15:00 – Job Evolution: Prompt Authors & QC
18:00 – Quality Control & Remastering
21:00 – Volume, Scale, and Audience
24:00 – AI Co-Hosts & Experiments (Virtually Parkinson, AI Voices)
27:00 – AI in Video & Visuals (HeyGen, Weaver)
30:00 – Responsibility & Transparency
33:00 – The Future of AI in Media
46:59 – Guest Contact Info & Closing

Tools & Platforms Mentioned:
● Descript: shorten word gaps, remove retakes, AI voice, scriptwriting, editing
● HeyGen: AI video avatars for podcast visuals
● Weaver (Deep Fusion Films): AI-driven video editing and archive integration
● Verbal: AI transcription and translation
● AI voices: for narration, co-hosting, and accessibility
● Other references: Spotify, Amazon, Wikipedia, TikTok, Apple Podcasts, Google

Contact the Guests:
- William Corbin: william@inceptionpoint.ai | LinkedIn
- John McDermott: john@caloroga.com | LinkedIn
- Benjamin Field: benjamin.field@deepfusionfilms.com | LinkedIn
- Mark Francis: mark@caloroga.com | LinkedIn | caloroga.com
- Marv: themarvzone.org

Like, comment, and subscribe for more deep dives into the future of podcasting and media! #Podcasting #AI #ArtificialIntelligence #Descript #HeyGen #PodcastTools #Ethics #MediaInnovation

Ahead of the Game
GDPR and AI Regulation for Marketers

Ahead of the Game

Play Episode Listen Later Nov 28, 2025 52:55


Finding it difficult to navigate the changing landscape of data protection? In this episode of the DMI podcast, host Will Francis speaks with Steven Roberts, Group Head of Marketing at Griffith College, Chartered Director, certified Data Protection Officer, and long-time marketing leader. Steven demystifies GDPR, AI governance, and the rapidly evolving regulatory environment that marketers must now navigate. Steven explains how GDPR enforcement has matured, why AI has created a new layer of complexity, and how businesses can balance innovation with compliance. He breaks down the EU AI Act, its risk-based structure, and its implications for organizations inside and outside the EU. Steven also shares practical guidance for building internal AI policies, tackling "shadow AI," reducing data breach risks, and supporting teams with training and clear governance. For an even deeper look into how businesses can ensure data protection compliance, check out Steven's book, Data Protection for Business: Compliance, Governance, Reputation and Trust.

Steven's Top 3 Tips:
1. Build data protection into projects from the start, using tools like Data Protection Impact Assessments to uncover risks early.
2. Invest in regular staff training to avoid common mistakes caused by human error.
3. Balance compliance with business performance by setting clear policies, understanding your risk appetite, and iterating your AI governance over time.

The Ahead of the Game podcast is brought to you by the Digital Marketing Institute and is available on YouTube, Apple Podcasts, Spotify, and all other podcast platforms. If you enjoyed this episode, please leave a review so others can find us. If you have feedback or would like to be a guest on the show, email the podcast team!

Timestamps:
01:29 – AI's impact on GDPR & the explosion of new global privacy laws
03:26 – Is GDPR the global gold standard?
05:04 – GDPR enforcement today: who gets fined and why
07:09 – Cultural attitudes toward data: EU vs. US
08:51 – The EU AI Act explained: risk tiers, guardrails & human oversight
10:48 – What businesses must do: DPIAs, fundamental rights assessments & more
13:38 – Shadow AI, risk appetite & internal governance challenges
17:10 – Should you upload company data to ChatGPT?
20:40 – How the AI Act affects countries outside the EU
24:47 – Will privacy improve over time?
28:45 – What teams can do now: tools, processes & data audits
33:49 – Data enrichment tools: targeting vs. legality
36:47 – Will anyone actually check your data practices?
40:06 – Steven's top tips for navigating GDPR & AI

Web3 CMO Stories
Nearshoring Meets AI | S5 E49

Web3 CMO Stories

Play Episode Listen Later Nov 25, 2025 19:18 Transcription Available


Walk the floor at Web Summit without leaving your headphones. We sit down with Jo Smets, founder of BluePanda and president of the Portuguese Belgian Luxembourg Chamber of Commerce, to unpack how nearshoring and AI are reshaping CRM, marketing, and team delivery across Europe.

We start with clarity on nearshoring: why time zone, culture, and communication speed beat cost alone, and how that proximity pays off when you're wiring AI into daily work. Jo shares how BluePanda applies AI beyond demos — recruitment, performance, and operations — then translates those lessons into client outcomes. We compare adoption patterns across startups and corporates, call out the real blocker (end-to-end process automation), and map the role of global networks like BBN for keeping pace with tools and trends.

The conversation pivots to trust and governance: practical ways to protect data, when on-prem makes sense, and how to use EU AI Act guidance without stalling innovation. We explore the marketing shift from SEO to GEO, the idea of "AI-proof" websites, and the move toward dynamic, persona-aware content that renders at load. Jo offers a simple path to progress — pick one process, pilot, measure, educate — while keeping empathy at the core as managers start leading both humans and AI agents. Along the way, we spotlight how chambers and communities connect ecosystems across borders, turning events into learning loops and real partnerships.

Looking to modernize without losing your team's identity? You'll leave with a plan for small wins, a lens for tool curation, and a sharper view of where marketing is headed next. If this resonated, subscribe, share it with a colleague who's wrestling with AI adoption, and drop a review to help others find the show.

This episode was recorded in the official podcast booth at Web Summit (Lisbon) on November 12, 2025. Check the video footage, read the blog article and show notes here: https://webdrie.net/why-european-teams-win-with-nearshoring-and-practical-ai/

Paymentandbanking FinTech Podcast
Episode 20_25: AI in Finance: OpenAI, Google, and Anthropic Deliver While Europe Readjusts

Paymentandbanking FinTech Podcast

Play Episode Listen Later Nov 24, 2025 61:26


In the new episode of AI in Finance, what has become the norm in recent months happens again: the pace of the AI industry keeps accelerating, while regulation, infrastructure, and use cases try to keep up. Sascha and Maik bring so much news that they could easily have made a three-hour episode out of it. It's a whistle-stop tour of Europe, Silicon Valley, Big Tech, new models, new tools, and the question of how close we actually are to real, everyday AI.

Cambridge Law: Public Lectures from the Faculty of Law
Faithful or Traitor? The Right of Explanation in a Generative AI World: CIPIL Evening Seminar

Cambridge Law: Public Lectures from the Faculty of Law

Play Episode Listen Later Nov 24, 2025 49:02


Speaker: Professor Lilian Edwards, Emeritus Professor of Law, Innovation & Society, Newcastle Law School

Biography: Lilian Edwards is a leading academic in the field of Internet law. She has taught information technology law, e-commerce law, privacy law and Internet law at undergraduate and postgraduate level since 1996 and has been involved with law and artificial intelligence (AI) since 1985. She is now Emerita Professor at Newcastle and Honorary Professor at CREATe, University of Glasgow, which she helped co-found. She is the editor and major author of Law, Policy and the Internet, one of the leading textbooks in the field of Internet law (Hart, 2018; new edition forthcoming with Urquhart and Goanta, 2026). She won the Future of Privacy Forum award in 2019 for best paper ("Slave to the Algorithm", with Michael Veale) and the award for best non-technical paper at FAccT in 2020, on automated hiring. In 2004 she won the Barbara Wellberry Memorial Prize for work on online privacy in which she invented the notion of data trusts, a concept which ten years later was proposed in EU legislation. She is a former fellow of the Alan Turing Institute on Law and AI, and the Institute for the Future of Work. Edwards has consulted for, inter alia, the EU Commission, the OECD, and WIPO.

Abstract: The right to an explanation is having another moment. Well after the heyday of 2016-2018, when scholars tussled over whether the GDPR (in either art 22 or arts 13-15) conferred a right to explanation, the CJEU case of Dun and Bradstreet has finally confirmed its existence, and the Platform Work Directive has wholesale revamped art 22 in its Algorithmic Management chapter. Most recently the EU AI Act added its own Frankenstein-like right to an explanation of AI systems (art 86). None of these provisions, however, pins down what the essence of the explanation should be, given the many notions that can be invoked here: a faithful description of source code or training data; an account that enables challenge or contestation; a "plausible" description that may be appealing in a behaviouralist sense but might be actually misleading when operationalised, e.g. to generate a medical course of treatment. Agarwal et al argue that the tendency of UI designers, regulators and judges alike to lean towards the plausibility end may be unsuited to large language models, which represent far more of a black box in size and optimisation than conventional machine learning, and which are trained to present encouraging but not always accurate accounts of their workings. Yet this is also the direction of travel taken by CJEU Dun & Bradstreet, above. This paper argues that explanations of large model outputs may present novel challenges needing thoughtful legal mandates.

For more information (and to download slides) see: https://www.cipil.law.cam.ac.uk/seminars-and-events/cipil-seminars

The AI Policy Podcast
Trump's Draft AI Preemption Order, EU AI Act Delays, and Anthropic's Cyberattack Report

The AI Policy Podcast

Play Episode Listen Later Nov 21, 2025 54:26


In this episode, we start by discussing Greg's trip to India and the upcoming India AI Impact Summit in February 2026 (00:29). We then unpack the Trump Administration's draft executive order to preempt state AI laws (07:46) and break down the European Commission's new “digital omnibus” package, including proposed adjustments to the AI Act and broader regulatory simplification efforts (17:51). Finally, we discuss Anthropic's report on a China-backed “highly sophisticated cyber espionage campaign" using Claude and the mixed reactions from cybersecurity and AI policy experts (37:37).

Alexa's Input (AI)
Shift Left Your AI Security with SonnyLabs Founder Liana Tomescu

Alexa's Input (AI)

Play Episode Listen Later Nov 17, 2025 64:23


In this episode of Alexa's Input (AI) Podcast, host Alexa Griffith sits down with Liana Tomescu, founder of SonnyLabs and host of the AI Hacks podcast. Dive into the world of AI security and compliance as Liana shares her journey from Microsoft to founding her own company. Discover the challenges and opportunities in making AI applications secure and compliant, and learn about the latest in AI regulations, including the EU AI Act. Whether you're an AI enthusiast or a tech professional, this episode offers valuable insights into the evolving landscape of AI technology.

Links:
SonnyLabs Website: https://sonnylabs.ai/
SonnyLabs LinkedIn: https://www.linkedin.com/company/sonnylabs-ai/
Liana's LinkedIn: https://www.linkedin.com/in/liana-anca-tomescu/

Alexa's Links:
LinkTree: https://linktr.ee/alexagriffith
Alexa's Input YouTube Channel: https://www.youtube.com/@alexa_griffith
Website: https://alexagriffith.com/
LinkedIn: https://www.linkedin.com/in/alexa-griffith/
Substack: https://alexasinput.substack.com/

Keywords: AI security, compliance, female founder, SonnyLabs, EU AI Act, cybersecurity, prompt injection, AI agents, technology innovation, startup journey

Chapters:
00:00 Introduction to Liana Tomescu and SonnyLabs
02:53 The Journey of a Female Founder in Tech
05:49 From Microsoft to Startup: The Transition
09:04 Exploring AI Security and Compliance
11:41 The Role of Curiosity in Entrepreneurship
14:52 Understanding SonnyLabs and Its Mission
17:52 The Importance of Community and Networking
20:42 MCP: Model Context Protocol Explained
23:54 Security Risks in AI and MCP Servers
27:03 The Future of AI Security and Compliance
38:25 Understanding Prompt Injection Risks
45:34 The Shadow AI Phenomenon
45:48 Navigating the EU AI Act
52:28 Banned and High-Risk AI Practices
01:00:43 Implementing AI Security Measures
01:17:28 Exploring AI Security Training

Conversations For Leaders & Teams
E89. Responsible AI for the Modern Leader & Coach w/Colin Cosgrove

Conversations For Leaders & Teams

Play Episode Listen Later Nov 15, 2025 34:36


Explore how leaders and coaches can adopt AI without losing the human core, turning compliance and ethics into everyday practice rather than a side office. Colin Cosgrove shares a practical arc for AI readiness, concrete use cases, and a clear view of risk, trust, and governance.

• journey from big-tech compliance to leadership coaching
• why AI changes the leadership environment and decision pace
• making compliance human: transparency, explainability, consent
• AI literacy across every function, not just data teams
• the AI leader archetype arc for mindset and readiness
• practical augmentation: before, during, after coaching sessions
• three risks: reputational, relational, regulatory
• leader as coach: trust, questions, and human skills
• EU AI Act overview and risk-based obligations
• governance, accountability, and cross-

Reach out to Colin on LinkedIn and check out his website: Movizimo.com.

BelemLeaders – your organization's trusted partner for leader and team development. Visit our website to connect: belemleaders.org or book a discovery call today! belem.as.me/discovery Until next time, keep doing great things!

The Digital Executive
Quantifying AI Risk: Yakir Golan on Turning Cyber Threats Into Business Intelligence | Ep 1145

The Digital Executive

Play Episode Listen Later Nov 14, 2025 15:20


In this episode of The Digital Executive, host Brian Thomas welcomes Yakir Golan, CEO and Co-founder of Kovrr, a global leader in cyber and AI risk quantification. Drawing from his early career in Israeli intelligence and later roles in software, hardware, and product management, Yakir explains how his background shaped his holistic approach to understanding complex, interconnected risk systems.

Yakir breaks down why quantifying AI and cyber risk—rather than relying on subjective, color-coded scoring—is becoming essential for enterprise leaders, boards, and regulators. He explains how Kovrr's new AI Risk Assessment and Quantification module helps organizations model real financial exposure, understand high-impact "tail risks," and align security, GRC, and finance teams around a shared, objective language.

Looking ahead, Yakir discusses how global regulation, including the EU AI Act, is accelerating the need for measurable, defensible risk management. He outlines a future where AI risk quantification becomes a board-level expectation and a foundation for resilient, responsible innovation. Through Kovrr's mission, Yakir aims to equip enterprises with the same level of intelligence-driven decision making once reserved for national security—now applied to the rapidly evolving digital risk landscape.

If you liked what you heard today, please leave us a review on Apple or Spotify.
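As a rough illustration of what quantifying cyber and AI risk in financial terms can mean (a generic actuarial toy, not Kovrr's actual methodology; every parameter below is invented), one can simulate annual losses as a random number of incidents with heavy-tailed severities, then read expected loss and tail percentiles off the simulated distribution:

import math
import random

def poisson(lam: float) -> int:
    # Knuth's method; adequate for small average incident rates.
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

def simulate_year(freq: float = 3.0, mu: float = 11.0, sigma: float = 1.5) -> float:
    # freq: average incidents per year; severities are lognormal (heavy-tailed).
    return sum(random.lognormvariate(mu, sigma) for _ in range(poisson(freq)))

losses = sorted(simulate_year() for _ in range(100_000))
expected_loss = sum(losses) / len(losses)
tail_99 = losses[int(0.99 * len(losses))]  # roughly a 1-in-100-year annual loss
print(f"Expected annual loss: ${expected_loss:,.0f}; 99th percentile: ${tail_99:,.0f}")

The gap between the mean and the 99th percentile is the point of the exercise: it is that tail figure, not a color-coded score, that boards and insurers can actually budget against.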

The Road to Accountable AI
Oliver Patel: Sharing Frameworks for AI Governance

The Road to Accountable AI

Play Episode Listen Later Nov 13, 2025 36:03


Oliver Patel has built a sizeable online following for his social media posts and Substack about enterprise AI governance, using clever acronyms and visual frameworks to distill insights based on his experience at AstraZeneca, a major global pharmaceutical company. In this episode, he details his career journey from academic theory to government policy and now practical application, and offers insights for those new to the field. He argues that effective enterprise AI governance requires being pragmatic and picking your battles, since the role isn't to stop AI adoption but to enable organizations to adopt it safely and responsibly at speed and scale. He notes that core pillars of modern AI governance, such as AI literacy, risk classification, and maintaining an AI inventory, are incorporated into the EU AI Act and thus essential for compliance. Looking forward, Patel identifies AI democratization—how to govern AI when everyone in the workforce can use and build it—as the biggest hurdle, and offers thoughts about how enterprises can respond.

Oliver Patel is the Head of Enterprise AI Governance at AstraZeneca. Before moving into the corporate sector, he worked for the UK government as Head of Inbound Data Flows, where he focused on data policy and international data transfers, and was a researcher at University College London. He serves as an IAPP Faculty Member and a member of the OECD's Expert Group on AI Risk. His forthcoming book, Fundamentals of AI Governance, will be released in early 2026.

Links:
- Transcript
- Enterprise AI Governance Substack
- Top 10 Challenges for AI Governance Leaders in 2025 (Part 1)
- Fundamentals of AI Governance book page

Portfolio Checklist
When Is It Worth Loading Up on Hungarian Blue-Chip Stocks? OTP and Mol Have Reported

Portfolio Checklist

Play Episode Listen Later Nov 7, 2025 30:07


We looked at Mol's and OTP's quarterly earnings reports and dug into the latest figures, which can give investors a handle on whether to think about buying or selling. Viktor Nagy, lead analyst at Portfolio, discussed the topic. The second half of the show focused on the EU AI Act: the European Commission would partially postpone the entry into force of the world's strictest AI regulation, after the United States and big technology companies put intense pressure on Brussels. We also asked Dóra Petrányi, CMS's managing director for the Central and Eastern European region, about the background of the decision and the obligations Hungarian companies may face under the AI Act.

Main segments:
Intro (00:00)
Mol and OTP have reported: to buy or not to buy? (02:26)
EU AI Act: a reprieve for Big Tech (14:15)
Capital markets outlook (25:44)

Image source: Getty Images

ServiceNow Podcasts
AI Regulations Explained: EU AI Act, Colorado Law, and NIST Framework

ServiceNow Podcasts

Play Episode Listen Later Nov 6, 2025 19:02


Join host Bobby Brill as he sits down with ServiceNow's AI legal and governance experts to break down the complex world of AI regulations. Andrea LaFountain (Director of AI Legal), Ken Miller (Senior Director of Product Legal), and Navdeep Gill (Staff Senior Product Manager, Responsible AI) explain how organizations can navigate the growing landscape of AI compliance. In this episode, you'll learn about three major regulatory approaches: the risk-based EU AI Act, Colorado's algorithmic discrimination law, and the NIST voluntary framework. The experts discuss practical strategies for complying with multiple regulations simultaneously, using the EU AI Act as a baseline and measuring the delta for new requirements.

Key topics covered:
- Why proactive compliance matters before regulations fully take effect
- How AI Control Tower helps discover and manage AI systems across your enterprise
- The exponential math behind AI compliance (vendors, employees, third parties)
- Setting up governance policies for high-risk AI use cases
- Timeline for major compliance deadlines (Colorado June 2026, EU August 2026)
- The real costs of waiting for your first violation

Whether you're managing AI deployment, working in compliance, or trying to understand the regulatory landscape, this episode provides actionable insights on building responsible AI governance infrastructure.

Guests: Andrea LaFountain, Director, AI Legal; Ken Miller, Senior Director, Product Legal; Navdeep Gill, Staff Senior Product Manager, Responsible AI
Host: Bobby Brill

Chapters:
00:00 Introduction to AI and Regulations
00:45 Meet the Experts
01:52 Overview of Key AI Regulations
03:03 Compliance Strategies for AI Regulations
07:33 ServiceNow's AI Control Tower
14:02 Challenges and Risks in AI Governance
16:04 Future of AI Regulations
18:34 Conclusion and Final Thoughts

Outgrow's Marketer of the Month
Snippet: AI Caramba! CEO Matthew Blakemore Warns That Strict EU AI Rules May Push Innovation to Looser Markets Before Tools Enter the EU.

Outgrow's Marketer of the Month

Play Episode Listen Later Nov 5, 2025 0:57


Microsoft Business Applications Podcast
Copilot Success Starts with Clean Data

Microsoft Business Applications Podcast

Play Episode Listen Later Nov 2, 2025 33:44 Transcription Available


ILTA
#0134: (JIT ) ILTA Just-In-Time: What You Need to Know About New Regulations Governing AI in HR

ILTA

Play Episode Listen Later Oct 28, 2025 26:41


In this podcast, discover how best to navigate California's new employment AI regulations, which went into effect on October 1st. The speaker highlights how the use of Automated Decision Systems (which include AI) in employment decisions can directly violate California law if these tools are found to discriminate against employees or applicants, directly or indirectly, on the basis of protected characteristics such as race, age, or gender. The discussion also covers other recent AI regulations taking shape around the world, including the EU AI Act.

Moderator: Adam Wehler, Director of eDiscovery and Litigation Technology, Smith Anderson
Speaker: Kassi Burns, Senior Attorney, Trial and Global Disputes, King & Spalding

The Road to Accountable AI
Caroline Louveaux: Trust is Mission Critical

The Road to Accountable AI

Play Episode Listen Later Oct 23, 2025 33:13


Kevin Werbach speaks with Caroline Louveaux, Chief Privacy, AI, and Data Responsibility Officer at Mastercard, about what it means to make trust mission critical in the age of artificial intelligence. Caroline shares how Mastercard built its AI governance program long before the current AI boom, grounding it in the company's Data and Technology Responsibility Principles. She explains how privacy-by-design practices evolved into a single global AI governance framework aligned with the EU AI Act, the NIST AI Risk Management Framework, and other standards. The conversation explores how Mastercard balances innovation speed with risk management, automates low-risk assessments, and maintains executive oversight through its AI Governance Council. Caroline also discusses the company's work on agentic commerce, where autonomous AI agents can initiate payments, and why trust, certification, and transparency are essential for such systems to succeed.

Caroline unpacks what it takes for a global organization to innovate responsibly — from cross-functional governance and "tone from the top" to partnerships like the Data & Trust Alliance and efforts to harmonize global standards. She emphasizes that responsible AI is a shared responsibility and that companies that can "innovate fast, at scale, but also do so responsibly" will be the ones that thrive.

Caroline Louveaux leads Mastercard's global privacy and data responsibility strategy. She has been instrumental in building Mastercard's AI governance framework and shaping global policy discussions on data and technology. She serves on the board of the International Association of Privacy Professionals (IAPP), the WEF Task Force on Data Intermediaries, the ENISA Working Group on AI Cybersecurity, and the IEEE AI Systems Risk and Impact Executive Committee, among other activities.

Links:
- Transcript
- How Mastercard Uses AI Strategically: A Case Study (Forbes, 2024)
- Lessons From a Pioneer: Mastercard's Experience of AI Governance (IMD, 2023)
- As AI Agents Gain Autonomy, Trust Becomes the New Currency. Mastercard Wants to Power Both. (Business Insider, July 2025)

The HR L&D Podcast
How AI is Making HR More Human with Daniel Strode

The HR L&D Podcast

Play Episode Listen Later Oct 21, 2025 42:25


This episode is sponsored by Deel. Ensure fair, consistent reviews with Deel's calibration template. Deel's free Performance Calibration Template helps HR teams and managers run more equitable, structured reviews. Use it to align evaluations with business goals, reduce bias in ratings, and ensure every performance conversation is fair, consistent, and grounded in shared standards. Download now: www.deel.com/nickday

In this episode of the HR L&D Podcast, host Nick Day explores how HR can use AI to become more strategic and more human. The conversation covers where AI truly fits in HR, what changes with the EU AI Act, and how leaders can turn time saved on admin into culture, capability, and impact.

You will hear practical frameworks, including a simple 4Ps-plus-2 model for HR AI, human-in-the-loop hiring, guardrails to reduce hallucinations, and a clear view on when AI must be 100 percent accurate. The discussion also outlines a modern HR operating model with always-on self service, plus policy steps for ethical, explainable AI.

Whether you are an HR leader, CEO, or L&D professional, this conversation will help you move from pilots to scaled adoption and build an AI-ready organization. Expect actionable steps to improve employee experience, strengthen compliance, and unlock productivity and performance across your teams.

100X Book on Amazon: https://www.amazon.com/dp/B0D41BP5XT
Nick Day's LinkedIn: https://www.linkedin.com/in/nickday/
Find your ideal candidate with our job vacancy system: https://jgarecruitment.ck.page/919cf6b9ea
Sign up to the HR L&D Newsletter: https://jgarecruitment.ck.page/23e7b153e7

00:00 Intro & Preview
02:25 What HR Is For
03:54 Why HR + AI Now
06:19 AI as Augmentation
07:43 HR AI Framework & Use Cases
10:14 Guardrails: Hallucinations & Accuracy
12:45 Guardrails: Bias & Human in the Loop
16:58 Recruiting with AI
21:01 EU AI Act for HR
25:16 HR Team of the Future
25:56 New HR Operating Model
31:54 Tools for Culture Change
35:35 Rethink Processes

AI in Banking Podcast
The Role of AI in Risk Management and Compliance - with Miriam Fernandez and Sudeep Kesh at S&P Global Ratings

AI in Banking Podcast

Play Episode Listen Later Oct 20, 2025 28:49


As financial services accelerate their digital transformations, AI is reshaping how institutions identify, assess, and manage risk. But with that transformation comes an equally complex web of systemic risks, regulatory challenges, and questions about accountability. In this episode of the AI in Business podcast, host Matthew DeMello, Head of Content at Emerj, speaks with Miriam Fernandez, Director in the Analytical Innovation Team specializing in AI research at S&P Global Ratings, and Sudeep Kesh, Chief Innovation Officer at S&P Global Ratings. Together, they unpack how generative AI, agentic systems, and regulatory oversight are evolving within one of the most interconnected sectors of the global economy. The conversation explores how AI is amplifying both efficiency and exposure across financial ecosystems — from the promise of multimodal data integration in risk management to the growing challenge of concentration and contagion risks in increasingly digital markets. Miriam and Sudeep discuss how regulators are responding through risk-based frameworks such as the EU AI Act and DORA, and how the private sector is taking a larger role in ensuring transparency, compliance, and trust. Want to share your AI adoption story with executive peers? Click emerj.com/e2 for more information and to be a potential future guest on Emerj's flagship ‘AI in Business' podcast! If you've enjoyed or benefited from some of the insights of this episode, consider leaving us a five-star review on Apple Podcasts, and let us know what you learned, found helpful, or liked most about this show!

The FIT4PRIVACY Podcast - For those who care about privacy
Why does EU AI Act matter in your business?

The FIT4PRIVACY Podcast - For those who care about privacy

Play Episode Listen Later Oct 16, 2025 1:59


In this episode of the Fit4Privacy Podcast, host Punit Bhatia explores the EU AI Act — why it matters, what it requires, and how it impacts your business, even outside the EU. You will also hear about the Act's risk-based approach, the four categories of AI systems (unacceptable, high, limited, and minimal risk), and the penalties for non-compliance, which can be as high as 7% of global turnover or €35 million.

Just like the GDPR, the EU AI Act has global reach — so if your company offers AI-based products or services to EU citizens, it applies to you. Listen in to understand the requirements and discover how to turn AI compliance into an opportunity for building trust, demonstrating responsibility, and staying ahead of the competition.

KEY CONVERSATION
00:00:00 Introduction to the EU AI Act
00:01:22 Why the EU AI Act Matters to Your Business
00:03:40 Risk Categories Under the EU AI Act
00:04:52 Key Timelines and Provisions
00:06:07 Compliance Requirements
00:07:09 Leveraging the EU AI Act for Competitive Advantage
00:08:38 Conclusion and Contact Information

ABOUT HOST
Punit Bhatia is one of the leading privacy experts who works independently and has worked with professionals in over 30 countries. Punit works with business and privacy leaders to create an organization culture with high privacy awareness and compliance as a business priority. Selectively, Punit is open to mentoring and coaching professionals. Punit is the author of the books "Be Ready for GDPR" (rated as the best GDPR book), "AI & Privacy – How to Find Balance", "Intro To GDPR", and "Be an Effective DPO". Punit is a global speaker who has spoken at over 30 global events. Punit is the creator and host of the FIT4PRIVACY Podcast, which has been featured amongst top GDPR and privacy podcasts. As a person, Punit is an avid thinker and believes in thinking, believing, and acting in line with one's values to have joy in life. He has developed a philosophy named 'ABC for joy of life' which he passionately shares. Punit is based out of Belgium, the heart of Europe.

RESOURCES
Websites: www.fit4privacy.com, www.punitbhatia.com
Podcast: https://www.fit4privacy.com/podcast
Blog: https://www.fit4privacy.com/blog
YouTube: http://youtube.com/fit4privacy
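For a sense of scale on that penalty ceiling: the Act's top fine tier is EUR 35 million or 7% of worldwide annual turnover, whichever is higher, so for large firms the percentage dominates. A one-line illustration (the turnover figure is invented):

def max_eu_ai_act_fine(global_turnover_eur: float) -> float:
    # Top-tier ceiling: EUR 35M or 7% of worldwide annual turnover, whichever is higher.
    return max(35_000_000.0, 0.07 * global_turnover_eur)

print(f"{max_eu_ai_act_fine(2_000_000_000):,.0f} EUR")  # a EUR 2B-turnover firm: 140,000,000 EUR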

Social Justice & Activism · The Creative Process
Will AI Lead to a More Fair Society, Or Just Widen Inequities? - RISTO UUK Head of EU Policy & Research, FUTURE OF LIFE INSTITUTE

Social Justice & Activism · The Creative Process

Play Episode Listen Later Oct 14, 2025 62:34


“The Future of Life Institute has been working on AI governance-related issues for the last decade. We're already over 10 years old, and our mission is to steer very powerful technology away from large-scale harm and toward very beneficial outcomes. You could think about any kind of extreme risks from AI, all the way to existential or extinction risk, the worst kinds of risks and the benefits. You can think about any kind of large benefits that humans could achieve from technology, all the way through to utopia, right? Utopia is the biggest benefit you can get from technology. Historically, that has meant we have focused on climate change, for example, and the impact of climate change. We have also focused on bio-related risks, pandemics and nuclear security issues. If things go well, we will be able to avoid these really bad downsides in terms of existential risk, extinction risks, mass surveillance, and really disturbing futures. We can avoid that very harmful side of AI or technology, and we can achieve some of the benefits.”

Today, we take a closer look at the future of artificial intelligence and the policies that determine its place in our societies. Risto Uuk is Head of EU Policy and Research at the Future of Life Institute in Brussels, and a philosopher and researcher at KU Leuven, where he studies the systemic risks posed by AI. He has worked with the World Economic Forum, the European Commission, and leading thinkers like Stuart Russell and Daniel Susskind. He also runs one of the most widely read newsletters on the EU AI Act. As this technology is transforming economies, politics, and human life itself, we'll talk about the promises and dangers of AI, how Europe is trying to regulate it, and what it means to build safeguards for a technology that may be more powerful than anything we've seen before.

Episode Website: www.creativeprocess.info/pod
Instagram: @creativeprocesspodcast

Tech, Innovation & Society - The Creative Process
AI & The Future of Life with RISTO UUK, Head of EU Policy & Research, FUTURE OF LIFE INSTITUTE

Tech, Innovation & Society - The Creative Process

Play Episode Listen Later Oct 14, 2025 62:34


“The Future of Life Institute has been working on AI governance-related issues for the last decade. We're already over 10 years old, and our mission is to steer very powerful technology away from large-scale harm and toward very beneficial outcomes. You could think about any kind of extreme risks from AI, all the way to existential or extinction risk, the worst kinds of risks and the benefits. You can think about any kind of large benefits that humans could achieve from technology, all the way through to utopia, right? Utopia is the biggest benefit you can get from technology. Historically, that has meant we have focused on climate change, for example, and the impact of climate change. We have also focused on bio-related risks, pandemics and nuclear security issues. If things go well, we will be able to avoid these really bad downsides in terms of existential risk, extinction risks, mass surveillance, and really disturbing futures. We can avoid that very harmful side of AI or technology, and we can achieve some of the benefits.”

Today, we take a closer look at the future of artificial intelligence and the policies that determine its place in our societies. Risto Uuk is Head of EU Policy and Research at the Future of Life Institute in Brussels, and a philosopher and researcher at KU Leuven, where he studies the systemic risks posed by AI. He has worked with the World Economic Forum, the European Commission, and leading thinkers like Stuart Russell and Daniel Susskind. He also runs one of the most widely read newsletters on the EU AI Act. As this technology is transforming economies, politics, and human life itself, we'll talk about the promises and dangers of AI, how Europe is trying to regulate it, and what it means to build safeguards for a technology that may be more powerful than anything we've seen before.

Episode Website: www.creativeprocess.info/pod
Instagram: @creativeprocesspodcast