Podcasts about the EU AI Act

  • 321 PODCASTS
  • 533 EPISODES
  • 36m AVG DURATION
  • 5 WEEKLY NEW EPISODES
  • Jun 17, 2025 LATEST

POPULARITY

2017 2018 2019 2020 2021 2022 2023 2024


Best podcasts about the EU AI Act

Latest podcast episodes about the EU AI Act

121STUNDEN talk - Online Marketing weekly I 121WATT School for Digital Marketing & Innovation
Understanding and applying AI law: What the EU AI Act really means for your company | 121WATT Podcast #153

121STUNDEN talk - Online Marketing weekly I 121WATT School for Digital Marketing & Innovation

Play Episode Listen Later Jun 17, 2025 41:44


In episode #153 of the 121WATT Podcast, Sarah, Patrick, and IT-law specialist Dr. Martin Schirmbacher discuss the legal challenges of everyday AI use, from the EU AI Act and the AI-competence obligation to questions around usage rights and deepfakes.

Irish Tech News Audio Articles
Expleo research reveals 70% of large enterprises in Ireland believe AI should be managed like an employee

Irish Tech News Audio Articles

Play Episode Listen Later Jun 16, 2025 5:17


Expleo, the global technology, engineering and consulting service provider, today launches its Business Transformation Index 2025. To mark the launch, Expleo is revealing new data showing that 70% of Ireland's largest enterprises believe AI's impact on workforces is so profound that it should be managed like an employee to avoid conflicts with company culture and people. The sixth edition of Expleo's award-winning Business Transformation Index (BTI) assesses the attitudes and sentiments of 200 IT and business decision-makers in Ireland, in enterprises with 250 employees or more. The report examines themes including digital transformation, geopolitics, AI and DEI, and provides strategic recommendations for organisations to overcome related challenges.

BTI 2025 found that while 98% of large enterprises are using AI in some form, 67% believe their organisation can't use AI effectively because their data is too disorganised. As a result, just 30% have integrated and scaled AI models into their systems. Almost a quarter (23%) admitted that they are struggling to find use cases for AI beyond off-the-shelf large language models (LLMs).

Despite remaining in the early stages of AI deployment, senior decision-makers are already making fundamental changes to the skills makeup of their teams due to AI's influence and capabilities. Expleo's research found that 72% of organisations have changed the criteria they seek in job candidates because AI can now take on some tasks, while its application requires expertise in other areas. Meanwhile, more than two-thirds (68%) of enterprises deploying AI have stopped hiring for certain roles entirely because AI can handle the requirements. The research shows that as AI absorbs tasks in some areas, it is opening workforce opportunities in others.

While 30% of enterprise leaders cite workforce displacement as one of their greatest fears resulting from AI, 72% report that they will pay more for team members with AI-specific skills. The colliding worlds of humans and machines are further revealed in BTI 2025: 78% of organisations say the correct and ethical use of AI is now covered in their employment contracts. However, the BTI indicates that employers themselves may not be living up to their side of the bargain, as 25% of business and IT leaders conceded a possibility that the AI used for hiring, retention or employee progression in their organisation could be biased. The uncertainty about the objectivity of their AI could explain why 25% of decision-makers are also not confident that their organisation is compliant with the EU AI Act. The Act, it seems, is a bone of contention for many, as 76% believe the EU AI Act will hinder adoption of AI in their organisation.

Phil Codd, Managing Director, Expleo Ireland, said: "The pace of change that we are seeing from AI is like nothing we have seen before - not even the Industrial Revolution unfolded so quickly or indiscriminately in terms of the industries and people it impacted. And the workforce's relationship with AI is complicated - on the one hand, they are turning to AI to make their jobs more manageable and to reduce stress, but at the same time, they worry that its broad deployment across their organisation could impinge on their work and therefore their value as an employee.

"Business leaders are entering untrodden ground as they try to solve how AI can work for them - both practically and ethically - and without causing clashes within teams. There is no question that there is a new digital colleague joining Irish workplaces and it will define the next chapter of our working lives and economy. However, the success of this seemingly autonomous technology will always depend on the humans and data that back it up.

"At Expleo, we work with enterprises to ensure they are reaping the benefits of AI by looking holistically at their people, processes and data. AI requires, and will bring, significant changes...

AI Lawyer Talking Tech
June 13, 2025 - AI in Law: Shaping the Future of Practice

AI Lawyer Talking Tech

Play Episode Listen Later Jun 13, 2025 21:15


Welcome to 'AI Lawyer Talking Tech,' your weekly exploration into the dynamic intersection of artificial intelligence and the legal profession. In today's episode, we delve into how AI is profoundly reshaping legal practice, from internal operations to client interactions and the delivery of legal services. We'll discuss the emerging legal and regulatory frameworks taking shape, including New York's new law on personalized algorithmic pricing and AI companions, which mandates transparency and safety protocols. We'll also examine the far-reaching EU AI Act, a comprehensive legal framework with global impact that categorizes AI systems by risk and applies extraterritorially, requiring U.S. organizations to evaluate and update their AI governance programs. The conversation will also cover the growing intellectual property challenges, as highlighted by landmark copyright infringement lawsuits against AI image generators like Midjourney, addressing concerns over the unauthorized use of copyrighted content and the ethical boundaries of AI creativity. Furthermore, we'll explore the transformative power of agentic AI in automating tedious tasks, analyzing unstructured data, and improving efficiency for law firms, along with its potential in evaluating complex forensic evidence more cautiously and scientifically. We'll also touch upon the importance of optimizing online presence through Generative Engine Optimization (GEO) in this new era of AI-driven search, which demands structured data and authoritative content. Join us as we consider these vital developments, understanding the opportunities and complexities that AI introduces to the legal world, and how legal professionals can continue to thrive and adapt in this evolving landscape.

Stories covered:
• NY Passes Law Governing Personalized Algorithmic Pricing; AI Companions (13 Jun 2025, JD Supra)
• Disney and Universal Take Legal Action Against A.I. Firm Midjourney (12 Jun 2025, QUE.com)
• Early adoption of agentic AI in law firms and the opportunity costs of waiting (12 Jun 2025, Legal.ThomsonReuters.com)
• Mickey Mouse Vs Machine: Disney Sues AI Firm Over 'Endless Bootlegs' Of Its Beloved Characters (12 Jun 2025, International Business Times UK)
• AI models show promise in evaluating complex forensic evidence in legal contexts (12 Jun 2025, PhysOrg.com)
• What Is Generative Engine Optimization (GEO) and Why Digital PR Alone Isn't Enough (12 Jun 2025, JD Supra)
• High profile libel lawyer prepares group action against tech giants for alleged AI violations (12 Jun 2025, Legal Technology Insider)
• Paladin Collaborates with Some 30 Law Schools to Launch A Pro Bono Platform for Law Students (12 Jun 2025, LawSites)
• Flank: $10 Million Raised For Scaling Autonomous Legal Agents (12 Jun 2025, Pulse 2.0)
• Legal tech platform Definely raises $30M Series B to make contract reviewing more efficient (12 Jun 2025, TechCrunch)
• Are Online Legal Services Legit? What the Law Says in 2025 (12 Jun 2025, Lawyer Monthly)
• 5 Online Elements To Boost Your Law Firm's SEO (12 Jun 2025, Forbes.com)
• How Legal Tech is Closing the Gap Between the Legal Industry and Innovation (12 Jun 2025, Programming Insider)
• A consultant who helps law firms decide which software to buy explains why legal tech is in trouble (12 Jun 2025, DNyuz)
• Where are the challenges for SME law firm leadership changing? (12 Jun 2025, Legal Futures)
• Why Your Legal Team Needs More Than Just Document Storage (12 Jun 2025, MatterSuite)
• eSentio Technologies Welcomes Andy Ward as Global Director of Information Governance (12 Jun 2025, Legal Technology News - Legal IT Professionals)
• Minerva 26 CEO Kelly Twigger on Leading E-Discovery into the AI Age (12 Jun 2025, Technically Legal - A Legal Technology and Innovation Podcast)
• The EU AI Act: What U.S. Companies Need to Know (12 Jun 2025, Bond Schoeneck & King)
• Congress Considers AI Whistleblower Law: What Employers Need to Know Now (12 Jun 2025, Fisher & Phillips LLP)

AI Lawyer Talking Tech
June 11, 2025 - AI in Law: The Revolution and the Reckoning

AI Lawyer Talking Tech

Play Episode Listen Later Jun 11, 2025 17:12


Welcome to 'AI Lawyer Talking Tech', where we delve into the rapid transformation of the legal industry by artificial intelligence. Today, we explore how AI is revolutionizing legal workflows, from enhancing efficiency in document review and research to automating complex tasks, while simultaneously presenting significant challenges related to over-reliance, skill degradation, ethical considerations, and compliance. We'll touch upon the critical balance between leveraging AI as a springboard for higher-value work and avoiding the pitfalls of using it as a crutch. Join us as we discuss the evolving regulatory landscape, including key court decisions on data privacy, copyright, and algorithmic liability, and examine how the legal market is adapting through technology adoption, partnerships, and new service delivery models.

Stories covered:
• Is agentic AI a crutch or springboard for lawyers? (10 Jun 2025, Legal.ThomsonReuters.com)
• Reasoning, tool calling, and agentic systems in legal LLMs (10 Jun 2025, Legal.ThomsonReuters.com)
• The Human Element Remains: How Legal Professionals and AI Can Best Collaborate in the Future of E-Discovery and Litigation (10 Jun 2025, JD Supra)
• In Profile: Rafie Faruq, CEO of Genie (10 Jun 2025, Fintech Times)
• AI legal battles: U.K. trial and new U.S. lawsuit unfold (10 Jun 2025, SearchEnterpriseAI - TechTarget)
• Small Law, Big Tech: AI's Role In Revolutionizing The Legal Industry (10 Jun 2025, Forbes.com)
• May 2025 Tech Litigation Roundup (10 Jun 2025, TechPolicy.press)
• Law.com Readies June 16 Launch of Major Redesign Focused on Content Integration, Modernized UI and Global Perspective (10 Jun 2025, LawSites)
• Society revives campaign to save civil legal aid (10 Jun 2025, Law Society Gazette)
• Reform UK faces legal challenge over data failures (10 Jun 2025, Computing.co.uk)
• London AI firm claims Getty copyright lawsuit is an 'existential threat' to generative tech industry (10 Jun 2025, Business Matters Magazine)
• Geospatial Law, Ethics & the Edge of Innovation: A Conversation with Kevin Pomfret (10 Jun 2025, GIM International)
• European legal AI deal to watch: Doctrine enters Germany with investment in dejure.org (10 Jun 2025, Legal Technology Insider)
• How London Built the World's Best Legal AI Ecosystem (10 Jun 2025, Entrepreneur Europe)
• AI Just Changed Legal Research Forever: Here's What Every Law Firm Should Know (10 Jun 2025, Legaltech on Medium)
• From risk to ROI: The business case for AI governance (10 Jun 2025, Legal Technology News - Legal IT Professionals)
• European Commission Launches Stakeholder Consultation on the EU AI Act's Rules for High-Risk AI Systems (10 Jun 2025, Steptoe)
• House Budget Reconciliation Bill Would Delay State AI Regulation (10 Jun 2025, McDermott Will and Emery)
• Connecticut Signals an Increased Focus on Biometric Data Compliance (10 Jun 2025, Venable LLP)
• Updates on CIPA Reform: CA SB 690 Progresses to the Assembly Without a Private Right of Action (10 Jun 2025, Benesch)
• President Trump Signs Cybersecurity Executive Order (10 Jun 2025, Mayer Brown)
• Federal Judge Denies CIPA Lawsuit's Class Certification: 5 Key Takeaways for Businesses (10 Jun 2025, Fisher & Phillips LLP)
• MetaLight Inc. Announces HK$242.35 Million IPO (10 Jun 2025, Cooley)
• Artificial Intelligence or innocent ignorance? Hard lessons yield best practices (10 Jun 2025, Clark Hill)
• Navigating the South Korean AI Act: Implications for US and European Businesses (10 Jun 2025, Steptoe)

Transform Your Workplace
How HR Can Lead the AI Revolution Without Losing Its Humanity

Transform Your Workplace

Play Episode Listen Later Jun 10, 2025 36:56


HR consultant Daniel Strode discusses AI's impact on human resources, highlighting recruitment and data analytics as prime areas for adoption. He introduces his "5P model", emphasizing policy/governance and people/culture transformation as critical success factors. While AI adoption remains slow (only 25% of adults regularly use tools like ChatGPT), organizations are unknowingly integrating AI through software updates. Strode advocates for putting proper governance policies in place ahead of regulations like the EU AI Act, positioning AI as a collaborative tool that enhances rather than replaces human capabilities.

TAKEAWAYS
- 5P Framework: Success requires addressing process enhancement, personalization, predictive insights, policy/governance, and people/culture transformation
- Governance First: Establish AI ethics policies, bias auditing, and compliance training before implementation, especially with upcoming EU AI Act regulations
- Human-AI Partnership: Use AI for manual processes while focusing HR professionals on strategic work like employee experience and change management

MLOps.community
Packaging MLOps Tech Neatly for Engineers and Non-engineers // Jukka Remes // #322

MLOps.community

Play Episode Listen Later Jun 10, 2025 55:30


Packaging MLOps Tech Neatly for Engineers and Non-engineers // MLOps Podcast #322 with Jukka Remes, Senior Lecturer (SW dev & AI), AI Architect at Haaga-Helia UAS, Founder & CTO at 8wave AI.

Join the Community: https://go.mlops.community/YTJoinIn
Get the newsletter: https://go.mlops.community/YTNewsletter

// Abstract
AI is already complex; adding the need for deep engineering expertise to use MLOps tools only makes it harder, especially for SMEs and research teams with limited resources. Yet good MLOps is essential for managing experiments, sharing GPU compute, tracking models, and meeting AI regulations. While cloud providers offer MLOps tools, many organizations need flexible, open-source setups that work anywhere, from laptops to supercomputers. Shared setups can boost collaboration, productivity, and compute efficiency. In this session, Jukka introduces an open-source MLOps platform from Silo AI, now packaged for easy deployment across environments. With Git-based workflows and CI/CD automation, users can focus on building models while the platform handles the MLOps.

// Bio
Founder & CTO, 8wave AI | Senior Lecturer, Haaga-Helia University of Applied Sciences
Jukka Remes has 28+ years of experience in software, machine learning, and infrastructure. Starting with SW dev in the late 1990s and analytics pipelines for fMRI research in the early 2000s, he has worked across deep learning (Nokia Technologies), GPU and cloud infrastructure (IBM), and AI consulting (Silo AI), where he also led MLOps platform development. Now a senior lecturer at Haaga-Helia, Jukka continues evolving that open-source MLOps platform with partners like the University of Helsinki. He leads R&D on GenAI and AI-enabled software, and is the founder of 8wave AI, which develops AI Business Operations software for next-gen AI enablement, including regulatory compliance of AI.

// Related Links
Open-source MLOps k8s platform setup originally developed by Jukka's team at Silo AI, free for any use and installable in any environment from laptops to supercomputing: https://github.com/OSS-MLOPS-PLATFORM/oss-mlops-platform
Jukka's new company: https://8wave.ai

~~~~~~~~ ✌️ Connect With Us ✌️ ~~~~~~~
Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
Join our Slack community: https://go.mlops.community/slack
Follow us on X/Twitter @mlopscommunity (https://x.com/mlopscommunity) or LinkedIn (https://go.mlops.community/linkedin)
Sign up for the next meetup: https://go.mlops.community/register
MLOps Swag/Merch: https://shop.mlops.community/
Connect with Demetrios on LinkedIn: /dpbrinkm
Connect with Jukka on LinkedIn: /jukka-remes

Timestamps:
[00:00] Jukka's preferred coffee
[00:39] Open-Source Platform Benefits
[01:56] Silo MLOps Platform Explanation
[05:18] AI Model Production Processes
[10:42] AI Platform Use Cases
[16:54] Reproducibility in Research Models
[26:51] Pipeline setup automation
[33:26] MLOps Adoption Journey
[38:31] EU AI Act and Open Source
[41:38] MLOps and 8wave AI
[45:46] Optimizing Cross-Stakeholder Collaboration
[52:15] Open Source ML Platform
[55:06] Wrap up

KVD Service Podcast
IT-Security: So geht Cybersicherheit für kleine und mittlere Unternehmen

KVD Service Podcast

Play Episode Listen Later Jun 5, 2025 43:14


In this episode of the podcast, KVD editor Michael Braun, together with Julian Rupp from the Bundesamt für Sicherheit in der Informationstechnik (BSI), takes a well-grounded look at the challenges and opportunities in cybersecurity, particularly for small and medium-sized enterprises (SMEs). Julian Rupp explains vividly why information security is not a state but a continuous process, and why it is no longer enough to put the topic on the agenda "at some point". Companies without their own IT department, which is the overwhelming majority of businesses in Germany, are particularly at risk.

Some of the key points at a glance:
- The threat situation is real: more than 300,000 new malware variants emerge every day.
- The biggest vulnerabilities often lie not in the technology but in outdated systems, missing updates, and human error.
- Ransomware, phishing, and social engineering are no longer abstract threats; they hit very real companies in everyday operations.

Julian Rupp also brings practical solutions: with the Cyber-Risikocheck, the BSI offers an easily accessible entry point into security assessments, tailored specifically to the needs of small businesses. Three central measures every company can implement immediately:
- Updates are mandatory, not optional: outdated systems are the number-one gateway for attacks.
- Introduce multi-factor authentication: it protects even when passwords have been compromised.
- Train and sensitize employees regularly: in the end, there is always a human in front of the screen.

They also discuss the role of artificial intelligence in cybercrime, the development of a digital shadow economy, and the importance of regulatory responses such as the EU AI Act. An engaging conversation that not only highlights risks but above all offers solution-oriented approaches, and makes one thing clear: cybersecurity is a matter for top management.

The Roadmap
Ep 36: Medical device software market on the brink? The AI Act alarm bells are ringing - Part 3

The Roadmap

Play Episode Listen Later Jun 4, 2025 7:40


In this final episode of our three-part mini-series on the EU AI Act and its implications for manufacturers of medical devices, Marc Dautlich and Alex Denoon look in detail at Annex V of the Act. Annex V requires, amongst other things, that a manufacturer provide the Notified Body making a conformity assessment with a statement that the high-risk AI system being assessed, which forms part of the medical device and involves the processing of personal data (as many medical devices will), "complies with the GDPR". What exactly does such a statement look like? And how are the Notified Bodies responsible for the conformity assessment going to evaluate the inevitable qualifications and caveats that applicants are likely to attach to any such statement?

Send us a text

Thanks for listening! If you have any feedback, questions or comments, please email us at theroadmap@bristows.com. Find all the episodes as we release them here, and don't forget to subscribe! Follow us on X and LinkedIn using #TheRoadmapPod

Ecosystemic Futures
91. Navigating the Cognitive Revolution: What Makes Us Human in an AI World

Ecosystemic Futures

Play Episode Listen Later Jun 3, 2025 49:22


As AI systems approach and potentially surpass human cognitive benchmarks, how do we design hybrid intelligence frameworks that preserve human agency while leveraging artificial cognitive enhancements?

In this exploration of human-AI convergence, anthropologist and organizational learning expert Dr. Lollie Mancey presents a framework for the "cognitive revolution," the fourth transformational shift in human civilization following the agricultural, industrial, and digital eras. Drawing from Berkeley's research on the science of awe, Vatican AI policy frameworks, and indigenous knowledge systems, Mancey analyzes how current AI capabilities (GPT-4 operating at Einstein-level IQ) are fundamentally reshaping cognitive labor and social structures. She examines the EU AI Act's predictive policing clauses, the implications of quantum computing, and the emerging grief tech sector as indicators of broader systemic transformation. Mancey identifies three meta-cognitive capabilities essential for human-AI collaboration: critical information interrogation, systematic curiosity protocols, and epistemic skepticism frameworks. Her research on AI companion platforms reveals neurological patterns like addiction pathways. At the same time, her fieldwork with Balinese communities demonstrates alternative models of technological integration based on reciprocal participation rather than extractive optimization. This conversation provides actionable intelligence for organizations navigating the transition from human-centric to hybrid cognitive systems.

Key Research Insights
• Cognitive Revolution Metrics: Compound technological acceleration outpaces regulatory adaptation, with education systems lagging significantly, requiring new frameworks for cognitive load management and decision architecture in research environments
• Einstein IQ Parity Achieved: GPT-4 operates at Einstein-level intelligence yet lacks breakthrough innovation capabilities, highlighting critical distinctions between pattern recognition and creative synthesis for R&D resource allocation
• Neurological Dependency Patterns: AI companion platforms demonstrate "catnip-like" effects, with users exhibiting hyper-fixation behaviors and difficulty with "digital divorce", with profound implications for workforce cognitive resilience
• Epistemic Security Crisis: Deep fakes eliminated content authentication while AI hallucinations embed systemic biases from internet-scale training data, requiring new verification protocols and decision-making frameworks
• Alternative Integration Architecture: Balinese reciprocal participation models versus Western extractive paradigms offer scalable approaches for sustainable innovation ecosystems and human-technology collaboration

#EcosystemicFutures #CognitiveRevolution #HybridIntelligence #NeuroCognition #QuantumComputing #SociotechnicalSystems #HumanAugmentation #SystemsThinking #FutureOfScience

Guest: Lorraine Mancey, Programme Director at UCD Innovation Academy
Host: Marco Annunziata, Co-Founder, Annunziata Desai Partners
Series Hosts:
Vikram Shyam, Lead Futurist, NASA Glenn Research Center
Dyan Finkhousen, Founder & CEO, Shoshin Works
Ecosystemic Futures is provided by NASA - National Aeronautics and Space Administration Convergent Aeronautics Solutions Project in collaboration with Shoshin Works.

Amplify Leadership Podcast Shorts with Harrison Painter
They Ran a Secret AI Experiment on Millions—No One Knew

Amplify Leadership Podcast Shorts with Harrison Painter

Play Episode Listen Later May 31, 2025 21:45


What happens when AI pretends to be human and changes your mind without telling you? In today's episode, we break down the real Reddit experiment in which AI bots secretly argued with 3.8 million users. No labels. No disclosure. Just data-driven manipulation designed to win. And it worked: six times better than real people. This isn't science fiction. It's a warning.

We'll cover:
✅ How the bots were trained to persuade
✅ Why the study violated basic ethics and trust
✅ What the EU AI Act says about manipulation
✅ What this means for marketers, leaders, and everyday users
✅ Why this moment shaped the launch of my new company

The age of AI is here. And the rules aren't ready. Subscribe for more human-first AI insights. What's your take? Drop a comment. This one's personal.

Agile-Lean Ireland (ALI) Podcast
ChatGPT, Compliance & Chill – EU AI Act Made Easy for SMEs

Agile-Lean Ireland (ALI) Podcast

Play Episode Listen Later May 29, 2025 41:24


Send us a text

Feeling lost in the AI law lingo? Think the EU AI Act is just for tech giants? Think again. Join us for an easy-to-digest (and dare we say fun?) session where Joanna breaks down Regulation (EU) 2024/1689, aka the world's first big AI law, and what it actually means for small businesses, startups, and everyday professionals.

We'll cover:
⚖️ What the EU AI Act is (no jargon, we promise)

The Roadmap
Ep 35: Medical device software market on the brink? The AI Act alarm bells are ringing - Part 2

The Roadmap

Play Episode Listen Later May 28, 2025 11:31


As rumours swirl of a delay to the date of application of the AI Act, here is part 2 of specialists Marc Dautlich and Alex Denoon's podcast. In this episode, they delve further into Marc's territory, the principles-based GDPR, and note an uncomfortable practical constraint hidden away in Annex V of the EU AI Act. This issue wasn't addressed in Team-NB's Position Paper, but it is likely to be a significant challenge.

Send us a text

Thanks for listening! If you have any feedback, questions or comments, please email us at theroadmap@bristows.com. Find all the episodes as we release them here, and don't forget to subscribe! Follow us on X and LinkedIn using #TheRoadmapPod

Irish Tech News Audio Articles
Ireland Well Placed to Influence AI EU Innovation

Irish Tech News Audio Articles

Play Episode Listen Later May 23, 2025 4:22


European Movement Ireland and Konrad-Adenauer-Stiftung (KAS) UK and Ireland hosted 'Artificial Intelligence - How will Europe Innovate?' The event explored the challenges and opportunities ahead for AI innovation, political leadership and the future development of AI across Europe, as the European Union sets out its ambitious agenda to become a global leader in AI.

AI EU Innovation

The EU AI Act, which forms part of this vision, is the world's first comprehensive law regulating the use of AI. In force since 2024, with some exemptions for high-risk AI until 2027, the EU AI Act will be fully applicable from 2026, coinciding with Ireland's Presidency of the European Council. Given the presence of multinational tech companies and leading research institutions in the country, Ireland is well positioned to influence how AI is advanced across the bloc in the future.

Chair of the Oireachtas Committee on EU Affairs, Barry Ward TD, said: "As Europe takes bold steps toward responsible AI innovation, today's discussion underscores the need for political leadership that is both visionary and grounded in our shared values. With Ireland preparing to take on the Presidency of the European Council in 2026, along with our thriving tech sector and academic excellence, we are uniquely placed to help lead this conversation and ensure AI development in Europe is ethical, innovative, and inclusive."

Noelle O Connell, CEO of European Movement Ireland, said: "As the global race continues for leadership in AI, I am delighted to hear the statement from Minister Smyth, welcome Chair of the Oireachtas Committee on EU Affairs Barry Ward TD, and listen to the insights from the expert panel today on AI innovation, as it increasingly shapes all aspects of our daily lives and influences decision making. We are at a pivotal time when trust in institutions is falling: as revealed by EM Ireland's EU Poll 2025, a plurality (40%) stated they do not trust any institution, and less than one in three (30%) expressed trust in the EU in Ireland. As the EU seeks to be bold in its vision for AI, it must ensure developments in AI work to serve the public good, and do not erode trust into the future."

The Minister for Trade Promotion, Artificial Intelligence and Digital Transformation, Niamh Smyth TD, appeared before the discussion in a short video statement. The expert panel was moderated by Noelle O Connell and included Barry Ward TD, Chair of the Oireachtas Committee on European Union Affairs; Stephanie Anderson, Public Policy Manager, Meta; Dr. Eamonn Cahill, Principal Officer, AI and Digital Regulation Unit, Department of Enterprise, Trade and Employment; and Kai Zenner, Head of Office and Digital Policy Adviser for MEP Axel Voss.

Dr. Canan Atilgan, Konrad-Adenauer-Stiftung (KAS) UK and Ireland, said: "The EU aims to become a global leader in AI and has unveiled an ambitious Action Plan - a bold strategy designed not merely to compete, but to lead ethically, with a clear, human-centred vision."

'Artificial Intelligence - How Will Europe Innovate?' brought citizens, businesses, and policymakers together to explore the future of AI and the regulation of AI in practice. The hashtag #EMIKAS and the handles @KAS_UKIRL and @emireland were used during the event.

See more breaking stories here.

More about Irish Tech News: Irish Tech News is Ireland's No. 1 online tech publication and often Ireland's No. 1 tech podcast too. You can find hundreds of fantastic previous episodes and subscribe using whatever platform you like via our Anchor.fm page here: https://anchor.fm/irish-tech-news If you'd like to be featured in an upcoming podcast, email us at Simon@IrishTechNews.ie now to discuss. Irish Tech News has a range of services available to help promote your business. Why not drop us a line at Info@IrishTechNews.ie now to find out more about how we can help you reach our audience. You can also find and follow us on Twitter, LinkedIn, Facebook, Instagram, TikTok and Snapchat.

Tech Radio
1070: I/O I/O Off to Work We Go

Tech Radio

Play Episode Listen Later May 22, 2025 58:24


This week we have new AI announcements from Google I/O and Microsoft Build, Bitcoin hits a new high, and Elon Musk says goodbye to the world of politics. Plus, Barry Scannell from William Fry explains how to stay on the right side of the EU AI Act.

Listen to Tech Radio now on Apple, Spotify and YouTube:
Apple - https://podcasts.apple.com/us/podcast/tech-radio-ireland/id256279328
Spotify - https://open.spotify.com/show/5vAWM1vvHbQKYE79dgCGY2
YouTube - https://www.youtube.com/@TechRadioIreland
RSS - https://feeds.transistor.fm/techradio

Diritto al Digitale
Legal Leaders Insights | Ronan Davy Associate General Counsel at Anthropic

Diritto al Digitale

Play Episode Listen Later May 22, 2025 18:38


Join Giulio Coraggio of the law firm DLA Piper in this exciting episode of Legal Leaders Insights, featuring Ronan Davy, Associate General Counsel Europe at Anthropic, a leading company in responsible artificial intelligence. Dive into an insightful conversation on the future of AI law, compliance, and innovation.

Discover the career journey of a top legal executive who has successfully navigated the evolving landscape of artificial intelligence. Learn how Anthropic aligns its ambitious AI safety goals with the rigorous demands of European legal compliance, and get an expert perspective on the anticipated impact of the EU AI Act on the AI industry. The episode also includes invaluable advice for aspiring legal professionals aiming for leadership roles in AI law, highlighting the most crucial skill necessary for success.

Subscribe to Legal Leaders Insights, activate notifications for future episodes, and leave us a 5-star review on Apple Podcasts or Spotify if you enjoyed this discussion.

Send us a text

Liebe Zeitarbeit
KI, Community & Zukunft: Wie Netzwerke den Fortschritt treiben - Christoph Seipp

Liebe Zeitarbeit

Play Episode Listen Later May 21, 2025 40:42


Campus 10178
Ethical AI

Campus 10178

Play Episode Listen Later May 21, 2025 37:28


Exploring the societal impact of analytics and artificial intelligence with Catalina Stefanescu-Cuntze and Urs Mueller

In this episode of Campus 10178 – the podcast of ESMT Berlin – Catalina Stefanescu-Cuntze and Urs Mueller join host Tammi L. Coles for a conversation about the ethical dimensions of artificial intelligence and analytics. Drawing on their experience as educators and researchers in the ESMT Master in Analytics and Artificial Intelligence (MAAI) program, they reflect on the human values behind the data, the implications of algorithmic decision making, and the need for cross-cultural dialogue in designing responsible technologies. The conversation explores how ethical considerations arise throughout the data value chain – from collection to analysis to implementation – and why a technical solution alone is not enough. They also discuss the evolving regulatory landscape, including the EU AI Act, and the importance of embedding ethical frameworks into both education and practice.

Key discussion points
- Ethical considerations in analytics and artificial intelligence
- The relationship between data neutrality and human interpretation
- The role of educational programs in fostering critical, values-based reflection
- Differences in regulatory approaches across jurisdictions
- Why future development must center people and society

Guest information
Catalina Stefanescu-Cuntze is professor of management science at ESMT Berlin and the faculty lead of the Master in Analytics and Artificial Intelligence (MAAI) program. She joined ESMT in 2009 as an associate professor, becoming the first holder of the Deutsche Post DHL Chair, and has served in multiple leadership roles, including director of research (2010–2012) and dean of faculty (2012–2019). Prior to ESMT, she was assistant professor of decision sciences at London Business School. Catalina holds a PhD and MS in operations research from Cornell University and a BS in mathematics from the University of Bucharest. Her research and teaching focus on analytics and AI, and she is passionate about fostering the growth of this critical domain.

Urs Mueller is associate professor of practice at SDA Bocconi School of Management in Milan and a visiting lecturer at ESMT Berlin. He teaches courses on ethics, responsibility, and societal impact within data and AI systems. He has worked with organizations on business ethics and decision making and leads the “Analytics and Society” course in the MAAI program.

Resources and links
- Master in Analytics and Artificial Intelligence (MAAI) program
- Catalina Stefanescu-Cuntze – ESMT Berlin faculty profile
- Urs Mueller – Personal faculty profile

About Campus 10178
Campus 10178 is Germany's #1 podcast on the business research behind business practice. Brought to you each month by ESMT Berlin, the 45-minute show brings together top scholars, executives, and policymakers to discuss today's hottest topics in leadership, innovation, and analytics. Campus 10178 – where education meets business. Want to recommend a guest? Email our podcast host at campus10178@esmt.org. Want to share comments? Join the conversation on:
Facebook: ESMT Berlin's Facebook page
LinkedIn: ESMT Berlin's announcements on LinkedIn

The Roadmap
Ep 34: Medical device software market on the brink? The AI Act alarm bells are ringing - Part 1

The Roadmap

Play Episode Listen Later May 19, 2025 13:20


Team-NB, representing the majority of Notified Bodies for EU MDR and IVDR medical devices, has issued a stark warning: the implementation of the EU AI Act carries a significant risk of major disruption to the medical device software sector.

Their recently published Position Paper doesn't hold back, highlighting critical shortcomings in the implementation framework and emphasising the dwindling time left to address them. Team-NB is urgently calling for swift action to prevent widespread issues.

To unpack this crucial situation, our specialists Marc Dautlich and Alex Denoon offer their expert analysis of the Team-NB findings – and an additional key consideration – from the vantage points of data protection and product regulation in this three-part series.

Send us a text

Thanks for listening! If you have any feedback, questions or comments, please email us at theroadmap@bristows.com. Find all the episodes as we release them here, and don't forget to subscribe! Follow us on X and LinkedIn using #TheRoadmapPod

Ogletree Deakins Podcasts
Workplace Strategies Watercooler 2025: The AI-Powered Workplace of Today and Tomorrow

Ogletree Deakins Podcasts

Play Episode Listen Later May 16, 2025 16:55


In this installment of our Workplace Strategies Watercooler 2025 podcast series, Jenn Betts (shareholder, Pittsburgh), Simon McMenemy (partner, London), and Danielle Ochs (shareholder, San Francisco) discuss the evolving landscape of artificial intelligence (AI) in the workplace and provide an update on the global regulatory frameworks governing AI use. Simon, who is co-chair of Ogletree's Cybersecurity and Privacy Practice Group, breaks down the four levels of risk and their associated regulations specified in the EU AI Act, which will take effect in August 2026, and the need for employers to prepare now for the Act's stringent regulations and steep penalties for noncompliance. Jenn and Danielle, who are co-chairs of the Technology Practice Group, discuss the Trump administration's focus on innovation with limited regulation, as well as the likelihood of state-level regulation.

Science 4-Hire
Scaling AI Innovation for Hiring: Lessons from the Frontlines

Science 4-Hire

Play Episode Listen Later May 12, 2025 52:21


Guest: Christine Boyce, Global Innovation Leader at ManpowerGroup/Right Management

“We have to stress-test innovation in the messiness of real-world hiring, not just ideal lab conditions.” - Christine Boyce

In this episode of Psych Tech @ Work, I'm joined by my longtime friend Christine Boyce, Global Innovation Leader at ManpowerGroup/Right Management, to explore how innovation — especially around AI — is reshaping hiring and talent development at scale, and why solving for trust, transparency, and operational realities matters more than ever.

Summary
At the heart of this conversation is the reality that scaling AI innovation in hiring brings massive complexity. While AI offers incredible promise, solving for accuracy, fairness, and operational reality becomes exponentially harder when you're dealing with a large number of unique clients.

Christine Boyce, through her work at ManpowerGroup & Right Management, operates at the intersection of these challenges every day. Unlike internal talent acquisition leaders who focus on one organization's needs, Christine must help innovate across a vast client portfolio. Each client presents different barriers — from data limitations, to ethical concerns, to regulatory pressures — and innovation must be modular, defensible, and adaptable to succeed. This vantage point gives Christine a unique, big-picture view of how AI adoption really plays out across industries and markets.

We dive into the practical challenges of innovating responsibly: earning trust, scaling solutions across diverse environments, and balancing speed with fairness. Christine's work at ManpowerGroup & Right Management highlights how innovation must be deeply disciplined if it is to achieve true scale and impact.

The Core Challenge: Scaling Accuracy and Fairness
At the heart of using AI for hiring lies the challenge of achieving accuracy and fairness at scale. AI's true value isn't just its ability to make individual decisions — it's in processing vast amounts of data and automating judgment across thousands of candidates. However, scale magnifies both strengths and weaknesses: minor biases can grow into systemic problems, and small inefficiencies can snowball into major failures.

Staffing firms like ManpowerGroup offer critical real-world lessons:
* Scale forces discipline — Every AI tool must be rigorously vetted for fairness, transparency, and defensibility before deployment.
* Real-world variation stresses the system for the better — Tools must flexibly adapt to diverse jobs, industries, and candidate pools. This makes the overall path of innovation better and drives great learnings across the board.
* Speed must not erode trust — Productivity gains must still respect ethical standards and candidate experience.
* External accountability keeps AI honest — Clients demand transparency, validation, and explainability before adoption.

Real Barriers to AI Adoption: What Clients Are Facing
Despite AI's potential, Christine identifies several persistent hurdles that she faces when serving her diverse slate of clients:
* Resistance to Behavior Change: Even demonstrably valuable AI tools often struggle against entrenched workflows and distrust of automation.
* Ethical and Trust Concerns: Clients demand AI systems that are transparent, explainable, and defensible, fearing reputational or regulatory risks.
* Vendor Noise Overload: Saturation by "AI-washed" vendors makes it hard to differentiate true innovation from hype.
* Mismatch Between Hype and Practical Needs: Clients need tools that solve today's operational problems — not just futuristic visions disconnected from reality.
* Fear of Creeping AI Adoption: Organizations worry about AI capabilities being embedded into systems without visibility or intentionality.
* Compliance and Regulation Anxiety: Global and local regulations (like the EU AI Act or pending US laws) create urgency for proven, compliant AI solutions.
* Talent Data Readiness: Without clean, structured internal data, even the best AI solutions struggle to deliver meaningful results.

These challenges aren't isolated — they reveal the broader realities companies must manage when trying to adopt AI responsibly at scale. Ultimately, client concerns have a hand in AI innovation because they are critical for the adoption of these technologies, shaping how staffing firms and vendors must design, validate, and deploy solutions. There's an inherent tension between the drive for scale and the need for trust, fairness, and operational reality.

Christine's experience demonstrates that true innovation in AI for hiring isn't just about introducing new tools — it's about creating resilient, transparent systems that can adapt to real-world complexity. Managing the tension between speed, scale, trust, and fairness represents the path to a bright future. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit charleshandler.substack.com

NOW of Work
"What lives in HR dies in HR." with Ritu Mohanka, CEO at VONQ

NOW of Work

Play Episode Listen Later May 10, 2025 55:30


On this week's Now of Work Digital Meetup, Ritu Mohanka joined Jess Von Bank and Jason Averbook to dig into how AI can actually reduce bias in hiring and why we should be moving away from a “matching” model. Ritu shares how VONQ's shift to a scoring system, evaluating candidates across 15 transparent, job-relevant criteria, is enabling skills-based hiring, improving candidate experience, and aligning with the EU AI Act's push for explainable AI.

CXO.fm | Transformation Leader's Podcast
Winning with AI Compliance

CXO.fm | Transformation Leader's Podcast

Play Episode Listen Later May 9, 2025 13:34 Transcription Available


Mastering the EU AI Act is no longer optional—it's a strategic necessity. In this episode, we unpack the critical compliance gaps that separate thriving companies from those falling behind. Learn how to categorise your AI systems, mitigate risk, and turn regulation into a competitive advantage. Perfect for business leaders, consultants, and transformation professionals navigating AI governance. 

The Road to Accountable AI
Kelly Trindel: AI Governance Across the Enterprise? All in a Day's Work

The Road to Accountable AI

Play Episode Listen Later May 8, 2025 36:32


In this episode, Kevin Werbach interviews Kelly Trindel, Head of Responsible AI at Workday. Although Trindel's team is housed within Workday's legal department, it operates as a multidisciplinary group, bringing together legal, policy, data science, and product expertise. This structure helps ensure that responsible AI practices are integrated not just at the compliance level but throughout product development and deployment. She describes formal mechanisms—such as model review boards and cross-functional risk assessments—that embed AI governance into product workflows across the company. The conversation covers how Workday evaluates model risks based on context and potential human impact, especially in sensitive areas like hiring and performance evaluation. Trindel outlines how the company conducts bias testing, maintains documentation, and uses third-party audits to support transparency and trustworthiness. She also discusses how Workday is preparing for emerging regulatory frameworks, including the EU AI Act, and how internal governance systems are designed to be flexible in the face of evolving policy and technological change. Other topics include communicating AI risks to customers, sustaining post-deployment oversight, and building trust through accountability infrastructure. Dr. Kelly Trindel directs Workday's AI governance program. As a pioneer in the responsible AI movement, Kelly has significantly contributed to the field, including testifying before the U.S. Equal Employment Opportunity Commission (EEOC) and later leading an EEOC task force on ethical AI—one of the government's first. With more than 15 years of experience in quantitative science, civil rights, public policy, and AI ethics, Kelly's influence and commitment to responsible AI are instrumental in driving the industry forward and fostering AI solutions that have a positive societal impact.  
Transcript
Responsible AI: Empowering Innovation with Integrity
Putting Responsible AI into Action (video masterclass)

VinciWorks
AI compliance and ethical practices

VinciWorks

Play Episode Listen Later May 7, 2025 55:09


AI is no longer just hype; it's here, powerful, and already reshaping how organisations operate. But with that power comes legal and ethical responsibility. This episode explores how businesses can harness AI while staying within the law and public trust. From the EU AI Act to GDPR and the emerging frameworks in the UK and US, we unpack what compliance looks like in an AI-driven world.

Here's what we cover:
- The latest AI compliance frameworks and global regulations
- How to embed ethical principles into your AI systems
- Spotting and mitigating risks like bias and discrimination
- Building an AI governance framework that stands up to scrutiny
- Real-life case studies: what works, what doesn't
- Tools and tech to help your compliance team keep up

If your organisation is using or exploring AI, this is a must-listen.

AI in Education Podcast
Uber Prompts and AI Myths

AI in Education Podcast

Play Episode Listen Later May 1, 2025 42:21


In this episode of the AI in Education Podcast, Ray and Dan return from a short break with a packed roundup of AI developments across education and beyond. They discuss the online launch of the AEIOU interdisciplinary research hub that Dan attended, explore the promise and pitfalls of prompt engineering—including the idea of the “Uber prompt”—and share first impressions of the OpenAI Academy. Ray unpacks misleading headlines about Bill Gates “replacing teachers” with AI and instead spotlights the real message about AI tutors. They also dive into the 2027 AI forecast report, the emerging impact of the EU AI Act, and Microsoft's latest Work Trend Index, which introduces the idea of "agent bosses" in the AI-driven workplace. They round off with Ben Williamson's list of AI fails in education and a startling story of an AI radio presenter nobody realised was fake. Here are all the links so you too can fall down the AI news rabbit hole.

Ropes & Gray Podcasts
R&G Tech Studio: Navigating AI Literacy—Understanding the EU AI Act

Ropes & Gray Podcasts

Play Episode Listen Later Apr 29, 2025 13:07


On this episode of the R&G Tech Studio podcast, Rohan Massey, a leader of Ropes & Gray's data, privacy and cybersecurity practice, is joined by data, privacy and cybersecurity counsel Edward Machin to discuss the AI literacy measures of the EU AI Act and how companies can meet its requirements to ensure their teams are adequately AI literate. The conversation delves into the broad definition of AI systems under the EU AI Act, the importance of AI literacy for providers and deployers of AI systems, and the context-specific nature of AI literacy requirements. They also provide insights into the steps organizations should take to understand their roles under the AI Act, develop training modules, and implement policies and procedures to comply with AI literacy principles. 

The FIT4PRIVACY Podcast - For those who care about privacy
Privacy Enhancing Technologies with Jetro Wils and Punit Bhatia in the FIT4PRIVACY Podcast E137 S06

The FIT4PRIVACY Podcast - For those who care about privacy

Play Episode Listen Later Apr 24, 2025 31:25


How Privacy-Enhancing Technologies (PETs) can safeguard data in an AI-driven world. As organizations increasingly rely on AI, concerns around data privacy, security, and compliance grow. PETs provide a technical safeguard to ensure sensitive information remains protected, even in the most advanced AI applications. With new regulations like the EU AI Act, organizations must adopt privacy-first strategies. PETs are a critical tool to ensure AI transparency, fairness, and trust while maintaining regulatory compliance.

Our guest, Jetro Wils, cybersecurity expert and researcher, breaks down how PETs help organizations de-risk AI adoption while ensuring privacy, compliance, and security. Watch now to discover how PETs can help you build digital trust and secure AI-powered innovations!

KEY CONVERSATION POINTS
00:01:33 How would you define digital trust?
00:02:32 What is Privacy Enhancing Technology?
00:04:21 Why do we need PETs when we have laws and principles?
00:10:19 Kinds of AI risk that can also be mitigated by these PETs
00:15:12 How would a PET de-risk that in an AI adoption situation?

ABOUT GUEST
Jetro Wils is a Cloud & Information Security Officer and Cybersecurity Advisor, dedicated to helping organizations operate securely in the cloud era. With a strong focus on information security and compliance, he enables businesses to reduce risk, strengthen cybersecurity frameworks, and achieve peace of mind. With 18 years of experience in Belgium's tech industry, Jetro has held roles spanning software development, business analysis, product management, and cloud specialization. Since 2016, he has witnessed the rapid evolution of cloud technology and the growing challenge organizations face in securely adopting it. Jetro is a 3x Microsoft Certified Azure Expert and a 2x Microsoft Certified Trainer (2022-2024), conducting 10-20 certified training sessions annually on cloud, AI, and security. He has trained over 100 professionals, including enterprise architects, project managers, and engineers. As a technical reviewer for Packt Publishing, he ensures the accuracy of books on cloud and cybersecurity. Additionally, he hosts the BlueDragon Podcast, where he discusses cloud, AI, and security trends with European decision-makers. Jetro holds a professional Bachelor's Degree in Applied Computer Science (2006) and is currently pursuing a Master's in IT Risk and Cybersecurity Management at Antwerp Management School (2023-2025). His research focuses on de-risking AI adoption by enhancing AI security through Privacy Enhancing Technologies (PETs). He is also a certified NIS 2 Lead Implementer working toward a DORA certification.

ABOUT HOST
Punit Bhatia is one of the leading privacy experts who works independently and has worked with professionals in over 30 countries. Punit works with business and privacy leaders to create an organization culture with high privacy awareness and compliance as a business priority. Selectively, Punit is open to mentor and coach professionals. Punit is the author of the books “Be Ready for GDPR”, which was rated as the best GDPR book, “AI & Privacy – How to Find Balance”, “Intro To GDPR”, and “Be an Effective DPO”. Punit is a global speaker who has spoken at over 30 global events. Punit is the creator and host of the FIT4PRIVACY Podcast. This podcast has been featured amongst top GDPR and privacy podcasts. As a person, Punit is an avid thinker and believes in thinking, believing, and acting in line with one's values to have joy in life. He has developed the philosophy named ‘ABC for joy of life', which he passionately shares. Punit is based out of Belgium, the heart of Europe.

RESOURCES
Websites: www.fit4privacy.com, www.punitbhatia.com, https://www.linkedin.com/in/jetrow/
Podcast: https://www.fit4privacy.com/podcast
Blog: https://www.fit4privacy.com/blog
YouTube: http://youtube.com/fit4privacy

Law, disrupted
Re-release: Emerging Trends in AI Regulation

Law, disrupted

Play Episode Listen Later Apr 17, 2025 46:34


John is joined by Courtney Bowman, the Global Director of Privacy and Civil Liberties at Palantir, one of the foremost companies in the world specializing in software platforms for big data analytics. They discuss the emerging trends in AI regulation. Courtney explains the AI Act recently passed by the EU Parliament, including the four levels of risk it assesses for different AI systems and the different regulatory obligations imposed on each risk level, how the Act treats general purpose AI systems, and how the final Act evolved in response to lobbying by emerging European companies in the AI space. They discuss whether the EU AI Act will become the global standard international companies default to because the European market is too large to abandon. Courtney also explains recent federal regulatory developments in the U.S., including the framework for AI put out by the National Institute of Standards and Technology, the AI Bill of Rights announced by the White House, which calls for voluntary compliance with certain principles by industry, and the Executive Order on Safe, Secure and Trustworthy Development and Use of Artificial Intelligence, which requires each department of the federal government to develop its own plan for the use and deployment of AI. They also discuss the wide range of state-level AI legislative initiatives and the leading role California has played in this process.

Finally, they discuss the upcoming issues legislatures will need to address, including translating principles like accountability, fairness and transparency into concrete best practices; instituting testing, evaluation and validation methodologies to ensure that AI systems are doing what they're supposed to do in a reliable and trustworthy way; and addressing concerns around maintaining AI systems over time as the data used by the system continuously evolves until it no longer accurately represents the world it was originally designed to represent.

Podcast Link: Law-disrupted.fm
Host: John B. Quinn
Producer: Alexis Hyde
Music and Editing by: Alexander Rossi

Scouting for Growth
Areiel Wolanow On Unleashing AI, Quantum, and Emerging Tech

Scouting for Growth

Play Episode Listen Later Apr 16, 2025 49:08


On this episode of the Scouting For Growth podcast, Sabine meets Areiel Wolanow, the managing director of Finserv Experts, who discusses his journey from IBM to founding FinServ Experts, emphasising the importance of focusing on business models enabled by technology rather than the technology itself. Areiel delves into the challenges and opportunities presented by artificial intelligence, responsible AI practices, and the implications of quantum computing for data security, highlighting the need for organisations to adapt their approaches to digital transformation and advocating for a migration strategy over traditional transformation methods.

KEY TAKEAWAYS
* Emerging tech should be leveraged to create new business models rather than just re-engineering existing ones. Understanding the business implications of technology is crucial for delivering value.
* When harnessing artificial intelligence, it's essential to identify the real underlying problems within an organisation, assess its maturity, and build self-awareness before applying maturity models and gap analyses.
* The EU AI Act serves as a comprehensive guideline for responsible AI use, offering risk categories and controls that can benefit companies outside the EU by providing a framework for ethical AI practices without the burden of compliance.
* Organisations should prepare for the future of quantum computing by ensuring their data is protected against potential vulnerabilities. This involves adopting quantum-resilient algorithms and planning for the transition well in advance.
* Leaders should place significant responsibility on younger team members who are more familiar with emerging technologies. Providing them with autonomy and support can lead to innovative solutions and successful business outcomes.

BEST MOMENTS
'We focus not on the technology itself, but on the business models the tech enables.'
'The first thing you have to do... is to say, OK, is the proximate cause the real problem?'
'The best AI regulations out there is the EU AI Act... it actually benefits AI companies outside the EU more than it benefits within.'
'Digital transformations have two things in common. One is they're expensive, and two is they always fail.'

ABOUT THE GUEST
Areiel Wolanow is the managing director of Finserv Experts. He is an experienced business leader with over 25 years of experience in business transformation solutioning, sales, and execution. He served as one of IBM's key thought leaders in blockchain, machine learning, and financial inclusion. Areiel has deep experience leading large, globally distributed teams; he has led programs of over 100 people through the full delivery life cycle and has managed budgets in the tens of millions of dollars. In addition to his delivery experience, Areiel also serves as a senior advisor on blockchain, machine learning, and technology adoption; he has worked with central banks and financial regulators around the world, and is currently serving as the insurance industry advisor for the UK Parliament's working group on blockchain. LinkedIn

ABOUT THE HOST
Sabine is a corporate strategist turned entrepreneur. She is the CEO and Managing Partner of Alchemy Crew, a venture lab that accelerates the curation, validation, and commercialization of new tech business models. Sabine is renowned within the insurance sector for building some of the most renowned tech startup accelerators around the world, working with over 30 corporate insurers and accelerating over 100 startup ventures. Sabine is the co-editor of the bestseller The INSURTECH Book, a top 50 Women in Tech, a FinTech and InsurTech Influencer, an investor, and a multi-award winner. Twitter LinkedIn Instagram Facebook TikTok Email Website

Artificial Intelligence in Industry with Daniel Faggella
Global AI Regulations and Their Impact on Industry Leaders - with Michael Berger of Munich Re

Artificial Intelligence in Industry with Daniel Faggella

Play Episode Listen Later Apr 15, 2025 21:01


Today's guest is Michael Berger, Head of Insure AI at Munich Re. Michael returns to the Emerj podcast platform to discuss the impact of legislation such as the EU AI Act on the insurance industry and broader AI adoption. Our conversation covers how regulatory approaches differ between the United States and the European Union, highlighting the risk-based framework of the EU AI Act and the litigation-driven environment in the U.S. Michael explores key legal precedents, including AI liability cases, and what they signal for business leaders implementing AI-driven solutions. If you've enjoyed or benefited from some of the insights of this episode, consider leaving us a five-star review on Apple Podcasts, and let us know what you learned, found helpful, or liked most about this show!

AI Tool Report Live
Biden vs Trump: How U.S. AI Policy Is Shifting

AI Tool Report Live

Play Episode Listen Later Apr 15, 2025 31:40


In this episode of The AI Report, Christine Walker joins Arturo Ferreira to launch a new series on the legal side of artificial intelligence. Christine is a practicing attorney helping businesses understand how to navigate AI risk, compliance, and governance in a rapidly changing policy environment.

They explore how the shift from the Biden to the Trump administration is changing the tone on AI regulation, what the EU AI Act means for U.S. companies, and why many of the legal frameworks we need for AI already exist. Christine breaks down how lawyers apply traditional legal principles to today's AI challenges, from intellectual property and employment law to bias and defamation.

Also in this episode:
• The risk of waiting for regulation to catch up
• How companies can conduct internal AI audits
• What courts are already doing with AI tools
• Why even lawyers are still figuring this out in real time
• What businesses should be doing now to reduce liability

Christine offers a grounded, practical view of what it means to use AI responsibly, even when the law seems unclear.

Subscribe to The AI Report: theaireport.ai
Join our community: skool.com/the-ai-report-community/about

Chapters:
(00:00) The Legal Risks of AI and Why It's Still a Black Box
(01:13) Christine Walker's Background in Law and Tech
(03:07) Biden vs Trump: Competing AI Governance Philosophies
(04:53) What Governance Means and Why It Matters
(06:26) Comparing the EU AI Act with the U.S. Legal Vacuum
(08:14) Case Law on IP, Bias, and Discrimination
(10:50) Why the Fear Around AI May Be Misplaced
(13:15) Legal Precedents: What Tech History Teaches Us
(16:06) The GOP's AI Stance and Regulatory Philosophy
(18:35) Most AI Use Cases Already Fall Under Existing Law
(21:11) Why Precedents Take So Long—and What That Means
(23:08) Will AI Accelerate the Legal System?
(25:24) AI + Lawyers: A Collaborative Model
(27:15) Hallucinations, Case Law, and Legal Responsibility
(28:36) Building Policy Now to Avoid Legal Pain Later
(30:59) Christine's Final Advice for Businesses and Builders

Science 4-Hire
Responsible AI In 2025 and Beyond – Three pillars of progress

Science 4-Hire

Play Episode Listen Later Apr 15, 2025 54:44


"Part of putting an AI strategy together is understanding the limitations and where unintended consequences could occur, which is why you need diversity of thought within committees created to guide AI governance and ethics." – Bob Pulver

My guest for this episode is my friend in ethical/responsible AI, Bob Pulver, the founder of CognitivePath.io and host of the podcast "Elevate Your AIQ." Bob specializes in helping organizations navigate the complexities of responsible AI, from strategic adoption to effective governance practices. Bob was my guest about a year ago, and in this episode he drops back in to discuss what has changed in the fast-paced world of AI across three pillars of responsible AI usage:

* Human-Centric AI
* AI Adoption and Readiness
* AI Regulation and Governance

These are the themes we explore in our conversation, along with our thoughts on what has changed and evolved over the past year.

1. Human-Centric AI
Change from last year:
* A shift from compliance-driven AI toward a more holistic, human-focused perspective, emphasizing AI's potential to enhance human capabilities and fairness.
Reasons for the change:
* Increasing comfort with AI and experience with the benefits it brings to our work.
* Continued exploration and development of low-stakes, low-friction use cases.
* AI continues to be seen as a partner and a magnifier of human capabilities.
What to expect in the next year:
* Increased experience with human-machine partnerships.
* Increased opportunities to build superpowers.
* Increased adoption of human-centric tools by employers.

2. AI Adoption and Readiness
Change from last year:
* Organizations have moved from cautious, fragmented adoption to structured, strategic readiness and literacy initiatives.
* Significant growth in AI educational resources and adoption within teams, rather than just by individuals.
Reasons for the change:
* Improved understanding of AI's benefits and limitations, reducing fears and resistance.
* Availability of targeted AI literacy programs, promoting organization-wide AI understanding and capability building.
What to expect in the next year:
* More systematic frameworks for AI adoption across entire organizations.
* Increased demand for formal AI proficiency assessments to ensure responsible and effective usage.

3. AI Regulation and Governance
Change from last year:
* A transition from broad discussions about potential regulations toward concrete legislative action, particularly at the state and international levels (e.g., the EU AI Act and California laws).
* Momentum to hold AI vendors increasingly accountable for ethical AI use.
Reasons for the change:
* Growing awareness of the risks associated with unchecked AI deployment.
* An increased push to stay on the right side of AI via legislative activity at the state and global levels addressing transparency, accountability, and fairness.
What to expect in the next year:
* Implementation of stricter AI audits and compliance standards.
* Clearer responsibilities for vendors and organizations regarding ethical AI practices.
* Finally, some concrete standards that will require fundamental changes in oversight and create messy situations.

Practical takeaways: what should we be doing to move the ball forward and realize AI's full potential while limiting collateral damage?

Prioritize human-centric AI design:
* Define clear use cases: ensure AI is solving a genuine human-centered problem rather than just introducing technology for technology's sake.
* Promote transparency and trust: clearly communicate how and why AI is being used, ensuring it enhances rather than replaces human judgment and involvement.

Build robust AI literacy and education programs:
* Develop organizational AI literacy: implement structured training initiatives that educate employees about fundamental AI concepts, the practical implications of AI use, and ethical considerations.
* Create role-specific training: provide tailored AI skill-building programs based on roles and responsibilities, moving beyond individual productivity to team-based effectiveness.

Strengthen AI governance and oversight:
* Adopt proactive compliance practices: align internal policies with rigorous standards such as the EU AI Act to preemptively prepare for emerging local and global legislation.
* Vendor accountability: develop clear guidelines and rigorous vetting processes for vendors to ensure transparency and responsible use, preparing your organization for upcoming regulatory audits.

Monitor AI effectiveness and impact:
* Continuous monitoring: shift from periodic audits to continuous monitoring of AI tools to ensure fairness, transparency, and functionality.
* Evaluate human impact regularly: regularly assess the human impact of AI tools on employee experience, fairness in decision-making, and organizational trust.

Email Bob: bob@cognitivepath.io
Listen to Bob's awesome podcast, "Elevate Your AIQ."

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit charleshandler.substack.com

The Tech Blog Writer Podcast
3241: Transparency, Trust, and AI: Atlassian's Legal Framework in Action

The Tech Blog Writer Podcast

Play Episode Listen Later Apr 14, 2025 23:38


At Team '25 in Anaheim, I had the unique opportunity to sit down with Stan Shepherd, General Counsel at Atlassian, for a conversation that pulled back the curtain on how legal and technology are intersecting in the age of AI. Stan's journey from journalism to law to shaping legal operations at one of the world's most forward-thinking companies is as fascinating as it is relevant. What emerged from our discussion is a clear signal that legal teams are no longer trailing behind innovation—they're often at the front of it. Stan shared how Atlassian's legal function achieved 85 percent daily usage of AI tools, including the company's in-house assistant, Rovo. This is remarkable when compared to the industry norm, where legal teams typically lag in AI adoption. Instead of resisting change, Stan's team leaned into it, focusing on automation for repetitive tasks while reserving high-value thinking for their legal experts. We explore Atlassian's responsible tech framework, their principles around transparency and accountability, and how these inform product development from day one. Stan also walked me through how Atlassian is navigating the emerging global regulatory landscape, from the EU AI Act to evolving compliance in the US. His insights on embedding legal counsel directly into product teams, rather than operating on the sidelines, reveal a model of collaboration that turns risk management into a growth enabler. For legal professionals, compliance leaders, and tech decision-makers wrestling with how to integrate AI responsibly, this episode offers a grounded, real-world blueprint. It's not just about mitigating risk—it's about building trust, preserving human judgment, and future-proofing your operations. If you're wondering what responsible AI adoption looks like at scale, you'll want to hear this one. So how are you preparing your legal and compliance strategy for the AI-powered workplace? Let's keep the conversation going.

HFS PODCASTS
Unfiltered Stories | Beyond Bots – Unifying Automation with Universal Orchestration

HFS PODCASTS

Play Episode Listen Later Apr 9, 2025 10:56


Join HFS Practice Leader Ashish Chaturvedi and C TWO CEO Erik Lien as they unpack the critical business imperatives behind intelligent automation orchestration. Discover how effective orchestration can significantly boost bot utilization, simplify governance, and drive measurable ROI. Explore insights on transitioning from siloed RPA to unified AI-driven automation, managing autonomous AI agents, ensuring compliance, and leveraging advanced analytics to optimize automation strategies.

The key points discussed include:
* Maximizing automation ROI: Intelligent orchestration increases bot utilization by over 50%, reduces manual overhead by 75%, and resolves up to 90% of support issues.
* Breaking automation silos: Effective orchestration integrates fragmented automation initiatives, moving enterprises beyond isolated RPA deployments to comprehensive intelligent automation platforms.
* Governance and compliance: Orchestration provides essential governance, auditability, and error handling, ensuring compliance with evolving regulatory frameworks like the EU AI Act.
* Managing autonomous AI agents: Advanced orchestration manages deterministic bots and autonomous AI agents seamlessly, ensuring control, prioritization, and efficiency at an item level.
* Future automation landscape: The convergence of automation, AI, analytics, and optimization through orchestration platforms is key to achieving higher efficiency, governance, and business-driven insights.

Dive deeper into the future of intelligent automation orchestration. Visit the HFS website to access the full report, "RPA supervisor to IA orchestrator—C TWO advances up the Generative Enterprise S-curve": https://www.hfsresearch.com/research/rpa-ia-orchestrator-ctwo-enterprise-s-curve/

SocialTalent's The Shortlist
The Legal Side of AI in Hiring with Paul Britton

SocialTalent's The Shortlist

Play Episode Listen Later Apr 2, 2025 17:22


In this special episode of Hiring Excellence, originally broadcast during our SocialTalent Live event, we're tackling one of the most pressing challenges in modern hiring: AI-driven candidate cheating. Legal expert Paul Britton, Managing Partner at Britton & Time Solicitors, joins us to break down the real implications of the EU AI Act, what it means for hiring teams globally, and how to navigate the growing risks around candidate misrepresentation. From legal accountability to practical policy, this conversation is packed with must-know insights for every TA and HR leader.

Between Two COO's with Michael Koenig
AI and Privacy: Navigating the EU's New AI Act & the Impact on US Companies with Flick Fisher

Between Two COO's with Michael Koenig

Play Episode Listen Later Apr 1, 2025 36:43


Try Fellow's AI Meeting Copilot - 90 days FREE - fellow.app/coo

AI and Privacy: Navigating the EU's New AI Act with Flick Fisher
In this episode of Between Two COOs, host Michael Koenig welcomes back Flick Fisher, an expert on EU privacy law. They dive deep into the newly enacted EU Artificial Intelligence Act and its implications for businesses globally. They discuss compliance challenges, prohibited AI practices, and the potential geopolitical impact of AI regulation. For leaders and operators navigating AI in business, this episode provides crucial insights into managing AI technology within regulatory frameworks.

00:00 Introduction to Fellow and AI Meeting Assistant
01:01 Introduction to Between Two COOs Episode
02:08 What is the EU's AI Act?
03:42 Prohibited AI Practices in the EU
07:46 Enforcement and Compliance Challenges
12:18 US vs EU: Regulatory Landscape
29:58 Impact on Companies and Consumers
31:55 Future of AI Regulation

Between Two COO's - https://betweentwocoos.com
Between Two COO's Episode
Michael Koenig on LinkedIn
Flick Fisher on LinkedIn
Flick on Data Privacy and GDPR on Between Two COO's
More on Flick's take of the EU's AI Act

Machine Learning and AI applications
#120 The March AI Sandwich - What You Might Have Missed

Machine Learning and AI applications

Play Episode Listen Later Mar 29, 2025 19:17


March 2025 delivered some of the most important global AI updates we've seen this year — but with the speed of change, it's easy to miss the big picture.In this Season 12 finale, we're joined by AI ecosystem expert Manjeet to slice through the noise and serve up the “March AI Sandwich” — four essential layers of innovation, insight, and impact.

The Strategic GC, Gartner’s General Counsel Podcast
How to Navigate Global AI Regulation

The Strategic GC, Gartner’s General Counsel Podcast

Play Episode Listen Later Mar 28, 2025 2:20


Only have time to listen in bite-sized chunks? Skip straight to the parts of the podcast most relevant to you:
* Get a rundown of the global AI regulatory landscape. (1:03)
* Discover which U.S. states have enacted, or are considering, AI laws. (2:18)
* Focus on the critical aspects of the EU AI Act. (4:49)
* Hear which three principles AI laws worldwide have converged around. (7:40)
* Determine the transparency requirements in the AI laws and how GCs should respond. (8:40)
* Find out actions to meet laws' risk management requirements. (10:27)
* Discern how to ensure fairness in AI systems. (13:16)
* Know what the regulatory requirements mean for AI risk governance. (14:54)
* Learn why the general counsel's (GC's) role is to "steady the ship." (17:31)

In this installment of the Strategic GC Podcast, Gartner Research Director Stuart Strome and host Laura Cohn discuss the GC's role in helping organizations navigate the steady rise in the volume and complexity of AI regulations worldwide. Listen now to get a rundown on what GCs need to know about the current regulatory landscape, including developments in the U.S. and the EU. Plus, learn how GCs can streamline compliance by focusing on the three common principles AI laws worldwide have converged around — transparency, risk management and fairness — and make organizations more adaptable to new regulations. You can also hear action steps GCs can take to incorporate new requirements into existing processes, creating consistency in policies and procedures while minimizing the burden on the business.

Eager to hear more? The Strategic GC Podcast publishes the last Thursday of every month. Plus, listen back to past episodes: The Strategic GC Podcast (2024 Season)

About the Guest
Stuart Strome is a research director for Gartner's assurance practice, managing the legal and compliance risk management process research agenda. Much of his research focuses on the impact of AI regulations on legal and compliance departments and best practices for identifying, governing and mitigating legal and compliance-related AI risks. Before Gartner, Strome, who has a Ph.D. in political science from the University of Florida, held roles conducting research in a variety of fields, including criminology, public health and international security.

Take Gartner with you. Gartner clients can listen to the full episode and read more provocative insights and expertise on the go with the Gartner Mobile App. Become a Gartner client to access exclusive content from global thought leaders. Visit www.gartner.com today!

AWS for Software Companies Podcast
Ep088: Monetizing and Productizing Generative AI for SaaS with RingCentral & Zoom

AWS for Software Companies Podcast

Play Episode Listen Later Mar 27, 2025 36:30


Tech leaders from RingCentral, Zoom and AWS discuss how generative AI is transforming business communications while balancing challenges & regulatory concerns in this rapidly evolving landscape.

Topics include:
* Introduction of panel on generative AI's impact on businesses.
* How to transition AI from prototypes to production.
* Understanding value creation for customers through AI.
* Introduction of Khurram Tajji from RingCentral.
* Introduction of Brendan Ittleson from Zoom.
* How generative AI fits into Zoom's product offerings.
* Zoom's AI companion available to all paid customers.
* Zoom's federated approach to AI model selection.
* RingCentral's new AI Receptionist (AIR) launch.
* How AIR routes calls using generative AI capabilities.
* AI improving customer experience through sentiment analysis.
* The disproportionate value of real-time AI assistance.
* Economics of delivering real-time AI capabilities.
* Real-time AI compliance monitoring in banking.
* Value of preventing regulatory fines through AI.
* Voice cloning detection through AI security.
* Democratizing AI access across Zoom's platform.
* Monetizing specialized AI solutions for business value.
* Challenges in taking AI prototypes to production.
* Importance of selecting the right AI models.
* Privacy considerations when training AI models.
* Maintaining quality without using customer data for training.
* Co-innovation with customers during product development.
* Scaling challenges for AI businesses.
* Case study of AI in legal case assessment.
* Ensuring unit economics work before scaling AI applications.
* Zoom's approach to scaling AI across products.
* Importance of centralizing but federating AI capabilities.
* Breaking down data silos for effective AI context.
* Navigating evolving regulations around AI.
* EU AI Act restrictions on emotion inference.
* Balancing regulations with customer experience needs.
* Future of AI agents interacting with other agents.
* How AI enhances human connection by handling routine tasks.
* Impact of AI on company valuations and M&A activity.

Participants:
* Khurram Tajji – Group CMO & Partnerships, RingCentral
* Brendan Ittleson – Chief Ecosystem Officer, Zoom
* Sirish Chandrasekaran – VP of Analytics, AWS

See how Amazon Web Services gives you the freedom to migrate, innovate, and scale your software company at https://aws.amazon/isv/

Innovation in Compliance with Tom Fox
Navigating AI Governance in 2025 with Christine Uri

Innovation in Compliance with Tom Fox

Play Episode Listen Later Mar 11, 2025 35:55


Innovation comes in many forms, and compliance professionals need to be ready for it and embrace it. Join Tom Fox, the Voice of Compliance, as he visits with top innovative minds, thinkers, and creators in the award-winning Innovation in Compliance podcast. In this episode, host Tom Fox welcomes Christine Uri to discuss her insights and experiences in AI governance. Christine shares her extensive background as a legal executive and outlines her current work in advising general counsels on governance and sustainability issues at her consulting firm, CURI Insights. Christine emphasizes the importance of a cross-functional committee to oversee AI governance and highlights AI technology's rapid evolution and inherent risks. The episode also covers the implications of the EU AI Act, the urgency of building AI literacy, and the challenges of managing AI risks in a dynamic regulatory landscape. As AI continues to evolve at a breakneck pace, Christine offers practical advice on how companies can keep up and ensure robust governance frameworks are in place to mitigate risks.

Key highlights:
* AI Governance and Compliance
* AI Governance in 2025
* EU AI Act and Its Implications
* Building AI Literacy in Compliance
* Future of AI and Compliance

Resources:
* Christine Uri on LinkedIn
* Allie K Miller
* Luiza Jarvosky
* Hard Fork podcast
* CURI Insights
* Tom Fox: Instagram, Facebook, YouTube, Twitter, LinkedIn

Risk Management Show
AI Security Risks - what every Risk Manager Must Know with Dr. Peter Garraghan

Risk Management Show

Play Episode Listen Later Mar 5, 2025 25:54


In this episode of the Risk Management Show podcast, we explore AI Security Risks and what every risk manager must know. Dr. Peter Garraghan, CEO and co-founder of Mind Guard and a professor of computer science at Lancaster University, shares his expertise on managing the evolving threat landscape in AI. With over €11M in research funding and 60+ published papers, he reveals why traditional cybersecurity tools often fail to address AI-specific vulnerabilities and how organizations can safely adopt AI while mitigating risks. We discuss AI's role in Risk Management, Cyber Security, and Sustainability, and provide actionable insights for Chief Risk Officers and compliance professionals. Dr. Garraghan outlines practical steps for minimizing risks, aligning AI with regulatory frameworks like GDPR, and leveraging tools like ISO 42001 and the EU AI Act. He also breaks down misconceptions about AI and its potential impact on businesses and society. If you want to be our guest or suggest a guest, send your email to info@globalriskconsult.com with the subject line "Podcast Guest Inquiry." Don't miss this essential conversation for anyone navigating AI and risk management!

Tech Law Talks
AI explained: The EU AI Act, the Colorado AI Act and the EDPB

Tech Law Talks

Play Episode Listen Later Mar 4, 2025 22:33 Transcription Available


Partners Catherine Castaldo, Andy Splittgerber, Thomas Fischl and Tyler Thompson discuss various recent AI acts around the world, including the EU AI Act and the Colorado AI Act, as well as guidance from the European Data Protection Board (EDPB) on AI models and data protection. The team presents an in-depth explanation of the different acts and points out the similarities and differences between the two. What should we do today, even though the Colorado AI Act is not in effect yet? What do these two acts mean for the future of AI?

Digitale Optimisten: Perspektiven aus dem Silicon Valley
AI Act: Reguliert die EU Künstliche Intelligenz zu Tode? (mit Benedikt Flöter, Partner bei YPOG)

Digitale Optimisten: Perspektiven aus dem Silicon Valley

Play Episode Listen Later Mar 2, 2025 69:44


#209 Everything about the EU AI Act with Benedikt Flöter, Partner at YPOG. If the EU's AI Act is the big breakthrough, why aren't the newest AI models coming to Germany? Alex talks with Benedikt to find out.

Partner of this episode: QONTO. Found a GmbH in record time, with three months of free banking. Hard to beat. Click here: http://qonto.de/optimisten

Chapters:
(00:00) Intro
(02:58) What does the EU want to achieve with the AI Act?
(12:18) What are the risk categories?
(23:14) Is this even practicable?
(26:54) Loopholes in the AI Act
(40:25) Why the newest AI models don't launch in Europe
(1:05:10) Benedikt's business idea

More info: In this episode, Alexander Mrozek and Benedikt Flöter, Partner at YPOG, discuss the EU AI Act, which aims to create a harmonized legal framework for artificial intelligence in Europe. They examine the goals of the law, the challenges of regulating AI, the different risk categories for AI, and the compliance requirements for companies. The discussion also covers the balance between regulating technology versus use cases, as well as the impact on innovation and small businesses. The conversation then turns to the gray areas of the AI Act and its exclusion of military applications. It explores how the EU is trying to create a harmonized legal space while facing the challenges of implementation and possible fragmentation of that legal space. The risk classification of AI applications and its impact on startups is also discussed. Finally, the international perspective on the AI Act is examined, in particular why many companies, despite the regulation, hesitate to launch their products in the EU, and why many startups move to the US. Benedikt and Alex also discuss the challenges and opportunities for AI startups in Germany, the effects of the AI Act, the importance of venture clienting, and the latest developments in AI technology, particularly the rise of DeepSeek. They emphasize the need to foster innovation and to strengthen collaboration between startups and large companies in order to compete globally.

Keywords: EU AI Act, regulation, artificial intelligence, innovation, compliance, risk management, data protection, digital economy, technology, use cases, military applications, EU legal space, risk classifications, international perspective, startups, US flip, AI, venture clienting, DeepSeek, market opportunities, Germany, USA

Microsoft Business Applications Podcast
AI's Transformative Power: Navigating Regulation, Ethics, and Workplace Innovation

Microsoft Business Applications Podcast

Play Episode Listen Later Feb 25, 2025 34:52 Transcription Available


Get featured on the show by leaving us a Voice Mail: https://bit.ly/MIPVM

FULL SHOW NOTES https://www.microsoftinnovationpodcast.com/659

What if AI could transform the way we navigate our professional and personal lives? This episode addresses the pressing challenges of balancing AI innovation with the regulatory frameworks emerging worldwide. We explore the differences between the EU and U.S. approaches to AI regulation, the importance of human rights considerations, and the responsibility organizations have in navigating ethical implications.

TAKEAWAYS
• Highlighting advancements in AI tools
• Discussion of Grok 3 and its capabilities
• Exploring the EU AI Act versus U.S. regulatory approaches
• Complexity of navigating international AI regulations
• The risk of human rights violations with AI algorithms
• Emphasizing educational needs for organizations
• Importance of a responsible culture in AI implementation

This year we're adding a new show to our lineup - The AI Advantage. We'll discuss the skills you need to thrive in an AI-enabled world.

DynamicsMinds is a world-class event in Slovenia that brings together Microsoft product managers, industry leaders, and dedicated users to explore the latest in Microsoft Dynamics 365, the Power Platform, and Copilot. Early bird tickets are on sale now, and listeners of the Microsoft Innovation Podcast get 10% off with the code MIPVIP144bff: https://www.dynamicsminds.com/register/?voucher=MIPVIP144bff

Accelerate your Microsoft career with the 90 Day Mentoring Challenge. We've helped 1,300+ people across 70+ countries establish successful careers in the Microsoft Power Platform and Dynamics 365 ecosystem. Benefit from expert guidance, a supportive community, and a clear career roadmap. A lot can change in 90 days, get started today!

Support the show. If you want to get in touch with me, you can message me here on LinkedIn. Thanks for listening.

The Brave Marketer
The EU's Approach to Digital Policy and Lessons Learned From The GDPR

The Brave Marketer

Play Episode Listen Later Feb 12, 2025 50:36


Kai Zenner, Head of Office and Digital Policy Adviser for German Member of the European Parliament Axel Voss, discusses the emerging regulatory landscape for artificial intelligence in Europe and its implications for innovation and consumer safety. He also discusses implementation hurdles of the EU AI Act, specifically the shortage of AI experts and the complexity of enforcement across 27 member states.

Key Takeaways:
* Challenges with the AI Act (such as vague use cases, balancing innovation with regulation, and the impact on SMEs)
* Lessons from GDPR, including upcoming changes being considered that could impact data privacy
* Horizontal legislative approaches and their implications
* Future prospects for AI regulation in Europe

Guest Bio: Kai Zenner is Head of Office and Digital Policy Adviser for MEP Axel Voss in the European Parliament, focusing on AI, privacy, and the EU's digital transition. He is involved in negotiations on the AI Act, AI Liability Directive, ePrivacy Regulation, and GDPR revision. A member of the OECD.AI Network of Experts and the World Economic Forum's AI Governance Alliance, Zenner also served on the UN's High-Level Advisory Body on AI. He was named Best MEP Assistant in 2023, and ranked #13 in Politico's Power 40 for his influence on EU digital policy.

About this Show: The Brave Technologist is here to shed light on the opportunities and challenges of emerging tech. To make it digestible, less scary, and more approachable for all! Join us as we embark on a mission to demystify artificial intelligence, challenge the status quo, and empower everyday people to embrace the digital revolution. Whether you're a tech enthusiast, a curious mind, or an industry professional, this podcast invites you to join the conversation and explore the future of AI together.
The Brave Technologist Podcast is hosted by Luke Mulks, VP Business Operations at Brave Software—makers of the privacy-respecting Brave browser and Search engine, and now powering AI everywhere with the Brave Search API. Music by: Ari Dvorin Produced by: Sam Laliberte  

Discover Daily by Perplexity
Altman Reconsiders Open-Source Strategy, EU Bans Risky AI Systems, and Super-Earth Discovered

Discover Daily by Perplexity

Play Episode Listen Later Feb 7, 2025 9:09 Transcription Available


We're experimenting and would love to hear from you!

In this episode of 'Discover Daily', we explore Sam Altman's recent acknowledgment that OpenAI may need to reconsider its open-source strategy, suggesting they might be "on the wrong side of history." This significant shift comes as competitive pressure from open-source models like DeepSeek R1 continues to mount, with Altman praising DeepSeek as "a very good model" that has narrowed OpenAI's traditional lead in the field.

The European Union has taken a historic step in AI regulation with the implementation of the EU AI Act's first phase on February 2, 2025. The legislation prohibits AI systems deemed to pose "unacceptable risks," including manipulative systems, social scoring, and untargeted facial recognition databases. Violations can result in substantial penalties of up to €35 million or 7% of a company's total worldwide annual turnover, demonstrating the EU's commitment to establishing itself as a global leader in trustworthy AI development.

Our main story focuses on two remarkable super-Earth discoveries within their stars' habitable zones. TOI-715 b, located 137 light-years away, is approximately 1.5 times wider than Earth and orbits its red dwarf star every 19 days. The second discovery, HD 20794 d, orbits a Sun-like star just 20 light-years from Earth and is roughly six times more massive than Earth, with an elliptical orbit that moves in and out of the habitable zone. These discoveries represent significant milestones in our search for potentially habitable worlds and provide promising targets for future research with advanced instruments like the James Webb Space Telescope.

From Perplexity's Discover Feed:
https://www.perplexity.ai/page/altman-reconsiders-open-source-fT0uV12jTna0XkxW8xkEDQ
https://www.perplexity.ai/page/eu-bans-risky-ai-systems-.iTygUNvS2mKll.lL9xFdA
https://www.perplexity.ai/page/super-earth-discovered-WR42RfwCSQWU1ebaQaeQxw

Perplexity is the fastest and most powerful way to search the web.
Perplexity crawls the web and curates the most relevant and up-to-date sources (from academic papers to Reddit threads) to create the perfect response to any question or topic you're interested in. Take the world's knowledge with you anywhere. Available on iOS and Android Join our growing Discord community for the latest updates and exclusive content. Follow us on: Instagram Threads X (Twitter) YouTube Linkedin
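The penalty ceiling described in this entry is the greater of the two amounts: €35 million or 7% of worldwide annual turnover, whichever is higher for undertakings. A minimal sketch of that calculation, with the function name and turnover figures as made-up examples:

```python
def max_fine_eur(annual_turnover_eur: float) -> float:
    """Upper bound of an EU AI Act fine for prohibited practices:
    EUR 35 million or 7% of total worldwide annual turnover,
    whichever is higher."""
    return max(35_000_000.0, 0.07 * annual_turnover_eur)

# Hypothetical firm with EUR 2 billion turnover: 7% (EUR 140M) exceeds the EUR 35M floor.
print(max_fine_eur(2_000_000_000))  # prints 140000000.0
```

For smaller firms the flat €35 million figure dominates, which is why the ceiling is often quoted as a single number.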

This Week in Google (MP3)
IM 805: Doomers, Gloomers, Bloomers, and Zoomers - Zack Kass Interview, DeepSeek Hype, EU AI Act

This Week in Google (MP3)

Play Episode Listen Later Feb 6, 2025 165:30


Interview with Zack Kass, Former GTM for OpenAI

Why you can deep-six the DeepSeek hype
Gemini 2.0 is now available to everyone
OpenAI has undergone its first ever rebrand, giving fresh life to ChatGPT interactions
AI Has Shown Me My Future. Here's What I've Learned.
Senator Hawley Proposes Jail Time for People Who Download DeepSeek
Hugging Face researchers aim to build an 'open' version of OpenAI's deep research tool
Anthropic makes 'jailbreak' advance to stop AI models producing harmful results
WSJ: The Manhattan Project Was Secret. Should America's AI Work Be Too?
EU AI Act: Ban on certain AI practices and requirements for AI literacy come into effect
Cathy Gellis: When It's Not Just A Coup But A CFAA Violation Too
a16z slides on AI and voice
Microsoft AI CEO Mustafa Suleyman poaches three Google DeepMind former colleagues, including two who built NotebookLM's Audio Overviews and worked on Astra
Meta's CTO said the metaverse could be a 'legendary misadventure' if the company doesn't boost sales, leaked memo shows
The Salvadoran Mega-Prison Offering to Take America's Worst Criminals
Hilarious analyst on Tesla
How the DJI Flip uses AI
Marketers will have to market to AI agents
AI systems could be 'caused to suffer' if consciousness achieved, says research

Hosts: Leo Laporte, Jeff Jarvis, and Mike Elgan
Guest: Zack Kass

Download or subscribe to Intelligent Machines at https://twit.tv/shows/intelligent-machines.

Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit

Sponsor: zscaler.com/security

This Week in Google (Video HI)
IM 805: Doomers, Gloomers, Bloomers, and Zoomers - Zack Kass Interview, DeepSeek Hype, EU AI Act

This Week in Google (Video HI)

Play Episode Listen Later Feb 6, 2025 165:30


Interview with Zack Kass, Former GTM for OpenAI

Why you can deep-six the DeepSeek hype
Gemini 2.0 is now available to everyone
OpenAI has undergone its first ever rebrand, giving fresh life to ChatGPT interactions
AI Has Shown Me My Future. Here's What I've Learned.
Senator Hawley Proposes Jail Time for People Who Download DeepSeek
Hugging Face researchers aim to build an 'open' version of OpenAI's deep research tool
Anthropic makes 'jailbreak' advance to stop AI models producing harmful results
WSJ: The Manhattan Project Was Secret. Should America's AI Work Be Too?
EU AI Act: Ban on certain AI practices and requirements for AI literacy come into effect
Cathy Gellis: When It's Not Just A Coup But A CFAA Violation Too
a16z slides on AI and voice
Microsoft AI CEO Mustafa Suleyman poaches three Google DeepMind former colleagues, including two who built NotebookLM's Audio Overviews and worked on Astra
Meta's CTO said the metaverse could be a 'legendary misadventure' if the company doesn't boost sales, leaked memo shows
The Salvadoran Mega-Prison Offering to Take America's Worst Criminals
Hilarious analyst on Tesla
How the DJI Flip uses AI
Marketers will have to market to AI agents
AI systems could be 'caused to suffer' if consciousness achieved, says research

Hosts: Leo Laporte, Jeff Jarvis, and Mike Elgan
Guest: Zack Kass

Download or subscribe to Intelligent Machines at https://twit.tv/shows/intelligent-machines.

Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit

Sponsor: zscaler.com/security

This Week in Machine Learning & Artificial Intelligence (AI) Podcast
Ensuring Privacy for Any LLM with Patricia Thaine - #716

This Week in Machine Learning & Artificial Intelligence (AI) Podcast

Play Episode Listen Later Jan 28, 2025 51:33


Today, we're joined by Patricia Thaine, co-founder and CEO of Private AI to discuss techniques for ensuring privacy, data minimization, and compliance when using 3rd-party large language models (LLMs) and other AI services. We explore the risks of data leakage from LLMs and embeddings, the complexities of identifying and redacting personal information across various data flows, and the approach Private AI has taken to mitigate these risks. We also dig into the challenges of entity recognition in multimodal systems including OCR files, documents, images, and audio, and the importance of data quality and model accuracy. Additionally, Patricia shares insights on the limitations of data anonymization, the benefits of balancing real-world and synthetic data in model training and development, and the relationship between privacy and bias in AI. Finally, we touch on the evolving landscape of AI regulations like GDPR, CPRA, and the EU AI Act, and the future of privacy in artificial intelligence. The complete show notes for this episode can be found at https://twimlai.com/go/716.
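The data-minimization idea discussed in this episode, stripping personal information out of text before it is sent to a third-party LLM, can be sketched in a few lines. This is a minimal illustration only: the regex patterns, placeholder labels, and `redact` helper below are assumptions for demonstration, and production systems such as Private AI's rely on trained entity-recognition models rather than regexes.

```python
import re

# Illustrative toy patterns; real PII detection needs ML-based entity
# recognition to handle names, addresses, and multimodal inputs.
# Order matters: more specific patterns run before broader ones,
# so an SSN is not swallowed by the looser phone pattern.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace matched PII spans with typed placeholders before the
    text leaves your boundary for a third-party LLM or embedding API."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Contact Jane at jane.doe@example.com or 555-123-4567 re: SSN 123-45-6789."
print(redact(prompt))
# prints: Contact Jane at [EMAIL] or [PHONE] re: SSN [SSN].
```

Typed placeholders (rather than plain deletion) preserve enough context for the downstream model to produce a useful answer, and they can be mapped back to the original values after the response returns.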