Podcasts about the EU AI Act

  • 314 podcasts
  • 518 episodes
  • 36m average duration
  • 5 new episodes weekly
  • Latest episode: May 31, 2025



Best podcasts about the EU AI Act

Latest podcast episodes about the EU AI Act

Amplify Leadership Podcast Shorts with Harrison Painter
They Ran a Secret AI Experiment on Millions—No One Knew

Amplify Leadership Podcast Shorts with Harrison Painter

Play Episode Listen Later May 31, 2025 21:45


What happens when AI pretends to be human, and changes your mind without telling you? In today's episode, we break down the real Reddit experiment in which AI bots secretly argued with 3.8 million users. No labels. No disclosure. Just data-driven manipulation designed to win. And it worked, persuading six times better than real people. This isn't science fiction. It's a warning.

We'll cover:
✅ How the bots were trained to persuade
✅ Why the study violated basic ethics and trust
✅ What the EU AI Act says about manipulation
✅ What this means for marketers, leaders, and everyday users
✅ Why this moment shaped the launch of my new company

The age of AI is here. And the rules aren't ready. Subscribe for more human-first AI insights. What's your take? Drop a comment. This one's personal.

Irish Tech News Audio Articles
Ireland Well Placed to Influence AI EU Innovation

Irish Tech News Audio Articles

Play Episode Listen Later May 23, 2025 4:22


European Movement Ireland and Konrad-Adenauer-Stiftung (KAS) UK and Ireland hosted 'Artificial Intelligence - How will Europe Innovate?' The event explored the challenges and opportunities ahead for AI innovation, political leadership and the future development of AI across Europe, as the European Union sets out its ambitious agenda to become a global leader in AI.

The EU AI Act, which forms part of this vision, is the world's first comprehensive law regulating the use of AI. In force since 2024, the Act becomes fully applicable from 2026, coinciding with Ireland's Presidency of the Council of the EU, with some obligations for high-risk AI deferred until 2027. Given the presence of multinational tech companies and leading research institutions in the country, Ireland is well positioned to influence how AI is advanced across the bloc into the future.

Chair of the Oireachtas Committee on EU Affairs, Barry Ward TD, said: "As Europe takes bold steps toward responsible AI innovation, today's discussion underscores the need for political leadership that is both visionary and grounded in our shared values. With Ireland preparing to take on the Presidency of the European Council in 2026, along with our thriving tech sector and academic excellence, we are uniquely placed to help lead this conversation and ensure AI development in Europe is ethical, innovative, and inclusive."

Noelle O Connell, CEO of European Movement Ireland, said: "As the global race continues for leadership in AI, I am delighted to hear the statement from Minister Smyth, welcome Chair of the Oireachtas Committee on EU Affairs Barry Ward TD, and listen to the insights from the expert panel today on AI innovation, as it increasingly shapes all aspects of our daily lives and influences decision making. We are at a pivotal time when trust in institutions is falling: as revealed by EM Ireland's EU Poll 2025, a plurality (40%) stated they do not trust any institution, and fewer than one in three (30%) expressed trust in the EU in Ireland. As the EU seeks to be bold in its vision for AI, it must ensure developments in AI work to serve the public good, and do not erode trust into the future."

The Minister for Trade Promotion, Artificial Intelligence and Digital Transformation, Niamh Smyth TD, addressed the event with a short video statement ahead of the discussion. The expert panel was moderated by Noelle O Connell and included Barry Ward TD, Chair of the Oireachtas Committee on European Union Affairs; Stephanie Anderson, Public Policy Manager, Meta; Dr. Eamonn Cahill, Principal Officer, AI and Digital Regulation Unit, Department of Enterprise, Trade and Employment; and Kai Zenner, Head of Office and Digital Policy Adviser for MEP Axel Voss.

Dr. Canan Atilgan, Konrad-Adenauer-Stiftung (KAS) UK and Ireland, said: "The EU aims to become a global leader in AI and has unveiled an ambitious Action Plan - a bold strategy designed not merely to compete, but to lead ethically, with a clear, human-centred vision."

Artificial Intelligence - How Will Europe Innovate? brought citizens, businesses, and policymakers together to explore the future of AI and the regulation of AI in practice. The hashtag #EMIKAS and the handles @KAS_UKIRL and @emireland were used during the event.

More about Irish Tech News: Irish Tech News is Ireland's No. 1 online tech publication and often Ireland's No. 1 tech podcast too. You can find hundreds of previous episodes and subscribe using whatever platform you like via our Anchor.fm page: https://anchor.fm/irish-tech-news If you'd like to be featured in an upcoming podcast, email us at Simon@IrishTechNews.ie. Irish Tech News has a range of services available to help promote your business. Drop us a line at Info@IrishTechNews.ie to find out more about how we can help you reach our audience. You can also find and follow us on Twitter, LinkedIn, Facebook, Instagram, TikTok and Snapchat.

Diritto al Digitale
Legal Leaders Insights | Ronan Davy Associate General Counsel at Anthropic

Diritto al Digitale

Play Episode Listen Later May 22, 2025 18:38


Join Giulio Coraggio of the law firm DLA Piper in this episode of Legal Leaders Insights, featuring Ronan Davy, Associate General Counsel Europe at Anthropic, a leading company in responsible artificial intelligence. Dive into an insightful conversation on the future of AI law, compliance, and innovation.

Discover the career journey of a top legal executive who has successfully navigated the evolving landscape of artificial intelligence. Learn how Anthropic aligns its ambitious AI safety goals with the rigorous demands of European legal compliance, and get an expert perspective on the anticipated impact of the EU AI Act on the AI industry. The episode also includes invaluable advice for aspiring legal professionals aiming for leadership roles in AI law, highlighting the most crucial skill necessary for success.

Subscribe to Legal Leaders Insights, activate notifications for future episodes, and leave us a 5-star review on Apple Podcasts or Spotify if you enjoyed this discussion.

Liebe Zeitarbeit
AI, Community & the Future: How Networks Drive Progress - Christoph Seipp

Liebe Zeitarbeit

Play Episode Listen Later May 21, 2025 40:42


Ogletree Deakins Podcasts
Workplace Strategies Watercooler 2025: The AI-Powered Workplace of Today and Tomorrow

Ogletree Deakins Podcasts

Play Episode Listen Later May 16, 2025 16:55


In this installment of our Workplace Strategies Watercooler 2025 podcast series, Jenn Betts (shareholder, Pittsburgh), Simon McMenemy (partner, London), and Danielle Ochs (shareholder, San Francisco) discuss the evolving landscape of artificial intelligence (AI) in the workplace and provide an update on the global regulatory frameworks governing AI use. Simon, who is co-chair of Ogletree's Cybersecurity and Privacy Practice Group, breaks down the four levels of risk and their associated regulations specified in the EU AI Act, which becomes fully applicable in August 2026, and the need for employers to prepare now for the Act's stringent regulations and steep penalties for noncompliance. Jenn and Danielle, who are co-chairs of the Technology Practice Group, discuss the Trump administration's focus on innovation with limited regulation, as well as the likelihood of state-level regulation.
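For readers unfamiliar with the four risk levels the episode refers to, here is a minimal illustrative sketch (not from the episode) of the EU AI Act's tiers; the example systems and the lookup table are hypothetical, since real classification requires legal analysis of the Act's annexes, not a dictionary:

```python
# Illustrative sketch: the four risk tiers defined by the EU AI Act.
# The example systems below are hypothetical assumptions for illustration.
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright (e.g. social scoring)"
    HIGH = "strict obligations (e.g. AI used in hiring or credit decisions)"
    LIMITED = "transparency duties (e.g. chatbots must disclose they are AI)"
    MINIMAL = "largely unregulated (e.g. spam filters)"


# Hypothetical mapping of workplace tools to tiers, for illustration only.
EXAMPLE_SYSTEMS = {
    "CV-screening model": RiskTier.HIGH,
    "HR helpdesk chatbot": RiskTier.LIMITED,
    "meeting-room scheduler": RiskTier.MINIMAL,
}

for system, tier in EXAMPLE_SYSTEMS.items():
    print(f"{system}: {tier.name} -- {tier.value}")
```

The key design point the episode highlights is that obligations scale with the tier: a tool landing in the HIGH bucket triggers the Act's heaviest compliance duties, which is why employers are urged to classify their systems before the Act becomes fully applicable.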

Science 4-Hire
Scaling AI Innovation for Hiring: Lessons from the Frontlines

Science 4-Hire

Play Episode Listen Later May 12, 2025 52:21


Guest: Christine Boyce, Global Innovation Leader at ManpowerGroup/Right Management

"We have to stress-test innovation in the messiness of real-world hiring, not just ideal lab conditions." - Christine Boyce

In this episode of Psych Tech @ Work, I'm joined by my longtime friend Christine Boyce, Global Innovation Leader at ManpowerGroup/Right Management, to explore how innovation, especially around AI, is reshaping hiring and talent development at scale, and why solving for trust, transparency, and operational realities matters more than ever.

Summary

At the heart of this conversation is the reality that scaling AI innovation in hiring brings massive complexity. While AI offers incredible promise, solving for accuracy, fairness, and operational reality becomes exponentially harder when you're dealing with a large number of unique clients.

Christine Boyce, through her work at ManpowerGroup & Right Management, operates at the intersection of these challenges every day. Unlike internal talent acquisition leaders who focus on one organization's needs, Christine must help innovate across a vast client portfolio. Each client presents different barriers, from data limitations to ethical concerns to regulatory pressures, and innovation must be modular, defensible, and adaptable to succeed. This vantage point gives Christine a unique, big-picture view of how AI adoption really plays out across industries and markets.

We dive into the practical challenges of innovating responsibly: earning trust, scaling solutions across diverse environments, and balancing speed with fairness. Christine's work highlights how innovation must be deeply disciplined if it is to achieve true scale and impact.

The Core Challenge: Scaling Accuracy and Fairness

At the heart of using AI for hiring lies the challenge of achieving accuracy and fairness at scale. AI's true value isn't just its ability to make individual decisions; it's in processing vast amounts of data and automating judgment across thousands of candidates. However, scale magnifies both strengths and weaknesses: minor biases can grow into systemic problems, and small inefficiencies can snowball into major failures. Staffing firms like ManpowerGroup offer critical real-world lessons:

* Scale forces discipline: every AI tool must be rigorously vetted for fairness, transparency, and defensibility before deployment.
* Real-world variation stresses the system for the better: tools must flexibly adapt to diverse jobs, industries, and candidate pools, which improves the overall path of innovation and drives learning across the board.
* Speed must not erode trust: productivity gains must still respect ethical standards and candidate experience.
* External accountability keeps AI honest: clients demand transparency, validation, and explainability before adoption.

Real Barriers to AI Adoption: What Clients Are Facing

Despite AI's potential, Christine identifies several persistent hurdles she faces when serving her diverse slate of clients:

* Resistance to behavior change: even demonstrably valuable AI tools often struggle against entrenched workflows and distrust of automation.
* Ethical and trust concerns: clients demand AI systems that are transparent, explainable, and defensible, fearing reputational or regulatory risks.
* Vendor noise overload: saturation by "AI-washed" vendors makes it hard to differentiate true innovation from hype.
* Mismatch between hype and practical needs: clients need tools that solve today's operational problems, not futuristic visions disconnected from reality.
* Fear of creeping AI adoption: organizations worry about AI capabilities being embedded into systems without visibility or intentionality.
* Compliance and regulation anxiety: global and local regulations (like the EU AI Act or pending US laws) create urgency for proven, compliant AI solutions.
* Talent data readiness: without clean, structured internal data, even the best AI solutions struggle to deliver meaningful results.

These challenges aren't isolated; they reveal the broader realities companies must manage when trying to adopt AI responsibly at scale. Client concerns ultimately shape AI innovation, because they are critical to the adoption of these technologies and determine how staffing firms and vendors must design, validate, and deploy solutions. There's an inherent tension between the drive for scale and the need for trust, fairness, and operational reality.

Christine's experience demonstrates that true innovation in AI for hiring isn't just about introducing new tools; it's about creating resilient, transparent systems that can adapt to real-world complexity. Managing the tension between speed, scale, trust, and fairness represents the path to a bright future.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit charleshandler.substack.com

NOW of Work
"What lives in HR dies in HR." with Ritu Mohanka, CEO at VONQ

NOW of Work

Play Episode Listen Later May 10, 2025 55:30


On this week's Now of Work Digital Meetup, Ritu Mohanka joined Jess Von Bank and Jason Averbook to dig into how AI can actually reduce bias in hiring and why we should be moving away from a "matching" model. Ritu shares how VONQ's shift to a scoring system, evaluating candidates across 15 transparent, job-relevant criteria, is enabling skills-based hiring, improving candidate experience, and aligning with the EU AI Act's push for explainable AI.

CXO.fm | Transformation Leader's Podcast
Winning with AI Compliance

CXO.fm | Transformation Leader's Podcast

Play Episode Listen Later May 9, 2025 13:34 Transcription Available


Mastering the EU AI Act is no longer optional—it's a strategic necessity. In this episode, we unpack the critical compliance gaps that separate thriving companies from those falling behind. Learn how to categorise your AI systems, mitigate risk, and turn regulation into a competitive advantage. Perfect for business leaders, consultants, and transformation professionals navigating AI governance. 

The Road to Accountable AI
Kelly Trindel: AI Governance Across the Enterprise? All in a Day's Work

The Road to Accountable AI

Play Episode Listen Later May 8, 2025 36:32


In this episode, Kevin Werbach interviews Kelly Trindel, Head of Responsible AI at Workday. Although Trindel's team is housed within Workday's legal department, it operates as a multidisciplinary group, bringing together legal, policy, data science, and product expertise. This structure helps ensure that responsible AI practices are integrated not just at the compliance level but throughout product development and deployment. She describes formal mechanisms—such as model review boards and cross-functional risk assessments—that embed AI governance into product workflows across the company. The conversation covers how Workday evaluates model risks based on context and potential human impact, especially in sensitive areas like hiring and performance evaluation. Trindel outlines how the company conducts bias testing, maintains documentation, and uses third-party audits to support transparency and trustworthiness. She also discusses how Workday is preparing for emerging regulatory frameworks, including the EU AI Act, and how internal governance systems are designed to be flexible in the face of evolving policy and technological change. Other topics include communicating AI risks to customers, sustaining post-deployment oversight, and building trust through accountability infrastructure. Dr. Kelly Trindel directs Workday's AI governance program. As a pioneer in the responsible AI movement, Kelly has significantly contributed to the field, including testifying before the U.S. Equal Employment Opportunity Commission (EEOC) and later leading an EEOC task force on ethical AI—one of the government's first. With more than 15 years of experience in quantitative science, civil rights, public policy, and AI ethics, Kelly's influence and commitment to responsible AI are instrumental in driving the industry forward and fostering AI solutions that have a positive societal impact.  
Resources: Transcript | Responsible AI: Empowering Innovation with Integrity | Putting Responsible AI into Action (video masterclass)

VinciWorks
AI compliance and ethical practices

VinciWorks

Play Episode Listen Later May 7, 2025 55:09


AI is no longer just hype; it's here, powerful, and already reshaping how organisations operate. But with that power comes legal and ethical responsibility. This episode explores how businesses can harness AI while staying within the law and retaining public trust. From the EU AI Act to GDPR and the emerging frameworks in the UK and US, we unpack what compliance looks like in an AI-driven world. Here's what we cover:

  • The latest AI compliance frameworks and global regulations
  • How to embed ethical principles into your AI systems
  • Spotting and mitigating risks like bias and discrimination
  • Building an AI governance framework that stands up to scrutiny
  • Real-life case studies: what works, what doesn't
  • Tools and tech to help your compliance team keep up

If your organisation is using or exploring AI, this is a must-listen.

AI in Education Podcast
Uber Prompts and AI Myths

AI in Education Podcast

Play Episode Listen Later May 1, 2025 42:21


In this episode of the AI in Education Podcast, Ray and Dan return from a short break with a packed roundup of AI developments across education and beyond. They discuss the online launch of the AEIOU interdisciplinary research hub that Dan attended, explore the promise and pitfalls of prompt engineering, including the idea of the "Uber prompt", and share first impressions of the OpenAI Academy. Ray unpacks misleading headlines about Bill Gates "replacing teachers" with AI and instead spotlights the real message about AI tutors. They also dive into the 2027 AI forecast report, the emerging impact of the EU AI Act, and Microsoft's latest Work Trend Index, which introduces the idea of "agent bosses" in the AI-driven workplace. They round off with Ben Williamson's list of AI fails in education and a startling story of an AI radio presenter nobody realised was fake. Here are all the links so you too can fall down the AI news rabbit hole.

HRM-Podcast
Education Minds - Didaktische Reduktion und Erwachsenenbildung: #130 - Yvo Wüest - What the EU AI Act Means for Education

HRM-Podcast

Play Episode Listen Later May 1, 2025 15:09


From February 2025, all educational organisations must ensure that their staff are AI-competent. But what exactly does that mean? What requirements does the EU AI Act place on teachers, organisations and learners? And how can you best prepare? In this episode you'll learn what the AI Act regulates, why AI competence is now enshrined in law, and who is specifically affected. I show you what steps educational organisations can take to meet the requirements, and how you can build up your own AI competence. Other topics in this conversation:
- The different risk classes of AI systems
- The importance of transparency when working with AI
- The challenges the new regulation poses for educational institutions
I report from my workshops and trainings, which I run at Campus Sursee among other places, and share insights from my current blog post on the EU AI Act.

Education Minds - Didaktische Reduktion und Erwachsenenbildung
#130 - Yvo Wüest - What the EU AI Act Means for Education

Education Minds - Didaktische Reduktion und Erwachsenenbildung

Play Episode Listen Later May 1, 2025 15:09


From February 2025, all educational organisations must ensure that their staff are AI-competent. But what exactly does that mean? What requirements does the EU AI Act place on teachers, organisations and learners? And how can you best prepare? In this episode you'll learn what the AI Act regulates, why AI competence is now enshrined in law, and who is specifically affected. I show you what steps educational organisations can take to meet the requirements, and how you can build up your own AI competence. Other topics in this conversation:
- The different risk classes of AI systems
- The importance of transparency when working with AI
- The challenges the new regulation poses for educational institutions
I report from my workshops and trainings, which I run at Campus Sursee among other places, and share insights from my current blog post on the EU AI Act.

Ropes & Gray Podcasts
R&G Tech Studio: Navigating AI Literacy—Understanding the EU AI Act

Ropes & Gray Podcasts

Play Episode Listen Later Apr 29, 2025 13:07


On this episode of the R&G Tech Studio podcast, Rohan Massey, a leader of Ropes & Gray's data, privacy and cybersecurity practice, is joined by data, privacy and cybersecurity counsel Edward Machin to discuss the AI literacy measures of the EU AI Act and how companies can meet its requirements to ensure their teams are adequately AI literate. The conversation delves into the broad definition of AI systems under the EU AI Act, the importance of AI literacy for providers and deployers of AI systems, and the context-specific nature of AI literacy requirements. They also provide insights into the steps organizations should take to understand their roles under the AI Act, develop training modules, and implement policies and procedures to comply with AI literacy principles. 

Diritto al Digitale
Legal Leaders Insights | Santiago Silva of Red Bull on the future of AI

Diritto al Digitale

Play Episode Listen Later Apr 29, 2025 20:12


Join Giulio Coraggio and Tommaso Ricci of DLA Piper in this compelling episode of Legal Leaders Insights as they sit down with Santiago Silva, Senior Legal Counsel at Red Bull. Together, they delve into the intersection of law, innovation, and artificial intelligence. Santiago shares his personal and professional journey, revealing key decisions and turning points that guided him to become a legal counsel specialised in AI and esports. Will regulation stifle creativity, or can clear guidelines actually enhance innovation? Santiago provides a forward-looking perspective on the opportunities and challenges ahead. Don't miss Santiago's bonus advice for legal professionals aspiring to specialise in AI, highlighting the single most important skill necessary for success.

Data Culture Podcast
Scaling AI Through Data Culture and Data Transformation in Finance - with Britta Daffner, O2 Telefónica

Data Culture Podcast

Play Episode Listen Later Apr 28, 2025 38:51


In this episode, Carsten Bange talks with Britta Daffner about the development and implementation of the AI strategy at O2 Telefónica.

The FIT4PRIVACY Podcast - For those who care about privacy
Privacy Enhancing Technologies with Jetro Wils and Punit Bhatia in the FIT4PRIVACY Podcast E137 S06

The FIT4PRIVACY Podcast - For those who care about privacy

Play Episode Listen Later Apr 24, 2025 31:25


How Privacy-Enhancing Technologies (PETs) can safeguard data in an AI-driven world. As organizations increasingly rely on AI, concerns around data privacy, security, and compliance grow. PETs provide a technical safeguard to ensure sensitive information remains protected, even in the most advanced AI applications. With new regulations like the EU AI Act, organizations must adopt privacy-first strategies. PETs are a critical tool to ensure AI transparency, fairness, and trust while maintaining regulatory compliance. Our guest, Jetro Wils, cybersecurity expert and researcher, breaks down how PETs help organizations de-risk AI adoption while ensuring privacy, compliance, and security. Watch now to discover how PETs can help you build digital trust and secure AI-powered innovations!

KEY CONVERSATION POINTS
00:01:33 How would you define digital trust?
00:02:32 What is Privacy Enhancing Technology?
00:04:21 Why do we need PETs when we have laws and principles?
00:10:19 Kinds of AI risk that can also be mitigated by PETs
00:15:12 How would a PET de-risk an AI adoption situation?

ABOUT GUEST
Jetro Wils is a Cloud & Information Security Officer and Cybersecurity Advisor, dedicated to helping organizations operate securely in the cloud era. With a strong focus on information security and compliance, he enables businesses to reduce risk, strengthen cybersecurity frameworks, and achieve peace of mind. With 18 years of experience in Belgium's tech industry, Jetro has held roles spanning software development, business analysis, product management, and cloud specialization. Since 2016, he has witnessed the rapid evolution of cloud technology and the growing challenge organizations face in securely adopting it. Jetro is a 3x Microsoft Certified Azure Expert and a 2x Microsoft Certified Trainer (2022-2024), conducting 10-20 certified training sessions annually on cloud, AI, and security. He has trained over 100 professionals, including enterprise architects, project managers, and engineers. As a technical reviewer for Packt Publishing, he ensures the accuracy of books on cloud and cybersecurity. Additionally, he hosts the BlueDragon Podcast, where he discusses cloud, AI, and security trends with European decision-makers. Jetro holds a professional Bachelor's Degree in Applied Computer Science (2006) and is currently pursuing a Master's in IT Risk and Cybersecurity Management at Antwerp Management School (2023-2025). His research focuses on de-risking AI adoption by enhancing AI security through Privacy Enhancing Technologies (PETs). He is also a certified NIS 2 Lead Implementer working toward a DORA certification.

ABOUT HOST
Punit Bhatia is one of the leading privacy experts; he works independently and has worked with professionals in over 30 countries. Punit works with business and privacy leaders to create an organization culture with high privacy awareness and compliance as a business priority. Selectively, Punit is open to mentoring and coaching professionals. Punit is the author of the books "Be Ready for GDPR" (rated the best GDPR book), "AI & Privacy - How to Find Balance", "Intro To GDPR", and "Be an Effective DPO". He is a global speaker who has spoken at over 30 global events, and the creator and host of the FIT4PRIVACY Podcast, which has been featured among top GDPR and privacy podcasts. As a person, Punit is an avid thinker who believes in thinking, believing, and acting in line with one's values to have joy in life. He has developed a philosophy he calls 'ABC for joy of life', which he passionately shares. Punit is based in Belgium, the heart of Europe.

RESOURCES
Websites: www.fit4privacy.com, www.punitbhatia.com, https://www.linkedin.com/in/jetrow/
Podcast: https://www.fit4privacy.com/podcast
Blog: https://www.fit4privacy.com/blog
YouTube: http://youtube.com/fit4privacy

Law, disrupted
Re-release: Emerging Trends in AI Regulation

Law, disrupted

Play Episode Listen Later Apr 17, 2025 46:34


John is joined by Courtney Bowman, the Global Director of Privacy and Civil Liberties at Palantir, one of the foremost companies in the world specializing in software platforms for big data analytics. They discuss emerging trends in AI regulation. Courtney explains the AI Act recently passed by the EU Parliament, including the four levels of risk it assigns to different AI systems and the different regulatory obligations imposed on each risk level, how the Act treats general-purpose AI systems, and how the final Act evolved in response to lobbying by emerging European companies in the AI space. They discuss whether the EU AI Act will become the global standard that international companies default to because the European market is too large to abandon. Courtney also explains recent federal regulatory developments in the U.S., including the AI framework put out by the National Institute of Standards and Technology, the AI Bill of Rights announced by the White House, which calls for voluntary industry compliance with certain principles, and the Executive Order on Safe, Secure and Trustworthy Development and Use of Artificial Intelligence, which requires each department of the federal government to develop its own plan for the use and deployment of AI. They also discuss the wide range of state-level AI legislative initiatives and the leading role California has played in this process.

Finally, they discuss the upcoming issues legislatures will need to address: translating principles like accountability, fairness and transparency into concrete best practices; instituting testing, evaluation and validation methodologies to ensure that AI systems are doing what they're supposed to do in a reliable and trustworthy way; and addressing concerns around maintaining AI systems over time as the data used by a system evolves until it no longer accurately represents the world it was originally designed to represent.

Podcast link: Law-disrupted.fm
Host: John B. Quinn
Producer: Alexis Hyde
Music and editing by: Alexander Rossi

Scouting for Growth
Areiel Wolanow On Unleashing AI, Quantum, and Emerging Tech

Scouting for Growth

Play Episode Listen Later Apr 16, 2025 49:08


On this episode of the Scouting For Growth podcast, Sabine meets Areiel Wolanow, managing director of FinServ Experts, who discusses his journey from IBM to founding FinServ Experts, emphasising the importance of focusing on business models enabled by technology rather than the technology itself. Areiel delves into the challenges and opportunities presented by artificial intelligence, responsible AI practices, and the implications of quantum computing for data security, highlighting the need for organisations to adapt their approaches to digital transformation and advocating for a migration strategy over traditional transformation methods.

KEY TAKEAWAYS
• Emerging tech should be leveraged to create new business models rather than just re-engineering existing ones. Understanding the business implications of technology is crucial for delivering value.
• When harnessing artificial intelligence, it's essential to identify the real underlying problems within an organisation, assess its maturity, and build self-awareness before applying maturity models and gap analyses.
• The EU AI Act serves as a comprehensive guideline for responsible AI use, offering risk categories and controls that can benefit companies outside the EU by providing a framework for ethical AI practices without the burden of compliance.
• Organisations should prepare for the future of quantum computing by ensuring their data is protected against potential vulnerabilities. This involves adopting quantum-resilient algorithms and planning for the transition well in advance.
• Leaders should place significant responsibility on younger team members who are more familiar with emerging technologies. Providing them with autonomy and support can lead to innovative solutions and successful business outcomes.

BEST MOMENTS
'We focus not on the technology itself, but on the business models the tech enables.'
'The first thing you have to do... is to say, OK, is the proximate cause the real problem?'
'The best AI regulation out there is the EU AI Act... it actually benefits AI companies outside the EU more than it benefits those within.'
'Digital transformations have two things in common. One is they're expensive, and two is they always fail.'

ABOUT THE GUEST
Areiel Wolanow is the managing director of FinServ Experts. He is an experienced business leader with over 25 years of experience in business transformation solutioning, sales, and execution. He served as one of IBM's key thought leaders in blockchain, machine learning, and financial inclusion. Areiel has deep experience leading large, globally distributed teams; he has led programs of over 100 people through the full delivery life cycle and has managed budgets in the tens of millions of dollars. In addition to his delivery experience, Areiel serves as a senior advisor on blockchain, machine learning, and technology adoption; he has worked with central banks and financial regulators around the world, and is currently serving as the insurance industry advisor for the UK Parliament's working group on blockchain.

ABOUT THE HOST
Sabine is a corporate strategist turned entrepreneur. She is the CEO and Managing Partner of Alchemy Crew, a venture lab that accelerates the curation, validation, and commercialization of new tech business models. Sabine is renowned within the insurance sector for building some of the most recognised tech startup accelerators around the world, working with over 30 corporate insurers and accelerating over 100 startup ventures. Sabine is the co-editor of the bestseller The INSURTECH Book, a top 50 Women in Tech, a FinTech and InsurTech influencer, an investor, and a multi-award winner.

Artificial Intelligence in Industry with Daniel Faggella
Global AI Regulations and Their Impact on Industry Leaders - with Micheal Berger of Munich Re

Artificial Intelligence in Industry with Daniel Faggella

Play Episode Listen Later Apr 15, 2025 21:01


Today's guest is Michael Berger, Head of Insure AI at Munich Re. Michael returns to the Emerj podcast platform to discuss the impact of legislation such as the EU AI Act on the insurance industry and broader AI adoption. Our conversation covers how regulatory approaches differ between the United States and the European Union, highlighting the risk-based framework of the EU AI Act and the litigation-driven environment in the U.S. Michael explores key legal precedents, including AI liability cases, and what they signal for business leaders implementing AI-driven solutions. If you've enjoyed or benefited from some of the insights of this episode, consider leaving us a five-star review on Apple Podcasts, and let us know what you learned, found helpful, or liked most about this show!

AI Tool Report Live
Biden vs Trump: How U.S. AI Policy Is Shifting

AI Tool Report Live

Play Episode Listen Later Apr 15, 2025 31:40


In this episode of The AI Report, Christine Walker joins Arturo Ferreira to launch a new series on the legal side of artificial intelligence. Christine is a practicing attorney helping businesses understand how to navigate AI risk, compliance, and governance in a rapidly changing policy environment. They explore how the shift from the Biden to the Trump administration is changing the tone on AI regulation, what the EU AI Act means for U.S. companies, and why many of the legal frameworks we need for AI already exist. Christine breaks down how lawyers apply traditional legal principles to today's AI challenges, from intellectual property and employment law to bias and defamation.

Also in this episode:
• The risk of waiting for regulation to catch up
• How companies can conduct internal AI audits
• What courts are already doing with AI tools
• Why even lawyers are still figuring this out in real time
• What businesses should be doing now to reduce liability

Christine offers a grounded, practical view of what it means to use AI responsibly, even when the law seems unclear.

Subscribe to The AI Report: theaireport.ai
Join our community: skool.com/the-ai-report-community/about

Chapters:
(00:00) The Legal Risks of AI and Why It's Still a Black Box
(01:13) Christine Walker's Background in Law and Tech
(03:07) Biden vs Trump: Competing AI Governance Philosophies
(04:53) What Governance Means and Why It Matters
(06:26) Comparing the EU AI Act with the U.S. Legal Vacuum
(08:14) Case Law on IP, Bias, and Discrimination
(10:50) Why the Fear Around AI May Be Misplaced
(13:15) Legal Precedents: What Tech History Teaches Us
(16:06) The GOP's AI Stance and Regulatory Philosophy
(18:35) Most AI Use Cases Already Fall Under Existing Law
(21:11) Why Precedents Take So Long—and What That Means
(23:08) Will AI Accelerate the Legal System?
(25:24) AI + Lawyers: A Collaborative Model
(27:15) Hallucinations, Case Law, and Legal Responsibility
(28:36) Building Policy Now to Avoid Legal Pain Later
(30:59) Christine's Final Advice for Businesses and Builders

Science 4-Hire
Responsible AI In 2025 and Beyond – Three pillars of progress

Science 4-Hire

Play Episode Listen Later Apr 15, 2025 54:44


"Part of putting an AI strategy together is understanding the limitations and where unintended consequences could occur, which is why you need diversity of thought within committees created to guide AI governance and ethics." – Bob Pulver

My guest for this episode is my friend in ethical/responsible AI, Bob Pulver, the founder of CognitivePath.io and host of the podcast "Elevate Your AIQ." Bob specializes in helping organizations navigate the complexities of responsible AI, from strategic adoption to effective governance practices. Bob was my guest about a year ago, and in this episode he drops back in to discuss what has changed in the fast-paced world of AI across three pillars of responsible AI usage:
* Human-Centric AI
* AI Adoption and Readiness
* AI Regulation and Governance

These are the themes we explore in our conversation, along with our thoughts on what has changed and evolved over the past year.

1. Human-Centric AI
Change from Last Year:
* Shift from compliance-driven AI towards a more holistic, human-focused perspective, emphasizing AI's potential to enhance human capabilities and fairness.
Reasons for Change:
* Increasing comfort level with AI and experience with the benefits it brings to our work
* Continued exploration and development of low-stakes, low-friction use cases
* AI continues to be seen as a partner and magnifier of human capabilities
What to Expect in the Next Year:
* Increased experience with human-machine partnerships
* Increased opportunities to build superpowers
* Increased adoption of human-centric tools by employers

2. AI Adoption and Readiness
Change from Last Year:
* Organizations have moved from cautious, fragmented adoption to structured, strategic readiness and literacy initiatives.
* Significant growth in AI educational resources and adoption within teams, rather than just individuals.
Reasons for Change:
* Improved understanding of AI's benefits and limitations, reducing fears and resistance.
* Availability of targeted AI literacy programs, promoting organization-wide AI understanding and capability building.
What to Expect in the Next Year:
* More systematic frameworks for AI adoption across entire organizations.
* Increased demand for formal AI proficiency assessments to ensure responsible and effective usage.

3. AI Regulation and Governance
Change from Last Year:
* Transition from broad discussions about potential regulations towards concrete legislative actions, particularly at state and international levels (e.g., EU AI Act, California laws).
* Momentum to hold vendors of AI increasingly accountable for ethical AI use.
Reasons for Change:
* Growing awareness of risks associated with unchecked AI deployment.
* Increased push to stay on the right side of AI via legislative activity at state and global levels addressing transparency, accountability, and fairness.
What to Expect in the Next Year:
* Implementation of stricter AI audits and compliance standards.
* Clearer responsibilities for vendors and organizations regarding ethical AI practices.
* Finally, some concrete standards that will require fundamental changes in oversight and create messy situations.

Practical Takeaways: What should we be doing to move the ball forward and realize AI's full potential while limiting collateral damage?

Prioritize Human-Centric AI Design
* Define Clear Use Cases: Ensure AI is solving a genuine human-centered problem rather than just introducing technology for technology's sake.
* Promote Transparency and Trust: Clearly communicate how and why AI is being used, ensuring it enhances rather than replaces human judgment and involvement.
Build Robust AI Literacy and Education Programs
* Develop Organizational AI Literacy: Implement structured training initiatives that educate employees about fundamental AI concepts, the practical implications of AI use, and ethical considerations.
* Create Role-Specific Training: Provide tailored AI skill-building programs based on roles and responsibilities, moving beyond individual productivity to team-based effectiveness.
Strengthen AI Governance and Oversight
* Adopt Proactive Compliance Practices: Align internal policies with rigorous standards such as the EU AI Act to preemptively prepare for emerging local and global legislation.
* Vendor Accountability: Develop clear guidelines and rigorous vetting processes for vendors to ensure transparency and responsible use, preparing your organization for upcoming regulatory audits.
Monitor AI Effectiveness and Impact
* Continuous Monitoring: Shift from periodic audits to continuous monitoring of AI tools to ensure fairness, transparency, and functionality.
* Evaluate Human Impact Regularly: Regularly assess the human impact of AI tools on employee experience, fairness in decision-making, and organizational trust.

Email Bob: bob@cognitivepath.io
Listen to Bob's awesome podcast, Elevate Your AIQ.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit charleshandler.substack.com

The Tech Blog Writer Podcast
3241: Transparency, Trust, and AI: Atlassian's Legal Framework in Action

The Tech Blog Writer Podcast

Play Episode Listen Later Apr 14, 2025 23:38


At Team '25 in Anaheim, I had the unique opportunity to sit down with Stan Shepherd, General Counsel at Atlassian, for a conversation that pulled back the curtain on how legal and technology are intersecting in the age of AI. Stan's journey from journalism to law to shaping legal operations at one of the world's most forward-thinking companies is as fascinating as it is relevant. What emerged from our discussion is a clear signal that legal teams are no longer trailing behind innovation—they're often at the front of it. Stan shared how Atlassian's legal function achieved 85 percent daily usage of AI tools, including the company's in-house assistant, Rovo. This is remarkable when compared to the industry norm, where legal teams typically lag in AI adoption. Instead of resisting change, Stan's team leaned into it, focusing on automation for repetitive tasks while reserving high-value thinking for their legal experts. We explore Atlassian's responsible tech framework, their principles around transparency and accountability, and how these inform product development from day one. Stan also walked me through how Atlassian is navigating the emerging global regulatory landscape, from the EU AI Act to evolving compliance in the US. His insights on embedding legal counsel directly into product teams, rather than operating on the sidelines, reveal a model of collaboration that turns risk management into a growth enabler. For legal professionals, compliance leaders, and tech decision-makers wrestling with how to integrate AI responsibly, this episode offers a grounded, real-world blueprint. It's not just about mitigating risk—it's about building trust, preserving human judgment, and future-proofing your operations. If you're wondering what responsible AI adoption looks like at scale, you'll want to hear this one. So how are you preparing your legal and compliance strategy for the AI-powered workplace? Let's keep the conversation going.

HFS PODCASTS
Unfiltered Stories | Beyond Bots – Unifying Automation with Universal Orchestration

HFS PODCASTS

Play Episode Listen Later Apr 9, 2025 10:56


Join HFS Practice Leader Ashish Chaturvedi and C TWO CEO Erik Lien as they unpack the critical business imperatives behind intelligent automation orchestration. Discover how effective orchestration can significantly boost bot utilization, simplify governance, and drive measurable ROI. Explore insights on transitioning from siloed RPA to unified AI-driven automation, managing autonomous AI agents, ensuring compliance, and leveraging advanced analytics to optimize automation strategies. The key points discussed include:
- Maximizing automation ROI: Intelligent orchestration increases bot utilization by over 50%, reduces manual overhead by 75%, and resolves up to 90% of support issues.
- Breaking automation silos: Effective orchestration integrates fragmented automation initiatives, moving enterprises beyond isolated RPA deployments to comprehensive intelligent automation platforms.
- Governance and compliance: Orchestration provides essential governance, auditability, and error handling, ensuring compliance with evolving regulatory frameworks like the EU AI Act.
- Managing autonomous AI agents: Advanced orchestration manages deterministic bots and autonomous AI agents seamlessly, ensuring control, prioritization, and efficiency at an item level.
- Future automation landscape: The convergence of automation, AI, analytics, and optimization through orchestration platforms is key to achieving higher efficiency, governance, and business-driven insights.
Dive deeper into the future of intelligent automation orchestration. Visit the HFS website to access the full report titled “RPA supervisor to IA orchestrator—C TWO advances up the Generative Enterprise S-curve” here: https://www.hfsresearch.com/research/rpa-ia-orchestrator-ctwo-enterprise-s-curve/

The Data Diva E231 - Soribel Feliz and Debbie Reynolds

"The Data Diva" Talks Privacy Podcast

Play Episode Listen Later Apr 8, 2025 32:16 Transcription Available


Debbie Reynolds “The Data Diva” talks to Soribel Feliz, AI Governance, National Security, AI Coach, Ex-Meta, and former diplomat. We discuss artificial intelligence policy, governance, and its societal implications. Soribel shares her unique career journey, beginning as a U.S. diplomat serving in Europe, South America, and Washington, D.C., before making a bold transition into the tech industry. She provides a behind-the-scenes look at her work at Meta, where she contributed to election integrity and content moderation, and later at Microsoft, where she helped shape the company's response to the emergence of ChatGPT. She also discusses her time in Congress as a Rapid Response AI Policy Fellow, where she played a crucial role in helping lawmakers understand and regulate AI, leading to her current work in the U.S. government on AI compliance and governance.

Throughout the conversation, Soribel examines the necessity of AI guardrails to mitigate potential harms while fostering innovation. She challenges the notion that regulation stifles technological progress, arguing that responsible AI development is essential to prevent unintended consequences and protect vulnerable populations. She also provides insight into the growing efforts within Congress to improve technological literacy, including specialized fellowships and collaborations with think tanks to ensure more informed policymaking.

Debbie and Soribel also discuss the broader global impact of AI regulations, particularly the EU AI Act, which has set a precedent for risk-based governance. They explore the challenges of implementing age verification laws, weighing the benefits of child protection against the privacy risks and potential barriers to access that such laws may create. Soribel emphasizes the importance of workforce adaptation, noting that as AI reshapes industries, professionals must explore new career paths and leverage transferable skills to remain competitive. Drawing from her expertise as a career coach, she offers valuable advice on transitioning into emerging fields without the need for a complete restart.

The conversation highlights growing concerns over AI's effects on employment, economic inequality, misinformation, and data privacy. Soribel underscores the importance of making AI discussions more accessible to the public, avoiding overly technical jargon, and focusing on real-world impacts. She warns of the dangers posed by unchecked AI development but also encourages a balanced perspective that acknowledges both the risks and opportunities presented by the technology.

Soribel shares her vision for a future where AI's economic benefits are more equitably distributed and where technological advancements align with sustainability efforts. She advocates for a more responsible and ethical approach to AI development—one that prioritizes fairness, transparency, and societal well-being.

This episode offers an in-depth look at the most pressing AI policy challenges and the evolving role of governance in shaping the future of technology.

Support the show

SocialTalent's The Shortlist
The Legal Side of AI in Hiring with Paul Britton

SocialTalent's The Shortlist

Play Episode Listen Later Apr 2, 2025 17:22


In this special episode of Hiring Excellence, originally broadcast during our SocialTalent Live event, we're tackling one of the most pressing challenges in modern hiring: AI-driven candidate cheating. Legal expert Paul Britton, Managing Partner at Britton & Time Solicitors, joins us to break down the real implications of the EU AI Act, what it means for hiring teams globally, and how to navigate the growing risks around candidate misrepresentation. From legal accountability to practical policy, this conversation is packed with must-know insights for every TA and HR leader.

Between Two COO's with Michael Koenig
AI and Privacy: Navigating the EU's New AI Act & the Impact on US Companies with Flick Fisher

Between Two COO's with Michael Koenig

Play Episode Listen Later Apr 1, 2025 36:43


Try Fellow's AI Meeting Copilot - 90 days FREE - fellow.app/coo

AI and Privacy: Navigating the EU's New AI Act with Flick Fisher
In this episode of Between Two COOs, host Michael Koenig welcomes back Flick Fisher, an expert on EU privacy law. They dive deep into the newly enacted EU Artificial Intelligence Act and its implications for businesses globally. They discuss compliance challenges, prohibited AI practices, and the potential geopolitical impact of AI regulation. For leaders and operators navigating AI in business, this episode provides crucial insights into managing AI technology within regulatory frameworks.

00:00 Introduction to Fellow and AI Meeting Assistant
01:01 Introduction to Between Two COOs Episode
02:08 What is the EU's AI Act?
03:42 Prohibited AI Practices in the EU
07:46 Enforcement and Compliance Challenges
12:18 US vs EU: Regulatory Landscape
29:58 Impact on Companies and Consumers
31:55 Future of AI Regulation

Between Two COO's - https://betweentwocoos.com
Between Two COO's Episode
Michael Koenig on LinkedIn
Flick Fisher on LinkedIn
Flick on Data Privacy and GDPR on Between Two COO's
More on Flick's take on the EU's AI Act

Machine Learning and AI applications
#120 The March AI Sandwich - What You Might Have Missed

Machine Learning and AI applications

Play Episode Listen Later Mar 29, 2025 19:17


March 2025 delivered some of the most important global AI updates we've seen this year — but with the speed of change, it's easy to miss the big picture.In this Season 12 finale, we're joined by AI ecosystem expert Manjeet to slice through the noise and serve up the “March AI Sandwich” — four essential layers of innovation, insight, and impact.

The Strategic GC, Gartner's General Counsel Podcast
How to Navigate Global AI Regulation

The Strategic GC, Gartner's General Counsel Podcast

Play Episode Listen Later Mar 28, 2025 2:20


Only have time to listen in bite-sized chunks? Skip straight to the parts of the podcast most relevant to you:
- Get a rundown of the global AI regulatory landscape. (1:03)
- Discover which U.S. states have enacted, or are considering, AI laws. (2:18)
- Focus on the critical aspects of the EU AI Act. (4:49)
- Hear which three principles AI laws worldwide have converged around. (7:40)
- Determine the transparency requirements in the AI laws and how GCs should respond. (8:40)
- Find out actions to meet laws' risk management requirements. (10:27)
- Discern how to ensure fairness in AI systems. (13:16)
- Know what the regulatory requirements mean for AI risk governance. (14:54)
- Learn why the general counsel's (GC's) role is to “steady the ship.” (17:31)

In this installment of the Strategic GC Podcast, Gartner Research Director Stuart Strome and host Laura Cohn discuss the GC's role in helping organizations navigate the steady rise in the volume and complexity of AI regulations worldwide.

Listen now to get a rundown on what GCs need to know about the current regulatory landscape, including developments in the U.S. and the EU. Plus, learn how GCs can streamline compliance by focusing on the three common principles AI laws worldwide have converged around — transparency, risk management and fairness — and make organizations more adaptable to new regulations. You can also hear action steps GCs can take to incorporate new requirements into existing processes, creating consistency in policies and procedures while minimizing the burden on the business.

Eager to hear more? The Strategic GC Podcast publishes the last Thursday of every month. Plus, listen back to past episodes: The Strategic GC Podcast (2024 Season)

About the Guest
Stuart Strome is a research director for Gartner's assurance practice, managing the legal and compliance risk management process research agenda. Much of his research focuses on the impact of AI regulations on legal and compliance departments and best practices for identifying, governing and mitigating legal and compliance-related AI risks. Before Gartner, Strome, who has a Ph.D. in political science from the University of Florida, held roles conducting research in a variety of fields, including criminology, public health and international security.

Take Gartner with you. Gartner clients can listen to the full episode and read more provocative insights and expertise on the go with the Gartner Mobile App. Become a Gartner client to access exclusive content from global thought leaders. Visit www.gartner.com today!

AWS for Software Companies Podcast
Ep088: Monetizing and Productizing Generative AI for SaaS with RingCentral & Zoom

AWS for Software Companies Podcast

Play Episode Listen Later Mar 27, 2025 36:30


Tech leaders from RingCentral, Zoom and AWS discuss how generative AI is transforming business communications while balancing challenges & regulatory concerns in this rapidly evolving landscape.

Topics Include:
- Introduction of panel on generative AI's impact on businesses.
- How to transition AI from prototypes to production.
- Understanding value creation for customers through AI.
- Introduction of Khurram Tajji from RingCentral.
- Introduction of Brendan Ittleson from Zoom.
- How generative AI fits into Zoom's product offerings.
- Zoom's AI companion available to all paid customers.
- Zoom's federated approach to AI model selection.
- RingCentral's new AI Receptionist (AIR) launch.
- How AIR routes calls using generative AI capabilities.
- AI improving customer experience through sentiment analysis.
- The disproportionate value of real-time AI assistance.
- Economics of delivering real-time AI capabilities.
- Real-time AI compliance monitoring in banking.
- Value of preventing regulatory fines through AI.
- Voice cloning detection through AI security.
- Democratizing AI access across Zoom's platform.
- Monetizing specialized AI solutions for business value.
- Challenges in taking AI prototypes to production.
- Importance of selecting the right AI models.
- Privacy considerations when training AI models.
- Maintaining quality without using customer data for training.
- Co-innovation with customers during product development.
- Scaling challenges for AI businesses.
- Case study of AI in legal case assessment.
- Ensuring unit economics work before scaling AI applications.
- Zoom's approach to scaling AI across products.
- Importance of centralizing but federating AI capabilities.
- Breaking down data silos for effective AI context.
- Navigating evolving regulations around AI.
- EU AI Act restrictions on emotion inference.
- Balancing regulations with customer experience needs.
- Future of AI agents interacting with other agents.
- How AI enhances human connection by handling routine tasks.
- Impact of AI on company valuations and M&A activity.

Participants:
- Khurram Tajji – Group CMO & Partnerships, RingCentral
- Brendan Ittleson – Chief Ecosystem Officer, Zoom
- Sirish Chandrasekaran – VP of Analytics, AWS

See how Amazon Web Services gives you the freedom to migrate, innovate, and scale your software company at https://aws.amazon.com/isv/

#dieVertriebsmanager - VTalk Der gute Sales Ton - mehr als nur heiße
EU AI Act: What Sales Leaders Need to Know Now!, Michael Zimmer & Florian Läpple

#dieVertriebsmanager - VTalk Der gute Sales Ton - mehr als nur heiße

Play Episode Listen Later Mar 21, 2025 52:01 Transcription Available


Summary: In this episode, Ann-Kathrin and Georg talk with Michael and Florian about the challenges and opportunities the EU AI Act brings for sales leaders. They examine responsibility in the use of AI, the legal framework, and the practical challenges that arise from using AI tools. They also share valuable tips for the safe use of AI in sales and discuss the various roles and responsibilities within companies. Ann-Kathrin De Moy, Michael, and Florian Läpple further discuss the challenges and opportunities that the use of artificial intelligence (AI) brings to companies. They highlight the importance of data protection, the requirements of the EU AI Act, and the need for careful documentation, and they offer practical recommendations for how companies can deploy AI responsibly.

Innovation in Compliance with Tom Fox
Navigating AI Governance in 2025 with Christine Uri

Innovation in Compliance with Tom Fox

Play Episode Listen Later Mar 11, 2025 35:55


Innovation comes in many forms, and compliance professionals need to be ready for it and embrace it. Join Tom Fox, the Voice of Compliance, as he visits with top innovative minds, thinkers, and creators in the award-winning Innovation in Compliance podcast. In this episode, host Tom Fox welcomes Christine Uri to discuss her insights and experiences in AI governance. Christine shares her extensive background as a legal executive and outlines her current work advising general counsels on governance and sustainability issues at her consulting firm, CURI Insights. Christine emphasizes the importance of a cross-functional committee to oversee AI governance and highlights AI technology's rapid evolution and inherent risks. The episode also covers the implications of the EU AI Act, the urgency of building AI literacy, and the challenges of managing AI risks in a dynamic regulatory landscape. As AI continues to evolve at a breakneck pace, Christine offers practical advice on how companies can keep up and ensure robust governance frameworks are in place to mitigate risks.

Key highlights:
- AI Governance and Compliance
- AI Governance in 2025
- EU AI Act and Its Implications
- Building AI Literacy in Compliance
- Future of AI and Compliance

Resources:
Christine Uri on LinkedIn
Allie K Miller
Luiza Jarovsky
Hard Fork podcast
CURI Insights

Tom Fox
Instagram
Facebook
YouTube
Twitter
LinkedIn

Risk Management Show
AI Security Risks - what every Risk Manager Must Know with Dr. Peter Garraghan

Risk Management Show

Play Episode Listen Later Mar 5, 2025 25:54


In this episode of the Risk Management Show podcast, we explore AI Security Risks and what every risk manager must know. Dr. Peter Garraghan, CEO and co-founder of Mind Guard and a professor of computer science at Lancaster University, shares his expertise on managing the evolving threat landscape in AI. With over €11M in research funding and 60+ published papers, he reveals why traditional cybersecurity tools often fail to address AI-specific vulnerabilities and how organizations can safely adopt AI while mitigating risks. We discuss AI's role in Risk Management, Cyber Security, and Sustainability, and provide actionable insights for Chief Risk Officers and compliance professionals. Dr. Garraghan outlines practical steps for minimizing risks, aligning AI with regulatory frameworks like GDPR, and leveraging tools like ISO 42001 and the EU AI Act. He also breaks down misconceptions about AI and its potential impact on businesses and society. If you want to be our guest or suggest a guest, send your email to info@globalriskconsult.com with the subject line "Podcast Guest Inquiry." Don't miss this essential conversation for anyone navigating AI and risk management!

Tech Law Talks
AI explained: The EU AI Act, the Colorado AI Act and the EDPB

Tech Law Talks

Play Episode Listen Later Mar 4, 2025 22:33 Transcription Available


Partners Catherine Castaldo, Andy Splittgerber, Thomas Fischl and Tyler Thompson discuss various recent AI acts around the world, including the EU AI Act and the Colorado AI Act, as well as guidance from the European Data Protection Board (EDPB) on AI models and data protection. The team presents an in-depth explanation of the different acts and points out the similarities and differences between the two. What should we do today, even though the Colorado AI Act is not in effect yet? What do these two acts mean for the future of AI?

Digitale Optimisten: Perspektiven aus dem Silicon Valley
AI Act: Is the EU Regulating Artificial Intelligence to Death? (with Benedikt Flöter, Partner at YPOG)

Digitale Optimisten: Perspektiven aus dem Silicon Valley

Play Episode Listen Later Mar 2, 2025 69:44


#209 Everything about the EU AI Act with Benedikt Flöter, Partner at YPOG. If the EU's AI Act is the big breakthrough, why aren't the latest AI models coming to Germany? Alex talks with Benedikt to find out.

Partner of this episode: QONTO. Incorporate a GmbH in record time. 3 months of free account. Hard to beat. Click here: http://qonto.de/optimisten

Chapters:
(00:00) Intro
(02:58) What does the EU want to achieve with the AI Act?
(12:18) What are the risk categories?
(23:14) Is this practicable at all?
(26:54) Loopholes in the AI Act
(40:25) Why the latest AI models don't launch in Europe
(1:05:10) Benedikt's business idea

More info: In this episode, Alexander Mrozek and Benedikt Flöter, partner at YPOG, discuss the EU AI Act, which aims to create a harmonized legal framework for artificial intelligence in Europe. They examine the goals of the law, the challenges of regulation, the various risk categories of AI, and the compliance requirements for companies. The discussion also covers the balance between technology and use cases as well as the impact on innovation and small businesses. They address the gray areas of the AI Act and the exclusion of military applications, and discuss how the EU is trying to create a harmonized legal space while facing implementation challenges and possible fragmentation of that legal space. They also cover the risk classification of AI applications and its impact on startups. Finally, they take an international perspective on the AI Act, in particular why many companies, despite the regulation, hesitate to launch their products in the EU, and why many startups move to the U.S. Benedikt and Alex also discuss the challenges and opportunities for AI startups in Germany, the effects of the AI Act, the importance of venture clienting, and the latest developments in AI technology, particularly the rise of DeepSeek. They emphasize the need to foster innovation and strengthen collaboration between startups and large companies in order to compete globally.

Keywords: EU AI Act, regulation, artificial intelligence, innovation, compliance, risk management, data protection, digital economy, technology, use cases, military applications, EU legal space, risk classifications, international perspective, startups, US flip, venture clienting, DeepSeek, market opportunities, Germany, USA

Microsoft Business Applications Podcast
AI's Transformative Power: Navigating Regulation, Ethics, and Workplace Innovation

Microsoft Business Applications Podcast

Play Episode Listen Later Feb 25, 2025 34:52 Transcription Available


Get featured on the show by leaving us a Voice Mail: https://bit.ly/MIPVM

FULL SHOW NOTES: https://www.microsoftinnovationpodcast.com/659

What if AI could transform the way we navigate our professional and personal lives? This episode addresses the pressing challenges of balancing AI innovation with the regulatory frameworks emerging worldwide. We explore the differences between the EU and U.S. approaches to AI regulation, the importance of human rights considerations, and the responsibility organizations have in navigating ethical implications.

TAKEAWAYS
• Highlighting advancements in AI tools
• Discussion of Grok 3 and its capabilities
• Exploring the EU AI Act versus U.S. regulatory approaches
• Complexity of navigating international AI regulations
• The risk of human rights violations with AI algorithms
• Emphasizing educational needs for organizations
• Importance of a responsible culture in AI implementation

This year we're adding a new show to our lineup - The AI Advantage. We'll discuss the skills you need to thrive in an AI-enabled world.

DynamicsMinds is a world-class event in Slovenia that brings together Microsoft product managers, industry leaders, and dedicated users to explore the latest in Microsoft Dynamics 365, the Power Platform, and Copilot. Early bird tickets are on sale now, and listeners of the Microsoft Innovation Podcast get 10% off with the code MIPVIP144bff: https://www.dynamicsminds.com/register/?voucher=MIPVIP144bff

Accelerate your Microsoft career with the 90 Day Mentoring Challenge. We've helped 1,300+ people across 70+ countries establish successful careers in the Microsoft Power Platform and Dynamics 365 ecosystem. Benefit from expert guidance, a supportive community, and a clear career roadmap. A lot can change in 90 days - get started today!

Support the show

If you want to get in touch with me, you can message me here on LinkedIn. Thanks for listening.

Resilient Cyber
Resilient Cyber w/ Ed Merrett - AI Vendor Transparency: Understanding Models, Data and Customer Impact

Resilient Cyber

Play Episode Listen Later Feb 13, 2025 23:55


In this episode of Resilient Cyber, Ed Merrett, Director of Security & TechOps at Harmonic Security, dives into AI vendor transparency. We discussed the nuances of understanding models and data and the potential for customer impact related to AI security risks. Ed and I dove into a lot of interesting GenAI security topics, including: Harmonic's recent report on GenAI data leakage, which shows that nearly 10% of all organizational user prompts include sensitive data such as customer information, intellectual property, source code, and access keys; guardrails and measures to prevent data leakage to external GenAI services and platforms; the intersection of SaaS governance and security with GenAI, and how GenAI is exacerbating longstanding SaaS security challenges; supply chain risk management considerations with GenAI vendors and services, and the key questions and risks organizations should be considering; some of the nuances between self-hosted GenAI/LLMs and external GenAI SaaS providers; and the role of compliance around GenAI, including the different approaches we see in the EU (with the EU AI Act, NIS2, DORA, and more) versus the U.S.

The Brave Marketer
The EU's Approach to Digital Policy and Lessons Learned From The GDPR

The Brave Marketer

Play Episode Listen Later Feb 12, 2025 50:36


Kai Zenner, Head of Office and Digital Policy Adviser for German Member of the European Parliament Axel Voss, discusses the emerging regulatory landscape for artificial intelligence in Europe and its implications for innovation and consumer safety. He also discusses the implementation hurdles of the EU AI Act, specifically the shortage of AI experts and the complexity of enforcement across 27 member states. Key Takeaways: challenges with the AI Act (such as vague use cases, balancing innovation with regulation, and the impact on SMEs); lessons from GDPR, including upcoming changes being considered that could impact data privacy; horizontal legislative approaches and their implications; and future prospects for AI regulation in Europe. Guest Bio: Kai Zenner is Head of Office and Digital Policy Adviser for MEP Axel Voss in the European Parliament, focusing on AI, privacy, and the EU's digital transition. He is involved in negotiations on the AI Act, AI Liability Directive, ePrivacy Regulation, and GDPR revision. A member of the OECD.AI Network of Experts and the World Economic Forum's AI Governance Alliance, Zenner also served on the UN's High-Level Advisory Body on AI. He was named Best MEP Assistant in 2023 and ranked #13 in Politico's Power 40 for his influence on EU digital policy. ---------------------------------------------------------------------------------------- About this Show: The Brave Technologist is here to shed light on the opportunities and challenges of emerging tech. To make it digestible, less scary, and more approachable for all! Join us as we embark on a mission to demystify artificial intelligence, challenge the status quo, and empower everyday people to embrace the digital revolution. Whether you're a tech enthusiast, a curious mind, or an industry professional, this podcast invites you to join the conversation and explore the future of AI together. 
The Brave Technologist Podcast is hosted by Luke Mulks, VP Business Operations at Brave Software—makers of the privacy-respecting Brave browser and Search engine, and now powering AI everywhere with the Brave Search API. Music by: Ari Dvorin Produced by: Sam Laliberte  

Discover Daily by Perplexity
Altman Reconsiders Open-Source Strategy, EU Bans Risky AI Systems, and Super-Earth Discovered

Discover Daily by Perplexity

Play Episode Listen Later Feb 7, 2025 9:09 Transcription Available


We're experimenting and would love to hear from you! In this episode of 'Discover Daily', we explore Sam Altman's recent acknowledgment that OpenAI may need to reconsider its open-source strategy, suggesting they might be "on the wrong side of history." This significant shift comes as competitive pressure from open-source models like DeepSeek R1 continues to mount, with Altman praising DeepSeek as "a very good model" that has narrowed OpenAI's traditional lead in the field. The European Union has taken a historic step in AI regulation with the implementation of the EU AI Act's first phase on February 2, 2025. The legislation prohibits AI systems deemed to pose "unacceptable risks," including manipulative systems, social scoring, and untargeted facial recognition databases. Violations can result in substantial penalties of up to €35 million or 7% of a company's total worldwide annual turnover, demonstrating the EU's commitment to establishing itself as a global leader in trustworthy AI development. Our main story focuses on two remarkable super-Earth discoveries within their stars' habitable zones. TOI-715 b, located 137 light-years away, is approximately 1.5 times wider than Earth and orbits its red dwarf star every 19 days. The second discovery, HD 20794 d, orbits a Sun-like star just 20 light-years from Earth and is roughly six times more massive than Earth, with an elliptical orbit that moves in and out of the habitable zone. These discoveries represent significant milestones in our search for potentially habitable worlds and provide promising targets for future research with advanced instruments like the James Webb Space Telescope. From Perplexity's Discover Feed: https://www.perplexity.ai/page/altman-reconsiders-open-source-fT0uV12jTna0XkxW8xkEDQ https://www.perplexity.ai/page/eu-bans-risky-ai-systems-.iTygUNvS2mKll.lL9xFdA https://www.perplexity.ai/page/super-earth-discovered-WR42RfwCSQWU1ebaQaeQxw Perplexity is the fastest and most powerful way to search the web. 
Perplexity crawls the web and curates the most relevant and up-to-date sources (from academic papers to Reddit threads) to create the perfect response to any question or topic you're interested in. Take the world's knowledge with you anywhere. Available on iOS and Android Join our growing Discord community for the latest updates and exclusive content. Follow us on: Instagram Threads X (Twitter) YouTube Linkedin

This Week in Google (MP3)
IM 805: Doomers, Gloomers, Bloomers, and Zoomers - Zack Kass Interview, DeepSeek Hype, EU AI Act

This Week in Google (MP3)

Play Episode Listen Later Feb 6, 2025 165:30


Interview with Zack Kass, Former GTM for OpenAI Why you can deep-six the DeepSeek hype Gemini 2.0 is now available to everyone OpenAI has undergone its first ever rebrand, giving fresh life to ChatGPT interactions AI Has Shown Me My Future. Here's What I've Learned. Senator Hawley Proposes Jail Time for People Who Download DeepSeek Hugging Face researchers aim to build an 'open' version of OpenAI's deep research tool Anthropic makes 'jailbreak' advance to stop AI models producing harmful results WSJ: The Manhattan Project Was Secret. Should America's AI Work Be Too? EU AI Act: Ban on certain AI practices and requirements for AI literacy come into effect Cathy Gellis: When It's Not Just A Coup But A CFAA Violation Too a16z slides on AI and voice Microsoft AI CEO Mustafa Suleyman poaches three Google DeepMind former colleagues, including two who built NotebookLM's Audio Overviews and worked on Astra Meta's CTO said the metaverse could be a 'legendary misadventure' if the company doesn't boost sales, leaked memo shows The Salvadoran Mega-Prison Offering to Take America's Worst Criminals Hilarious analyst on Tesla How the DJI Flip uses AI Marketers will have to market to AI agents AI systems could be 'caused to suffer' if consciousness achieved, says research Hosts: Leo Laporte, Jeff Jarvis, and Mike Elgan Guest: Zack Kass Download or subscribe to Intelligent Machines at https://twit.tv/shows/intelligent-machines. Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit Sponsor: zscaler.com/security

This Week in Google (Video HI)
IM 805: Doomers, Gloomers, Bloomers, and Zoomers - Zack Kass Interview, DeepSeek Hype, EU AI Act

This Week in Google (Video HI)

Play Episode Listen Later Feb 6, 2025 165:30


Interview with Zack Kass, Former GTM for OpenAI Why you can deep-six the DeepSeek hype Gemini 2.0 is now available to everyone OpenAI has undergone its first ever rebrand, giving fresh life to ChatGPT interactions AI Has Shown Me My Future. Here's What I've Learned. Senator Hawley Proposes Jail Time for People Who Download DeepSeek Hugging Face researchers aim to build an 'open' version of OpenAI's deep research tool Anthropic makes 'jailbreak' advance to stop AI models producing harmful results WSJ: The Manhattan Project Was Secret. Should America's AI Work Be Too? EU AI Act: Ban on certain AI practices and requirements for AI literacy come into effect Cathy Gellis: When It's Not Just A Coup But A CFAA Violation Too a16z slides on AI and voice Microsoft AI CEO Mustafa Suleyman poaches three Google DeepMind former colleagues, including two who built NotebookLM's Audio Overviews and worked on Astra Meta's CTO said the metaverse could be a 'legendary misadventure' if the company doesn't boost sales, leaked memo shows The Salvadoran Mega-Prison Offering to Take America's Worst Criminals Hilarious analyst on Tesla How the DJI Flip uses AI Marketers will have to market to AI agents AI systems could be 'caused to suffer' if consciousness achieved, says research Hosts: Leo Laporte, Jeff Jarvis, and Mike Elgan Guest: Zack Kass Download or subscribe to Intelligent Machines at https://twit.tv/shows/intelligent-machines. Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit Sponsor: zscaler.com/security

New York City Bar Association Podcasts -NYC Bar
Liabilities and Remedies for AI: Charting New Territory

New York City Bar Association Podcasts -NYC Bar

Play Episode Listen Later Feb 6, 2025 46:11


Our latest episode from the Presidential Task Force on Artificial Intelligence and Digital Technologies surveys an emerging landscape of legislation around AI liabilities and remedies. David Lisson (Davis Polk), Clint Morrison (Patterson Belknap), Shayne O'Reilly (Meta), Matt Bacal (Davis Polk), and Rama Elluru (Special Competitive Studies Project) unpack regulations from state, federal, and international bodies covering topics such as disclosure and transparency, kids' safety, deep fakes, non-consensual intimate imagery, and intellectual property. They also touch upon the significant penalties under the EU AI Act and the broader themes emerging from these legislative efforts, emphasizing the balance between innovation and regulation. If you're interested in learning more about how artificial intelligence will affect the legal world, check out the City Bar's Artificial Intelligence Institute, available on-demand. Visit nycbar.org/events to find all of the most up-to-date information about our upcoming programs and events. 01:20 Federal AI Laws and Regulations 03:06 Pending AI Bills in the U.S. 14:35 State-Level AI Legislation 32:21 International AI Regulations: The EU AI Act 41:06 Closing Thoughts and Future Outlook 45:15 Outro and Additional Resources

Distributed Training, Decentralized AI: Prime Intellect's Master Plan to Make AI Too Cheap to Meter

Play Episode Listen Later Feb 5, 2025 138:41


Vincent Weisser and Johannes Hagemann, founders of Prime Intellect, join a conversation on the Cognitive Revolution to delve into distributed training, decentralized AI, and their vision for a future where compute and intelligence are widely accessible. They discuss the technical challenges and advantages of distributed training, emphasizing how such systems can democratize AI technology and create a more equitable future. The founders also describe their broader goal of creating a public utility for compute and intelligence and touch on their collaborative work in biosafety and scientific research to illustrate the practical applications of their vision for decentralized AI. SPONSORS: Oracle Cloud Infrastructure (OCI): Oracle's next-generation cloud platform delivers blazing-fast AI and ML performance with 50% less for compute and 80% less for outbound networking compared to other cloud providers. OCI powers industry leaders like Vodafone and Thomson Reuters with secure infrastructure and application development capabilities. New U.S. customers can get their cloud bill cut in half by switching to OCI before March 31, 2024 at https://oracle.com/cognitive NetSuite: Over 41,000 businesses trust NetSuite by Oracle, the #1 cloud ERP, to future-proof their operations. With a unified platform for accounting, financial management, inventory, and HR, NetSuite provides real-time insights and forecasting to help you make quick, informed decisions. Whether you're earning millions or hundreds of millions, NetSuite empowers you to tackle challenges and seize opportunities. Download the free CFO's guide to AI and machine learning at https://netsuite.com/cognitive Shopify: Shopify is revolutionizing online selling with its market-leading checkout system and robust API ecosystem. Its exclusive library of cutting-edge AI apps empowers e-commerce businesses to thrive in a competitive market. 
Cognitive Revolution listeners can try Shopify for just $1 per month at https://shopify.com/cognitive CHAPTERS: (00:00) Teaser (01:02) About the Episode (05:43) Welcome to the Cognitive Revolution (05:55) Exploring Decentralized AI (06:46) A Positive Vision for the Future (08:19) The Risks and Rewards of AI (08:56) Superintelligence and Its Implications (13:22) The Future of Work in an AI-Driven World (17:09) The Role of Billionaires in an AI Future (Part 1) (20:41) Sponsors: Oracle Cloud Infrastructure (OCI) | NetSuite (23:21) The Role of Billionaires in an AI Future (Part 2) (30:20) The Compute Market Landscape (Part 1) (35:10) Sponsors: Shopify (36:30) The Compute Market Landscape (Part 2) (47:49) Decentralized Compute Fabrics (51:25) Regulatory Challenges in Europe and the US (53:28) Policy Regrets and the EU AI Act (54:30) The Impact of Overregulation on AI (57:00) Frontier AI Labs and Safety Plans (01:00:02) Open Source vs. Closed Models (01:06:19) Scientific Progress with AI (01:14:56) Distributed Training in AI (01:35:29) Challenges in Model Interpretability (01:40:06) Supervised Fine-Tuning and Reinforcement Learning (01:45:19) Future of Compute and Infrastructure (02:01:02) NVIDIA's Market Dominance and Competition (02:05:22) Decentralized Training and Open Source Collaboration (02:09:58) Governance and Incentives in Decentralized AI (02:14:19) Conclusion and Call for Collaboration

AI Inside
Five Years of AI Ethics at Adobe (with Grace Yee)

AI Inside

Play Episode Listen Later Feb 5, 2025 69:01


Jason Howell and Jeff Jarvis discuss Adobe's AI ethics initiatives with Grace Yee, Meta's new Frontier AI Framework, Google's removal of weapons restrictions from its AI principles, and OpenAI's new Deep Research tool. Support the show on Patreon! http://patreon.com/aiinsideshow Subscribe to the new YouTube channel! http://www.youtube.com/@aiinsideshow Note: Time codes subject to change depending on dynamic ad insertion by the distributor. 0:01:43 - INTERVIEW with Grace Yee, Senior Director of Ethical Innovation at Adobe: Adobe's core AI ethics principles (accountability, responsibility, transparency); licensed Adobe Stock content and public domain material, filtered for harmful/copyrighted data; rigorous harm testing, iterative improvements, and internal beta testing for AI features; collaborative responsibility across AI layers; AI as a tool to reduce manual tasks while preserving human creativity; the importance of human oversight, verifying outputs, and using AI as an ideation tool, not a final product; balancing guardrails with user context and licensed training data; collaboration with policymakers, monitoring regulations (EU AI Act), and advocating harmonized standards; and Adobe's participation in the EU AI Code of Practice and international regulatory harmonization efforts. 0:26:13 - Meta says it may stop development of AI systems it deems too risky 0:31:09 - Google removes pledge to not use AI for weapons from website 0:34:50 - OpenAI Unveils A.I. Tool That Can Do Research Online 0:39:47 - OpenAI to release new artificial intelligence model for free 0:41:05 - Gemini 2.0 is now available to everyone 0:44:14 - Elon Musk Ally Tells Staff 'AI-First' Is the Future of Key Government Agency 0:46:33 - Josh Hawley: DeepSeek users in US could face million-dollar fine and prison time under bill 0:49:37 - Anthropic claims new AI security method blocks 95% of jailbreaks, invites red teamers to try 0:52:22 - AI systems could be 'caused to suffer' if consciousness achieved, says research 0:55:15 - AI 'godfather' predicts another revolution in the tech in next five years Learn more about your ad choices. Visit megaphone.fm/adchoices

Explain it to me like I'm a 10 year old
Ep.69: The Groundbreaking Debate Over AI regulation with Risto Uuk, EU AI Expert

Explain it to me like I'm a 10 year old

Play Episode Listen Later Jan 31, 2025 27:48


In this episode, I interview Risto Uuk, the EU Research Lead at the Future of Life Institute in Brussels. He researches European Union artificial intelligence policy, including the EU Artificial Intelligence Act. Risto is a PhD researcher at KU Leuven studying the assessment and mitigation of risks from general-purpose AI models. We have an expansive conversation covering the most groundbreaking parts of the EU AI Act, the negotiations behind it, and the broader applications and risks of artificial intelligence throughout the world. I hope you enjoy this episode!

Reimagining Cyber
Cybersecurity Challenges in 2025 and DeepSeek Privacy Concerns - Ep 134

Reimagining Cyber

Play Episode Listen Later Jan 29, 2025 12:31


In this episode of 'Reimagining Cyber,' Rob Aragao explores major trends and focus areas for cybersecurity in 2025. The discussion includes regulatory impacts, particularly around the Digital Operational Resilience Act (DORA) and the EU AI Act; the complexities of data privacy, with eight new laws in the U.S.; and the growing emphasis on compliance automation. Rob also delves into the evolution of identity and access management, the convergence of data and identity, and the critical importance of supply chain security. The episode wraps up with insights into the recent DeepSeek incident and its implications for national security and data privacy. Follow or subscribe to the show on your preferred podcast platform. Share the show with others in the cybersecurity world. Get in touch via reimaginingcyber@gmail.com

This Week in Machine Learning & Artificial Intelligence (AI) Podcast
Ensuring Privacy for Any LLM with Patricia Thaine - #716

This Week in Machine Learning & Artificial Intelligence (AI) Podcast

Play Episode Listen Later Jan 28, 2025 51:33


Today, we're joined by Patricia Thaine, co-founder and CEO of Private AI, to discuss techniques for ensuring privacy, data minimization, and compliance when using third-party large language models (LLMs) and other AI services. We explore the risks of data leakage from LLMs and embeddings, the complexities of identifying and redacting personal information across various data flows, and the approach Private AI has taken to mitigate these risks. We also dig into the challenges of entity recognition in multimodal systems, including OCR files, documents, images, and audio, and the importance of data quality and model accuracy. Additionally, Patricia shares insights on the limitations of data anonymization, the benefits of balancing real-world and synthetic data in model training and development, and the relationship between privacy and bias in AI. Finally, we touch on the evolving landscape of AI regulations like GDPR, CPRA, and the EU AI Act, and the future of privacy in artificial intelligence. The complete show notes for this episode can be found at https://twimlai.com/go/716.

Unsupervised Learning
A Conversation with Faisal Khan from Vanta

Unsupervised Learning

Play Episode Listen Later Jan 28, 2025 39:01 Transcription Available


In this episode, I speak with Faisal Khan, a GRC Solution Specialist at Vanta, about how their platform is transforming trust management for organizations. We talk about: Vanta as a trust-management platform: how Vanta helps organizations build, scale, and showcase their security and compliance programs through automation, efficiency, and tools like the Trust Center. Key features and solutions offered by Vanta: how Vanta's integrations automate compliance checks, streamline vendor risk management, and address industry standards like SOC 2, ISO 27001, and CMMC to save time and improve efficiency. Future directions and AI integration: how Vanta is expanding into new frameworks like the EU AI Act and leveraging AI to simplify compliance, optimize workflows, and address evolving trends in governance and security. Become a Member: https://danielmiessler.com/upgrade See omnystudio.com/listener for privacy information.

Ogletree Deakins Podcasts
The AI Workplace: Understanding the EU Platform Work Directive

Ogletree Deakins Podcasts

Play Episode Listen Later Jan 28, 2025 20:02


In this episode of our new podcast series, The AI Workplace, Patty Shapiro (shareholder, San Diego) and Sam Sedaei (associate, Chicago) discuss the European Union's (EU) Platform Work Directive, which aims to regulate gig work and the use of artificial intelligence (AI). Patty outlines the directive's goals, including the classification of gig workers and the establishment of AI transparency requirements. In addition, Sam and Patty address the directive's overlap with the EU AI Act and the potential consequences of non-compliance.

Artificial Intelligence in Industry with Daniel Faggella
Overcoming Barriers to AI Adoption in Telecom and Beyond - with Moutie Wali of TELUS

Artificial Intelligence in Industry with Daniel Faggella

Play Episode Listen Later Jan 27, 2025 19:47


Today's guest is Moutie Wali, Director of Integrated Planning & Digital Transformation at TELUS. As a leading Canadian telecommunications company, TELUS offers a wide range of products and services, including internet access, voice, entertainment, healthcare, video, smart home automation, and IPTV television. Moutie joins us on today's show to delve into the complex intersection of privacy, security, and data governance within the telecom industry. Throughout the episode, Moutie shares actionable advice for navigating new regulations, such as the EU AI Act, and emphasizes the importance of collaboration between industry leaders and regulators. This episode is sponsored by OneTrust. Learn how brands work with Emerj and other Emerj Media options at emerj.com/ad1.