Podcasts about the EU AI Act

  • 358 PODCASTS
  • 612 EPISODES
  • 35m AVG DURATION
  • 5 WEEKLY NEW EPISODES
  • LATEST: Sep 16, 2025

POPULARITY

(Popularity chart by year: 2017, 2018, 2019, 2020, 2021, 2022, 2023, 2024)


Best podcasts about the EU AI Act

Latest podcast episodes about the EU AI Act

IDG TechTalk | Voice of Digital
EU AI Act – with Dagmar Schuller and Philipp Hartmann

IDG TechTalk | Voice of Digital

Play Episode Listen Later Sep 16, 2025 56:38


Is the regulation a necessary milestone for trustworthy AI, or an obstacle to innovation that holds Europe back? To discuss this, we welcome two absolute top experts in this episode. Dagmar Schuller, Professor of Business Information Systems & Digital Entrepreneurship at Hochschule Landshut in Bavaria, is critical of the EU AI Act. Dr. Philipp Hartmann, Director of AI Strategy at the appliedAI Initiative, by contrast sees the EU requirements as an enabler for AI companies.

The FIT4PRIVACY Podcast - For those who care about privacy
Govern and Manage AI to Create Trust with Mark Thomas and Punit Bhatia in the FIT4PRIVACY Podcast E147 S06

The FIT4PRIVACY Podcast - For those who care about privacy

Play Episode Listen Later Sep 11, 2025 32:46


Do you want to use AI without losing trust? What frameworks help build trust and manage AI responsibly? Can we really create trust while using AI? In this episode of the FIT4PRIVACY Podcast, host Punit Bhatia and digital trust expert Mark Thomas explain how to govern and manage AI in ways that build real trust with customers, partners, and society. This episode breaks down what it means to use AI responsibly and how strong governance can help avoid risks. You'll also learn about key frameworks, like ISO 42001, the EU AI Act, and the World Economic Forum's Digital Trust Framework, and how they can guide your AI practices. Mark and Punit also talk about how organizational culture, company size, and leadership affect how AI is used, and how trust is built (or lost). They discuss real-world tips for making AI part of your existing business systems, and how to make decisions that are fair, explainable, and trustworthy.

EUVC
E574 | Paul Morgenthaler, CommerzVentures: Why CVC Works Best with a Single LP

EUVC

Play Episode Listen Later Sep 10, 2025 43:08


In this episode, Andreas Munk Holm and Jeppe Høier sit down with Paul Morgenthaler, Partner at CommerzVentures, to unpack the inner workings of a single-LP CVC and how strategic structure can drive long-term VC success. Paul shares insights from over a decade of fintech investing, offering a rare look into how one of Europe's leading corporate venture arms thinks about climate, compliance, and the coming wave of agentic AI in financial services. They explore what it takes to make a single-LP model work, how GenAI is reshaping fintech workflows, and why European regulation may be a global feature, not a bug.

AI in Banking Podcast
How International AI Safety Efforts Are Shaping the Future of Governance - with Charleyne Biondi of Moody's

AI in Banking Podcast

Play Episode Listen Later Sep 8, 2025 25:44


Today's guest on the ‘AI in Financial Services' podcast is Charleyne Biondi, Associate Vice President of Moody's Ratings in the Digital Economy Team. Charleyne returns to the program to share her perspective on the rapidly evolving landscape of AI regulation, comparing the EU AI Act, the US sector-specific approach, and emerging international frameworks. She outlines how regulatory divergence is shaping adoption, trust, and compliance costs for companies operating globally. Charleyne also emphasizes the risks of regulatory fragmentation in the US, where state-level laws often impose requirements as stringent as Europe's. Want to share your AI adoption story with executive peers? Click emerj.com/e2 for more information and to be a potential future guest on Emerj's flagship ‘AI in Business' podcast! If you've enjoyed or benefited from some of the insights of this episode, consider leaving us a five-star review on Apple Podcasts, and let us know what you learned, found helpful, or liked most about this show!

The Data Chronicles
AI Issue Spotting: What is AI?

The Data Chronicles

Play Episode Listen Later Sep 8, 2025 29:02


In this episode of Data Chronicles, host Scott Loughlin explores the challenges of defining artificial intelligence (AI) under emerging laws, including under the EU AI Act. He is joined by Hogan Lovells partner Etienne Drouard and senior associate Olga Kurochkina to discuss the difficulties in drawing clear lines around what qualifies as AI, the importance of that definition under the EU AI Act for both developers and users, and the broader landscape of AI regulation. The conversation highlights the importance of distinguishing AI from automation, the compliance obligations that follow, and the ways AI legislation continues to evolve.

Diritto al Digitale
Legal Leaders Insights | Emerald De Leeuw-Goggin, Global Head of AI Governance & Privacy at Logitech

Diritto al Digitale

Play Episode Listen Later Sep 8, 2025 18:07


In this episode of Legal Leaders Insights, Giulio Coraggio, Head of Intellectual Property & Technology at DLA Piper Italy, interviews Emerald De Leeuw-Goggin, Global Head of AI Governance & Privacy at Logitech. We dive into her career journey from founding Eurocomply to leading AI governance and privacy at one of the world's most innovative consumer electronics companies. Emerald reveals the pivotal moments that shaped her path and shares practical insights for navigating the rapidly evolving world of AI compliance, privacy, and regulation.

What you'll learn in this episode:
• How to integrate AI governance, privacy, and intellectual property in consumer electronics.
• The challenges of deploying AI responsibly while ensuring compliance with the EU AI Act and privacy regulations.
• The future impact of AI laws on consumer technology and business strategy.
• How to close the funding gap for female entrepreneurs and build a more inclusive tech ecosystem.

Whether you're a lawyer, entrepreneur, or business leader, this conversation will give you a front-row seat to the future of AI, compliance, and innovation.

Data Culture Podcast
The EU AI Act Has to Go – with Prof. Patrick Glauner, Hochschule Deggendorf

Data Culture Podcast

Play Episode Listen Later Sep 8, 2025 28:17


The AI Policy Podcast
Unpacking the EU AI Act Code of Practice with Marietje Schaake

The AI Policy Podcast

Play Episode Listen Later Sep 5, 2025 50:53


In this episode, we are joined by Marietje Schaake, former Member of the European Parliament, to unpack the EU AI Act Code of Practice. Schaake served as Chair of the Working Group on Internal Risk Management and Governance of General-Purpose AI Providers for the Code of Practice, with a focus on AI model safety and security. We discuss the development and drafting of the EU AI Act and Code of Practice (16:47), break down how the Code helps AI companies demonstrate compliance with the Act (28:25), and explore the kinds of systemic risks the AI Act seeks to address (32:00).

Irish Tech News Audio Articles
AIM Centre Launches National AI Accelerator Programme for Manufacturing Companies

Irish Tech News Audio Articles

Play Episode Listen Later Sep 4, 2025 3:54


Advancing Innovation in Manufacturing (AIM) Centre has announced the launch of its new AI Accelerator Programme for Manufacturing Companies, a ten-week hybrid initiative designed to help businesses across Ireland understand, adopt, and scale artificial intelligence effectively. For many manufacturers, the hardest part of AI adoption is knowing where to begin. AIM's new Accelerator helps turn uncertainty into action, guiding companies toward the most valuable use cases for their operations. The programme starts on 1 October 2025, and applications are now open to manufacturing companies of all sizes and sub-sectors. It is delivered by the National AI Studio for Manufacturing at AIM Centre, co-funded by the Government of Ireland and the European Union through the ERDF Northern and Western Regional Programme 2021-27.

Supporting AI Adoption in Irish Manufacturing
The AI Accelerator provides a structured pathway from AI strategy to deployment, enabling participating companies to build a working demonstrator tailored to their specific needs. The programme blends online delivery with in-person events, including access to Ireland's National AI Studio for Manufacturing. Over the ten weeks (one day per week), participants will gain:
• A working AI demonstrator aligned with their operations, paired with a structured use case brief to support future deployment.
• Guidance on integrating AI within existing ERP and data systems.
• Access to industry-specific use cases, demos, and prototypes, highlighting tangible opportunities for business transformation.
• Expert guidance on governance, risk, and compliance, including EU AI Act requirements.
• Insights into scaling AI across operations.
AIM Centre advises that two people attend from each company, ensuring both strategic and technical perspectives are represented.

Accreditation and Funding Support
The programme is CPD-accredited by Engineers Ireland, recognising its value in professional development. To support participation, SMEs can access up to 80% funding, while larger companies can also avail of significant funding support. This ensures that businesses of all sizes can take advantage of the opportunity to future-proof their operations with AI.

AIM-ing for Real Results
"Irish manufacturing is at a pivotal moment in its digital transformation journey. Through this programme, we aim to demystify AI and give companies the tools, confidence, and practical outcomes they need to adopt it responsibly, at scale, and with measurable business impact." - David Bermingham, Director of AI, AIM Centre.

How to Apply
Applications are open now. To find out more or register your interest, visit: www.aimcentre.ie/ai-accelerator-programme

More about Irish Tech News
Irish Tech News is Ireland's No. 1 online tech publication and often Ireland's No. 1 tech podcast too. You can find hundreds of fantastic previous episodes and subscribe using whatever platform you like via our Anchor.fm page here: https://anchor.fm/irish-tech-news If you'd like to be featured in an upcoming podcast, email us at Simon@IrishTechNews.ie now to discuss. Irish Tech News has a range of services available to help promote your business. Why not drop us a line at Info@IrishTechNews.ie now to find out more about how we can help you reach our audience. You can also find and follow us on Twitter, LinkedIn, Facebook, Instagram, TikTok and Snapchat.

SHIFT
Architect of the EU AI Act Expresses Concerns

SHIFT

Play Episode Listen Later Sep 3, 2025 17:43


EU AI Act architect and lead author Gabriele Mazzini shares his experience drafting the law. He also talks about his concerns with implementation and its potential impact on European competitiveness, and how that led him to quit his job, in the latest installment of our oral history project. This episode was recorded at TEDAI in Vienna and originally ran in 2024.

We Meet: MIT Media Lab Research Affiliate & MIT Connection Science Fellow Gabriele Mazzini

Credits: This episode of SHIFT was produced by Jennifer Strong and Emma Cillekens, and it was mixed by Garret Lang, with original music from him and Jacob Gorski. Art by Meg Marco.

Insuring Cyber Podcast - Insurance Journal TV
U.S. AI Action Plan vs. EU AI Act: Competing Approaches to Global Leadership

Insuring Cyber Podcast - Insurance Journal TV

Play Episode Listen Later Sep 3, 2025 1:53


In this Insuring Cyber podcast highlight, the discussion draws out the key contrasts between the U.S. and EU strategies for artificial intelligence. The U.S. AI Action Plan is positioned … Read More » The post U.S. AI Action Plan vs. EU AI Act: Competing Approaches to Global Leadership appeared first on Insurance Journal TV.


Human Firewall Podcast
Between Innovation and Control: Europe's New AI Act Explained

Human Firewall Podcast

Play Episode Listen Later Sep 3, 2025 18:03


In this episode, Charline and Christian take a close look at the EU AI Act, the world's first comprehensive set of rules for the use of artificial intelligence. As of summer 2025, the AI Act is no longer a distant prospect but binding reality for companies across Europe. But what exactly does that mean for developers, providers, and users of AI systems? They discuss how the AI Act sorts AI into different risk categories, which obligations apply in particular to high-risk systems and general purpose AI, and why the regulation should be understood not as a brake on innovation but as a framework for responsible progress. They also look at the implementation timeline and the possible sanctions for violations. Particularly interesting: what does this mean in concrete terms for AI models such as ChatGPT or Gemini, and what role do transparency, cybersecurity, and human oversight play in this new set of rules? Featuring comments from: Alexander Ingelheim (CEO, datenschutzexperte.de). Want to learn more about SoSafe? Take a look here: https://linktr.ee/humanfirewallpodcast Have ideas, suggestions, or questions, or want to be a guest on the Human Firewall Podcast yourself? Write to us at podcast@sosafe.de

Tech Deciphered
68 – “Winning the AI Race”… America's AI Action Plan

Tech Deciphered

Play Episode Listen Later Aug 28, 2025 52:38


America's AI Action Plan, "Winning the AI Race," has just been announced. What is it all about? What are the implications? How will the rest of the world react? A deep dive into the announcement, the approaches of the EU and China, and the overall implications of these action plans.

Navigation:
Intro (01:34)
Context of the White House AI Summit
Pillar I – Accelerating AI Innovation
Pillar II – Building American AI Infrastructure
Pillar III – Leading in International AI Diplomacy & Security
Comparing Approaches – U.S. Action Plan vs. EU AI Act vs. China's Strategy
Implications and Synthesis
Conclusion

Our co-hosts: Bertrand Schmitt, Entrepreneur in Residence at Red River West, co-founder of App Annie / Data.ai, business angel, advisor to startups and VC funds, @bschmitt; Nuno Goncalves Pedro, Investor, Managing Partner, Founder at Chamaeleon, @ngpedro

Our show: Tech DECIPHERED brings you the entrepreneur and investor views on Big Tech, VC and start-up news, opinion pieces and research. We decipher their meaning, and add inside knowledge and context. Being nerds, we also discuss the latest gadgets and pop culture news. Subscribe To Our Podcast

Nuno G. Pedro: Welcome to episode 68 of Tech Deciphered. This episode will focus on America's AI Action Plan, "Winning the AI Race," which was announced a couple of weeks ago by President Trump at the White House. Today, we'll be discussing the pillars of this plan, from pillar I, the acceleration of AI innovation, to pillar II, the building of American AI infrastructure, to pillar III, leading in international AI diplomacy and security. We'll also further contextualise it, as well as compare the approaches between the US Action Plan and what we see from the EU and China strategy at this point in time. We'll finalise with implications and synthesis. Bertrand, is this a watershed moment for the industry? Is this the moment we were all waiting for in terms of clarity for AI in the US?

Bertrand Schmitt: Yeah, that's a great question. I must say I'm quite excited. I'm not sure I can remember anything like it since basically John F. Kennedy announcing the race to go to the moon in the early '60s. It feels, as you say, a watershed moment, because suddenly you can see that there is a grand vision, a grand plan, that AI is not just important but critical to the future success of America. It looks like the White House is putting all the ducks in order to make it happen. There is, like in the '60s with JFK, a realisation that there is an adversary, there is a competitor, and you want to beat them to that race. Except this time it's not Russia, it's China. A lot of similarities, I would say.

Nuno G. Pedro: Yeah. It seems relatively comprehensive. Obviously, we'll deep dive into it today across a variety of elements like regulation, investments, and views on exports, imports and the rest of the world. So, relatively comprehensive from what we can see. Obviously, we don't know all the details. We know from the announcement that the plan has identified 90 federal policy actions across the three pillars. Obviously, we'll see how these come into practice over the next few months and years. To your point, it is a defining moment. It feels a little bit like the space race of the '60s, et cetera. It's probably warranted. We know that, obviously, AI platforms, AI services and products are changing the world as we speak. It's pretty important to figure out what the US response to it is. Also interesting to note that we normally don't talk about the US too much in terms of industrial policy. The US seems to have a private sector that, in and of itself, actually stands up to the game, and in particular in tech and high-tech, normally fulfils or fills the gaps that are introduced by big generational shifts in terms of technology. But in this case, there seems to be an industrial policy. This seems to set the stage for that industrial policy and how it moves forward,

Compliance Perspectives
Jonathan Armstrong on The General Purpose AI Code of Practice [Podcast]

Compliance Perspectives

Play Episode Listen Later Aug 28, 2025 13:39


By Adam Turteltaub

On July 10, 2025 the European Commission posted The General-Purpose AI Code of Practice. Unlike the EU AI Act, this new Code of Practice is not compulsory, at least not yet. Still, it seems prudent to start understanding what it says, what expectations are being laid, and what the definition of general-purpose AI (GPAI) is. To that end, we spoke with London-based Jonathan Armstrong, Partner at Punter Southall. Jonathan explains that GPAI systems perform generally applicable functions such as image and speech recognition, audio and video generation, pattern recognition, question answering, and translation. GPAI is similar to generative AI but is not the same. He then shares that the Code of Practice contains three sections: transparency, copyright, and safety and security. Transparency is a hugely important issue for AI. Organizations need to keep the technical documents related to their AI use current and address topics such as how the AI was designed, the technical means by which it performs functions, and energy consumption. Copyright is a significant source of litigation at present. Authors and other content creators see the use of their work by AI engines as a violation; AI developers see the use of those works as furthering a greater good. The Code of Practice sets out measures designed to help navigate these difficult waters. Safety and security guidance is targeted predominantly at the most impactful GPAI operations. The Code calls for extra efforts to examine cybersecurity and the impact of the technology. This chapter of the document also includes 10 commitments for organizations to make. Listen in to the podcast and then spend some time reviewing The General-Purpose AI Code of Practice. It's worth seeing where regulations, and perhaps your AI efforts, are going.

The Data Diva E250 - Marianne Mazaud and Debbie Reynolds

"The Data Diva" Talks Privacy Podcast

Play Episode Listen Later Aug 19, 2025 33:16 Transcription Available


In episode 250 of The Data Diva Talks Privacy Podcast, host Debbie Reynolds, “The Data Diva,” welcomes Marianne Mazaud, Co-Founder of AI ON US, an international executive summit focused on responsible artificial intelligence, co-created with Thomas Lozopone. They explore the powerful relationship between AI, privacy, and trust, emphasizing how leaders can take actionable steps to create inclusive and ethically grounded AI systems.

Marianne shares insights from her extensive experience in creative performance marketing and brand protection, including how generative AI technologies have created both opportunities and new risks. She stresses the importance of privacy and inclusion in AI governance, especially in high-risk sectors like healthcare and education.

The conversation moves to public trust in AI. Marianne references a study revealing widespread distrust in AI systems due to cybersecurity concerns, algorithmic bias, and lack of transparency. She highlights the need to involve more diverse voices, including individuals with disabilities and children, in the development of emerging technologies. Marianne and Debbie also examine the role of data privacy in consumer trust, citing a PricewaterhouseCoopers report showing that 83% of consumers believe data protection is essential to building trust with businesses.

They compare AI regulatory landscapes across the European Union and the United States. Marianne outlines how the EU AI Act places joint responsibility on AI developers and providers, which can introduce compliance complexities, especially for small businesses. She explains how these regulations can be difficult to implement retroactively and may impact innovation when not considered early in the development process.

Marianne closes by introducing the AI On Us initiative and the International Summit on Responsible AI for Executives. These efforts are designed to support leaders navigating AI governance through immersive workshops, best practices, and applied exercises. She also describes the Arborus Charter, a commitment to gender equality and inclusion in AI that has been adopted by 150 companies globally.

They discuss the erosion of public trust in AI and the contributing role of biased algorithms, black-box decision-making, and regulatory fragmentation across regions. Marianne describes the uneven distribution of protections for vulnerable populations, such as children and persons with disabilities, and the failure of many AI systems to account for culturally or biologically diverse user bases. She emphasizes that privacy harms are not only about data collection but also about downstream effects and misuse, especially in sectors like healthcare, hiring, and public policy.

Debbie and Marianne contrast the emerging regulatory models in the United States and the European Union, noting that the U.S. often lacks forward-looking obligations for AI developers, whereas the EU imposes preemptive risk requirements. Despite these differences, both agree that building AI systems that are trustworthy, explainable, and fair must become a global imperative. Marianne closes by describing how AI on Us was founded to help global executives take practical, values-driven steps toward responsible AI. Through events, tools, and shared ethical commitments, the initiative encourages leaders to treat AI responsibility as a competitive advantage, not just a compliance obligation.

#AIandPrivacy #ResponsibleAI #Governance #SyntheticContent #TrustworthyAI #InclusiveTech #AlgorithmicAccountability #PrivacyHarms

Support the show

The Armen Show
451: The Human Touch in AI Co-Creation

The Armen Show

Play Episode Listen Later Aug 15, 2025


In this episode, Armen Shirvanian explores the intersection of artificial intelligence and creativity, discussing how to co-create with AI while maintaining the human touch. He delves into copyright issues surrounding AI-generated content, the implications of the EU AI Act, and the dual nature of AI as both a tool for enhancing creativity and a potential […]

No Brainer - An AI Podcast for Marketers
NB63 - Smart Resilience for the AI Age

No Brainer - An AI Podcast for Marketers

Play Episode Listen Later Aug 13, 2025 53:12


Dr. Rebekka Reinhard and Thomas Vasek, the team behind human magazine, join CognitivePath founders Greg Verdino and Geoff Livingston for a provocative conversation about why smart resilience, ethics, regulation and responsibility are essential for creating a human-forward future in the age of AI. Tune in for a deep dive into the philosophical and practical implications of AI on society, democracy, and our collective future.

Chapters
00:00 Introduction
03:34 Smart Resilience in the Age of AI
07:09 Navigating Crises in a Complex World
11:03 Cultural Perspectives on Resilience
12:06 Global Perspectives on AI Development
16:12 Ethics and Morality in AI Regulation
21:32 The EU AI Act and Its Implications
26:09 Power Dynamics and Global Perception
28:28 AI's Role in Democracy
32:14 AI's Impact on Human Resilience
34:38 The Dangers of AI in the Workplace
38:19 Repression and Job Replacement through AI
41:09 A Hopeful Vision for the Future

About Rebekka
Dr. Rebekka Reinhard is a philosopher and SPIEGEL bestselling author. It's her mission to take philosophy out of the ivory tower and put it back where it belongs: real life. She is the founder of human, the first German magazine about life and work in the AI age. Connect with her at https://linkedin.com/in/rebekkareinhard

About Thomas
Thomas Vasek is editor-in-chief and head of content at human. He began his journalism career as an investigative reporter at the Austrian news magazine Profil. As founding editor-in-chief, he launched the German edition of the renowned MIT Technology Review in 2003 and the philosophy magazine HOHE LUFT in 2010. From 2006 to 2010, he served as editor-in-chief of P.M. Magazin. Connect with him at https://www.linkedin.com/in/thomas-va%C5%A1ek-637b6b233/

About human magazine
human is the first magazine to take a holistic look at the impact of AI on business, politics, society, and culture, always with a focus on the human being. Issues are published in German (print/digital) and English (digital only). Learn more and subscribe: https://human-magazin.de/ Download the free "Smart Resilience" white paper: https://human-magazin.de/#consulting

Learn more about your ad choices. Visit megaphone.fm/adchoices

Chattinn Cyber
Legal Insights on AI: Protecting Privacy in a Data-Driven World with Colleen García

Chattinn Cyber

Play Episode Listen Later Aug 13, 2025 10:59


Summary
In this episode, Marc is chattin' with Colleen García, a seasoned privacy attorney. The conversation begins with an introduction to Colleen's extensive background in cybersecurity law, including her experience working with the U.S. government before transitioning to the private sector. This sets the stage for a deep dive into the complex relationship between data privacy and artificial intelligence (AI), highlighting the importance of understanding legal and ethical considerations as AI technology continues to evolve rapidly.

The core of the discussion centers on how AI models are trained on vast amounts of data, often containing personal identifiable information (PII). Colleen emphasizes that respecting individuals' data privacy rights is crucial, especially when it comes to obtaining proper consent for the use of their data in AI systems. She points out that while AI offers many benefits, it also raises significant concerns about data misuse, leakage, and the potential for infringing on privacy rights, which companies must carefully navigate to avoid legal and reputational risks.

Colleen elaborates on the current legal landscape, noting that existing data privacy laws, such as those in the U.S., the European Union, Canada, and Singapore, are being adapted to address AI-specific issues. She mentions upcoming regulations like the EU AI Act and highlights the role of the Federal Trade Commission (FTC) in enforcing transparency and honesty in AI disclosures. Although some laws do not explicitly mention AI, their principles are increasingly being applied to regulate AI development and deployment, emphasizing the need for companies to stay compliant and transparent.

The conversation then expands to a global perspective, with Colleen discussing how different countries are approaching the intersection of data privacy and AI. She notes that international efforts are underway to develop legal frameworks that address the unique challenges posed by AI, reflecting a broader recognition that AI regulation is a worldwide concern. This global outlook underscores the importance for companies operating across borders of staying informed about evolving legal standards and best practices.

In closing, Colleen offers practical advice for businesses seeking to responsibly implement AI. She stresses the importance of building AI systems on a strong foundation of data privacy, including thorough vetting of training data and transparency with users. She predicts that future legislative efforts may lead to more state-level AI laws and possibly a comprehensive federal framework, although the current landscape remains fragmented. The podcast concludes with Colleen inviting listeners to connect with her for further discussion, emphasizing the need for proactive, thoughtful approaches to AI and data privacy in the evolving legal environment.

Key Points
• The relationship between data privacy and AI: AI models are trained on data that often includes personal identifiable information (PII), making it vital to respect privacy rights and obtain proper consent.
• Legal risks and challenges: potential risks include data leakage, misuse, and the complexities of ensuring compliance with existing privacy laws when deploying AI systems.
• Current and emerging data privacy laws: existing laws (like those from the U.S., EU, Canada, and Singapore) are being adapted to regulate AI, alongside upcoming regulations such as the EU AI Act and the role of agencies like the FTC.
• International perspectives on AI and data privacy: different countries are approaching AI regulation in different ways, underscoring that this is a global issue with ongoing legislative developments worldwide.
• Practical advice for responsible AI deployment: Colleen offers guidance for companies to build AI systems on a strong data privacy foun...

Microsoft Business Applications Podcast
AI Regulation: Innovation's Hidden Accelerator

Microsoft Business Applications Podcast

Play Episode Listen Later Aug 12, 2025 39:01 Transcription Available


Irish Tech News Audio Articles
Is the AI Skills Gap Actually a Confidence Crisis?

Irish Tech News Audio Articles

Play Episode Listen Later Aug 11, 2025 6:35


Guest post by Ronnie Hamilton, Pre-Sales Director, Climb Channel Solutions Ireland

There have been hundreds of headlines about the AI skills gap. Analysts are warning that millions of roles could go unfilled. Universities and education providers are launching fast-track courses and bootcamps. And in the channel, partners are under pressure to bring in the right capabilities or risk being left behind. But the challenge isn't always technical. Often, it's much more basic. The biggest question, for many, is where to begin.

More often than not, organisations are keen to explore the potential of AI but don't know how to approach it in a structured way. It's not a lack of intelligence, initiative, or skill holding them back - far from it. It's the absence of a shared framework, a common language, or a clear starting point. From marketing departments using ChatGPT to create content to developers trialling Copilot to streamline workflows, individuals are already experimenting with AI. However, these activities tend to happen in isolation, with such tools used informally rather than strategically. Without a roadmap or any kind of unifying policy, businesses are often left with a fragmented approach - the result of which is that AI becomes something that happens around the organisation rather than being a part of it.

This can also introduce more risk, particularly when employees input sensitive data into external tools without proper controls or oversight. As models become more integrated and capable, even seemingly innocuous actions, like granting access to an email inbox or uploading internal documents, can expose large volumes of confidential company data. Without visibility into how that data is handled and used, organisations may unknowingly be increasing their risk surface.

Rethinking what 'AI skills' means

The term "AI skills" is often used to describe high-end technical roles like data scientists, machine learning engineers, or prompt specialists. Such an interpretation has its drawbacks. After all, organisations don't just need deep technical expertise; they need an understanding of how AI can be applied in a business context to deliver value. For example, organisations may want to consider how these tools can be used to support customers or identify ways of automating processes. Adopting AI in this way encourages communication around it and allows people to engage with AI confidently and constructively, regardless of their technical background.

Unfortunately, the industry's obsession with large language models (LLMs) has narrowed the conversation. AI has become almost entirely associated with a select number of tools. The focus has moved to interacting with models, rather than applying AI to support and improve existing work. Yet for many partners, the most valuable AI use cases will be far more understated - including automating support tickets, streamlining compliance checks, and improving threat detection. These outcomes won't come from prompt engineering, but from thoughtful experimentation with process optimisation and orchestration.

Removing the barriers to adoption

For many businesses, the real blocker to full-scale AI adoption isn't technical complexity; it's structural uncertainty. AI adoption is happening, but not in a coordinated way. There are few formal policies in place, and often no designated owner. In many cases, tools are actively blocked due to data security concerns or regulatory ambiguity. That caution isn't misplaced. The EU AI Act, for example, requires any organisation operating within or doing business with the EU to ensure at least one trained individual is responsible for AI. By itself, this raises important questions of accountability and strategy. This lack of ownership - as opposed to the technology itself - is where the real risk lies. There's also an emotional barrier at play.
We hear it all the time: the sense that others are further ahead, and that trying to catch...

AskAlli: Self-Publishing Advice Podcast
News: Shopify Bot Attack Hits Authors, UK and EU Enforce AI and Safety Laws, US Plans Pro-Tech AI Policy

AskAlli: Self-Publishing Advice Podcast

Play Episode Listen Later Aug 8, 2025 12:24


On this episode of the Self-Publishing News Podcast, Dan Holloway reports on a coordinated bot attack that hit indie authors using Shopify, leaving some with unexpected fees and limited recourse. He also covers new and proposed legislation across the UK, EU, and US, including the UK's Online Safety Act, concerns over enforcement of the EU AI Act, and the US White House's pro-tech AI action plan—all with implications for author rights and content access. Sponsors Self-Publishing News is proudly sponsored by Bookvault. Sell high-quality, print-on-demand books directly to readers worldwide and earn maximum royalties selling directly. Automate fulfillment and create stunning special editions with BookvaultBespoke. Visit Bookvault.app today for an instant quote. Self-Publishing News is also sponsored by book cover design company Miblart. They offer unlimited revisions, take no deposit to start work, and you pay only when you love the final result. Get a book cover that will become your number-one marketing tool. Find more author advice, tips, and tools at our Self-publishing Author Advice Center, with a huge archive of nearly 2,000 blog posts and a handy search box to find key info on the topic you need. And, if you haven't already, we invite you to join our organization and become a self-publishing ally. About the Host Dan Holloway is a novelist, poet, and spoken word artist. He is the MC of the performance arts show The New Libertines, and he competed in the National Poetry Slam final at the Royal Albert Hall. His latest collection, The Transparency of Sutures, is available on Kindle.

AI Briefing Room
EP-338 Aws & Openai's Strategic Partnership

AI Briefing Room

Play Episode Listen Later Aug 6, 2025 2:09


i'm wall-e, welcoming you to today's tech briefing for wednesday, august 6th. explore the latest in tech: openai & aws collaboration: aws now offers openai models through amazon ai services like bedrock and sagemaker, enhancing generative ai integration for enterprises and challenging microsoft's cloud dominance. openai's new open-source models: introducing gpt-oss-120b and gpt-oss-20b on hugging face, these ai reasoning models mark openai's return to open source for the first time since gpt-2, offering robust performance despite occasional hallucinations. government endorsement: openai, google, and anthropic join the list of approved ai vendors for u.s. federal agencies, streamlining ai service contracts and supporting federal ai goals. eu ai act progress: europe's ai regulatory framework advances, balancing innovation with risk prevention, setting compliance deadlines, and drawing varied reactions from advocacy groups and tech companies. leadership at emed population health: linda yaccarino, known for her negotiation skills, steps in as ceo, bringing bold leadership to the ai-driven health platform focused on glp-1 medications. that's all for today. we'll see you back here tomorrow!

Liebe Zeitarbeit
What does the EU AI Act mean for your company? – Act now instead of paying!

Liebe Zeitarbeit

Play Episode Listen Later Aug 4, 2025 31:00


The EU is regulating artificial intelligence – and with harsh penalties: violations of the new EU AI Act can cost up to 7% of annual turnover. But don't panic: in this episode, Daniel Müller, together with AI expert Andreas Mai, explains what you need to watch out for – and how to set yourself and your team up on a legally sound footing.

5 Minutes Podcast with Ricardo Vargas
General-Purpose AI in the Spotlight: What the EU AI Act Means for Your Projects

5 Minutes Podcast with Ricardo Vargas

Play Episode Listen Later Aug 3, 2025 5:11


In this episode, Ricardo discusses the impact of the AI Act, the European regulation on artificial intelligence (General-Purpose AI models). The law, passed in 2024 and fully in force in 2026, began imposing strict rules on general-purpose AI models such as GPT, Claude, and Gemini on August 2, 2025. Projects using these AIs, even for simple integration, must also follow ethical, privacy, and transparency requirements. This changes the role of the project manager, who now needs to ensure legal compliance. Despite criticism that the law limits innovation, Ricardo emphasizes that it signals technological maturity. For him, adapting is essential to avoid risks and add value to projects. Listen to the podcast to learn more! https://rvarg.as/euactslide https://rvarg.as/euact

5 Minutes Podcast com Ricardo Vargas
General-Purpose AI in the Spotlight: What the EU AI Act Means for Our Projects

5 Minutes Podcast com Ricardo Vargas

Play Episode Listen Later Aug 3, 2025 6:08


In this episode, Ricardo discusses the impact of the AI Act, the European regulation on artificial intelligence (General-Purpose AI models). The law, passed in 2024 and fully in force in 2026, began imposing strict rules on general-purpose AI models such as GPT, Claude, and Gemini on August 2, 2025. Projects using these AIs, even as a simple integration, must also follow requirements on ethics, privacy, and transparency. This changes the role of the project manager, who now needs to ensure legal compliance. Despite criticism that the law limits innovation, Ricardo emphasizes that it signals technological maturity. For him, adapting is essential to avoid risks and add value to projects. Listen to the podcast to learn more! https://rvarg.as/euactslide https://rvarg.as/euact

Taking Stock with Vincent Wall
The EU AI Act: Who is really winning the AI race?

Taking Stock with Vincent Wall

Play Episode Listen Later Aug 1, 2025 47:20


This week on Taking Stock with Susan Hayes Culleton: Sarah Collins, Brussels Correspondent with the Business Post, & John Fitzgerald, Professor in the Department of Economics at Trinity College Dublin, join Susan to give their views on this week's EU-US trade deal. Susan looks to find out more about the next phase of the EU AI Act that comes into force this week with John Callahan, President and CTO of Partsol. Plus, Aidan Donnelly, Head of Equities at Davy, talks US inflation, equities, and the dollar outlook.

KI-Update – ein Heise-Podcast
KI-Update kompakt: EU AI Act, Meta revenues, and defectors from Apple, Nvidia

KI-Update – ein Heise-Podcast

Play Episode Listen Later Aug 1, 2025 11:25


The EU AI Act enters its next phase. Meta boosts revenue and profits. Apple's AI team, meanwhile, is losing staff. Nvidia is asked to explain itself to China's government. Links to all of today's topics can be found here: https://heise.de/-10506541 https://www.heise.de/thema/KI-Update https://pro.heise.de/ki/ https://www.heise.de/newsletter/anmeldung.html?id=ki-update https://www.heise.de/thema/Kuenstliche-Intelligenz https://the-decoder.de/ https://www.heiseplus.de/podcast https://www.ct.de/ki A new episode is released Mondays, Wednesdays, and Fridays at 3 p.m.

Everyday AI Podcast – An AI and ChatGPT Podcast
EP 576: GPT-5 release timeline update, Google and Microsoft vibe coding and more AI News That Matters

Everyday AI Podcast – An AI and ChatGPT Podcast

Play Episode Listen Later Jul 28, 2025 46:58


Could GPT-5 be only weeks away? Why are Microsoft and Google going all in on vibe coding? What does the White House AI Action Plan actually mean? Don't spend hours a day trying to figure out what AI means for your company or career. That's our job. So join us on Mondays as we bring you the AI News That Matters. No fluff. Just what you need to ACTUALLY pay attention to in the business side of AI.

Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Thoughts on this? Join the convo and connect with other AI leaders on LinkedIn.
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn

Topics Covered in This Episode:
GPT-5 Release Timeline and Features
Google Opal AI Vibe Coding Tool
Nvidia B200 AI Chip Black Market China
Trump White House AI Action Plan Details
Microsoft GitHub Spark AI Coding Launch
Google's AI News Licensing Negotiations
Microsoft Copilot Visual Avatar ("Clippy" AI)
Netflix Uses Generative AI for Visual Effects
OpenAI Warns of AI-Driven Fraud Crisis
New Google, Claude, and Runway AI Feature Updates

Timestamps:
00:00 "OpenAI's GPT-5 Release Announced"
04:57 OpenAI Faces Pressure from Gemini
07:13 EU AI Act vs. US AI Priorities
12:12 Black Market Thrives for Nvidia Chips
13:46 US AI Action Plan Unveiled
19:34 Microsoft's GitHub Spark Unveiled
21:17 Google vs. Microsoft: AI Showdown
25:28 Google's New AI Partnership Strategy
29:23 Microsoft's Animated AI Assistant Revival
33:52 Generative AI in Film Industry
38:55 AI Race & Imminent Fraud Crisis
40:15 AI Threats and Future Innovations

Keywords: GPT-5 release date, OpenAI, GPT-4, GPT-4o, advanced reasoning abilities, artificial general intelligence, AGI, O3 reasoning, GPT-5 Mini, GPT-5 Nano, API access, Microsoft Copilot, model selector, LM Arena, Gemini 2.5 Pro, Google Vibe Coding, Opal, no-code AI, low-code app maker, Google Labs, AI-powered web apps, app development, visual workflow editor, generative AI, AI app creation, Anthropic Claude Sonnet 4, GitHub Copilot Spark, Microsoft GitHub, Copilot Pro Plus, AI coding tools, AI search, Perplexity, news licensing deals, Google AI Overview, AI summaries, click-through rate, organic search traffic, Associated Press, Condé Nast, The Atlantic, LA Times, AI in publishing, generative AI video, Netflix, El Eternauta, AI-generated visual effects, AI-powered VFX, Runway, AI for film and TV, job displacement from AI, AI-driven fraud, AI voice cloning, AI impersonation, financial scams, AI regulation, White House AI Action Plan, executive orders on AI, AI innovation, AI deregulation

Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info) Ready for ROI on GenAI? Go to youreverydayai.com/partner

EUVC
VC | E532 | Eoghan O'Neill, Senior Policy Officer at the European Commission AI Office

EUVC

Play Episode Listen Later Jul 26, 2025 59:07


In this episode, Andreas Munk Holm is joined by Eoghan O'Neill, Senior Policy Officer at the European Commission's AI Office, to break down the EU AI Act, Europe's strategy to lead the global AI wave with trust, safety, and world-class infrastructure. They dive into why the EU's approach to AI is not just regulatory red tape but a proactive framework to ensure innovation and adoption flourish across sectors, from startups to supercomputers. Eoghan unpacks how startups can navigate the Act, why Europe's regulatory clarity is an advantage, and how investors should be thinking about this new paradigm.

Here's what's covered:
02:41 Eoghan's Unorthodox Journey: From Gravy Systems to AI Policy
04:32 The Mission & Structure of the AI Office
05:52 Understanding the AI Act: A Product Safety Framework
09:40 How the AI Act Was Created: An Open, 1,000+ Stakeholder Process
17:42 What Counts as High-Risk AI (And What Doesn't)
21:23 Learning from GDPR & Ensuring Innovation Isn't Crushed
26:10 Transparency, Trust & The Limits of Regulation
30:15 What VCs Need to Know: Obligations, Timelines & Opportunities
34:42 Europe's Global AI Position: Infra, Engineers, Strategy
43:33 Global Dynamics: Commoditization, Gulf States & the Future of AGI
48:46 What's Coming: Apply AI Strategy in September

The Audit Podcast
IA on AI – McDonald's Bot Breach, Google's AI Cyber Win, and Nvidia Hits $4 Trillion

The Audit Podcast

Play Episode Listen Later Jul 24, 2025 8:15


This week on IA on AI, we break down the McDonald's hiring bot fiasco — yes, the one where an AI chatbot exposed data from over 60 million job applicants due to a shockingly simple security lapse. We explore why this matters to internal auditors and what basic control failures like this can teach us about staying vigilant as AI becomes more embedded in business processes.

Plus:
An update on the EU AI Act and why U.S.-based organizations should still be paying attention
How Google's AI caught a cyberattack in real time — and what this signals for the future of human-in-the-loop systems
A $4 trillion milestone for Nvidia and a record-setting $2B seed round for a new AI startup
A reality check on AGI: what it is, what it isn't, and why the hype may be outpacing the science

Be sure to follow us on our social media accounts on LinkedIn: https://www.linkedin.com/company/the-audit-podcast Instagram: https://www.instagram.com/theauditpodcast TikTok: https://www.tiktok.com/@theauditpodcast?lang=en

Also be sure to sign up for The Audit Podcast newsletter and to check out the full video interview on The Audit Podcast YouTube channel.

* This podcast is brought to you by Greenskies Analytics, the services firm that helps auditors leapfrog up the analytics maturity model. Their approach to launching audit analytics programs with a series of proven quick-win analytics will guarantee results worthy of the analytics hype. Whether your audit team needs a data strategy, methodology, governance, literacy, or anything else related to audit and analytics, schedule time with Greenskies Analytics.

Employment Matters
683: The EU AI Act and Its Impact on the US

Employment Matters

Play Episode Listen Later Jul 24, 2025 17:25


Generative AI continues to drive conversation and concern, and not surprisingly, the focus on the promise of AI and on how best to regulate it has created controversial positions. The EU has been one of the leaders in addressing regulation of AI, primarily through the EU AI Act. On today's episode, we will learn more from David about the EU AI Act, as well as a US perspective from Derek on the status of AI regulation and how US companies may be impacted by the EU AI Act. Subscribe to our podcast today to stay up to date on employment issues from law experts worldwide.
Host: Tara Stingley (email) (Cline Williams Wright Johnson & Oldfather, LLP)
Guest Speakers: David van Boven (email) (Plesner / Denmark) & Derek Ishikawa (email) (Hirschfeld Kraemer LLP / California)
Support the show
Register on the ELA website here to receive email invitations to future programs.

BDO in the Boardroom
Risk Aspects of Technological Innovation May Boards NOT Be Thinking About

BDO in the Boardroom

Play Episode Listen Later Jul 24, 2025 25:22


Human Resources and Workforce Impact:
Bias in Automation: Ensure that automated HR processes undergo regular audits to identify and mitigate biases, particularly in candidate selection and hiring.
Regulatory Oversight: Implement annual bias audits for automated employment decision tools to comply with regulations.
Employee Surveillance: Review and update employee monitoring practices to ensure compliance with privacy regulations, OSHA, and HIPAA.

Regulatory Compliance and Legal Risks:
Decentralized AI Regulation: Develop a comprehensive strategy to track and comply with AI regulations across different states.
EU AI Act: Assess the impact of the EU AI Act on your operations and ensure compliance with its requirements if your systems are used within the EU.
Terms of Service: Establish a process to monitor and review changes in terms of service for AI, other technology, and communications tools, ensuring compliance and proper data usage.

Operational Resilience and Business Continuity:
System Dependencies: Regularly evaluate AI systems for data representativeness and bias, and adapt to real-time changes in company operations.
Supply Chain Vulnerabilities: Conduct frequent audits of third-party components and vendors to identify and mitigate supply chain vulnerabilities.
Cyber Threats: Update employee training programs to include awareness and prevention of deepfake scams and other sophisticated cyber threats.

Strategic Oversight and Accountability:
Ethical Considerations: Form multidisciplinary task forces for AI adoption, including general counsel, to classify use cases based on risk levels.
ROI and Uncertainty: Ask for detailed ROI estimates, timelines, and milestones for AI projects, considering the uncertainty and potential qualitative outcomes.
Director Education: Encourage directors to engage in educational opportunities, such as NACD masterclasses and other governance-focused content, to enhance their understanding of AI governance.

Cyber Security Weekly Podcast
Episode 452 - STATE OF CYBER (Part 1)

Cyber Security Weekly Podcast

Play Episode Listen Later Jul 23, 2025 59:01


Special Virtual Episodes with ISACA Leaders: State of Cyber (Part 1) - Maintaining readiness in a complex threat environment
Speakers:
Jamie Norton - ISACA Board Member
Chirag Joshi - Sydney Chapter Board Member
Abby Zhang - Auckland Chapter Board Member
Jason Wood - Auckland Chapter former President
Bharat Bajaj - ISACA Melbourne Board Director
For the full series visit: https://mysecuritymarketplace.com/security-amp-risk-professional-insight-series-2025/
#mysecuritytv #isaca #cybersecurity

OVERVIEW
According to ISACA research, almost half of companies exclude cybersecurity teams when developing, onboarding, and implementing AI solutions. Only around a quarter (26%) of cybersecurity professionals or teams in Oceania are involved in developing policy governing the use of AI technology in their enterprise, and nearly half (45%) report no involvement in the development, onboarding, or implementation of AI solutions, according to the recently released 2024 State of Cybersecurity survey report from global IT professional association ISACA.

Key Report Findings
Security teams in Oceania noted they are primarily using AI for: automating threat detection/response (36% vs 28% globally); endpoint security (33% vs 27% globally); automating routine security tasks (22% vs 24% globally); and fraud detection (6% vs 13% globally).

Additional AI resources to help cybersecurity and other digital trust professionals:
- EU AI Act white paper
- Examining Authentication in the Deepfake Era

SYNOPSIS
ISACA's 2024 State of Cybersecurity report reveals that stress levels are on the rise for cybersecurity professionals, largely due to an increasingly challenging threat landscape. The annual ISACA research also identifies key skills gaps in cybersecurity, how artificial intelligence is impacting the field, the role of risk assessments and cyber insurance in enterprises' security programs, and more.

The demand for cybersecurity talent has been consistently high, yet efforts to increase supply are not reflected in the global ISACA IS/IT-community workforce. The current cybersecurity workforce is aging, and efforts to increase staffing with younger professionals are making little progress. Left unchecked, this situation will create business continuity issues in the future. Shrinking budgets and employee compensation carry the potential to adversely affect cybersecurity readiness much sooner than the aging workforce, once the Big Stay passes. Declines in vacant positions across all reporting categories may lead some enterprises to believe that the pendulum of power will swing back to employers, but the increasingly complex threat environment is greatly increasing stress in cybersecurity teams; the concern, therefore, is not if, but when, employees will reach their tipping point and vacate their current positions.

Shifting Our Schools - Education : Technology : Leadership
AI Companions: the risks and benefits, and what educators need to know

Shifting Our Schools - Education : Technology : Leadership

Play Episode Listen Later Jul 21, 2025 43:59


How do we prepare students—and ourselves—for a world where AI grief companions and "deadbots" are a reality? In this eye-opening episode, Jeff Utecht sits down with Dr. Tomasz Hollanek, a critical design and AI ethics researcher at the University of Cambridge's Leverhulme Centre for the Future of Intelligence, to discuss:
The rise of AI companions like Character.AI and Replika
Emotional manipulation risks and the ethics of human-AI relationships
What educators need to know about the EU AI Act and digital consent
How to teach AI literacy beyond skill-building—focusing on ethics, emotional health, and the environmental impact of generative AI
Promising examples: preserving Indigenous languages and Holocaust survivor testimonies through AI

From griefbots to regulation loopholes, Tomasz explains why educators are essential voices in shaping how AI unfolds in schools and society—and how we can avoid repeating the harms of the social media era.

Dr. Tomasz Hollanek is a Postdoctoral Research Fellow at the Leverhulme Centre for the Future of Intelligence (LCFI) and an Affiliated Lecturer in the Department of Computer Science and Technology at the University of Cambridge, working at the intersection of AI ethics and critical design. His current research focuses on the ethics of human-AI interaction design and the challenges of developing critical AI literacy among diverse stakeholder groups; related to the latter research stream is the work on AI, media, and communications that he is leading at LCFI.

Connect with him:
https://link.springer.com/article/10.1007/s13347-024-00744-w
https://www.repository.cam.ac.uk/items/d3229fe5-db87-42ff-869b-11e0538014d8
https://www.desirableai.com/journalism-toolkit

Serious Privacy
Personal Integrity, Regulatory capture & a week in Privacy

Serious Privacy

Play Episode Listen Later Jul 16, 2025 32:49


Send us a text

With Paul away, join K and Ralph for a riotous discussion of personal integrity and which positions we can work with and for - with regulators and industry cross-pollinating individuals and resources. Can regulators remain ethical and independent when they rely on skills and abilities from industry?

Also, a week of news in Privacy and Data Protection with a round-up of EU, UK, US, and other news, cases, regulations, and standards - including age verification, censorship, the EU AI Act, privacy-preserving advertising, freedom of speech laws, and new developments across the globe!

If you have comments or questions, find us on LinkedIn and Instagram @seriousprivacy, and on BlueSky under @seriousprivacy.eu, @europaulb.seriousprivacy.eu, @heartofprivacy.bsky.app and @igrobrien.seriousprivacy.eu, and email podcast@seriousprivacy.eu. Rate and Review us!

From Season 6, our episodes are edited by Fey O'Brien. Our intro and exit music is Channel Intro 24 by Sascha Ende, licensed under CC BY 4.0, with the voiceover by Tim Foley.

The Customer Success Playbook
Customer Success Playbook S3 E68 - Gayle Gorvett - Who's Liable When AI Goes Wrong?

The Customer Success Playbook

Play Episode Listen Later Jul 16, 2025 12:38 Transcription Available


Send us a text

When AI systems fail spectacularly, who pays the price? Part two of our conversation with global tech lawyer Gayle Gorvett tackles the million-dollar question every business leader is afraid to ask. With federal AI regulation potentially paused for a decade while technology races ahead at breakneck speed, companies are left creating their own rules in an accountability vacuum. Gayle reveals why waiting for government guidance could be a costly mistake and how smart businesses are turning governance policies into competitive advantages. From the EU AI Act's complexity challenges to the state-by-state regulatory patchwork, this customer success playbook episode exposes the legal landmines hiding in your AI implementation—and shows you how to navigate them before they explode.

Detailed Analysis
The accountability crisis in AI represents one of the most pressing challenges facing modern businesses, yet most organizations remain dangerously unprepared. Gayle Gorvett's revelation about the federal government's proposed 10-year pause on state AI laws while crafting comprehensive regulation highlights a sobering reality: businesses must become their own regulatory bodies or risk operating in a legal minefield.

The concept of "private regulation" that Gayle introduces becomes particularly relevant for customer success teams managing AI-powered interactions. When your chatbots handle customer complaints, your predictive models influence renewal decisions, or your recommendation engines shape customer experiences, the liability implications extend far beyond technical malfunctions. Every AI decision becomes a potential point of legal exposure, making governance frameworks essential risk management tools rather than optional compliance exercises.

Perhaps most intriguingly, Gayle's perspective on governance policies as competitive differentiators challenges the common view of compliance as a business burden.
In the customer success playbook framework, transparency becomes a trust-building mechanism that strengthens customer relationships rather than merely checking regulatory boxes. Companies that proactively communicate their AI governance practices position themselves as trustworthy partners in an industry where trust remains scarce.

The legal profession's response to AI—requiring disclosure to clients and technical proficiency from practitioners—offers a compelling model for other industries. This approach acknowledges that AI literacy isn't just a technical requirement but a professional responsibility. For customer success leaders, this translates into a dual mandate: understanding AI capabilities enough to leverage them effectively while maintaining enough oversight to protect customer interests.

The EU AI Act's implementation challenges that Gayle describes reveal the complexity of regulating rapidly evolving technology. Even comprehensive regulatory frameworks struggle to keep pace with innovation, reinforcing the importance of internal governance structures that can adapt quickly to new AI capabilities and emerging risks. This agility becomes particularly crucial for customer-facing teams, who often serve as the first line of defense.

Kevin's offering

Please Like, Comment, Share and Subscribe. You can also find the CS Playbook Podcast:
YouTube - @CustomerSuccessPlaybookPodcast
Twitter - @CS_Playbook

You can find Kevin at:
Metzgerbusiness.com - Kevin's personal website
Kevin Metzger on LinkedIn.

You can find Roman at:
Roman Trebon on LinkedIn.

The Tech Blog Writer Podcast
3345: Veeva Systems and the Future of Agentic AI in Pharma

The Tech Blog Writer Podcast

Play Episode Listen Later Jul 13, 2025 30:51


AI is racing ahead, but for industries like life sciences, the stakes are higher and the rules more complex. In this episode, recorded just before the July heatwave hit its peak, I spoke with Chris Moore, President of Europe at Veeva Systems, from his impressively climate-controlled garden office. We covered everything from the trajectory of agentic AI to the practicalities of embedding intelligence in highly regulated pharma workflows, and how Veeva is quietly but confidently positioning itself to deliver where others are still making announcements. Chris brings a unique perspective shaped by a career that spans ICI Pharmaceuticals, PwC, IBM, and EY. That journey taught him how often the industry was forced to rebuild the same tech infrastructure again and again, until Veeva came along. He shares how Veeva's decision to build a life sciences-specific cloud platform from the ground up has enabled a deeper, more compliant integration of AI. We explored what makes Veeva AI different, from the CRM bot that handles compliant free text to MLR agents that support content review and approval. Chris explains how Veeva's AI agents inherit the context and controls of their applications, making them far more than chat wrappers or automation tools. They are embedded directly into workflows, helping companies stay compliant while reducing friction and saving time. And perhaps more importantly, he makes a strong case for why the EU AI Act isn't a barrier; it's a validation. From auto-summarising regulatory documents to pulling metadata from health authority correspondence, the real-world examples Chris offers show how Veeva AI will reduce repetitive work while ensuring integrity at every step. He also shares how Veeva is preparing for a future where companies may want to bring their own LLMs or even run different ones by geography or task. Their flexible, harness-based approach is designed to support exactly that.
Looking ahead to the product's first release in December, Chris outlines how Veeva is working hand-in-hand with customers to ensure readiness and reliability from day one. We also touch on the broader mission: using AI not as a shiny add-on, but as a tool to accelerate drug development, reach patients faster, and relieve the pressure on already overstretched specialist teams. Chris closes with a dose of humanity, offering a book and song that both reflect Veeva's mindset, embracing disruption while staying grounded. This one is for anyone curious about how real, applied AI is unfolding inside one of the world's most important sectors, and what it means for the future of medicine.

The Sunday Show
How the EU's Voluntary AI Code is Testing Industry and Regulators Alike

The Sunday Show

Play Episode Listen Later Jul 13, 2025 21:39


Last week, following months of negotiation and just weeks before the first legal deadlines under the EU AI Act take effect, the European Commission published the final Code of Practice on General-Purpose AI. The Code is voluntary and intended to help companies demonstrate compliance with the AI Act. It sets out detailed expectations around transparency, copyright, and measures to mitigate systemic risks. Signatories will need to publish summaries of training data, avoid unauthorized use of copyrighted content, and establish internal frameworks to monitor risks. Companies that sign on will see a "reduced administrative burden" and greater legal clarity, the Commission said. At the same time, both European and American tech companies have raised concerns about the AI Act's implementation timeline, with some calling to "stop the clock" on the AI Act's rollout.

To learn more, Tech Policy Press associate editor Ramsha Jahangir spoke to Luca Bertuzzi, senior AI correspondent at MLex, to unpack the final Code of Practice on GPAI, why it matters, and how it fits into the broader rollout of the AI Act.

That Tech Pod
Innocent Until the Algorithm Says Otherwise. Trusting Tech When AI Gets It Wrong with Evan J. Schwartz

Jul 8, 2025 · 30:52


In this week's episode, Laura and Kevin sit down with Evan J. Schwartz, Chief Innovation Officer at AMCS Group, to explore where AI is actually making a difference and where it's doing real harm. From logistics and sustainability to law enforcement and digital identity, we dig into how AI is being used (and misused) in ways that affect millions of lives. We talk about a real-world case Evan worked on involving predictive analytics in law enforcement, and the dangers of trusting databases more than people. If someone hacks your digital footprint or plants fake records, how do you prove you're not the person your data says you are? We dive into the Karen Read case, the ethics of “precrime” models like in Minority Report, and a story where AI helped thieves trick a bank into wiring $40 million. The common thread? We've put a lot of faith in data... sometimes more than it deserves. With the EU AI Act now passed and other countries tightening regulation, Evan offers advice on how U.S.-based companies should prepare for a future where AI governance isn't optional. He also breaks down “dark AI” and whether we're getting close to machines making life-altering decisions without humans in the loop. Whether you're in tech, law, policy, or just trying to understand how AI might impact your own rights and identity, this conversation pulls back the curtain on how fast things are moving and what we might be missing. Evan J. Schwartz brings over 35 years of experience in enterprise tech and digital transformation. At AMCS Group, he leads innovation efforts focused on AI, data science, and sustainability in the logistics and resource recovery industries. He's held executive roles in operations, architecture, and M&A, and also teaches graduate courses in AI, cybersecurity, and project management. Evan serves on the Forbes Tech Council and advises at Jacksonville University. He's also the author of People, Places, and Things, an Amazon best-seller on ERP implementation.
His work blends technical depth with a sharp focus on ethics and real-world impact.

Risk Management Show
AI Regulations: What Risk Managers Must Do Now with Caspar Bullock

Jul 7, 2025 · 23:31


In this episode of the Risk Management Show, we dive into the critical topic of "AI Regulations: What Risk Managers Must Do Now." Join host Boris Agranovich and special guest Caspar Bullock, Director of Strategy at Axiom GRC, as they tackle the challenges and opportunities businesses face in navigating risk management, cybersecurity, and sustainability in today's rapidly evolving landscape. We discuss the growing importance of monitoring AI developments, preparing for upcoming regulations like the EU AI Act, and setting clear internal policies to meet customer demands and legal requirements. Caspar shares his expert perspective on building organizational resilience, the ROI of compliance programs, and addressing third-party risks in a complex supply chain environment. Whether you're a Chief Risk Officer, a compliance professional, or a business leader, this conversation offers actionable insights to help you stay ahead of emerging trends. If you want to be our guest or suggest a guest, send your email to info@globalriskconsult.com with the subject line “Podcast Guest.”

The Lawfare Podcast
Lawfare Archive: Itsiq Benizri on the EU AI Act

Jul 5, 2025 · 43:54


From February 16, 2024: The EU has finally agreed to its AI Act. Despite the political agreement reached in December 2023, some member states maintained reservations about the text, leaving it uncertain whether a final agreement had been reached. They recently reached agreement on the technical text, moving the process closer to a successful conclusion. The challenge now will be effective implementation. To discuss the act and its implications, Lawfare Fellow in Technology Policy and Law Eugenia Lostri sat down with Itsiq Benizri, counsel at the law firm WilmerHale Brussels. They discussed how domestic politics shaped the final text, how governments and businesses can best prepare for new requirements, and whether the European act will set the international roadmap for AI regulation. To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute. Support this show: http://supporter.acast.com/lawfare.

TechSurge: The Deep Tech Podcast
Open vs. Closed AI: Risks, Rewards, and Realities of Open Source Innovation

Jul 1, 2025 · 26:16


In TechSurge's Season 1 finale episode, we explore an important debate: should AI development be open source or closed? AI technology leader and UN Senior Fellow Senthil Kumar joins Michael Marks for a deep dive into one of the most consequential debates in artificial intelligence, exploring the fundamental tensions between democratizing AI access and maintaining safety controls. Sparked by DeepSeek's recent model release that delivered GPT-4 class performance at a fraction of the cost and compute, the discussion spans the economics of AI development, trust and transparency concerns, regulatory approaches across different countries, and the unique opportunities AI presents for developing nations. From Meta's shift from closed to open and OpenAI's evolution from open to closed, to practical examples of guardrails and the geopolitical implications of AI governance, this episode provides essential insights into how the future of artificial intelligence will be shaped not just by technological breakthroughs, but by the choices we make as a global community. If you enjoy this episode, please subscribe and leave us a review on your favorite podcast platform. Sign up for our newsletter at techsurgepodcast.com for updates on upcoming TechSurge Live Summits and news about Season 2 of the TechSurge podcast. Thanks for listening!

Links:
Slate.ai - AI-powered construction technology: https://slate.ai/
World Economic Forum on open-source AI: https://www.weforum.org/stories/2025/02/open-source-ai-innovation-deepseek/
EU AI Act overview: https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence

Artificial Intelligence in Industry with Daniel Faggella
AI in Healthcare Devices and the Challenge of Data Privacy - with Dr. Ankur Sharma at Bayer

Jun 24, 2025 · 19:06


Today's guest is Dr. Ankur Sharma, Head of Medical Affairs for Medical Devices and Digital Radiology at Bayer. Dr. Sharma joins Emerj Editorial Director Matthew DeMello to explore the complex intersection of AI, medical devices, and data governance in healthcare. Dr. Sharma outlines the key challenges that healthcare institutions face in adopting AI tools, including data privacy, system interoperability, and regulatory uncertainty. He also clarifies the distinction between regulated predictive models and unregulated generative tools, as well as how each fits into current clinical workflows. The conversation explores the evolving roles of the FDA and EU AI Act, the potential for AI to bridge clinical research and patient care, and the need for new reimbursement models to support digital innovation. This episode is sponsored by Medable. Learn how brands work with Emerj and other Emerj Media options at emerj.com/ad1. Want to share your AI adoption story with executive peers? Click emerj.com/expert2 for more information and to be a potential future guest on the 'AI in Business' podcast!