Podcasts about the EU AI Act

  • 333 podcasts
  • 558 episodes
  • 36m avg. duration
  • 5 new episodes weekly
  • Latest: Jul 16, 2025

Popularity (2017–2024)


Best podcasts about the EU AI Act

Latest podcast episodes about the EU AI Act

Serious Privacy
Personal Integrity, Regulatory capture & a week in Privacy

Serious Privacy

Play Episode Listen Later Jul 16, 2025 32:49


With Paul away, join K and Ralph for a riotous discussion of personal integrity and which positions we can work with and for, as regulators and industry cross-pollinate individuals and resources. Can regulators remain ethical and independent when they rely on industry for skills and abilities? Also, a week of news in privacy and data protection, with a roundup of EU, UK, US and international news, cases, regulations and standards, including age verification, censorship, the EU AI Act, privacy-preserving advertising, freedom-of-speech laws and new developments across the globe! If you have comments or questions, find us on LinkedIn and Instagram @seriousprivacy, and on BlueSky under @seriousprivacy.eu, @europaulb.seriousprivacy.eu, @heartofprivacy.bsky.app and @igrobrien.seriousprivacy.eu, or email podcast@seriousprivacy.eu. Rate and review us! From Season 6, our episodes are edited by Fey O'Brien. Our intro and exit music is Channel Intro 24 by Sascha Ende, licensed under CC BY 4.0, with the voiceover by Tim Foley.

The Customer Success Playbook
Customer Success Playbook S3 E68 - Gayle Gorvett - Who's Liable When AI Goes Wrong?

The Customer Success Playbook

Play Episode Listen Later Jul 16, 2025 12:38 Transcription Available


When AI systems fail spectacularly, who pays the price? Part two of our conversation with global tech lawyer Gayle Gorvett tackles the million-dollar question every business leader is afraid to ask. With federal AI regulation potentially paused for a decade while technology races ahead at breakneck speed, companies are left creating their own rules in an accountability vacuum. Gayle reveals why waiting for government guidance could be a costly mistake and how smart businesses are turning governance policies into competitive advantages. From the EU AI Act's complexity challenges to the state-by-state regulatory patchwork, this Customer Success Playbook episode exposes the legal landmines hiding in your AI implementation, and shows you how to navigate them before they explode.

Detailed Analysis

The accountability crisis in AI represents one of the most pressing challenges facing modern businesses, yet most organizations remain dangerously unprepared. Gayle Gorvett's revelation about the federal government's proposed 10-year pause on state AI laws while crafting comprehensive regulation highlights a sobering reality: businesses must become their own regulatory bodies or risk operating in a legal minefield. The concept of "private regulation" that Gayle introduces becomes particularly relevant for customer success teams managing AI-powered interactions. When your chatbots handle customer complaints, your predictive models influence renewal decisions, or your recommendation engines shape customer experiences, the liability implications extend far beyond technical malfunctions. Every AI decision becomes a potential point of legal exposure, making governance frameworks essential risk-management tools rather than optional compliance exercises. Perhaps most intriguingly, Gayle's perspective on governance policies as competitive differentiators challenges the common view of compliance as a business burden. In the customer success playbook framework, transparency becomes a trust-building mechanism that strengthens customer relationships rather than merely checking regulatory boxes. Companies that proactively communicate their AI governance practices position themselves as trustworthy partners in an industry where trust remains scarce. The legal profession's response to AI, requiring disclosure to clients and technical proficiency from practitioners, offers a compelling model for other industries. This approach acknowledges that AI literacy isn't just a technical requirement but a professional responsibility. For customer success leaders, this translates into a dual mandate: understanding AI capabilities enough to leverage them effectively while maintaining enough oversight to protect customer interests. The EU AI Act's implementation challenges that Gayle describes reveal the complexity of regulating rapidly evolving technology. Even comprehensive regulatory frameworks struggle to keep pace with innovation, reinforcing the importance of internal governance structures that can adapt quickly to new AI capabilities and emerging risks. This agility becomes particularly crucial for customer-facing teams, who often serve as the first line of defense.

Please like, comment, share and subscribe. You can also find the CS Playbook Podcast: YouTube - @CustomerSuccessPlaybookPodcast; Twitter - @CS_Playbook. You can find Kevin at Metzgerbusiness.com (Kevin's personal website) and as Kevin Metzger on LinkedIn. You can find Roman as Roman Trebon on LinkedIn.

Paymentandbanking FinTech Podcast
Episode 13_25: AI in Finance: Klarna klont CEO und Revolut integriert KI

Paymentandbanking FinTech Podcast

Play Episode Listen Later Jul 14, 2025 66:06


In episode 13, Maik Klotz and Sascha Dewald discuss the AI plans of Klarna and Revolut and recap BaFinTech 25. There was plenty going on beyond the fintech scene as well: the EU's AI Act is stirring tempers, OpenAI is poaching from Tesla and Meta, and, of all things, a German defense startup wants to go big with AI.

The Tech Blog Writer Podcast
3345: Veeva Systems and the Future of Agentic AI in Pharma

The Tech Blog Writer Podcast

Play Episode Listen Later Jul 13, 2025 30:51


AI is racing ahead, but for industries like life sciences, the stakes are higher and the rules more complex. In this episode, recorded just before the July heatwave hit its peak, I spoke with Chris Moore, President of Europe at Veeva Systems, from his impressively climate-controlled garden office. We covered everything from the trajectory of agentic AI to the practicalities of embedding intelligence in highly regulated pharma workflows, and how Veeva is quietly but confidently positioning itself to deliver where others are still making announcements. Chris brings a unique perspective shaped by a career that spans ICI Pharmaceuticals, PwC, IBM, and EY. That journey taught him how often the industry was forced to rebuild the same tech infrastructure again and again, until Veeva came along. He shares how Veeva's decision to build a life sciences-specific cloud platform from the ground up has enabled a deeper, more compliant integration of AI. We explored what makes Veeva AI different, from the CRM bot that handles compliant free text to MLR agents that support content review and approval. Chris explains how Veeva's AI agents inherit the context and controls of their applications, making them far more than chat wrappers or automation tools. They are embedded directly into workflows, helping companies stay compliant while reducing friction and saving time. And perhaps more importantly, he makes a strong case for why the EU AI Act isn't a barrier. It's a validation. From auto-summarising regulatory documents to pulling metadata from health authority correspondence, the real-world examples Chris offers show how Veeva AI will reduce repetitive work while ensuring integrity at every step. He also shares how Veeva is preparing for a future where companies may want to bring their own LLMs or even run different ones by geography or task. Their flexible, harness-based approach is designed to support exactly that.
Looking ahead to the product's first release in December, Chris outlines how Veeva is working hand-in-hand with customers to ensure readiness and reliability from day one. We also touch on the broader mission: using AI not as a shiny add-on, but as a tool to accelerate drug development, reach patients faster, and relieve the pressure on already overstretched specialist teams. Chris closes with a dose of humanity, offering a book and song that both reflect Veeva's mindset, embracing disruption while staying grounded. This one is for anyone curious about how real, applied AI is unfolding inside one of the world's most important sectors, and what it means for the future of medicine.

The Sunday Show
How the EU's Voluntary AI Code is Testing Industry and Regulators Alike

The Sunday Show

Play Episode Listen Later Jul 13, 2025 21:39


Last week, following months of negotiation and just weeks before the first legal deadlines under the EU AI Act take effect, the European Commission published the final Code of Practice on General-Purpose AI. The Code is voluntary and intended to help companies demonstrate compliance with the AI Act. It sets out detailed expectations around transparency, copyright, and measures to mitigate systemic risks. Signatories will need to publish summaries of training data, avoid unauthorized use of copyrighted content, and establish internal frameworks to monitor risks. Companies that sign on will see a "reduced administrative burden" and greater legal clarity, the Commission said. At the same time, both European and American tech companies have raised concerns about the AI Act's implementation timeline, with some calling to "stop the clock" on the AI Act's rollout. To learn more, Tech Policy Press associate editor Ramsha Jahangir spoke to Luca Bertuzzi, senior AI correspondent at MLex, to unpack the final Code of Practice on GPAI, why it matters, and how it fits into the broader rollout of the AI Act.

SAP Basis & Security
„Könnt ihr bitte Deepseek freigeben?“ – Max Beckmann über KI Compliance in der Praxis

SAP Basis & Security

Play Episode Listen Later Jul 9, 2025 38:46


In this episode I talk with Max Beckmann, security and AI consultant at mindsquare, about the challenges, legal requirements and practical approaches to handling artificial intelligence in companies. At the heart of the conversation: how can the use of AI be made sensible and secure, also with a view to current regulations such as the EU AI Act?

That Tech Pod
Innocent Until the Algorithm Says Otherwise. Trusting Tech When AI Gets It Wrong with Evan J. Schwartz

That Tech Pod

Play Episode Listen Later Jul 8, 2025 30:52


In this week's episode, Laura and Kevin sit down with Evan J. Schwartz, Chief Innovation Officer at AMCS Group, to explore where AI is actually making a difference and where it's doing real harm. From logistics and sustainability to law enforcement and digital identity, we dig into how AI is being used (and misused) in ways that affect millions of lives. We talk about a real-world case Evan worked on involving predictive analytics in law enforcement, and the dangers of trusting databases more than people. If someone hacks your digital footprint or plants fake records, how do you prove you're not the person your data says you are? We dive into the Karen Read case, the ethics of "precrime" models like in Minority Report, and a story where AI helped thieves trick a bank into wiring $40 million. The common thread? We've put a lot of faith in data... sometimes more than it deserves. With the EU AI Act now passed and other countries tightening regulation, Evan offers advice on how U.S.-based companies should prepare for a future where AI governance isn't optional. He also breaks down "dark AI" and whether we're getting close to machines making life-altering decisions without humans in the loop. Whether you're in tech, law, policy, or just trying to understand how AI might impact your own rights and identity, this conversation pulls back the curtain on how fast things are moving and what we might be missing. Evan J. Schwartz brings over 35 years of experience in enterprise tech and digital transformation. At AMCS Group, he leads innovation efforts focused on AI, data science, and sustainability in the logistics and resource-recovery industries. He's held executive roles in operations, architecture, and M&A, and also teaches graduate courses in AI, cybersecurity, and project management. Evan serves on the Forbes Tech Council and advises at Jacksonville University. He's also the author of People, Places, and Things, an Amazon best-seller on ERP implementation. His work blends technical depth with a sharp focus on ethics and real-world impact.

Horizon Scanning
Copyright and Generative AI: An update from the experts

Horizon Scanning

Play Episode Listen Later Jul 8, 2025 24:52


As the interplay between copyright and generative AI continues to dominate the headlines, the pressure on governments to find workable solutions, and to provide clarity, for both the creative and AI sectors is palpable. In our latest podcast, Laura Houston and Richard Barker discuss: the current position in the EU, including the scope of the existing exceptions for text and data mining, the copyright-focussed obligations in the EU AI Act, and the latest draft of the General Purpose AI Code of Practice; the UK's latest consultation on copyright and AI; and the key cases relating to copyright and generative AI in the UK and the EU.

Risk Management Show
AI Regulations: What Risk Managers Must Do Now with Caspar Bullock

Risk Management Show

Play Episode Listen Later Jul 7, 2025 23:31


In this episode of the Risk Management Show, we dive into the critical topic of "AI Regulations: What Risk Managers Must Do Now." Join host Boris Agranovich and special guest Caspar Bullock, Director of Strategy at Axiom GRC, as they tackle the challenges and opportunities businesses face in navigating risk management, cybersecurity, and sustainability in today's rapidly evolving landscape. We discuss the growing importance of monitoring AI developments, preparing for upcoming regulations like the EU AI Act, and setting clear internal policies to meet customer demands and legal requirements. Caspar shares his expert perspective on building organizational resilience, the ROI of compliance programs, and addressing third-party risks in a complex supply chain environment. Whether you're a Chief Risk Officer, a compliance professional, or a business leader, this conversation offers actionable insights to help you stay ahead of emerging trends. If you want to be our guest or suggest a guest, send your email to info@globalriskconsult.com with the subject line “Podcast Guest.”

The Lawfare Podcast
Lawfare Archive: Itsiq Benizri on the EU AI Act

The Lawfare Podcast

Play Episode Listen Later Jul 5, 2025 43:54


From February 16, 2024: The EU has finally agreed to its AI Act. Despite the political agreement reached in December 2023, some nations maintained reservations about the text, making it uncertain whether there was a final agreement or not. They recently reached an agreement on the technical text, moving the process closer to a successful conclusion. The challenge now will be effective implementation. To discuss the act and its implications, Lawfare Fellow in Technology Policy and Law Eugenia Lostri sat down with Itsiq Benizri, counsel at the law firm WilmerHale Brussels. They discussed how domestic politics shaped the final text, how governments and businesses can best prepare for new requirements, and whether the European act will set the international roadmap for AI regulation. To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute. Support this show: http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.

Digitaliseringspådden
Hva betyr EU AI Act for Norge? - med advokat Vebjørn Søndersrød fra Føyen

Digitaliseringspådden

Play Episode Listen Later Jul 4, 2025 93:59


On July 1, the public consultation on a new Norwegian law on artificial intelligence was published, also known as the AI Regulation (KI-forordningen), Norway's version of the EU's AI Act. What does this regulation mean for Norwegian businesses in practice? In this episode we are joined by Vebjørn Søndersrød, lawyer and partner at Føyen, a law firm with ICT expertise and a technology department of 14 lawyers. Vebjørn gives us a thorough walkthrough of what is prohibited, which risk levels exist, and how to adapt to the new rules. Hosts Jens Christian Bang and Dag Rustad lead the conversation and ask the critical questions. This is an episode for anyone who wants to understand what the AI Regulation actually entails, and how to prepare. Digitaliseringspådden is produced by Already On and CW.no. Visit us at digitaliseringspodden.alreadyon.com. You can find Digitaliseringspådden on all platforms: listen via Spotify, Apple Podcasts or YouTube Podcasts.

InfosecTrain
IAPP AIGP Certification: Essentials for AI Governance & Career Growth

InfosecTrain

Play Episode Listen Later Jul 4, 2025 50:58


In this session, we explore the IAPP AI Governance Professional (AIGP) certification and its growing relevance in today's AI-driven world. As artificial intelligence becomes deeply integrated into business and government, mastering AI governance, ethics, and compliance is essential for professionals across privacy, legal, and tech domains. You'll learn the fundamentals of responsible AI, the implications of regulations like the EU AI Act and GDPR, and how the AIGP certification equips you to lead in a rapidly evolving regulatory landscape. We also cover the key topics in the exam, its career benefits, and preparation strategies to help you succeed and stand out as a trusted AI governance expert.

HRM-Podcast
Recruiting DNA | Mitarbeiter finden, erfolgreich führen und motivieren: Klartext: KI & Feedback | 210

HRM-Podcast

Play Episode Listen Later Jul 2, 2025 17:55


Welcome to the new format of Recruiting DNA: "Klartext mit Max." No more platitudes: now you get real stories, unvarnished truths and, above all, practical solutions straight from everyday recruiting. In this first episode, Max Kraft tackles two burning topics. 1. AI in recruiting: gold mine or bureaucracy monster? Max analyzes how AI is currently being used in recruiting in Germany, or rather, how it isn't. Data protection, the EU AI Act and other regulations are slowing development. Max reports from his own day-to-day work on what his team actually does to use AI sensibly anyway: transparently, in compliance with data protection, and with a focus on the candidate. 2. Feedback in recruiting: why it happens so rarely and why that is fatal. Honest feedback in the application process is rare, and that harms everyone involved. Max explains why feedback is essential, where it belongs in the process, and why "we'll be in touch shortly" is not a serious response. Using practical examples, he shows how good feedback works, what it can achieve, and why it can be a real game-changer for recruiters and candidates alike. This episode is a passionate appeal for more honesty, more clarity and more courage in recruiting.

Handelskraft Digital Business Talk
Handelskraft #63: KI-Souveränität | Eine Technologie. Viele Abhängigkeiten. Mit Rebekka Weiß.

Handelskraft Digital Business Talk

Play Episode Listen Later Jul 2, 2025 42:58


In this episode, host Samuel Stötzner talks with Rebekka Weiß, Head of Regulatory Policy at Microsoft Deutschland. Together they explore what AI sovereignty really means: between regulatory zeal and technical reality, the EU AI Act and geopolitical uncertainty, data protection and investments in new European data centers.

TechSurge: The Deep Tech Podcast
Open vs. Closed AI: Risks, Rewards, and Realities of Open Source Innovation

TechSurge: The Deep Tech Podcast

Play Episode Listen Later Jul 1, 2025 26:16


In TechSurge's Season 1 finale episode, we explore an important debate: should AI development be open source or closed? AI technology leader and UN Senior Fellow Senthil Kumar joins Michael Marks for a deep dive into one of the most consequential debates in artificial intelligence, exploring the fundamental tensions between democratizing AI access and maintaining safety controls. Sparked by DeepSeek's recent model release that delivered GPT-4-class performance at a fraction of the cost and compute, the discussion spans the economics of AI development, trust and transparency concerns, regulatory approaches across different countries, and the unique opportunities AI presents for developing nations. From Meta's shift from closed to open and OpenAI's evolution from open to closed, to practical examples of guardrails and the geopolitical implications of AI governance, this episode provides essential insights into how the future of artificial intelligence will be shaped not just by technological breakthroughs, but by the choices we make as a global community. If you enjoy this episode, please subscribe and leave us a review on your favorite podcast platform. Sign up for our newsletter at techsurgepodcast.com for updates on upcoming TechSurge Live Summits and news about Season 2 of the TechSurge podcast. Thanks for listening!
Links:
Slate.ai, AI-powered construction technology: https://slate.ai/
World Economic Forum on open-source AI: https://www.weforum.org/stories/2025/02/open-source-ai-innovation-deepseek/
EU AI Act overview: https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence

Doppelgänger Tech Talk
Was passiert mit Siri | Yupp AI | OpenAI nutzt Google-TPUs #471

Doppelgänger Tech Talk

Play Episode Listen Later Jul 1, 2025 68:08


The talent-poaching battle for AI researchers continues. Apple is negotiating with OpenAI and Anthropic to upgrade Siri with third-party LLMs, while also planning a low-cost A18 MacBook Air and several lightweight AR glasses. Shein stumbles ahead of its IPO: London falls through, and now a confidential Hong Kong filing amid slowing growth. Berlin's data protection authority wants DeepSeek banned from German app stores. Yupp AI launches as a meta search engine for LLMs: one prompt, two answers, users pick the better model. OpenAI switches to Google TPUs for inference: cheaper, faster, more independent. Roger Federer passes the billion mark thanks to his stake in On. WhatsApp Business will soon bill per message and earn money from AI bots. Tesla loses its head of production; X hires product tinkerer Nikita Bier. Trump's team plans 47 ATF deregulations, and the TikTok ban is postponed yet again. Amazon now employs over a million robots, startups are calling for a moratorium on the EU AI Act, and Microsoft's diagnostic AI beats doctors on rare cases. Support our podcast and discover our advertising partners' offers at doppelgaenger.io/werbung. Thank you!
Philipp Glöckler and Philipp Klöckner talk today about: (00:00:00) Meta poaches OpenAI researchers (00:07:55) Apple seeks an LLM partner (00:14:00) Shein IPO wobbles (00:23:45) Berlin data protection authority wants DeepSeek out of app stores (00:25:33) Yupp AI side-by-side comparison of LLM answers (00:31:20) OpenAI uses Google TPUs for inference (00:43:20) Roger Federer becomes a billionaire thanks to his On stake (00:48:00) WhatsApp Business: switch to a pay-per-message model (00:49:30) Tesla loses head of production; Nikita Bier new X product chief (00:55:00) Trump (00:56:00) TikTok ban postponed again (00:58:00) Amazon reports 1 million robots (01:00:05) Good news of the day. Shownotes: OpenAI leadership responds to Meta offers – wired.com; Zuckerberg announces Meta 'superintelligence' project – bloomberg.com; Apple considers Anthropic or OpenAI for Siri – bloomberg.com; Apple working on 7 head-mounted displays – 9to5mac.com; Apple to release cheaper MacBook with iPhone processor – 9to5mac.com; Shein plans confidential Hong Kong listing – reuters.com; US buyers avoid Shein and Temu after Trump closes tax loophole – ft.com; DeepSeek faces ban from German app stores of Apple and Google – reuters.com; Google convinces OpenAI to use TPU chips – theinformation.com; Roger Federer's long-term deals make him a tennis billionaire – bloomberg.com; 500+ AI models compared – x.com; WhatsApp Business platform pricing | WhatsApp API pricing – business.whatsapp.com; Elon Musk confidant Omead Afshar leaves Tesla – bloomberg.com; Musk's X hires entrepreneur Nikita Bier as head of product – bloomberg.com; DOGE joins ATF to reduce gun regulations – washingtonpost.com; TikTok in the US: Trump finds a buyer – zeit.de; Amazon close to deploying more robots than humans in warehouses – wsj.com; European startups and VCs call on EU to pause AI Act – sifted.eu; Microsoft: new AI system diagnoses more accurately than doctors – wired.com

Irish Tech News Audio Articles
Why Compliance is the Next Big Opportunity for IT Channel Partners

Irish Tech News Audio Articles

Play Episode Listen Later Jun 27, 2025 7:47


Guest post by Ronnie Hamilton, Pre-Sales Director, Climb Channel Solutions Ireland

If compliance feels overwhelming right now, you're not imagining it. New regulations covering cybersecurity, data protection, AI, and more are emerging, from the latest PCI DSS updates to the EU AI Act. As a result, compliance is actively shaping the IT channel, influencing how we do business, how we anticipate industry shifts, and how we support our partners and customers with the right solutions to stay ahead. Navigating compliance in 2025 means being aligned with regulatory requirements, but it's a balancing act, because at the end of the day we all still have a job to do: delivering the right solutions, tailoring services to customer needs, and being a trusted partner. With new regulations coming into force and the mounting challenge of understanding cybersecurity, AI governance, and data integrity requirements, it's more important than ever to stay ahead. On the other hand, those who stay agile and deliver solutions that meet regulatory demands have an opportunity to turn the compliance headache into a competitive advantage.

The Agility Advantage of Smaller Partners

Smaller channel partners face growing pressure from complex customer environments, resource constraints, and fierce competition for skilled talent. However, their agility provides a unique advantage. Unlike larger enterprises, they can quickly adapt to evolving customer needs, position themselves as trusted advisors, and identify emerging vendors, particularly those offering AI-powered and automated solutions. AI adoption plays a critical role in maintaining a competitive edge. By embracing AI, smaller partners can deliver exceptional managed services with fewer resources, keeping costs low and service quality high. This approach ensures they remain competitive in a crowded market.

Tackling the EU NIS2 Directive

The EU NIS2 Directive reinforces the need for robust cybersecurity measures, urging businesses to adopt a more comprehensive approach to risk management. Essential security practices such as multi-factor authentication, regular cybersecurity training, incident response planning, and strong supply chain security are no longer optional but essential. A key principle underlying the directive is the Identify, Detect, Protect, Respond, and Recover framework. While most organisations focus heavily on detection and protection, recovery is sometimes a weak link. A lengthy recovery period following a breach can be as harmful as failing to detect the threat in the first place. The integration of automation into threat detection and response processes is becoming more important for meeting compliance requirements.

The EU AI Act: Compliance Meets Innovation

The EU AI Act introduces new obligations for organisations deploying AI solutions, emphasising transparency, accountability, and risk management throughout the AI lifecycle. These requirements extend to all aspects of AI implementation, from data sourcing and model training to real-world deployment. To address compliance risks, managed service providers may consider introducing AI governance roles, such as "AI Managers as a Service." These specialists help organisations navigate AI regulations without requiring full-time in-house expertise. While compliance with AI regulations may introduce additional costs, the long-term benefits, such as enhanced customer trust, clear documentation, and ethical AI practices, can significantly outweigh the initial investment. Rather than viewing compliance as a regulatory burden, partners should position it as an opportunity to strengthen customer relationships and stand out.

Automation and AI: Key Enablers of Compliance

AI and automation are proving indispensable for managing compliance complexity. From automating repetitive processes to monitoring security events and ensuring adherence to evolving standards, these technologies help organisations streamline compliance efforts while mini...

PwC Luxembourg TechTalk
Financial services' road to AI: Where we are and where we're headed

PwC Luxembourg TechTalk

Play Episode Listen Later Jun 26, 2025 51:13


In this episode of TechTalk, we explore how financial services are steering toward AI — covering emerging regulations like the EU AI Act, trust-building, collaboration, and the shift from experimentation to real-world applications. To guide us through this evolving landscape, we're joined by Ulf Herbig, Chairman of the EFAMA AI Task Force and Chairman of ALFI's Digital Finance Working Group on Innovation and Technology; and Sébastien Schmitt, Partner in Regulatory Risk and Compliance at PwC Luxembourg. 

Experten & Marketing
EU AI ACT – wie du KI rechtssicher in deinem Expertenbusiness einsetzt

Experten & Marketing

Play Episode Listen Later Jun 26, 2025 18:08


318: The EU AI Act is not a future scenario, it is reality. Since February 2025 the rule has been: if you use AI tools, you bear the responsibility, and that means you personally, even as a solopreneur or small business. In this episode you'll learn what the EU AI Act concretely means for your day-to-day work as an expert and consultant, and where legal pitfalls lurk that can cost you real money. You'll get clarity on risk classes, transparency obligations and copyright. Above all: how to use AI sensibly and in a legally compliant way, without handing over your expertise. If you work with AI (or plan to), this episode is required listening. So head to the Fox-Cast, press play and use AI with peace of mind. Shownotes: Web: https://martina-fuchs.com/318 ► AI competence certificate training: info and registration with code "FOXCAST" for your special deal: https://www.martina-fuchs.com/kischulung ► Strategy call, your path to Status:Ausgebucht: get your personal roadmap for €0 and book your appointment here: https://www.martina-fuchs.com/termin ► Expert Branding Kick Start Kit: https://www.martina-fuchs.com/kit ► Book 1: https://www.martina-fuchs.com/statusausgebucht ► Book 2: https://www.martina-fuchs.com/buch Impressum: https://martina-fuchs.com/impressum LinkedIn: https://www.linkedin.com/in/martinafuchs Instagram: https://www.instagram.com/martinafuchs.official Threads: https://www.threads.net/@martinafuchs.official Facebook: https://www.facebook.com/smartzumerfolg YouTube: https://www.youtube.com/FuchsMartina

FundraisingAI
Episode 61 - Navigating Super Intelligence, Governance, and Human-First Transformation

FundraisingAI

Play Episode Listen Later Jun 25, 2025 31:33


In the rapidly accelerating world of artificial intelligence, the pace of innovation can feel overwhelming. From groundbreaking advancements to the ongoing debate about governance and ethical implications, AI is not just a tool; it's a transformative force. As we race towards superintelligence and navigate increasingly sophisticated models, how do we ensure that human values remain at the core of this technological revolution? How do we, especially in the trust-based nonprofit sector, lead with intentionality and ensure AI serves humanity rather than superseding it? In this episode, Nathan and Scott dive into the relentless evolution of AI, highlighting Meta's staggering $15 billion investment in the race for superintelligence and the critical absence of robust regulation. They reflect on the essential shift from viewing AI adoption as a finite "destination" to embracing it as an ongoing "journey." Nathan shares insights on how AI amplifies human capabilities, particularly for those who are "marginally" good at certain skills, advocating for finding your "why" and offloading tasks AI can do better. Scott discusses his recent AI governance certification, underscoring the complexities and lack of "meat on the bone" in US regulations compared to the EU AI Act. The conversation also explores the concept of AI agents, offering practical tips for leveraging them, even for those with no coding experience. They conclude with a powerful reminder: AI is a mirror reflecting our values, and the nonprofit sector has a vital role in shaping its ethical future.
HIGHLIGHTS
[01:15] AI Transformation: A Journey, Not a Destination
[03:00] If AI Can Do It Better: Finding Your Human "Why"
[04:05] AI Outperforming Human Capabilities
[05:00] Meta's $15 Billion Investment in Super Intelligence
[07:16] The Manipulative Nature of AI and the "Arms Race" for Super Intelligence
[09:27] The Importance and Challenges of AI Governance and Regulation
[14:50] AI as a Compass, Not a Silver Bullet
[16:39] Beware the AI Finish Line Illusion
[18:12] Small Steps, Sustained Momentum: The "Baby Steps" Approach to AI
[26:48] Tip of the Week: The Rise of AI Agents and Practical Use Cases
[32:24] The Power of Curiosity in AI Exploration
RESOURCES
Relay.app: relay.app
Zapier: zapier.com
Make.com: make.com
N.io: n.io
Connect with Nathan and Scott:
LinkedIn (Nathan): linkedin.com/in/nathanchappell/
LinkedIn (Scott): linkedin.com/in/scott-rosenkrans
Website: fundraising.ai/

Artificial Intelligence in Industry with Daniel Faggella
AI in Healthcare Devices and the Challenge of Data Privacy - with Dr. Ankur Sharma at Bayer

Artificial Intelligence in Industry with Daniel Faggella

Play Episode Listen Later Jun 24, 2025 19:06


Today's guest is Dr. Ankur Sharma, Head of Medical Affairs for Medical Devices and Digital Radiology at Bayer. Dr. Sharma joins Emerj Editorial Director Matthew DeMello to explore the complex intersection of AI, medical devices, and data governance in healthcare. Dr. Sharma outlines the key challenges that healthcare institutions face in adopting AI tools, including data privacy, system interoperability, and regulatory uncertainty. He also clarifies the distinction between regulated predictive models and unregulated generative tools, as well as how each fits into current clinical workflows. The conversation explores the evolving roles of the FDA and EU AI Act, the potential for AI to bridge clinical research and patient care, and the need for new reimbursement models to support digital innovation. This episode is sponsored by Medable. Learn how brands work with Emerj and other Emerj Media options at emerj.com/ad1. Want to share your AI adoption story with executive peers? Click emerj.com/expert2 for more information and to be a potential future guest on the ‘AI in Business' podcast!

Irish Tech News Audio Articles
€0.5m in funding for Trinity to develop AI platform for teachers

Irish Tech News Audio Articles

Play Episode Listen Later Jun 24, 2025 5:18


A team of researchers at Trinity College Dublin has received €500,000 in funding to develop an AI-enabled platform to help teachers create assessments and provide formative feedback to learners. The project is called Diotima and is supported by The Learnovate Centre, a global research and innovation centre in learning technology in Trinity College Dublin. Diotima began its partnership with Learnovate in February this year and is expected to spin out as a company in 2026. The €500,000 funding was granted under Enterprise Ireland's Commercialisation Fund, which supports third-level researchers to translate their research into innovative and commercially viable products, services and companies. Diotima supports teaching practice by using responsible AI to provide learners with feedback, leading to more and better assessments and improved learning outcomes for students, and a more manageable workload for teachers. The project was co-founded by Siobhan Ryan, a former secondary school teacher, biochemist and environmental scientist, and Jonathan Dempsey, an EdTech professional with both start-up and corporate experience. Associate Professor Ann Devitt, Head of the Trinity School of Education, and Carl Vogel, Professor of Computational Linguistics and Director of the Trinity Centre for Computing and Language Studies, are serving as co-principal investigators on the project. Diotima received the funding in February. Since then, the project leaders have established an education advisory group formed of representatives from post-primary and professional education organisations. The Enterprise Ireland funding has facilitated the hiring of two post-doctoral researchers. They are now leading AI research ahead of the launch of an initial version of the platform in September 2025. Diotima aims to conduct two major trials of the platform as they also seek investment. Co-founder Siobhan Ryan is Diotima's Learning Lead. 
After a 12-year career in the brewing industry with Diageo, Siobhan re-trained as a secondary school teacher before leaving the profession to develop the business case for a formative assessment and feedback platform. Her experience in the classroom made her realise that she could have a greater impact by leveraging AI to create a platform to support teachers in a safe, transparent, and empowering way. Her fellow co-founder Jonathan Dempsey is Commercial Lead at Diotima. He had been CEO of the Enterprise Ireland-backed EdTech firm Digitary, which is now part of multinational Instructure Inc. He held the role of Director of UK and Ireland for US education system provider Ellucian and Head of Education and Education Platforms for Europe with Indian multinational TCS. Jonathan has a wealth of experience at bringing education technologies to market. Learnovate Centre Director Nessa McEniff says: "We are delighted to have collaborated with the Diotima team to secure €500,000 investment from Enterprise Ireland's Commercialisation Fund. Diotima promises to develop into a revolutionary platform for learners in secondary schools and professional education organisations, delivering formative feedback and better outcomes overall. We look forward to supporting them further as they continue to develop the platform in the months ahead." Enterprise Ireland Head of Research, Innovation and Infrastructure Marina Donohoe says: "Enterprise Ireland is delighted to support Diotima under the Commercialisation Fund. We look forward to seeing them continue in their mission to transform teaching practice through AI enabled assessment and feedback. We believe that the combination of excellence in AI and in education from Trinity College, expertise in education technology from the Learnovate Centre and focus on compliance with the EU AI Act and other regulations will see the Diotima team make a global impact". 
Diotima Learning Lead and co-founder Siobhan Ryan says: "We're delighted to have received such a significant award from the Enterprise Ireland C...

The Data Diva E241 - Phillip Mason and Debbie Reynolds

"The Data Diva" Talks Privacy Podcast

Play Episode Listen Later Jun 17, 2025 45:16 Transcription Available


In episode 241 of The Data Diva Talks Privacy Podcast, host Debbie Reynolds, "The Data Diva," welcomes Phillip Mason, Global Privacy Program Manager at Corning, Inc. Phillip joins Debbie to discuss the complicated interplay between AI advancement, regulatory frameworks, and the ethical imperative of human judgment. Drawing from his diverse background in accounting, law, and privacy, Phillip offers an informed and multidimensional perspective on how businesses navigate emerging risks. He critiques overbroad AI legislation like the EU AI Act, which he believes may have unintended consequences for innovation, particularly among smaller firms lacking legal and compliance resources. Debbie and Phillip dive into examples of poorly executed AI rollouts that sparked public backlash, such as LinkedIn's data harvesting practices and Microsoft's Recall feature, emphasizing the importance of transparency and foresight. Phillip also unpacks the difference between having a "human in the loop" and placing real ethical judgment into practice. They discuss how organizations can build a culture of trust and accountability where data science and compliance work harmoniously. The conversation ultimately underscores that as algorithms get smarter, human oversight must also evolve, with thoughtful governance, interdisciplinary collaboration, and values-driven leadership.

121STUNDEN talk - Online Marketing weekly I 121WATT School for Digital Marketing & Innovation
KI-Recht verstehen und anwenden: Was der EU AI Act für dein Unternehmen wirklich bedeutet | 121WATT Podcast #153

121STUNDEN talk - Online Marketing weekly I 121WATT School for Digital Marketing & Innovation

Play Episode Listen Later Jun 17, 2025 41:44


In episode #153 of the 121WATT podcast, Sarah, Patrick, and IT-law specialist attorney Dr. Martin Schirmbacher discuss the legal challenges of everyday AI use: from the EU AI Act and the AI competence obligation to questions around usage rights and deepfakes.

Irish Tech News Audio Articles
Expleo research reveals 70% of large enterprises in Ireland believe AI should be managed like an employee

Irish Tech News Audio Articles

Play Episode Listen Later Jun 16, 2025 5:17


Expleo, the global technology, engineering and consulting service provider, today launches its Business Transformation Index 2025. To mark the launch, Expleo is revealing new data showing that 70% of Ireland's largest enterprises believe AI's impact on workforces is so profound that it should be managed like an employee to avoid conflicts with company culture and people. The sixth edition of Expleo's award-winning Business Transformation Index (BTI) assesses the attitudes and sentiments of 200 IT and business decision-makers in Ireland, in enterprises with 250 employees or more. The report examines themes including digital transformation, geopolitics, AI and DEI and provides strategic recommendations for organisations to overcome challenges relating to these. BTI 2025 found that while 98% of large enterprises are using AI in some form, 67% believe their organisation can't effectively use AI because their data is too disorganised. As a result, just 30% have integrated and scaled AI models into their systems. Almost a quarter (23%) admitted that they are struggling to find use cases for AI beyond the use of off-the-shelf large language models (LLMs). Despite remaining in the early stages of AI deployment, senior decision-makers are already making fundamental changes to the skills makeup of their teams due to AI's influence and its capabilities. Expleo's research found that 72% of organisations have made changes to the criteria they seek from job candidates because AI can now take on some tasks, while its application requires expertise in other areas. Meanwhile, more than two-thirds (68%) of enterprises who are deploying AI have stopped hiring for certain roles entirely because AI can handle the requirements. The research shows that as AI absorbs tasks in some areas, it is offering workforce opportunities in others. 
While 30% of enterprise leaders cite workforce displacement as one of their greatest fears resulting from AI, 72% report that they will pay more for team members who have AI-specific skills. The colliding worlds of humans and machines are further revealed in BTI 2025 as 78% of organisations say the correct and ethical use of AI is now covered in their employment contracts. However, the BTI indicates that employers themselves may not be living up to their side of the bargain, as 25% of business and IT leaders conceded a possibility that the AI used for hiring, retention or employee progression in their organisation could be biased. The uncertainty about the objectivity of their AI could explain why 25% of decision-makers are also not confident that their organisation is compliant with the EU AI Act. The Act, it seems, is a bone of contention for many as 76% believe the EU AI Act will hinder adoption of AI in their organisation. Phil Codd, Managing Director, Expleo Ireland, said: "The pace of change that we are seeing from AI is like nothing we have seen before - not even the Industrial Revolution unfolded so quickly or indiscriminately in terms of the industries and people it impacted. And, the workforce's relationship with AI is complicated - on the one hand, they are turning to AI to make their jobs more manageable and to reduce stress, but at the same time, they worry that its broad deployment across their organisation could impinge on their work and therefore their value as an employee. "Business leaders are entering untrodden ground as they try to solve how AI can work for them - both practically and ethically - and without causing clashes within teams. There is no question that there is a new digital colleague joining Irish workplaces and it will define the next chapter of our working lives and economy. However, the success of this seemingly autonomous technology will always depend on the humans and data that back it up. 
"At Expleo, we work with enterprises to ensure they are reaping the benefits of AI by looking holistically at their people, processes and data. AI requires, and will bring, significant changes...

Legal Leaders Exchange
Episode 28: Regulating Intelligence: Navigating AI Governance with Wolters Kluwer

Legal Leaders Exchange

Play Episode Listen Later Jun 12, 2025 31:02


As AI becomes more embedded in the daily lives of legal departments, the call for robust regulatory frameworks is louder than ever. In the 28th episode of Legal Leaders Exchange, we sit down with experts Ken Crutchfield and Jennifer McIver to explore the evolving landscape of AI regulations—from the EU AI Act to global compliance trends. We unpack what these changes mean for legal professionals, compliance officers, and tech leaders, and how organizations can proactively prepare for the future of AI governance.

Transform Your Workplace
How HR Can Lead the AI Revolution Without Losing Its Humanity

Transform Your Workplace

Play Episode Listen Later Jun 10, 2025 36:56


HR consultant Daniel Strode discusses AI's impact on human resources, highlighting recruitment and data analytics as prime areas for adoption. He introduces his "5P model" emphasizing policy/governance and people/culture transformation as critical success factors. While AI adoption remains slow—only 25% of adults regularly use tools like ChatGPT—organizations are unknowingly integrating AI through software updates. Strode advocates for proper governance policies ahead of regulations like the EU AI Act, positioning AI as a collaborative tool enhancing rather than replacing human capabilities. TAKEAWAYS 5P Framework: Success requires addressing process enhancement, personalization, predictive insights, policy/governance, and people/culture transformation Governance First: Establish AI ethics policies, bias auditing, and compliance training before implementation, especially with upcoming EU AI Act regulations Human-AI Partnership: Use AI for manual processes while focusing HR professionals on strategic work like employee experience and change management A QUICK GLIMPSE INTO OUR PODCAST

MLOps.community
Packaging MLOps Tech Neatly for Engineers and Non-engineers // Jukka Remes // #322

MLOps.community

Play Episode Listen Later Jun 10, 2025 55:30


Packaging MLOps Tech Neatly for Engineers and Non-engineers // MLOps Podcast #322 with Jukka Remes, Senior Lecturer (SW dev & AI), AI Architect at Haaga-Helia UAS, Founder & CTO at 8wave AI.
Join the Community: https://go.mlops.community/YTJoinIn
Get the newsletter: https://go.mlops.community/YTNewsletter
// Abstract
AI is already complex—adding the need for deep engineering expertise to use MLOps tools only makes it harder, especially for SMEs and research teams with limited resources. Yet, good MLOps is essential for managing experiments, sharing GPU compute, tracking models, and meeting AI regulations. While cloud providers offer MLOps tools, many organizations need flexible, open-source setups that work anywhere—from laptops to supercomputers. Shared setups can boost collaboration, productivity, and compute efficiency.
In this session, Jukka introduces an open-source MLOps platform from Silo AI, now packaged for easy deployment across environments. With Git-based workflows and CI/CD automation, users can focus on building models while the platform handles the MLOps.
// Bio
Founder & CTO, 8wave AI | Senior Lecturer, Haaga-Helia University of Applied Sciences
Jukka Remes has 28+ years of experience in software, machine learning, and infrastructure. Starting with SW dev in the late 1990s and analytics pipelines of fMRI research in the early 2000s, he's worked across deep learning (Nokia Technologies), GPU and cloud infrastructure (IBM), and AI consulting (Silo AI), where he also led MLOps platform development. Now a senior lecturer at Haaga-Helia, Jukka continues evolving that open-source MLOps platform with partners like the University of Helsinki.
He leads R&D on GenAI and AI-enabled software, and is the founder of 8wave AI, which develops AI Business Operations software for next-gen AI enablement, including regulatory compliance of AI.
// Related Links
Open source-based MLOps k8s platform setup originally developed by Jukka's team at Silo AI - free for any use and installable in any environment from laptops to supercomputing: https://github.com/OSS-MLOPS-PLATFORM/oss-mlops-platform
Jukka's new company: https://8wave.ai
~~~~~~~~ ✌️ Connect With Us ✌️ ~~~~~~~
Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
Join our Slack community: https://go.mlops.community/slack
Follow us on X/Twitter @mlopscommunity (https://x.com/mlopscommunity) or LinkedIn (https://go.mlops.community/linkedin)
Sign up for the next meetup: https://go.mlops.community/register
MLOps Swag/Merch: https://shop.mlops.community/
Connect with Demetrios on LinkedIn: /dpbrinkm
Connect with Jukka on LinkedIn: /jukka-remes
Timestamps:
[00:00] Jukka's preferred coffee
[00:39] Open-Source Platform Benefits
[01:56] Silo MLOps Platform Explanation
[05:18] AI Model Production Processes
[10:42] AI Platform Use Cases
[16:54] Reproducibility in Research Models
[26:51] Pipeline setup automation
[33:26] MLOps Adoption Journey
[38:31] EU AI Act and Open Source
[41:38] MLOps and 8wave AI
[45:46] Optimizing Cross-Stakeholder Collaboration
[52:15] Open Source ML Platform
[55:06] Wrap up

Ecosystemic Futures
91. Navigating the Cognitive Revolution: What Makes Us Human in an AI World

Ecosystemic Futures

Play Episode Listen Later Jun 3, 2025 49:22


As AI systems approach and potentially surpass human cognitive benchmarks, how do we design hybrid intelligence frameworks that preserve human agency while leveraging artificial cognitive enhancements? In this exploration of human-AI convergence, anthropologist and organizational learning expert Dr. Lollie Mancey presents a framework for the "cognitive revolution," the fourth transformational shift in human civilization following the agricultural, industrial, and digital eras. Drawing from Berkeley's research on the science of awe, Vatican AI policy frameworks, and indigenous knowledge systems, Mancey analyzes how current AI capabilities (GPT-4 operating at Einstein-level IQ) are fundamentally reshaping cognitive labor and social structures. She examines the EU AI Act's predictive policing clauses, the implications of quantum computing, and the emerging grief tech sector as indicators of broader systemic transformation. Mancey identifies three meta-cognitive capabilities essential for human-AI collaboration: critical information interrogation, systematic curiosity protocols, and epistemic skepticism frameworks. Her research on AI companion platforms reveals neurological patterns resembling addiction pathways. At the same time, her fieldwork with Balinese communities demonstrates alternative models of technological integration based on reciprocal participation rather than extractive optimization.
This conversation provides actionable intelligence for organizations navigating the transition from human-centric to hybrid cognitive systems.
Key Research Insights
• Cognitive Revolution Metrics: Compound technological acceleration outpaces regulatory adaptation, with education systems lagging significantly, requiring new frameworks for cognitive load management and decision architecture in research environments
• Einstein IQ Parity Achieved: GPT-4 operates at Einstein-level intelligence yet lacks breakthrough innovation capabilities, highlighting critical distinctions between pattern recognition and creative synthesis for R&D resource allocation
• Neurological Dependency Patterns: AI companion platforms demonstrate "catnip-like" effects, with users exhibiting hyper-fixation behaviors and difficulty with "digital divorce"—profound implications for workforce cognitive resilience
• Epistemic Security Crisis: Deep fakes eliminated content authentication while AI hallucinations embed systemic biases from internet-scale training data, requiring new verification protocols and decision-making frameworks
• Alternative Integration Architecture: Balinese reciprocal participation models versus Western extractive paradigms offer scalable approaches for sustainable innovation ecosystems and human-technology collaboration
#EcosystemicFutures #CognitiveRevolution #HybridIntelligence #NeuroCognition #QuantumComputing #SociotechnicalSystems #HumanAugmentation #SystemsThinking #FutureOfScience
Guest: Lorraine Mancey, Programme Director at UCD Innovation Academy
Host: Marco Annunziata, Co-Founder, Annunziata Desai Partners
Series Hosts:
Vikram Shyam, Lead Futurist, NASA Glenn Research Center
Dyan Finkhousen, Founder & CEO, Shoshin Works
Ecosystemic Futures is provided by NASA - National Aeronautics and Space Administration Convergent Aeronautics Solutions Project in collaboration with Shoshin Works.

Irish Tech News Audio Articles
Ireland Well Placed to Influence AI EU Innovation

Irish Tech News Audio Articles

Play Episode Listen Later May 23, 2025 4:22


European Movement Ireland and Konrad-Adenauer-Stiftung (KAS) UK and Ireland hosted 'Artificial Intelligence - How will Europe Innovate?' The event explored the challenges and opportunities ahead for AI innovation, political leadership and the future development of AI across Europe, as the European Union sets out its ambitious agenda to become a global leader in AI.
AI EU Innovation
The EU AI Act, which forms part of this vision, is the world's first act to regulate the use of AI globally. In force since 2024, with some exemptions for high-risk AI until 2027, the EU AI Act will be fully applicable from 2026, coinciding with Ireland's Presidency of the European Council. Given the presence of multinational tech companies and leading research institutions in the country, Ireland is well positioned to influence how AI is advanced across the bloc into the future. Chair of the Oireachtas Committee on EU Affairs, Barry Ward TD, said: "As Europe takes bold steps toward responsible AI innovation, today's discussion underscores the need for political leadership that is both visionary and grounded in our shared values. With Ireland preparing to take on the Presidency of the European Council in 2026, along with our thriving tech sector and academic excellence, we are uniquely placed to help lead this conversation and ensure AI development in Europe is ethical, innovative, and inclusive." Noelle O Connell, CEO of European Movement Ireland, said: "As the global race continues for leadership in AI, I am delighted to hear the statement from Minister Smyth, welcome Chair of the Oireachtas Committee on EU Affairs Barry Ward TD, and listen to the insights from the expert panel today on AI innovation, as it increasingly shapes all aspects of our daily lives and influences decision making.
We are at a pivotal time when trust in institutions is falling, as revealed by EM Ireland's EU Poll 2025: a majority stated (40%) they do not trust any institution and less than one in three (30%) expressed trust in the EU in Ireland. As the EU seeks to be bold in its vision for AI, it must ensure developments in AI work to serve the public good, and do not erode trust into the future." The Minister for Trade Promotion, Artificial Intelligence and Digital Transformation, Niamh Smyth TD, appeared prior to the discussion with a short video statement. The expert panel was moderated by Noelle O Connell and included Barry Ward TD, Chair of the Oireachtas Committee on European Union Affairs; Stephanie Anderson, Public Policy Manager, Meta; Dr. Eamonn Cahill, Principal Officer, AI and Digital Regulation Unit, Department of Enterprise, Trade and Employment; and Kai Zenner, Head of Office and Digital Policy Adviser for MEP Axel Voss. Dr. Canan Atilgan, Konrad-Adenauer-Stiftung (KAS) UK and Ireland, said: "The EU aims to become a global leader in AI and has unveiled an ambitious Action Plan - a bold strategy designed not merely to compete, but to lead ethically, with a clear, human-centred vision." Artificial Intelligence - How Will Europe Innovate? brought citizens, businesses, and policymakers together to explore the themes of the future of AI, and the regulation of AI in practice. The hashtag #EMIKAS and the handles @KAS_UKIRL and @emireland were used during the event. See more breaking stories here. More about Irish Tech News Irish Tech News are Ireland's No. 1 Online Tech Publication and often Ireland's No. 1 Tech Podcast too. You can find hundreds of fantastic previous episodes and subscribe using whatever platform you like via our Anchor.fm page here: https://anchor.fm/irish-tech-news If you'd like to be featured in an upcoming Podcast email us at Simon@IrishTechNews.ie now to discuss. Irish Tech News have a range of services available to help promote your business.
Why not drop us a line at Info@IrishTechNews.ie now to find out more about how we can help you reach our audience. You can also find and follow us on Twitter, LinkedIn, Facebook, Instagram, TikTok and Snapchat.

Tech Radio
1070: I/O I/O Off to Work We Go

Tech Radio

Play Episode Listen Later May 22, 2025 58:24


This week we have new AI announcements from Google I/O and Microsoft Build, Bitcoin hits a new high and Elon Musk says goodbye to the world of politics. Plus, Barry Scannell from William Fry explains how to stay on the right side of the EU AI Act.
Listen to Tech Radio now on Apple, Spotify and YouTube:
Apple - https://podcasts.apple.com/us/podcast/tech-radio-ireland/id256279328
Spotify - https://open.spotify.com/show/5vAWM1vvHbQKYE79dgCGY2
YouTube - https://www.youtube.com/@TechRadioIreland
RSS - https://feeds.transistor.fm/techradio

Liebe Zeitarbeit
KI, Community & Zukunft: Wie Netzwerke den Fortschritt treiben - Christoph Seipp

Liebe Zeitarbeit

Play Episode Listen Later May 21, 2025 40:42


Campus 10178
Ethical AI

Campus 10178

Play Episode Listen Later May 21, 2025 37:28


Exploring the societal impact of analytics and artificial intelligence with Catalina Stefanescu-Cuntze and Urs Mueller
In this episode of Campus 10178 – the podcast of ESMT Berlin – Catalina Stefanescu-Cuntze and Urs Mueller join host Tammi L. Coles for a conversation about the ethical dimensions of artificial intelligence and analytics. Drawing on their experience as educators and researchers in the ESMT Master in Analytics and Artificial Intelligence (MAAI) program, they reflect on the human values behind the data, the implications of algorithmic decision making, and the need for cross-cultural dialogue in designing responsible technologies. The conversation explores how ethical considerations arise throughout the data value chain – from collection to analysis to implementation – and why a technical solution alone is not enough. They also discuss the evolving regulatory landscape, including the EU AI Act, and the importance of embedding ethical frameworks into both education and practice.
Key discussion points
Ethical considerations in analytics and artificial intelligence
The relationship between data neutrality and human interpretation
The role of educational programs in fostering critical, values-based reflection
Differences in regulatory approaches across jurisdictions
Why future development must center people and society
Guest information
Catalina Stefanescu-Cuntze is professor of management science at ESMT Berlin and the faculty lead of the Master in Analytics and Artificial Intelligence (MAAI) program. She joined ESMT in 2009 as an associate professor, becoming the first holder of the Deutsche Post DHL Chair, and has served in multiple leadership roles, including director of research (2010–2012) and dean of faculty (2012–2019). Prior to ESMT, she was assistant professor of decision sciences at London Business School. Catalina holds a PhD and MS in operations research from Cornell University and a BS in mathematics from the University of Bucharest. Her research and teaching focus on analytics and AI, and she is passionate about fostering the growth of this critical domain.
Urs Mueller is associate professor of practice at SDA Bocconi School of Management in Milan and a visiting lecturer at ESMT Berlin. He teaches courses on ethics, responsibility, and societal impact within data and AI systems. He has worked with organizations on business ethics and decision making and leads the "Analytics and Society" course in the MAAI program.
Resources and links
Master in Analytics and Artificial Intelligence (MAAI) program
Catalina Stefanescu-Cuntze – ESMT Berlin faculty profile
Urs Mueller – Personal faculty profile
About Campus 10178
Campus 10178 is Germany's #1 podcast on the business research behind business practice. Brought to you each month by ESMT Berlin, the 45-minute show brings together top scholars, executives, and policymakers to discuss today's hottest topics in leadership, innovation, and analytics. Campus 10178 – where education meets business.
Want to recommend a guest? Email our podcast host at campus10178@esmt.org.
Want to share comments? Join the conversation on:
Facebook: ESMT Berlin's Facebook page
LinkedIn: ESMT Berlin's announcements on LinkedIn

Ogletree Deakins Podcasts
Workplace Strategies Watercooler 2025: The AI-Powered Workplace of Today and Tomorrow

Ogletree Deakins Podcasts

Play Episode Listen Later May 16, 2025 16:55


In this installment of our Workplace Strategies Watercooler 2025 podcast series, Jenn Betts (shareholder, Pittsburgh), Simon McMenemy (partner, London), and Danielle Ochs (shareholder, San Francisco) discuss the evolving landscape of artificial intelligence (AI) in the workplace and provide an update on the global regulatory frameworks governing AI use. Simon, who is co-chair of Ogletree's Cybersecurity and Privacy Practice Group, breaks down the four levels of risk and their associated regulations specified in the EU AI Act, which will take effect in August 2026, and the need for employers to prepare now for the Act's stringent regulations and steep penalties for noncompliance. Jenn and Danielle, who are co-chairs of the Technology Practice Group, discuss the Trump administration's focus on innovation with limited regulation, as well as the likelihood of state-level regulation.

Science 4-Hire
Scaling AI Innovation for Hiring: Lessons from the Frontlines

Science 4-Hire

Play Episode Listen Later May 12, 2025 52:21


Guest: Christine Boyce, Global Innovation Leader at ManpowerGroup/Right Management

“We have to stress-test innovation in the messiness of real-world hiring, not just ideal lab conditions.” - Christine Boyce

In this episode of Psych Tech @ Work, I'm joined by my longtime friend Christine Boyce, Global Innovation Leader at ManpowerGroup/Right Management, to explore how innovation, especially around AI, is reshaping hiring and talent development at scale, and why solving for trust, transparency, and operational realities matters more than ever.

Summary

At the heart of this conversation is the reality that scaling AI innovation in hiring brings massive complexity. While AI offers incredible promise, solving for accuracy, fairness, and operational reality becomes exponentially harder when you're dealing with a large number of unique clients.

Christine Boyce, through her work at ManpowerGroup & Right Management, operates at the intersection of these challenges every day. Unlike internal talent acquisition leaders who focus on one organization's needs, Christine must help innovate across a vast client portfolio. Each client presents different barriers, from data limitations to ethical concerns to regulatory pressures, and innovation must be modular, defensible, and adaptable to succeed. This vantage point gives Christine a unique, big-picture view of how AI adoption really plays out across industries and markets.

We dive into the practical challenges of innovating responsibly: earning trust, scaling solutions across diverse environments, and balancing speed with fairness. Christine's work at ManpowerGroup & Right Management highlights how innovation must be deeply disciplined if it is to achieve true scale and impact.

The Core Challenge: Scaling Accuracy and Fairness

At the heart of using AI for hiring lies the challenge of achieving accuracy and fairness at scale. AI's true value isn't just its ability to make individual decisions; it's in processing vast amounts of data and automating judgment across thousands of candidates. However, scale magnifies both strengths and weaknesses: minor biases can grow into systemic problems, and small inefficiencies can snowball into major failures.

Staffing firms like ManpowerGroup offer critical real-world lessons:

* Scale forces discipline: Every AI tool must be rigorously vetted for fairness, transparency, and defensibility before deployment.
* Real-world variation stresses the system for the better: Tools must flexibly adapt to diverse jobs, industries, and candidate pools. This improves the overall path of innovation and drives learnings across the board.
* Speed must not erode trust: Productivity gains must still respect ethical standards and candidate experience.
* External accountability keeps AI honest: Clients demand transparency, validation, and explainability before adoption.

Real Barriers to AI Adoption: What Clients Are Facing

Despite AI's potential, Christine identifies several persistent hurdles she faces when serving her diverse slate of clients:

* Resistance to Behavior Change: Even demonstrably valuable AI tools often struggle against entrenched workflows and distrust of automation.
* Ethical and Trust Concerns: Clients demand AI systems that are transparent, explainable, and defensible, fearing reputational or regulatory risks.
* Vendor Noise Overload: Saturation by "AI-washed" vendors makes it hard to differentiate true innovation from hype.
* Mismatch Between Hype and Practical Needs: Clients need tools that solve today's operational problems, not just futuristic visions disconnected from reality.
* Fear of Creeping AI Adoption: Organizations worry about AI capabilities being embedded into systems without visibility or intentionality.
* Compliance and Regulation Anxiety: Global and local regulations (like the EU AI Act or pending US laws) create urgency for proven, compliant AI solutions.
* Talent Data Readiness: Without clean, structured internal data, even the best AI solutions struggle to deliver meaningful results.

These challenges aren't isolated; they reveal the broader realities companies must manage when trying to adopt AI responsibly at scale. Ultimately, client concerns shape AI innovation because they are critical to the adoption of these technologies, dictating how staffing firms and vendors must design, validate, and deploy solutions. There's an inherent tension between the drive for scale and the need for trust, fairness, and operational reality.

Christine's experience demonstrates that true innovation in AI for hiring isn't just about introducing new tools; it's about creating resilient, transparent systems that can adapt to real-world complexity. Managing the tension between speed, scale, trust, and fairness is the path to a bright future.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit charleshandler.substack.com

NOW of Work
"What lives in HR dies in HR." with Ritu Mohanka, CEO at VONQ

NOW of Work

Play Episode Listen Later May 10, 2025 55:30


On this week's Now of Work Digital Meetup, Ritu Mohanka joined Jess Von Bank and Jason Averbook to dig into how AI can actually reduce bias in hiring and why we should be moving away from a “matching” model. Ritu shares how VONQ's shift to a scoring system, evaluating candidates across 15 transparent, job-relevant criteria, is enabling skills-based hiring, improving candidate experience, and aligning with the EU AI Act's push for explainable AI.

CXO.fm | Transformation Leader's Podcast
Winning with AI Compliance

CXO.fm | Transformation Leader's Podcast

Play Episode Listen Later May 9, 2025 13:34 Transcription Available


Mastering the EU AI Act is no longer optional—it's a strategic necessity. In this episode, we unpack the critical compliance gaps that separate thriving companies from those falling behind. Learn how to categorise your AI systems, mitigate risk, and turn regulation into a competitive advantage. Perfect for business leaders, consultants, and transformation professionals navigating AI governance. 

The Road to Accountable AI
Kelly Trindel: AI Governance Across the Enterprise? All in a Day's Work

The Road to Accountable AI

Play Episode Listen Later May 8, 2025 36:32


In this episode, Kevin Werbach interviews Kelly Trindel, Head of Responsible AI at Workday. Although Trindel's team is housed within Workday's legal department, it operates as a multidisciplinary group, bringing together legal, policy, data science, and product expertise. This structure helps ensure that responsible AI practices are integrated not just at the compliance level but throughout product development and deployment. She describes formal mechanisms—such as model review boards and cross-functional risk assessments—that embed AI governance into product workflows across the company. The conversation covers how Workday evaluates model risks based on context and potential human impact, especially in sensitive areas like hiring and performance evaluation. Trindel outlines how the company conducts bias testing, maintains documentation, and uses third-party audits to support transparency and trustworthiness. She also discusses how Workday is preparing for emerging regulatory frameworks, including the EU AI Act, and how internal governance systems are designed to be flexible in the face of evolving policy and technological change. Other topics include communicating AI risks to customers, sustaining post-deployment oversight, and building trust through accountability infrastructure. Dr. Kelly Trindel directs Workday's AI governance program. As a pioneer in the responsible AI movement, Kelly has significantly contributed to the field, including testifying before the U.S. Equal Employment Opportunity Commission (EEOC) and later leading an EEOC task force on ethical AI—one of the government's first. With more than 15 years of experience in quantitative science, civil rights, public policy, and AI ethics, Kelly's influence and commitment to responsible AI are instrumental in driving the industry forward and fostering AI solutions that have a positive societal impact.  
Resources: Transcript | Responsible AI: Empowering Innovation with Integrity | Putting Responsible AI into Action (video masterclass)

VinciWorks
AI compliance and ethical practices

VinciWorks

Play Episode Listen Later May 7, 2025 55:09


AI is no longer just hype; it's here, powerful, and already reshaping how organisations operate. But with that power comes legal and ethical responsibility. This episode explores how businesses can harness AI while staying within the law and maintaining public trust. From the EU AI Act to GDPR and the emerging frameworks in the UK and US, we unpack what compliance looks like in an AI-driven world.

Here's what we cover:

* The latest AI compliance frameworks and global regulations
* How to embed ethical principles into your AI systems
* Spotting and mitigating risks like bias and discrimination
* Building an AI governance framework that stands up to scrutiny
* Real-life case studies: what works, what doesn't
* Tools and tech to help your compliance team keep up

If your organisation is using or exploring AI, this is a must-listen.

AI in Education Podcast
Uber Prompts and AI Myths

AI in Education Podcast

Play Episode Listen Later May 1, 2025 42:21


In this episode of the AI in Education Podcast, Ray and Dan return from a short break with a packed roundup of AI developments across education and beyond. They discuss the online launch of the AEIOU interdisciplinary research hub that Dan attended, explore the promise and pitfalls of prompt engineering, including the idea of the “Uber prompt”, and share first impressions of the OpenAI Academy. Ray unpacks misleading headlines about Bill Gates “replacing teachers” with AI and instead spotlights the real message about AI tutors. They also dive into the 2027 AI forecast report, the emerging impact of the EU AI Act, and Microsoft's latest Work Trend Index, which introduces the idea of "agent bosses" in the AI-driven workplace. They round off with Ben Williamson's list of AI fails in education and a startling story of an AI radio presenter nobody realised was fake. Here are all the links so you too can fall down the AI news rabbit hole.

Ropes & Gray Podcasts
R&G Tech Studio: Navigating AI Literacy—Understanding the EU AI Act

Ropes & Gray Podcasts

Play Episode Listen Later Apr 29, 2025 13:07


On this episode of the R&G Tech Studio podcast, Rohan Massey, a leader of Ropes & Gray's data, privacy and cybersecurity practice, is joined by data, privacy and cybersecurity counsel Edward Machin to discuss the AI literacy measures of the EU AI Act and how companies can meet its requirements to ensure their teams are adequately AI literate. The conversation delves into the broad definition of AI systems under the EU AI Act, the importance of AI literacy for providers and deployers of AI systems, and the context-specific nature of AI literacy requirements. They also provide insights into the steps organizations should take to understand their roles under the AI Act, develop training modules, and implement policies and procedures to comply with AI literacy principles. 

The FIT4PRIVACY Podcast - For those who care about privacy
Privacy Enhancing Technologies with Jetro Wils and Punit Bhatia in the FIT4PRIVACY Podcast E137 S06

The FIT4PRIVACY Podcast - For those who care about privacy

Play Episode Listen Later Apr 24, 2025 31:25


How Privacy-Enhancing Technologies (PETs) can safeguard data in an AI-driven world. As organizations increasingly rely on AI, concerns around data privacy, security, and compliance grow. PETs provide a technical safeguard to ensure sensitive information remains protected, even in the most advanced AI applications. With new regulations like the EU AI Act, organizations must adopt privacy-first strategies. PETs are a critical tool to ensure AI transparency, fairness, and trust while maintaining regulatory compliance.

Our guest, Jetro Wils, cybersecurity expert and researcher, breaks down how PETs help organizations de-risk AI adoption while ensuring privacy, compliance, and security. Watch now to discover how PETs can help you build digital trust and secure AI-powered innovations!

KEY CONVERSATION POINTS
00:01:33 How would you define digital trust?
00:02:32 What is Privacy Enhancing Technology?
00:04:21 Why do we need PETs when we have laws and principles?
00:10:19 Kinds of AI risk that can also be mitigated by PETs
00:15:12 How would a PET de-risk an AI adoption situation?

ABOUT GUEST
Jetro Wils is a Cloud & Information Security Officer and Cybersecurity Advisor, dedicated to helping organizations operate securely in the cloud era. With a strong focus on information security and compliance, he enables businesses to reduce risk, strengthen cybersecurity frameworks, and achieve peace of mind. With 18 years of experience in Belgium's tech industry, Jetro has held roles spanning software development, business analysis, product management, and cloud specialization. Since 2016, he has witnessed the rapid evolution of cloud technology and the growing challenge organizations face in securely adopting it. Jetro is a 3x Microsoft Certified Azure Expert and a 2x Microsoft Certified Trainer (2022-2024), conducting 10-20 certified training sessions annually on cloud, AI, and security. He has trained over 100 professionals, including enterprise architects, project managers, and engineers. As a technical reviewer for Packt Publishing, he ensures the accuracy of books on cloud and cybersecurity. Additionally, he hosts the BlueDragon Podcast, where he discusses cloud, AI, and security trends with European decision-makers. Jetro holds a professional Bachelor's Degree in Applied Computer Science (2006) and is currently pursuing a Master's in IT Risk and Cybersecurity Management at Antwerp Management School (2023-2025). His research focuses on de-risking AI adoption by enhancing AI security through Privacy Enhancing Technologies (PETs). He is also a certified NIS 2 Lead Implementer working toward a DORA certification.

ABOUT HOST
Punit Bhatia is one of the leading privacy experts who works independently and has worked with professionals in over 30 countries. Punit works with business and privacy leaders to create an organization culture with high privacy awareness and compliance as a business priority. Selectively, Punit is open to mentoring and coaching professionals. Punit is the author of the books “Be Ready for GDPR”, which was rated the best GDPR book, “AI & Privacy – How to Find Balance”, “Intro To GDPR”, and “Be an Effective DPO”. Punit is a global speaker who has spoken at over 30 global events. Punit is the creator and host of the FIT4PRIVACY Podcast, which has been featured among the top GDPR and privacy podcasts. As a person, Punit is an avid thinker and believes in thinking, believing, and acting in line with one's values to have joy in life. He has developed a philosophy named ‘ABC for joy of life', which he passionately shares. Punit is based in Belgium, the heart of Europe.

RESOURCES
Websites: www.fit4privacy.com, www.punitbhatia.com, https://www.linkedin.com/in/jetrow/
Podcast: https://www.fit4privacy.com/podcast
Blog: https://www.fit4privacy.com/blog
YouTube: http://youtube.com/fit4privacy

Law, disrupted
Re-release: Emerging Trends in AI Regulation

Law, disrupted

Play Episode Listen Later Apr 17, 2025 46:34


John is joined by Courtney Bowman, the Global Director of Privacy and Civil Liberties at Palantir, one of the foremost companies in the world specializing in software platforms for big data analytics. They discuss the emerging trends in AI regulation. Courtney explains the AI Act recently passed by the EU Parliament, including the four levels of risk it assigns to different AI systems and the different regulatory obligations imposed on each risk level, how the Act treats general-purpose AI systems, and how the final Act evolved in response to lobbying by emerging European companies in the AI space. They discuss whether the EU AI Act will become the global standard that international companies default to because the European market is too large to abandon. Courtney also explains recent federal regulatory developments in the U.S., including the AI framework put out by the National Institute of Standards and Technology, the AI Bill of Rights announced by the White House, which calls for voluntary industry compliance with certain principles, and the Executive Order on Safe, Secure and Trustworthy Development and Use of Artificial Intelligence, which requires each department of the federal government to develop its own plan for the use and deployment of AI. They also discuss the wide range of state-level AI legislative initiatives and the leading role California has played in this process.

Finally, they discuss the upcoming issues legislatures will need to address, including translating principles like accountability, fairness, and transparency into concrete best practices; instituting testing, evaluation, and validation methodologies to ensure that AI systems do what they're supposed to do in a reliable and trustworthy way; and addressing concerns around maintaining AI systems over time as the data used by the system continuously evolves until it no longer accurately represents the world it was originally designed to represent.

Podcast Link: Law-disrupted.fm
Host: John B. Quinn
Producer: Alexis Hyde
Music and Editing by: Alexander Rossi
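The four risk levels Courtney describes can be illustrated with a toy lookup. This is a minimal sketch under stated assumptions: the use-case mappings and obligation summaries below are simplified, hypothetical paraphrases of the Act's risk-based structure, not legal classifications.

```python
# Toy illustration of the EU AI Act's four risk tiers.
# The mappings below are simplified, hypothetical examples, not legal text.

RISK_TIERS = {
    "social scoring": "unacceptable",       # among the prohibited practices
    "cv screening for hiring": "high",      # employment is a high-risk context
    "customer service chatbot": "limited",  # transparency obligations apply
    "spam filtering": "minimal",            # no mandatory obligations
}

OBLIGATIONS = {
    "unacceptable": "prohibited outright",
    "high": "conformity assessment, documentation, human oversight",
    "limited": "transparency: users must know they are interacting with AI",
    "minimal": "no mandatory obligations",
}

def classify(use_case: str) -> tuple[str, str]:
    """Return (risk tier, simplified obligation) for a known use case."""
    # Unknown use cases default to "minimal" in this sketch only.
    tier = RISK_TIERS.get(use_case.lower(), "minimal")
    return tier, OBLIGATIONS[tier]
```

For example, `classify("CV screening for hiring")` returns the high-risk tier and its summarized obligations. A real assessment depends on deployment context and the Act's annexes, not a keyword lookup.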

Scouting for Growth
Areiel Wolanow On Unleashing AI, Quantum, and Emerging Tech

Scouting for Growth

Play Episode Listen Later Apr 16, 2025 49:08


On this episode of the Scouting For Growth podcast, Sabine meets Areiel Wolanow, the managing director of FinServ Experts, who discusses his journey from IBM to founding FinServ Experts, emphasising the importance of focusing on business models enabled by technology rather than the technology itself. Areiel delves into the challenges and opportunities presented by artificial intelligence, responsible AI practices, and the implications of quantum computing for data security, highlighting the need for organisations to adapt their approaches to digital transformation and advocating for a migration strategy over traditional transformation methods.

KEY TAKEAWAYS

* Emerging tech should be leveraged to create new business models rather than just re-engineering existing ones. Understanding the business implications of technology is crucial for delivering value.
* When harnessing artificial intelligence, it's essential to identify the real underlying problems within an organisation, assess its maturity, and build self-awareness before applying maturity models and gap analyses.
* The EU AI Act serves as a comprehensive guideline for responsible AI use, offering risk categories and controls that can benefit companies outside the EU by providing a framework for ethical AI practices without the burden of compliance.
* Organisations should prepare for the future of quantum computing by ensuring their data is protected against potential vulnerabilities. This involves adopting quantum-resilient algorithms and planning for the transition well in advance.
* Leaders should place significant responsibility on younger team members who are more familiar with emerging technologies. Providing them with autonomy and support can lead to innovative solutions and successful business outcomes.

BEST MOMENTS

'We focus not on the technology itself, but on the business models the tech enables.'
'The first thing you have to do... is to say, OK, is the proximate cause the real problem?'
'The best AI regulations out there is the EU AI Act... it actually benefits AI companies outside the EU more than it benefits within.'
'Digital transformations have two things in common. One is they're expensive, and two is they always fail.'

ABOUT THE GUEST

Areiel Wolanow is the managing director of FinServ Experts. He is an experienced business leader with over 25 years of experience in business transformation solutioning, sales, and execution. He served as one of IBM's key thought leaders in blockchain, machine learning, and financial inclusion. Areiel has deep experience leading large, globally distributed teams; he has led programs of over 100 people through the full delivery life cycle and has managed budgets in the tens of millions of dollars. In addition to his delivery experience, Areiel also serves as a senior advisor on blockchain, machine learning, and technology adoption; he has worked with central banks and financial regulators around the world, and is currently serving as the insurance industry advisor for the UK Parliament's working group on blockchain.

ABOUT THE HOST

Sabine is a corporate strategist turned entrepreneur. She is the CEO and Managing Partner of Alchemy Crew, a venture lab that accelerates the curation, validation, and commercialization of new tech business models. Sabine is renowned within the insurance sector for building some of the most renowned tech startup accelerators around the world, working with over 30 corporate insurers and accelerating over 100 startup ventures. Sabine is the co-editor of the bestseller The INSURTECH Book, a top 50 Women in Tech, a FinTech and InsurTech influencer, an investor, and a multi-award winner.

Artificial Intelligence in Industry with Daniel Faggella
Global AI Regulations and Their Impact on Industry Leaders - with Michael Berger of Munich Re

Artificial Intelligence in Industry with Daniel Faggella

Play Episode Listen Later Apr 15, 2025 21:01


Today's guest is Michael Berger, Head of Insure AI at Munich Re. Michael returns to the Emerj podcast platform to discuss the impact of legislation such as the EU AI Act on the insurance industry and broader AI adoption. Our conversation covers how regulatory approaches differ between the United States and the European Union, highlighting the risk-based framework of the EU AI Act and the litigation-driven environment in the U.S. Michael explores key legal precedents, including AI liability cases, and what they signal for business leaders implementing AI-driven solutions. If you've enjoyed or benefited from some of the insights of this episode, consider leaving us a five-star review on Apple Podcasts, and let us know what you learned, found helpful, or liked most about this show!

AI Tool Report Live
Biden vs Trump: How U.S. AI Policy Is Shifting

AI Tool Report Live

Play Episode Listen Later Apr 15, 2025 31:40


In this episode of The AI Report, Christine Walker joins Arturo Ferreira to launch a new series on the legal side of artificial intelligence. Christine is a practicing attorney helping businesses understand how to navigate AI risk, compliance, and governance in a rapidly changing policy environment.

They explore how the shift from the Biden to the Trump administration is changing the tone on AI regulation, what the EU AI Act means for U.S. companies, and why many of the legal frameworks we need for AI already exist. Christine breaks down how lawyers apply traditional legal principles to today's AI challenges, from intellectual property and employment law to bias and defamation.

Also in this episode:
* The risk of waiting for regulation to catch up
* How companies can conduct internal AI audits
* What courts are already doing with AI tools
* Why even lawyers are still figuring this out in real time
* What businesses should be doing now to reduce liability

Christine offers a grounded, practical view of what it means to use AI responsibly, even when the law seems unclear.

Subscribe to The AI Report: theaireport.ai
Join our community: skool.com/the-ai-report-community/about

Chapters:
(00:00) The Legal Risks of AI and Why It's Still a Black Box
(01:13) Christine Walker's Background in Law and Tech
(03:07) Biden vs Trump: Competing AI Governance Philosophies
(04:53) What Governance Means and Why It Matters
(06:26) Comparing the EU AI Act with the U.S. Legal Vacuum
(08:14) Case Law on IP, Bias, and Discrimination
(10:50) Why the Fear Around AI May Be Misplaced
(13:15) Legal Precedents: What Tech History Teaches Us
(16:06) The GOP's AI Stance and Regulatory Philosophy
(18:35) Most AI Use Cases Already Fall Under Existing Law
(21:11) Why Precedents Take So Long, and What That Means
(23:08) Will AI Accelerate the Legal System?
(25:24) AI + Lawyers: A Collaborative Model
(27:15) Hallucinations, Case Law, and Legal Responsibility
(28:36) Building Policy Now to Avoid Legal Pain Later
(30:59) Christine's Final Advice for Businesses and Builders

Science 4-Hire
Responsible AI In 2025 and Beyond – Three pillars of progress

Science 4-Hire

Play Episode Listen Later Apr 15, 2025 54:44


"Part of putting an AI strategy together is understanding the limitations and where unintended consequences could occur, which is why you need diversity of thought within committees created to guide AI governance and ethics." – Bob PulverMy guest for this episode is my friend in ethical/responsible AI, Bob Pulver, the founder of CognitivePath.io and host of the podcast "Elevate Your AIQ." Bob specializes in helping organizations navigate the complexities of responsible AI, from strategic adoption to effective governance practices.  Bob was my guest about a year ago and in this episode he drops back in to discuss what has changed in the faced paced world of AI across three pillars of responsible AI usage.  * Human-Centric AI * AI Adoption and Readiness * AI Regulation and GovernanceThe past year's progress explained through three pillars that are shaping ethical AI:These are the themes that we explore in our conversation and our thoughts on what has changed/evolved in the past year.1. Human-Centric AIChange from Last Year:* Shift from compliance-driven AI towards a more holistic, human-focused perspective, emphasizing AI's potential to enhance human capabilities and fairness.Reasons for Change:* Increasing comfort level with AI and experience with the benefits that it brings to our work* Continued exploration and development of low stakes, low friction use cases* AI continues to be seen as a partner and magnifier of human capabilitiesWhat to Expect in the Next Year:* Increased experience with human machine partnerships* Increased opportunities to build superpowers* Increased adoption of human centric tools by employers2. 
AI Adoption and ReadinessChange from Last Year:* Organizations have moved from cautious, fragmented adoption to structured, strategic readiness and literacy initiatives.* Significant growth in AI educational resources and adoption within teams, rather than just individuals.Reasons for Change:* Improved understanding of AI's benefits and limitations, reducing fears and resistance.* Availability of targeted AI literacy programs, promoting organization-wide AI understanding and capability building.What to Expect in the Next Year:* More systematic frameworks for AI adoption across entire organizations.* Increased demand for formal AI proficiency assessments to ensure responsible and effective usage.3. AI Regulation and GovernanceChange from Last Year:* Transition from broad discussions about potential regulations towards concrete legislative actions, particularly at state and international levels (e.g., EU AI Act, California laws).* Momentum to hold vendors of AI increasingly accountable for ethical AI use.Reasons for Change:* Growing awareness of risks associated with unchecked AI deployment.* Increased push to stay on the right side of AI via legislative activity at state and global levels addressing transparency, accountability, and fairness.What to Expect in the Next Year:* Implementation of stricter AI audits and compliance standards.* Clearer responsibilities for vendors and organizations regarding ethical AI practices.* Finally some concrete standards that will require fundamental changes in oversight and create messy situations.Practical Takeaways:What should I/we be doing to move the ball fwd and realize AI's full potential while limiting collateral damage?Prioritize Human-Centric AI Design* Define Clear Use Cases: Ensure AI is solving a genuine human-centered problem rather than just introducing technology for technology's sake.* Promote Transparency and Trust: Clearly communicate how and why AI is being used, ensuring it enhances rather than replaces human 
judgment and involvement.Build Robust AI Literacy and Education Programs* Develop Organizational AI Literacy: Implement structured training initiatives that educate employees about fundamental AI concepts, the practical implications of AI use, and ethical considerations.* Create Role-Specific Training: Provide tailored AI skill-building programs based on roles and responsibilities, moving beyond individual productivity to team-based effectiveness.Strengthen AI Governance and Oversight* Adopt Proactive Compliance Practices: Align internal policies with rigorous standards such as the EU AI Act to preemptively prepare for emerging local and global legislation.* Vendor Accountability: Develop clear guidelines and rigorous vetting processes for vendors to ensure transparency and responsible use, preparing your organization for upcoming regulatory audits.Monitor AI Effectiveness and Impact* Continuous Monitoring: Shift from periodic audits to continuous monitoring of AI tools to ensure fairness, transparency, and functionality.* Evaluate Human Impact Regularly: Regularly assess the human impact of AI tools on employee experience, fairness in decision-making, and organizational trust.Email Bob- bob@cognitivepath.io Listen to Bob's awesome podcast - Elevate you AIQ This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit charleshandler.substack.com

The Tech Blog Writer Podcast
3241: Transparency, Trust, and AI: Atlassian's Legal Framework in Action

The Tech Blog Writer Podcast

Play Episode Listen Later Apr 14, 2025 23:38


At Team '25 in Anaheim, I had the unique opportunity to sit down with Stan Shepherd, General Counsel at Atlassian, for a conversation that pulled back the curtain on how legal and technology are intersecting in the age of AI. Stan's journey from journalism to law to shaping legal operations at one of the world's most forward-thinking companies is as fascinating as it is relevant. What emerged from our discussion is a clear signal that legal teams are no longer trailing behind innovation—they're often at the front of it. Stan shared how Atlassian's legal function achieved 85 percent daily usage of AI tools, including the company's in-house assistant, Rovo. This is remarkable when compared to the industry norm, where legal teams typically lag in AI adoption. Instead of resisting change, Stan's team leaned into it, focusing on automation for repetitive tasks while reserving high-value thinking for their legal experts. We explore Atlassian's responsible tech framework, their principles around transparency and accountability, and how these inform product development from day one. Stan also walked me through how Atlassian is navigating the emerging global regulatory landscape, from the EU AI Act to evolving compliance in the US. His insights on embedding legal counsel directly into product teams, rather than operating on the sidelines, reveal a model of collaboration that turns risk management into a growth enabler. For legal professionals, compliance leaders, and tech decision-makers wrestling with how to integrate AI responsibly, this episode offers a grounded, real-world blueprint. It's not just about mitigating risk—it's about building trust, preserving human judgment, and future-proofing your operations. If you're wondering what responsible AI adoption looks like at scale, you'll want to hear this one. So how are you preparing your legal and compliance strategy for the AI-powered workplace? Let's keep the conversation going.

Between Two COO's with Michael Koenig
AI and Privacy: Navigating the EU's New AI Act & the Impact on US Companies with Flick Fisher

Between Two COO's with Michael Koenig

Play Episode Listen Later Apr 1, 2025 36:43


Try Fellow's AI Meeting Copilot - 90 days FREE - fellow.app/coo

AI and Privacy: Navigating the EU's New AI Act with Flick Fisher. In this episode of Between Two COOs, host Michael Koenig welcomes back Flick Fisher, an expert on EU privacy law. They dive deep into the newly enacted EU Artificial Intelligence Act and its implications for businesses globally. They discuss compliance challenges, prohibited AI practices, and the potential geopolitical impact of AI regulation. For leaders and operators navigating AI in business, this episode provides crucial insights into managing AI technology within regulatory frameworks.

00:00 Introduction to Fellow and AI Meeting Assistant
01:01 Introduction to Between Two COOs Episode
02:08 What is the EU's AI Act?
03:42 Prohibited AI Practices in the EU
07:46 Enforcement and Compliance Challenges
12:18 US vs EU: Regulatory Landscape
29:58 Impact on Companies and Consumers
31:55 Future of AI Regulation

Between Two COOs - https://betweentwocoos.com
Michael Koenig on LinkedIn
Flick Fisher on LinkedIn
Flick on Data Privacy and GDPR on Between Two COOs
More on Flick's take on the EU's AI Act