"The EU AI Act is the best AI training that mid-sized companies would never have booked for themselves…" says my guest today, Frank Stadler, Managing Director of KIVOREX, in the current #KIundTECH podcast interview. The EU AI Act is changing how companies work with artificial intelligence. Holger Winkler and Frank Stadler therefore discuss whether the EU AI Act makes AI training a new obligation for companies, and why this regulation is more than bureaucracy. The interview shows how mid-sized businesses can use the EU AI Act as an opportunity to build AI competence, minimize risks, and become more competitive in the long run.

Why shouldn't you miss this interview?
- You will learn which concrete obligations the EU AI Act brings for companies
- You will learn how regulation can become a competitive advantage
- You will understand how AI competence is becoming a success factor
- You will gain insight into how companies can use AI safely and profitably
- You will learn why now is the right time to engage with the topic

Takeaways from the interview:
- The EU AI Act forces companies to actively engage with AI
- AI regulation can foster innovation rather than slow it down
- Training is not an administrative exercise; it creates measurable value
- Early preparation secures competitive advantages
- The risks lie less in the regulation than in ignorance of it
- Clear processes are crucial for safe AI use
- AI competence is becoming a strategic success factor
⚖️ The EU AI Act's Biggest Hurdle: Regulating AI That's Already in Use

Matthew Blakemore, CEO at AI Caramba!, highlights a pressing challenge with the EU AI Act. While the framework does a strong job of classifying AI projects into risk categories, it faces a dilemma with tools that are already in widespread public use. Many existing systems, some of which likely fall into high-risk categories, have already been trained and adopted by millions. The question becomes: should they be withdrawn, despite their popularity, or adapted under new rules?

Listen to the full podcast now: https://bit.ly/40GZ9bw

#AI #ArtificialIntelligence #AIRegulation #TechPolicy #AICompliance #AIInnovation #AITransformation
Kevin Werbach speaks with Caroline Louveaux, Chief Privacy, AI, and Data Responsibility Officer at Mastercard, about what it means to make trust mission critical in the age of artificial intelligence. Caroline shares how Mastercard built its AI governance program long before the current AI boom, grounding it in the company's Data and Technology Responsibility Principles. She explains how privacy-by-design practices evolved into a single global AI governance framework aligned with the EU AI Act, the NIST AI Risk Management Framework, and related standards. The conversation explores how Mastercard balances innovation speed with risk management, automates low-risk assessments, and maintains executive oversight through its AI Governance Council. Caroline also discusses the company's work on agentic commerce, where autonomous AI agents can initiate payments, and why trust, certification, and transparency are essential for such systems to succeed. Caroline unpacks what it takes for a global organization to innovate responsibly, from cross-functional governance and "tone from the top" to partnerships like the Data & Trust Alliance and efforts to harmonize global standards. She emphasizes that responsible AI is a shared responsibility and that companies that can "innovate fast, at scale, but also do so responsibly" will be the ones that thrive.

Caroline Louveaux leads Mastercard's global privacy and data responsibility strategy. She has been instrumental in building Mastercard's AI governance framework and shaping global policy discussions on data and technology. She serves on the board of the International Association of Privacy Professionals (IAPP), the WEF Task Force on Data Intermediaries, the ENISA Working Group on AI Cybersecurity, and the IEEE AI Systems Risk and Impact Executive Committee, among other activities.
Transcript
How Mastercard Uses AI Strategically: A Case Study (Forbes, 2024)
Lessons From a Pioneer: Mastercard's Experience of AI Governance (IMD, 2023)
As AI Agents Gain Autonomy, Trust Becomes the New Currency. Mastercard Wants to Power Both. (Business Insider, July 2025)
This episode is sponsored by Deel. Ensure fair, consistent reviews with Deel's calibration template. Deel's free Performance Calibration Template helps HR teams and managers run more equitable, structured reviews. Use it to align evaluations with business goals, reduce bias in ratings, and ensure every performance conversation is fair, consistent, and grounded in shared standards. Download now: www.deel.com/nickday

In this episode of the HR L&D Podcast, host Nick Day explores how HR can use AI to become more strategic and more human. The conversation covers where AI truly fits in HR, what changes with the EU AI Act, and how leaders can turn time saved on admin into culture, capability, and impact.

You will hear practical frameworks, including a simple "4Ps plus 2" model for HR AI, human-in-the-loop hiring, guardrails to reduce hallucinations, and a clear view on when AI must be 100 percent accurate. The discussion also outlines a modern HR operating model with always-on self-service, plus policy steps for ethical, explainable AI.

Whether you are an HR leader, CEO, or L&D professional, this conversation will help you move from pilots to scaled adoption and build an AI-ready organization. Expect actionable steps to improve employee experience, strengthen compliance, and unlock productivity and performance across your teams.

100X Book on Amazon: https://www.amazon.com/dp/B0D41BP5XT
Nick Day's LinkedIn: https://www.linkedin.com/in/nickday/
Find your ideal candidate with our job vacancy system: https://jgarecruitment.ck.page/919cf6b9ea
Sign up to the HR L&D Newsletter: https://jgarecruitment.ck.page/23e7b153e7

00:00 Intro & Preview
02:25 What HR Is For
03:54 Why HR + AI Now
06:19 AI as Augmentation
07:43 HR AI Framework & Use Cases
10:14 Guardrails: Hallucinations & Accuracy
12:45 Guardrails: Bias & Human in the Loop
16:58 Recruiting with AI
21:01 EU AI Act for HR
25:16 HR Team of the Future
25:56 New HR Operating Model
31:54 Tools for Culture Change
35:35 Rethink Processes
It was too obvious not to do it. Let AI summarise the Department of Education's guidance. Sure, while I'm at it, I may as well use AI to create the show notes:

Explore the safe, ethical, and responsible use of AI for primary educators and school leaders. We share practical examples, such as how a second class teacher can use Generative AI (GenAI) to create curriculum-aligned math activities, or how a fifth class teacher uses GenAI for visual support in Irish lessons. Learn strategies for integrating AI, including the essential 4P framework (Purpose, Planning, Policies, Practice). Remember to maintain human oversight and review all AI outputs for accuracy and bias. Resources like the DALI4US project support data literacy for primary teachers.
As financial services accelerate their digital transformations, AI is reshaping how institutions identify, assess, and manage risk. But with that transformation comes an equally complex web of systemic risks, regulatory challenges, and questions about accountability.

In this episode of the AI in Business podcast, host Matthew DeMello, Head of Content at Emerj, speaks with Miriam Fernandez, Director in the Analytical Innovation Team specializing in AI research at S&P Global Ratings, and Sudeep Kesh, Chief Innovation Officer at S&P Global Ratings. Together, they unpack how generative AI, agentic systems, and regulatory oversight are evolving within one of the most interconnected sectors of the global economy. The conversation explores how AI is amplifying both efficiency and exposure across financial ecosystems, from the promise of multimodal data integration in risk management to the growing challenge of concentration and contagion risks in increasingly digital markets. Miriam and Sudeep discuss how regulators are responding through risk-based frameworks such as the EU AI Act and DORA, and how the private sector is taking a larger role in ensuring transparency, compliance, and trust.

Want to share your AI adoption story with executive peers? Click emerj.com/e2 for more information and to be a potential future guest on Emerj's flagship ‘AI in Business’ podcast! If you've enjoyed or benefited from some of the insights of this episode, consider leaving us a five-star review on Apple Podcasts, and let us know what you learned, found helpful, or liked most about this show!
AI Hub Europe: Why We Are Falling Behind in the Global Race

While the world invests in the future of artificial intelligence at breathtaking speed, Europe looks like a spectator on the sidelines. In this podcast episode, Torsten speaks openly about the dramatic gap between global investment behavior and the reality in Germany and Europe, and what that means for entrepreneurs, technology developers, and all of us.

Torsten Körting on LinkedIn: https://www.linkedin.com/in/torstenkoerting/

Europe's investment weakness: a systemic problem
Compared to the USA or China, European investment in AI looks downright puny. While signing fees of up to 100 million dollars are paid for top talent overseas, here we celebrate investment announcements of 500 million euros, which barely register by comparison. The real problem: it is not only capital that is lacking, but also consistency, speed, and appetite for risk. Anyone who wants to keep up globally must do more than symbolic politics.

Technological dependence: we consume instead of shaping
Whether DeepSeek, OpenAI, or Perplexity, we rely almost exclusively on technologies from China or the USA. Europe lacks competitive, scalable AI solutions that can hold their own internationally. Even when innovative startups like N8N or other European players get going, the exit strategy is often a sale abroad. In doing so, we deprive ourselves of the chance to become technologically independent and build our own digital infrastructure.

Regulation instead of innovation: the EU AI Act as a brake?
With the EU AI Act, Brussels is betting on regulation: well intentioned, but often out of touch with reality. Instead of creating room for innovation, a climate of caution is emerging.
Even the experts who sit on the Commission or advise the German federal government run into massive hurdles and a sluggish administrative apparatus. Meanwhile, German authorities are still working with tools like Skype, a system that has long since been discontinued. Under such conditions, little of "AI First" can be felt in public offices and in healthcare.

Conclusion: no lead without courage
If Europe wants to catch up technologically, it needs more than political declarations of intent. It needs entrepreneurial courage, massive investment, a real talent magnet, and above all the firm will to shape things itself instead of constantly playing catch-up. Anyone who truly wants to use AI must be ready to take responsibility: technologically, economically, and socially.

More from the Koertings ...

The KI-Café ... every Wednesday (>350 participants) from 08:30 to 10:00 ... online via Zoom ... free of charge, and not for nothing. Every Wednesday at 08:30 the KI-Café opens its online doors ... we solve AI use cases live on stage ... moderate expert panels on specific topics (e.g. AI in recruiting ... AI in quality assurance ... AI in project management ... and much more) ... put new developments in the AI world into context and give an outlook ... invite experts on special topics ... and sometimes go deep and work through particular areas in concrete detail ... all for your progress. Sign up for free ... www.koerting-institute.com/ki-cafe/

"Mit jedem Prompt ein WOW!" ... for the self-employed and entrepreneurs
A clear guide for entrepreneurs, the self-employed, and decision-makers who want not only to understand artificial intelligence but to use it effectively. This book shows you how to identify relevant AI use cases and use AI as a real sparring partner to make them reality. Practical, with real examples, and fully implementation-oriented.
The book is a gift; only shipping costs of €9.95 apply. Perfect for beginners and advanced users who want to realize their potential with AI. The book in your mailbox ... https://koerting-institute.com/shop/buch-mit-jedem-prompt-ein-wow/

The KI-Lounge ... our community for getting started with AI (>2,800 members)
The KI-Lounge is a community for everyone who wants to learn more about generative AI and put it to use. Members receive exclusive monthly AI updates, expert interviews, talks from the KI-Speaker-Slam, KI-Café recordings, and a three-hour ChatGPT course. Exchange ideas with more than 2,800 AI enthusiasts, ask questions, and get started. Initiated by Torsten & Birgit Koerting, the KI-Lounge offers orientation and inspiration for entering the AI revolution. The exchange happens here ... www.koerting-institute.com/ki-lounge/

Start working with us 1:1
If you want to work with us directly and integrate AI into your business, book an appointment for a personal conversation. Together we will find answers to your questions and work out how we can support you. Click here to book an appointment and get your questions answered. Book your appointment with us now ... www.koerting-institute.com/termin/

More inspiration, Netflix style ...
If you are looking for more exciting impulses for your self-employment, head over to our impulse page and let the many exciting impulses sink in. Pure inspiration ... www.koerting-institute.com/impulse/

The Koertings in your ears ...
If you enjoyed this podcast episode, listen to more informative and exciting episodes ... you can find over 440 episodes here ... www.koerting-institute.com/podcast/

We look forward to accompanying you on your journey!
In this episode of the Fit4Privacy Podcast, host Punit Bhatia explores the EU AI Act: why it matters, what it requires, and how it impacts your business, even outside the EU. You will also hear about the Act's risk-based approach, the four categories of AI systems (unacceptable, high, limited, and minimal risk), and the penalties for non-compliance, which can be as high as 7% of global turnover or €35 million.

Just like the GDPR, the EU AI Act has global reach, so if your company offers AI-based products or services to EU citizens, it applies to you. Listen in to understand the requirements and discover how to turn AI compliance into an opportunity for building trust, demonstrating responsibility, and staying ahead of the competition.

KEY CONVERSATION POINTS
00:00:00 Introduction to the EU AI Act
00:01:22 Why the EU AI Act Matters to Your Business
00:03:40 Risk Categories Under the EU AI Act
00:04:52 Key Timelines and Provisions
00:06:07 Compliance Requirements
00:07:09 Leveraging the EU AI Act for Competitive Advantage
00:08:38 Conclusion and Contact Information

ABOUT HOST
Punit Bhatia is one of the leading privacy experts, working independently, and has worked with professionals in over 30 countries. Punit works with business and privacy leaders to create an organizational culture with high privacy awareness and compliance as a business priority. Selectively, Punit is open to mentoring and coaching professionals. Punit is the author of the books “Be Ready for GDPR” (rated the best GDPR book), “AI & Privacy – How to Find Balance”, “Intro To GDPR”, and “Be an Effective DPO”. Punit is a global speaker who has spoken at over 30 global events. Punit is the creator and host of the FIT4PRIVACY Podcast, which has been featured among the top GDPR and privacy podcasts. As a person, Punit is an avid thinker and believes in thinking, believing, and acting in line with one's values to have joy in life.
He has developed the philosophy named ‘ABC for joy of life’, which he passionately shares. Punit is based out of Belgium, the heart of Europe.

RESOURCES
Websites: www.fit4privacy.com, www.punitbhatia.com
Podcast: https://www.fit4privacy.com/podcast
Blog: https://www.fit4privacy.com/blog
YouTube: http://youtube.com/fit4privacy
“The Future of Life Institute has been working on AI governance-related issues for the last decade. We're already over 10 years old, and our mission is to steer very powerful technology away from large-scale harm and toward very beneficial outcomes. You could think about any kind of extreme risks from AI, all the way to existential or extinction risk, the worst kinds of risks and the benefits. You can think about any kind of large benefits that humans could achieve from technology, all the way through to utopia, right? Utopia is the biggest benefit you can get from technology. Historically, that has meant we have focused on climate change, for example, and the impact of climate change. We have also focused on bio-related risks, pandemics and nuclear security issues. If things go well, we will be able to avoid these really bad downsides in terms of existential risk, extinction risks, mass surveillance, and really disturbing futures. We can avoid that very harmful side of AI or technology, and we can achieve some of the benefits.”

Today, we take a closer look at the future of artificial intelligence and the policies that determine its place in our societies. Risto Uuk is Head of EU Policy and Research at the Future of Life Institute in Brussels, and a philosopher and researcher at KU Leuven, where he studies the systemic risks posed by AI. He has worked with the World Economic Forum, the European Commission, and leading thinkers like Stuart Russell and Daniel Susskind. He also runs one of the most widely read newsletters on the EU AI Act. As this technology is transforming economies, politics, and human life itself, we'll talk about the promises and dangers of AI, how Europe is trying to regulate it, and what it means to build safeguards for a technology that may be more powerful than anything we've seen before.

Episode Website: www.creativeprocess.info/pod
Instagram: @creativeprocesspodcast
Matthew Blakemore, CEO at AI Caramba!, reflects on the recent debates around AI regulation with a healthy dose of scepticism. He highlights how companies like OpenAI, having already secured a competitive advantage through their vast training data, may now be pushing for regulation not purely from an ethical standpoint, but as a way to protect their lead and limit competition.

He acknowledges the importance of frameworks like the upcoming EU AI Act, which will play a critical role in shaping how AI is built and deployed in the future. The real test is ensuring safety and accountability without stifling innovation and fair competition.

Listen to the full podcast now: https://bit.ly/40GZ9bw

#AI #EUAIAct #MatthewBlakemore #TechRegulation #ArtificialIntelligence #Outgrow
Lasse Lung is the founder and managing director of Qualimero, a startup that supports companies with AI employees: digital teammates that independently take over processes in sales, recruiting, and service.

In conversation with host Christian, Lasse explains why AI employees are more than dumb chatbots, how they can be put to productive use in just two weeks, and which measurable results companies are already achieving, from rising shopping-cart values to more qualified leads to more efficient recruiting processes.

He also reveals which factors shape the successful use of AI employees, from clear use cases to data quality to transparency and acceptance within the team. And how other B2B marketing disciplines such as SEO and SEA can benefit as well.
Together with entrepreneur/investor Anco Scholten ter Horst and voice actor/AI builder Barnier Geerling, we dive into the question: where does the human voice still have an edge, and where is AI ahead? Barnier looks back on hundreds of roles (from Baymax to Muck from Bob de Bouwer), on building his own TTS models, and on the new plan: not replacing actors, but "unlocking" voice where humans cannot go. We broaden the scope: deep tech vs. wrappers, on-device models, ethics around training data, privacy-first companions, and the European ambition. Anco shares lessons learned about capital, pace, and why you sometimes first have to earn traction with smart showcase apps. From NPC voices in games to a voice-only journaling companion that runs locally: faster, more personal, and with control over your own data.

Shownotes
A10n.io – privacy-first voice companion & TTS technology
Freedom Internet – ISP that Anco co-founded
The Good Cloud – Anco's European cloud storage
14 Voices – Barnier Geerling's agency
Whisper – speech-to-text model (ASR)
ElevenLabs – commercial TTS (discussion about data/ethics)
EU AI Act – European AI regulation
ASML – chip-machine titan, mentioned in an investment context
Kobo – e-readers; idea: on-device TTS
Big Hero 6 – Baymax (Dutch voice by Barnier)
Bob de Bouwer – series Barnier can be heard in
Lovable – "vibecoding" tool mentioned as a market example
Cursor – AI code editor compared with Lovable
Lex Fridman × Zelensky – multilingual AI audio release
mnot.nl – more about the show & community (Slack, Vriend van de Show)

Timestamps
0:00:00 AI and voice actors, companion voice & replacement?
0:02:14 Barnier's career and first studio years
0:06:17 Bulk vs. craft: voice-over and voice acting
0:10:20 Market, agencies, and 14 Voices
0:12:51 Home studios, growth, and the impact of AI tools
0:17:22 Origin story: DASIS → expressive, controllable TTS
0:21:30 From DASIS to Eten: voice-only journaling companion
0:31:08 Games & NPCs: where AI voice does and doesn't fit
0:34:31 On-device models (phone/e-reader), latency & streaming
0:44:57 Model size, parameters, and energy consumption
0:47:41 Business model: freemium companion & showcases
0:56:10 Augmentation instead of replacement; the name & vision of Eten
0:58:27 Privacy-first: local and open-source verifiable
1:13:24 Is there an AI bubble? Hype vs. real value
1:19:37 Sign-off, links, and contact

#AI #TextToSpeech #Stemacteur #VoiceOver #CompanionAI #Journaling #Gaming #NPC #Privacy #OpenSource #DeepTech #Europa #Startups #FreedomInternet #TheGoodCloud #EUAIAct #ASML #Whisper #ElevenLabs #MetNerdsOmTafel

See omnystudio.com/listener for privacy information.
The hype cycle is over; the results era is here. We unpack how finance moves from shiny demos to measurable outcomes by pairing agentic AI with the enterprise plumbing that actually makes change stick: orchestration, shared standards, and rigorous governance. Along the way, we confront the 93% project failure narrative and show why a simple time filter—necessary, automatable, delegable—cuts through noise and aligns work with true ROI.We walk through real examples that shrink lending cycle times from months to minutes, explain why outcome-based pricing beats per-seat models, and clarify the critical differences between reactive generative tools and proactive, goal-driven agents. Then we zoom out to the architecture that turns outputs into outcomes, from orchestration as the command layer to BIAN's service domains and business object model that finally give banks a common language. Security and resilience get equal weight as we step into the AI-versus-AI battlefield and map how DORA and the EU AI Act overlap, including provider vs deployer responsibilities and incident reporting that keeps operations and model risk in check.There's more: the accessibility mandate reshapes what “done” means for digital onboarding and automated flows; wealth tech emerges as the next growth engine with a massive generational transfer on the horizon; and social impact cases show AI driving nonprofit income stability and financial literacy at scale. Through it all, we return to the human edge—professionals who translate automation into trust, navigate ambiguity, and make better decisions with cleaner data and clearer guardrails.If this conversation helps you see where AI creates real value in your stack, support the show: follow, share with a teammate, and leave a quick review with your biggest takeaway.Thank you for tuning into our podcast about global trends in the FinTech industry.Check out our podcast channel.Learn more about The Connector. 
Follow us on LinkedIn.

Cheers,
Koen Vanderhoydonk
koen.vanderhoydonk@jointheconnector.com

#FinTech #RegTech #Scaleup #WealthTech
Art Kleiner, co-author of "The AI Dilemma" and Principal at Kleiner Powell International, discusses the complexities of AI governance, trust, and accountability in the context of modern technology. He emphasizes the importance of being intentional about risk when deploying AI products, particularly large language models, which can inadvertently perpetuate biases and misinformation. Kleiner shares a compelling example of a Chinese AI system that failed to generate accurate images based on user requests, illustrating the inherent biases present in AI systems. He stresses the need for organizations to be aware of the human effects and unintended consequences of AI deployment.For managed service providers (MSPs) and IT leaders, Kleiner highlights the significance of compliance and oversight in the development process of AI systems. He references the EU AI Act, which mandates a "human in the loop" approach to ensure accountability and effectiveness in AI applications. This requirement encourages organizations to conduct thorough testing and evaluation of AI systems in real-world contexts, ensuring that they meet the needs of users and mitigate potential risks. Kleiner notes that small businesses, in particular, must be vigilant about the implications of AI on their operations and customer interactions.The conversation also delves into the challenges of achieving measurable ROI from AI projects, with studies indicating that a significant percentage of these initiatives fail to deliver tangible business value. Kleiner advocates for scenario planning as a tool to navigate the uncertainties of AI implementation, encouraging organizations to explore various future scenarios and their potential impacts. 
By understanding the different ways AI can affect productivity, business growth, and risk management, companies can better position themselves to leverage AI effectively.

Finally, Kleiner urges leaders to prepare for multiple AI futures by staying informed about emerging technologies and their implications for their businesses. He emphasizes the need for organizations to build trust with their customers by using AI responsibly and transparently. By focusing on creating value and avoiding the pitfalls of "enshittification," businesses can foster stronger relationships with their clients and enhance their overall service offerings. The discussion underscores the critical role of human insight and ethical considerations in the evolving landscape of AI technology.

All our Sponsors: https://businessof.tech/sponsors/
Do you want the show on your podcast app or the written versions of the stories? Subscribe to the Business of Tech: https://www.businessof.tech/subscribe/
Looking for a link from the stories? The entire script of the show, with links to articles, is posted in each story on https://www.businessof.tech/
Support the show on Patreon: https://patreon.com/mspradio/
Want to be a guest on Business of Tech: Daily 10-Minute IT Services Insights? Send Dave Sobel a message on PodMatch here: https://www.podmatch.com/hostdetailpreview/businessoftech
Want our stuff? Cool merch? Wear "Why Do We Care?" - Visit https://mspradio.myspreadshop.com
Follow us on:
LinkedIn: https://www.linkedin.com/company/28908079/
YouTube: https://youtube.com/mspradio/
Facebook: https://www.facebook.com/mspradionews/
Instagram: https://www.instagram.com/mspradio/
TikTok: https://www.tiktok.com/@businessoftech
Bluesky: https://bsky.app/profile/businessof.tech

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Do you want to use AI without losing trust? What frameworks help build trust and manage AI responsibly? Can we really create trust while using AI? In this episode of the FIT4PRIVACY Podcast, host Punit Bhatia and digital trust expert Mark Thomas explain how to govern and manage AI in ways that build real trust with customers, partners, and society.

This episode breaks down what it means to use AI responsibly and how strong governance can help avoid risks. You'll also learn about key frameworks like ISO 42001, the EU AI Act, and the World Economic Forum's Digital Trust Framework—and how they can guide your AI practices.

Mark and Punit also talk about how organizational culture, company size, and leadership affect how AI is used—and how trust is built (or lost). They discuss real-world tips for making AI part of your existing business systems, and how to make decisions that are fair, explainable, and trustworthy.
In this episode, Andreas Munk Holm and Jeppe Høier sit down with Paul Morgenthaler, Partner at CommerzVentures, to unpack the inner workings of a single-LP CVC and how strategic structure can drive long-term VC success. Paul shares insights from over a decade of fintech investing, offering a rare look into how one of Europe's leading corporate venture arms thinks about climate, compliance, and the coming wave of agentic AI in financial services.They explore what it takes to make a single-LP model work, how GenAI is reshaping fintech workflows, and why European regulation may be a global feature, not a bug.
Today's guest on the ‘AI in Financial Services' podcast is Charleyne Biondi, Associate Vice President of Moody's Ratings in the Digital Economy Team. Charleyne returns to the program to share her perspective on the rapidly evolving landscape of AI regulation, comparing the EU AI Act, the US sector-specific approach, and emerging international frameworks. She outlines how regulatory divergence is shaping adoption, trust, and compliance costs for companies operating globally. Charleyne also emphasizes the risks of regulatory fragmentation in the US, where state-level laws often impose requirements as stringent as Europe's. Want to share your AI adoption story with executive peers? Click emerj.com/e2 for more information and to be a potential future guest on Emerj's flagship ‘AI in Business' podcast! If you've enjoyed or benefited from some of the insights of this episode, consider leaving us a five-star review on Apple Podcasts, and let us know what you learned, found helpful, or liked most about this show!
In this episode of Data Chronicles, host Scott Loughlin explores the challenges of defining artificial intelligence (AI) under emerging laws, including under the EU AI Act. He is joined by Hogan Lovells partner Etienne Drouard and senior associate Olga Kurochkina to discuss the difficulties in drawing clear lines around what qualifies as AI, the importance of that definition under the EU AI Act for both developers and users, and the broader landscape of AI regulation. The conversation highlights the importance of distinguishing AI from automation, the compliance obligations that follow, and the ways AI legislation continues to evolve.
In this episode, we are joined by Marietje Schaake, former Member of the European Parliament, to unpack the EU AI Act Code of Practice. Schaake served as Chair of the Working Group on Internal Risk Management and Governance of General-Purpose AI Providers for the Code of Practice, with a focus on AI model safety and security. We discuss the development and drafting of the EU AI Act and Code of Practice (16:47), break down how the Code helps AI companies demonstrate compliance with the Act (28:25), and explore the kinds of systemic risks the AI Act seeks to address (32:00).
Advancing Innovation in Manufacturing (AIM) Centre has announced the launch of its new AI Accelerator Programme for Manufacturing Companies, a ten-week hybrid initiative designed to help businesses across Ireland understand, adopt, and scale artificial intelligence effectively. For many manufacturers, the hardest part of AI adoption is knowing where to begin. AIM's new Accelerator helps turn uncertainty into action - guiding companies toward the most valuable use cases for their operations. The programme starts on 1st October 2025, and applications are now open to manufacturing companies of all sizes and sub-sectors. The programme is delivered by the National AI Studio for Manufacturing at AIM Centre, co-funded by the Government of Ireland and the European Union through the ERDF Northern and Western Regional Programme 2021-27.

Supporting AI Adoption in Irish Manufacturing
The AI Accelerator provides a structured pathway from AI strategy to deployment, enabling participating companies to build a working demonstrator tailored to their specific needs. The programme blends online delivery with in-person events, including access to Ireland's National AI Studio for Manufacturing. Over the ten weeks (one day per week), participants will gain:
• A working AI demonstrator aligned with their operations, paired with a structured use case brief to support future deployment.
• Guidance on integrating AI within existing ERP and data systems.
• Access to industry-specific use cases, demos, and prototypes, highlighting tangible opportunities for business transformation.
• Expert guidance on governance, risk, and compliance - including EU AI Act requirements.
• Insights into scaling AI across operations.
AIM Centre advises that teams of two attend from each company, ensuring both strategic and technical perspectives are represented.

Accreditation and Funding Support
The programme is CPD-accredited by Engineers Ireland, recognising its value in professional development.
To support participation, SMEs can access up to 80% funding, while larger companies can also avail of significant funding support. This ensures that businesses of all sizes can take advantage of the opportunity to future-proof their operations with AI.

AIM-ing for Real Results
"Irish manufacturing is at a pivotal moment in its digital transformation journey. Through this programme, we aim to demystify AI and give companies the tools, confidence, and practical outcomes they need to adopt it responsibly, at scale, and with measurable business impact" - David Bermingham, Director of AI, AIM Centre.

How to Apply
Applications are open now. To find out more or register your interest, visit: www.aimcentre.ie/ai-accelerator-programme

More about Irish Tech News
Irish Tech News are Ireland's No. 1 Online Tech Publication and often Ireland's No. 1 Tech Podcast too. You can find hundreds of fantastic previous episodes and subscribe using whatever platform you like via our Anchor.fm page here: https://anchor.fm/irish-tech-news
If you'd like to be featured in an upcoming Podcast, email us at Simon@IrishTechNews.ie now to discuss. Irish Tech News have a range of services available to help promote your business. Why not drop us a line at Info@IrishTechNews.ie now to find out more about how we can help you reach our audience. You can also find and follow us on Twitter, LinkedIn, Facebook, Instagram, TikTok and Snapchat.
EU AI Act architect and lead author Gabriele Mazzini shares his experience drafting the law. He also talks about his concerns with implementation and its potential impact on European competitiveness, and how that led him to quit his job, in the latest installment of our oral history project. This episode was recorded at TEDAI in Vienna and originally ran in 2024.

We Meet: MIT Media Lab Research Affiliate & MIT Connection Science Fellow Gabriele Mazzini

Credits: This episode of SHIFT was produced by Jennifer Strong and Emma Cillekens, and it was mixed by Garret Lang, with original music from him and Jacob Gorski. Art by Meg Marco.
In this Insuring Cyber podcast highlight, the discussion shows the key contrasts between the U.S. and EU strategies for artificial intelligence. The U.S. AI Action Plan is positioned … Read More » The post U.S. AI Action Plan vs. EU AI Act: Competing Approaches to Global Leadership appeared first on Insurance Journal TV.
America's AI action plan, "Winning the AI race," has just been announced. What is it all about? What are the implications? How will the rest of the world react? A deep dive into the announcement, approaches by the EU and China, and overall implications of these action plans.

Navigation:
Intro (01:34)
Context of the White House AI Summit
Pillar I – Accelerating AI Innovation
Pillar II – Building American AI Infrastructure
Pillar III – Leading in International AI Diplomacy & Security
Comparing Approaches – U.S. Action Plan vs. EU AI Act vs. China's Strategy
Implications and Synthesis
Conclusion

Our co-hosts:
Bertrand Schmitt, Entrepreneur in Residence at Red River West, co-founder of App Annie / Data.ai, business angel, advisor to startups and VC funds, @bschmitt
Nuno Goncalves Pedro, Investor, Managing Partner, Founder at Chamaeleon, @ngpedro

Our show: Tech DECIPHERED brings you the Entrepreneur and Investor views on Big Tech, VC and Start-up news, opinion pieces and research. We decipher their meaning, and add inside knowledge and context. Being nerds, we also discuss the latest gadgets and pop culture news. Subscribe To Our Podcast

Nuno G. Pedro
Welcome to episode 68 of Tech Deciphered. This episode will focus on America's AI action plan, winning the AI race, which has just been announced a couple of weeks ago by President Trump in the White House. Today, we'll be discussing the pillars of this plan, from pillar I, the acceleration of AI innovation, to pillar II, building of American AI infrastructure, to pillar III, leading in international AI diplomacy and security. We'll also further contextualise it, as well as compare the approaches between the US Action plan, and what we see from the EU and China strategy at this point in time. We'll finalise with implications and synthesis. Bertrand, is this a watershed moment for the industry? Is this the moment we were all waiting for in terms of clarity for AI in the US?

Bertrand Schmitt
Yeah, that's a great question.
I must say I'm quite excited. I'm not sure I can remember anything like it since basically John F. Kennedy announcing the race to go to the moon in the early '60s. It feels, as you say, like a watershed moment, because suddenly you can see that there is a grand vision, a grand plan, that AI is not just important, but critical to the future success of America. It looks like the White House is putting all the ducks in order to make it happen. There is, like in the '60s with JFK, a realisation that there is an adversary, there is a competitor, and you want to beat them to that race. Except this time it's not Russia, it's China. A lot of similarities, I would say.

Nuno G. Pedro
Yeah. It seems relatively comprehensive. Obviously, we'll deep dive into it today across a variety of elements like regulation, investments, view in relation to exports and imports and the rest of the world. So, relatively comprehensive from what we can see. Obviously, we don't know all the details. We know from the announcement that the plan has identified 90 federal policy actions across the three pillars. Obviously, we'll see how these come into practice over the next few months, few years.

To your point, it is a defining moment. It feels a little bit like the space race of the '60s, et cetera. It's probably warranted. We know that, obviously, AI platforms, AI services and products are changing the world as we speak. It's pretty important to figure out what is the US response to it.

Also interesting to note that we normally don't talk about the US too much in terms of industrial policy. The US seems to have a private sector that, in and of itself, actually stands up to the game, and in particular in tech and high-tech, normally fulfils or fills the gaps that are introduced by big generational shifts in terms of technology. But in this case, there seems to be an industrial policy. This seems to set the stage for that industrial policy and how it moves forward,
By Adam Turteltaub

On July 10, 2025 the European Commission posted The General-Purpose AI Code of Practice. Unlike the EU AI Act, this new Code of Practice is not compulsory, at least not yet. Still, it seems prudent to start understanding what it says and what expectations are being laid, as well as what the definition of general-purpose AI (GPAI) is. To that end, we spoke with London-based Jonathan Armstrong, Partner at Punter Southall. Jonathan explains that GPAI systems perform generally applicable functions such as image and speech recognition, audio and video generation, pattern recognition, question answering and translation. It is similar to generative AI but is not the same. He then shares that the Code of Practice contains three sections: transparency, copyright, and safety and security.

Transparency is a hugely important issue for AI. Organizations need to keep their technical documents related to their AI use current and address topics such as how the AI was designed, the technical means by which it performs functions, and energy consumption.

Copyright is a significant source of litigation at present. Authors and other content creators see the use of their work by AI engines as a violation. AI developers see the use of those works as furthering a greater good. The Code of Practice sets out measures designed to help navigate these difficult waters.

Safety & Security guidance is targeted predominantly at the most impactful GPAI operations. The Code calls for extra efforts to examine cybersecurity and the impact of the technology. This chapter of the document also includes 10 commitments for organizations to make.

Listen in to the podcast and then spend some time reviewing The General-Purpose AI Code of Practice. It's worth seeing where regulations, and perhaps your AI efforts, are going.
In this solo episode, Daniel Müller talks about a decisive success factor in the staffing services industry: internal processes as a profit lever.
In this episode, Armen Shirvanian explores the intersection of artificial intelligence and creativity, discussing how to co-create with AI while maintaining the human touch. He delves into copyright issues surrounding AI-generated content, the implications of the EU AI Act, and the dual nature of AI as both a tool for enhancing creativity and a potential […]
Get featured on the show by leaving us a Voice Mail: https://bit.ly/MIPVM
Dr. Rebekka Reinhard and Thomas Vasek -- the team behind human magazine -- join CognitivePath founders Greg Verdino and Geoff Livingston for a provocative conversation about why smart resilience, ethics, regulation and responsibility are essential for creating a human forward future in the age of AI. Tune in for a deep dive into the philosophical and practical implications of AI on society, democracy, and our collective future.

Chapters
00:00 Introduction
03:34 Smart Resilience in the Age of AI
07:09 Navigating Crises in a Complex World
11:03 Cultural Perspectives on Resilience
12:06 Global Perspectives on AI Development
16:12 Ethics and Morality in AI Regulation
21:32 The EU AI Act and Its Implications
26:09 Power Dynamics and Global Perception
28:28 AI's Role in Democracy
32:14 AI's Impact on Human Resilience
34:38 The Dangers of AI in the Workplace
38:19 Repression and Job Replacement through AI
41:09 A Hopeful Vision for the Future

About Rebekka
Dr. Rebekka Reinhard is a philosopher and SPIEGEL bestselling author. It's her mission to take philosophy out of the ivory tower and put it back where it belongs: real life. She is the founder of human, the first German magazine about life and work in the AI age. Connect with her at https://linkedin.com/in/rebekkareinhard

About Thomas
Thomas Vasek is editor-in-chief and head of content at human. He began his journalism career as an investigative reporter at the Austrian news magazine Profil. As founding editor-in-chief, he launched the German edition of the renowned MIT Technology Review in 2003 and the philosophy magazine HOHE LUFT in 2010. From 2006 to 2010, he served as editor-in-chief of P.M. Magazin. Connect with him at https://www.linkedin.com/in/thomas-va%C5%A1ek-637b6b233/

About human Magazine
human is the first magazine to take a holistic look at the impact of AI on business, politics, society, and culture – always with a focus on the human being.
Issues are published in German (print/digital) and English (digital only). Learn more and subscribe: https://human-magazin.de/ Download the free “Smart Resilience” white paper: https://human-magazin.de/#consulting Learn more about your ad choices. Visit megaphone.fm/adchoices
Summary
In this episode, Marc is chattin' with Colleen García, a seasoned privacy attorney. The conversation begins with an introduction to Colleen's extensive background in cybersecurity law, including her experience working with the U.S. government before transitioning to the private sector. This sets the stage for a deep dive into the complex relationship between data privacy and artificial intelligence (AI), highlighting the importance of understanding legal and ethical considerations as AI technology continues to evolve rapidly.

The core of the discussion centers on how AI models are trained on vast amounts of data, often containing personally identifiable information (PII). Colleen emphasizes that respecting individuals' data privacy rights is crucial, especially when it comes to obtaining proper consent for the use of their data in AI systems. She points out that while AI offers many benefits, it also raises significant concerns about data misuse, leakage, and the potential for infringing on privacy rights, which companies must carefully navigate to avoid legal and reputational risks.

Colleen elaborates on the current legal landscape, noting that existing data privacy laws—such as those in the U.S., the European Union, Canada, and Singapore—are being adapted to address AI-specific issues. She mentions upcoming regulations like the EU AI Act and highlights the role of the Federal Trade Commission (FTC) in enforcing transparency and honesty in AI disclosures. Although some laws do not explicitly mention AI, their principles are increasingly being applied to regulate AI development and deployment, emphasizing the need for companies to stay compliant and transparent.

The conversation then expands to a global perspective, with Colleen discussing how different countries are approaching the intersection of data privacy and AI.
She notes that international efforts are underway to develop legal frameworks that address the unique challenges posed by AI, reflecting a broader recognition that AI regulation is a worldwide concern. This global outlook underscores the importance for companies operating across borders to stay informed about evolving legal standards and best practices.

In closing, Colleen offers practical advice for businesses seeking to responsibly implement AI. She stresses the importance of building AI systems on a strong foundation of data privacy, including thorough vetting of training data and transparency with users. She predicts that future legislative efforts may lead to more state-level AI laws and possibly a comprehensive federal framework, although the current landscape remains fragmented. The podcast concludes with Colleen inviting listeners to connect with her for further discussion, emphasizing the need for proactive, thoughtful approaches to AI and data privacy in the evolving legal environment.

Key Points
The Relationship Between Data Privacy and AI: The discussion emphasizes how AI models are trained on data that often includes personally identifiable information (PII), highlighting the importance of respecting privacy rights and obtaining proper consent.
Legal Risks and Challenges in AI and Data Privacy: Colleen outlines potential risks such as data leakage, misuse, and the complexities of ensuring compliance with existing privacy laws when deploying AI systems.
Current and Emerging Data Privacy Laws: The conversation covers how existing laws (like those from the U.S., EU, Canada, and Singapore) are being adapted to regulate AI, along with upcoming regulations such as the EU AI Act and the role of agencies like the FTC.
International Perspectives on AI and Data Privacy: The interview highlights how different countries are approaching AI regulation, emphasizing that this is a global issue with ongoing legislative developments worldwide.
Practical Advice for Responsible AI Deployment: Colleen offers guidance for companies to build AI systems on a strong data privacy foundation.
On this episode of the Self-Publishing News Podcast, Dan Holloway reports on a coordinated bot attack that hit indie authors using Shopify, leaving some with unexpected fees and limited recourse. He also covers new and proposed legislation across the UK, EU, and US, including the UK's Online Safety Act, concerns over enforcement of the EU AI Act, and the US White House's pro-tech AI action plan—all with implications for author rights and content access.

Sponsors
Self-Publishing News is proudly sponsored by Bookvault. Sell high-quality, print-on-demand books directly to readers worldwide and earn maximum royalties selling directly. Automate fulfillment and create stunning special editions with BookvaultBespoke. Visit Bookvault.app today for an instant quote.
Self-Publishing News is also sponsored by book cover design company Miblart. They offer unlimited revisions, take no deposit to start work and you pay only when you love the final result. Get a book cover that will become your number-one marketing tool.
Find more author advice, tips, and tools at our Self-publishing Author Advice Center, with a huge archive of nearly 2,000 blog posts and a handy search box to find key info on the topic you need. And, if you haven't already, we invite you to join our organization and become a self-publishing ally.

About the Host
Dan Holloway is a novelist, poet, and spoken word artist. He is the MC of the performance arts show The New Libertines, and he competed at the National Poetry Slam final at the Royal Albert Hall. His latest collection, The Transparency of Sutures, is available on Kindle.
i'm wall-e, welcoming you to today's tech briefing for wednesday, august 6th. explore the latest in tech: openai & aws collaboration: aws now offers openai models through amazon ai services like bedrock and sagemaker, enhancing generative ai integration for enterprises and challenging microsoft's cloud dominance. openai's new open-source models: introducing gpt-oss-120b and gpt-oss-20b on hugging face, these ai reasoning models mark openai's return to open source since gpt-2, offering robust performance despite occasional hallucinations. government endorsement: openai, google, and anthropic join the list of approved ai vendors for u.s. federal agencies, streamlining ai service contracts and supporting federal ai goals. eu ai act progress: europe's ai regulatory framework advances, balancing innovation with risk prevention, setting compliance deadlines, and drawing varied reactions from advocacy groups and tech companies. leadership at emed population health: linda yaccarino, known for her negotiation skills, steps in as ceo, bringing bold leadership to the ai-driven health platform focused on glp-1 medications. that's all for today. we'll see you back here tomorrow!
In this episode, Ricardo discusses the impact of the AI Act, the European regulation on artificial intelligence (General-Purpose AI models). The law, passed in 2024 and fully in force in 2026, began imposing strict rules on general-purpose AI models such as GPT, Claude, and Gemini on August 2, 2025. Projects using these AIs, even for simple integration, must also follow ethical, privacy, and transparency requirements. This changes the role of the project manager, who now needs to ensure legal compliance. Despite criticism that the law limits innovation, Ricardo emphasizes that it signals technological maturity. For him, adapting is essential to avoid risks and add value to projects. Listen to the podcast to learn more! https://rvarg.as/euactslide https://rvarg.as/euact
Could GPT-5 only be weeks away? Why are Microsoft and Google going all in on vibe coding? What does the White House AI Action Plan actually mean? Don't spend hours a day trying to figure out what AI means for your company or career. That's our job. So join us on Mondays as we bring you the AI News That Matters. No fluff. Just what you need to ACTUALLY pay attention to in the business side of AI.

Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Thoughts on this? Join the convo and connect with other AI leaders on LinkedIn.
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn

Topics Covered in This Episode:
GPT-5 Release Timeline and Features
Google Opal AI Vibe Coding Tool
Nvidia B200 AI Chip Black Market China
Trump White House AI Action Plan Details
Microsoft GitHub Spark AI Coding Launch
Google's AI News Licensing Negotiations
Microsoft Copilot Visual Avatar ("Clippy" AI)
Netflix Uses Generative AI for Visual Effects
OpenAI Warns of AI-Driven Fraud Crisis
New Google, Claude, and Runway AI Feature Updates

Timestamps:
00:00 "OpenAI's GPT-5 Release Announced"
04:57 OpenAI Faces Pressure from Gemini
07:13 EU AI Act vs. US AI Priorities
12:12 Black Market Thrives for Nvidia Chips
13:46 US AI Action Plan Unveiled
19:34 Microsoft's GitHub Spark Unveiled
21:17 Google vs. Microsoft: AI Showdown
25:28 Google's New AI Partnership Strategy
29:23 Microsoft's Animated AI Assistant Revival
33:52 Generative AI in Film Industry
38:55 AI Race & Imminent Fraud Crisis
40:15 AI Threats and Future Innovations

Keywords: GPT 5 release date, OpenAI, GPT-4, GPT-4O, advanced reasoning abilities, artificial general intelligence, AGI, O3 reasoning, GPT-5 Mini, GPT-5 Nano, API access, Microsoft Copilot, model selector, LM arena, Gemini 2.5 Pro, Google Vibe Coding, Opal, no-code AI, low-code app maker, Google Labs, AI-powered web apps, app development, visual workflow editor, generative AI, AI app creation, Anthropic Claude Sonnet 4, GitHub Copilot Spark, Microsoft GitHub, Copilot Pro Plus, AI coding tools, AI search, Perplexity, news licensing deals, Google AI Overview, AI summaries, click-through rate, organic search traffic, Associated Press, Condé Nast, The Atlantic, LA Times, AI in publishing, generative AI video, Netflix, El Eternauta, AI-generated visual effects, AI-powered VFX, Runway, AI for film and TV, job displacement from AI, AI-driven fraud, AI voice cloning, AI impersonation, financial scams, AI regulation, White House AI Action Plan, executive orders on AI, AI innovation, AI deregulation

Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info)
Ready for ROI on GenAI? Go to youreverydayai.com/partner
This week on IA on AI, we break down the McDonald's hiring bot fiasco — yes, the one where an AI chatbot exposed data from over 60 million job applicants due to a shockingly simple security lapse. We explore why this matters to internal auditors and what basic control failures like this can teach us about staying vigilant as AI becomes more embedded in business processes.

Plus:
An update on the EU AI Act and why U.S.-based organizations should still be paying attention
How Google's AI caught a cyberattack in real time — and what this signals for the future of human-in-the-loop systems
A $4 trillion milestone for Nvidia and a record-setting $2B seed round for a new AI startup
A reality check on AGI: what it is, what it isn't, and why the hype may be outpacing the science

Be sure to follow us on our social media accounts on
LinkedIn: https://www.linkedin.com/company/the-audit-podcast
Instagram: https://www.instagram.com/theauditpodcast
TikTok: https://www.tiktok.com/@theauditpodcast?lang=en
Also be sure to sign up for The Audit Podcast newsletter and to check the full video interview on The Audit Podcast YouTube channel.

* This podcast is brought to you by Greenskies Analytics, the services firm that helps auditors leapfrog up the analytics maturity model. Their approach for launching audit analytics programs with a series of proven quick-win analytics will guarantee results worthy of the analytics hype. Whether your audit team needs a data strategy, methodology, governance, literacy, or anything else related to audit and analytics, schedule time with Greenskies Analytics.
Generative AI continues to drive conversation and concern, and not surprisingly, debates over the promise of AI and how best to regulate it have created controversial positions. The EU has been one of the leaders in addressing regulation of AI, primarily through the EU AI Act. On today's episode, we will learn more from David about the EU AI Act, as well as a US perspective from Derek on the status of AI regulation and how US companies may be impacted by the EU AI Act. Subscribe to our podcast today to stay up to date on employment issues from law experts worldwide.

Host: Tara Stingley (email) (Cline Williams Wright Johnson & Oldfather, LLP)
Guest Speakers: David van Boven (email) (Plesner / Denmark) & Derek Ishikawa (email) (Hirschfeld Kraemer LLP / California)

Support the show
Register on the ELA website here to receive email invitations to future programs.
How do we prepare students—and ourselves—for a world where AI grief companions and "deadbots" are a reality? In this eye-opening episode, Jeff Utecht sits down with Dr. Tomasz Hollanek, a critical design and AI ethics researcher at the University of Cambridge's Leverhulme Centre for the Future of Intelligence, to discuss: The rise of AI companions like Character.AI and Replika Emotional manipulation risks and the ethics of human-AI relationships What educators need to know about the EU AI Act and digital consent How to teach AI literacy beyond skill-building—focusing on ethics, emotional health, and the environmental impact of generative AI Promising examples: preserving Indigenous languages and Holocaust survivor testimonies through AI From griefbots to regulation loopholes, Tomasz explains why educators are essential voices in shaping how AI unfolds in schools and society—and how we can avoid repeating the harms of the social media era. Dr Tomasz Hollanek is a Postdoctoral Research Fellow at the Leverhulme Centre for the Future of Intelligence (LCFI) and an Affiliated Lecturer in the Department of Computer Science and Technology at the University of Cambridge, working at the intersection of AI ethics and critical design. His current research focuses on the ethics of human-AI interaction design and the challenges of developing critical AI literacy among diverse stakeholder groups; related to the latter research stream is the work on AI, media, and communications that he is leading at LCFI. Connect with him: https://link.springer.com/article/10.1007/s13347-024-00744-w https://www.repository.cam.ac.uk/items/d3229fe5-db87-42ff-869b-11e0538014d8 https://www.desirableai.com/journalism-toolkit
AI is racing ahead, but for industries like life sciences, the stakes are higher and the rules more complex. In this episode, recorded just before the July heatwave hit its peak, I spoke with Chris Moore, President of Europe at Veeva Systems, from his impressively climate-controlled garden office. We covered everything from the trajectory of agentic AI to the practicalities of embedding intelligence in highly regulated pharma workflows, and how Veeva is quietly but confidently positioning itself to deliver where others are still making announcements.

Chris brings a unique perspective shaped by a career that spans ICI Pharmaceuticals, PwC, IBM, and EY. That journey taught him how often the industry was forced to rebuild the same tech infrastructure again and again, until Veeva came along. He shares how Veeva's decision to build a life sciences-specific cloud platform from the ground up has enabled a deeper, more compliant integration of AI.

We explored what makes Veeva AI different, from the CRM bot that handles compliant free text to MLR agents that support content review and approval. Chris explains how Veeva's AI agents inherit the context and controls of their applications, making them far more than chat wrappers or automation tools. They are embedded directly into workflows, helping companies stay compliant while reducing friction and saving time. And perhaps more importantly, he makes a strong case for why the EU AI Act isn't a barrier but a validation.

From auto-summarising regulatory documents to pulling metadata from health authority correspondence, the real-world examples Chris offers show how Veeva AI will reduce repetitive work while ensuring integrity at every step. He also shares how Veeva is preparing for a future where companies may want to bring their own LLMs, or even run different ones by geography or task. Their flexible, harness-based approach is designed to support exactly that.
Looking ahead to the product's first release in December, Chris outlines how Veeva is working hand-in-hand with customers to ensure readiness and reliability from day one. We also touch on the broader mission: using AI not as a shiny add-on, but as a tool to accelerate drug development, reach patients faster, and relieve the pressure on already overstretched specialist teams. Chris closes with a dose of humanity, offering a book and song that both reflect Veeva's mindset, embracing disruption while staying grounded. This one is for anyone curious about how real, applied AI is unfolding inside one of the world's most important sectors, and what it means for the future of medicine.
From February 16, 2024: The EU has finally agreed to its AI Act. Despite the political agreement reached in December 2023, some nations maintained reservations about the text, making it uncertain whether there was a final agreement or not. They recently reached an agreement on the technical text, moving the process closer to a successful conclusion. The challenge now will be effective implementation.

To discuss the act and its implications, Lawfare Fellow in Technology Policy and Law Eugenia Lostri sat down with Itsiq Benizri, counsel at the law firm WilmerHale Brussels. They discussed how domestic politics shaped the final text, how governments and businesses can best prepare for the new requirements, and whether the European act will set the international roadmap for AI regulation.

To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute.

Support this show: http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.
