Get featured on the show by leaving us a Voice Mail: https://bit.ly/MIPVM
The EU is regulating artificial intelligence, and with stiff penalties: violations of the new EU AI Act can cost up to 7% of annual turnover. But don't panic: in this episode, Daniel Müller, together with AI expert Andreas Mai, explains what you need to watch out for and how to put yourself and your team on a legally sound footing.
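The "up to 7% of annual turnover" figure above is part of a higher-of formula: the Act's top penalty tier for prohibited practices is the greater of a fixed EUR 35 million cap or 7% of global annual turnover. A quick illustrative calculation (the turnover figures are made up for the example; this is a sketch, not legal advice):

```python
# Illustrative only: the AI Act's top penalty tier is the higher of a fixed
# amount (EUR 35 million for prohibited-practice violations) or 7% of global
# annual turnover. Turnover figures below are invented for the example.

def max_fine(annual_turnover_eur, fixed_cap=35_000_000, pct=0.07):
    """Return the ceiling of the top penalty tier for a given turnover."""
    return max(fixed_cap, pct * annual_turnover_eur)

print(max_fine(2_000_000_000))  # 7% of EUR 2B exceeds the fixed cap
print(max_fine(100_000_000))    # here the EUR 35M fixed cap is higher
```

For large providers the percentage dominates; for smaller ones the fixed cap does, which is why the headlines quote both numbers.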
The EU AI Act has been in force since August 2, 2025. A code of practice defines exactly which requirements AI providers must meet, such as making training data more transparent. The code is voluntary, however. 26 companies have signed it. Not among them: Meta. You can also follow us on these channels: TikTok and Instagram.
In this episode, Ricardo discusses the impact of the AI Act, the European regulation on artificial intelligence (General-Purpose AI models). The law, passed in 2024 and fully in force in 2026, began imposing strict rules on general-purpose AI models such as GPT, Claude, and Gemini on August 2, 2025. Projects using these AIs, even for simple integration, must also follow ethical, privacy, and transparency requirements. This changes the role of the project manager, who now needs to ensure legal compliance. Despite criticism that the law limits innovation, Ricardo emphasizes that it signals technological maturity. For him, adapting is essential to avoid risks and add value to projects. Listen to the podcast to learn more! https://rvarg.as/euactslide https://rvarg.as/euact
The EU's new AI regulation (the AI Act) is coming into force. It is meant to impose stricter transparency obligations on language models such as ChatGPT. What will change concretely for AI startups? Jonas Becher, founder of the AI startup Masasana, weighs in. From WDR 5.
This week on Taking Stock with Susan HayesCulleton: Sarah Collins, Brussels Correspondent with the Business Post, and John Fitzgerald, Professor in the Department of Economics at Trinity College Dublin, join Susan to give their views on this week's EU-US trade deal. Susan looks to find out more about the next phase of the EU AI Act that comes into force this week with John Callahan, President and CTO of Partsol. Plus, Aidan Donnelly, Head of Equities at Davy, talks US inflation, equities, and the dollar outlook.
The EU AI Act enters its next phase. Meta boosts revenue and profits. Apple's AI team, by contrast, is losing staff. Nvidia is asked to explain itself to China's government. Links to all of today's topics can be found here: https://heise.de/-10506541 https://www.heise.de/thema/KI-Update https://pro.heise.de/ki/ https://www.heise.de/newsletter/anmeldung.html?id=ki-update https://www.heise.de/thema/Kuenstliche-Intelligenz https://the-decoder.de/ https://www.heiseplus.de/podcast https://www.ct.de/ki A new episode is released Mondays, Wednesdays, and Fridays at 3 p.m.
The $600 billion MedTech industry is undergoing a technological transformation. From AI-powered medical imaging to smart diagnostics and remote monitoring tools, artificial intelligence and machine learning are reshaping how care is delivered — and increasingly how patients manage their own health. In this episode of the BioRevolution podcast, we are joined by our guest, idalab's MedTech expert Julian Beimes to discuss how this AI-driven wave aligns with broader shifts in medicine: virtualization, personalization, and prevention. But alongside the innovation, we also unpack the challenges — especially the complex and often fragmented regulatory environment. Are policies like the EU AI Act promoting safety, or holding back progress? Find Julian here: https://www.linkedin.com/in/julian-beimes/ Find idalab here: https://idalab.de/ Disclaimer: Louise von Stechow & Andreas Horchler and their guests express their personal opinions, which are founded on research on the respective topics, but do not claim to give medical, investment or even life advice in the podcast. Learn more about the future of biotech in our podcasts and keynotes. Contact us here: scientific communication: https://science-tales.com/ Podcasts: https://www.podcon.de/ Keynotes: https://www.zukunftsinstitut.de/louise-von-stechow Image: Igor Saikin via Unsplash
Could GPT-5 be only weeks away? Why are Microsoft and Google going all in on vibe coding? What does the White House AI Action Plan actually mean? Don't spend hours a day trying to figure out what AI means for your company or career. That's our job. So join us on Mondays as we bring you the AI News That Matters. No fluff. Just what you need to ACTUALLY pay attention to in the business side of AI. Newsletter: Sign up for our free daily newsletter. More on this Episode: Episode Page. Join the discussion: Thoughts on this? Join the convo and connect with other AI leaders on LinkedIn. Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup. Website: YourEverydayAI.com. Email The Show: info@youreverydayai.com. Connect with Jordan on LinkedIn.
Topics Covered in This Episode: GPT-5 Release Timeline and Features; Google Opal AI Vibe Coding Tool; Nvidia B200 AI Chip Black Market China; Trump White House AI Action Plan Details; Microsoft GitHub Spark AI Coding Launch; Google's AI News Licensing Negotiations; Microsoft Copilot Visual Avatar ("Clippy" AI); Netflix Uses Generative AI for Visual Effects; OpenAI Warns of AI-Driven Fraud Crisis; New Google, Claude, and Runway AI Feature Updates.
Timestamps: 00:00 "OpenAI's GPT-5 Release Announced"; 04:57 OpenAI Faces Pressure from Gemini; 07:13 EU AI Act vs. US AI Priorities; 12:12 Black Market Thrives for Nvidia Chips; 13:46 US AI Action Plan Unveiled; 19:34 Microsoft's GitHub Spark Unveiled; 21:17 Google vs. Microsoft: AI Showdown; 25:28 Google's New AI Partnership Strategy; 29:23 Microsoft's Animated AI Assistant Revival; 33:52 Generative AI in Film Industry; 38:55 AI Race & Imminent Fraud Crisis; 40:15 AI Threats and Future Innovations.
Keywords: GPT-5 release date, OpenAI, GPT-4, GPT-4o, advanced reasoning abilities, artificial general intelligence, AGI, o3 reasoning, GPT-5 Mini, GPT-5 Nano, API access, Microsoft Copilot, model selector, LM Arena, Gemini 2.5 Pro, Google Vibe Coding, Opal, no-code AI, low-code app maker, Google Labs, AI-powered web apps, app development, visual workflow editor, generative AI, AI app creation, Anthropic Claude Sonnet 4, GitHub Copilot Spark, Microsoft GitHub, Copilot Pro Plus, AI coding tools, AI search, Perplexity, news licensing deals, Google AI Overview, AI summaries, click-through rate, organic search traffic, Associated Press, Condé Nast, The Atlantic, LA Times, AI in publishing, generative AI video, Netflix, El Eternauta, AI-generated visual effects, AI-powered VFX, Runway, AI for film and TV, job displacement from AI, AI-driven fraud, AI voice cloning, AI impersonation, financial scams, AI regulation, White House AI Action Plan, executive orders on AI, AI innovation, AI deregulation.
Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info) Ready for ROI on GenAI? Go to youreverydayai.com/partner
In this episode, Andreas Munk Holm is joined by Eoghan O'Neill, Senior Policy Officer at the European Commission's AI Office, to break down the EU AI Act, Europe's strategy to lead the global AI wave with trust, safety, and world-class infrastructure. They dive into why the EU's approach to AI is not just regulatory red tape but a proactive framework to ensure innovation and adoption flourish across sectors, from startups to supercomputers. Eoghan unpacks how startups can navigate the Act, why Europe's regulatory clarity is an advantage, and how investors should be thinking about this new paradigm. Here's what's covered:
02:41 Eoghan's Unorthodox Journey: From Gravy Systems to AI Policy
04:32 The Mission & Structure of the AI Office
05:52 Understanding the AI Act: A Product Safety Framework
09:40 How the AI Act Was Created: An Open, 1,000+ Stakeholder Process
17:42 What Counts as High-Risk AI (And What Doesn't)
21:23 Learning from GDPR & Ensuring Innovation Isn't Crushed
26:10 Transparency, Trust & The Limits of Regulation
30:15 What VCs Need to Know: Obligations, Timelines & Opportunities
34:42 Europe's Global AI Position: Infra, Engineers, Strategy
43:33 Global Dynamics: Commoditization, Gulf States & the Future of AGI
48:46 What's Coming: Apply AI Strategy in September
This week on IA on AI, we break down the McDonald's hiring bot fiasco: yes, the one where an AI chatbot exposed data from over 60 million job applicants due to a shockingly simple security lapse. We explore why this matters to internal auditors and what basic control failures like this can teach us about staying vigilant as AI becomes more embedded in business processes. Plus:
- An update on the EU AI Act and why U.S.-based organizations should still be paying attention
- How Google's AI caught a cyberattack in real time, and what this signals for the future of human-in-the-loop systems
- A $4 trillion milestone for Nvidia and a record-setting $2B seed round for a new AI startup
- A reality check on AGI: what it is, what it isn't, and why the hype may be outpacing the science
Be sure to follow us on our social media accounts on LinkedIn: https://www.linkedin.com/company/the-audit-podcast Instagram: https://www.instagram.com/theauditpodcast TikTok: https://www.tiktok.com/@theauditpodcast?lang=en Also be sure to sign up for The Audit Podcast newsletter and to check out the full video interview on The Audit Podcast YouTube channel.
* This podcast is brought to you by Greenskies Analytics, the services firm that helps auditors leapfrog up the analytics maturity model. Their approach to launching audit analytics programs with a series of proven quick-win analytics will guarantee results worthy of the analytics hype. Whether your audit team needs a data strategy, methodology, governance, literacy, or anything else related to audit and analytics, schedule time with Greenskies Analytics.
Generative AI continues to drive conversation and concern, and not surprisingly, views on AI's promise and on how best to regulate it remain sharply divided. The EU has been a leader in addressing AI regulation, primarily through the EU AI Act. On today's episode, we will learn more from David about the EU AI Act, as well as a US perspective from Derek on the status of AI regulation and how US companies may be impacted by the EU AI Act. Subscribe to our podcast today to stay up to date on employment issues from law experts worldwide.
Host: Tara Stingley (email) (Cline Williams Wright Johnson & Oldfather, LLP)
Guest Speakers: David van Boven (email) (Plesner / Denmark) & Derek Ishikawa (email) (Hirschfeld Kraemer LLP / California)
Support the show. Register on the ELA website here to receive email invitations to future programs.
Human Resources and Workforce Impact:
- Bias in Automation: Ensure that automated HR processes undergo regular audits to identify and mitigate biases, particularly in candidate selection and hiring.
- Regulatory Oversight: Implement annual bias audits for automated employment decision tools to comply with regulations.
- Employee Surveillance: Review and update employee monitoring practices to ensure compliance with privacy regulations as well as OSHA and HIPAA.
Regulatory Compliance and Legal Risks:
- Decentralized AI Regulation: Develop a comprehensive strategy to track and comply with AI regulations across different states.
- EU AI Act: Assess the impact of the EU AI Act on your operations and ensure compliance with its requirements whenever your systems are used within the EU, even if your company is based elsewhere.
- Terms of Service: Establish a process to monitor and review changes in the terms of service of AI, other technology, and communications tools, ensuring compliance and proper data usage.
Operational Resilience and Business Continuity:
- System Dependencies: Regularly evaluate AI systems for data representativeness and bias, and adapt to real-time changes in company operations.
- Supply Chain Vulnerabilities: Conduct frequent audits of third-party components and vendors to identify and mitigate supply chain vulnerabilities.
- Cyber Threats: Update employee training programs to include awareness and prevention of deepfake scams and other sophisticated cyber threats.
Strategic Oversight and Accountability:
- Ethical Considerations: Form multidisciplinary task forces for AI adoption, including general counsel, to classify use cases based on risk levels.
- ROI and Uncertainty: Ask for detailed ROI estimates, timelines, and milestones for AI projects, accounting for uncertainty and potential qualitative outcomes.
- Director Education: Encourage directors to engage in educational opportunities, such as NACD masterclasses and other governance-focused content, to enhance their understanding of AI governance.
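The "annual bias audits" item above can be made concrete. One common screen in US employment practice is the four-fifths rule: a group's selection rate should be at least 80% of the most-favored group's rate. A minimal sketch (group names, applicant counts, and the audit wrapper are all illustrative; a real audit is far broader than this single ratio):

```python
# Minimal sketch of an adverse-impact check for an automated hiring tool,
# using the "four-fifths rule": each group's selection rate should be at
# least 80% of the highest group's rate. Data and names are illustrative.

def selection_rates(outcomes):
    """outcomes maps group name -> (selected, total applicants)."""
    return {g: selected / total for g, (selected, total) in outcomes.items()}

def adverse_impact(outcomes, threshold=0.8):
    """Return, per group, whether its rate falls below threshold * best rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: r / best < threshold for g, r in rates.items()}

audit = adverse_impact({
    "group_a": (50, 100),  # 50% selection rate (the best rate here)
    "group_b": (30, 100),  # 30% rate -> ratio 0.6, below 0.8, so flagged
})
print(audit)  # {'group_a': False, 'group_b': True}
```

A flagged group is not proof of illegal bias, only a signal to investigate, which is why such checks are paired with human review.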
Special Virtual Episodes with ISACA Leaders: State of Cyber (Part 1) - Maintaining readiness in a complex threat environment
Speakers:
Jamie Norton - ISACA Board Member
Chirag Joshi - Sydney Chapter Board Member
Abby Zhang - Auckland Chapter Board Member
Jason Wood - Auckland Chapter former President
Bharat Bajaj - ISACA Melbourne Board Director
For the full series visit: https://mysecuritymarketplace.com/security-amp-risk-professional-insight-series-2025/
#mysecuritytv #isaca #cybersecurity
OVERVIEW
According to ISACA research, almost half of companies exclude cybersecurity teams when developing, onboarding, and implementing AI solutions. Only around a quarter (26%) of cybersecurity professionals or teams in Oceania are involved in developing policy governing the use of AI technology in their enterprise, and nearly half (45%) report no involvement in the development, onboarding, or implementation of AI solutions, according to the recently released 2024 State of Cybersecurity survey report from global IT professional association ISACA.
Key Report Findings: Security teams in Oceania noted they are primarily using AI for: automating threat detection/response (36% vs 28% globally); endpoint security (33% vs 27% globally); automating routine security tasks (22% vs 24% globally); and fraud detection (6% vs 13% globally).
Additional AI resources to help cybersecurity and other digital trust professionals:
- EU AI Act white paper
- Examining Authentication in the Deepfake Era
SYNOPSIS
ISACA's 2024 State of Cybersecurity report reveals that stress levels are on the rise for cybersecurity professionals, largely due to an increasingly challenging threat landscape.
The annual ISACA research also identifies key skills gaps in cybersecurity, how artificial intelligence is impacting the field, the role of risk assessments and cyber insurance in enterprises' security programs, and more. The demand for cybersecurity talent has been consistently high, yet efforts to increase supply are not reflected in the global ISACA IS/IT-community workforce. Current cybersecurity practitioners are aging, and efforts to increase staffing with younger professionals are making little progress. Left unchecked, this situation will create business continuity issues in the future. Shrinking budgets and stagnant employee compensation have the potential to adversely affect cybersecurity readiness much sooner than the aging workforce, once the Big Stay passes. Declines in vacant positions across all reporting categories may lead some enterprises to believe that the pendulum of power will swing back to employers, but the increasingly complex threat environment is greatly increasing stress in cybersecurity teams; the concern is therefore not if, but when, employees will reach their tipping point and vacate current positions.
Welcome to 'AI Lawyer Talking Tech,' your weekly deep dive into the profound shifts occurring across the legal sector. From the European Union's forthcoming AI Act, set to take effect in August 2025, which introduces comprehensive regulations for general-purpose AI models, to the complex patchwork of AI legislation emerging across US states like Colorado, California, and Texas, the legal landscape is undergoing a fundamental transformation. Today, we'll explore how artificial intelligence is revolutionizing legal practice, enhancing efficiency, and improving productivity through tools like agentic AI systems designed to execute complex, multi-step workflows and integrated practice management software. However, this evolution also brings significant challenges: from the critical concern of AI 'hallucinations' producing false information and their potential impact on critical thinking skills for young lawyers, to the new legal and ethical questions surrounding confidentiality and privilege with AI notetaking tools. We'll discuss how legal professionals are working to uphold professional standards and ensure responsible innovation, adapting their skill sets, and preparing for an AI-driven future that demands both technological literacy and enduring human judgment. 
Join us as we unpack the opportunities and challenges of AI in law.
No 'Stop the Clock' For the EU AI Act (and a belated General-Purpose AI Code of Practice): What Does This Mean to You? (23 Jul 2025, King & Spalding)
How Small Businesses in Texas Are Getting Sued Over Data Breaches (22 Jul 2025, Legal Reader)
Kristen Bateman Leis Selected as Chair of College of Law Practice Management Annual Conference (22 Jul 2025, Parker Poe Adams & Bernstein LLC)
At Law Librarians' Annual Meeting, Panel Tackles the Challenge of Benchmarking AI Research Tools (22 Jul 2025, LawSites)
GenAI And Critical Thinking: The Problem Is The Problem (22 Jul 2025, Above The Law)
On LawNext: CoCounsel's Next Generation – TR's Emily Colbert and Rawia Ashraf on Agentic AI for Lawyers (22 Jul 2025, LawSites)
Employee and Skill Retention During the AI Transition in the Legal Sector (22 Jul 2025, AIJourn.com)
Navigating Disruption: The Unofficial Theme of the 2025 AMC (22 Jul 2025, Wisconsin Lawyer Magazine)
OpenLaw raises $3.5M to fix America's broken legal system and democratize access to justice (22 Jul 2025, TechStartups.com)
Wolfe Pincavage Strengthens Litigation Practice With Addition Of Veteran Trial Attorney & Stanford Law Alum Anna Torres (22 Jul 2025, CityBiz.co)
Impact Lawyers and FashionCapital Launch Exclusive One-Day Funding Readiness Programme for Creative Founders (22 Jul 2025, Fashion Capital)
AI Watch: Global regulatory tracker - United States - Update (22 Jul 2025, JD Supra)
Choosing the Right U.S. Corporate Domicile in the Age of Dexit: Key Considerations (22 Jul 2025, JD Supra)
Potential Federal Court Circuit Split Increases Uncertainty Around FCC's Enforcement Authority (22 Jul 2025, JD Supra)
Abu Dhabi Judicial Department Accepts AE Coin for Legal Fees (22 Jul 2025, CryptoNews.net)
Judicial panel explores AI's promise and perils in the courtroom (22 Jul 2025, Florida Bar News)
Law School Toolbox Podcast Episode 513: Grappling with AI as a Law Student and Lawyer (1L Summer Series) (22 Jul 2025, JD Supra)
The Importance of Timely Document Serving in Global Operations (22 Jul 2025, World Business Outlook)
AI Notetaking in the Legal and Business Context: Does It Risk Confidentiality or Privilege? (22 Jul 2025, JD Supra)
UK judge warns of justice risks as lawyers cite fake AI-generated cases in court (22 Jul 2025, Computing.co.uk)
The AI Rat Scandal: Publishing's Latest Integrity Crisis and the Role of Legal Libraries (22 Jul 2025, Real Lawyers Have Blogs)
Why You Need a Crypto Law Compliance Expert in Florida for Secure Blockchain Operations (22 Jul 2025, TechBullion)
Rescission of Cross References (22 Jul 2025, Federal Register)
In this podcast episode I talk once again with my colleague Thomas Fröhlich, Head of AI & Automation, about the madness of the past few months in AI. Here are five highlights from our discussion:
Spotting AI-written text: Thomas and I have noticed that we have developed an "AI radar" by now. Certain stylistic tics, such as excessively long dashes, immediately give away that a text was written by an AI. It is an interesting phenomenon: AI is developing a style of its own that we have to get used to, and it raises the question of what individual expression in writing will mean in the future.
AI in customer service (RAG): Allianz UK uses AI to search huge underwriting documents in seconds. This saves enormous amounts of time and gets new employees up to speed faster. Technically this is not a brand-new approach (Retrieval Augmented Generation, or RAG), but the practical deployment is the real story: it is not the technology alone but the execution that counts.
Mindset and AI: We talked about the importance of the right mindset when working with AI. It is about consciously recognizing when to use AI to verify information, in order to avoid mistakes and expand your own knowledge. Just as we draw on new research findings when raising children, we should use AI at work as a tool for better decisions.
EU AI Act and training: The EU AI Act has come into force and mandates AI training for employees. Thomas and I agree that hands-on training is far more effective than dull click-through PowerPoint presentations. In fact, AI itself can be an excellent training tool; you just have to use it properly.
Text-to-video and business cases: We talked about applying text-to-video in the insurance industry. The ability to reconstruct claims from video, or to analyze them automatically, could be extremely efficient and save a lot of time. The quality of the prompt is, of course, decisive for achieving realistic results.
Links in this issue: Jonas Piela's homepage; Jonas Piela's LinkedIn profile; Thomas Fröhlich's LinkedIn profile.
The Liferay Digital Experience Platform: Customers expect digital services for communication, claims reporting, and claims handling. Liferay's Digital Experience Platform offers out-of-the-box features such as low-code tooling and the highest levels of security and reliability. Get in touch now.
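The RAG pattern mentioned in the episode (retrieve the passages most relevant to a question, then let a model answer from them) can be sketched in a few lines. This toy version uses bag-of-words cosine similarity in place of a real embedding model, and the document chunks and query are invented for illustration:

```python
# Toy Retrieval Augmented Generation (RAG) pipeline: score document chunks
# against a query with bag-of-words cosine similarity, then build a prompt
# from the top hits. A production system would use embeddings and an LLM call.
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two texts as word-count vectors."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(v * v for v in va.values()))
    nb = math.sqrt(sum(v * v for v in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, k=2):
    """Return the k chunks most similar to the query."""
    return sorted(chunks, key=lambda c: cosine(query, c), reverse=True)[:k]

chunks = [
    "Underwriting guidelines for flood cover in coastal regions.",
    "Claims handling process for motor accidents.",
    "Flood risk underwriting requires elevation certificates.",
]
query = "What do underwriting rules say about flood risk?"
context = retrieve(query, chunks)
prompt = "Answer using only this context:\n" + "\n".join(context) + "\nQ: " + query
```

The value, as the episode notes, is less in the retrieval math than in the execution: chunking real documents well and grounding the model's answer in the retrieved context.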
Send us a text. In episode 246 of "The Data Diva" Talks Privacy Podcast, Debbie Reynolds talks to Aparna Bhushan, a co-host of the Rethinking Tech podcast and a seasoned data protection and governance attorney licensed in both the U.S. and Canada. Together, they explore the critical intersection of geopolitics, tech policy, and data ethics. Aparna shares her professional journey from startups to global corporations and international organizations, such as UNICEF, where her passion for ethical and practical data governance took root. The conversation explores the fast-paced and often contradictory dynamics facing governments, companies, and users in the digital age, highlighting how the collapse of traditional rules has left many institutions scrambling for direction. Debbie and Aparna discuss how companies are navigating conflicting global regulations, the growing risks of consumer backlash, and the real-world consequences of poor data decisions, such as the fallout from GM's data broker scandal and the potential sale of sensitive genetic data in the 23andMe bankruptcy. They also address the dangers of regulation lag, scope creep, and public distrust in platforms that mishandle personal data. Aparna shares her perspective on the emerging global impact of the EU AI Act and the regulatory vacuum in the U.S., arguing that proactive privacy strategies and consumer trust are more valuable than merely checking compliance boxes. The two dive deep into the complexities of age verification laws, questioning the practicality and privacy implications of requiring IDs or weakening encryption to protect children online.
They emphasize the need for innovation that respects user rights and propose creative approaches to solving systemic data challenges, including Aparna's vision for AI systems that can audit other AI models for fairness and bias. To close the episode, Aparna shares her global privacy wish list: a more conscious, intentional user culture and a renewed investment in responsible technology development. This thoughtful and wide-ranging conversation is a must-listen for anyone interested in the ethical evolution of data governance in a rapidly shifting global landscape. Support the show
How do we prepare students, and ourselves, for a world where AI grief companions and "deadbots" are a reality? In this eye-opening episode, Jeff Utecht sits down with Dr. Tomasz Hollanek, a critical design and AI ethics researcher at the University of Cambridge's Leverhulme Centre for the Future of Intelligence, to discuss:
- The rise of AI companions like Character.AI and Replika
- Emotional manipulation risks and the ethics of human-AI relationships
- What educators need to know about the EU AI Act and digital consent
- How to teach AI literacy beyond skill-building, focusing on ethics, emotional health, and the environmental impact of generative AI
- Promising examples: preserving Indigenous languages and Holocaust survivor testimonies through AI
From griefbots to regulation loopholes, Tomasz explains why educators are essential voices in shaping how AI unfolds in schools and society, and how we can avoid repeating the harms of the social media era. Dr Tomasz Hollanek is a Postdoctoral Research Fellow at the Leverhulme Centre for the Future of Intelligence (LCFI) and an Affiliated Lecturer in the Department of Computer Science and Technology at the University of Cambridge, working at the intersection of AI ethics and critical design. His current research focuses on the ethics of human-AI interaction design and the challenges of developing critical AI literacy among diverse stakeholder groups; related to the latter research stream is the work on AI, media, and communications that he is leading at LCFI. Connect with him: https://link.springer.com/article/10.1007/s13347-024-00744-w https://www.repository.cam.ac.uk/items/d3229fe5-db87-42ff-869b-11e0538014d8 https://www.desirableai.com/journalism-toolkit
Across EMEA, Artificial Intelligence (AI) is redefining industries, inspiring innovation, improving operations, and driving growth. Government and Irish businesses are embracing and capitalising on AI's potential to enhance customer experiences and gain a competitive advantage. But as adoption accelerates, new security challenges arise, demanding vigilant attention to protect these investments. Forecasts indicate that AI could contribute trillions to the global economy by 2030, with Ireland well-positioned to capture a significant share of this value. According to Dell Technologies' Innovation Catalyst Study, 76% say AI and Generative AI (GenAI) is a key part of their organisation's business strategy, while 66% of organisations are already in the early to mid stages of their AI and GenAI journey. As AI becomes more embedded in everything from customer management to critical infrastructure, safeguarding these investments and tackling the evolving cyber threat landscape must be a priority. To that end, the success of integrating AI in the region depends on addressing three critical security imperatives: managing the risks associated with AI usage, proactively defending against AI-enhanced attacks, and employing AI to enhance the overall security posture.
Managing the Risks of AI Usage
Ireland, as a digital hub within the EU, must navigate a complex regulatory environment shaped by the Digital Operational Resilience Act (DORA), the NIS2 Directive, the Cyber Resilience Act and the recently launched EU AI Act. These frameworks introduce stringent cybersecurity requirements that businesses leveraging AI must meet to ensure resilience and compliance. AI's reliance on vast amounts of data presents unique challenges. AI models are built, trained, and fine-tuned with data sets, making their protection paramount.
To meet these challenges, Irish organisations must embed cybersecurity principles such as least-privilege access, robust authentication controls, and real-time monitoring into every stage of the AI lifecycle. However, technology alone isn't enough; these measures must also be implemented effectively. The Innovation Catalyst Study highlighted that a lack of skills and expertise ranks as one of the top three challenges faced by organisations looking to modernise their defences. Bridging this skills gap is vital to delivering secure and scalable AI solutions, because only with the right talent, governance, and security-first mindset can Ireland unlock the full potential of AI innovation in a resilient and responsible way. A further step that Irish businesses can take to address AI risks is to integrate risk considerations across ethical, safety, and cultural domains. A multidisciplinary approach can help ensure that AI is deployed responsibly. Establishing comprehensive AI governance frameworks is essential. These frameworks should include perspectives from experts across the organisation to balance security, compliance, and innovation within a single, cohesive risk management strategy.
Countering AI-Powered Threats
While AI has enormous potential, bad actors are leveraging AI to enhance the speed, scale, and sophistication of attacks. Social engineering schemes, advanced fraud tactics, and AI-generated phishing emails are becoming more difficult to detect, with some leading to significant financial losses. Deepfakes, for instance, are finding their way into targeted scams aimed at compromising organisations. A 2024 ENISA report highlighted that AI-enhanced phishing attacks have surged by 35% in the past year, underscoring the need for stronger cybersecurity measures. To stay ahead, organisations must prepare for an era where cyberattacks operate at machine speed.
Transitioning to a defensive approach anchored in automation is key to responding swiftly and effectively, minimising the impact of advanced attacks. The future of AI agents in the cybersecurity domain may not be far off. This means deploying AI-powered security tools that can detect anomalies in real time...
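"Detecting anomalies in real time", as described above, often comes down to scoring each new event against a learned baseline. A minimal sketch using a rolling z-score, say over logins per minute (the window size, threshold, and data are illustrative; production tools learn far richer baselines than a single metric):

```python
# Minimal real-time anomaly detector: flag a metric (e.g. logins per minute)
# that deviates more than `z_max` standard deviations from a rolling baseline.
# Window size and threshold are illustrative, not recommended settings.
import statistics
from collections import deque

class AnomalyDetector:
    def __init__(self, window=20, z_max=3.0):
        self.history = deque(maxlen=window)  # rolling baseline of recent values
        self.z_max = z_max

    def observe(self, value):
        """Return True if `value` is anomalous relative to recent history."""
        anomalous = False
        if len(self.history) >= 5:  # wait for a minimal baseline
            mean = statistics.mean(self.history)
            stdev = statistics.pstdev(self.history) or 1e-9  # avoid div by zero
            anomalous = abs(value - mean) / stdev > self.z_max
        self.history.append(value)
        return anomalous

detector = AnomalyDetector()
baseline = [10, 12, 11, 9, 10, 11, 10, 12]          # normal traffic
flags = [detector.observe(v) for v in baseline + [500]]
print(flags[-1])  # True: a spike to 500 logins/min is flagged
```

The automation angle is that the flag can trigger a response (lock an account, open a ticket) without waiting for a human, which is what "machine speed" defence means in practice.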
CeADAR, Ireland's Centre for AI, this month celebrated enrolling its 1,500th learner in AI for You, an online course for Irish enterprises and public sector organisations that want to increase their AI awareness and literacy and boost their knowledge of regulations governing AI, such as the EU AI Act. The AI for You programme was developed by CeADAR in conjunction with the Department of Enterprise, Tourism and Employment (DETE). The course is fully funded, supported by CeADAR's European Digital Innovation Hub (EDIH) for AI programme, which itself is funded by Enterprise Ireland and the European Commission. The programme is self-paced, so it can be completed in a learner's own time, and is made up of five modules: an introduction to AI, the concepts underpinning AI, the applications and impacts of AI, the future with AI, and AI governance and the EU AI Act. The first-ever legal framework on AI, the EU AI Act sets out rules for AI providers and those that deploy AI technology on specific uses of AI. The EU AI Act came into effect in August last year. Those interested in enrolling in the programme can do so by following the instructions on the CeADAR website (www.ceadar.ie/edih/skills-and-training/). The EDIH is a €700m European initiative comprising more than 160 tech hubs across 30 countries. CeADAR's selection as the EDIH for AI in Ireland came with an initial funding boost of €6 million over three years. The award is jointly supported by the EU and the Government of Ireland through Enterprise Ireland. Minister Smyth, Minister of State for Trade Promotion, Artificial Intelligence and Digital Transformation, said: "I am very pleased with the success of the AI for You online course and I congratulate CeADAR on the achievement of enrolling the 1,500th learner. This reflects the growing appetite for AI skills in Ireland but also our commitment to equipping citizens and businesses with the knowledge and tools they need to thrive in the digital age."
CeADAR's Director of Innovation and Development and EDIH for AI Programme Director, Dr. Ricardo Simon Carbajo, said: "This is a significant milestone and is contributing to companies' and public sector organisations' ability to understand and comply with the EU AI Act. We thank all those who signed up for this course and look forward to welcoming more in the future." See more stories here. More about Irish Tech News: Irish Tech News are Ireland's No. 1 Online Tech Publication and often Ireland's No. 1 Tech Podcast too. You can find hundreds of fantastic previous episodes and subscribe using whatever platform you like via our Anchor.fm page here: https://anchor.fm/irish-tech-news If you'd like to be featured in an upcoming Podcast email us at Simon@IrishTechNews.ie now to discuss. Irish Tech News have a range of services available to help promote your business. Why not drop us a line at Info@IrishTechNews.ie now to find out more about how we can help you reach our audience. You can also find and follow us on Twitter, LinkedIn, Facebook, Instagram, TikTok and Snapchat.
With Paul away, join K and Ralph for a riotous discussion of personal integrity and which positions we can work with and for, as regulators and industry cross-pollinate individuals and resources. Can regulators remain ethical and independent when we rely on industry for skills and abilities? Also, a week of news in Privacy and Data Protection with a round-up of EU, UK, US and beyond news, cases, regulations and standards, including age verification, censorship, the EU AI Act, privacy-preserving advertising, freedom of speech laws and new developments across the globe! If you have comments or questions, find us on LinkedIn and Instagram @seriousprivacy, and on BlueSky under @seriousprivacy.eu, @europaulb.seriousprivacy.eu, @heartofprivacy.bsky.app and @igrobrien.seriousprivacy.eu, and email podcast@seriousprivacy.eu. Rate and Review us! From Season 6, our episodes are edited by Fey O'Brien. Our intro and exit music is Channel Intro 24 by Sascha Ende, licensed under CC BY 4.0, with the voiceover by Tim Foley.
When AI systems fail spectacularly, who pays the price? Part two of our conversation with global tech lawyer Gayle Gorvett tackles the million-dollar question every business leader is afraid to ask. With federal AI regulation potentially paused for a decade while technology races ahead at breakneck speed, companies are left creating their own rules in an accountability vacuum. Gayle reveals why waiting for government guidance could be a costly mistake and how smart businesses are turning governance policies into competitive advantages. From the EU AI Act's complexity challenges to the state-by-state regulatory patchwork, this customer success playbook episode exposes the legal landmines hiding in your AI implementation and shows you how to navigate them before they explode. Detailed Analysis: The accountability crisis in AI represents one of the most pressing challenges facing modern businesses, yet most organizations remain dangerously unprepared. Gayle Gorvett's revelation about the federal government's proposed 10-year pause on state AI laws while crafting comprehensive regulation highlights a sobering reality: businesses must become their own regulatory bodies or risk operating in a legal minefield. The concept of "private regulation" that Gayle introduces becomes particularly relevant for customer success teams managing AI-powered interactions. When your chatbots handle customer complaints, your predictive models influence renewal decisions, or your recommendation engines shape customer experiences, the liability implications extend far beyond technical malfunctions. Every AI decision becomes a potential point of legal exposure, making governance frameworks essential risk management tools rather than optional compliance exercises. Perhaps most intriguingly, Gayle's perspective on governance policies as competitive differentiators challenges the common view of compliance as a business burden. 
In the customer success playbook framework, transparency becomes a trust-building mechanism that strengthens customer relationships rather than merely checking regulatory boxes. Companies that proactively communicate their AI governance practices position themselves as trustworthy partners in an industry where trust remains scarce. The legal profession's response to AI, requiring disclosure to clients and technical proficiency from practitioners, offers a compelling model for other industries. This approach acknowledges that AI literacy isn't just a technical requirement but a professional responsibility. For customer success leaders, this translates into a dual mandate: understanding AI capabilities enough to leverage them effectively while maintaining enough oversight to protect customer interests. The EU AI Act's implementation challenges that Gayle describes reveal the complexity of regulating rapidly evolving technology. Even comprehensive regulatory frameworks struggle to keep pace with innovation, reinforcing the importance of internal governance structures that can adapt quickly to new AI capabilities and emerging risks. This agility becomes particularly crucial for customer-facing teams, who often serve as the first line of defense. Please Like, Comment, Share and Subscribe. You can also find the CS Playbook Podcast: YouTube - @CustomerSuccessPlaybookPodcast Twitter - @CS_Playbook You can find Kevin at: Metzgerbusiness.com - Kevin's personal website Kevin Metzger on LinkedIn. You can find Roman at: Roman Trebon on LinkedIn.
In episode 13, Maik Klotz and Sascha Dewald talk about the AI plans of Klarna and Revolut and recap BaFinTech 25. There was plenty going on beyond the fintech scene as well: the EU's AI Act is stirring tempers, OpenAI is poaching talent from Tesla and Meta, and of all companies, a German defense startup wants to go big with AI.
AI is racing ahead, but for industries like life sciences, the stakes are higher and the rules more complex. In this episode, recorded just before the July heatwave hit its peak, I spoke with Chris Moore, President of Europe at Veeva Systems, from his impressively climate-controlled garden office. We covered everything from the trajectory of agentic AI to the practicalities of embedding intelligence in highly regulated pharma workflows, and how Veeva is quietly but confidently positioning itself to deliver where others are still making announcements. Chris brings a unique perspective shaped by a career that spans ICI Pharmaceuticals, PwC, IBM, and EY. That journey taught him how often the industry was forced to rebuild the same tech infrastructure again and again, until Veeva came along. He shares how Veeva's decision to build a life sciences-specific cloud platform from the ground up has enabled a deeper, more compliant integration of AI. We explored what makes Veeva AI different, from the CRM bot that handles compliant free text to MLR agents that support content review and approval. Chris explains how Veeva's AI agents inherit the context and controls of their applications, making them far more than chat wrappers or automation tools. They are embedded directly into workflows, helping companies stay compliant while reducing friction and saving time. And perhaps more importantly, he makes a strong case for why the EU AI Act isn't a barrier. It's a validation. From auto-summarising regulatory documents to pulling metadata from health authority correspondence, the real-world examples Chris offers show how Veeva AI will reduce repetitive work while ensuring integrity at every step. He also shares how Veeva is preparing for a future where companies may want to bring their own LLMs or even run different ones by geography or task. Their flexible, harness-based approach is designed to support exactly that. 
Looking ahead to the product's first release in December, Chris outlines how Veeva is working hand-in-hand with customers to ensure readiness and reliability from day one. We also touch on the broader mission: using AI not as a shiny add-on, but as a tool to accelerate drug development, reach patients faster, and relieve the pressure on already overstretched specialist teams. Chris closes with a dose of humanity, offering a book and song that both reflect Veeva's mindset, embracing disruption while staying grounded. This one is for anyone curious about how real, applied AI is unfolding inside one of the world's most important sectors, and what it means for the future of medicine.
Last week, following months of negotiation and just weeks before the first legal deadlines under the EU AI Act take effect, the European Commission published the final Code of Practice on General-Purpose AI. The Code is voluntary and intended to help companies demonstrate compliance with the AI Act. It sets out detailed expectations around transparency, copyright, and measures to mitigate systemic risks. Signatories will need to publish summaries of training data, avoid unauthorized use of copyrighted content, and establish internal frameworks to monitor risks. Companies that sign on will see a “reduced administrative burden” and greater legal clarity, the Commission said. At the same time, both European and American tech companies have raised concerns about the AI Act's implementation timeline, with some calling to “stop the clock” on the AI Act's rollout. To learn more, Tech Policy Press associate editor Ramsha Jahangir spoke to Luca Bertuzzi, senior AI correspondent at MLex, to unpack the final Code of Practice on GPAI, why it matters, and how it fits into the broader rollout of the AI Act.
In this episode, I talk with Max Beckmann, security and AI consultant at mindsquare, about challenges, legal requirements, and practical approaches to handling artificial intelligence in companies. At the heart of the conversation: how can the use of AI be made both sensible and secure, particularly in light of current regulations such as the EU AI Act?
In this week's episode, Laura and Kevin sit down with Evan J. Schwartz, Chief Innovation Officer at AMCS Group, to explore where AI is actually making a difference and where it's doing real harm. From logistics and sustainability to law enforcement and digital identity, we dig into how AI is being used (and misused) in ways that affect millions of lives. We talk about a real-world case Evan worked on involving predictive analytics in law enforcement, and the dangers of trusting databases more than people. If someone hacks your digital footprint or plants fake records, how do you prove you're not the person your data says you are? We dive into the Karen Read case, the ethics of “precrime” models like in Minority Report, and a story where AI helped thieves trick a bank into wiring $40 million. The common thread? We've put a lot of faith in data... sometimes more than it deserves. With the EU AI Act now passed and other countries tightening regulation, Evan offers advice on how U.S.-based companies should prepare for a future where AI governance isn't optional. He also breaks down “dark AI” and whether we're getting close to machines making life-altering decisions without humans in the loop. Whether you're in tech, law, policy, or just trying to understand how AI might impact your own rights and identity, this conversation pulls back the curtain on how fast things are moving and what we might be missing. Evan J. Schwartz brings over 35 years of experience in enterprise tech and digital transformation. At AMCS Group, he leads innovation efforts focused on AI, data science, and sustainability in the logistics and resource recovery industries. He's held executive roles in operations, architecture, and M&A, and also teaches graduate courses in AI, cybersecurity, and project management. Evan serves on the Forbes Tech Council and advises at Jacksonville University. He's also the author of People, Places, and Things, an Amazon best-seller on ERP implementation. 
His work blends technical depth with a sharp focus on ethics and real-world impact.
In this episode of the Risk Management Show, we dive into the critical topic of "AI Regulations: What Risk Managers Must Do Now." Join host Boris Agranovich and special guest Caspar Bullock, Director of Strategy at Axiom GRC, as they tackle the challenges and opportunities businesses face in navigating risk management, cybersecurity, and sustainability in today's rapidly evolving landscape. We discuss the growing importance of monitoring AI developments, preparing for upcoming regulations like the EU AI Act, and setting clear internal policies to meet customer demands and legal requirements. Caspar shares his expert perspective on building organizational resilience, the ROI of compliance programs, and addressing third-party risks in a complex supply chain environment. Whether you're a Chief Risk Officer, a compliance professional, or a business leader, this conversation offers actionable insights to help you stay ahead of emerging trends. If you want to be our guest or suggest a guest, send your email to info@globalriskconsult.com with the subject line “Podcast Guest.”
From February 16, 2024: The EU has finally agreed to its AI Act. Despite the political agreement reached in December 2023, some nations maintained reservations about the text, making it uncertain whether there was a final agreement or not. They recently reached an agreement on the technical text, moving the process closer to a successful conclusion. The challenge now will be effective implementation. To discuss the act and its implications, Lawfare Fellow in Technology Policy and Law Eugenia Lostri sat down with Itsiq Benizri, counsel at the law firm WilmerHale Brussels. They discussed how domestic politics shaped the final text, how governments and businesses can best prepare for new requirements, and whether the European act will set the international roadmap for AI regulation. To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute. Support this show: http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.
In TechSurge's Season 1 Finale episode, we explore an important debate: should AI development be open source or closed? AI technology leader and UN Senior Fellow Senthil Kumar joins Michael Marks for a deep dive into one of the most consequential debates in artificial intelligence, exploring the fundamental tensions between democratizing AI access and maintaining safety controls. Sparked by DeepSeek's recent model release that delivered GPT-4 class performance at a fraction of the cost and compute, the discussion spans the economics of AI development, trust and transparency concerns, regulatory approaches across different countries, and the unique opportunities AI presents for developing nations. From Meta's shift from closed to open and OpenAI's evolution from open to closed, to practical examples of guardrails and the geopolitical implications of AI governance, this episode provides essential insights into how the future of artificial intelligence will be shaped not just by technological breakthroughs, but by the choices we make as a global community. If you enjoy this episode, please subscribe and leave us a review on your favorite podcast platform. Sign up for our newsletter at techsurgepodcast.com for updates on upcoming TechSurge Live Summits and news about Season 2 of the TechSurge podcast. Thanks for listening! Links: Slate.ai - AI-powered construction technology: https://slate.ai/ World Economic Forum on open-source AI: https://www.weforum.org/stories/2025/02/open-source-ai-innovation-deepseek/ EU AI Act overview: https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence
The talent-poaching battle for AI researchers continues. Apple is negotiating with OpenAI and Anthropic to upgrade Siri with third-party LLMs, while also planning a low-cost A18 MacBook Air and several lightweight AR glasses. Shein stumbles ahead of its IPO: London fell through, and now comes a confidential Hong Kong filing amid slowing growth. Berlin's data protection authority wants to ban DeepSeek from German app stores. Yupp AI launches as a meta search engine for LLMs: one prompt, two answers, and users pick the better model. OpenAI is switching to Google TPUs for inference: cheaper, faster, more independent. Roger Federer crosses the billion mark thanks to his stake in On. WhatsApp Business will soon bill per message and monetize AI bots. Tesla loses its head of production, and X hires product tinkerer Nikita Bier. Trump's team plans 47 ATF deregulations, and the TikTok ban is postponed once again. Amazon now employs over one million robots, startups are demanding a moratorium on the EU AI Act, and Microsoft's diagnostic AI beats doctors on rare cases. Support our podcast and discover our advertising partners' offers at doppelgaenger.io/werbung. Thank you! 
Philipp Glöckler and Philipp Klöckner talk today about: (00:00:00) Meta poaches OpenAI researchers (00:07:55) Apple seeks an LLM partner (00:14:00) Shein IPO wobbles (00:23:45) Berlin data protection authority wants DeepSeek out of app stores (00:25:33) Yupp AI side-by-side comparison of LLM answers (00:31:20) OpenAI uses Google TPUs for inference (00:43:20) Roger Federer becomes a billionaire thanks to his On stake (00:48:00) WhatsApp Business: switch to a pay-per-message model (00:49:30) Tesla loses its head of production; Nikita Bier new X product chief (00:55:00) Trump (00:56:00) TikTok ban postponed again (00:58:00) Amazon reports 1 million robots (01:00:05) Good news of the day Shownotes: OpenAI leadership responds to Meta offers – wired.com Zuckerberg announces Meta 'superintelligence' project – bloomberg.com Apple considers Anthropic or OpenAI for Siri – bloomberg.com Apple working on 7 head-mounted displays – 9to5mac.com Apple to release a cheaper MacBook with an iPhone processor – 9to5mac.com Shein plans confidential Hong Kong listing – reuters.com US buyers avoid Shein and Temu after Trump closes tax loophole – ft.com DeepSeek faces ban from German app stores of Apple and Google – reuters.com Google convinces OpenAI to use TPU chips – theinformation.com Roger Federer's long-term deals make him a tennis billionaire – bloomberg.com 500+ AI models compared – x.com WhatsApp Business Platform pricing | WhatsApp API pricing – business.whatsapp.com Elon Musk confidant Omead Afshar leaves Tesla – bloomberg.com Musk's X hires entrepreneur Nikita Bier as product chief – bloomberg.com DOGE joins ATF to reduce gun regulations – washingtonpost.com TikTok in the US: Trump finds a buyer – zeit.de Amazon close to deploying more robots than humans in warehouses – wsj.com European startups and VCs call on EU to pause AI Act – sifted.eu Microsoft: new AI system diagnoses more accurately than doctors – wired.com
Guest post by Ronnie Hamilton, Pre-Sales Director, Climb Channel Solutions Ireland If compliance feels overwhelming right now, you're not imagining it. New regulations covering cybersecurity, data protection, AI, and more are emerging - from the latest PCI DSS updates to the EU AI Act. As a result, compliance is actively shaping the IT channel, influencing how we do business, how we anticipate industry shifts, and how we support our partners and customers with the right solutions to stay ahead. Navigating compliance in 2025 means being aligned with regulatory requirements, but it's a balancing act because at the end of the day, we all still have a job to do: delivering the right solutions, tailoring services to customer needs, and being a trusted partner. With new regulations coming into force and the mounting challenge of understanding cybersecurity, AI governance, and data integrity requirements, it's more important than ever to stay ahead. On the other hand, those who stay agile and deliver solutions that meet regulatory demands have an opportunity to turn the compliance headache into a competitive advantage. The Agility Advantage of Smaller Partners Smaller channel partners face growing pressure from complex customer environments, resource constraints, and fierce competition for skilled talent. However, their agility provides a unique advantage. Unlike larger enterprises, they can quickly adapt to evolving customer needs, position themselves as trusted advisors, and identify emerging vendors - particularly those offering AI-powered and automated solutions. AI adoption plays a critical role in maintaining a competitive edge. By embracing AI, smaller partners can deliver exceptional managed services with fewer resources, keeping costs low and service quality high. This approach ensures they remain competitive in a crowded market. 
Tackling the EU NIS2 Directive The EU NIS2 Directive reinforces the need for robust cybersecurity measures, urging businesses to adopt a more comprehensive approach to risk management. Essential security practices such as multi-factor authentication, regular cybersecurity training, incident response planning, and strong supply chain security are no longer optional but essential. A key principle underlying the directive is the Identify, Detect, Protect, Respond, and Recover framework. While most organisations focus heavily on detection and protection, recovery is sometimes a weak link. A lengthy recovery period following a breach can be as harmful as failing to detect the threat in the first place. The integration of automation into threat detection and response processes is becoming more important for meeting compliance requirements. The EU AI Act: Compliance Meets Innovation The EU AI Act introduces new obligations for organisations deploying AI solutions - emphasising transparency, accountability, and risk management throughout the AI lifecycle. These requirements extend to all aspects of AI implementation, from data sourcing and model training to real-world deployment. To address compliance risks, managed service providers may consider introducing AI governance roles, such as "AI Managers as a Service." These specialists help organisations navigate AI regulations without requiring full-time in-house expertise. While compliance with AI regulations may introduce additional costs, the long-term benefits - such as enhanced customer trust, clear documentation, and ethical AI practices - can significantly outweigh the initial investment. Rather than viewing compliance as a regulatory burden, partners should position it as an opportunity to strengthen customer relationships and stand out. Automation and AI: Key Enablers of Compliance AI and automation are proving indispensable for managing compliance complexity. 
From automating repetitive processes to monitoring security events and ensuring adherence to evolving standards, these technologies help organisations streamline compliance efforts while mini...
In this episode of TechTalk, we explore how financial services are steering toward AI — covering emerging regulations like the EU AI Act, trust-building, collaboration, and the shift from experimentation to real-world applications. To guide us through this evolving landscape, we're joined by Ulf Herbig, Chairman of the EFAMA AI Task Force and Chairman of ALFI's Digital Finance Working Group on Innovation and Technology; and Sébastien Schmitt, Partner in Regulatory Risk and Compliance at PwC Luxembourg.
In the rapidly accelerating world of Artificial Intelligence, the pace of innovation can feel overwhelming. From groundbreaking advancements to the ongoing debate about governance and ethical implications, AI is not just a tool; it's a transformative force. As we race towards superintelligence and navigate increasingly sophisticated models, how do we ensure that human values remain at the core of this technological revolution? How do we, especially in the trust-based nonprofit sector, lead with intentionality and ensure AI serves humanity rather than superseding it? In this episode, Nathan and Scott dive into the relentless evolution of AI, highlighting Meta's staggering $15 billion investment in the race for superintelligence and the critical absence of robust regulation. They reflect on the essential shift from viewing AI adoption as a finite "destination" to embracing it as an ongoing "journey." Nathan shares insights on how AI amplifies human capabilities, particularly for those who are "marginally" good at certain skills, advocating for finding your "why" and offloading tasks AI can do better. Scott discusses his recent AI governance certification, underscoring the complexities and lack of "meat on the bone" in US regulations compared to the EU AI Act. The conversation also explores the concept of AI agents, offering practical tips for leveraging them, even for those with no coding experience. They conclude with a powerful reminder: AI is a mirror reflecting our values, and the nonprofit sector has a vital role in shaping its ethical future. 
HIGHLIGHTS [01:15] AI Transformation: A Journey, Not a Destination [03:00] If AI Can Do It Better: Finding Your Human "Why" [04:05] AI Outperforming Human Capabilities [05:00] Meta's $15 Billion Investment in Super Intelligence [07:16] The Manipulative Nature of AI and the "Arms Race" for Super Intelligence [09:27] The Importance and Challenges of AI Governance and Regulation [14:50] AI as a Compass, Not a Silver Bullet [16:39] Beware the AI Finish Line Illusion [18:12] Small Steps, Sustained Momentum: The "Baby Steps" Approach to AI [26:48] Tip of the Week: The Rise of AI Agents and Practical Use Cases [32:24] The Power of Curiosity in AI Exploration RESOURCES Relay.app: relay.app Zapier: zapier.com Make.com: make.com N.io: n.io Connect with Nathan and Scott: LinkedIn (Nathan): linkedin.com/in/nathanchappell/ LinkedIn (Scott): linkedin.com/in/scott-rosenkrans Website: fundraising.ai/
Today's guest is Dr. Ankur Sharma, Head of Medical Affairs for Medical Devices and Digital Radiology at Bayer. Dr. Sharma joins Emerj Editorial Director Matthew DeMello to explore the complex intersection of AI, medical devices, and data governance in healthcare. Dr. Sharma outlines the key challenges that healthcare institutions face in adopting AI tools, including data privacy, system interoperability, and regulatory uncertainty. He also clarifies the distinction between regulated predictive models and unregulated generative tools, as well as how each fits into current clinical workflows. The conversation explores the evolving roles of the FDA and EU AI Act, the potential for AI to bridge clinical research and patient care, and the need for new reimbursement models to support digital innovation. This episode is sponsored by Medable. Learn how brands work with Emerj and other Emerj Media options at emerj.com/ad1. Want to share your AI adoption story with executive peers? Click emerj.com/expert2 for more information and to be a potential future guest on the ‘AI in Business' podcast!
A team of researchers at Trinity College Dublin has received €500,000 in funding to develop an AI-enabled platform to help teachers create assessments and provide formative feedback to learners. The project is called Diotima and is supported by The Learnovate Centre, a global research and innovation centre in learning technology in Trinity College Dublin. Diotima began its partnership with Learnovate in February this year and is expected to spin out as a company in 2026. The €500,000 funding was granted under Enterprise Ireland's Commercialisation Fund, which supports third-level researchers to translate their research into innovative and commercially viable products, services and companies. Diotima supports teaching practice by using responsible AI to provide learners with feedback, leading to more and better assessments and improved learning outcomes for students, and a more manageable workload for teachers. The project was co-founded by Siobhan Ryan, a former secondary school teacher, biochemist and environmental scientist, and Jonathan Dempsey, an EdTech professional with both start-up and corporate experience. Associate Professor Ann Devitt, Head of the Trinity School of Education, and Carl Vogel, Professor of Computational Linguistics and Director of the Trinity Centre for Computing and Language Studies, are serving as co-principal investigators on the project. Diotima received the funding in February. Since then, the project leaders have established an education advisory group formed of representatives from post-primary and professional education organisations. The Enterprise Ireland funding has facilitated the hiring of two post-doctoral researchers. They are now leading AI research ahead of the launch of an initial version of the platform in September 2025. Diotima aims to conduct two major trials of the platform as they also seek investment. Co-founder Siobhan Ryan is Diotima's Learning Lead. 
After a 12-year career in the brewing industry with Diageo, Siobhan re-trained as a secondary school teacher before leaving the profession to develop the business case for a formative assessment and feedback platform. Her experience in the classroom made her realise that she could have a greater impact by leveraging AI to create a platform to support teachers in a safe, transparent, and empowering way. Her fellow co-founder Jonathan Dempsey is Commercial Lead at Diotima. He had been CEO of the Enterprise Ireland-backed EdTech firm Digitary, which is now part of multinational Instructure Inc. He held the role of Director of UK and Ireland for US education system provider Ellucian and Head of Education and Education Platforms for Europe with Indian multinational TCS. Jonathan has a wealth of experience at bringing education technologies to market. Learnovate Centre Director Nessa McEniff says: "We are delighted to have collaborated with the Diotima team to secure €500,000 investment from Enterprise Ireland's Commercialisation Fund. Diotima promises to develop into a revolutionary platform for learners in secondary schools and professional education organisations, delivering formative feedback and better outcomes overall. We look forward to supporting them further as they continue to develop the platform in the months ahead." Enterprise Ireland Head of Research, Innovation and Infrastructure Marina Donohoe says: "Enterprise Ireland is delighted to support Diotima under the Commercialisation Fund. We look forward to seeing them continue in their mission to transform teaching practice through AI enabled assessment and feedback. We believe that the combination of excellence in AI and in education from Trinity College, expertise in education technology from the Learnovate Centre and focus on compliance with the EU AI Act and other regulations will see the Diotima team make a global impact". 
Diotima Learning Lead and co-founder Siobhan Ryan says: "We're delighted to have received such a significant award from the Enterprise Ireland C...
In episode 241 of The Data Diva Talks Privacy Podcast, host Debbie Reynolds, “The Data Diva,” welcomes Phillip Mason, Global Privacy Program Manager at Corning, Inc. Phillip joins Debbie to discuss the complicated interplay between AI advancement, regulatory frameworks, and the ethical imperative of human judgment. Drawing from his diverse background in accounting, law, and privacy, Phillip offers an informed and multidimensional perspective on how businesses navigate emerging risks. He critiques overbroad AI legislation like the EU AI Act, which he believes may have unintended consequences for innovation, particularly among smaller firms lacking legal and compliance resources. Debbie and Phillip dive into examples of poorly executed AI rollouts that sparked public backlash, such as LinkedIn's data harvesting practices and Microsoft's Recall feature, emphasizing the importance of transparency and foresight. Phillip also unpacks the difference between having a “human in the loop” and placing real ethical judgment into practice. They discuss how organizations can build a culture of trust and accountability where data science and compliance work harmoniously. The conversation ultimately underscores that as algorithms get smarter, human oversight must also evolve, with thoughtful governance, interdisciplinary collaboration, and values-driven leadership.
Expleo, the global technology, engineering and consulting service provider, today launches its Business Transformation Index 2025. To mark the launch, Expleo is revealing new data showing that 70% of Ireland's largest enterprises believe AI's impact on workforces is so profound that it should be managed like an employee to avoid conflicts with company culture and people. The sixth edition of Expleo's award-winning Business Transformation Index (BTI) assesses the attitudes and sentiments of 200 IT and business decision-makers in Ireland, in enterprises with 250 employees or more. The report examines themes including digital transformation, geopolitics, AI and DEI and provides strategic recommendations for organisations to overcome challenges relating to these. BTI 2025 found that while 98% of large enterprises are using AI in some form, 67% believe their organisation can't effectively use AI because their data is too disorganised. As a result, just 30% have integrated and scaled AI models into their systems. Almost a quarter (23%) admitted that they are struggling to find use cases for AI beyond the use of off-the-shelf large language models (LLMs). Despite remaining in the early stages of AI deployment, senior decision-makers are already making fundamental changes to the skills makeup of their teams due to AI's influence and its capabilities. Expleo's research found that 72% of organisations have made changes to the criteria they seek from job candidates because AI can now take on some tasks, while its application requires expertise in other areas. Meanwhile, more than two-thirds (68%) of enterprises who are deploying AI have stopped hiring for certain roles entirely because AI can handle the requirements. The research shows that as AI absorbs tasks in some areas, it is offering workforce opportunities in others. 
While 30% of enterprise leaders cite workforce displacement as one of their greatest fears resulting from AI, 72% report that they will pay more for team members who have AI-specific skills. The colliding worlds of humans and machines are further revealed in BTI 2025 as 78% of organisations say the correct and ethical use of AI is now covered in their employment contracts. However, the BTI indicates that employers themselves may not be living up to their side of the bargain, as 25% of business and IT leaders conceded a possibility that the AI used for hiring, retention or employee progression in their organisation could be biased. The uncertainty about the objectivity of their AI could explain why 25% of decision-makers are also not confident that their organisation is compliant with the EU AI Act. The Act, it seems, is a bone of contention for many as 76% believe the EU AI Act will hinder adoption of AI in their organisation. Phil Codd, Managing Director, Expleo Ireland, said: "The pace of change that we are seeing from AI is like nothing we have seen before - not even the Industrial Revolution unfolded so quickly or indiscriminately in terms of the industries and people it impacted. And, the workforce's relationship with AI is complicated - on the one hand, they are turning to AI to make their jobs more manageable and to reduce stress, but at the same time, they worry that its broad deployment across their organisation could impinge on their work and therefore their value as an employee. "Business leaders are entering untrodden ground as they try to solve how AI can work for them - both practically and ethically - and without causing clashes within teams. There is no question that there is a new digital colleague joining Irish workplaces and it will define the next chapter of our working lives and economy. However, the success of this seemingly autonomous technology will always depend on the humans and data that back it up. 
"At Expleo, we work with enterprises to ensure they are reaping the benefits of AI by looking holistically at their people, processes and data. AI requires, and will bring, significant changes...
As AI becomes more embedded in the daily lives of legal departments, the call for robust regulatory frameworks is louder than ever. In the 28th episode of Legal Leaders Exchange, we sit down with experts Ken Crutchfield and Jennifer McIver to explore the evolving landscape of AI regulations—from the EU AI Act to global compliance trends. We unpack what these changes mean for legal professionals, compliance officers, and tech leaders, and how organizations can proactively prepare for the future of AI governance.
HR consultant Daniel Strode discusses AI's impact on human resources, highlighting recruitment and data analytics as prime areas for adoption. He introduces his "5P model" emphasizing policy/governance and people/culture transformation as critical success factors. While AI adoption remains slow—only 25% of adults regularly use tools like ChatGPT—organizations are unknowingly integrating AI through software updates. Strode advocates for proper governance policies ahead of regulations like the EU AI Act, positioning AI as a collaborative tool enhancing rather than replacing human capabilities.

TAKEAWAYS
5P Framework: Success requires addressing process enhancement, personalization, predictive insights, policy/governance, and people/culture transformation.
Governance First: Establish AI ethics policies, bias auditing, and compliance training before implementation, especially with upcoming EU AI Act regulations.
Human-AI Partnership: Use AI for manual processes while focusing HR professionals on strategic work like employee experience and change management.
Packaging MLOps Tech Neatly for Engineers and Non-engineers // MLOps Podcast #322 with Jukka Remes, Senior Lecturer (SW dev & AI), AI Architect at Haaga-Helia UAS, Founder & CTO at 8wave AI.
Join the Community: https://go.mlops.community/YTJoinIn
Get the newsletter: https://go.mlops.community/YTNewsletter

// Abstract
AI is already complex—adding the need for deep engineering expertise to use MLOps tools only makes it harder, especially for SMEs and research teams with limited resources. Yet, good MLOps is essential for managing experiments, sharing GPU compute, tracking models, and meeting AI regulations. While cloud providers offer MLOps tools, many organizations need flexible, open-source setups that work anywhere—from laptops to supercomputers. Shared setups can boost collaboration, productivity, and compute efficiency. In this session, Jukka introduces an open-source MLOps platform from Silo AI, now packaged for easy deployment across environments. With Git-based workflows and CI/CD automation, users can focus on building models while the platform handles the MLOps.

// Bio
Founder & CTO, 8wave AI | Senior Lecturer, Haaga-Helia University of Applied Sciences
Jukka Remes has 28+ years of experience in software, machine learning, and infrastructure. Starting with SW dev in the late 1990s and analytics pipelines of fMRI research in the early 2000s, he's worked across deep learning (Nokia Technologies), GPU and cloud infrastructure (IBM), and AI consulting (Silo AI), where he also led MLOps platform development. Now a senior lecturer at Haaga-Helia, Jukka continues evolving that open-source MLOps platform with partners like the University of Helsinki.
He leads R&D on GenAI and AI-enabled software, and is the founder of 8wave AI, which develops AI Business Operations software for next-gen AI enablement, including regulatory compliance of AI.

// Related Links
Open-source-based MLOps k8s platform setup originally developed by Jukka's team at Silo AI - free for any use and installable in any environment from laptops to supercomputing: https://github.com/OSS-MLOPS-PLATFORM/oss-mlops-platform
Jukka's new company: https://8wave.ai

~~~~~~~~ ✌️ Connect With Us ✌️ ~~~~~~~
Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
Join our Slack community: https://go.mlops.community/slack
Follow us on X/Twitter [@mlopscommunity](https://x.com/mlopscommunity) or [LinkedIn](https://go.mlops.community/linkedin)
Sign up for the next meetup: https://go.mlops.community/register
MLOps Swag/Merch: https://shop.mlops.community/
Connect with Demetrios on LinkedIn: /dpbrinkm
Connect with Jukka on LinkedIn: /jukka-remes

Timestamps:
[00:00] Jukka's preferred coffee
[00:39] Open-Source Platform Benefits
[01:56] Silo MLOps Platform Explanation
[05:18] AI Model Production Processes
[10:42] AI Platform Use Cases
[16:54] Reproducibility in Research Models
[26:51] Pipeline setup automation
[33:26] MLOps Adoption Journey
[38:31] EU AI Act and Open Source
[41:38] MLOps and 8wave AI
[45:46] Optimizing Cross-Stakeholder Collaboration
[52:15] Open Source ML Platform
[55:06] Wrap up
As AI systems approach and potentially surpass human cognitive benchmarks, how do we design hybrid intelligence frameworks that preserve human agency while leveraging artificial cognitive enhancements? In this exploration of human-AI convergence, anthropologist and organizational learning expert Dr. Lollie Mancey presents a framework for the "cognitive revolution," the fourth transformational shift in human civilization following the agricultural, industrial, and digital eras. Drawing from Berkeley's research on the science of awe, Vatican AI policy frameworks, and indigenous knowledge systems, Mancey analyzes how current AI capabilities (GPT-4 operating at Einstein-level IQ) are fundamentally reshaping cognitive labor and social structures. She examines the EU AI Act's predictive policing clauses, the implications of quantum computing, and the emerging grief tech sector as indicators of broader systemic transformation. Mancey identifies three meta-cognitive capabilities essential for human-AI collaboration: critical information interrogation, systematic curiosity protocols, and epistemic skepticism frameworks. Her research on AI companion platforms reveals neurological patterns like addiction pathways. At the same time, her fieldwork with Balinese communities demonstrates alternative models of technological integration based on reciprocal participation rather than extractive optimization.
This conversation provides actionable intelligence for organizations navigating the transition from human-centric to hybrid cognitive systems.

Key Research Insights
• Cognitive Revolution Metrics: Compound technological acceleration outpaces regulatory adaptation, with education systems lagging significantly, requiring new frameworks for cognitive load management and decision architecture in research environments
• Einstein IQ Parity Achieved: GPT-4 operates at Einstein-level intelligence yet lacks breakthrough innovation capabilities, highlighting critical distinctions between pattern recognition and creative synthesis for R&D resource allocation
• Neurological Dependency Patterns: AI companion platforms demonstrate "catnip-like" effects with users exhibiting hyper-fixation behaviors and difficulty with "digital divorce"—profound implications for workforce cognitive resilience
• Epistemic Security Crisis: Deep fakes eliminated content authentication while AI hallucinations embed systemic biases from internet-scale training data, requiring new verification protocols and decision-making frameworks
• Alternative Integration Architecture: Balinese reciprocal participation models versus Western extractive paradigms offer scalable approaches for sustainable innovation ecosystems and human-technology collaboration

#EcosystemicFutures #CognitiveRevolution #HybridIntelligence #NeuroCognition #QuantumComputing #SociotechnicalSystems #HumanAugmentation #SystemsThinking #FutureOfScience

Guest: Lorraine Mancey, Programme Director at UCD Innovation Academy
Host: Marco Annunziata, Co-Founder, Annunziata Desai Partners
Series Hosts:
Vikram Shyam, Lead Futurist, NASA Glenn Research Center
Dyan Finkhousen, Founder & CEO, Shoshin Works

Ecosystemic Futures is provided by NASA - National Aeronautics and Space Administration Convergent Aeronautics Solutions Project in collaboration with Shoshin Works.
In this installment of our Workplace Strategies Watercooler 2025 podcast series, Jenn Betts (shareholder, Pittsburgh), Simon McMenemy (partner, London), and Danielle Ochs (shareholder, San Francisco) discuss the evolving landscape of artificial intelligence (AI) in the workplace and provide an update on the global regulatory frameworks governing AI use. Simon, who is co-chair of Ogletree's Cybersecurity and Privacy Practice Group, breaks down the four levels of risk and their associated regulations specified in the EU AI Act, which will take effect in August 2026, and the need for employers to prepare now for the Act's stringent regulations and steep penalties for noncompliance. Jenn and Danielle, who are co-chairs of the Technology Practice Group, discuss the Trump administration's focus on innovation with limited regulation, as well as the likelihood of state-level regulation.
On this week's Now of Work Digital Meetup, Ritu Mohanka joined Jess Von Bank and Jason Averbook to dig into how AI can actually reduce bias in hiring and why we should be moving away from a "matching" model. Ritu shares how VONQ's shift to a scoring system, evaluating candidates across 15 transparent, job-relevant criteria, is enabling skills-based hiring, improving candidate experience, and aligning with the EU AI Act's push for explainable AI.
In this episode, Kevin Werbach interviews Kelly Trindel, Head of Responsible AI at Workday. Although Trindel's team is housed within Workday's legal department, it operates as a multidisciplinary group, bringing together legal, policy, data science, and product expertise. This structure helps ensure that responsible AI practices are integrated not just at the compliance level but throughout product development and deployment. She describes formal mechanisms—such as model review boards and cross-functional risk assessments—that embed AI governance into product workflows across the company. The conversation covers how Workday evaluates model risks based on context and potential human impact, especially in sensitive areas like hiring and performance evaluation. Trindel outlines how the company conducts bias testing, maintains documentation, and uses third-party audits to support transparency and trustworthiness. She also discusses how Workday is preparing for emerging regulatory frameworks, including the EU AI Act, and how internal governance systems are designed to be flexible in the face of evolving policy and technological change. Other topics include communicating AI risks to customers, sustaining post-deployment oversight, and building trust through accountability infrastructure. Dr. Kelly Trindel directs Workday's AI governance program. As a pioneer in the responsible AI movement, Kelly has significantly contributed to the field, including testifying before the U.S. Equal Employment Opportunity Commission (EEOC) and later leading an EEOC task force on ethical AI—one of the government's first. With more than 15 years of experience in quantitative science, civil rights, public policy, and AI ethics, Kelly's influence and commitment to responsible AI are instrumental in driving the industry forward and fostering AI solutions that have a positive societal impact. 
Transcript
Responsible AI: Empowering Innovation with Integrity
Putting Responsible AI into Action (video masterclass)
On this episode of the Scouting For Growth podcast, Sabine meets Areiel Wolanow, the managing director of Finserv Experts, who discusses his journey from IBM to founding FinServ Experts, emphasising the importance of focusing on business models enabled by technology rather than the technology itself. Areiel delves into the challenges and opportunities presented by artificial intelligence, responsible AI practices, and the implications of quantum computing for data security, highlighting the need for organisations to adapt their approaches to digital transformation and advocating for a migration strategy over traditional transformation methods.

KEY TAKEAWAYS
Emerging tech should be leveraged to create new business models rather than just re-engineering existing ones. Understanding the business implications of technology is crucial for delivering value.
When harnessing artificial intelligence, it's essential to identify the real underlying problems within an organisation, assess its maturity, and build self-awareness before applying maturity models and gap analyses.
The EU AI Act serves as a comprehensive guideline for responsible AI use, offering risk categories and controls that can benefit companies outside the EU by providing a framework for ethical AI practices without the burden of compliance.
Organisations should prepare for the future of quantum computing by ensuring their data is protected against potential vulnerabilities. This involves adopting quantum-resilient algorithms and planning for the transition well in advance.
Leaders should place significant responsibility on younger team members who are more familiar with emerging technologies. Providing them with autonomy and support can lead to innovative solutions and successful business outcomes.

BEST MOMENTS
'We focus not on the technology itself, but on the business models the tech enables.'
'The first thing you have to do... is to say, OK, is the proximate cause the real problem?'
'The best AI regulations out there is the EU AI Act... it actually benefits AI companies outside the EU more than it benefits within.'
'Digital transformations have two things in common. One is they're expensive, and two is they always fail.'

ABOUT THE GUEST
Areiel Wolanow is the managing director of Finserv Experts. He is an experienced business leader with over 25 years of experience in business transformation solutioning, sales, and execution. He served as one of IBM's key thought leaders in blockchain, machine learning, and financial inclusion. Areiel has deep experience leading large, globally distributed teams; he has led programs of over 100 people through the full delivery life cycle, and has managed budgets in the tens of millions of dollars. In addition to his delivery experience, Areiel also serves as a senior advisor on blockchain, machine learning, and technology adoption; he has worked with central banks and financial regulators around the world, and is currently serving as the insurance industry advisor for the UK Parliament's working group on blockchain. LinkedIn

ABOUT THE HOST
Sabine is a corporate strategist turned entrepreneur. She is the CEO and Managing Partner of Alchemy Crew, a venture lab that accelerates the curation, validation, and commercialization of new tech business models. Sabine is renowned within the insurance sector for building some of the best-known tech startup accelerators around the world, working with over 30 corporate insurers and accelerating over 100 startup ventures. Sabine is the co-editor of the bestseller The INSURTECH Book, a top 50 Women in Tech, a FinTech and InsurTech influencer, and an investor and multi-award winner.

Twitter LinkedIn Instagram Facebook TikTok Email Website
At Team '25 in Anaheim, I had the unique opportunity to sit down with Stan Shepherd, General Counsel at Atlassian, for a conversation that pulled back the curtain on how legal and technology are intersecting in the age of AI. Stan's journey from journalism to law to shaping legal operations at one of the world's most forward-thinking companies is as fascinating as it is relevant. What emerged from our discussion is a clear signal that legal teams are no longer trailing behind innovation—they're often at the front of it. Stan shared how Atlassian's legal function achieved 85 percent daily usage of AI tools, including the company's in-house assistant, Rovo. This is remarkable when compared to the industry norm, where legal teams typically lag in AI adoption. Instead of resisting change, Stan's team leaned into it, focusing on automation for repetitive tasks while reserving high-value thinking for their legal experts. We explore Atlassian's responsible tech framework, their principles around transparency and accountability, and how these inform product development from day one. Stan also walked me through how Atlassian is navigating the emerging global regulatory landscape, from the EU AI Act to evolving compliance in the US. His insights on embedding legal counsel directly into product teams, rather than operating on the sidelines, reveal a model of collaboration that turns risk management into a growth enabler. For legal professionals, compliance leaders, and tech decision-makers wrestling with how to integrate AI responsibly, this episode offers a grounded, real-world blueprint. It's not just about mitigating risk—it's about building trust, preserving human judgment, and future-proofing your operations. If you're wondering what responsible AI adoption looks like at scale, you'll want to hear this one. So how are you preparing your legal and compliance strategy for the AI-powered workplace? Let's keep the conversation going.