Podcasts about the EU AI Act

  • 351 podcasts
  • 590 episodes
  • 36m average duration
  • 5 new episodes weekly
  • Latest episode: Aug 28, 2025

Popularity by year (chart): 2017–2024


Best podcasts about the EU AI Act

Latest podcast episodes about the EU AI Act

Tech Deciphered
68 – “Winning the AI Race”… America's AI Action Plan

Tech Deciphered

Play Episode Listen Later Aug 28, 2025 52:38


America's AI Action Plan, "Winning the AI Race," has just been announced. What is it all about? What are the implications? How will the rest of the world react? A deep dive into the announcement, the approaches of the EU and China, and the overall implications of these action plans.

Navigation:
Intro (01:34)
Context of the White House AI Summit
Pillar I – Accelerating AI Innovation
Pillar II – Building American AI Infrastructure
Pillar III – Leading in International AI Diplomacy & Security
Comparing Approaches – U.S. Action Plan vs. EU AI Act vs. China's Strategy
Implications and Synthesis
Conclusion

Our co-hosts:
Bertrand Schmitt, Entrepreneur in Residence at Red River West, co-founder of App Annie / Data.ai, business angel, advisor to startups and VC funds, @bschmitt
Nuno Goncalves Pedro, Investor, Managing Partner, Founder at Chamaeleon, @ngpedro

Our show: Tech DECIPHERED brings you the entrepreneur and investor views on Big Tech, VC and start-up news, opinion pieces and research. We decipher their meaning and add inside knowledge and context. Being nerds, we also discuss the latest gadgets and pop culture news.

Nuno G. Pedro: Welcome to episode 68 of Tech Deciphered. This episode will focus on America's AI Action Plan, "Winning the AI Race," which was announced a couple of weeks ago by President Trump at the White House. Today, we'll be discussing the pillars of this plan, from Pillar I, the acceleration of AI innovation, to Pillar II, the building of American AI infrastructure, to Pillar III, leading in international AI diplomacy and security.

We'll also further contextualise it, as well as compare the approaches of the US Action Plan with what we see from the EU and China strategies at this point in time. We'll finish with implications and synthesis. Bertrand, is this a watershed moment for the industry? Is this the moment we were all waiting for in terms of clarity for AI in the US?

Bertrand Schmitt: Yeah, that's a great question. I must say I'm quite excited. I'm not sure I can remember anything like it since John F. Kennedy announcing the race to the moon in the early '60s. It feels, as you say, like a watershed moment, because suddenly you can see that there is a grand vision, a grand plan, that AI is not just important but critical to the future success of America. It looks like the White House is getting all its ducks in a row to make it happen. There is, as in the '60s with JFK, a realisation that there is an adversary, a competitor, and you want to beat them in that race. Except this time it's not Russia, it's China. A lot of similarities, I would say.

Nuno G. Pedro: Yeah. It seems relatively comprehensive. Obviously, we'll deep dive into it today across a variety of elements, like regulation, investments, and the view on exports, imports and the rest of the world. So, relatively comprehensive from what we can see. Obviously, we don't know all the details. We know from the announcement that the plan identifies 90 federal policy actions across the three pillars. We'll see how these come into practice over the next few months and years.

To your point, it is a defining moment. It feels a little bit like the space race of the '60s, et cetera. It's probably warranted. We know that AI platforms, services and products are changing the world as we speak. It's pretty important to figure out what the US response to it is.

It's also interesting to note that we normally don't talk much about the US in terms of industrial policy. The US has a private sector that, in and of itself, stands up to the game, and in particular in tech and high tech, it normally fills the gaps introduced by big generational shifts in technology. But in this case, there does seem to be an industrial policy, and this plan sets the stage for that industrial policy and how it moves forward.

The Armen Show
451: The Human Touch in AI Co-Creation

The Armen Show

Play Episode Listen Later Aug 15, 2025


In this episode, Armen Shirvanian explores the intersection of artificial intelligence and creativity, discussing how to co-create with AI while maintaining the human touch. He delves into copyright issues surrounding AI-generated content, the implications of the EU AI Act, and the dual nature of AI as both a tool for enhancing creativity and a potential […]

No Brainer - An AI Podcast for Marketers
NB63 - Smart Resilience for the AI Age

No Brainer - An AI Podcast for Marketers

Play Episode Listen Later Aug 13, 2025 53:12


Dr. Rebekka Reinhard and Thomas Vasek, the team behind human magazine, join CognitivePath founders Greg Verdino and Geoff Livingston for a provocative conversation about why smart resilience, ethics, regulation and responsibility are essential for creating a human-forward future in the age of AI. Tune in for a deep dive into the philosophical and practical implications of AI for society, democracy, and our collective future.

Chapters:
00:00 Introduction
03:34 Smart Resilience in the Age of AI
07:09 Navigating Crises in a Complex World
11:03 Cultural Perspectives on Resilience
12:06 Global Perspectives on AI Development
16:12 Ethics and Morality in AI Regulation
21:32 The EU AI Act and Its Implications
26:09 Power Dynamics and Global Perception
28:28 AI's Role in Democracy
32:14 AI's Impact on Human Resilience
34:38 The Dangers of AI in the Workplace
38:19 Repression and Job Replacement through AI
41:09 A Hopeful Vision for the Future

About Rebekka: Dr. Rebekka Reinhard is a philosopher and SPIEGEL bestselling author. It's her mission to take philosophy out of the ivory tower and put it back where it belongs: real life. She is the founder of human, the first German magazine about life and work in the AI age. Connect with her at https://linkedin.com/in/rebekkareinhard

About Thomas: Thomas Vasek is editor-in-chief and head of content at human. He began his journalism career as an investigative reporter at the Austrian news magazine Profil. As founding editor-in-chief, he launched the German edition of the renowned MIT Technology Review in 2003 and the philosophy magazine HOHE LUFT in 2010. From 2006 to 2010, he served as editor-in-chief of P.M. Magazin. Connect with him at https://www.linkedin.com/in/thomas-va%C5%A1ek-637b6b233/

About human magazine: human is the first magazine to take a holistic look at the impact of AI on business, politics, society, and culture, always with a focus on the human being. Issues are published in German (print/digital) and English (digital only). Learn more and subscribe: https://human-magazin.de/ Download the free "Smart Resilience" white paper: https://human-magazin.de/#consulting

Chattinn Cyber
Legal Insights on AI: Protecting Privacy in a Data-Driven World with Colleen García

Chattinn Cyber

Play Episode Listen Later Aug 13, 2025 10:59


Summary: In this episode, Marc is chattin' with Colleen García, a seasoned privacy attorney. The conversation begins with an introduction to Colleen's extensive background in cybersecurity law, including her experience working with the U.S. government before transitioning to the private sector. This sets the stage for a deep dive into the complex relationship between data privacy and artificial intelligence (AI), highlighting the importance of understanding legal and ethical considerations as AI technology continues to evolve rapidly.

The core of the discussion centers on how AI models are trained on vast amounts of data, often containing personally identifiable information (PII). Colleen emphasizes that respecting individuals' data privacy rights is crucial, especially when it comes to obtaining proper consent for the use of their data in AI systems. She points out that while AI offers many benefits, it also raises significant concerns about data misuse, leakage, and the potential for infringing on privacy rights, which companies must carefully navigate to avoid legal and reputational risks.

Colleen elaborates on the current legal landscape, noting that existing data privacy laws, such as those in the U.S., the European Union, Canada, and Singapore, are being adapted to address AI-specific issues. She mentions upcoming regulations like the EU AI Act and highlights the role of the Federal Trade Commission (FTC) in enforcing transparency and honesty in AI disclosures. Although some laws do not explicitly mention AI, their principles are increasingly being applied to regulate AI development and deployment, emphasizing the need for companies to stay compliant and transparent.

The conversation then expands to a global perspective, with Colleen discussing how different countries are approaching the intersection of data privacy and AI. She notes that international efforts are underway to develop legal frameworks that address the unique challenges posed by AI, reflecting a broader recognition that AI regulation is a worldwide concern. This global outlook underscores the importance for companies operating across borders of staying informed about evolving legal standards and best practices.

In closing, Colleen offers practical advice for businesses seeking to responsibly implement AI. She stresses the importance of building AI systems on a strong foundation of data privacy, including thorough vetting of training data and transparency with users. She predicts that future legislative efforts may lead to more state-level AI laws and possibly a comprehensive federal framework, although the current landscape remains fragmented. The podcast concludes with Colleen inviting listeners to connect with her for further discussion, emphasizing the need for proactive, thoughtful approaches to AI and data privacy in the evolving legal environment.

Key Points:
  • The Relationship Between Data Privacy and AI: The discussion emphasizes how AI models are trained on data that often includes personally identifiable information (PII), highlighting the importance of respecting privacy rights and obtaining proper consent.
  • Legal Risks and Challenges in AI and Data Privacy: Colleen outlines potential risks such as data leakage, misuse, and the complexities of ensuring compliance with existing privacy laws when deploying AI systems.
  • Current and Emerging Data Privacy Laws: The conversation covers how existing laws (like those from the U.S., EU, Canada, and Singapore) are being adapted to regulate AI, along with upcoming regulations such as the EU AI Act and the role of agencies like the FTC.
  • International Perspectives on AI and Data Privacy: The interview highlights how different countries are approaching AI regulation, emphasizing that this is a global issue with ongoing legislative developments worldwide.
  • Practical Advice for Responsible AI Deployment: Colleen offers guidance for companies to build AI systems on a strong data privacy foun...

Microsoft Business Applications Podcast
AI Regulation: Innovation's Hidden Accelerator

Microsoft Business Applications Podcast

Play Episode Listen Later Aug 12, 2025 39:01 Transcription Available


Irish Tech News Audio Articles
Is the AI Skills Gap Actually a Confidence Crisis?

Irish Tech News Audio Articles

Play Episode Listen Later Aug 11, 2025 6:35


Guest post by Ronnie Hamilton, Pre-Sales Director, Climb Channel Solutions Ireland.

There have been hundreds of headlines about the AI skills gap. Analysts are warning that millions of roles could go unfilled. Universities and education providers are launching fast-track courses and bootcamps. And in the channel, partners are under pressure to bring in the right capabilities or risk being left behind. But the challenge isn't always technical. Often, it's much more basic. For many, the biggest question is where to begin. More often than not, organisations are keen to explore the potential of AI but don't know how to approach it in a structured way. It's not a lack of intelligence, initiative or skill holding them back - far from it. It's the absence of a shared framework, a common language, or a clear starting point.

From marketing departments using ChatGPT to create content to developers trialling Copilot to streamline workflows, individuals are already experimenting with AI. However, these activities tend to happen in isolation, with such tools used informally rather than strategically. Without a roadmap or any kind of unifying policy, businesses are often left with a fragmented view or approach; the result is that AI becomes something that happens around the organisation rather than being a part of it. This can also introduce more risk, particularly when employees input sensitive data into external tools without proper controls or oversight. As models become more integrated and capable, even seemingly innocuous actions, like granting access to an email inbox or uploading internal documents, can expose large volumes of confidential company data. Without visibility into how that data is handled and used, organisations may unknowingly be increasing their risk surface.

Rethinking what 'AI skills' means

The term "AI skills" is often used to describe high-end technical roles such as data scientists, machine learning engineers, or prompt specialists. Such an interpretation has its drawbacks. After all, organisations don't just need deep technical expertise; they need an understanding of how AI can be applied in a business context to deliver value. For example, organisations may want to consider how these tools can be used to support customers or to identify ways of automating processes. Adopting AI in this way encourages communication around it and allows people to engage with AI confidently and constructively, regardless of their technical background.

Unfortunately, the industry's obsession with large language models (LLMs) has narrowed the conversation. AI has become almost entirely associated with a select number of tools. The focus has moved to interacting with models rather than applying AI to support and improve existing work. Yet for many partners, the most valuable AI use cases will be far more understated, including automating support tickets, streamlining compliance checks, and improving threat detection. These outcomes won't come from prompt engineering, but from thoughtful experimentation with process optimisation and orchestration.

Removing the barriers to adoption

For many businesses, the real blocker to full-scale AI adoption isn't technical complexity; it's structural uncertainty. AI adoption is happening, but not in a coordinated way. There are few formal policies in place, and often no designated owner. In many cases, tools are actively blocked due to data security concerns or regulatory ambiguity. That caution isn't misplaced. The EU AI Act, for example, requires any organisation operating within or doing business with the EU to ensure at least one trained individual is responsible for AI. By itself, this raises important questions about accountability and strategy. This lack of ownership, rather than the technology itself, is where the real risk lies.

There's also an emotional barrier at play. We hear it all the time: the sense that others are further ahead, and that trying to catch...

AskAlli: Self-Publishing Advice Podcast
News: Shopify Bot Attack Hits Authors, UK and EU Enforce AI and Safety Laws, US Plans Pro-Tech AI Policy

AskAlli: Self-Publishing Advice Podcast

Play Episode Listen Later Aug 8, 2025 12:24


On this episode of the Self-Publishing News Podcast, Dan Holloway reports on a coordinated bot attack that hit indie authors using Shopify, leaving some with unexpected fees and limited recourse. He also covers new and proposed legislation across the UK, EU, and US, including the UK's Online Safety Act, concerns over enforcement of the EU AI Act, and the US White House's pro-tech AI action plan, all with implications for author rights and content access.

Sponsors: Self-Publishing News is proudly sponsored by Bookvault. Sell high-quality, print-on-demand books directly to readers worldwide and earn maximum royalties selling directly. Automate fulfillment and create stunning special editions with BookvaultBespoke. Visit Bookvault.app today for an instant quote. Self-Publishing News is also sponsored by book cover design company Miblart. They offer unlimited revisions, take no deposit to start work, and you pay only when you love the final result. Get a book cover that will become your number-one marketing tool.

Find more author advice, tips, and tools at our Self-Publishing Author Advice Center, with a huge archive of nearly 2,000 blog posts and a handy search box to find key info on the topic you need. And, if you haven't already, we invite you to join our organization and become a self-publishing ally.

About the Host: Dan Holloway is a novelist, poet, and spoken word artist. He is the MC of the performance arts show The New Libertines, and he competed in the National Poetry Slam final at the Royal Albert Hall. His latest collection, The Transparency of Sutures, is available on Kindle.

AI Briefing Room
EP-338 Aws & Openai's Strategic Partnership

AI Briefing Room

Play Episode Listen Later Aug 6, 2025 2:09


i'm wall-e, welcoming you to today's tech briefing for wednesday, august 6th. explore the latest in tech:
  • openai & aws collaboration: aws now offers openai models through amazon ai services like bedrock and sagemaker, enhancing generative ai integration for enterprises and challenging microsoft's cloud dominance.
  • openai's new open-source models: introducing gpt-oss-120b and gpt-oss-20b on hugging face, these ai reasoning models mark openai's return to open source since gpt-2, offering robust performance despite occasional hallucinations.
  • government endorsement: openai, google, and anthropic join the list of approved ai vendors for u.s. federal agencies, streamlining ai service contracts and supporting federal ai goals.
  • eu ai act progress: europe's ai regulatory framework advances, balancing innovation with risk prevention, setting compliance deadlines, and drawing varied reactions from advocacy groups and tech companies.
  • leadership at emed population health: linda yaccarino, known for her negotiation skills, steps in as ceo, bringing bold leadership to the ai-driven health platform focused on glp-1 medications.
that's all for today. we'll see you back here tomorrow!

Liebe Zeitarbeit
What does the EU AI Act mean for your company? – Act now instead of paying!

Liebe Zeitarbeit

Play Episode Listen Later Aug 4, 2025 31:00


The EU is regulating artificial intelligence, and with stiff penalties: violations of the new EU AI Act can cost up to 7% of annual revenue. But don't panic: in this episode, Daniel Müller, together with AI expert Andreas Mai, explains what you need to watch out for and how to set yourself and your team up in a legally compliant way.

Hielscher oder Haase - Deutschlandfunk Nova
EU AI Act - More transparency for AI training data, on a voluntary basis

Hielscher oder Haase - Deutschlandfunk Nova

Play Episode Listen Later Aug 4, 2025 5:39


The EU AI Act has been in force since August 2, 2025. A code of practice defines exactly which requirements AI providers must meet, such as making training data more transparent. The code is voluntary, however. 26 companies have signed it. Not among them: Meta.

You can also follow us on TikTok and Instagram.

5 Minutes Podcast with Ricardo Vargas
General-Purpose AI in the Spotlight: What the EU AI Act Means for Your Projects

5 Minutes Podcast with Ricardo Vargas

Play Episode Listen Later Aug 3, 2025 5:11


In this episode, Ricardo discusses the impact of the AI Act, the European regulation on artificial intelligence (General-Purpose AI models). The law, passed in 2024 and fully in force in 2026, began imposing strict rules on general-purpose AI models such as GPT, Claude, and Gemini on August 2, 2025. Projects using these AIs, even for simple integration, must also follow ethical, privacy, and transparency requirements. This changes the role of the project manager, who now needs to ensure legal compliance. Despite criticism that the law limits innovation, Ricardo emphasizes that it signals technological maturity. For him, adapting is essential to avoid risks and add value to projects. Listen to the podcast to learn more! https://rvarg.as/euactslide https://rvarg.as/euact

5 Minutes Podcast com Ricardo Vargas
General-Purpose AI in the Spotlight: What the EU AI Act Means for Our Projects

5 Minutes Podcast com Ricardo Vargas

Play Episode Listen Later Aug 3, 2025 6:08


In this episode, Ricardo comments on the impact of the AI Act, the European regulation on artificial intelligence (general-purpose AI models). The law, passed in 2024 and fully in force in 2026, began imposing strict rules on general-purpose AI models such as GPT, Claude, and Gemini on August 2, 2025. Projects that use these AIs, even as a simple integration, must also follow requirements on ethics, privacy, and transparency. This changes the role of the project manager, who now needs to ensure legal compliance. Despite criticism that the law limits innovation, Ricardo stresses that it signals technological maturity. For him, adapting is essential to avoid risks and add value to projects. Listen to the podcast to learn more! https://rvarg.as/euactslide https://rvarg.as/euact

WDR 5 Morgenecho
The EU's AI Regulation: What Are the Consequences for AI Startups?

WDR 5 Morgenecho

Play Episode Listen Later Aug 2, 2025 7:52


The EU's new AI Regulation (AI Act) is coming into force. It is intended to impose stricter transparency obligations on language models such as ChatGPT. What will change concretely for AI startups? Jonas Becher, founder of the AI startup Masasana, weighs in. From WDR 5.

Taking Stock with Vincent Wall
The EU AI Act: Who is really winning the AI race?

Taking Stock with Vincent Wall

Play Episode Listen Later Aug 1, 2025 47:20


This week on Taking Stock with Susan Hayes Culleton: Sarah Collins, Brussels Correspondent with the Business Post, and John Fitzgerald, Professor in the Department of Economics at Trinity College Dublin, join Susan to give their views on this week's EU-US trade deal. Susan looks to find out more about the next phase of the EU AI Act, which comes into force this week, with John Callahan, President and CTO of Partsol. Plus, Aidan Donnelly, Head of Equities at Davy, talks US inflation, equities, and the dollar outlook.

KI-Update – ein Heise-Podcast
KI-Update kompakt: EU AI Act, Meta revenues, and defectors from Apple, Nvidia

KI-Update – ein Heise-Podcast

Play Episode Listen Later Aug 1, 2025 11:25


The EU AI Act enters its next phase. Meta increases revenue and profits, while Apple's AI team loses staff. Nvidia is expected to explain itself to China's government. Links to all of today's topics can be found here: https://heise.de/-10506541 https://www.heise.de/thema/KI-Update https://pro.heise.de/ki/ https://www.heise.de/newsletter/anmeldung.html?id=ki-update https://www.heise.de/thema/Kuenstliche-Intelligenz https://the-decoder.de/ https://www.heiseplus.de/podcast https://www.ct.de/ki A new episode is released on Mondays, Wednesdays, and Fridays from 3 p.m.

We're doomed we're saved - The Biorevolution Podcast

The $600 billion MedTech industry is undergoing a technological transformation. From AI-powered medical imaging to smart diagnostics and remote monitoring tools, artificial intelligence and machine learning are reshaping how care is delivered — and increasingly how patients manage their own health. In this episode of the BioRevolution podcast, we are joined by our guest, idalab's MedTech expert Julian Beimes, to discuss how this AI-driven wave aligns with broader shifts in medicine: virtualization, personalization, and prevention. But alongside the innovation, we also unpack the challenges — especially the complex and often fragmented regulatory environment. Are policies like the EU AI Act promoting safety, or holding back progress?

Find Julian here: https://www.linkedin.com/in/julian-beimes/
Find idalab here: https://idalab.de/

Disclaimer: Louise von Stechow & Andreas Horchler and their guests express their personal opinions, which are based on research into the respective topics, but do not claim to give medical, investment or even life advice in the podcast.

Learn more about the future of biotech in our podcasts and keynotes. Contact us here: scientific communication: https://science-tales.com/ Podcasts: https://www.podcon.de/ Keynotes: https://www.zukunftsinstitut.de/louise-von-stechow Image: Igor Saikin via Unsplash

Everyday AI Podcast – An AI and ChatGPT Podcast
EP 576: GPT-5 release timeline update, Google and Microsoft vibe coding and more AI News That Matters

Everyday AI Podcast – An AI and ChatGPT Podcast

Play Episode Listen Later Jul 28, 2025 46:58


Could GPT-5 be only weeks away? Why are Microsoft and Google going all in on vibe coding? And what does the White House AI Action Plan actually mean? Don't spend hours a day trying to figure out what AI means for your company or career. That's our job. So join us on Mondays as we bring you the AI News That Matters. No fluff. Just what you need to ACTUALLY pay attention to in the business side of AI.

Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Thoughts on this? Join the convo and connect with other AI leaders on LinkedIn.
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn

Topics Covered in This Episode:
GPT-5 Release Timeline and Features
Google Opal AI Vibe Coding Tool
Nvidia B200 AI Chip Black Market China
Trump White House AI Action Plan Details
Microsoft GitHub Spark AI Coding Launch
Google's AI News Licensing Negotiations
Microsoft Copilot Visual Avatar ("Clippy" AI)
Netflix Uses Generative AI for Visual Effects
OpenAI Warns of AI-Driven Fraud Crisis
New Google, Claude, and Runway AI Feature Updates

Timestamps:
00:00 "OpenAI's GPT-5 Release Announced"
04:57 OpenAI Faces Pressure from Gemini
07:13 EU AI Act vs. US AI Priorities
12:12 Black Market Thrives for Nvidia Chips
13:46 US AI Action Plan Unveiled
19:34 Microsoft's GitHub Spark Unveiled
21:17 Google vs. Microsoft: AI Showdown
25:28 Google's New AI Partnership Strategy
29:23 Microsoft's Animated AI Assistant Revival
33:52 Generative AI in Film Industry
38:55 AI Race & Imminent Fraud Crisis
40:15 AI Threats and Future Innovations

Keywords: GPT-5 release date, OpenAI, GPT-4, GPT-4o, advanced reasoning abilities, artificial general intelligence, AGI, o3 reasoning, GPT-5 Mini, GPT-5 Nano, API access, Microsoft Copilot, model selector, LM Arena, Gemini 2.5 Pro, Google vibe coding, Opal, no-code AI, low-code app maker, Google Labs, AI-powered web apps, app development, visual workflow editor, generative AI, AI app creation, Anthropic Claude Sonnet 4, GitHub Copilot Spark, Microsoft GitHub, Copilot Pro Plus, AI coding tools, AI search, Perplexity, news licensing deals, Google AI Overview, AI summaries, click-through rate, organic search traffic, Associated Press, Condé Nast, The Atlantic, LA Times, AI in publishing, generative AI video, Netflix, El Eternauta, AI-generated visual effects, AI-powered VFX, Runway, AI for film and TV, job displacement from AI, AI-driven fraud, AI voice cloning, AI impersonation, financial scams, AI regulation, White House AI Action Plan, executive orders on AI, AI innovation, AI deregula

Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info)
Ready for ROI on GenAI? Go to youreverydayai.com/partner

EUVC
VC | E532 | Eoghan O'Neill, Senior Policy Officer at the European Commission AI Office

EUVC

Play Episode Listen Later Jul 26, 2025 59:07


In this episode, Andreas Munk Holm is joined by Eoghan O'Neill, Senior Policy Officer at the European Commission's AI Office, to break down the EU AI Act, Europe's strategy to lead the global AI wave with trust, safety, and world-class infrastructure.

They dive into why the EU's approach to AI is not just regulatory red tape but a proactive framework to ensure innovation and adoption flourish across sectors—from startups to supercomputers. Eoghan unpacks how startups can navigate the Act, why Europe's regulatory clarity is an advantage, and how investors should be thinking about this new paradigm.

Here's what's covered:
02:41 Eoghan's Unorthodox Journey: From Gravy Systems to AI Policy
04:32 The Mission & Structure of the AI Office
05:52 Understanding the AI Act: A Product Safety Framework
09:40 How the AI Act Was Created: An Open, 1,000+ Stakeholder Process
17:42 What Counts as High-Risk AI (And What Doesn't)
21:23 Learning from GDPR & Ensuring Innovation Isn't Crushed
26:10 Transparency, Trust & The Limits of Regulation
30:15 What VCs Need to Know: Obligations, Timelines & Opportunities
34:42 Europe's Global AI Position: Infra, Engineers, Strategy
43:33 Global Dynamics: Commoditization, Gulf States & the Future of AGI
48:46 What's Coming: Apply AI Strategy in September

The Audit Podcast
IA on AI – McDonald's Bot Breach, Google's AI Cyber Win, and Nvidia Hits $4 Trillion

The Audit Podcast

Play Episode Listen Later Jul 24, 2025 8:15


This week on IA on AI, we break down the McDonald's hiring bot fiasco — yes, the one where an AI chatbot exposed data from over 60 million job applicants due to a shockingly simple security lapse. We explore why this matters to internal auditors and what basic control failures like this can teach us about staying vigilant as AI becomes more embedded in business processes.

Plus:
  • An update on the EU AI Act and why U.S.-based organizations should still be paying attention
  • How Google's AI caught a cyberattack in real time — and what this signals for the future of human-in-the-loop systems
  • A $4 trillion milestone for Nvidia and a record-setting $2B seed round for a new AI startup
  • A reality check on AGI: what it is, what it isn't, and why the hype may be outpacing the science

Be sure to follow us on our social media accounts:
LinkedIn: https://www.linkedin.com/company/the-audit-podcast
Instagram: https://www.instagram.com/theauditpodcast
TikTok: https://www.tiktok.com/@theauditpodcast?lang=en

Also be sure to sign up for The Audit Podcast newsletter and to check out the full video interview on The Audit Podcast YouTube channel.

* This podcast is brought to you by Greenskies Analytics, the services firm that helps auditors leapfrog up the analytics maturity model. Their approach to launching audit analytics programs with a series of proven quick-win analytics will guarantee results worthy of the analytics hype. Whether your audit team needs a data strategy, methodology, governance, literacy, or anything else related to audit and analytics, schedule time with Greenskies Analytics.

Employment Matters
683: The EU AI Act and Its Impact on the US

Employment Matters

Play Episode Listen Later Jul 24, 2025 17:25


Generative AI continues to drive conversation and concern, and not surprisingly, the focus on the promise of AI and how best to regulate it has created controversial positions. The EU has been one of the leaders in addressing AI regulation, primarily through the EU AI Act. On today's episode, we will learn more from David about the EU AI Act, as well as a US perspective from Derek on the status of AI regulation and how US companies may be impacted by the EU AI Act. Subscribe to our podcast today to stay up to date on employment issues from law experts worldwide.

Host: Tara Stingley (email) (Cline Williams Wright Johnson & Oldfather, LLP)
Guest Speakers: David van Boven (email) (Plesner / Denmark) & Derek Ishikawa (email) (Hirschfeld Kraemer LLP / California)

Support the show
Register on the ELA website here to receive email invitations to future programs.

BDO in the Boardroom
Risk Aspects of Technological Innovation May Boards NOT Be Thinking About

BDO in the Boardroom

Play Episode Listen Later Jul 24, 2025 25:22


Human Resources and Workforce Impact:
- Bias in Automation: Ensure that automated HR processes undergo regular audits to identify and mitigate biases, particularly in candidate selection and hiring.
- Regulatory Oversight: Implement annual bias audits for automated employment decision tools to comply with regulations.
- Employee Surveillance: Review and update employee monitoring practices to ensure compliance with privacy regulations, OSHA, and HIPAA.
Regulatory Compliance and Legal Risks:
- Decentralized AI Regulation: Develop a comprehensive strategy to track and comply with AI regulations across different states.
- EU AI Act: Assess the impact of the EU AI Act on your operations and ensure compliance with its requirements, even if your systems are used within the EU.
- Terms of Service: Establish a process to monitor and review changes in terms of service for AI, other technology, and communications tools, ensuring compliance and proper data usage.
Operational Resilience and Business Continuity:
- System Dependencies: Regularly evaluate AI systems for data representativeness and bias, and adapt to real-time changes in company operations.
- Supply Chain Vulnerabilities: Conduct frequent audits of third-party components and vendors to identify and mitigate supply chain vulnerabilities.
- Cyber Threats: Update employee training programs to include awareness and prevention of deepfake scams and other sophisticated cyber threats.
Strategic Oversight and Accountability:
- Ethical Considerations: Form multidisciplinary task forces for AI adoption, including general counsel, to classify use cases based on risk levels.
- ROI and Uncertainty: Ask for detailed ROI estimates, timelines, and milestones for AI projects, considering the uncertainty and potential qualitative outcomes.
- Director Education: Encourage directors to engage in educational opportunities, such as NACD masterclasses and other governance-focused content, to enhance their understanding of AI governance.

Cyber Security Weekly Podcast
Episode 452 - STATE OF CYBER (Part 1)

Cyber Security Weekly Podcast

Play Episode Listen Later Jul 23, 2025 59:01


Special Virtual Episodes with ISACA Leaders: State of Cyber (Part 1) - Maintaining readiness in a complex threat environment
Speakers:
Jamie Norton - ISACA Board Member
Chirag Joshi - Sydney Chapter Board Member
Abby Zhang - Auckland Chapter Board Member
Jason Wood - Auckland Chapter former President
Bharat Bajaj - ISACA Melbourne Board Director
For the full series visit: https://mysecuritymarketplace.com/security-amp-risk-professional-insight-series-2025/
#mysecuritytv #isaca #cybersecurity
OVERVIEW
According to ISACA research, almost half of companies exclude cybersecurity teams when developing, onboarding, and implementing AI solutions. Only around a quarter (26%) of cybersecurity professionals or teams in Oceania are involved in developing policy governing the use of AI technology in their enterprise, and nearly half (45%) report no involvement in the development, onboarding, or implementation of AI solutions, according to the recently released 2024 State of Cybersecurity survey report from global IT professional association ISACA.
Key Report Findings
Security teams in Oceania noted they are primarily using AI for:
- Automating threat detection/response (36% vs 28% globally)
- Endpoint security (33% vs 27% globally)
- Automating routine security tasks (22% vs 24% globally)
- Fraud detection (6% vs 13% globally)
Additional AI resources to help cybersecurity and other digital trust professionals:
- EU AI Act white paper
- Examining Authentication in the Deepfake Era
SYNOPSIS
ISACA's 2024 State of Cybersecurity report reveals that stress levels are on the rise for cybersecurity professionals, largely due to an increasingly challenging threat landscape.
The annual ISACA research also identifies key skills gaps in cybersecurity, how artificial intelligence is impacting the field, the role of risk assessments and cyber insurance in enterprises' security programs, and more.The demand for cybersecurity talent has been consistently high, yet efforts to increase supply are not reflected in the global ISACA IS/IT-community workforce. The current cybersecurity practitioners are aging, and the efforts to increase staffing with younger professionals are making little progress. Left unchecked, this situation will create business continuity issues in the future. Shrinking budgets and employee compensation carry the potential to adversely affect cybersecurity readiness much sooner than the aging workforce, when the Big Stay passes. Declines in vacant positions across all reporting categories may lead some enterprises to believe that the pendulum of power will swing back to employers, but the increasingly complex threat environment is greatly increasing stress in cybersecurity teams; therefore, the concern is not if, but when, employees will reach their tipping point to vacate current positions.

The Data Diva E246 - Aparna Bhushan and Debbie Reynolds

"The Data Diva" Talks Privacy Podcast

Play Episode Listen Later Jul 22, 2025 40:48 Transcription Available


Send us a text
In episode 246 of “The Data Diva” Talks Privacy Podcast, Debbie Reynolds talks to Aparna Bhushan, a co-host of the Rethinking Tech podcast and a seasoned data protection and governance attorney licensed in both the U.S. and Canada. Together, they explore the critical intersection of geopolitics, tech policy, and data ethics. Aparna shares her professional journey from startups to global corporations and international organizations, such as UNICEF, where her passion for ethical and practical data governance took root. The conversation explores the fast-paced and often contradictory dynamics facing governments, companies, and users in the digital age, highlighting how the collapse of traditional rules has left many institutions scrambling for direction.Debbie and Aparna discuss how companies are navigating conflicting global regulations, the growing risks of consumer backlash, and the real-world consequences of poor data decisions, such as the fallout from GM's data broker scandal and the potential sale of sensitive genetic data in the 23andMe bankruptcy. They also address the dangers of regulation lag, scope creep, and public distrust in platforms that mishandle personal data. Aparna shares her perspective on the emerging global impact of the EU AI Act and the regulatory vacuum in the U.S., arguing that proactive privacy strategies and consumer trust are more valuable than merely checking compliance boxes.The two dive deep into the complexities of age verification laws, questioning the practicality and privacy implications of requiring IDs or weakening encryption to protect children online.
They emphasize the need for innovation that respects user rights and propose creative approaches to solving systemic data challenges, including Aparna's vision for AI systems that can audit other AI models for fairness and bias. To close the episode, Aparna shares her global privacy wish list: a more conscious, intentional user culture and a renewed investment in responsible technology development. This thoughtful and wide-ranging conversation is a must-listen for anyone interested in the ethical evolution of data governance in a rapidly shifting global landscape.
Support the show

Shifting Our Schools - Education : Technology : Leadership
AI Companions: the risks and benefits, and what educators need to know

Shifting Our Schools - Education : Technology : Leadership

Play Episode Listen Later Jul 21, 2025 43:59


How do we prepare students—and ourselves—for a world where AI grief companions and "deadbots" are a reality? In this eye-opening episode, Jeff Utecht sits down with Dr. Tomasz Hollanek, a critical design and AI ethics researcher at the University of Cambridge's Leverhulme Centre for the Future of Intelligence, to discuss:
- The rise of AI companions like Character.AI and Replika
- Emotional manipulation risks and the ethics of human-AI relationships
- What educators need to know about the EU AI Act and digital consent
- How to teach AI literacy beyond skill-building—focusing on ethics, emotional health, and the environmental impact of generative AI
- Promising examples: preserving Indigenous languages and Holocaust survivor testimonies through AI
From griefbots to regulation loopholes, Tomasz explains why educators are essential voices in shaping how AI unfolds in schools and society—and how we can avoid repeating the harms of the social media era.
Dr Tomasz Hollanek is a Postdoctoral Research Fellow at the Leverhulme Centre for the Future of Intelligence (LCFI) and an Affiliated Lecturer in the Department of Computer Science and Technology at the University of Cambridge, working at the intersection of AI ethics and critical design. His current research focuses on the ethics of human-AI interaction design and the challenges of developing critical AI literacy among diverse stakeholder groups; related to the latter research stream is the work on AI, media, and communications that he is leading at LCFI.
Connect with him:
https://link.springer.com/article/10.1007/s13347-024-00744-w
https://www.repository.cam.ac.uk/items/d3229fe5-db87-42ff-869b-11e0538014d8
https://www.desirableai.com/journalism-toolkit

Irish Tech News Audio Articles
Unlocking AI's value securely: Navigating Key Security Imperatives

Irish Tech News Audio Articles

Play Episode Listen Later Jul 18, 2025 7:38


Across EMEA, Artificial Intelligence (AI) is redefining industries, inspiring innovation, improving operations, and driving growth. Government and Irish businesses are embracing and capitalising on AI's potential to enhance customer experiences and gain a competitive advantage. But as adoption accelerates, new security challenges arise, demanding vigilant attention to protect these investments. Forecasts indicate that AI could contribute trillions to the global economy by 2030, with Ireland well-positioned to capture a significant share of this value. According to Dell Technologies' Innovation Catalyst Study, 76% say AI and Generative AI (GenAI) is a key part of their organisation's business strategy, while 66% of organisations are already in the early to mid stages of their AI and GenAI journey. As AI becomes more embedded in everything from customer management to critical infrastructure, safeguarding these investments and tackling the evolving cyber threat landscape must be a priority. To that end, the success of integrating AI in the region depends on addressing three critical security imperatives: managing the risks associated with AI usage, proactively defending against AI-enhanced attacks, and employing AI to enhance the overall security posture.
Managing the Risks of AI Usage
Ireland, as a digital hub within the EU, must navigate a complex regulatory environment that includes the Digital Operational Resilience Act (DORA), the NIS2 Directive, the Cyber Resilience Act, and the recently launched EU AI Act. These frameworks introduce stringent cybersecurity requirements that businesses leveraging AI must meet to ensure resilience and compliance. AI's reliance on vast amounts of data presents unique challenges: AI models are built, trained, and fine-tuned with data sets, making data protection paramount.
To meet these challenges, Irish organisations must embed cybersecurity principles such as least-privilege access, robust authentication controls, and real-time monitoring into every stage of the AI lifecycle. However, technology alone isn't enough; these measures must be implemented effectively. The Innovation Catalyst Study highlighted that a lack of skills and expertise ranks among the top three challenges faced by organisations looking to modernise their defences. Bridging this skills gap is vital to delivering secure and scalable AI solutions, because only with the right talent, governance, and a security-first mindset can Ireland unlock the full potential of AI innovation in a resilient and responsible way. A further step Irish businesses can take to address AI risks is to integrate risk considerations across ethical, safety, and cultural domains. A multidisciplinary approach can help ensure that AI is deployed responsibly. Establishing comprehensive AI governance frameworks is essential. These frameworks should include perspectives from experts across the organisation to balance security, compliance, and innovation within a single, cohesive risk management strategy.
Countering AI-Powered Threats
While AI has enormous potential, bad actors are leveraging it to increase the speed, scale, and sophistication of attacks. Social engineering schemes, advanced fraud tactics, and AI-generated phishing emails are becoming more difficult to detect, with some leading to significant financial losses. Deepfakes, for instance, are finding their way into targeted scams aimed at compromising organisations. A 2024 ENISA report highlighted that AI-enhanced phishing attacks surged by 35% in the past year, underscoring the need for stronger cybersecurity measures. To stay ahead, organisations must prepare for an era where cyberattacks operate at machine speed.
Transitioning to a defensive approach anchored in automation is key to responding swiftly and effectively, minimizing the impact of advanced attacks. The future of AI agents in the cybersecurity domain may not be far off. This means deploying AI-powered security tools that can detect anomalies in real time...

Irish Tech News Audio Articles
CeADAR enrols 1,500th learner in AI for You online course for Irish enterprises and public sector

Irish Tech News Audio Articles

Play Episode Listen Later Jul 17, 2025 3:34


CeADAR, Ireland's Centre for AI, this month celebrated enrolling its 1,500th learner in AI for You, an online course for Irish enterprises and public sector organisations that want to increase their AI awareness and literacy and boost their knowledge of regulations governing AI, such as the EU AI Act. The AI for You programme was developed by CeADAR in conjunction with the Department of Enterprise, Tourism and Employment (DETE). The course is fully funded, supported by CeADAR's European Digital Innovation Hub (EDIH) for AI programme, which itself is funded by Enterprise Ireland and the European Commission. The programme is self-paced, so it can be completed in a learner's own time, and is made up of five modules: an introduction to AI, the concepts underpinning AI, the applications and impacts of AI, the future with AI, and AI governance and the EU AI Act. The EU AI Act, the first-ever legal framework on AI, sets out rules for AI providers and deployers on specific uses of AI; it came into effect in August last year. Those interested in enrolling in the programme can do so by following the instructions on the CeADAR website (www.ceadar.ie/edih/skills-and-training/). The EDIH is a €700m European initiative comprising more than 160 tech hubs across 30 countries. CeADAR's selection as the EDIH for AI in Ireland came with an initial funding boost of €6 million over three years. The award is jointly supported by the EU and the Government of Ireland through Enterprise Ireland. Minister Smyth, Minister of State for Trade Promotion, Artificial Intelligence and Digital Transformation, said: "I am very pleased with the success of the AI for You online course and I congratulate CeADAR on the achievement of enrolling the 1,500th learner. This reflects the growing appetite for AI skills in Ireland but also our commitment to equipping citizens and businesses with the knowledge and tools they need to thrive in the digital age."
CeADAR's Director of Innovation and Development and EDIH for AI Programme Director, Dr. Ricardo Simon Carbajo, said: "This is a significant milestone and is contributing to companies' and public sector organisations' ability to understand and comply with the EU AI Act. We thank all those who signed up for this course and look forward to welcoming more in the future." See more stories here. More about Irish Tech News: Irish Tech News are Ireland's No. 1 Online Tech Publication and often Ireland's No. 1 Tech Podcast too. You can find hundreds of fantastic previous episodes and subscribe using whatever platform you like via our Anchor.fm page here: https://anchor.fm/irish-tech-news If you'd like to be featured in an upcoming podcast, email us at Simon@IrishTechNews.ie now to discuss. Irish Tech News have a range of services available to help promote your business. Why not drop us a line at Info@IrishTechNews.ie now to find out more about how we can help you reach our audience. You can also find and follow us on Twitter, LinkedIn, Facebook, Instagram, TikTok and Snapchat.

Serious Privacy
Personal Integrity, Regulatory capture & a week in Privacy

Serious Privacy

Play Episode Listen Later Jul 16, 2025 32:49


Send us a text
With Paul away, join K and Ralph in a riotous discussion of personal integrity and the positions we can work with and for, as regulators and industry cross-pollinate individuals and resources. Can regulators remain ethical and independent when they rely on skills and abilities from industry?
Also, a week of news in privacy and data protection with a round-up of EU, UK, US, and global news, cases, regulations, and standards, including age verification, censorship, the EU AI Act, privacy-preserving advertising, freedom of speech laws, and new developments across the globe!
If you have comments or questions, find us on LinkedIn and Instagram @seriousprivacy, and on BlueSky under @seriousprivacy.eu, @europaulb.seriousprivacy.eu, @heartofprivacy.bsky.app and @igrobrien.seriousprivacy.eu, and email podcast@seriousprivacy.eu. Rate and review us!
From Season 6, our episodes are edited by Fey O'Brien. Our intro and exit music is Channel Intro 24 by Sascha Ende, licensed under CC BY 4.0, with the voiceover by Tim Foley.

The Customer Success Playbook
Customer Success Playbook S3 E68 - Gayle Gorvett - Who's Liable When AI Goes Wrong?

The Customer Success Playbook

Play Episode Listen Later Jul 16, 2025 12:38 Transcription Available


Send us a text
When AI systems fail spectacularly, who pays the price? Part two of our conversation with global tech lawyer Gayle Gorvett tackles the million-dollar question every business leader is afraid to ask. With federal AI regulation potentially paused for a decade while technology races ahead at breakneck speed, companies are left creating their own rules in an accountability vacuum. Gayle reveals why waiting for government guidance could be a costly mistake and how smart businesses are turning governance policies into competitive advantages. From the EU AI Act's complexity challenges to the state-by-state regulatory patchwork, this customer success playbook episode exposes the legal landmines hiding in your AI implementation—and shows you how to navigate them before they explode.
Detailed Analysis
The accountability crisis in AI represents one of the most pressing challenges facing modern businesses, yet most organizations remain dangerously unprepared. Gayle Gorvett's revelation about the federal government's proposed 10-year pause on state AI laws while crafting comprehensive regulation highlights a sobering reality: businesses must become their own regulatory bodies or risk operating in a legal minefield.
The concept of "private regulation" that Gayle introduces becomes particularly relevant for customer success teams managing AI-powered interactions. When your chatbots handle customer complaints, your predictive models influence renewal decisions, or your recommendation engines shape customer experiences, the liability implications extend far beyond technical malfunctions. Every AI decision becomes a potential point of legal exposure, making governance frameworks essential risk management tools rather than optional compliance exercises. Perhaps most intriguingly, Gayle's perspective on governance policies as competitive differentiators challenges the common view of compliance as a business burden.
In the customer success playbook framework, transparency becomes a trust-building mechanism that strengthens customer relationships rather than merely checking regulatory boxes. Companies that proactively communicate their AI governance practices position themselves as trustworthy partners in an industry where trust remains scarce. The legal profession's response to AI—requiring disclosure to clients and technical proficiency from practitioners—offers a compelling model for other industries. This approach acknowledges that AI literacy isn't just a technical requirement but a professional responsibility. For customer success leaders, this translates into a dual mandate: understanding AI capabilities enough to leverage them effectively while maintaining enough oversight to protect customer interests. The EU AI Act's implementation challenges that Gayle describes reveal the complexity of regulating rapidly evolving technology. Even comprehensive regulatory frameworks struggle to keep pace with innovation, reinforcing the importance of internal governance structures that can adapt quickly to new AI capabilities and emerging risks. This agility becomes particularly crucial for customer-facing teams, who often serve as the first line of defense.
Kevin's offering
Please Like, Comment, Share and Subscribe. You can also find the CS Playbook Podcast:
YouTube - @CustomerSuccessPlaybookPodcast
Twitter - @CS_Playbook
You can find Kevin at:
Metzgerbusiness.com - Kevin's personal web site
Kevin Metzger on LinkedIn.
You can find Roman at:
Roman Trebon on LinkedIn.

Paymentandbanking FinTech Podcast
Episode 13_25: AI in Finance: Klarna klont CEO und Revolut integriert KI

Paymentandbanking FinTech Podcast

Play Episode Listen Later Jul 14, 2025 66:06


In episode 13, Maik Klotz and Sascha Dewald discuss Klarna's and Revolut's AI plans and recap BaFinTech 25. There was plenty going on beyond the fintech scene as well: the EU's AI Act is stirring tempers, OpenAI is poaching talent from Tesla and Meta, and, of all things, a German defense startup wants to grow big with AI.

The Tech Blog Writer Podcast
3345: Veeva Systems and the Future of Agentic AI in Pharma

The Tech Blog Writer Podcast

Play Episode Listen Later Jul 13, 2025 30:51


AI is racing ahead, but for industries like life sciences, the stakes are higher and the rules more complex. In this episode, recorded just before the July heatwave hit its peak, I spoke with Chris Moore, President of Europe at Veeva Systems, from his impressively climate-controlled garden office. We covered everything from the trajectory of agentic AI to the practicalities of embedding intelligence in highly regulated pharma workflows, and how Veeva is quietly but confidently positioning itself to deliver where others are still making announcements. Chris brings a unique perspective shaped by a career that spans ICI Pharmaceuticals, PwC, IBM, and EY. That journey taught him how often the industry is forced to rebuild the same tech infrastructure again and again, until Veeva came along. He shares how Veeva's decision to build a life sciences-specific cloud platform from the ground up has enabled a deeper, more compliant integration of AI. We explored what makes Veeva AI different, from the CRM bot that handles compliant free text to MLR agents that support content review and approval. Chris explains how Veeva's AI agents inherit the context and controls of their applications, making them far more than chat wrappers or automation tools. They are embedded directly into workflows, helping companies stay compliant while reducing friction and saving time. And perhaps more importantly, he makes a strong case for why the EU AI Act isn't a barrier but a validation. From auto-summarising regulatory documents to pulling metadata from health authority correspondence, the real-world examples Chris offers show how Veeva AI will reduce repetitive work while ensuring integrity at every step. He also shares how Veeva is preparing for a future where companies may want to bring their own LLMs or even run different ones by geography or task. Their flexible, harness-based approach is designed to support exactly that.
Looking ahead to the product's first release in December, Chris outlines how Veeva is working hand-in-hand with customers to ensure readiness and reliability from day one. We also touch on the broader mission: using AI not as a shiny add-on, but as a tool to accelerate drug development, reach patients faster, and relieve the pressure on already overstretched specialist teams. Chris closes with a dose of humanity, offering a book and song that both reflect Veeva's mindset, embracing disruption while staying grounded. This one is for anyone curious about how real, applied AI is unfolding inside one of the world's most important sectors, and what it means for the future of medicine.

The Sunday Show
How the EU's Voluntary AI Code is Testing Industry and Regulators Alike

The Sunday Show

Play Episode Listen Later Jul 13, 2025 21:39


Last week, following months of negotiation and just weeks before the first legal deadlines under the EU AI Act take effect, the European Commission published the final Code of Practice on General-Purpose AI. The Code is voluntary and intended to help companies demonstrate compliance with the AI Act. It sets out detailed expectations around transparency, copyright, and measures to mitigate systemic risks. Signatories will need to publish summaries of training data, avoid unauthorized use of copyrighted content, and establish internal frameworks to monitor risks. Companies that sign on will see a “reduced administrative burden” and greater legal clarity, the Commission said. At the same time, both European and American tech companies have raised concerns about the AI Act's implementation timeline, with some calling to “stop the clock” on the AI Act's rollout.To learn more, Tech Policy Press associate editor Ramsha Jahangir spoke to Luca Bertuzzi, senior AI correspondent at MLex, to unpack the final Code of Practice on GPAI, why it matters, and how it fits into the broader rollout of the AI Act.

That Tech Pod
Innocent Until the Algorithm Says Otherwise. Trusting Tech When AI Gets It Wrong with Evan J. Schwartz

That Tech Pod

Play Episode Listen Later Jul 8, 2025 30:52


In this week's episode, Laura and Kevin sit down with Evan J. Schwartz, Chief Innovation Officer at AMCS Group, to explore where AI is actually making a difference and where it's doing real harm. From logistics and sustainability to law enforcement and digital identity, we dig into how AI is being used (and misused) in ways that affect millions of lives.We talk about a real-world case Evan worked on involving predictive analytics in law enforcement, and the dangers of trusting databases more than people. If someone hacks your digital footprint or plants fake records, how do you prove you're not the person your data says you are? We dive into the Karen Read case, the ethics of “precrime” models like in Minority Report, and a story where AI helped thieves trick a bank into wiring $40 million. The common thread? We've put a lot of faith in data... sometimes more than it deserves.With the EU AI Act now passed and other countries tightening regulation, Evan offers advice on how U.S.-based companies should prepare for a future where AI governance isn't optional. He also breaks down “dark AI” and whether we're getting close to machines making life-altering decisions without humans in the loop. Whether you're in tech, law, policy, or just trying to understand how AI might impact your own rights and identity, this conversation pulls back the curtain on how fast things are moving and what we might be missing.Evan J. Schwartz brings over 35 years of experience in enterprise tech and digital transformation. At AMCS Group, he leads innovation efforts focused on AI, data science, and sustainability in the logistics and resource recovery industries. He's held executive roles in operations, architecture, and M&A, and also teaches graduate courses in AI, cybersecurity, and project management. Evan serves on the Forbes Tech Council and advises at Jacksonville University. He's also the author of People, Places, and Things, an Amazon best-seller on ERP implementation. 
His work blends technical depth with a sharp focus on ethics and real-world impact.

Risk Management Show
AI Regulations: What Risk Managers Must Do Now with Caspar Bullock

Risk Management Show

Play Episode Listen Later Jul 7, 2025 23:31


In this episode of the Risk Management Show, we dive into the critical topic of "AI Regulations: What Risk Managers Must Do Now." Join host Boris Agranovich and special guest Caspar Bullock, Director of Strategy at Axiom GRC, as they tackle the challenges and opportunities businesses face in navigating risk management, cybersecurity, and sustainability in today's rapidly evolving landscape. We discuss the growing importance of monitoring AI developments, preparing for upcoming regulations like the EU AI Act, and setting clear internal policies to meet customer demands and legal requirements. Caspar shares his expert perspective on building organizational resilience, the ROI of compliance programs, and addressing third-party risks in a complex supply chain environment. Whether you're a Chief Risk Officer, a compliance professional, or a business leader, this conversation offers actionable insights to help you stay ahead of emerging trends. If you want to be our guest or suggest a guest, send your email to info@globalriskconsult.com with the subject line “Podcast Guest.”

The Lawfare Podcast
Lawfare Archive: Itsiq Benizri on the EU AI Act

The Lawfare Podcast

Play Episode Listen Later Jul 5, 2025 43:54


From February 16, 2024: The EU has finally agreed to its AI Act. Despite the political agreement reached in December 2023, some nations maintained reservations about the text, making it uncertain whether there was a final agreement or not. They recently reached agreement on the technical text, moving the process closer to a successful conclusion. The challenge now will be effective implementation.
To discuss the act and its implications, Lawfare Fellow in Technology Policy and Law Eugenia Lostri sat down with Itsiq Benizri, counsel at the law firm WilmerHale Brussels. They discussed how domestic politics shaped the final text, how governments and businesses can best prepare for the new requirements, and whether the European act will set the international roadmap for AI regulation.
To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute. Support this show http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.

TechSurge: The Deep Tech Podcast
Open vs. Closed AI: Risks, Rewards, and Realities of Open Source Innovation

TechSurge: The Deep Tech Podcast

Play Episode Listen Later Jul 1, 2025 26:16


In TechSurge's Season 1 Finale episode, we explore an important debate: should AI development be open source or closed? AI technology leader and UN Senior Fellow Senthil Kumar joins Michael Marks for a deep dive into one of the most consequential debates in artificial intelligence, exploring the fundamental tensions between democratizing AI access and maintaining safety controls.
Sparked by DeepSeek's recent model release that delivered GPT-4 class performance at a fraction of the cost and compute, the discussion spans the economics of AI development, trust and transparency concerns, regulatory approaches across different countries, and the unique opportunities AI presents for developing nations.
From Meta's shift from closed to open and OpenAI's evolution from open to closed, to practical examples of guardrails and the geopolitical implications of AI governance, this episode provides essential insights into how the future of artificial intelligence will be shaped not just by technological breakthroughs, but by the choices we make as a global community.
If you enjoy this episode, please subscribe and leave us a review on your favorite podcast platform. Sign up for our newsletter at techsurgepodcast.com for updates on upcoming TechSurge Live Summits and news about Season 2 of the TechSurge podcast. Thanks for listening!
Links:
Slate.ai - AI-powered construction technology: https://slate.ai/
World Economic Forum on open-source AI: https://www.weforum.org/stories/2025/02/open-source-ai-innovation-deepseek/
EU AI Act overview: https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence

Doppelgänger Tech Talk
What's Happening with Siri | Yupp AI | OpenAI Uses Google TPUs #471

Doppelgänger Tech Talk

Play Episode Listen Later Jul 1, 2025 68:08


The talent-poaching battle for AI researchers continues. Apple is negotiating with OpenAI and Anthropic to upgrade Siri with third-party LLMs, while also planning a low-cost A18 MacBook Air and several lightweight AR glasses. Shein stumbles ahead of its IPO: London has fallen through, and it has now made a confidential Hong Kong filing amid slowing growth. Berlin's data protection authority wants to ban DeepSeek from German app stores. Yupp AI launches as a meta search engine for LLMs: one prompt, two answers, and users pick the better model. OpenAI is shifting inference to Google TPUs: cheaper, faster, more independent. Roger Federer crosses the billion mark thanks to his stake in On. WhatsApp Business will soon bill per message and earn from AI bots. Tesla loses its production chief, and X hires product tinkerer Nikita Bier. Trump's team plans 47 ATF deregulations, and the TikTok ban is postponed once again. Amazon now employs over one million robots, startups are calling for a moratorium on the EU AI Act, and Microsoft's diagnostic AI beats doctors on rare cases. Support our podcast and discover our advertising partners' offers at doppelgaenger.io/werbung. Thank you!
Philipp Glöckler and Philipp Klöckner talk today about: (00:00:00) Meta poaches OpenAI researchers (00:07:55) Apple seeks an LLM partner (00:14:00) Shein IPO wobbles (00:23:45) Berlin data protection authority wants DeepSeek out of app stores (00:25:33) Yupp AI side-by-side comparison of LLM answers (00:31:20) OpenAI uses Google TPUs for inference (00:43:20) Roger Federer becomes a billionaire thanks to his On stake (00:48:00) WhatsApp Business: switch to a pay-per-message model (00:49:30) Tesla loses its production chief; Nikita Bier is X's new head of product (00:55:00) Trump (00:56:00) TikTok ban postponed again (00:58:00) Amazon reports 1 million robots (01:00:05) Good news of the day. Shownotes: OpenAI leadership responds to Meta's offers – wired.com Zuckerberg announces Meta 'superintelligence' project – bloomberg.com Apple considers Anthropic or OpenAI for Siri – bloomberg.com Apple working on seven head-mounted displays – 9to5mac.com Apple to release a cheaper MacBook with an iPhone processor – 9to5mac.com Shein plans a confidential Hong Kong listing – reuters.com US shoppers avoid Shein and Temu after Trump closes tax loophole – ft.com DeepSeek faces a ban from Apple's and Google's German app stores – reuters.com Google convinces OpenAI to use TPU chips – theinformation.com Roger Federer's long-term deals make him a tennis billionaire – bloomberg.com 500+ AI models compared – x.com WhatsApp Business Platform pricing | WhatsApp API pricing – business.whatsapp.com Elon Musk confidant Omead Afshar leaves Tesla – bloomberg.com Musk's X hires entrepreneur Nikita Bier as head of product – bloomberg.com DOGE joins the ATF to reduce gun regulations – washingtonpost.com TikTok in the US: Trump finds a buyer – zeit.de Amazon close to deploying more robots than humans in its warehouses – wsj.com European startups and VCs urge the EU to pause the AI Act – sifted.eu Microsoft: new AI system diagnoses more accurately than doctors – wired.com

Irish Tech News Audio Articles
Why Compliance is the Next Big Opportunity for IT Channel Partners

Irish Tech News Audio Articles

Play Episode Listen Later Jun 27, 2025 7:47


Guest post by Ronnie Hamilton, Pre-Sales Director, Climb Channel Solutions Ireland If compliance feels overwhelming right now, you're not imagining it. New regulations covering cybersecurity, data protection, AI, and more are emerging - from the latest PCI DSS updates to the EU AI Act. As a result, compliance is actively shaping the IT channel, influencing how we do business, how we anticipate industry shifts, and how we support our partners and customers with the right solutions to stay ahead. Navigating compliance in 2025 means being aligned with regulatory requirements, but it's a balancing act because at the end of the day, we all still have a job to do: delivering the right solutions, tailoring services to customer needs, and being a trusted partner. With new regulations coming into force and the mounting challenge of understanding cybersecurity, AI governance, and data integrity requirements, it's more important than ever to stay ahead. On the other hand, those who stay agile and deliver solutions that meet regulatory demands have an opportunity to turn the compliance headache into a competitive advantage. The Agility Advantage of Smaller Partners Smaller channel partners face growing pressure from complex customer environments, resource constraints, and fierce competition for skilled talent. However, their agility provides a unique advantage. Unlike larger enterprises, they can quickly adapt to evolving customer needs, position themselves as trusted advisors, and identify emerging vendors - particularly those offering AI-powered and automated solutions. AI adoption plays a critical role in maintaining a competitive edge. By embracing AI, smaller partners can deliver exceptional managed services with fewer resources, keeping costs low and service quality high. This approach ensures they remain competitive in a crowded market. 
Tackling the EU NIS2 Directive The EU NIS2 Directive reinforces the need for robust cybersecurity measures, urging businesses to adopt a more comprehensive approach to risk management. Essential security practices such as multi-factor authentication, regular cybersecurity training, incident response planning, and strong supply chain security are no longer optional but essential. A key principle underlying the directive is the Identify, Protect, Detect, Respond, and Recover framework. While most organisations focus heavily on detection and protection, recovery is sometimes a weak link. A lengthy recovery period following a breach can be as harmful as failing to detect the threat in the first place. The integration of automation into threat detection and response processes is becoming more important for meeting compliance requirements. The EU AI Act: Compliance Meets Innovation The EU AI Act introduces new obligations for organisations deploying AI solutions - emphasising transparency, accountability, and risk management throughout the AI lifecycle. These requirements extend to all aspects of AI implementation, from data sourcing and model training to real-world deployment. To address compliance risks, managed service providers may consider introducing AI governance roles, such as "AI Managers as a Service." These specialists help organisations navigate AI regulations without requiring full-time in-house expertise. While compliance with AI regulations may introduce additional costs, the long-term benefits - such as enhanced customer trust, clear documentation, and ethical AI practices - can significantly outweigh the initial investment. Rather than viewing compliance as a regulatory burden, partners should position it as an opportunity to strengthen customer relationships and stand out. Automation and AI: Key Enablers of Compliance AI and automation are proving indispensable for managing compliance complexity. 
From automating repetitive processes to monitoring security events and ensuring adherence to evolving standards, these technologies help organisations streamline compliance efforts while mini...

PwC Luxembourg TechTalk
Financial services' road to AI: Where we are and where we're headed

PwC Luxembourg TechTalk

Play Episode Listen Later Jun 26, 2025 51:13


In this episode of TechTalk, we explore how financial services are steering toward AI — covering emerging regulations like the EU AI Act, trust-building, collaboration, and the shift from experimentation to real-world applications. To guide us through this evolving landscape, we're joined by Ulf Herbig, Chairman of the EFAMA AI Task Force and Chairman of ALFI's Digital Finance Working Group on Innovation and Technology; and Sébastien Schmitt, Partner in Regulatory Risk and Compliance at PwC Luxembourg. 

FundraisingAI
Episode 61 - Navigating Super Intelligence, Governance, and Human-First Transformation

FundraisingAI

Play Episode Listen Later Jun 25, 2025 31:33


In the rapidly accelerating world of Artificial Intelligence, the pace of innovation can feel overwhelming. From groundbreaking advancements to the ongoing debate about governance and ethical implications, AI is not just a tool; it's a transformative force. As we race towards super intelligence and navigate increasingly sophisticated models, how do we ensure that human values remain at the core of this technological revolution? How do we, especially in the trust-based nonprofit sector, lead with intentionality and ensure AI serves humanity rather than superseding it?   In this episode, Nathan and Scott dive into the relentless evolution of AI, highlighting Meta's staggering $15 billion investment in the race for super intelligence and the critical absence of robust regulation. They reflect on the essential shift from viewing AI adoption as a finite "destination" to embracing it as an ongoing "journey." Nathan shares insights on how AI amplifies human capabilities, particularly for those who are "marginally" good at certain skills, advocating for finding your "why" and offloading tasks AI can do better. Scott discusses his recent AI governance certification, underscoring the complexities and lack of "meat on the bone" in US regulations compared to the EU AI Act. The conversation also explores the concept of AI agents, offering practical tips for leveraging them, even for those with no coding experience. They conclude with a powerful reminder: AI is a mirror reflecting our values, and the nonprofit sector has a vital role in shaping its ethical future.   
HIGHLIGHTS [01:15] AI Transformation: A Journey, Not a Destination [03:00] If AI Can Do It Better: Finding Your Human "Why" [04:05] AI Outperforming Human Capabilities [05:00] Meta's $15 Billion Investment in Super Intelligence [07:16] The Manipulative Nature of AI and the "Arms Race" for Super Intelligence [09:27] The Importance and Challenges of AI Governance and Regulation [14:50] AI as a Compass, Not a Silver Bullet [16:39] Beware the AI Finish Line Illusion [18:12] Small Steps, Sustained Momentum: The "Baby Steps" Approach to AI [26:48] Tip of the Week: The Rise of AI Agents and Practical Use Cases [32:24] The Power of Curiosity in AI Exploration RESOURCES Relay.app: relay.app Zapier: zapier.com Make.com: make.com N.io: n.io Connect with Nathan and Scott: LinkedIn (Nathan): linkedin.com/in/nathanchappell/ LinkedIn (Scott): linkedin.com/in/scott-rosenkrans Website: fundraising.ai/

Artificial Intelligence in Industry with Daniel Faggella
AI in Healthcare Devices and the Challenge of Data Privacy - with Dr. Ankur Sharma at Bayer

Artificial Intelligence in Industry with Daniel Faggella

Play Episode Listen Later Jun 24, 2025 19:06


Today's guest is Dr. Ankur Sharma, Head of Medical Affairs for Medical Devices and Digital Radiology at Bayer. Dr. Sharma joins Emerj Editorial Director Matthew DeMello to explore the complex intersection of AI, medical devices, and data governance in healthcare. Dr. Sharma outlines the key challenges that healthcare institutions face in adopting AI tools, including data privacy, system interoperability, and regulatory uncertainty. He also clarifies the distinction between regulated predictive models and unregulated generative tools, as well as how each fits into current clinical workflows. The conversation explores the evolving roles of the FDA and EU AI Act, the potential for AI to bridge clinical research and patient care, and the need for new reimbursement models to support digital innovation. This episode is sponsored by Medable. Learn how brands work with Emerj and other Emerj Media options at emerj.com/ad1. Want to share your AI adoption story with executive peers? Click emerj.com/expert2 for more information and to be a potential future guest on the ‘AI in Business' podcast!

Irish Tech News Audio Articles
€0.5m in funding for Trinity to develop AI platform for teachers

Irish Tech News Audio Articles

Play Episode Listen Later Jun 24, 2025 5:18


A team of researchers at Trinity College Dublin has received €500,000 in funding to develop an AI-enabled platform to help teachers create assessments and provide formative feedback to learners. The project is called Diotima and is supported by The Learnovate Centre, a global research and innovation centre in learning technology in Trinity College Dublin. Diotima began its partnership with Learnovate in February this year and is expected to spin out as a company in 2026. The €500,000 funding was granted under Enterprise Ireland's Commercialisation Fund, which supports third-level researchers to translate their research into innovative and commercially viable products, services and companies. Diotima supports teaching practice by using responsible AI to provide learners with feedback, leading to more and better assessments and improved learning outcomes for students, and a more manageable workload for teachers. The project was co-founded by Siobhan Ryan, a former secondary school teacher, biochemist and environmental scientist, and Jonathan Dempsey, an EdTech professional with both start-up and corporate experience. Associate Professor Ann Devitt, Head of the Trinity School of Education, and Carl Vogel, Professor of Computational Linguistics and Director of the Trinity Centre for Computing and Language Studies, are serving as co-principal investigators on the project. Diotima received the funding in February. Since then, the project leaders have established an education advisory group formed of representatives from post-primary and professional education organisations. The Enterprise Ireland funding has facilitated the hiring of two post-doctoral researchers. They are now leading AI research ahead of the launch of an initial version of the platform in September 2025. Diotima aims to conduct two major trials of the platform as they also seek investment. Co-founder Siobhan Ryan is Diotima's Learning Lead. 
After a 12-year career in the brewing industry with Diageo, Siobhan re-trained as a secondary school teacher before leaving the profession to develop the business case for a formative assessment and feedback platform. Her experience in the classroom made her realise that she could have a greater impact by leveraging AI to create a platform to support teachers in a safe, transparent, and empowering way. Her fellow co-founder Jonathan Dempsey is Commercial Lead at Diotima. He had been CEO of the Enterprise Ireland-backed EdTech firm Digitary, which is now part of multinational Instructure Inc. He held the role of Director of UK and Ireland for US education system provider Ellucian and Head of Education and Education Platforms for Europe with Indian multinational TCS. Jonathan has a wealth of experience at bringing education technologies to market. Learnovate Centre Director Nessa McEniff says: "We are delighted to have collaborated with the Diotima team to secure €500,000 investment from Enterprise Ireland's Commercialisation Fund. Diotima promises to develop into a revolutionary platform for learners in secondary schools and professional education organisations, delivering formative feedback and better outcomes overall. We look forward to supporting them further as they continue to develop the platform in the months ahead." Enterprise Ireland Head of Research, Innovation and Infrastructure Marina Donohoe says: "Enterprise Ireland is delighted to support Diotima under the Commercialisation Fund. We look forward to seeing them continue in their mission to transform teaching practice through AI enabled assessment and feedback. We believe that the combination of excellence in AI and in education from Trinity College, expertise in education technology from the Learnovate Centre and focus on compliance with the EU AI Act and other regulations will see the Diotima team make a global impact". 
Diotima Learning Lead and co-founder Siobhan Ryan says: "We're delighted to have received such a significant award from the Enterprise Ireland C...

Transform Your Workplace
How HR Can Lead the AI Revolution Without Losing Its Humanity

Transform Your Workplace

Play Episode Listen Later Jun 10, 2025 36:56


HR consultant Daniel Strode discusses AI's impact on human resources, highlighting recruitment and data analytics as prime areas for adoption. He introduces his "5P model" emphasizing policy/governance and people/culture transformation as critical success factors. While AI adoption remains slow—only 25% of adults regularly use tools like ChatGPT—organizations are unknowingly integrating AI through software updates. Strode advocates for proper governance policies ahead of regulations like the EU AI Act, positioning AI as a collaborative tool enhancing rather than replacing human capabilities. TAKEAWAYS 5P Framework: Success requires addressing process enhancement, personalization, predictive insights, policy/governance, and people/culture transformation Governance First: Establish AI ethics policies, bias auditing, and compliance training before implementation, especially with upcoming EU AI Act regulations Human-AI Partnership: Use AI for manual processes while focusing HR professionals on strategic work like employee experience and change management A QUICK GLIMPSE INTO OUR PODCAST

MLOps.community
Packaging MLOps Tech Neatly for Engineers and Non-engineers // Jukka Remes // #322

MLOps.community

Play Episode Listen Later Jun 10, 2025 55:30


Packaging MLOps Tech Neatly for Engineers and Non-engineers // MLOps Podcast #322 with Jukka Remes, Senior Lecturer (SW dev & AI), AI Architect at Haaga-Helia UAS, Founder & CTO at 8wave AI. Join the Community: https://go.mlops.community/YTJoinIn Get the newsletter: https://go.mlops.community/YTNewsletter // Abstract: AI is already complex—adding the need for deep engineering expertise to use MLOps tools only makes it harder, especially for SMEs and research teams with limited resources. Yet, good MLOps is essential for managing experiments, sharing GPU compute, tracking models, and meeting AI regulations. While cloud providers offer MLOps tools, many organizations need flexible, open-source setups that work anywhere—from laptops to supercomputers. Shared setups can boost collaboration, productivity, and compute efficiency. In this session, Jukka introduces an open-source MLOps platform from Silo AI, now packaged for easy deployment across environments. With Git-based workflows and CI/CD automation, users can focus on building models while the platform handles the MLOps. // Bio: Founder & CTO, 8wave AI | Senior Lecturer, Haaga-Helia University of Applied Sciences. Jukka Remes has 28+ years of experience in software, machine learning, and infrastructure. Starting with SW dev in the late 1990s and analytics pipelines of fMRI research in the early 2000s, he's worked across deep learning (Nokia Technologies), GPU and cloud infrastructure (IBM), and AI consulting (Silo AI), where he also led MLOps platform development. Now a senior lecturer at Haaga-Helia, Jukka continues evolving that open-source MLOps platform with partners like the University of Helsinki. 
He leads R&D on GenAI and AI-enabled software, and is the founder of 8wave AI, which develops AI Business Operations software for next-gen AI enablement, including regulatory compliance of AI. // Related Links: Open source-based MLOps k8s platform setup originally developed by Jukka's team at Silo AI - free for any use and installable in any environment from laptops to supercomputing: https://github.com/OSS-MLOPS-PLATFORM/oss-mlops-platform Jukka's new company: https://8wave.ai ~~~~~~~~ ✌️ Connect With Us ✌️ ~~~~~~~ Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore Join our Slack community: https://go.mlops.community/slack Follow us on X/Twitter @mlopscommunity (https://x.com/mlopscommunity) or LinkedIn (https://go.mlops.community/linkedin) Sign up for the next meetup: https://go.mlops.community/register MLOps Swag/Merch: https://shop.mlops.community/ Connect with Demetrios on LinkedIn: /dpbrinkm Connect with Jukka on LinkedIn: /jukka-remes Timestamps: [00:00] Jukka's preferred coffee [00:39] Open-Source Platform Benefits [01:56] Silo MLOps Platform Explanation [05:18] AI Model Production Processes [10:42] AI Platform Use Cases [16:54] Reproducibility in Research Models [26:51] Pipeline Setup Automation [33:26] MLOps Adoption Journey [38:31] EU AI Act and Open Source [41:38] MLOps and 8wave AI [45:46] Optimizing Cross-Stakeholder Collaboration [52:15] Open Source ML Platform [55:06] Wrap up

Ecosystemic Futures
91. Navigating the Cognitive Revolution: What Makes Us Human in an AI World

Ecosystemic Futures

Play Episode Listen Later Jun 3, 2025 49:22


As AI systems approach and potentially surpass human cognitive benchmarks, how do we design hybrid intelligence frameworks that preserve human agency while leveraging artificial cognitive enhancements? In this exploration of human-AI convergence, anthropologist and organizational learning expert Dr. Lollie Mancey presents a framework for the "cognitive revolution," the fourth transformational shift in human civilization following the agricultural, industrial, and digital eras. Drawing from Berkeley's research on the science of awe, Vatican AI policy frameworks, and indigenous knowledge systems, Mancey analyzes how current AI capabilities (GPT-4 operating at Einstein-level IQ) are fundamentally reshaping cognitive labor and social structures. She examines the EU AI Act's predictive policing clauses, the implications of quantum computing, and the emerging grief tech sector as indicators of broader systemic transformation. Mancey identifies three meta-cognitive capabilities essential for human-AI collaboration: critical information interrogation, systematic curiosity protocols, and epistemic skepticism frameworks. Her research on AI companion platforms reveals neurological patterns resembling addiction pathways. At the same time, her fieldwork with Balinese communities demonstrates alternative models of technological integration based on reciprocal participation rather than extractive optimization. 
This conversation provides actionable intelligence for organizations navigating the transition from human-centric to hybrid cognitive systems. Key Research Insights: • Cognitive Revolution Metrics: Compound technological acceleration outpaces regulatory adaptation, with education systems lagging significantly, requiring new frameworks for cognitive load management and decision architecture in research environments • Einstein IQ Parity Achieved: GPT-4 operates at Einstein-level intelligence yet lacks breakthrough innovation capabilities, highlighting critical distinctions between pattern recognition and creative synthesis for R&D resource allocation • Neurological Dependency Patterns: AI companion platforms demonstrate "catnip-like" effects, with users exhibiting hyper-fixation behaviors and difficulty with "digital divorce"—profound implications for workforce cognitive resilience • Epistemic Security Crisis: Deep fakes eliminated content authentication while AI hallucinations embed systemic biases from internet-scale training data, requiring new verification protocols and decision-making frameworks • Alternative Integration Architecture: Balinese reciprocal participation models versus Western extractive paradigms offer scalable approaches for sustainable innovation ecosystems and human-technology collaboration #EcosystemicFutures #CognitiveRevolution #HybridIntelligence #NeuroCognition #QuantumComputing #SociotechnicalSystems #HumanAugmentation #SystemsThinking #FutureOfScience Guest: Lorraine Mancey, Programme Director at UCD Innovation Academy Host: Marco Annunziata, Co-Founder, Annunziata Desai Partners Series Hosts: Vikram Shyam, Lead Futurist, NASA Glenn Research Center; Dyan Finkhousen, Founder & CEO, Shoshin Works. Ecosystemic Futures is provided by the NASA (National Aeronautics and Space Administration) Convergent Aeronautics Solutions Project in collaboration with Shoshin Works.

Ogletree Deakins Podcasts
Workplace Strategies Watercooler 2025: The AI-Powered Workplace of Today and Tomorrow

Ogletree Deakins Podcasts

Play Episode Listen Later May 16, 2025 16:55


In this installment of our Workplace Strategies Watercooler 2025 podcast series, Jenn Betts (shareholder, Pittsburgh), Simon McMenemy (partner, London), and Danielle Ochs (shareholder, San Francisco) discuss the evolving landscape of artificial intelligence (AI) in the workplace and provide an update on the global regulatory frameworks governing AI use. Simon, who is co-chair of Ogletree's Cybersecurity and Privacy Practice Group, breaks down the four levels of risk and their associated regulations specified in the EU AI Act, which will take effect in August 2026, and the need for employers to prepare now for the Act's stringent regulations and steep penalties for noncompliance. Jenn and Danielle, who are co-chairs of the Technology Practice Group, discuss the Trump administration's focus on innovation with limited regulation, as well as the likelihood of state-level regulation.

Scouting for Growth
Areiel Wolanow On Unleashing AI, Quantum, and Emerging Tech

Scouting for Growth

Play Episode Listen Later Apr 16, 2025 49:08


On this episode of the Scouting For Growth podcast, Sabine meets Areiel Wolanow, the managing director of FinServ Experts, who discusses his journey from IBM to founding FinServ Experts, emphasising the importance of focusing on business models enabled by technology rather than the technology itself. Areiel delves into the challenges and opportunities presented by artificial intelligence, responsible AI practices, and the implications of quantum computing for data security. He highlights the need for organisations to adapt their approaches to digital transformation, advocating for a migration strategy over traditional transformation methods. KEY TAKEAWAYS: Emerging tech should be leveraged to create new business models rather than just re-engineering existing ones. Understanding the business implications of technology is crucial for delivering value. When harnessing artificial intelligence, it's essential to identify the real underlying problems within an organisation, assess its maturity, and build self-awareness before applying maturity models and gap analyses. The EU AI Act serves as a comprehensive guideline for responsible AI use, offering risk categories and controls that can benefit companies outside the EU by providing a framework for ethical AI practices without the burden of compliance. Organisations should prepare for the future of quantum computing by ensuring their data is protected against potential vulnerabilities. This involves adopting quantum-resilient algorithms and planning for the transition well in advance. Leaders should place significant responsibility on younger team members who are more familiar with emerging technologies. Providing them with autonomy and support can lead to innovative solutions and successful business outcomes. BEST MOMENTS: 'We focus not on the technology itself, but on the business models the tech enables.' 'The first thing you have to do... is to say, OK, is the proximate cause the real problem?' 
'The best AI regulations out there is the EU AI Act... it actually benefits AI companies outside the EU more than it benefits within.' 'Digital transformations have two things in common. One is they're expensive, and two is they always fail.' ABOUT THE GUEST: Areiel Wolanow is the managing director of FinServ Experts. He is an experienced business leader with over 25 years of experience in business transformation solutioning, sales, and execution. He served as one of IBM's key thought leaders in blockchain, machine learning, and financial inclusion. Areiel has deep experience leading large, globally distributed teams; he has led programs of over 100 people through the full delivery life cycle and has managed budgets in the tens of millions of dollars. In addition to his delivery experience, Areiel serves as a senior advisor on blockchain, machine learning, and technology adoption; he has worked with central banks and financial regulators around the world and is currently serving as the insurance industry advisor for the UK Parliament's working group on blockchain. LinkedIn ABOUT THE HOST: Sabine is a corporate strategist turned entrepreneur. She is the CEO and Managing Partner of Alchemy Crew, a venture lab that accelerates the curation, validation, and commercialization of new tech business models. Sabine is renowned within the insurance sector for building some of the most respected tech startup accelerators around the world, working with over 30 corporate insurers and accelerating over 100 startup ventures. Sabine is the co-editor of the bestseller The INSURTECH Book, a top 50 Women in Tech, a FinTech and InsurTech influencer, an investor, and a multi-award winner. Twitter LinkedIn Instagram Facebook TikTok Email Website