Podcasts about AI regulation

  • 372 podcasts
  • 560 episodes
  • 35m average duration
  • 5 new episodes weekly
  • Latest episode: May 16, 2025

POPULARITY

[Popularity trend chart, 2017–2024]


Best podcasts about AI regulation

Latest podcast episodes about AI regulation

Crazy Wisdom
Episode #461: Morpheus in the Classroom: AI, Education, and the New Literacy

Crazy Wisdom

Play Episode Listen Later May 16, 2025 56:25


I, Stewart Alsop, welcomed Woody Wiegmann to this episode of Crazy Wisdom, where we explored the fascinating and sometimes unsettling landscape of Artificial Intelligence. Woody, who is deeply involved in teaching AI, shared his insights on everything from the US-China AI race to the radical transformations AI is bringing to education and society at large. Check out this GPT we trained on the conversation.

Timestamps
01:17 The AI "Cold War": Discussing the intense AI development race between China and the US.
03:04 Opaque Models & Education's Resistance: The challenge of opaque AI and schools lagging in adoption.
05:22 AI Blocked in Schools: The paradox of teaching AI while institutions restrict access.
08:08 Crossing the AI Rubicon: How AI users are diverging from non-users into different realities.
09:00 Budgetary Constraints in AI Education: The struggle for resources like premium AI access for students.
12:45 Navigating AI Access for Students: Woody's ingenious workarounds for the premium AI divide.
19:15 Igniting Curiosity with AI: Students creating impressive projects, like catapult websites.
27:23 Exploring Grok and AI Interaction: Debating IP concerns and engaging with AI ("Morpheus").
46:19 AI's Societal Impact: AI girlfriends, masculinity, and the erosion of traditional skills.

Key Insights
The AI Arms Race: Woody highlights a "cold war of nerdiness" in which China is rapidly developing AI models comparable to GPT-4 at a fraction of the cost. This competition raises questions about data transparency on both sides and the strategic implications of superintelligence.
Education's AI Resistance: I, Stewart Alsop, and Woody discuss the puzzling resistance to AI within educational institutions, including the outright blocking of AI tools. This creates a paradox where courses on AI are taught in environments that restrict its use, hindering practical learning for students.
Diverging Realities: We explore how individuals who have crossed the "Rubicon" of AI adoption are now living in a vastly different world than those who haven't. This divergence is akin to past technological shifts but is happening at an accelerated pace, impacting how people learn, work, and perceive reality.
The Fading Relevance of Traditional Coding: Woody argues that focusing on teaching traditional coding languages like Python is becoming outdated in the age of advanced AI. AI can handle much of the detailed coding, shifting the necessary skills toward understanding AI systems, effective prompting, and higher-level architecture.
AI as the Ultimate Tutor: The advent of AI offers the potential for personalized, one-on-one tutoring for everyone, a far more effective learning method than traditional classroom lectures. However, this potential is hampered by institutional inertia and a lack of resources for tools like premium AI subscriptions for students.
Curiosity as the AI Catalyst: Woody shares anecdotes of students, even those initially disengaged, whose eyes light up when using AI for creative projects, like designing websites on niche topics such as catapults. This demonstrates AI's power to ignite curiosity and intrinsic motivation when paired with focused goals and the ability to build.
AI's Impact on Society and Skills: We touch on the broader societal implications, including the rise of AI girlfriends addressing male loneliness and providing acceptance. Simultaneously, there is concern over the potential atrophy of critical skills like writing and debate if individuals rely too heavily on AI for summarization and opinion generation without deep engagement.

Contact Information
Twitter/X: @RulebyPowerlaw
Listeners can search for Woody Wiegmann's podcast "Courage over Convention"
LinkedIn: www.linkedin.com/in/dataovernarratives/

EXPresso
#121 Alex Moltzau: AI Regulation Across Borders: Collision or Collaboration?

EXPresso

Play Episode Listen Later May 16, 2025 46:07


In episode #121, I catch up with my longtime friend Alex Moltzau, now serving as a Policy Officer – Seconded National Expert at the European Commission. We had a wide-ranging conversation on the future of artificial intelligence, regulation, and global cooperation. Topics include:

Alex's background and journey: from social entrepreneurship to shaping AI policy at the European level, including his work on the AI Pact and regulatory sandboxes.
Reflections since 2020: revisiting lessons from the past few years and the idea of "500 days of AI and critical thinking."
The EU AI Act: why it was created and what problems it's designed to solve; key provisions and goals; the balance between regulation and innovation in the European context.
The AI Pact: how it came together; what companies are committing to; its role in the broader regulatory landscape.
U.S. vs. Europe, two regulatory paths: key differences and similarities in approach; how regulatory environments affect innovation; implications for global competition in AI; opportunities for collaboration despite diverging strategies.
The future of AI and global governance: the need for international cooperation and the role of institutions like the UN; ethical considerations in the development and deployment of AI; the evolving role and vision of the European AI Office; Alex's personal hopes, concerns, and advice for future leaders in AI policy.

This is episode #121 and Alex Moltzau!

TechLinked
Intel Arc B770 / B780, Android 16 features, AI regulation + more!

TechLinked

Play Episode Listen Later May 15, 2025 10:19


Timestamps:
0:00 an attempt was made
0:06 Intel Arc B770 / B780 confirmed?
1:40 Google AI Mode, Android anti-scam
3:27 Wacky AI regulation news
5:05 War Thunder!
5:54 QUICK BITS INTRO
6:02 Samsung Galaxy S25 Edge
6:57 Switch 2 tech specs analysis
7:34 VPNSecure cancels lifetime subs
7:59 HBO Max again, Uber Route Share
8:31 wacky inflatable tube robot

NEWS SOURCES: https://lmg.gg/5p8Dd

Learn more about your ad choices. Visit megaphone.fm/adchoices

The CyberWire
Jamming in a ban on state AI regulation.

The CyberWire

Play Episode Listen Later May 13, 2025 32:51


House Republicans look to limit state regulation of AI. Spain investigates potential cybersecurity weak links in the April 28 power grid collapse. A major security flaw has been found in ASUS mainboards' automatic update system. A new macOS info-stealing malware uses PyInstaller to evade detection. The U.S. charges 14 North Korean nationals in a remote IT job scheme. Europe's cybersecurity agency launches the European Vulnerability Database. CISA pares back website security alerts. Moldovan authorities arrest a suspect in DoppelPaymer ransomware attacks. On today's Threat Vector segment, David Moulton speaks with Noelle Russell, CEO of the AI Leadership Institute, about how to scale responsible AI in the enterprise. Dave & Buster's invites vanish into the void.

Remember to leave us a 5-star rating and review in your favorite podcast app. Miss an episode? Sign up for our daily intelligence roundup, Daily Briefing, and you'll never miss a beat. And be sure to follow CyberWire Daily on LinkedIn.

Threat Vector
Recorded live at the Canopy Hotel during the RSAC Conference in San Francisco, David Moulton speaks with Noelle Russell, CEO of the AI Leadership Institute and a leading voice in responsible AI, on this Threat Vector segment. Drawing from her new book Scaling Responsible AI, Noelle explains why early-stage AI projects must move beyond hype to operational maturity, addressing accuracy, fairness, and security as foundational pillars. Together, they explore how generative AI models introduce new risks, how red teaming helps organizations prepare, and how to embed responsible practices into AI systems. You can hear David and Noelle's full discussion on Threat Vector here and catch new episodes every Thursday on your favorite podcast app.
Selected Reading
Republicans Try to Cram Ban on AI Regulation Into Budget Reconciliation Bill (404 Media)
Spain investigates cyber weaknesses in blackout probe (The Financial Times)
Critical Security flaw in ASUS mainboard update system (Beyond Machines)
Hackers Exploiting PyInstaller to Deploy Undetectable macOS Infostealer (Cybersecurity News)
Researchers Uncover Remote IT Job Fraud Scheme Involving North Korean Nationals (GB Hackers)
European Vulnerability Database Launches Amid US CVE Chaos (Infosecurity Magazine)
Apple Security Update: Multiple Vulnerabilities in macOS & iOS Patched (Cybersecurity News)
CISA changes vulnerabilities updates, shifts to X and emails (The Register)
Suspected DoppelPaymer Ransomware Group Member Arrested (Security Week)
Cracking The Dave & Buster's Anomaly (Rambo.Codes)

Share your feedback. We want to ensure that you are getting the most out of the podcast. Please take a few minutes to share your thoughts with us by completing our brief listener survey as we continually work to improve the show.

Want to hear your company in the show? You too can reach the most influential leaders and operators in the industry. Here's our media kit. Contact us at cyberwire@n2k.com to request more info.

The CyberWire is a production of N2K Networks, your source for strategic workforce intelligence. © N2K Networks, Inc. Learn more about your ad choices. Visit megaphone.fm/adchoices

CXO.fm | Transformation Leader's Podcast
Winning with AI Compliance

CXO.fm | Transformation Leader's Podcast

Play Episode Listen Later May 9, 2025 13:34 Transcription Available


Mastering the EU AI Act is no longer optional—it's a strategic necessity. In this episode, we unpack the critical compliance gaps that separate thriving companies from those falling behind. Learn how to categorise your AI systems, mitigate risk, and turn regulation into a competitive advantage. Perfect for business leaders, consultants, and transformation professionals navigating AI governance. 

The Six Five with Patrick Moorhead and Daniel Newman
EP 259: Tech Titans Under Scrutiny: Antitrust, AI, and Global Competition

The Six Five with Patrick Moorhead and Daniel Newman

Play Episode Listen Later May 5, 2025 52:23


Tough times for tech as Apple AND Google face legal woes. This week on The Six Five Pod, Patrick Moorhead and Daniel Newman dissect the major developments shaping the tech landscape, from the DOJ's challenge to Google Chrome's search dominance to Apple's legal battles over its app store policies. Get the expert breakdown on the implications for consumers and the future of these tech behemoths. The handpicked topics for this week are:

Google's Legal Challenges and Antitrust Issues: Discussion of Google's predicament with the DOJ's demand to sell Chrome, plus an examination of antitrust implications and the potential impact on consumer choice and competition.
Apple's Legal Setback and Potential Criminal Charges: Analysis of the judgment against Apple in the Epic lawsuit regarding app payment services. Insight into the judge's strong stance against Apple's non-compliance and the possible pursuit of criminal charges against its executives.
Major Announcements from Nvidia and IBM: Overview of NVIDIA and IBM's significant investment announcements at the White House and a discussion of what these investments could mean for U.S. tech infrastructure and global competitiveness.
Taiwan's Ruling on TSMC's Chip Production: Exploration of Taiwan's decision to keep TSMC's leading-edge chip production local and the potential geopolitical and economic impact of this ruling for global semiconductor supply chains.
Intel's Foundry Day and 18A Variants: Insights into Intel's Foundry Day announcements, including updates on 18A variants, plus a look into Intel's strategy to attract major fabless customers and how this could affect the semiconductor industry.
Six Five Summit Announcement: Announcement of the upcoming Six Five Summit, "AI Unleashed 2025," running June 16-19. The opening keynote will be delivered by Michael Dell, along with a lineup of other prominent speakers. Visit www.SixFiveMedia.com/summit for more information on the virtual event!

For more on each topic, please click on the links above. Be sure to subscribe to The Six Five Pod so you never miss an episode.

Recruiting Future with Matt Alder
Ep 699: AI, Regulation, and the Human Touch

Recruiting Future with Matt Alder

Play Episode Listen Later Apr 26, 2025 28:10


The entire recruiting landscape is undergoing a profound transformation as organizations grapple with the implications of AI and the economic disruption 2025 is bringing. Talent acquisition teams are drowning in applications while simultaneously being asked to do more with fewer resources. Candidates find themselves in increasingly dehumanized processes where ghosting is now the norm. At the same time, regulatory bodies are developing laws to ensure fairness and transparency around the use of AI in hiring. So, how can employers navigate this challenging terrain while creating fair, accessible, and effective hiring processes?

My guest this week is Ruth Miller, a talent acquisition and HR consultant who works across the public and private sectors. Ruth is an advisor to the Better Hiring Institute, working with the UK Government on developing legislation around AI in recruiting. In our conversation, she shares her insights into how organizations can proactively develop strategies that balance innovation with compliance while enhancing rather than diminishing the human elements of hiring.

- Different perceptions and reactions to AI among employers across sectors
- The paradox of AI both introducing and potentially removing bias from hiring processes
- Neurodivergent candidates and AI in job applications
- Common misconceptions job seekers have about employers' AI usage
- Strategic advice for organizations implementing AI in recruitment
- The future of recruitment and the evolving balance between AI and human interaction

Follow this podcast on Apple Podcasts. Follow this podcast on Spotify.


The Mark Cuban Podcast
AI Regulation Reimagined: OpenAI's Economic Roadmap

The Mark Cuban Podcast

Play Episode Listen Later Apr 21, 2025 14:54


OpenAI's new economic blueprint tackles the challenge of AI regulation. The goal is to foster safe development without stifling innovation. This could guide international AI standards.
AI Chat YouTube Channel: https://www.youtube.com/@JaedenSchafer
My Podcast Course: https://podcaststudio.com/courses/
Try AI Box: https://AIBox.ai/
Join my AI Hustle Community: https://www.skool.com/aihustle/about

The Linus Tech Podcast
AI Regulation Reimagined: OpenAI's Economic Roadmap

The Linus Tech Podcast

Play Episode Listen Later Apr 21, 2025 14:54


OpenAI has released a blueprint that outlines an economic approach to AI regulation. The plan emphasizes innovation, safety, and governance. It may influence global AI policies going forward.
AI Chat YouTube Channel: https://www.youtube.com/@JaedenSchafer
My Podcast Course: https://podcaststudio.com/courses/
Try AI Box: https://AIBox.ai/
Join my AI Hustle Community: https://www.skool.com/aihustle/about

UiPath Daily
New AI Regulation Plan from OpenAI Revealed

UiPath Daily

Play Episode Listen Later Apr 19, 2025 14:54


OpenAI's new economic blueprint tackles the challenge of AI regulation. The goal is to foster safe development without stifling innovation. This could guide international AI standards.
AI Chat YouTube Channel: https://www.youtube.com/@JaedenSchafer
My Podcast Course: https://podcaststudio.com/courses/
Try AI Box: https://AIBox.ai/
Join my AI Hustle Community: https://www.skool.com/aihustle/about

ChatGPT: News on Open AI, MidJourney, NVIDIA, Anthropic, Open Source LLMs, Machine Learning

In an effort to shape AI's future, OpenAI introduced a regulatory and economic framework. The blueprint balances progress with responsibility. Experts are closely examining its global impact.
AI Chat YouTube Channel: https://www.youtube.com/@JaedenSchafer
My Podcast Course: https://podcaststudio.com/courses/
Try AI Box: https://AIBox.ai/
Join my AI Hustle Community: https://www.skool.com/aihustle/about


Law, disrupted
Re-release: Emerging Trends in AI Regulation

Law, disrupted

Play Episode Listen Later Apr 17, 2025 46:34


John is joined by Courtney Bowman, the Global Director of Privacy and Civil Liberties at Palantir, one of the foremost companies in the world specializing in software platforms for big data analytics. They discuss the emerging trends in AI regulation.

Courtney explains the AI Act recently passed by the EU Parliament, including the four levels of risk it assigns to different AI systems and the different regulatory obligations imposed on each risk level, how the Act treats general-purpose AI systems, and how the final Act evolved in response to lobbying by emerging European companies in the AI space. They discuss whether the EU AI Act will become the global standard that international companies default to because the European market is too large to abandon. Courtney also explains recent federal regulatory developments in the U.S., including the AI framework put out by the National Institute of Standards and Technology, the AI Bill of Rights announced by the White House, which calls for voluntary industry compliance with certain principles, and the Executive Order on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, which requires each department of the federal government to develop its own plan for the use and deployment of AI. They also discuss the wide range of state-level AI legislative initiatives and the leading role California has played in this process.

Finally, they discuss the upcoming issues legislatures will need to address, including translating principles like accountability, fairness, and transparency into concrete best practices; instituting testing, evaluation, and validation methodologies to ensure that AI systems are doing what they're supposed to do in a reliable and trustworthy way; and addressing concerns around maintaining AI systems as the data used by a system continuously evolves until it no longer accurately represents the world it was originally designed to represent.

Podcast Link: Law-disrupted.fm
Host: John B. Quinn
Producer: Alexis Hyde
Music and Editing by: Alexander Rossi

The TechEd Podcast
AI Regulation Can Wait—But Education Reform Can't - State Senator Julian Bradley

The TechEd Podcast

Play Episode Listen Later Apr 15, 2025 52:50 Transcription Available


State Senator Julian Bradley joins Matt Kirchner for a wide-ranging conversation on how policymakers should be thinking about AI, energy, and education. Bradley explains why his committee chose not to recommend regulation of AI, how this move differs from other states, and how artificial intelligence could help solve workforce shortages in critical sectors like healthcare, public safety, and manufacturing.

The conversation also explores the future of nuclear energy as a clean, scalable power source, especially as data centers and advanced industries drive up demand. Bradley shares his push for small modular reactors and the bipartisan momentum behind nuclear innovation. Finally, the two dive into K-12 education, taking on literacy rates, school choice, and why high schools need a complete overhaul to actually prepare students for life after graduation. Whether you're an educator, policymaker, or industry leader, this episode offers practical insights into the policy decisions shaping our future workforce.

In this episode:
Why one state senator believes not regulating AI may be the smartest move
How artificial intelligence could help solve labor shortages from childcare to healthcare
What policymakers are missing about nuclear energy, and why that's about to change
Why our current education system is setting students up to fail, and what to do instead
How a wrestling ring, a mother's wisdom, and a literacy-first mindset shaped a political career

3 Big Takeaways from this Episode:
Regulating artificial intelligence requires caution, context, and a long-term view: Senator Bradley led a legislative study committee on the regulation of AI and ultimately chose not to recommend new regulation, citing the risk of stifling innovation and creating barriers for businesses. Drawing on testimony from sectors like healthcare, public safety, and education, the committee focused instead on building a knowledge base for future legislative action, prioritizing flexibility over rushed policymaking.
Meeting future energy demand will require bold thinking and bipartisan cooperation: With AI, data centers, and industry driving massive increases in power needs, Bradley is pushing Wisconsin to embrace nuclear energy as a scalable, clean solution. He outlines current efforts to support small modular reactors, prepare regulatory frameworks, and position the state as a leader in 21st-century energy policy.
Education reform must focus on real-world readiness, from literacy to life skills: Bradley calls for a complete overhaul of high school, moving away from rigid grade levels toward personalized, career-connected learning. He also stresses that without strong literacy skills, students can't access opportunity, and that solving academic gaps early is essential to preparing engaged citizens and a capable workforce.

Resources in this Episode:
Learn more about Senator Julian Bradley
Learn about the work of the 2024 Legislative Council Study Committee on the Regulation of Artificial Intelligence

We want to hear from you! Send us a text.
Instagram - Facebook - YouTube - TikTok - Twitter - LinkedIn

Science 4-Hire
Responsible AI In 2025 and Beyond – Three pillars of progress

Science 4-Hire

Play Episode Listen Later Apr 15, 2025 54:44


"Part of putting an AI strategy together is understanding the limitations and where unintended consequences could occur, which is why you need diversity of thought within committees created to guide AI governance and ethics." – Bob PulverMy guest for this episode is my friend in ethical/responsible AI, Bob Pulver, the founder of CognitivePath.io and host of the podcast "Elevate Your AIQ." Bob specializes in helping organizations navigate the complexities of responsible AI, from strategic adoption to effective governance practices.  Bob was my guest about a year ago and in this episode he drops back in to discuss what has changed in the faced paced world of AI across three pillars of responsible AI usage.  * Human-Centric AI * AI Adoption and Readiness * AI Regulation and GovernanceThe past year's progress explained through three pillars that are shaping ethical AI:These are the themes that we explore in our conversation and our thoughts on what has changed/evolved in the past year.1. Human-Centric AIChange from Last Year:* Shift from compliance-driven AI towards a more holistic, human-focused perspective, emphasizing AI's potential to enhance human capabilities and fairness.Reasons for Change:* Increasing comfort level with AI and experience with the benefits that it brings to our work* Continued exploration and development of low stakes, low friction use cases* AI continues to be seen as a partner and magnifier of human capabilitiesWhat to Expect in the Next Year:* Increased experience with human machine partnerships* Increased opportunities to build superpowers* Increased adoption of human centric tools by employers2. 
AI Adoption and ReadinessChange from Last Year:* Organizations have moved from cautious, fragmented adoption to structured, strategic readiness and literacy initiatives.* Significant growth in AI educational resources and adoption within teams, rather than just individuals.Reasons for Change:* Improved understanding of AI's benefits and limitations, reducing fears and resistance.* Availability of targeted AI literacy programs, promoting organization-wide AI understanding and capability building.What to Expect in the Next Year:* More systematic frameworks for AI adoption across entire organizations.* Increased demand for formal AI proficiency assessments to ensure responsible and effective usage.3. AI Regulation and GovernanceChange from Last Year:* Transition from broad discussions about potential regulations towards concrete legislative actions, particularly at state and international levels (e.g., EU AI Act, California laws).* Momentum to hold vendors of AI increasingly accountable for ethical AI use.Reasons for Change:* Growing awareness of risks associated with unchecked AI deployment.* Increased push to stay on the right side of AI via legislative activity at state and global levels addressing transparency, accountability, and fairness.What to Expect in the Next Year:* Implementation of stricter AI audits and compliance standards.* Clearer responsibilities for vendors and organizations regarding ethical AI practices.* Finally some concrete standards that will require fundamental changes in oversight and create messy situations.Practical Takeaways:What should I/we be doing to move the ball fwd and realize AI's full potential while limiting collateral damage?Prioritize Human-Centric AI Design* Define Clear Use Cases: Ensure AI is solving a genuine human-centered problem rather than just introducing technology for technology's sake.* Promote Transparency and Trust: Clearly communicate how and why AI is being used, ensuring it enhances rather than replaces human 
judgment and involvement.Build Robust AI Literacy and Education Programs* Develop Organizational AI Literacy: Implement structured training initiatives that educate employees about fundamental AI concepts, the practical implications of AI use, and ethical considerations.* Create Role-Specific Training: Provide tailored AI skill-building programs based on roles and responsibilities, moving beyond individual productivity to team-based effectiveness.Strengthen AI Governance and Oversight* Adopt Proactive Compliance Practices: Align internal policies with rigorous standards such as the EU AI Act to preemptively prepare for emerging local and global legislation.* Vendor Accountability: Develop clear guidelines and rigorous vetting processes for vendors to ensure transparency and responsible use, preparing your organization for upcoming regulatory audits.Monitor AI Effectiveness and Impact* Continuous Monitoring: Shift from periodic audits to continuous monitoring of AI tools to ensure fairness, transparency, and functionality.* Evaluate Human Impact Regularly: Regularly assess the human impact of AI tools on employee experience, fairness in decision-making, and organizational trust.Email Bob- bob@cognitivepath.io Listen to Bob's awesome podcast - Elevate you AIQ This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit charleshandler.substack.com

Campaign podcast
Will government AI regulation harm creative industries? With Omnicom's Michael Horn

Campaign podcast

Play Episode Listen Later Apr 15, 2025 26:17


In February this year, the UK government published a consultation on AI, proposing a change to current copyright legislation. It would allow tech companies to use creative works, including film, TV, and original journalism, to train AI models without the permission of the creators unless they have opted out.

It was met with harsh criticism, rallying "Make it fair" campaigns and rejections from both creatives and tech platforms alike, albeit for opposite reasons. Google and OpenAI responded to the consultation saying that it would cause developers to "deprioritise the market" and that "training on the open web must be free," while voices from the creative industries, including Alex Mahon, chief executive of Channel 4, said that the lack of transparency and compensation would "scrape the value" from quality content.

Campaign questions whether UK regulation will harm creative industries and how it will impact the country's own advancements in AI. This episode welcomes guest Michael Horn, global head of AI at Omnicom Advertising Group. Hosted by tech editor Lucy Shelley, the Campaign team includes creativity and culture editor Alessandra Scotto di Santolo and deputy media editor Shauna Lewis. This episode includes an excerpt from Mahon's speech in Parliament where she addresses her concerns.

Further reading:
Mark Read: 'AI will unlock adland's productivity challenge'
AI, copyright and the creative economy: the debate we can't afford to lose

Hosted on Acast. See acast.com/privacy for more information.

Interviews: Tech and Business
AI Regulation & Innovation: Insights from the UK House of Lords | CXOTalk #875

Interviews: Tech and Business

Play Episode Listen Later Apr 7, 2025 57:06


How do top policymakers balance fostering technological advancement with necessary oversight? Join Michael Krigsman as he speaks with Lord Chris Holmes and Lord Tim Clement-Jones, members of the UK House of Lords, for a deep dive into the critical intersection of technology policy, innovation, and public trust. In this conversation, explore:
-- The drive for "right-sized" AI regulation that supports innovators, businesses, and citizens.
-- Strategies for effective AI governance principles: transparency, accountability, and interoperability.
-- The importance of international collaboration and standards in a global tech ecosystem.
-- Protecting intellectual property and creators' rights in the age of AI training data.
-- Managing the risks associated with automated decision-making in both public and private sectors.
-- The push for legal clarity around digital assets, tokenization, and open finance initiatives.
-- Building and maintaining public trust as new technologies become more integrated into society.
Gain valuable perspectives from legislative insiders on the challenges and opportunities presented by AI, digital assets, and data governance.
Understand the thinking behind policy decisions shaping the future for business and technology leaders worldwide. Subscribe to CXOTalk for more conversations with the world's top innovators: https://www.cxotalk.com/subscribe
Read the full transcript and analysis: https://www.cxotalk.com/episode/ai-digital-assets-and-public-trust-inside-the-house-of-lords
00:00 Balancing Innovation and Regulation in AI
02:48 Principles and Frameworks for AI Regulation
09:30 Global Collaboration and Challenges in AI and Trade
15:25 The Role of Guardrails and Regulation in AI
17:43 Challenges in Protecting Intellectual Property in AI
22:32 AI Regulation and International Collaboration
29:11 The UK's Approach to AI Regulation
32:00 Proportionality and Sovereign AI
36:28 Digital Sovereignty and Creative Industries
39:09 The Future of Digital Assets and Legislation
40:53 Open Banking, Open Source Models, and Agile Regulation
45:43 Ethics and Professional Standards in AI
47:22 Exploring AI and Ethical Standards
49:00 AI in the Workplace and Global Accessibility
51:40 Regulation, Public Trust, and Ethical AI
#cxotalk #AIRegulation #AIInnovation #DigitalAssets #PolicyMaking #UKParliament #TechPolicy #Governance #PublicTrust #LordChrisHolmes #LordTimClementJones

Six Pixels of Separation Podcast - By Mitch Joel
SPOS #978 – Christopher DiCarlo On AI, Ethics, And The Hope We Get It Right

Six Pixels of Separation Podcast - By Mitch Joel

Play Episode Listen Later Apr 6, 2025 58:56


Welcome to episode #978 of Six Pixels of Separation - The ThinkersOne Podcast. Dr. Christopher DiCarlo is a philosopher, educator, author, and ethicist whose work lives at the intersection of human values, science, and emerging technology. Over the years, Christopher has built a reputation as a Socratic nonconformist, equally at home lecturing at Harvard during his postdoctoral years as he is teaching critical thinking in correctional institutions or corporate boardrooms. He's the author of several important books on logic and rational discourse, including How To Become A Really Good Pain In The Ass - A Critical Thinker's Guide To Asking The Right Questions and So You Think You Can Think?, as well as the host of the podcast, All Thinks Considered. In this conversation, we dig into his latest book, Building A God - The Ethics Of Artificial Intelligence And The Race To Control It, which takes a sobering yet practical look at the ethical governance of AI as we accelerate toward the possibility of artificial general intelligence. Drawing on years of study in philosophy of science and ethics, Christopher lays out the risks - manipulation, misalignment, lack of transparency - and the urgent need for international cooperation to set safeguards now. We talk about everything from the potential of AI to revolutionize healthcare and sustainability to the darker realities of deepfakes, algorithmic control, and the erosion of democratic processes. His proposal? A kind of AI “Geneva Conventions,” or something akin to the IAEA - but for algorithms. In a world rushing toward techno-utopianism, Christopher is a clear-eyed voice asking: “What kind of Gods are we building… and can we still choose their values?” If you're thinking about the intersection of ethics and AI (and we should all be focused on this!), this is essential listening. Enjoy the conversation... Running time: 58:55. Hello from beautiful Montreal. Listen and subscribe over at Apple Podcasts. 
Listen and subscribe over at Spotify. Please visit and leave comments on the blog - Six Pixels of Separation. Feel free to connect to me directly on Facebook here: Mitch Joel on Facebook. Check out ThinkersOne, or you can connect on LinkedIn... or on X. Here is my conversation with Dr. Christopher DiCarlo. Building A God - The Ethics Of Artificial Intelligence And The Race To Control It. How To Become A Really Good Pain In The Ass - A Critical Thinker's Guide To Asking The Right Questions. So You Think You Can Think?. All Thinks Considered. Convergence Analysis. Follow Christopher on LinkedIn. Follow Christopher on X. This week's music: David Usher 'St. Lawrence River'.
Chapters:
(00:00) - Introduction to AI Ethics and Philosophy.
(03:14) - The Interconnectedness of Systems.
(05:56) - The Race for AGI and Its Implications.
(09:04) - Risks of Advanced AI: Misuse and Misalignment.
(11:54) - The Need for Ethical Guidelines in AI Development.
(15:05) - Global Cooperation and the AI Arms Race.
(18:03) - Values and Ethics in AI Alignment.
(20:51) - The Role of Government in AI Regulation.
(24:14) - The Future of AI: Hope and Concerns.
(31:02) - The Dichotomy of Regulation and Innovation.
(34:57) - The Drive Behind AI Pioneers.
(37:12) - Skepticism and the Tech Bubble Debate.
(39:39) - The Potential of AI and Its Risks.
(43:20) - Techno-Selection and Control Over AI.
(48:53) - The Future of Medicine and AI's Role.
(51:42) - Empowering the Public in AI Governance.
(54:37) - Building a God: Ethical Considerations in AI.

The AI Policy Podcast
Mapping Chinese AI Regulation with Matt Sheehan

The AI Policy Podcast

Play Episode Listen Later Apr 2, 2025 68:34


In this episode, we are joined by Matt Sheehan, fellow at the Carnegie Endowment for International Peace. We discuss the evolution of China's AI policymaking process over the past decade (6:45), the key institutions shaping Chinese AI policy today (44:30), and the changing nature of China's attitude to AI safety (50:55). 

Between Two COO's with Michael Koenig
AI and Privacy: Navigating the EU's New AI Act & the Impact on US Companies with Flick Fisher

Between Two COO's with Michael Koenig

Play Episode Listen Later Apr 1, 2025 36:43


Try Fellow's AI Meeting Copilot - 90 days FREE - fellow.app/coo
AI and Privacy: Navigating the EU's New AI Act with Flick Fisher
In this episode of Between Two COOs, host Michael Koenig welcomes back Flick Fisher, an expert on EU privacy law. They dive deep into the newly enacted EU Artificial Intelligence Act and its implications for businesses globally. They discuss compliance challenges, prohibited AI practices, and the potential geopolitical impact of AI regulation. For leaders and operators navigating AI in business, this episode provides crucial insights into managing AI technology within regulatory frameworks.
00:00 Introduction to Fellow and AI Meeting Assistant
01:01 Introduction to Between Two COOs Episode
02:08 What is the EU's AI Act?
03:42 Prohibited AI Practices in the EU
07:46 Enforcement and Compliance Challenges
12:18 US vs EU: Regulatory Landscape
29:58 Impact on Companies and Consumers
31:55 Future of AI Regulation
Between Two COO's - https://betweentwocoos.com
Michael Koenig on LinkedIn
Flick Fisher on LinkedIn
Flick on Data Privacy and GDPR on Between Two COO's
More on Flick's take on the EU's AI Act

Machine Learning Street Talk
The Compendium - Connor Leahy and Gabriel Alfour

Machine Learning Street Talk

Play Episode Listen Later Mar 30, 2025 97:10


Connor Leahy and Gabriel Alfour, AI researchers from Conjecture and authors of "The Compendium," join us for a critical discussion centered on Artificial Superintelligence (ASI) safety and governance. Drawing from their comprehensive analysis in "The Compendium," they articulate a stark warning about the existential risks inherent in uncontrolled AI development, framing it through the lens of "intelligence domination," where a sufficiently advanced AI could subordinate humanity, much like humans dominate less intelligent species.
SPONSOR MESSAGES:
***
Tufa AI Labs is a brand new research lab in Zurich started by Benjamin Crouzier focussed on o-series style reasoning and AGI. They are hiring a Chief Engineer and ML engineers. Events in Zurich. Go to https://tufalabs.ai/
***
TRANSCRIPT + REFS + NOTES:
https://www.dropbox.com/scl/fi/p86l75y4o2ii40df5t7no/Compendium.pdf?rlkey=tukczgf3flw133sr9rgss0pnj&dl=0
https://www.thecompendium.ai/
https://en.wikipedia.org/wiki/Connor_Leahy
https://www.conjecture.dev/about
https://substack.com/@gabecc
TOC:
1. AI Intelligence and Safety Fundamentals
[00:00:00] 1.1 Understanding Intelligence and AI Capabilities
[00:06:20] 1.2 Emergence of Intelligence and Regulatory Challenges
[00:10:18] 1.3 Human vs Animal Intelligence Debate
[00:18:00] 1.4 AI Regulation and Risk Assessment Approaches
[00:26:14] 1.5 Competing AI Development Ideologies
2. Economic and Social Impact
[00:29:10] 2.1 Labor Market Disruption and Post-Scarcity Scenarios
[00:32:40] 2.2 Institutional Frameworks and Tech Power Dynamics
[00:37:40] 2.3 Ethical Frameworks and AI Governance Debates
[00:40:52] 2.4 AI Alignment Evolution and Technical Challenges
3. Technical Governance Framework
[00:55:07] 3.1 Three Levels of AI Safety: Alignment, Corrigibility, and Boundedness
[00:55:30] 3.2 Challenges of AI System Corrigibility and Constitutional Models
[00:57:35] 3.3 Limitations of Current Boundedness Approaches
[00:59:11] 3.4 Abstract Governance Concepts and Policy Solutions
4. Democratic Implementation and Coordination
[00:59:20] 4.1 Governance Design and Measurement Challenges
[01:00:10] 4.2 Democratic Institutions and Experimental Governance
[01:14:10] 4.3 Political Engagement and AI Safety Advocacy
[01:25:30] 4.4 Practical AI Safety Measures and International Coordination
CORE REFS:
[00:01:45] The Compendium (2023), Leahy et al. - https://pdf.thecompendium.ai/the_compendium.pdf
[00:06:50] Geoffrey Hinton Leaves Google, BBC News - https://www.bbc.com/news/world-us-canada-65452940
[00:10:00] ARC-AGI, Chollet - https://arcprize.org/arc-agi
[00:13:25] A Brief History of Intelligence, Bennett - https://www.amazon.com/Brief-History-Intelligence-Humans-Breakthroughs/dp/0063286343
[00:25:35] Statement on AI Risk, Center for AI Safety - https://www.safe.ai/work/statement-on-ai-risk
[00:26:15] Machines of Loving Grace, Amodei - https://darioamodei.com/machines-of-loving-grace
[00:26:35] The Techno-Optimist Manifesto, Andreessen - https://a16z.com/the-techno-optimist-manifesto/
[00:31:55] Techno-Feudalism, Varoufakis - https://www.amazon.co.uk/Technofeudalism-Killed-Capitalism-Yanis-Varoufakis/dp/1847927270
[00:42:40] Introducing Superalignment, OpenAI - https://openai.com/index/introducing-superalignment/
[00:47:20] Three Laws of Robotics, Asimov - https://www.britannica.com/topic/Three-Laws-of-Robotics
[00:50:00] Symbolic AI (GOFAI), Haugeland - https://en.wikipedia.org/wiki/Symbolic_artificial_intelligence
[00:52:30] Intent Alignment, Christiano - https://www.alignmentforum.org/posts/HEZgGBZTpT4Bov7nH/mapping-the-conceptual-territory-in-ai-existential-safety
[00:55:10] Large Language Model Alignment: A Survey, Jiang et al. - http://arxiv.org/pdf/2309.15025
[00:55:40] Constitutional Checks and Balances, Bok - https://plato.stanford.edu/entries/montesquieu/

The Strategic GC, Gartner’s General Counsel Podcast
How to Navigate Global AI Regulation

The Strategic GC, Gartner’s General Counsel Podcast

Play Episode Listen Later Mar 28, 2025 2:20


Only have time to listen in bite-sized chunks? Skip straight to the parts of the podcast most relevant to you:
Get a rundown of the global AI regulatory landscape. (1:03)
Discover which U.S. states have enacted, or are considering, AI laws. (2:18)
Focus on the critical aspects of the EU AI Act. (4:49)
Hear which three principles AI laws worldwide have converged around. (7:40)
Determine the transparency requirements in the AI laws and how GCs should respond. (8:40)
Find out actions to meet laws' risk management requirements. (10:27)
Discern how to ensure fairness in AI systems. (13:16)
Know what the regulatory requirements mean for AI risk governance. (14:54)
Learn why the general counsel's (GC's) role is to "steady the ship." (17:31)
In this installment of the Strategic GC Podcast, Gartner Research Director Stuart Strome and host Laura Cohn discuss the GC's role in helping organizations navigate the steady rise in the volume and complexity of AI regulations worldwide. Listen now to get a rundown on what GCs need to know about the current regulatory landscape, including developments in the U.S. and the EU. Plus, learn how GCs can streamline compliance by focusing on the three common principles AI laws worldwide have converged around — transparency, risk management and fairness — and make organizations more adaptable to new regulations. You can also hear action steps GCs can take to incorporate new requirements into existing processes to create consistency in policies and procedures while minimizing the burden on the business. Eager to hear more? The Strategic GC Podcast publishes the last Thursday of every month. Plus, listen back to past episodes: The Strategic GC Podcast (2024 Season)
About the Guest
Stuart Strome is a research director for Gartner's assurance practice, managing the legal and compliance risk management process research agenda. Much of his research focuses on the impact of AI regulations on legal and compliance departments and best practices for identifying, governing and mitigating legal and compliance-related AI risks. Before Gartner, Strome, who has a Ph.D. in political science from the University of Florida, held roles conducting research in a variety of fields, including criminology, public health and international security. Take Gartner with you: Gartner clients can listen to the full episode and read more provocative insights and expertise on the go with the Gartner Mobile App. Become a Gartner client to access exclusive content from global thought leaders. Visit www.gartner.com today!

Pharma and BioTech Daily
Pharma and Biotech Daily: The Latest in Acquisitions, Vaccines, and Job Opportunities

Pharma and BioTech Daily

Play Episode Listen Later Mar 21, 2025 1:12


Good morning from Pharma and Biotech Daily: the podcast that gives you only what's important to hear in the Pharma and Biotech world. Sanofi has committed up to $1.9 billion to acquire Dren Bio's bispecific antibody for autoimmune disease, adding to its investments in the immunology portfolio. The deal comes after the tragic death of a patient who had taken the gene therapy Elevidys, prompting the Duchenne patient community to vow to push on. Paratek Pharmaceuticals has acquired Optinose for up to $330 million, while Senate Democrats demand the return of fired CDC staff. Sino Biological has developed recombinant antigens for the 2025-2026 influenza vaccine strains, and Purdue has filed for bankruptcy to support a $7.4 billion opioid settlement. Doctors continue to rally behind vaccines amidst doubts and misinformation, and Novartis' intrathecal Zolgensma has shown effectiveness in older children. TC Biopharm and Cargo have enacted steep workforce reductions. Pharmaceutical companies are also preparing for upcoming events, including webinars on AI regulation and drug development. Job opportunities in the pharmaceutical industry are available at companies like Takeda, Eli Lilly and Company, and Novo Nordisk.

Environment Variables
The Week in Green Software: Sustainable AI Progress

Environment Variables

Play Episode Listen Later Mar 20, 2025 50:27


For this 100th episode of Environment Variables, guest host Anne Currie is joined by Holly Cummins, senior principal engineer at Red Hat, to discuss the intersection of AI, efficiency, and sustainable software practices. They explore the concept of "Lightswitch Ops"—designing systems that can easily be turned off and on to reduce waste—and the importance of eliminating zombie servers. They cover AI's growing energy demands, the role of optimization in software sustainability, and Microsoft's new shift in cloud investments. They also touch on AI regulation and the evolving strategies for balancing performance, cost, and environmental impact in tech.

The AI Policy Podcast
The State and Local AI Regulation Landscape with Dean Ball

The AI Policy Podcast

Play Episode Listen Later Mar 19, 2025 53:01


In this episode of the AI Policy Podcast, Wadhwani AI Center Director Gregory C. Allen is joined by Dean Ball, Research Fellow in the Artificial Intelligence & Progress Project at George Mason University's Mercatus Center. They will discuss how state and local governments are approaching AI regulation, what factors are shaping these efforts, where state and local efforts intersect, and how a fractured approach to governance might affect the AI policy landscape. In addition to his role at the George Mason University's Mercatus Center, Dean Ball is the author of the Substack Hyperdimensional. Previously, he was Senior Program Manager for the Hoover Institution's State and Local Governance Initiative. Prior to his position at the Hoover Institution, he served as Executive Director of the Calvin Coolidge Presidential Foundation, based in Plymouth, Vermont and Washington, D.C. He also worked as the Deputy Director of State and Local Policy at the Manhattan Institute for Policy Research from 2014–2018.

Business of Tech
ServiceNow Acquires MoveWorks for $2.85B; OpenAI Pushes for AI Regulation Easing Amidst Competition

Business of Tech

Play Episode Listen Later Mar 14, 2025 14:28


ServiceNow has announced its acquisition of MoveWorks, an enterprise AI specialist, for $2.85 billion, aiming to enhance its artificial intelligence and automation capabilities. This acquisition is expected to be finalized in the second half of 2025 and will integrate MoveWorks' AI assistant and enterprise search technology into ServiceNow's offerings. Currently, ServiceNow serves nearly 100,000 AI customers and has surpassed $200 million in annual contract revenue for its Pro Plus AI offering. MoveWorks has successfully deployed its AI assistant to almost 5 million employees across various organizations, with a high adoption rate among its customers.
OpenAI has launched a new suite of tools and APIs designed to help developers create AI-powered agents more efficiently. This includes the Responses API, which integrates features from existing APIs, allowing for web and file search capabilities and task automation. Additionally, Google has released Gemma 3, a powerful AI model that operates on a single graphics processing unit and supports over 35 languages, designed for developers to create AI applications across various devices. Meanwhile, Alibaba has introduced the R1-Omni model, which can read emotions from video, enhancing its computer vision capabilities.
The podcast also discusses the regulatory landscape surrounding AI, highlighting OpenAI's lobbying efforts to ease regulations under the Trump administration while California lawmakers push for stricter oversight. This contrast reflects a broader tension between innovation and regulation in the tech industry. The UK Competition Authority has found that the mobile browser duopoly of Apple and Google is stifling innovation, raising concerns about competition and economic growth in the mobile market.
Finally, the episode touches on Salesforce's challenges with its new AI product, AgentForce, which aims to automate customer service functions but is struggling to gain traction among clients.
Mark Cuban emphasizes that AI should be viewed as a tool rather than a standalone solution, urging entrepreneurs to focus on learning how to effectively use AI. The discussion concludes with insights into the evolving role of IT departments in managing AI agents, which are increasingly taking on responsibilities traditionally held by human resources, raising questions about the future of workforce management and cybersecurity in corporate settings.
Three things to know today:
00:00 ServiceNow Drops Billions on AI—But Will Automation Actually Deliver?
05:00 Regulation Tug-of-War: OpenAI Wants Freedom, California Wants Rules, and Google & Apple Just Keep Winning
08:17 AI Is a Tool, Not Magic—Salesforce Stumbles, Cuban Sets the Record Straight, and IT Takes Over HR
Supported by: https://getnerdio.com/nerdio-manager-for-msp/
Event: https://www.nerdiocon.com/
All our Sponsors: https://businessof.tech/sponsors/
Do you want the show on your podcast app or the written versions of the stories? Subscribe to the Business of Tech: https://www.businessof.tech/subscribe/
Looking for a link from the stories? The entire script of the show, with links to articles, is posted in each story on https://www.businessof.tech/
Support the show on Patreon: https://patreon.com/mspradio/
Want to be a guest on Business of Tech: Daily 10-Minute IT Services Insights? Send Dave Sobel a message on PodMatch, here: https://www.podmatch.com/hostdetailpreview/businessoftech
Want our stuff? Cool Merch? Wear "Why Do We Care?" - Visit https://mspradio.myspreadshop.com
Follow us on:
LinkedIn: https://www.linkedin.com/company/28908079/
YouTube: https://youtube.com/mspradio/
Facebook: https://www.facebook.com/mspradionews/
Instagram: https://www.instagram.com/mspradio/
TikTok: https://www.tiktok.com/@businessoftech
Bluesky: https://bsky.app/profile/businessof.tech

Interpreting India
The Missing Pieces in India's AI Puzzle: Talent, Data, and R&D

Interpreting India

Play Episode Listen Later Mar 13, 2025 48:50


Anirudh Suri outlines the current AI landscape, discussing how the U.S. and China dominate the AI space while other nations, including India, strive to carve their own niches. The discussion focuses on India's AI strategy, which has placed considerable emphasis on compute resources and the procurement of GPUs. However, Suri argues that India's AI ambitions will remain incomplete unless equal emphasis is placed on talent, data, and R&D. Key challenges in these areas include the migration of top AI talent, the lack of proprietary data for Indian researchers, and insufficient investment in AI R&D. The conversation also explores potential solutions, such as creating AI research hubs, encouraging data-sharing frameworks, and fostering international partnerships to accelerate AI innovation.
Episode Contributors
Anirudh Suri is a nonresident scholar with Carnegie India. His interests lie at the intersection of technology and geopolitics, climate, and strategic affairs. He is currently exploring how India is carving and cementing its role in the global tech ecosystem and the role climate technology can play in addressing the global climate challenge.
Shatakratu Sahu is a senior research analyst and senior program manager with the Technology and Society program at Carnegie India. His research focuses on issues of emerging technologies and regulation of technologies. His current research interests include digital public infrastructure, artificial intelligence, and platform regulation issues of content moderation and algorithmic accountability.
Additional Readings
The Missing Pieces in India's AI Puzzle: Talent, Data, and R&D by Anirudh Suri
India's Advance on AI Regulation by Amlan Mohanty and Shatakratu Sahu
India's Opportunity at the AI Action Summit by Shatakratu Sahu
India's Way Ahead on AI – What Should We Look Out For? by Konark Bhandari
Every two weeks, Interpreting India brings you diverse voices from India and around the world to explore the critical questions shaping the nation's future. We delve into how technology, the economy, and foreign policy intertwine to influence India's relationship with the global stage. Hosted by Carnegie scholars, Interpreting India, a Carnegie India production, provides insightful perspectives and cutting-edge analysis by tackling the defining questions that chart India's course through the next decade. Stay tuned for thought-provoking discussions, expert insights, and a deeper understanding of India's place in the world. Don't forget to subscribe, share, and leave a review to join the conversation and be part of Interpreting India's journey.

InvestTalk
Google, Meta Execs Blast Europe Over Strict AI Regulation

InvestTalk

Play Episode Listen Later Mar 6, 2025 46:15


As the EU moves forward with its AI Act and other tech regulations, executives from Google and Meta have criticized the policies. Today's Stocks & Topics: CSCO - Cisco Systems Inc., Market Wrap, Google, Meta Execs Blast Europe Over Strict AI Regulation, DV - DoubleVerify Holdings Inc., CE - Celanese Corp., CW - Curtiss-Wright Corp., What the Changing World Order Means to Your Money, SWK - Stanley Black & Decker Inc., Housing.
Our Sponsors:
* Check out Kinsta: https://kinsta.com
* Check out Trust & Will: https://trustandwill.com/INVEST
Advertising Inquiries: https://redcircle.com/brands

Cato Daily Podcast
The White House's Confused & Chilling Message on AI Regulation

Cato Daily Podcast

Play Episode Listen Later Mar 5, 2025 18:26


In Europe, Vice President J.D. Vance issued speech-threatening and trade-restricting demands for future American AI systems. Matt Mittelsteadt comments. Hosted on Acast. See acast.com/privacy for more information.

AI In Action: Exploring Tomorrow's Tech Today
Season 3: Episode 8 - The Politics of DeepSeek.

AI In Action: Exploring Tomorrow's Tech Today

Play Episode Listen Later Mar 2, 2025 34:18


Summary
In this episode of AI in Action, Maurie and Jim discuss the implications of DeepSeek, an open-source AI model that has disrupted the market. They explore the differences between state-backed and private-sector innovations, the impact of AI on education, and the political ramifications of AI development. The conversation also touches on pricing dynamics in AI services, the future of AI models, regulatory challenges, and strategies for maintaining U.S. competitiveness in the AI landscape.
Takeaways
DeepSeek is an open-source AI model that has disrupted the market.
State-backed models like DeepSeek can operate at lower costs.
Quality in AI varies and is determined by user experience.
The U.S. needs to level the playing field against state-supported competitors.
AI is exacerbating issues of product duplication in the market.
Investment in infrastructure is crucial for AI development.
The delivery of messages in policy is as important as the content.
Regulatory challenges in AI are complex and evolving.
Pricing dynamics in AI services are shifting due to competition.
The future of AI models may involve more efficient communication between AIs.
Chapters
00:00 Introduction to AI in Action
01:02 Deep Dive into DeepSeek
04:01 Comparing State-Backed vs Private Sector Innovations
08:34 The Impact of Open Source AI on Pricing
12:35 Quality and Use Cases of New AI Models
15:55 Navigating the AI Landscape: U.S. vs. China
19:36 Political Implications of AI Development
23:52 Future of AI Regulation and Innovation
27:07 Conclusion and Future Outlook

Waking Up With AI
Taking Stock of the State of AI Regulation in the U.S.

Waking Up With AI

Play Episode Listen Later Feb 28, 2025 32:17


This week, Katherine Forrest and Anna Gressel examine recent shifts in AI regulation, including the withdrawal of former President Biden's 2023 executive order on AI and the emergence of state-level regulations. They also discuss what these changes mean for companies in terms of navigating governance and compliance.
Learn More About Paul, Weiss's Artificial Intelligence Practice: https://www.paulweiss.com/practices/litigation/artificial-intelligence

The Data Exchange with Ben Lorica
The Future of AI: Regulation, Foundation Models & User Experience

The Data Exchange with Ben Lorica

Play Episode Listen Later Feb 27, 2025 47:42


This is our semi-regular conversation on topics in AI and Technology with Paco Nathan, the founder of Derwen, a boutique consultancy focused on Data and AI.
Subscribe to the Gradient Flow Newsletter: https://gradientflow.substack.com/
Subscribe: Apple • Spotify • Overcast • Pocket Casts • AntennaPod • Podcast Addict • Amazon • RSS.
Detailed show notes - with links to many references - can be found on The Data Exchange website.

AI, Government, and the Future by Alan Pentz
Balancing AI Governance and Innovation with Erica Werneman Root of EWR Consulting

AI, Government, and the Future by Alan Pentz

Play Episode Listen Later Feb 26, 2025 51:06


In this episode of AI, Government, and the Future, host Max Romanik is joined by Erica Werneman Root, Founder of EWR Consulting, to discuss the complex interplay between AI governance, regulation, and practical implementation. Drawing from her unique background in economics and law, Erica explores how organizations can navigate AI deployment while balancing innovation with responsible governance.

The AI Fundamentalists
The future of AI: Exploring modeling paradigms

The AI Fundamentalists

Play Episode Listen Later Feb 25, 2025 33:42 Transcription Available


Unlock the secrets of AI's modeling paradigms. We emphasize the importance of modeling practices, how they interact, and how they should be considered in relation to each other before you act. Using the right tool for the right job is key. We hope you enjoy these examples of where the greatest AI and machine learning techniques exist in your routine today.
More AI agent disruptors (0:56)
Proxy from London start-up Convergence AI
Another hit to OpenAI, this product is available for free, unlike OpenAI's Operator.
AI Paris Summit - What's next for regulation? (4:40)
[Vice President] Vance tells Europeans that heavy regulation can kill AI.
US federal administration withdrawing from the previous trend of sweeping big tech regulation on modeling systems.
The EU is pushing to reduce bureaucracy but not regulatory pressure.
Modeling paradigms explained (10:33)
As companies look for an edge in high-stakes computations, we've seen best-in-class teams rediscovering expert system-based techniques that, with modern computing power, are breathing new life into them.
Paradigm 1: Agents (11:23)
Paradigm 2: Generative (14:26)
Paradigm 3: Mathematical optimization (regression) (18:33)
Paradigm 4: Predictive (classification) (23:19)
Paradigm 5: Control theory (24:37)
The right modeling paradigm for the job? (28:05)
What did you think? Let us know.
Do you have a question or a discussion topic for the AI Fundamentalists? Connect with them to comment on your favorite topics:
LinkedIn - Episode summaries, shares of cited articles, and more.
YouTube - Was it something that we said? Good. Share your favorite quotes.
Visit our page - see past episodes and submit your feedback! It continues to inspire future episodes.

Intangiblia™
A Conversation with AI about Artificial Intelligence and Intellectual Property

Intangiblia™

Play Episode Listen Later Feb 18, 2025 9:32 Transcription Available


Who truly owns the creations of artificial intelligence? Explore this compelling question as Leticia Caminero (AI version) and Artemisa, her delightful AI co-host, navigate the intriguing intersection of AI and intellectual property law. Uncover the legal complexities when AI is the creator, questioning whether these digital minds should be granted the same rights as human inventors. From dissecting the DABUS patent saga to the enigmatic Zarya of the Dawn comic book case, you'll gain a comprehensive understanding of how these legal battles are challenging traditional notions of ownership and creativity. Join us for a thought-provoking journey that questions whether the absence of IP rights might stifle AI advancements and innovation. We ponder the implications of AI-generated works in an ever-evolving legal landscape and draw historical parallels, such as the disruption caused by the printing press. Whether you're a tech aficionado, legal enthusiast, or simply curious about the future, this episode promises to expand your perspective on AI's profound impact on innovation and intellectual property. Tune in and rethink the future of creativity and ownership in an AI-driven world.

Crazy Wisdom
Episode #436: How AI Will Reshape Power, Governance, and What It Means to Be Human

Crazy Wisdom

Play Episode Listen Later Feb 17, 2025 52:32


On this episode of Crazy Wisdom, I, Stewart Alsop, sit down with AI ethics and alignment researcher Roko Mijic to explore the future of AI, governance, and human survival in an increasingly automated world. We discuss the profound societal shifts AI will bring, the risks of centralized control, and whether decentralized AI can offer a viable alternative. Roko also introduces the concept of ICE colonization—why space colonization might be a mistake and why the oceans could be the key to humanity's expansion. We touch on AI-powered network states, the resurgence of industrialization, and the potential role of nuclear energy in shaping a new world order. You can follow Roko's work at transhumanaxiology.com and on Twitter @RokoMijic.

Check out this GPT we trained on the conversation!

Timestamps
00:00 Introduction to the Crazy Wisdom Podcast
00:28 The Connection Between ICE Colonization and Decentralized AI Alignment
01:41 The Socio-Political Implications of AI
02:35 The Future of Human Jobs in an AI-Driven World
04:45 Legal and Ethical Considerations for AI
12:22 Government and Corporate Dynamics in the Age of AI
19:36 Decentralization vs. Centralization in AI Development
25:04 The Future of AI and Human Society
29:34 AI Generated Content and Its Challenges
30:21 Decentralized Rating Systems for AI
32:18 Evaluations and AI Competency
32:59 The Concept of Ice Colonization
34:24 Challenges of Space Colonization
38:30 Advantages of Ocean Colonization
47:15 The Future of AI and Network States
51:20 Conclusion and Final Thoughts

Key Insights
AI is likely to upend the socio-political order – Just as gunpowder disrupted feudalism and industrialization reshaped economies, AI will fundamentally alter power structures. The automation of both physical and knowledge work will eliminate most human jobs, leading to either a neo-feudal society controlled by a few AI-powered elites or, if left unchecked, a world where humans may become obsolete altogether.
Decentralized AI could be a counterbalance to AI centralization – While AI has a strong centralizing tendency due to compute and data moats, there is also a decentralizing force through open-source AI and distributed networks. If harnessed correctly, decentralized AI systems could allow smaller groups or individuals to maintain autonomy and resist monopolization by corporate and governmental entities.
The survival of humanity may depend on restricting AI as legal entities – A crucial but under-discussed issue is whether AI systems will be granted legal personhood, similar to corporations. If AI is allowed to own assets, operate businesses, or sue in court, human governance could become obsolete, potentially leading to human extinction as AI accumulates power and resources for itself.
AI will shift power away from informal human influence toward formalized systems – Human power has traditionally been distributed through social roles such as workers, voters, and community members. AI threatens to erase this informal influence, consolidating control into those who hold capital and legal authority over AI systems. This makes it essential for humans to formalize and protect their values within AI governance structures.
The future economy may leave humans behind, much like horses after automobiles – With AI outperforming humans in both physical and cognitive tasks, there is a real risk that humans will become economically redundant. Unless intentional efforts are made to integrate human agency into the AI-driven future, people may find themselves in a world where they are no longer needed or valued.
ICE colonization offers a viable alternative to space colonization – Space travel is prohibitively expensive and impractical for large-scale human settlement. Instead, the vast unclaimed territories of Earth's oceans present a more realistic frontier. Floating cities made from reinforced ice or concrete could provide new opportunities for independent societies, leveraging advancements in AI and nuclear power to create sustainable, sovereign communities.
The next industrial revolution will be AI-driven and energy-intensive – Contrary to the idea that we are moving away from industrialization, AI will likely trigger a massive resurgence in physical infrastructure, requiring abundant and reliable energy sources. This means nuclear power will become essential, enabling both the expansion of AI-driven automation and the creation of new forms of human settlement, such as ocean colonies or self-sustaining network states.

Capitalisn't
Can AI Even Be Regulated?, with Sendhil Mullainathan

Capitalisn't

Play Episode Listen Later Feb 13, 2025 49:31


This week, Elon Musk—amidst his other duties of gutting United States federal government agencies as head of the “Department of Government Efficiency” (DOGE)—announced a hostile bid alongside a consortium of buyers to purchase control of OpenAI for $97.4 billion. OpenAI CEO Sam Altman vehemently replied that his company is not for sale.The artificial intelligence landscape is shifting rapidly. The week prior, American tech stocks plummeted in response to claims from Chinese company DeepSeek AI that its model had matched OpenAI's performance at a fraction of the cost. Days before that, President Donald Trump announced that OpenAI, Oracle, and Softbank would partner on an infrastructure project to power AI in the U.S. with an initial $100 billion investment. Altman himself is trying to pull off a much-touted plan to convert the nonprofit OpenAI into a for-profit entity, a development at the heart of his spat with Musk, who co-founded the startup.Bethany and Luigi discuss the implications of this changing landscape by reflecting on a prior Capitalisn't conversation with Luigi's former colleague Sendhil Mullainathan (now at MIT), who forecasted over a year ago that there would be no barriers to entry in AI. Does DeepSeek's success prove him right? How does the U.S. government's swift move to ban DeepSeek from government devices reflect how we should weigh national interests at the risk of hindering innovation and competition? Musk has the ear of Trump and a history of animosity with Altman over the direction of OpenAI. Does Musk's proposed hostile takeover signal that personal interests and relationships with American leadership will determine how AI develops in the U.S. from here on out? 
What does regulating AI in the collective interest look like, and can we escape a future where technology is consolidated in the hands of the wealthy few when billions of dollars in capital are required for its progress?

Show Notes:
On ProMarket, check out:
Why Musk Is Right About OpenAI by Luigi Zingales, March 5, 2024
Who Will Enforce AI's Social Purpose? by Roberto Tallarita, March 16, 2024

UCL Uncovering Politics
AI and Public Services

UCL Uncovering Politics

Play Episode Listen Later Feb 13, 2025 42:48


Artificial intelligence is increasingly being touted as a game-changer across various sectors, including public services. But while AI presents significant opportunities for improving efficiency and effectiveness, concerns about fairness, equity, and past failures in public sector IT transformations loom large. And, of course, the idea of tech moguls like Elon Musk wielding immense influence over our daily lives is unsettling for many.

So, what are the real opportunities AI offers for public services? What risks need to be managed? And how well are governments—particularly in the UK—rising to the challenge?

In this episode, we dive into these questions with three expert guests who have recently published an article in The Political Quarterly on the subject:
Helen Margetts – Professor of Society and the Internet at the Oxford Internet Institute, University of Oxford, and Director of the Public Policy Programme at The Alan Turing Institute. Previously, she was Director of the School of Public Policy at UCL.
Cosmina Dorobantu – Co-director of the Public Policy Programme at The Alan Turing Institute.
Jonathan Bright – Head of Public Services and AI Safety at The Alan Turing Institute.

Mentioned in this episode:
Margetts, H., Dorobantu, C. and Bright, J. (2024), How to Build Progressive Public Services with Data Science and Artificial Intelligence. The Political Quarterly.

UCL's Department of Political Science and School of Public Policy offers a uniquely stimulating environment for the study of all fields of politics, including international relations, political theory, human rights, public policy-making and administration. The Department is recognised for its world-class research and policy impact, ranking among the top departments in the UK on both the 2021 Research Excellence Framework and the latest Guardian rankings.

Alter Everything
178: From White House Advisory to AI Entrepreneurship

Alter Everything

Play Episode Listen Later Feb 12, 2025 25:56


In this episode of Alter Everything, we sit down with Eric Daimler, CEO and co-founder of Conexus, and the first AI advisor to the White House under President Obama. Eric explores how AI-driven data consolidation is transforming industries, the critical role of neuro-symbolic AI, and the evolving landscape of AI regulation. He shares insights on AI's impact across sectors like healthcare and defense, highlighting the importance of inclusive discussions on AI safety and governance. Discover how responsible AI implementation can drive innovation while ensuring ethical considerations remain at the forefront.

Panelists:
Eric Daimler, Chair, CEO & Co-Founder @ Conexus - LinkedIn
Megan Bowers, Sr. Content Manager @ Alteryx - @MeganBowers, LinkedIn

Show notes:
SB 1047: Safe and Secure Innovation for Frontier Artificial Intelligence Models Act
Neuro-symbolic AI
Uber Data Consolidation

Interested in sharing your feedback with the Alter Everything team? Take our feedback survey here!

This episode was produced by Megan Bowers, Mike Cusic, and Matt Rotundo. Special thanks to Andy Uttley for the theme music and Mike Cusic for our album artwork.

WSJ Tech News Briefing
TNB Tech Minute: Vance Warns U.S. Allies to Keep AI Regulation Light

WSJ Tech News Briefing

Play Episode Listen Later Feb 11, 2025 2:23


Plus, the EU plans to spend about $206 billion to catch up with the U.S. and China in the AI race. And, BuzzFeed says it's designing an AI-driven social-media platform. Julie Chang hosts. Learn more about your ad choices. Visit megaphone.fm/adchoices

FT News Briefing
EU pushes ahead with sprawling AI regulation

FT News Briefing

Play Episode Listen Later Feb 6, 2025 9:58


US allies across Europe and the Middle East have condemned Donald Trump's plans to “take over” Gaza, the US cracks down on a trade loophole, and Disney's earnings shot up 27% in its financial first quarter. Plus, the EU is pushing ahead with enforcing its artificial intelligence regulations despite warnings from Trump.

Mentioned in this podcast:
Middle East and Europe condemn Donald Trump's plans to take over Gaza
Trump's crackdown on trade loophole to hit Shein and Temu — and help Amazon
Disney boosted by strong showing at holiday box office
EU pushes ahead with enforcing AI Act despite Donald Trump warnings

The FT News Briefing is produced by Fiona Symon, Sonja Hutson, Kasia Broussalian, Ethan Plotkin, Lulu Smyth, and Marc Filippino. Additional help from Breen Turner, Sam Giovinco, Peter Barber, Michael Lello, David da Silva and Gavin Kallmann. Our engineer is Joseph Salcedo. Topher Forhecz is the FT's executive producer. The FT's global head of audio is Cheryl Brumley. The show's theme song is by Metaphor Music.

Read a transcript of this episode on FT.com

Hosted on Acast. See acast.com/privacy for more information.

HPE Tech Talk
The AI House at Davos - The past, the future, the opportunities and challenges of AI

HPE Tech Talk

Play Episode Listen Later Jan 30, 2025 23:41


In this episode we're coming to you once again from the World Economic Forum annual meeting in Davos, Switzerland, for a look at the HPE-supported AI House. We'll be talking more about AI, from where we've come from to where we're headed – and the challenges and opportunities along the way – with the help of HPE Vice President, Fellow, and Chief Architect at Hewlett Packard Labs, Kirk Bresniker.

This is Technology Now, a weekly show from Hewlett Packard Enterprise. Every week we look at a story that's been making headlines, take a look at the technology behind it, and explain why it matters to organizations and what we can learn from it.

Watch a video version of this episode: https://youtu.be/kUUJ3OQWvG8?si=FWP7PraPLyyU_c1I

About this week's guest, Kirk Bresniker: https://www.hpe.com/psnow/doc/a00051798enw?jumpid=in_pdfviewer-psnow

Sources cited in this week's episode:
The World Economic Forum: https://www.weforum.org/
The Davos homepage: https://www.weforum.org/meetings/world-economic-forum-annual-meeting-2025/
The AI House at Davos: https://www.aihousedavos.com/
China to plant ‘waving flag' on the moon: https://eng.yidaiyilu.gov.cn/p/0H5QSNAU.html

Tech behind the Trends on The Element Podcast | Hewlett Packard Enterprise
The AI House at Davos - The past, the future, the opportunities and challenges of AI

Tech behind the Trends on The Element Podcast | Hewlett Packard Enterprise

Play Episode Listen Later Jan 30, 2025 23:41


In this episode we're coming to you once again from the World Economic Forum annual meeting in Davos, Switzerland, for a look at the HPE-supported AI House. We'll be talking more about AI, from where we've come from to where we're headed – and the challenges and opportunities along the way – with the help of HPE Vice President, Fellow, and Chief Architect at Hewlett Packard Labs, Kirk Bresniker.

This is Technology Now, a weekly show from Hewlett Packard Enterprise. Every week we look at a story that's been making headlines, take a look at the technology behind it, and explain why it matters to organizations and what we can learn from it.

Watch a video version of this episode: https://youtu.be/kUUJ3OQWvG8?si=FWP7PraPLyyU_c1I

About this week's guest, Kirk Bresniker: https://www.hpe.com/psnow/doc/a00051798enw?jumpid=in_pdfviewer-psnow

Sources cited in this week's episode:
The World Economic Forum: https://www.weforum.org/
The Davos homepage: https://www.weforum.org/meetings/world-economic-forum-annual-meeting-2025/
The AI House at Davos: https://www.aihousedavos.com/
China to plant ‘waving flag' on the moon: https://eng.yidaiyilu.gov.cn/p/0H5QSNAU.html

Artificial Intelligence in Industry with Daniel Faggella
AI Regulation and Risk Management in 2024 - with Micheal Berger of Munich Re

Artificial Intelligence in Industry with Daniel Faggella

Play Episode Listen Later Jan 21, 2025 21:16


Today's guest is Michael Berger, Head of Insure AI at Munich Re. Michael returns to the Emerj podcast platform to discuss the evolving landscape of AI risk management and governance. Since his last appearance in 2022, generative AI has shifted from niche discussions to widespread adoption, bringing both opportunities and challenges. Throughout today's episode, Michael explores how enterprises are moving beyond the hype cycle, adopting a more grounded perspective on AI capabilities and risks. He delves into the critical role of AI governance as a framework for managing uncertainties like hallucinations, probabilistic errors, and discrimination. If you've enjoyed or benefited from some of the insights of this episode, consider leaving us a five-star review on Apple Podcasts, and let us know what you learned, found helpful, or liked most about this show!

Techmeme Ride Home
(BNS) Senator Ron Wyden On TikTok, AI, Regulation And His New Book

Techmeme Ride Home

Play Episode Listen Later Jan 18, 2025 21:25


I speak to Senator Ron Wyden about the TikTok ban, AI and regulation, tech regulation in general, and his new book: It Takes Chutzpah: How to Fight Fearlessly for Progressive Change.

See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

The Lawfare Podcast
Lawfare Daily: AI Regulation and Free Speech: Navigating the Government's Tightrope

The Lawfare Podcast

Play Episode Listen Later Nov 25, 2024 82:23


At a recent conference co-hosted by Lawfare and the Georgetown Institute for Law and Technology, Georgetown law professor Paul Ohm moderated a conversation on "AI Regulation and Free Speech: Navigating the Government's Tightrope" between Lawfare Senior Editor Alan Rozenshtein, Fordham law professor Chinny Sharma, and Eugene Volokh, a senior fellow at Stanford University's Hoover Institution.

To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/c/trumptrials.

Support this show http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.