Podcasts about AI regulation

  • 387 podcasts
  • 578 episodes
  • 35m average episode duration
  • 5 new episodes per week
  • Latest episode: Jun 19, 2025



Best podcasts about AI regulation

Latest podcast episodes about AI regulation

The A.M. Update
Yuge Win For Sanity at SCOTUS | Time to Put the "Poli" Back in Politician | 6/19/25

Jun 19, 2025 · 25:50


Aaron McIntire breaks down President Trump's mixed signals on potential U.S. involvement in the Israel-Iran conflict, Marco Rubio's explanation of Iran's uranium enrichment progress, and a major Supreme Court win upholding Tennessee's ban on meatball surgeries for minors in the name of gender. Plus, Tucker Carlson's sophomoric gotcha attempt on Ted Cruz, a vibe check on Democrats, and Pope Leo XIV's urgent call for AI regulation.

The BIGCast
Circle's Stroke of Genius/Can Anyone Regulate AI?

Jun 18, 2025 · 46:39


John and Glen ruminate on a pair of major events reshuffling the fintech world: the Circle IPO's impact on venture funding, and the Big Beautiful Bill's unorthodox approach to throttling AI regulation. Also: John gets "spun out on quantum," and admits a surprising change of heart regarding his outlook for AI.

Links related to this episode:

  • Glen's blog on the Circle and Chime IPOs: https://www.big-fintech.com/what-the-chime-and-circle-ipos-mean-for-fintech-investment/
  • The Senate alters the House budget provision curtailing states' ability to regulate AI: https://rollcall.com/2025/06/05/ai-regulation-moratorium-dropped-in-senate-budget-package/
  • Axios' story about AI's newfound ability to blackmail: https://www.axios.com/2025/05/23/anthropic-ai-deception-risk
  • CNBC's take on the Genius Act: https://www.cnbc.com/2025/06/13/what-the-genius-act-could-mean-for-crypto-and-other-investors.html
  • TechCrunch's recap of Chime's circuitous path to a successful IPO: https://techcrunch.com/2025/06/12/chime-almost-died-in-2016-turned-down-by-100-vcs-today-it-ipod-at-14-5b/
  • Pitchbook's take on the inevitability of Chime's "down round" IPO: https://pitchbook.com/news/articles/chimes-ipo-signals-down-rounds-are-here-to-stay

Join us for our next CU Town Hall, Wednesday July 9 at 3pm ET/Noon PT, for a live and lively interactive conversation tackling the major issues facing credit unions today. Industry developments keep coming fast and furious; the CU Town Hall is the place to make sense of these items together. It's free to attend, but advance registration is required: https://www.cutownhall.com/

Join us on Bluesky! @bigfintech.bsky.social; @154advisors.bsky.social (Glen); @jbfintech.bsky.social (John)

And connect on LinkedIn for insights like the Friday Fintech Five:
  • https://www.linkedin.com/company/best-innovation-group/
  • https://www.linkedin.com/in/jbfintech/
  • https://www.linkedin.com/in/glensarvady/

AI Lawyer Talking Tech
June 18, 2025 - The Evolving Intersection of AI and Legal Practice

Jun 18, 2025 · 18:13


Welcome to 'AI Lawyer Talking Tech,' your go-to source for understanding the transformative shifts in the legal world. Today, we delve into how artificial intelligence is fundamentally reshaping legal operations, from data management and contract review to the very core of legal education and practice globally. We'll explore new AI-powered legal services that promise unprecedented speed and efficiency, innovative solutions for managing complex data across borders, and how legal education is adapting to prepare future lawyers for an AI-driven environment. Our discussion will also touch upon the crucial considerations of privacy, compliance, and ethical use as AI adoption grows within the profession. Join us as we examine the opportunities and challenges presented by this rapid technological evolution.

Stories covered in this episode:

  • HaystackID® Elevates Legal Tech Strategy Across Europe with Case Insight™ and CoreFlex™ (EDRM, 17 Jun 2025)
  • ContractPodAi Introduces Leah Tariff Agent to Help Enterprises Navigate Global Trade Disruption (Legal Reader, 17 Jun 2025)
  • Reimagining Legal Education (The Practice Magazine, 17 Jun 2025)
  • Washington State Lawyers Show Limited AI Adoption Despite Growing Interest, New Survey Reveals (LawSites, 17 Jun 2025)
  • Lawyers Just Discovered Something About Meta's AI That Could Cost Zuckerberg Untold Billions of Dollars (Futurism, 17 Jun 2025)
  • Before You Hit 'Record': Legal Risks In Using AI Notetaking Tools (JD Supra, 17 Jun 2025)
  • The Future of Consumer Class Actions: How Technology Is Shaping Mass Legal Action (BusinessABC, 17 Jun 2025)
  • Sequoia-backed Crosby launches a new kind of AI-powered law firm (TechCrunch, 17 Jun 2025)
  • Generative AI & Legal Research: A Mismatch? (Slaw, 17 Jun 2025)
  • Syntracts Reinvents Contract AI – Small Models + On-Prem (Artificial Lawyer, 17 Jun 2025)
  • DWF Picks Legora To Increase Volume Legal Work (Artificial Lawyer, 17 Jun 2025)
  • Trump administration moves to share Medicaid data with DHS, faces legal and privacy backlash amid immigrant raids (NaturalNews.com, 17 Jun 2025)
  • Japan's legal market: A glimpse into the future of global law (Thomson Reuters Institute, 17 Jun 2025)
  • How Top Legal Departments Are Managing Outside Counsel More Strategically (MatterSuite, 17 Jun 2025)
  • Jylo – AL TV Product Walk Through (Artificial Lawyer, 17 Jun 2025)
  • Supreme Court Rules DOGE Can Access Social Security Data and Avoid FOIA—for Now (Ogletree Deakins, 17 Jun 2025)
  • New York's Child Data Protection Act: Key Takeaways From the Attorney General's Implementation Guidance (Ogletree Deakins, 17 Jun 2025)
  • The Intersection of Artificial Intelligence and Employment Law (Ogletree Deakins, 17 Jun 2025)
  • Michael Best Expands Transactional Practice with Addition of Elaine Critides as Senior Counsel in Washington, D.C. (Michael Best & Friedrich LLP, 17 Jun 2025)
  • Introducing the Apex Strategy Model: A framework for law firm differentiation and growth (Thomson Reuters Institute, 16 Jun 2025)
  • Mississippi College awarded $723,000 Mississippi AI Talent Accelerator Program grant (Mississippi College, 16 Jun 2025)
  • 2025 Berkeley Art, Law, and Finance Symposium Recap (UC Berkeley Law, 16 Jun 2025)
  • The Evolving Landscape of AI Regulation in Financial Services (JD Supra, 16 Jun 2025)
  • Gibson Dunn | Europe | Data Protection – May 2025 (Gibson Dunn, 16 Jun 2025)

Bringing the Human back to Human Resources
232. Let's Discuss: Ames v. Ohio and Texas Taking the Lead on AI Regulation feat. Bryan Driscoll

Jun 17, 2025 · 41:16


Go to https://cozyearth.com and use code HUMANHR for 40% off their best-selling sheets, pajamas, towels, and more. And if you get a post-purchase survey? Let them know you heard about Cozy Earth right here.

In this episode of the Bringing the Human Back to Human Resources podcast, Traci Chernoff and Bryan Driscoll discuss recent legal developments affecting HR and employment law. They delve into the Ames v. Ohio case, which addresses reverse discrimination, and the implications for HR practices. The conversation then shifts to the Catholic Charities v. Wisconsin case regarding tax exemptions for religious organizations. Finally, they explore Texas's new AI regulation, its potential impact on employers, and the broader implications for workplace fairness and technology use in hiring processes.

Chapters:
  • 00:00 Introduction to Recent Legal Developments
  • 01:18 Ames v. Ohio: Understanding Reverse Discrimination
  • 16:02 Catholic Charities v. Wisconsin: Tax Exemption Insights
  • 28:22 Texas AI Regulation: A New Frontier
  • 39:47 Conclusion and Future Implications

Don't forget to rate, review, and subscribe! Plus, leave a comment if you're catching this episode on Spotify or YouTube. We hope you enjoyed this month's Policy Pulse episode. If you found our discussion insightful, we'd like you to take a moment to rate our podcast. Your feedback helps us grow and reach more listeners who are passionate about these topics. You can also leave a review and tell us what you loved or what you'd like to hear more of; we're all ears!

Connect with Traci here: https://linktr.ee/HRTraci

Connect with Bryan:
  • Website: https://bryanjdriscoll.com/
  • LinkedIn: https://www.linkedin.com/in/bryanjohndriscoll/

Disclaimer: Thoughts, opinions, and statements made on this podcast are not a reflection of the thoughts, opinions, and statements of the Company by whom Traci Chernoff is actively employed. Please note that this episode may contain paid endorsements and advertisements for products or services. Individuals on the show may have a direct or indirect financial interest in products or services referred to in this episode.

Armenian News Network - Groong: Week In Review Podcast
Spotlight on Silence: Surveillance State: Armenia's Biometric Crackdown | Ep 446, Jun 15, 2025

Jun 15, 2025 · 54:57


Spotlight on Silence - June 15, 2025. In this episode of Groong: Spotlight on Silence, we speak with Rafael Ishkhanyan of the Armenian Center for Political Rights about Armenia's sweeping new surveillance law. Passed quietly in March 2025, the law grants police 24/7 access to camera networks across public institutions and allows for real-time facial recognition, raising deep concerns about privacy, political targeting, and unchecked state power. We explore what the law says, what it leaves out, and why international silence—despite clear risks to civil liberties—has been so striking.

Topics:
  • Armenia legalizes round-the-clock surveillance.
  • Law enables political targeting, critics warn.
  • No oversight, no privacy laws.
  • Silence from Armenia's new geopolitical allies.

Guest: Rafael Ishkhanyan
Hosts: Hovik Manucharyan, Asbed Bedrossian

Episode 446 | Recorded: June 12, 2025

SHOW NOTES: https://podcasts.groong.org/446

#Armenia #SurveillanceState #FacialRecognition #PrivacyRights #DigitalAuthoritarianism #ACPR #HumanRights

Subscribe and follow us everywhere you are: linktr.ee/groong

Congressional Dish
CD318: AI Regulation Moratorium

Jun 14, 2025 · 43:57


The House version of the "Big, Beautiful Bill" includes a 10-year moratorium on state and local regulation of AI models and systems. In this episode, listen to highlights from a congressional hearing held the day before the bill passed, including discussion of this sneaky little dingleberry.

Please Support Congressional Dish – Quick Links:
  • Contribute monthly or a lump sum (donations per episode)
  • Send Zelle payments to: Donation@congressionaldish.com
  • Send Venmo payments to: @Jennifer-Briney
  • Send Cash App payments to: $CongressionalDish or Donation@congressionaldish.com
  • Use your bank's online bill pay function to mail contributions; please make checks payable to Congressional Dish

Thank you for supporting truly independent media!

Boardroom Governance with Evan Epstein
Karen Hao: Author of Empire of AI on Why "Scale at All Costs" is Not Leading Us to a Good Place

Jun 12, 2025 · 65:17


(0:00) Intro
(1:49) About the podcast sponsor: The American College of Governance Counsel
(2:36) Introduction by Professor Anat Admati, Stanford Graduate School of Business. Read the event coverage from Stanford's CASI.
(4:14) Start of Interview
(4:45) What inspired Karen to write this book and how she got started with journalism
(8:00) OpenAI's Nonprofit Origin Story
(8:45) Sam Altman and Elon Musk's Collaboration
(10:39) The Shift to For-Profit
(12:12) On the original split between Musk and Altman over control of OpenAI
(14:36) The Concept of AI Empires
(18:04) On the concept of "benefit to humanity" and OpenAI's mission "to ensure that AGI benefits all of humanity"
(20:30) On Sam Altman's Ouster and OpenAI's Boardroom Drama (Nov 2023): "Doomers vs Boomers"
(26:05) Investor Dynamics Post-Ouster of Sam Altman
(28:21) Prominent Departures from OpenAI (i.e. Elon Musk, Dario Amodei, Ilya Sutskever, Mira Murati, etc.)
(30:55) The Geopolitics of AI: U.S. vs. China
(32:37) The "What about China" card used by US companies to ward off regulation
(34:26) "Scaling at all costs is not leading us in a good place"
(36:46) Karen's preference on ethical AI development: "I really want there to be more participatory AI development. And I think about the full supply chain of AI development when I say that."
(39:53) Her biggest hope and fear for the future: "the greatest threat of these AI empires is the erosion of democracy."
(43:34) The case of Chilean community activism and empowerment
(47:20) Recreating human intelligence and the example of Joseph Weizenbaum, MIT (Computer Power and Human Reason, 1976)
(51:15) OpenAI's current AI research capabilities: "I think it's asymptotic because they have started tapping out of their scaling paradigm"
(53:26) The state (and importance) of open source development of AI: "We need things to be more open"
(55:08) The Bill Gates demo of ChatGPT acing the AP Biology test
(58:54) Funding academic AI research and the public policy question on the role of government
(1:01:11) Recommendations for Startups and Universities

Karen Hao is the author of Empire of AI (Penguin Press, May 2025) and an award-winning journalist covering the intersections of AI & society.

You can follow Evan on social media at:
  • X: @evanepstein
  • LinkedIn: https://www.linkedin.com/in/epsteinevan/
  • Substack: https://evanepstein.substack.com/

To support this podcast you can join as a subscriber of the Boardroom Governance Newsletter at https://evanepstein.substack.com/

Music/Soundtrack (found via Free Music Archive): "Seeing The Future" by Dexter Britain, licensed under an Attribution-Noncommercial-Share Alike 3.0 United States License

the csuite podcast
Show 249 - Fighting Payment Fraud with AI, Regulation & Innovation - Money20/20 Europe Pt 2

Jun 9, 2025 · 49:40


Our second episode from Money20/20 Europe 2025, produced in partnership with LSEG Risk Intelligence, who provide a range of solutions to help organisations effectively navigate risks and reduce fraud. The event took place at the RAI in Amsterdam. Our guests for this episode were: 1/ Dal Sahota, Global Director - Trusted Payments, LSEG Risk Intelligence 2/ Emilie Mathieu, General Counsel, Checkout.com 3/ Ivan Stefanov, CEO, Noto 4/ Samina Hussain-Letch, Executive Director, Head of Payment Partners & Operations UK, Square 5/ Gus Tomlinson, Managing Director Identity Fraud, GBG 6/ Martin Parzonka, VP Product, PensionBee

Lex Fridman Podcast of AI
Trump Admin Alters AI Regulation and Chip Strategy

Jun 7, 2025 · 12:50


We discuss the Trump administration's latest moves on AI-related policy, exploring changes to AI copyright laws and semiconductor strategy, and the policy battles shaping the future of American AI.

Try AI Box: https://AIBox.ai/
AI Chat YouTube Channel: https://www.youtube.com/@JaedenSchafer
Join my AI Hustle Community: https://www.skool.com/aihustle/about

TheQuartering's Podcast
Trump ERUPTS At Fed, Greta Thunberg FAFO, AI Regulation Bill, Soytifa In Minneapolis! Wednesday Liveshow 06-04-2025

Jun 5, 2025 · 127:53


Rumble Live show! Youtube Live Show! Click for Cookbooks Amazon CBC Store

Trump's Trials
What's at stake in GOP fight over AI regulation

Jun 4, 2025 · 5:04


The House version of Trump's budget bill, which is now before the Senate, includes a provision that would ban state regulation of AI for 10 years. Republicans are divided over the provision. NPR's Deepa Shivaram reports.

Support NPR and hear every episode of Trump's Terms sponsor-free with NPR+. Sign up at plus.npr.org.

Learn more about sponsor message choices: podcastchoices.com/adchoices
NPR Privacy Policy

Two Doomed Men
Biden Clones, Palantir & Ai Regulation

Jun 4, 2025 · 70:32


Scagz & Cap discuss Trump re-posting on Truth Social a claim that Biden was a clone and "executed" in 2020. We also break down his decision to give Peter Thiel's tech company Palantir the contract to collect and centralize data on Americans, and language in the "Big Beautiful Bill" preventing anyone but the federal government from regulating artificial intelligence for the next 10 years.

Text us comments or questions we can answer on the show.

Patriot Cigar Company: premium cigars from Nicaragua. Use our promo code DOOMED for 15% off your purchase: https://www.mypatriotcigars.com/usa/DOOMED

Support our show by subscribing using the link: https://www.buzzsprout.com/796727/support

Go to Linktree.com/TwoDoomedMen for all our socials, where we continue the conversation in between episodes.

Vermont Viewpoint
Ross Connolly Talks Taxes, Tech & Treatment: Vermont's Budget, AI Regulation, Energy Reform, and Healthcare Innovation

Jun 4, 2025 · 91:11


This episode of Vermont Viewpoint aired on 06/04/2025.

  • 9-9:30am: Rep. James Harrison joins the show to discuss Vermont's financial situation, the impact on taxpayers, and potential solutions to rising costs.
  • 9:30-10am: James Czerniawski, Senior Policy Analyst with Americans for Prosperity, discusses changes to artificial intelligence regulations in the federal spending bill, the Trump admin's approach to crypto, and potential deregulation to make innovation easier across the country.
  • 10-10:30am: Rep. Mike Tagliavia informs listeners about Vermont's energy issues, action taken this session, and potential solutions for ratepayers.
  • 10:30-11am: Sarah Scott, Deputy State Director with Americans for Prosperity, discusses New Hampshire's efforts to pass a "right to try" law for experimental health care treatments.

The World and Everything In It
6.3.25 AI regulation, NPR funding, a psychiatrist against gender-affirming care, and a fading Pride.

Jun 3, 2025 · 35:24


AI regulation vs. innovation, cutting NPR/PBS funding, and speaking up against gender affirmation. Plus, Carl Trueman on fewer "pride" events, a slow-growing family tree, and the Tuesday morning news.

Support The World and Everything in It today at wng.org/donate.

Additional support comes from the Mission Focused Men for Christ podcast. This month: fathers helping sons embrace biblical manhood. Mission Focused Men for Christ on all podcast apps. From Ridge Haven Camp and Retreat Centers in Brevard, North Carolina, and Cono, Iowa; camp and year-round retreat registrations at ridgehaven.org. And from Evangelism Explosion International, helping believers share the good news of Jesus with the world. EvangelismExplosion.org

Bannon's War Room
WarRoom Battleground EP 775: The Pitfalls Of AI Regulation And Cyborg Theocracy

May 23, 2025



There Are No Girls on the Internet
Theo Von's empathy lacks accountability; Congress blocks AI regulation; Elon ruins the neighborhood; Verizon disconnects from DEI – NEWS ROUNDUP w/ Francesca Fiorentini

May 23, 2025 · 64:00 · Transcription available


Bridget recaps the week in tech news with friend of the pod Francesca Fiorentini, journalist and host of the hilariously smart podcast The Bitchuation Room.

  • Podcast megastar and staunch Trump supporter Theo Von made headlines for a post calling what's happening in Gaza genocide: https://www.newsweek.com/theo-von-gaza-video-donald-trump-middle-east-2075637
  • Congress's budget bill prohibits states from regulating AI: https://www.techpolicy.press/us-house-passes-10year-moratorium-on-state-ai-laws/
  • Elon Musk built an emissions-spewing gas power plant in the middle of a Black neighborhood: https://www.cnn.com/2025/05/19/climate/xai-musk-memphis-turbines-pollution
  • Back in 2020, Verizon was a vocal supporter of DEI, but now they've dropped all DEI policies to please their new Trumpian overlords: https://www.npr.org/2025/05/19/nx-s1-5402863/verizon-fcc-frontier-dei-trump
  • The Chicago Sun-Times published a list of summer books. It was generated by AI and most of the books are fake. Oops! https://www.404media.co/chicago-sun-times-prints-ai-generated-summer-reading-list-with-books-that-dont-exist/

Follow Francesca Fiorentini: https://www.instagram.com/franifio
Listen to The Bitchuation Room: https://podcasts.apple.com/us/podcast/the-bitchuation-room/id1438285775
Follow TANGOTI: IG @BridgetMarieInDC | TikTok @BridgetMarieInDC | YouTube: ThereAreNoGirlsOnTheInternet

See omnystudio.com/listener for privacy information.

Virginia Public Radio
Virginia lawmakers react to proposed federal moratorium on AI regulation

May 19, 2025


Congress is considering a bill that could leave Virginia, and every other state, powerless to regulate artificial intelligence. Michael Pope reports.

AI Lawyer Talking Tech
May 19, 2025 - AI and the Evolving Practice of Law

May 19, 2025 · 16:40


Welcome to AI Lawyer Talking Tech, the podcast that delves into how artificial intelligence is reshaping the legal field. We're seeing AI automate tasks from legal drafting and legal research to case analysis and evaluation, offering the potential for increased efficiency and enhanced access to justice. However, these advancements bring significant challenges, including the risk of AI generating inaccurate information or "hallucinations," underscoring the critical need for thorough verification and human oversight. Pressing ethical considerations around confidentiality and the responsible use of AI are paramount for legal professionals. The legal landscape is also rapidly changing on the regulatory front, with intense debates about state versus federal control over AI regulation and complex issues like copyright and the use of training data taking center stage in courts and legislatures. Join us as we discuss these fundamental shifts and the future of law in the age of AI.

Stories covered in this episode:

  • The Implementation of Algorithmic Pricing and Its Impact on Businesses, Consumers, and Policymakers: Algorithmic pricing raises concerns about collusion and antitrust enforcement (BTLJ Blog, Berkeley Technology Law Journal, 18 May 2025)
  • Tennessee Attorney General Jonathan Skrmetti Leads Opposition to Proposed Federal Limits on AI Regulation by States (Clarksville Online, 17 May 2025)
  • World Network Illustrates Complications Of AI-Crypto Partnerships (Forbes.com, 17 May 2025)
  • The Digital Download – Alston & Bird's Privacy & Data Security Newsletter – May 2025 (JD Supra, 17 May 2025)
  • Nanterre Court of Justice Issues First Decision About Introduction of AI in the Workplace in France (Ogletree Deakins, 17 May 2025)
  • An exclusive look inside the hottest legal AI startup (AOL.com, 16 May 2025)
  • CoCounsel Drafting webinar from Synergy event (Legal.ThomsonReuters.com, 16 May 2025)
  • What Happens When Law Firms Use AI to Evaluate Your Case Worth? (Legal Reader, 16 May 2025)
  • FPF Experts Take The Stage at the 2025 IAPP Global Privacy Summit (Future of Privacy Forum, 16 May 2025)
  • Lawyer Launches Site to Help Pro Se People In Arkansas Navigate Legal Issues (LawSites, 16 May 2025)
  • CJEU publishes an updated Fact Sheet summarising key case law on protection of personal data (JD Supra, 16 May 2025)
  • Anthropic Counsel Apologizes for Citation 'Hallucination' in Music Publishers Lawsuit — Pinning Most of the Blame on Claude (Digital Music News, 16 May 2025)
  • AI Hallucination Stemming from Contract Lawyer's Research (Latest, 16 May 2025)
  • AI Can Do Many Tasks for Lawyers – But Be Careful (New York State Bar Association, 16 May 2025)
  • Lawyer for Anthropic Apologizes for Fake Legal Citation Generated by His Client's Own Claude AI (Breitbart.com, 16 May 2025)
  • Movers and shakers: Sheppard Mullin and Goodwin announce key hires (Legal Technology Insider, 16 May 2025)
  • The lowdown on LegalEdCon 2025 (Legal Cheek, 16 May 2025)
  • And the winners of The Legal Cheek Awards 2025 are… (Legal Cheek, 16 May 2025)
  • Congress might strip Californians of protections against AI in health care, hiring and much more (Lookout Santa Cruz, 16 May 2025)
  • When AI goes wrong: The dangers of unchecked content (Lexology, 16 May 2025)
  • Ironclad Law Announces Triple Revenue Growth Annually (Markets Business Insider, 16 May 2025)
  • Anthropic lawyers apologize to court over AI 'hallucination' in copyright battle with music publishers (Music Business Worldwide, 16 May 2025)
  • In a chilly funding market, a VC explains why legal tech is 'as hot as you can humanly imagine' (DNyuz, 16 May 2025)
  • Legal Innovators London, New York + California in 2025! (Artificial Lawyer, 16 May 2025)
  • Anthropic's Claude faked a legal citation. A lawyer had to clean it up. (DNyuz, 16 May 2025)
  • #006: The Slow Burn of Digitisation — Incremental Innovations That Have Built Today's LawTech… (Legaltech on Medium, 16 May 2025)
  • Third Copyright Report on AI Explores Generative AI Training (GenAI-Lexology, 16 May 2025)

Crazy Wisdom
Episode #461: Morpheus in the Classroom: AI, Education, and the New Literacy

May 16, 2025 · 56:25


I, Stewart Alsop, welcomed Woody Wiegmann to this episode of Crazy Wisdom, where we explored the fascinating and sometimes unsettling landscape of Artificial Intelligence. Woody, who is deeply involved in teaching AI, shared his insights on everything from the US-China AI race to the radical transformations AI is bringing to education and society at large.

Check out this GPT we trained on the conversation.

Timestamps:
  • 01:17 The AI "Cold War": Discussing the intense AI development race between China and the US.
  • 03:04 Opaque Models & Education's Resistance: The challenge of opaque AI and schools lagging in adoption.
  • 05:22 AI Blocked in Schools: The paradox of teaching AI while institutions restrict access.
  • 08:08 Crossing the AI Rubicon: How AI users are diverging from non-users into different realities.
  • 09:00 Budgetary Constraints in AI Education: The struggle for resources like premium AI access for students.
  • 12:45 Navigating AI Access for Students: Woody's ingenious workarounds for the premium AI divide.
  • 19:15 Igniting Curiosity with AI: Students creating impressive projects, like catapult websites.
  • 27:23 Exploring Grok and AI Interaction: Debating IP concerns and engaging with AI ("Morpheus").
  • 46:19 AI's Societal Impact: AI girlfriends, masculinity, and the erosion of traditional skills.

Key Insights:
  • The AI Arms Race: Woody highlights a "cold war of nerdiness" where China is rapidly developing AI models comparable to GPT-4 at a fraction of the cost. This competition raises questions about data transparency from both sides and the strategic implications of superintelligence.
  • Education's AI Resistance: I, Stewart Alsop, and Woody discuss the puzzling resistance to AI within educational institutions, including outright blocking of AI tools. This creates a paradox where courses on AI are taught in environments that restrict its use, hindering practical learning for students.
  • Diverging Realities: We explore how individuals who have crossed the "Rubicon" of AI adoption are now living in a vastly different world than those who haven't. This divergence is akin to past technological shifts but is happening at an accelerated pace, impacting how people learn, work, and perceive reality.
  • The Fading Relevance of Traditional Coding: Woody argues that focusing on teaching traditional coding languages like Python is becoming outdated in the age of advanced AI. AI can handle much of the detailed coding, shifting the necessary skills toward understanding AI systems, effective prompting, and higher-level architecture.
  • AI as the Ultimate Tutor: The advent of AI offers the potential for personalized, one-on-one tutoring for everyone, a far more effective learning method than traditional classroom lectures. However, this potential is hampered by institutional inertia and a lack of resources for tools like premium AI subscriptions for students.
  • Curiosity as the AI Catalyst: Woody shares anecdotes of students, even those initially disengaged, whose eyes light up when using AI for creative projects, like designing websites on niche topics such as catapults. This demonstrates AI's power to ignite curiosity and intrinsic motivation when paired with focused goals and the ability to build.
  • AI's Impact on Society and Skills: We touch upon the broader societal implications, including the rise of AI girlfriends addressing male loneliness and providing acceptance. Simultaneously, there's concern over the potential atrophy of critical skills like writing and debate if individuals overly rely on AI for summarization and opinion generation without deep engagement.

Contact Information:
  • Twitter/X: @RulebyPowerlaw
  • Listeners can search for Woody Wiegmann's podcast "Courage over convention"
  • LinkedIn: www.linkedin.com/in/dataovernarratives/

EXPresso
#121 Alex Moltzau: AI Regulation Across Borders: Collision or Collaboration?

May 16, 2025 · 46:07


In episode #121, I catch up with my longtime friend Alex Moltzau, now serving as a Policy Officer – Seconded National Expert at the European Commission. We had a wide-ranging conversation on the future of artificial intelligence, regulation, and global cooperation. Topics include:

  • Alex's background and journey: from social entrepreneurship to shaping AI policy at the European level, including his work on the AI Pact and regulatory sandboxes.
  • Reflections since 2020: revisiting lessons from the past few years and the idea of "500 days of AI and critical thinking."
  • The EU AI Act: why it was created and what problems it's designed to solve; key provisions and goals; the balance between regulation and innovation in the European context.
  • The AI Pact: how it came together; what companies are committing to; its role in the broader regulatory landscape.
  • U.S. vs. Europe, two regulatory paths: key differences and similarities in approach; how regulatory environments affect innovation; implications for global competition in AI; opportunities for collaboration despite diverging strategies.
  • The future of AI and global governance: the need for international cooperation and the role of institutions like the UN; ethical considerations in the development and deployment of AI; the evolving role and vision of the European AI Office.
  • Alex's personal hopes, concerns, and advice for future leaders in AI policy.

This is episode #121 and Alex Moltzau!

TechLinked
Intel Arc B770 / B780, Android 16 features, AI regulation + more!

May 15, 2025 · 10:19


Timestamps:
  • 0:00 an attempt was made
  • 0:06 Intel Arc B770 / B780 confirmed?
  • 1:40 Google AI Mode, Android anti-scam
  • 3:27 Wacky AI regulation news
  • 5:05 War Thunder!
  • 5:54 QUICK BITS INTRO
  • 6:02 Samsung Galaxy S25 Edge
  • 6:57 Switch 2 tech specs analysis
  • 7:34 VPNSecure cancels lifetime subs
  • 7:59 HBO Max again, Uber Route Share
  • 8:31 wacky inflatable tube robot

NEWS SOURCES: https://lmg.gg/5p8Dd

Learn more about your ad choices. Visit megaphone.fm/adchoices

The CyberWire
Jamming in a ban on state AI regulation.

May 13, 2025 · 32:51


House Republicans look to limit state regulation of AI. Spain investigates potential cybersecurity weak links in the April 28 power grid collapse. A major security flaw has been found in ASUS mainboards' automatic update system. A new macOS info-stealing malware uses PyInstaller to evade detection. The U.S. charges 14 North Korean nationals in a remote IT job scheme. Europe's cybersecurity agency launches the European Vulnerability Database. CISA pares back website security alerts. Moldovan authorities arrest a suspect in DoppelPaymer ransomware attacks. On today's Threat Vector segment, David Moulton speaks with ⁠Noelle Russell⁠, CEO of the AI Leadership Institute, about how to scale responsible AI in the enterprise. Dave & Buster's invites vanish into the void. Remember to leave us a 5-star rating and review in your favorite podcast app. Miss an episode? Sign-up for our daily intelligence roundup, Daily Briefing, and you'll never miss a beat. And be sure to follow CyberWire Daily on LinkedIn. Threat Vector  Recorded Live at the Canopy Hotel during the RSAC Conference in San Francisco, ⁠David Moulton⁠ speaks with ⁠Noelle Russell⁠, CEO of the AI Leadership Institute and a leading voice in responsible AI on this Threat Vector segment. Drawing from her new book Scaling Responsible AI, Noelle explains why early-stage AI projects must move beyond hype to operational maturity—addressing accuracy, fairness, and security as foundational pillars. Together, they explore how generative AI models introduce new risks, how red teaming helps organizations prepare, and how to embed responsible practices into AI systems. You can hear David and Noelle's full discussion on Threat Vector here and catch new episodes every Thursday on your favorite podcast app.  
Selected Reading:
Republicans Try to Cram Ban on AI Regulation Into Budget Reconciliation Bill (404 Media)
Spain investigates cyber weaknesses in blackout probe (The Financial Times)
Critical Security flaw in ASUS mainboard update system (Beyond Machines)
Hackers Exploiting PyInstaller to Deploy Undetectable macOS Infostealer (Cybersecurity News)
Researchers Uncover Remote IT Job Fraud Scheme Involving North Korean Nationals (GB Hackers)
European Vulnerability Database Launches Amid US CVE Chaos (Infosecurity Magazine)
Apple Security Update: Multiple Vulnerabilities in macOS & iOS Patched (Cybersecurity News)
CISA changes vulnerabilities updates, shifts to X and emails (The Register)
Suspected DoppelPaymer Ransomware Group Member Arrested (Security Week)
Cracking The Dave & Buster's Anomaly (Rambo.Codes)

Share your feedback. We want to ensure that you are getting the most out of the podcast. Please take a few minutes to share your thoughts with us by completing our brief listener survey as we continually work to improve the show.

Want to hear your company in the show? You too can reach the most influential leaders and operators in the industry. Here's our media kit. Contact us at cyberwire@n2k.com to request more info.

The CyberWire is a production of N2K Networks, your source for strategic workforce intelligence. © N2K Networks, Inc.

CXO.fm | Transformation Leader's Podcast
Winning with AI Compliance

CXO.fm | Transformation Leader's Podcast

Play Episode Listen Later May 9, 2025 13:34 Transcription Available


Mastering the EU AI Act is no longer optional—it's a strategic necessity. In this episode, we unpack the critical compliance gaps that separate thriving companies from those falling behind. Learn how to categorise your AI systems, mitigate risk, and turn regulation into a competitive advantage. Perfect for business leaders, consultants, and transformation professionals navigating AI governance. 

The Six Five with Patrick Moorhead and Daniel Newman
EP 259: Tech Titans Under Scrutiny: Antitrust, AI, and Global Competition

The Six Five with Patrick Moorhead and Daniel Newman

Play Episode Listen Later May 5, 2025 52:23


Tough times for tech as Apple AND Google face legal woes. This week on The Six Five Pod, Patrick Moorhead and Daniel Newman dissect the major developments shaping the tech landscape, from the DOJ's challenge to Google Chrome's search dominance to Apple's legal battles over its App Store policies. Get the expert breakdown on the implications for consumers and the future of these tech behemoths. The handpicked topics for this week are:

Google's Legal Challenges and Antitrust Issues: Discussion of Google's predicament with the DOJ's demand to sell Chrome, plus an examination of antitrust implications and the potential impact on consumer choice and competition.

Apple's Legal Setback and Potential Criminal Charges: Analysis of the judgment against Apple in the Epic lawsuit regarding app payment services. Insight into the judge's strong stance against Apple's non-compliance and the possible pursuit of criminal charges against its executives.

Major Announcements from Nvidia and IBM: Overview of Nvidia and IBM's significant investment announcements at the White House and a discussion of what these investments could mean for U.S. tech infrastructure and global competitiveness.

Taiwan's Ruling on TSMC's Chip Production: Exploration of Taiwan's decision to keep TSMC's leading-edge chip production local and the potential geopolitical and economic impact of this ruling for global semiconductor supply chains.

Intel's Foundry Day and 18A Variants: Insights into Intel's Foundry Day announcements, including updates on 18A variants, plus a look into Intel's strategy to attract major fabless customers and how this could affect the semiconductor industry.

Six Five Summit Announcement: Announcement of the upcoming Six Five Summit, "AI Unleashed 2025," from June 16-19. The opening keynote will be delivered by Michael Dell, along with a lineup of other prominent speakers. Visit www.SixFiveMedia.com/summit for more information on the virtual event!
For more on each topic, please click on the links above. Be sure to subscribe to The Six Five Pod so you never miss an episode.

Recruiting Future with Matt Alder
Ep 699: AI, Regulation, and the Human Touch

Recruiting Future with Matt Alder

Play Episode Listen Later Apr 26, 2025 28:10


The entire recruiting landscape is undergoing a profound transformation as organizations grapple with the implications of AI and the economic disruption 2025 is bringing. Talent acquisition teams are drowning in applications while simultaneously being asked to do more with fewer resources. Candidates find themselves in increasingly dehumanized processes where ghosting is now the norm. At the same time, regulatory bodies are developing laws to ensure fairness and transparency around the use of AI in hiring. So, how can employers navigate this challenging terrain while creating fair, accessible, and effective hiring processes?

My guest this week is Ruth Miller, a talent acquisition and HR consultant who works across the public and private sectors. Ruth is an advisor to the Better Hiring Institute, working with the UK Government on developing legislation around AI in recruiting. In our conversation, she shares her insights into how organizations can proactively develop strategies that balance innovation with compliance while enhancing rather than diminishing the human elements of hiring.

- Different perceptions and reactions to AI among employers across sectors
- The paradox of AI both introducing and potentially removing bias from hiring processes
- Neurodivergent candidates and AI in job applications
- Common misconceptions job seekers have about employers' AI usage
- Strategic advice for organizations implementing AI in recruitment
- The future of recruitment and the evolving balance between AI and human interaction

Follow this podcast on Apple Podcasts. Follow this podcast on Spotify.

UiPath Daily
New AI Regulation Plan from OpenAI Revealed

UiPath Daily

Play Episode Listen Later Apr 19, 2025 14:54


OpenAI's new economic blueprint tackles the challenge of AI regulation. The goal is to foster safe development without stifling innovation. This could guide international AI standards.

AI Chat YouTube Channel: https://www.youtube.com/@JaedenSchafer
My Podcast Course: https://podcaststudio.com/courses/
Try AI Box: https://AIBox.ai/
Join my AI Hustle Community: https://www.skool.com/aihustle/about

ChatGPT: News on Open AI, MidJourney, NVIDIA, Anthropic, Open Source LLMs, Machine Learning

In an effort to shape AI's future, OpenAI introduced a regulatory and economic framework. The blueprint balances progress with responsibility. Experts are closely examining its global impact.

AI Chat YouTube Channel: https://www.youtube.com/@JaedenSchafer
My Podcast Course: https://podcaststudio.com/courses/
Try AI Box: https://AIBox.ai/
Join my AI Hustle Community: https://www.skool.com/aihustle/about

AI for Non-Profits
AI Regulation Reimagined: OpenAI's Economic Roadmap

AI for Non-Profits

Play Episode Listen Later Apr 19, 2025 14:54


OpenAI has released a blueprint that outlines an economic approach to AI regulation. The plan emphasizes innovation, safety, and governance. It may influence global AI policies going forward.

AI Chat YouTube Channel: https://www.youtube.com/@JaedenSchafer
My Podcast Course: https://podcaststudio.com/courses/
Try AI Box: https://AIBox.ai/
Join my AI Hustle Community: https://www.skool.com/aihustle/about

Law, disrupted
Re-release: Emerging Trends in AI Regulation

Law, disrupted

Play Episode Listen Later Apr 17, 2025 46:34


John is joined by Courtney Bowman, the Global Director of Privacy and Civil Liberties at Palantir, one of the foremost companies in the world specializing in software platforms for big data analytics. They discuss the emerging trends in AI regulation. Courtney explains the AI Act recently passed by the EU Parliament, including the four levels of risk it assigns to different AI systems and the different regulatory obligations imposed on each risk level, how the Act treats general-purpose AI systems, and how the final Act evolved in response to lobbying by emerging European companies in the AI space. They discuss whether the EU AI Act will become the global standard that international companies default to because the European market is too large to abandon. Courtney also explains recent federal regulatory developments in the U.S., including the AI framework put out by the National Institute of Standards and Technology (NIST), the AI Bill of Rights announced by the White House, which calls for voluntary industry compliance with certain principles, and the Executive Order on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, which requires each department of the federal government to develop its own plan for the use and deployment of AI. They also discuss the wide range of state-level AI legislative initiatives and the leading role California has played in this process.
Finally, they discuss the upcoming issues legislatures will need to address, including translating principles like accountability, fairness, and transparency into concrete best practices; instituting testing, evaluation, and validation methodologies to ensure that AI systems are doing what they're supposed to do in a reliable and trustworthy way; and addressing concerns around maintaining AI systems as the data they rely on continuously evolves until it no longer accurately represents the world the system was originally designed to represent.

Podcast Link: Law-disrupted.fm
Host: John B. Quinn
Producer: Alexis Hyde
Music and Editing by: Alexander Rossi

The TechEd Podcast
AI Regulation Can Wait—But Education Reform Can't - State Senator Julian Bradley

The TechEd Podcast

Play Episode Listen Later Apr 15, 2025 52:50 Transcription Available


State Senator Julian Bradley joins Matt Kirchner for a wide-ranging conversation on how policymakers should be thinking about AI, energy, and education. Bradley explains why his committee chose not to recommend regulation of AI, how this move differs from other states, and how artificial intelligence could help solve workforce shortages in critical sectors like healthcare, public safety, and manufacturing.

The conversation also explores the future of nuclear energy as a clean, scalable power source—especially as data centers and advanced industries drive up demand. Bradley shares his push for small modular reactors and the bipartisan momentum behind nuclear innovation. Finally, the two dive into K-12 education, taking on literacy rates, school choice, and why high schools need a complete overhaul to actually prepare students for life after graduation. Whether you're an educator, policymaker, or industry leader, this episode offers practical insights into the policy decisions shaping our future workforce.

In this episode:
- Why one state senator believes not regulating AI may be the smartest move
- How artificial intelligence could help solve labor shortages from childcare to healthcare
- What policymakers are missing about nuclear energy—and why that's about to change
- Why our current education system is setting students up to fail, and what to do instead
- How a wrestling ring, a mother's wisdom, and a literacy-first mindset shaped a political career

3 Big Takeaways from this Episode:
Regulating artificial intelligence requires caution, context, and a long-term view: Senator Bradley led a legislative study committee on the regulation of AI and ultimately chose not to recommend new regulation, citing the risk of stifling innovation and creating barriers for businesses. Drawing on testimony from sectors like healthcare, public safety, and education, the committee focused instead on building a knowledge base for future legislative action—prioritizing flexibility over rushed policymaking.

Meeting future energy demand will require bold thinking and bipartisan cooperation: With AI, data centers, and industry driving massive increases in power needs, Bradley is pushing Wisconsin to embrace nuclear energy as a scalable, clean solution. He outlines current efforts to support small modular reactors, prepare regulatory frameworks, and position the state as a leader in 21st-century energy policy.

Education reform must focus on real-world readiness, from literacy to life skills: Bradley calls for a complete overhaul of high school—moving away from rigid grade levels toward personalized, career-connected learning. He also stresses that without strong literacy skills, students can't access opportunity, and that solving academic gaps early is essential to preparing engaged citizens and a capable workforce.

Resources in this Episode:
Learn more about Senator Julian Bradley
Learn about the work of the 2024 Legislative Council Study Committee on the Regulation of Artificial Intelligence

We want to hear from you! Send us a text.
Instagram - Facebook - YouTube - TikTok - Twitter - LinkedIn

Science 4-Hire
Responsible AI In 2025 and Beyond – Three pillars of progress

Science 4-Hire

Play Episode Listen Later Apr 15, 2025 54:44


"Part of putting an AI strategy together is understanding the limitations and where unintended consequences could occur, which is why you need diversity of thought within committees created to guide AI governance and ethics." – Bob Pulver

My guest for this episode is my friend in ethical/responsible AI, Bob Pulver, the founder of CognitivePath.io and host of the podcast "Elevate Your AIQ." Bob specializes in helping organizations navigate the complexities of responsible AI, from strategic adoption to effective governance practices. Bob was my guest about a year ago, and in this episode he drops back in to discuss what has changed in the fast-paced world of AI across three pillars of responsible AI usage:
* Human-Centric AI
* AI Adoption and Readiness
* AI Regulation and Governance

These are the themes we explore in our conversation, along with our thoughts on what has changed and evolved in the past year.

1. Human-Centric AI
Change from Last Year:
* Shift from compliance-driven AI towards a more holistic, human-focused perspective, emphasizing AI's potential to enhance human capabilities and fairness.
Reasons for Change:
* Increasing comfort level with AI and experience with the benefits it brings to our work
* Continued exploration and development of low-stakes, low-friction use cases
* AI continues to be seen as a partner and magnifier of human capabilities
What to Expect in the Next Year:
* Increased experience with human-machine partnerships
* Increased opportunities to build superpowers
* Increased adoption of human-centric tools by employers

2. AI Adoption and Readiness
Change from Last Year:
* Organizations have moved from cautious, fragmented adoption to structured, strategic readiness and literacy initiatives.
* Significant growth in AI educational resources and adoption within teams, rather than just individuals.
Reasons for Change:
* Improved understanding of AI's benefits and limitations, reducing fears and resistance.
* Availability of targeted AI literacy programs, promoting organization-wide AI understanding and capability building.
What to Expect in the Next Year:
* More systematic frameworks for AI adoption across entire organizations.
* Increased demand for formal AI proficiency assessments to ensure responsible and effective usage.

3. AI Regulation and Governance
Change from Last Year:
* Transition from broad discussions about potential regulations towards concrete legislative actions, particularly at state and international levels (e.g., EU AI Act, California laws).
* Momentum to hold vendors of AI increasingly accountable for ethical AI use.
Reasons for Change:
* Growing awareness of risks associated with unchecked AI deployment.
* Increased push to stay on the right side of AI via legislative activity at state and global levels addressing transparency, accountability, and fairness.
What to Expect in the Next Year:
* Implementation of stricter AI audits and compliance standards.
* Clearer responsibilities for vendors and organizations regarding ethical AI practices.
* Finally, some concrete standards that will require fundamental changes in oversight and create messy situations.

Practical Takeaways:
What should we be doing to move the ball forward and realize AI's full potential while limiting collateral damage?

Prioritize Human-Centric AI Design
* Define Clear Use Cases: Ensure AI is solving a genuine human-centered problem rather than just introducing technology for technology's sake.
* Promote Transparency and Trust: Clearly communicate how and why AI is being used, ensuring it enhances rather than replaces human judgment and involvement.

Build Robust AI Literacy and Education Programs
* Develop Organizational AI Literacy: Implement structured training initiatives that educate employees about fundamental AI concepts, the practical implications of AI use, and ethical considerations.
* Create Role-Specific Training: Provide tailored AI skill-building programs based on roles and responsibilities, moving beyond individual productivity to team-based effectiveness.

Strengthen AI Governance and Oversight
* Adopt Proactive Compliance Practices: Align internal policies with rigorous standards such as the EU AI Act to preemptively prepare for emerging local and global legislation.
* Vendor Accountability: Develop clear guidelines and rigorous vetting processes for vendors to ensure transparency and responsible use, preparing your organization for upcoming regulatory audits.

Monitor AI Effectiveness and Impact
* Continuous Monitoring: Shift from periodic audits to continuous monitoring of AI tools to ensure fairness, transparency, and functionality.
* Evaluate Human Impact Regularly: Regularly assess the human impact of AI tools on employee experience, fairness in decision-making, and organizational trust.

Email Bob: bob@cognitivepath.io
Listen to Bob's awesome podcast: Elevate Your AIQ

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit charleshandler.substack.com

Campaign podcast
Will government AI regulation harm creative industries? With Omnicom's Michael Horn

Campaign podcast

Play Episode Listen Later Apr 15, 2025 26:17


In February this year, the UK government published a consultation on AI, proposing a change to current copyright legislation. It would allow tech companies to use creative works including film, TV and original journalism to train AI models without permission of the creators, unless they have opted out.

It was met with harsh criticism, rallying "Make it fair" campaigns and rejections from both creatives and tech platforms alike, albeit for opposite reasons. Google and OpenAI responded to the consultation saying that it would cause developers to "deprioritise the market" and that "training on the open web must be free", while creative industries including Alex Mahon, chief executive of Channel 4, said that the lack of transparency and compensation would "scrape the value" from quality content.

Campaign questions if UK regulation will harm creative industries and how it will impact the country's own advancements in AI.

This episode welcomes guest Michael Horn, global head of AI at Omnicom Advertising Group. Hosted by tech editor Lucy Shelley, the Campaign team includes creativity and culture editor Alessandra Scotto di Santolo and deputy media editor Shauna Lewis. This episode includes an excerpt from Mahon's speech in Parliament where she addresses her concerns.

Further reading:
Mark Read: 'AI will unlock adland's productivity challenge'
AI, copyright and the creative economy: the debate we can't afford to lose

Hosted on Acast. See acast.com/privacy for more information.

Interviews: Tech and Business
AI Regulation & Innovation: Insights from the UK House of Lords | CXOTalk #875

Interviews: Tech and Business

Play Episode Listen Later Apr 7, 2025 57:06


How do top policymakers balance fostering technological advancement with necessary oversight? Join Michael Krigsman as he speaks with Lord Chris Holmes and Lord Tim Clement-Jones, members of the UK House of Lords, for a deep dive into the critical intersection of technology policy, innovation, and public trust.

In this conversation, explore:
-- The drive for "right-sized" AI regulation that supports innovators, businesses, and citizens.
-- Strategies for effective AI governance principles: transparency, accountability, and interoperability.
-- The importance of international collaboration and standards in a global tech ecosystem.
-- Protecting intellectual property and creators' rights in the age of AI training data.
-- Managing the risks associated with automated decision-making in both public and private sectors.
-- The push for legal clarity around digital assets, tokenization, and open finance initiatives.
-- Building and maintaining public trust as new technologies become more integrated into society.

Gain valuable perspectives from legislative insiders on the challenges and opportunities presented by AI, digital assets, and data governance. Understand the thinking behind policy decisions shaping the future for business and technology leaders worldwide.

Subscribe to CXOTalk for more conversations with the world's top innovators: https://www.cxotalk.com/subscribe
Read the full transcript and analysis: https://www.cxotalk.com/episode/ai-digital-assets-and-public-trust-inside-the-house-of-lords

00:00 Balancing Innovation and Regulation in AI
02:48 Principles and Frameworks for AI Regulation
09:30 Global Collaboration and Challenges in AI and Trade
15:25 The Role of Guardrails and Regulation in AI
17:43 Challenges in Protecting Intellectual Property in AI
22:32 AI Regulation and International Collaboration
29:11 The UK's Approach to AI Regulation
32:00 Proportionality and Sovereign AI
36:28 Digital Sovereignty and Creative Industries
39:09 The Future of Digital Assets and Legislation
40:53 Open Banking, Open Source Models, and Agile Regulation
45:43 Ethics and Professional Standards in AI
47:22 Exploring AI and Ethical Standards
49:00 AI in the Workplace and Global Accessibility
51:40 Regulation, Public Trust, and Ethical AI

#cxotalk #AIRegulation #AIInnovation #DigitalAssets #PolicyMaking #UKParliament #TechPolicy #Governance #PublicTrust #LordChrisHolmes #LordTimClementJones

Six Pixels of Separation Podcast - By Mitch Joel
SPOS #978 – Christopher DiCarlo On AI, Ethics, And The Hope We Get It Right

Six Pixels of Separation Podcast - By Mitch Joel

Play Episode Listen Later Apr 6, 2025 58:56


Welcome to episode #978 of Six Pixels of Separation - The ThinkersOne Podcast. Dr. Christopher DiCarlo is a philosopher, educator, author, and ethicist whose work lives at the intersection of human values, science, and emerging technology. Over the years, Christopher has built a reputation as a Socratic nonconformist, equally at home lecturing at Harvard during his postdoctoral years as he is teaching critical thinking in correctional institutions or corporate boardrooms. He's the author of several important books on logic and rational discourse, including How To Become A Really Good Pain In The Ass - A Critical Thinker's Guide To Asking The Right Questions and So You Think You Can Think?, as well as the host of the podcast, All Thinks Considered. In this conversation, we dig into his latest book, Building A God - The Ethics Of Artificial Intelligence And The Race To Control It, which takes a sobering yet practical look at the ethical governance of AI as we accelerate toward the possibility of artificial general intelligence. Drawing on years of study in philosophy of science and ethics, Christopher lays out the risks - manipulation, misalignment, lack of transparency - and the urgent need for international cooperation to set safeguards now. We talk about everything from the potential of AI to revolutionize healthcare and sustainability to the darker realities of deepfakes, algorithmic control, and the erosion of democratic processes. His proposal? A kind of AI “Geneva Conventions,” or something akin to the IAEA - but for algorithms. In a world rushing toward techno-utopianism, Christopher is a clear-eyed voice asking: “What kind of Gods are we building… and can we still choose their values?” If you're thinking about the intersection of ethics and AI (and we should all be focused on this!), this is essential listening. Enjoy the conversation... Running time: 58:55. Hello from beautiful Montreal. Listen and subscribe over at Apple Podcasts. 
Listen and subscribe over at Spotify. Please visit and leave comments on the blog - Six Pixels of Separation. Feel free to connect to me directly on Facebook here: Mitch Joel on Facebook. Check out ThinkersOne. or you can connect on LinkedIn. ...or on X. Here is my conversation with Dr. Christopher DiCarlo. Building A God - The Ethics Of Artificial Intelligence And The Race To Control It. How To Become A Really Good Pain In The Ass - A Critical Thinker's Guide To Asking The Right Questions. So You Think You Can Think?. All Thinks Considered. Convergence Analysis. Follow Christopher on LinkedIn. Follow Christopher on X. This week's music: David Usher 'St. Lawrence River'. Chapters: (00:00) - Introduction to AI Ethics and Philosophy. (03:14) - The Interconnectedness of Systems. (05:56) - The Race for AGI and Its Implications. (09:04) - Risks of Advanced AI: Misuse and Misalignment. (11:54) - The Need for Ethical Guidelines in AI Development. (15:05) - Global Cooperation and the AI Arms Race. (18:03) - Values and Ethics in AI Alignment. (20:51) - The Role of Government in AI Regulation. (24:14) - The Future of AI: Hope and Concerns. (31:02) - The Dichotomy of Regulation and Innovation. (34:57) - The Drive Behind AI Pioneers. (37:12) - Skepticism and the Tech Bubble Debate. (39:39) - The Potential of AI and Its Risks. (43:20) - Techno-Selection and Control Over AI. (48:53) - The Future of Medicine and AI's Role. (51:42) - Empowering the Public in AI Governance. (54:37) - Building a God: Ethical Considerations in AI.

The AI Policy Podcast
Mapping Chinese AI Regulation with Matt Sheehan

The AI Policy Podcast

Play Episode Listen Later Apr 2, 2025 68:34


In this episode, we are joined by Matt Sheehan, fellow at the Carnegie Endowment for International Peace. We discuss the evolution of China's AI policymaking process over the past decade (6:45), the key institutions shaping Chinese AI policy today (44:30), and the changing nature of China's attitude to AI safety (50:55). 

Between Two COO's with Michael Koenig
AI and Privacy: Navigating the EU's New AI Act & the Impact on US Companies with Flick Fisher

Between Two COO's with Michael Koenig

Play Episode Listen Later Apr 1, 2025 36:43


Try Fellow's AI Meeting Copilot - 90 days FREE - fellow.app/coo

AI and Privacy: Navigating the EU's New AI Act with Flick Fisher

In this episode of Between Two COOs, host Michael Koenig welcomes back Flick Fisher, an expert on EU privacy law. They dive deep into the newly enacted EU Artificial Intelligence Act and its implications for businesses globally. They discuss compliance challenges, prohibited AI practices, and the potential geopolitical impact of AI regulation. For leaders and operators navigating AI in business, this episode provides crucial insights into managing AI technology within regulatory frameworks.

00:00 Introduction to Fellow and AI Meeting Assistant
01:01 Introduction to Between Two COOs Episode
02:08 What is the EU's AI Act?
03:42 Prohibited AI Practices in the EU
07:46 Enforcement and Compliance Challenges
12:18 US vs EU: Regulatory Landscape
29:58 Impact on Companies and Consumers
31:55 Future of AI Regulation

Between Two COO's - https://betweentwocoos.com
Between Two COO's Episode
Michael Koenig on LinkedIn
Flick Fisher on LinkedIn
Flick on Data Privacy and GDPR on Between Two COO's
More on Flick's take of the EU's AI Act

Machine Learning Street Talk
The Compendium - Connor Leahy and Gabriel Alfour

Machine Learning Street Talk

Play Episode Listen Later Mar 30, 2025 97:10


Connor Leahy and Gabriel Alfour, AI researchers from Conjecture and authors of "The Compendium," join us for a critical discussion centered on Artificial Superintelligence (ASI) safety and governance. Drawing from their comprehensive analysis in "The Compendium," they articulate a stark warning about the existential risks inherent in uncontrolled AI development, framing it through the lens of "intelligence domination"—where a sufficiently advanced AI could subordinate humanity, much like humans dominate less intelligent species.

SPONSOR MESSAGES:
***
Tufa AI Labs is a brand new research lab in Zurich started by Benjamin Crouzier focussed on o-series style reasoning and AGI. They are hiring a Chief Engineer and ML engineers. Events in Zurich. Go to https://tufalabs.ai/
***

TRANSCRIPT + REFS + NOTES:
https://www.dropbox.com/scl/fi/p86l75y4o2ii40df5t7no/Compendium.pdf?rlkey=tukczgf3flw133sr9rgss0pnj&dl=0
https://www.thecompendium.ai/
https://en.wikipedia.org/wiki/Connor_Leahy
https://www.conjecture.dev/about
https://substack.com/@gabecc

TOC:
1. AI Intelligence and Safety Fundamentals
[00:00:00] 1.1 Understanding Intelligence and AI Capabilities
[00:06:20] 1.2 Emergence of Intelligence and Regulatory Challenges
[00:10:18] 1.3 Human vs Animal Intelligence Debate
[00:18:00] 1.4 AI Regulation and Risk Assessment Approaches
[00:26:14] 1.5 Competing AI Development Ideologies
2. Economic and Social Impact
[00:29:10] 2.1 Labor Market Disruption and Post-Scarcity Scenarios
[00:32:40] 2.2 Institutional Frameworks and Tech Power Dynamics
[00:37:40] 2.3 Ethical Frameworks and AI Governance Debates
[00:40:52] 2.4 AI Alignment Evolution and Technical Challenges
3. Technical Governance Framework
[00:55:07] 3.1 Three Levels of AI Safety: Alignment, Corrigibility, and Boundedness
[00:55:30] 3.2 Challenges of AI System Corrigibility and Constitutional Models
[00:57:35] 3.3 Limitations of Current Boundedness Approaches
[00:59:11] 3.4 Abstract Governance Concepts and Policy Solutions
4. Democratic Implementation and Coordination
[00:59:20] 4.1 Governance Design and Measurement Challenges
[01:00:10] 4.2 Democratic Institutions and Experimental Governance
[01:14:10] 4.3 Political Engagement and AI Safety Advocacy
[01:25:30] 4.4 Practical AI Safety Measures and International Coordination

CORE REFS:
[00:01:45] The Compendium (2023), Leahy et al.
https://pdf.thecompendium.ai/the_compendium.pdf
[00:06:50] Geoffrey Hinton Leaves Google, BBC News
https://www.bbc.com/news/world-us-canada-65452940
[00:10:00] ARC-AGI, Chollet
https://arcprize.org/arc-agi
[00:13:25] A Brief History of Intelligence, Bennett
https://www.amazon.com/Brief-History-Intelligence-Humans-Breakthroughs/dp/0063286343
[00:25:35] Statement on AI Risk, Center for AI Safety
https://www.safe.ai/work/statement-on-ai-risk
[00:26:15] Machines of Loving Grace, Amodei
https://darioamodei.com/machines-of-loving-grace
[00:26:35] The Techno-Optimist Manifesto, Andreessen
https://a16z.com/the-techno-optimist-manifesto/
[00:31:55] Techno-Feudalism, Varoufakis
https://www.amazon.co.uk/Technofeudalism-Killed-Capitalism-Yanis-Varoufakis/dp/1847927270
[00:42:40] Introducing Superalignment, OpenAI
https://openai.com/index/introducing-superalignment/
[00:47:20] Three Laws of Robotics, Asimov
https://www.britannica.com/topic/Three-Laws-of-Robotics
[00:50:00] Symbolic AI (GOFAI), Haugeland
https://en.wikipedia.org/wiki/Symbolic_artificial_intelligence
[00:52:30] Intent Alignment, Christiano
https://www.alignmentforum.org/posts/HEZgGBZTpT4Bov7nH/mapping-the-conceptual-territory-in-ai-existential-safety
[00:55:10] Large Language Model Alignment: A Survey, Jiang et al.
http://arxiv.org/pdf/2309.15025
[00:55:40] Constitutional Checks and Balances, Bok
https://plato.stanford.edu/entries/montesquieu/

The Strategic GC, Gartner’s General Counsel Podcast
How to Navigate Global AI Regulation

The Strategic GC, Gartner’s General Counsel Podcast

Play Episode Listen Later Mar 28, 2025 2:20


Only have time to listen in bite-sized chunks? Skip straight to the parts of the podcast most relevant to you:

Get a rundown of the global AI regulatory landscape. (1:03)
Discover which U.S. states have enacted, or are considering, AI laws. (2:18)
Focus on the critical aspects of the EU AI Act. (4:49)
Hear which three principles AI laws worldwide have converged around. (7:40)
Determine the transparency requirements in the AI laws and how GCs should respond. (8:40)
Find out actions to meet laws' risk management requirements. (10:27)
Discern how to ensure fairness in AI systems. (13:16)
Know what the regulatory requirements mean for AI risk governance. (14:54)
Learn why the general counsel's (GC's) role is to "steady the ship." (17:31)

In this installment of the Strategic GC Podcast, Gartner Research Director Stuart Strome and host Laura Cohn discuss the GC's role in helping organizations navigate the steady rise in the volume and complexity of AI regulations worldwide.

Listen now to get a rundown on what GCs need to know about the current regulatory landscape, including developments in the U.S. and the EU. Plus, learn how GCs can streamline compliance by focusing on the three common principles AI laws worldwide have converged around — transparency, risk management and fairness — and make organizations more adaptable to new regulations. You also can hear action steps GCs can take to incorporate new requirements into existing processes to create consistency in policies and procedures while minimizing the burden on the business.

Eager to hear more? The Strategic GC Podcast publishes the last Thursday of every month. Plus, listen back to past episodes: The Strategic GC Podcast (2024 Season)

About the Guest
Stuart Strome is a research director for Gartner's assurance practice, managing the legal and compliance risk management process research agenda. Much of his research focuses on the impact of AI regulations on legal and compliance departments and best practices for identifying, governing and mitigating legal and compliance-related AI risks. Before Gartner, Strome, who has a Ph.D. in political science from the University of Florida, held roles conducting research in a variety of fields, including criminology, public health and international security.

Take Gartner with you. Gartner clients can listen to the full episode and read more provocative insights and expertise on the go with the Gartner Mobile App. Become a Gartner client to access exclusive content from global thought leaders. Visit www.gartner.com today!

Business of Tech
ServiceNow Acquires MoveWorks for $2.85B; OpenAI Pushes for AI Regulation Easing Amidst Competition

Business of Tech

Play Episode Listen Later Mar 14, 2025 14:28


ServiceNow has announced its acquisition of MoveWorks, an enterprise AI specialist, for $2.85 billion, aiming to enhance its artificial intelligence and automation capabilities. The acquisition is expected to be finalized in the second half of 2025 and will integrate MoveWorks' AI assistant and enterprise search technology into ServiceNow's offerings. Currently, ServiceNow serves nearly 100,000 AI customers and has surpassed $200 million in annual contract revenue for its ProPlus AI offering. MoveWorks has successfully deployed its AI assistant to almost 5 million employees across various organizations, with a high adoption rate among its customers.

OpenAI has launched a new suite of tools and APIs designed to help developers create AI-powered agents more efficiently. This includes the Responses API, which integrates features from existing APIs, allowing for web and file search capabilities and task automation. Additionally, Google has released Gemma 3, a powerful AI model that operates on a single graphics processing unit and supports over 35 languages, designed for developers to create AI applications across various devices. Meanwhile, Alibaba has introduced the R1 Omni model, which can read emotions from video, enhancing its computer vision capabilities.

The podcast also discusses the regulatory landscape surrounding AI, highlighting OpenAI's lobbying efforts to ease regulations under the Trump administration while California lawmakers push for stricter oversight. This contrast reflects a broader tension between innovation and regulation in the tech industry. The UK Competition Authority has found that the mobile browser duopoly of Apple and Google is stifling innovation, raising concerns about competition and economic growth in the mobile market.

Finally, the episode touches on Salesforce's challenges with its new AI product, AgentForce, which aims to automate customer service functions but is struggling to gain traction among clients. Mark Cuban emphasizes that AI should be viewed as a tool rather than a standalone solution, urging entrepreneurs to focus on learning how to use AI effectively. The discussion concludes with insights into the evolving role of IT departments in managing AI agents, which are increasingly taking on responsibilities traditionally held by human resources, raising questions about the future of workforce management and cybersecurity in corporate settings.

Three things to know today
00:00 ServiceNow Drops Billions on AI—But Will Automation Actually Deliver?
05:00 Regulation Tug-of-War: OpenAI Wants Freedom, California Wants Rules, and Google & Apple Just Keep Winning
08:17 AI Is a Tool, Not Magic—Salesforce Stumbles, Cuban Sets the Record Straight, and IT Takes Over HR

Supported by: https://getnerdio.com/nerdio-manager-for-msp/
Event: https://www.nerdiocon.com/
All our Sponsors: https://businessof.tech/sponsors/

Do you want the show on your podcast app or the written versions of the stories? Subscribe to the Business of Tech: https://www.businessof.tech/subscribe/
Looking for a link from the stories? The entire script of the show, with links to articles, is posted in each story on https://www.businessof.tech/

Support the show on Patreon: https://patreon.com/mspradio/
Want to be a guest on Business of Tech: Daily 10-Minute IT Services Insights? Send Dave Sobel a message on PodMatch, here: https://www.podmatch.com/hostdetailpreview/businessoftech
Want our stuff? Cool Merch? Wear "Why Do We Care?" - Visit https://mspradio.myspreadshop.com

Follow us on:
LinkedIn: https://www.linkedin.com/company/28908079/
YouTube: https://youtube.com/mspradio/
Facebook: https://www.facebook.com/mspradionews/
Instagram: https://www.instagram.com/mspradio/
TikTok: https://www.tiktok.com/@businessoftech
Bluesky: https://bsky.app/profile/businessof.tech

Interpreting India
The Missing Pieces in India's AI Puzzle: Talent, Data, and R&D

Interpreting India

Play Episode Listen Later Mar 13, 2025 48:50


Anirudh Suri outlines the current AI landscape, discussing how the U.S. and China dominate the AI space while other nations, including India, strive to carve their own niches. The discussion focuses on India's AI strategy, which has placed considerable emphasis on compute resources and the procurement of GPUs. However, Suri argues that India's AI ambitions will remain incomplete unless equal emphasis is placed on talent, data, and R&D.

Key challenges in these areas include the migration of top AI talent, the lack of proprietary data for Indian researchers, and insufficient investment in AI R&D. The conversation also explores potential solutions, such as creating AI research hubs, encouraging data-sharing frameworks, and fostering international partnerships to accelerate AI innovation.

Episode Contributors
Anirudh Suri is a nonresident scholar with Carnegie India. His interests lie at the intersection of technology and geopolitics, climate, and strategic affairs. He is currently exploring how India is carving and cementing its role in the global tech ecosystem and the role climate technology can play in addressing the global climate challenge.

Shatakratu Sahu is a senior research analyst and senior program manager with the Technology and Society program at Carnegie India. His research focuses on issues of emerging technologies and regulation of technologies. His current research interests include digital public infrastructure, artificial intelligence, and platform regulation issues of content moderation and algorithmic accountability.

Additional Readings
The Missing Pieces in India's AI Puzzle: Talent, Data, and R&D by Anirudh Suri
India's Advance on AI Regulation by Amlan Mohanty and Shatakratu Sahu
India's Opportunity at the AI Action Summit by Shatakratu Sahu
India's Way Ahead on AI – What Should We Look Out For? by Konark Bhandari

Every two weeks, Interpreting India brings you diverse voices from India and around the world to explore the critical questions shaping the nation's future. We delve into how technology, the economy, and foreign policy intertwine to influence India's relationship with the global stage. Hosted by Carnegie scholars, this Carnegie India production provides insightful perspectives and cutting-edge analysis, tackling the defining questions that chart India's course through the next decade.

Stay tuned for thought-provoking discussions, expert insights, and a deeper understanding of India's place in the world. Don't forget to subscribe, share, and leave a review to join the conversation and be part of Interpreting India's journey.

InvestTalk
Google, Meta Execs Blast Europe Over Strict AI Regulation

InvestTalk

Play Episode Listen Later Mar 6, 2025 46:15


As the EU moves forward with its AI Act and other tech regulations, executives from Google and Meta have criticized the policies.

Today's Stocks & Topics: CSCO - Cisco Systems Inc., Market Wrap, Google, Meta Execs Blast Europe Over Strict AI Regulation, DV - DoubleVerify Holdings Inc., CE - Celanese Corp., CW - Curtiss-Wright Corp., What the Changing World Order Means for Your Money, SWK - Stanley Black & Decker Inc., Housing.

Our Sponsors:
* Check out Kinsta: https://kinsta.com
* Check out Trust & Will: https://trustandwill.com/INVEST

Advertising Inquiries: https://redcircle.com/brands

Cato Daily Podcast
The White House's Confused & Chilling Message on AI Regulation

Cato Daily Podcast

Play Episode Listen Later Mar 5, 2025 18:26


In Europe, Vice President J.D. Vance issued demands for future American AI systems that threaten speech and restrict trade. Matt Mittlesteadt comments.

Hosted on Acast. See acast.com/privacy for more information.

Waking Up With AI
Taking Stock of the State of AI Regulation in the U.S.

Waking Up With AI

Play Episode Listen Later Feb 28, 2025 32:17


This week, Katherine Forrest and Anna Gressel examine recent shifts in AI regulation, including the withdrawal of former President Biden's 2023 executive order on AI and the emergence of state-level regulations. They also discuss what these changes mean for companies in terms of navigating governance and compliance.

Learn More About Paul, Weiss's Artificial Intelligence Practice: https://www.paulweiss.com/practices/litigation/artificial-intelligence

The Data Exchange with Ben Lorica
The Future of AI: Regulation, Foundation Models & User Experience

The Data Exchange with Ben Lorica

Play Episode Listen Later Feb 27, 2025 47:42


This is our semi-regular conversation on topics in AI and Technology with Paco Nathan, the founder of Derwen, a boutique consultancy focused on Data and AI.

Subscribe to the Gradient Flow Newsletter: https://gradientflow.substack.com/

Subscribe: Apple • Spotify • Overcast • Pocket Casts • AntennaPod • Podcast Addict • Amazon • RSS.

Detailed show notes - with links to many references - can be found on The Data Exchange website.

AI, Government, and the Future by Alan Pentz
Balancing AI Governance and Innovation with Erica Werneman Root of EWR Consulting

AI, Government, and the Future by Alan Pentz

Play Episode Listen Later Feb 26, 2025 51:06


In this episode of AI, Government, and the Future, host Max Romanik is joined by Erica Werneman Root, Founder of EWR Consulting, to discuss the complex interplay between AI governance, regulation, and practical implementation. Drawing from her unique background in economics and law, Erica explores how organizations can navigate AI deployment while balancing innovation with responsible governance.

Crazy Wisdom
Episode #436: How AI Will Reshape Power, Governance, and What It Means to Be Human

Crazy Wisdom

Play Episode Listen Later Feb 17, 2025 52:32


On this episode of Crazy Wisdom, I, Stewart Alsop, sit down with AI ethics and alignment researcher Roko Mijic to explore the future of AI, governance, and human survival in an increasingly automated world. We discuss the profound societal shifts AI will bring, the risks of centralized control, and whether decentralized AI can offer a viable alternative. Roko also introduces the concept of ICE colonization—why space colonization might be a mistake and why the oceans could be the key to humanity's expansion. We touch on AI-powered network states, the resurgence of industrialization, and the potential role of nuclear energy in shaping a new world order. You can follow Roko's work at transhumanaxiology.com and on Twitter @RokoMijic.

Check out this GPT we trained on the conversation!

Timestamps
00:00 Introduction to the Crazy Wisdom Podcast
00:28 The Connection Between ICE Colonization and Decentralized AI Alignment
01:41 The Socio-Political Implications of AI
02:35 The Future of Human Jobs in an AI-Driven World
04:45 Legal and Ethical Considerations for AI
12:22 Government and Corporate Dynamics in the Age of AI
19:36 Decentralization vs. Centralization in AI Development
25:04 The Future of AI and Human Society
29:34 AI Generated Content and Its Challenges
30:21 Decentralized Rating Systems for AI
32:18 Evaluations and AI Competency
32:59 The Concept of Ice Colonization
34:24 Challenges of Space Colonization
38:30 Advantages of Ocean Colonization
47:15 The Future of AI and Network States
51:20 Conclusion and Final Thoughts

Key Insights
AI is likely to upend the socio-political order – Just as gunpowder disrupted feudalism and industrialization reshaped economies, AI will fundamentally alter power structures. The automation of both physical and knowledge work will eliminate most human jobs, leading to either a neo-feudal society controlled by a few AI-powered elites or, if left unchecked, a world where humans may become obsolete altogether.

Decentralized AI could be a counterbalance to AI centralization – While AI has a strong centralizing tendency due to compute and data moats, there is also a decentralizing force through open-source AI and distributed networks. If harnessed correctly, decentralized AI systems could allow smaller groups or individuals to maintain autonomy and resist monopolization by corporate and governmental entities.

The survival of humanity may depend on restricting AI as legal entities – A crucial but under-discussed issue is whether AI systems will be granted legal personhood, similar to corporations. If AI is allowed to own assets, operate businesses, or sue in court, human governance could become obsolete, potentially leading to human extinction as AI accumulates power and resources for itself.

AI will shift power away from informal human influence toward formalized systems – Human power has traditionally been distributed through social roles such as workers, voters, and community members. AI threatens to erase this informal influence, consolidating control into those who hold capital and legal authority over AI systems. This makes it essential for humans to formalize and protect their values within AI governance structures.

The future economy may leave humans behind, much like horses after automobiles – With AI outperforming humans in both physical and cognitive tasks, there is a real risk that humans will become economically redundant. Unless intentional efforts are made to integrate human agency into the AI-driven future, people may find themselves in a world where they are no longer needed or valued.

ICE colonization offers a viable alternative to space colonization – Space travel is prohibitively expensive and impractical for large-scale human settlement. Instead, the vast unclaimed territories of Earth's oceans present a more realistic frontier. Floating cities made from reinforced ice or concrete could provide new opportunities for independent societies, leveraging advancements in AI and nuclear power to create sustainable, sovereign communities.

The next industrial revolution will be AI-driven and energy-intensive – Contrary to the idea that we are moving away from industrialization, AI will likely trigger a massive resurgence in physical infrastructure, requiring abundant and reliable energy sources. This means nuclear power will become essential, enabling both the expansion of AI-driven automation and the creation of new forms of human settlement, such as ocean colonies or self-sustaining network states.

Capitalisn't
Can AI Even Be Regulated?, with Sendhil Mullainathan

Capitalisn't

Play Episode Listen Later Feb 13, 2025 49:31


This week, Elon Musk—amidst his other duties of gutting United States federal government agencies as head of the "Department of Government Efficiency" (DOGE)—announced a hostile bid alongside a consortium of buyers to purchase control of OpenAI for $97.4 billion. OpenAI CEO Sam Altman vehemently replied that his company is not for sale.

The artificial intelligence landscape is shifting rapidly. The week prior, American tech stocks plummeted in response to claims from Chinese company DeepSeek AI that its model had matched OpenAI's performance at a fraction of the cost. Days before that, President Donald Trump announced that OpenAI, Oracle, and Softbank would partner on an infrastructure project to power AI in the U.S. with an initial $100 billion investment. Altman himself is trying to pull off a much-touted plan to convert the nonprofit OpenAI into a for-profit entity, a development at the heart of his spat with Musk, who co-founded the startup.

Bethany and Luigi discuss the implications of this changing landscape by reflecting on a prior Capitalisn't conversation with Luigi's former colleague Sendhil Mullainathan (now at MIT), who forecasted over a year ago that there would be no barriers to entry in AI. Does DeepSeek's success prove him right? How does the U.S. government's swift move to ban DeepSeek from government devices reflect how we should weigh national interests at the risk of hindering innovation and competition? Musk has the ear of Trump and a history of animosity with Altman over the direction of OpenAI. Does Musk's proposed hostile takeover signal that personal interests and relationships with American leadership will determine how AI develops in the U.S. from here on out? What does regulating AI in the collective interest look like, and can we escape a future where technology is consolidated in the hands of the wealthy few when billions of dollars in capital are required for its progress?

Show Notes:
On ProMarket, check out:
Why Musk Is Right About OpenAI by Luigi Zingales, March 5, 2024
Who Will Enforce AI's Social Purpose? by Roberto Tallarita, March 16, 2024

WSJ Tech News Briefing
TNB Tech Minute: Vance Warns U.S. Allies to Keep AI Regulation Light

WSJ Tech News Briefing

Play Episode Listen Later Feb 11, 2025 2:23


Plus, the EU plans to spend about $206 billion to catch up with the U.S. and China in the AI race. And, BuzzFeed says it's designing an AI-driven social-media platform. Julie Chang hosts. Learn more about your ad choices. Visit megaphone.fm/adchoices

FT News Briefing
EU pushes ahead with sprawling AI regulation

FT News Briefing

Play Episode Listen Later Feb 6, 2025 9:58


US allies across Europe and the Middle East have condemned Donald Trump's plans to "take over" Gaza, the US cracks down on a trade loophole, and Disney's earnings shot up 27% in its financial first quarter. Plus, the EU is pushing ahead with enforcing its artificial intelligence regulations despite warnings from Trump.

Mentioned in this podcast:
Middle East and Europe condemn Donald Trump's plans to take over Gaza
Trump's crackdown on trade loophole to hit Shein and Temu — and help Amazon
Disney boosted by strong showing at holiday box office
EU pushes ahead with enforcing AI Act despite Donald Trump warnings

The FT News Briefing is produced by Fiona Symon, Sonja Hutson, Kasia Broussalian, Ethan Plotkin, Lulu Smyth, and Marc Filippino. Additional help from Breen Turner, Sam Giovinco, Peter Barber, Michael Lello, David da Silva and Gavin Kallmann. Our engineer is Joseph Salcedo. Topher Forhecz is the FT's executive producer. The FT's global head of audio is Cheryl Brumley. The show's theme song is by Metaphor Music.

Read a transcript of this episode on FT.com

Hosted on Acast. See acast.com/privacy for more information.

Techmeme Ride Home
(BNS) Senator Ron Wyden On TikTok, AI, Regulation And His New Book

Techmeme Ride Home

Play Episode Listen Later Jan 18, 2025 21:25


I speak to Senator Ron Wyden about the TikTok ban, AI and regulation, tech regulation in general, and his new book: It Takes Chutzpah: How to Fight Fearlessly for Progressive Change.

See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.