Podcasts about AI policy

  • 258 PODCASTS
  • 370 EPISODES
  • 35m AVG DURATION
  • 5 WEEKLY NEW EPISODES
  • Aug 20, 2025 LATEST



Best podcasts about AI policy

Latest podcast episodes about AI policy

The Science of Politics
Making AI policy: Are we falling behind or rushing in?

The Science of Politics

Play Episode Listen Later Aug 20, 2025 58:31


As the next AI cycle begins, state and national governments are trying to keep up. And AI policy now matters for energy, health, education, foreign affairs, and economic development policy as well. What can we learn from the early AI legislation? Chinnu Parinandi finds that partisan alignments and institutional capacity shape where and how consumer protection versus economic development AI policies appear in the states. Heonuk Ha finds an AI boom in congressional legislation with key thematic clusters—from innovation and security to data governance and healthcare.

China Global
The Race to AI Dominance: US and Chinese Approaches Differ

China Global

Play Episode Listen Later Aug 19, 2025 28:25


The United States and China are locked in a race for dominance in artificial intelligence, including its applications and diffusion. American and Chinese AI firms like OpenAI and DeepSeek respectively have captured global attention, and major companies like Google and Microsoft have been actively investing in AI development. While the US currently boasts world-leading AI models, China is ahead in some areas of AI research and application. With the release of US and Chinese AI action plans in July, we may be on the cusp of a new phase in US-China AI competition. Why is AI so important for a country's global influence? What are the strengths of China's AI strategy? And what does China's new AI action plan tell us about its AI ambitions? To discuss these questions, we are joined by Owen Daniels. Owen is the Associate Director of Analysis at Georgetown's Center for Security and Emerging Technology and a Non-Resident Fellow at the Atlantic Council. His recently published Foreign Affairs article, "China's Overlooked AI Strategy," co-authored with Hanna Dohmen, provides insights into how Beijing is utilizing AI to gain global dominance and what the US can and should do to sustain and bolster its lead.

Timestamps:
[00:00] Start
[02:05] US Policy Risks to Chinese AI Leadership
[05:28] DeepSeek and Kimi's Newest Models
[07:54] US vs. China's Approach to AI
[10:42] Limitations to China's AI Strategy
[13:08] Using AI as a Soft Power Tool
[16:10] AI Action Plans
[19:34] Trump's Approach to AI Competition
[22:30] Can China Lead Global AI Governance?
[25:10] Evolving US Policy for Open Models

The Lawfare Podcast
Scaling Laws: What's Next in AI Policy (and for Dean Ball)?

The Lawfare Podcast

Play Episode Listen Later Aug 15, 2025 59:14


In this episode of Scaling Laws, Dean Ball, Senior Fellow at the Foundation for American Innovation and former Senior Policy Advisor for Artificial Intelligence and Emerging Technology at the White House Office of Science and Technology Policy, joins Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, and Alan Rozenshtein, Associate Professor at Minnesota Law and Research Director at Lawfare, to share an inside perspective on the Trump administration's AI agenda, with a specific focus on the AI Action Plan. The trio also explores Dean's thoughts on the recently released GPT-5 and the ongoing geopolitical dynamics shaping America's domestic AI policy.

Find Scaling Laws on the Lawfare website, and subscribe to never miss an episode. To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute. Support this show: http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.

a16z
The State of American AI Policy: From ‘Pause AI’ to ‘Build’

a16z

Play Episode Listen Later Aug 15, 2025 42:01


a16z General Partners Martin Casado and Anjney Midha join Erik Torenberg to unpack one of the most dramatic shifts in tech policy in recent memory: the move from “pause AI” to “win the AI race.” They trace the evolution of U.S. AI policy—from executive orders that chilled innovation, to the recent AI Action Plan that puts scientific progress and open source at the center. The discussion covers how technologists were caught off guard, why open source was wrongly equated to nuclear risk, and what changed the narrative—including China's rapid progress.

The conversation also explores:
  • How and why the AI discourse got captured by doomerism
  • What “marginal risk” really means—and why it matters
  • Why open source AI is not just ideology, but business strategy
  • How government, academia, and industry are realigning after a fractured few years
  • The effect of bad legislation—and what comes next

Whether you're a founder, policymaker, or just trying to make sense of AI's regulatory future, this episode breaks it all down.

Timecodes:
0:00 Introduction & Setting the Stage
0:39 The Shift in AI Regulation Discourse
2:10 Historical Context: Tech Waves & Policy
6:39 The Open Source Debate
13:39 The Chilling Effect & Global Competition
15:00 Changing Sentiments on Open Source
21:06 Open Source as Business Strategy
28:50 The AI Action Plan: Reflections & Critique
32:45 Alignment, Marginal Risk, and Policy
41:30 The Future of AI Regulation & Closing Thoughts

Resources:
Find Martin on X: https://x.com/martin_casado
Find Anjney on X: https://x.com/anjneymidha

Stay Updated:
Let us know what you think: https://ratethispodcast.com/a16z
Find a16z on Twitter: https://twitter.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Subscribe on your favorite podcast app: https://a16z.simplecast.com/
Follow our host: https://x.com/eriktorenberg

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.

Breaking Battlegrounds
Gary Saul Morson on Revolutions and Satya Thallam on the Future of AI Policy

Breaking Battlegrounds

Play Episode Listen Later Aug 15, 2025 75:57


This week on Breaking Battlegrounds, we kick things off with Northwestern University's Gary Saul Morson, co-author of Cents and Sensibility, who joins us to explore why revolutions never truly end, Dostoevsky's warnings about nihilism, and what economist Friedrich Hayek might think about artificial intelligence. We wrap up with Satya Thallam, senior advisor at Americans for Responsible Innovation, for an inside look at the political and national security implications of AI policy, from the White House's export control changes to the GOP's divide over state regulation, and what it all means for the future of innovation in America.

POLITICO Dispatch
An exit interview with Trump's AI policy adviser

POLITICO Dispatch

Play Episode Listen Later Aug 14, 2025 21:40


As a senior policy adviser in the Office of Science and Technology Policy, Dean Ball helped write President Donald Trump's recently released AI Action Plan. This week, Ball left the administration and plans to continue shaping AI policy from outside the White House. On POLITICO Tech, Ball joins host Steven Overly to discuss the government's role in regulating artificial intelligence, Trump allowing China to buy American microchips, and whether the rush of AI investment will lead to a market bubble.

Steven Overly is the host of POLITICO Tech and covers the intersection of politics and technology. Nirmal Mulaikal is the co-host and producer of POLITICO Energy and producer of POLITICO Tech.

Music courtesy of www.epidemicsound.com
Intro: https://www.epidemicsound.com/track/0KEjTXFuS0/
Outro: https://www.epidemicsound.com/track/MHh0nBFuwg/

Learn more about your ad choices. Visit megaphone.fm/adchoices

RTP's Free Lunch Podcast
Deep Dive 307 - America's AI Action Plan: Green Lights or Guardrails?

RTP's Free Lunch Podcast

Play Episode Listen Later Aug 14, 2025 51:23 Transcription Available


America’s new AI Action Plan — announced by the White House in July and framed by three pillars of accelerating innovation, building national AI infrastructure, and projecting U.S. leadership abroad — promises more than 90 separate federal actions, from fast-tracking approvals for medical-AI tools to revising international export controls on advanced chips. Supporters hail its light-touch approach, its swift domestic and international deployment of AI, and its explicit warnings against “ideological bias” in AI systems. In contrast, some critics say the plan removes guardrails, favors big tech, and is overshadowed by other actions disinvesting in research. How will the Plan impact AI in America? Join us for a candid discussion that will unpack the Plan’s major levers and ask whether the “innovation-first” framing clarifies or obscures deeper constitutional and economic questions.

Featuring:
Neil Chilson, Head of AI Policy, Abundance Institute
Mario Loyola, Senior Research Fellow, Environmental Policy and Regulation, Center for Energy, Climate, and Environment, The Heritage Foundation
Asad Ramzanali, Director of Artificial Intelligence & Technology Policy, Vanderbilt Policy Accelerator, Vanderbilt University
Moderator: Kevin Frazier, AI Innovation and Law Fellow, University of Texas School of Law

Arbiters of Truth
Navigating AI Policy: Dean Ball on Insights from the White House

Arbiters of Truth

Play Episode Listen Later Aug 14, 2025 58:11


Join us on Scaling Laws as we delve into the intricate world of AI policy with Dean Ball, former senior policy advisor at the White House's Office of Science and Technology Policy. Discover the behind-the-scenes insights into the Trump administration's AI Action Plan, the challenges of implementing AI policy at the federal level, and the evolving political landscape surrounding AI on the right. Dean shares his unique perspective on the opportunities and hurdles in shaping AI's future, offering a candid look at the intersection of technology, policy, and politics. Tune in for a thought-provoking discussion that explores the strategic steps America can take to lead in the AI era.

Play Big Faster Podcast
#206: Custom AI vs ChatGPT: Local Systems That Keep Data Secure | Sam Sammane

Play Big Faster Podcast

Play Episode Listen Later Aug 14, 2025 43:41


AI ethics expert Sam Sammane challenges Silicon Valley's artificial intelligence hype in this controversial entrepreneurship interview. The Theo Sim founder and nanotechnology PhD reveals why current AI regulations only help wealthy tech giants while blocking innovation for small businesses. Sam exposes the truth about ChatGPT privacy risks, demonstrates how personalized AI systems running locally protect your data better than cloud-based solutions, and shares his revolutionary context engineering approach that transforms generic chatbots into custom AI employees. Sam's contrarian take on AI policy, trustworthy AI development, and why schools must teach cognitive ethics now will reshape how you think about augmenting human intelligence. The future of AI belongs to businesses that act today, not tomorrow.

Talking Indonesia
Diah Angendari - AI Policy in Indonesia

Talking Indonesia

Play Episode Listen Later Aug 13, 2025 29:02


From the algorithms that curate your social media feed to the recommendation systems that influence what you buy, artificial intelligence is quietly reshaping every aspect of our daily lives. Yet most of us remain in the dark about how these powerful technologies are governed—and that's a problem we can't afford to ignore. Artificial Intelligence (or AI) policy isn't just about tech regulation; it's about who gets to shape the future of work, privacy, and power in our increasingly digital world. The rules being written today will determine whether AI serves all of society or just a privileged few. In this episode of Talking Indonesia, Dr Elisabeth Kramer dives into Indonesia's approach to AI governance, taking its cues from the private sector, with guest Diah Angendari. Diah Angendari is a PhD Candidate at Leiden University and her dissertation examines the interplay between imaginaries, power, and interests in policymaking. She's using the case study of AI in Indonesia to understand the factors that shape these policies. Prior to joining the PhD program, Diah was a lecturer in the Department of Communication Science at Gadjah Mada University.

AskAlli: Self-Publishing Advice Podcast
News: Shopify Bot Attack Hits Authors, UK and EU Enforce AI and Safety Laws, US Plans Pro-Tech AI Policy

AskAlli: Self-Publishing Advice Podcast

Play Episode Listen Later Aug 8, 2025 12:24


On this episode of the Self-Publishing News Podcast, Dan Holloway reports on a coordinated bot attack that hit indie authors using Shopify, leaving some with unexpected fees and limited recourse. He also covers new and proposed legislation across the UK, EU, and US, including the UK's Online Safety Act, concerns over enforcement of the EU AI Act, and the US White House's pro-tech AI action plan—all with implications for author rights and content access.

Sponsors:
Self-Publishing News is proudly sponsored by Bookvault. Sell high-quality, print-on-demand books directly to readers worldwide and earn maximum royalties selling directly. Automate fulfillment and create stunning special editions with BookvaultBespoke. Visit Bookvault.app today for an instant quote.
Self-Publishing News is also sponsored by book cover design company Miblart. They offer unlimited revisions, take no deposit to start work, and you pay only when you love the final result. Get a book cover that will become your number-one marketing tool.

Find more author advice, tips, and tools at our Self-Publishing Author Advice Center, with a huge archive of nearly 2,000 blog posts and a handy search box to find key info on the topic you need. And, if you haven't already, we invite you to join our organization and become a self-publishing ally.

About the Host:
Dan Holloway is a novelist, poet, and spoken word artist. He is the MC of the performance arts show The New Libertines, and he competed at the National Poetry Slam final at the Royal Albert Hall. His latest collection, The Transparency of Sutures, is available on Kindle.

Compliance Unfiltered With Adam Goslin
Episode 180 - No AI Policy? Your Company is Flirting with Disaster

Compliance Unfiltered With Adam Goslin

Play Episode Listen Later Aug 7, 2025 22:20


On this episode of Compliance Unfiltered, the CU guys delve into the critical need for AI policies within organizations. As AI technology rapidly evolves, many companies find themselves unprepared, risking exposure of sensitive data through platforms like ChatGPT. Adam emphasizes the urgency of implementing AI policies to protect against potential data breaches and compliance issues. Discover why having a robust AI policy is not just a best practice but a necessity in today's digital landscape. All this, and more, on this episode of Compliance Unfiltered.

In AI We Trust?
In AI We Trust? Ep. 11: Adam Thierer on AI, Innovation & Tech Policy

In AI We Trust?

Play Episode Listen Later Aug 5, 2025 58:40


In this episode of In AI We Trust?, cohosts Miriam Vogel and Nuala O'Connor are joined by Adam Thierer, resident senior fellow on R Street's Tech & Innovation team. Adam weighs in on the Trump Administration's AI Action Plan, the importance of Congress in developing AI policy, and existing legal principles and practices that help define the new digital and AI age. The conversation focuses on the mandate for AI literacy, as well as the necessity of regulating AI technologies in a transparent and trustworthy way that end users, and particularly consumers, can understand.

House of #EdTech
How to Build a Responsible AI Policy for Your Classroom - HoET262

House of #EdTech

Play Episode Listen Later Aug 3, 2025 31:16


In Episode 262 of the House of #EdTech, Chris Nesi explores the timely and necessary topic of creating a responsible AI policy for your classroom. With artificial intelligence tools becoming more integrated into educational spaces, the episode breaks down why teachers need to set clear expectations and how they can do it with transparency, collaboration, and flexibility. Chris offers a five-part framework that educators can use to guide students toward ethical and effective AI use. Before the featured content, Chris reflects on a growing internal debate: is it time to step back from tech-heavy classrooms and return to more analog methods? He also shares three edtech recommendations, including tools for generating copyright-free images, discovering daily AI tool capabilities, and randomizing seating charts for better classroom dynamics.

Topics Discussed:
EdTech Thought: Chris debates the “Tech or No Tech” question in modern classrooms

EdTech Recommendations:
https://nomorecopyright.com/ - Upload an image to transform it into a unique, distinct version designed solely for inspiration and creative exploration.
https://www.shufflebuddy.com/ - Never worry about seating charts again. Foster a strong classroom community by frequently shuffling your seating charts while respecting your students' individual needs.
https://whataicandotoday.com/ - We've analysed 16,362 AI tools and identified their capabilities with OpenAI GPT-4.1, to bring you a free list of 83,054 tasks of what AI can do today.

Featured Content:
Why classrooms need a responsible AI policy
A five-part framework to build your AI classroom policy:
1. Define What AI Is (and Isn't)
2. Clarify When and How AI Can Be Used
3. Promote Transparency and Attribution
4. Include Privacy and Tool Approval Guidelines
5. Make It Collaborative and Flexible
The importance of modeling digital citizenship and AI literacy
Free editable AI policy template by Chris for grades K–12

Mentions:
Mike Brilla – The Inspired Teacher podcast
Jake Miller – Educational Duct Tape podcast // Educational Duct Tape Book

AI Applied: Covering AI News, Interviews and Tools - ChatGPT, Midjourney, Runway, Poe, Anthropic

Dive into the Executive AI Action Plan with Conor Grennan and Jaeden Schafer as they explore its potential impact on AI development, energy policies, and international competition. Discover how this policy could shape the future of AI in the U.S. and beyond.

AI Applied YouTube Channel: https://www.youtube.com/@AI-Applied-Podcast
Try AI Box: https://aibox.ai
Conor's AI Course: https://www.ai-mindset.ai/courses
Conor's AI Newsletter: https://www.ai-mindset.ai/
Jaeden's AI Hustle Community: https://www.skool.com/aihustle/about

Chapters:
00:00 The Executive AI Action Plan: A New Direction
02:51 Geopolitical Implications & Energy Focus
05:55 DEI in AI: Balancing Bias & Representation
08:31 Data Centers & AI Regulation
11:09 Market Dynamics & AI Safety

Utah's Noon News
Utah lawmaker joins national task force on AI policy

Utah's Noon News

Play Episode Listen Later Jul 28, 2025 36:21


Newt's World
Episode 874: Winning the Race – America's AI Action Plan

Newt's World

Play Episode Listen Later Jul 26, 2025 25:55 Transcription Available


Newt talks with Neil Chilson, current head of AI Policy at the Abundance Institute, about President Trump’s “Winning the Race: America’s AI Action Plan,” which aims to accelerate AI innovation, build American AI infrastructure, and lead in international AI diplomacy and security. Chilson highlights the importance of AI for U.S. global dominance, emphasizing its potential in various sectors like healthcare and defense. Their conversation also touches on the strategic significance of Taiwan in chip production and the challenges of AI regulation, particularly in Europe. The Abundance Institute focuses on emerging technologies, advocating for a culture that embraces innovation and a regulatory environment that enables it. They conclude with optimism about AI's role in medicine and the potential for a future with greater technological advancements.See omnystudio.com/listener for privacy information.

The Lawfare Podcast
Lawfare Archive: AI Policy Under Technological Uncertainty, with Alex “amac” Macgillivray

The Lawfare Podcast

Play Episode Listen Later Jul 26, 2025 40:32


From July 23, 2024: Alan Rozenshtein, Associate Professor at the University of Minnesota Law School and Senior Editor at Lawfare, and Matt Perault, the Director of the Center on Technology Policy at the University of North Carolina at Chapel Hill, sat down with Alexander Macgillivray, known to all as "amac," the former Principal Deputy Chief Technology Officer of the United States in the Biden Administration and General Counsel at Twitter.

amac recently wrote a piece for Lawfare about making AI policy in a world of technological uncertainty, and Matt and Alan talked to him about how to do just that.

ChatGPT: News on Open AI, MidJourney, NVIDIA, Anthropic, Open Source LLMs, Machine Learning

Discover how President Trump is influencing the next chapter of AI development. We evaluate the implications of new regulations and support structures. Tune in to get expert perspectives on what's coming next in AI governance. Special focus is given to tech company responses. Our discussion includes expert opinions and recent statements from key stakeholders.

Try AI Box: https://aibox.ai
AI Chat YouTube Channel: https://www.youtube.com/@JaedenSchafer
Join my AI Hustle Community: https://www.skool.com/aihustle/about

The Lawfare Podcast
Scaling Laws: Rapid Response to the AI Action Plan

The Lawfare Podcast

Play Episode Listen Later Jul 25, 2025 64:09


Janet Egan, Senior Fellow with the Technology and National Security Program at the Center for a New American Security; Jessica Brandt, Senior Fellow for Technology and National Security at the Council on Foreign Relations; Neil Chilson, Head of AI Policy at Abundance Institute; and Tim Fist, Director of Emerging Technology Policy at the Institute for Progress, join Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, for a special version of Scaling Laws.

This episode was recorded just hours after the release of the AI Action Plan. About 180 days ago, President Trump directed his administration to explore ways to achieve AI dominance. His staff has attempted to do just that. This group of AI policy researchers dives into the plan's extensive recommendations and explores what may come next.

PMP Industry Insiders
Episode 235: How to Establish a Legal, Effective AI Policy

PMP Industry Insiders

Play Episode Listen Later Jul 24, 2025 42:56


Artificial intelligence is changing the way we do business, but it's still the Wild West. This week Dan and Donnie welcome Bennett Borden, CEO of Clarion AI Partners, who shares his expertise as an AI lawyer and data scientist. He covers how to responsibly implement policies for using AI — which he calls the most transformative technology since electricity — in pest and lawn companies.

Guest: Bennett Borden, CEO, Clarion AI Partners

Hosts:
Dan Gordon, PCO Bookkeepers & M&A Specialists
Donnie Shelton, Triangle Home Services

Bloomberg Talks
White House Senior Policy Advisor for AI Sriram Krishnan Talks AI Policy

Bloomberg Talks

Play Episode Listen Later Jul 24, 2025 7:22 Transcription Available


White House Senior Policy Advisor for AI Sriram Krishnan joined Bloomberg's Caroline Hyde and Ed Ludlow to discuss the latest on AI policy and what to expect as it evolves.

Arbiters of Truth
AI Action Plan: Janet Egan, Jessica Brandt, Neil Chilson, and Tim Fist

Arbiters of Truth

Play Episode Listen Later Jul 24, 2025 63:21


Janet Egan, Senior Fellow with the Technology and National Security Program at the Center for a New American Security; Jessica Brandt, Senior Fellow for Technology and National Security at the Council on Foreign Relations; Neil Chilson, Head of AI Policy at Abundance Institute; and Tim Fist, Director of Emerging Technology Policy at the Institute for Progress, join Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, for a special version of Scaling Laws.

This episode was recorded just hours after the release of the AI Action Plan. About 180 days ago, President Trump directed his administration to explore ways to achieve AI dominance. His staff has attempted to do just that. This group of AI policy researchers dives into the plan's extensive recommendations and explores what may come next.

PBS NewsHour - Segments
What’s in Trump’s new AI policy and why it matters

PBS NewsHour - Segments

Play Episode Listen Later Jul 23, 2025 6:38


President Trump unveiled his approach to the development of AI. Surrounded by some of the biggest names in tech, he signed three executive orders. One targets what Trump called "ideological bias" in AI chatbots, another aims to make it easier to build massive AI data centers and the third encourages the export of American AI tech. Amna Nawaz discussed the implications with Will Oremus. PBS News is supported by - https://www.pbs.org/newshour/about/funders

PBS NewsHour - Science
What’s in Trump’s new AI policy and why it matters

PBS NewsHour - Science

Play Episode Listen Later Jul 23, 2025 6:38


President Trump unveiled his approach to the development of AI. Surrounded by some of the biggest names in tech, he signed three executive orders. One targets what Trump called "ideological bias" in AI chatbots, another aims to make it easier to build massive AI data centers and the third encourages the export of American AI tech. Amna Nawaz discussed the implications with Will Oremus. PBS News is supported by - https://www.pbs.org/newshour/about/funders

Voices in Local Government
AI Sandbox for Local Government

Voices in Local Government

Play Episode Listen Later Jul 23, 2025 36:22


Key Takeaways for local government AI:
  • Why policy might not be the best starting point... start doing!
  • Get in the figurative sandbox to play and test AI with small teams for real outcomes.
  • Official policy documents and templates can be found (and copied!) via the GovAI Coalition.

Featured Guest:
Parth Shah – CEO and Co-Founder, Polimorphic

Voices in Local Government Podcast Hosts:
Joe Supervielle and Angelica Wedell

Resources:
ICMA Annual Conference, October 25-29 in Tampa. Multiple AI trainings on the ICMA Learning Lab.
AI policy, templates, and more tools from the GovAI Coalition
Voices in Local Gov episode: GovAI Coalition - Your Voice in Shaping the Future of AI

The Sunday Show
How US States Are Shaping AI Policy Amid Federal Debate and Industry Pushback

The Sunday Show

Play Episode Listen Later Jul 13, 2025 30:17


In the United States, state legislatures are key players in shaping artificial intelligence policy, as lawmakers attempt to navigate a thicket of politics surrounding complex issues ranging from AI safety, deepfakes, and algorithmic discrimination to workplace automation and government use of AI. The decision by the US Senate to exclude a moratorium on the enforcement of state AI laws from the budget reconciliation package passed by Congress and signed by President Donald Trump over the July 4 weekend leaves the door open for more significant state-level AI policymaking.

To take stock of where things stand on state AI policymaking, Tech Policy Press associate editor Cristiano Lima-Strong spoke to two experts: Scott Babwah Brennen, director of NYU's Center on Technology Policy, and Hayley Tsukayama, associate director of legislative activism at the Electronic Frontier Foundation (EFF).

Generative Now | AI Builders on Creating the Future
Inside AI Policy with FAI's Chief Economist Sam Hammond

Generative Now | AI Builders on Creating the Future

Play Episode Listen Later Jul 11, 2025 41:29


In this episode, Lightspeed Partner Michael Mignano sits down with the Foundation for American Innovation's Chief Economist Sam Hammond to talk AI policy. Sam breaks down the key infrastructure needed for AI development and how policymakers are adapting to rapid technological change. He also shares insights on AI training data and fair use, workforce disruption, and how, when it comes to AI, everything can change in just a few months.

Episode Chapters:
00:00 Introduction
00:55 Meet Sam Hammond: Background and Role
03:06 The Big AI Policy Issues
05:09 Energy and Chip Policy
06:47 Fair Use and Copyright in AI
13:37 The Urgency of AI Regulation
17:03 Potential AI Crisis and Legislative Response
20:25 Challenges in AI Regulation
21:39 Acceleration vs. Regulation in AI Development
22:34 AI Safety and National Security
23:51 Fair Use and Copyright in AI Training Data
25:39 AI-Induced Labor Disruptions
33:36 State-Level AI Regulation
36:02 Global Cooperation on AI Safety
37:29 Advice for AI Startups
38:34 Optimism for AI and Policy Advancements
41:07 Conclusion

Stay in touch:
www.lsvp.com
X: https://twitter.com/lightspeedvp
LinkedIn: https://www.linkedin.com/company/lightspeed-venture-partners/
Instagram: https://www.instagram.com/lightspeedventurepartners/
Subscribe on your favorite podcast app: generativenow.co
Email: generativenow@lsvp.com

The content here does not constitute tax, legal, business or investment advice or an offer to provide such advice, should not be construed as advocating the purchase or sale of any security or investment or a recommendation of any company, and is not an offer, or solicitation of an offer, for the purchase or sale of any security or investment product. For more details please see lsvp.com/legal.

Waking Up With AI
The State of U.S. AI Policy

Waking Up With AI

Play Episode Listen Later Jul 10, 2025 18:35


This week on “Paul, Weiss Waking Up With AI,” Katherine Forrest and Anna Gressel discuss the Senate's removal of a proposed AI moratorium from the “One Big Beautiful Bill Act,” and examine new state-level AI legislation in Colorado, Texas, New York, California and others.

Learn more about Paul, Weiss's Artificial Intelligence practice: https://www.paulweiss.com/industries/artificial-intelligence

Interpreting India
Beyond Superintelligence: A Realist's Guide to AI

Interpreting India

Play Episode Listen Later Jul 10, 2025 39:21


The episode begins with Kapoor explaining the origins of AI Snake Oil, tracing it back to his PhD research at Princeton on AI's limited predictive capabilities in social science domains. He shares how he and co-author Arvind Narayanan uncovered major methodological flaws in civil war prediction models, which later extended to other fields misapplying machine learning.The conversation then turns to the disconnect between academic findings and media narratives. Kapoor critiques the hype cycle around AI, emphasizing how its real-world adoption is slower, more fragmented, and often augmentative rather than fully automating human labor. He cites the enduring demand for radiologists as a case in point.Kapoor introduces the concept of “AI as normal technology,” which rejects both the notion of imminent superintelligence and the dismissal of AI as a passing fad. He argues that, like other general-purpose technologies (electricity, the internet), AI will gradually reshape industries, mediated by social, economic, and organizational factors—not just technical capabilities.The episode also examines the speculative worldviews put forth by documents like AI 2027, which warn of AGI-induced catastrophe. Kapoor outlines two key disagreements: current AI systems are not technically on track to achieve general intelligence, and even capable systems require human and institutional choices to wield real-world power.On policy, Kapoor emphasizes the importance of investing in AI complements—such as education, workforce training, and regulatory frameworks—to enable meaningful and equitable AI integration. He advocates for resilience-focused policies, including cybersecurity preparedness, unemployment protection, and broader access to AI tools.The episode concludes with a discussion on recalibrating expectations. Kapoor urges policymakers to move beyond benchmark scores and collaborate with domain experts to measure AI's real impact. 
In a rapid-fire segment, he names the myth of AI predicting the future as the most misleading and humorously imagines a superintelligent AI fixing global cybersecurity first if it ever emerged.

Episode Contributors
Sayash Kapoor is a computer science Ph.D. candidate at Princeton University's Center for Information Technology Policy. His research focuses on the societal impact of AI. He previously worked on AI in industry and academia at Facebook, Columbia University, and EPFL Switzerland. He is a recipient of a best paper award at ACM FAccT and an impact recognition award at ACM CSCW.
Nidhi Singh is a senior research analyst and program manager at Carnegie India. Her current research interests include data governance, artificial intelligence, and emerging technologies. Her work focuses on the implications of information technology law and policy from a Global Majority and Asian perspective.

Suggested Readings
AI as Normal Technology by Arvind Narayanan and Sayash Kapoor.

Every two weeks, Interpreting India brings you diverse voices from India and around the world to explore the critical questions shaping the nation's future. We delve into how technology, the economy, and foreign policy intertwine to influence India's relationship with the global stage. Hosted by Carnegie scholars, this Carnegie India production provides insightful perspectives and cutting-edge analysis, tackling the defining questions that chart India's course through the next decade.

Stay tuned for thought-provoking discussions, expert insights, and a deeper understanding of India's place in the world. Don't forget to subscribe, share, and leave a review to join the conversation and be part of Interpreting India's journey.

Cato Event Podcast
AI Policy Today and Beyond: A Fireside Chat with Rep. Rich McCormick

Cato Event Podcast

Play Episode Listen Later Jun 25, 2025 50:34


Join us at the Cato Institute for an in-depth fireside chat featuring Congressman Rich McCormick and Matt Mittelsteadt, Cato policy fellow in technology. This timely conversation will explore the evolving landscape of artificial intelligence (AI) and cybersecurity policy, and the state of AI in Congress.

Join us for a discussion on the current state of AI governance at the federal and state levels, the proposal for a 10-year moratorium on state and local AI regulations (what it means, and what's at stake), and the long-term vision for responsible, innovation-friendly AI policy in the United States.

Whether you're a policymaker, tech professional, academic, or simply interested in the future of AI regulation, this is a must-attend conversation on how to balance innovation, security, and civil liberties in the age of artificial intelligence. Hosted on Acast. See acast.com/privacy for more information.

Tech Hive: The Tech Leaders Podcast
#115: The Rt Hon Stephen McPartland, Former MP and Minister for National Security

Tech Hive: The Tech Leaders Podcast

Play Episode Listen Later Jun 25, 2025 53:51


"You can't have good Cyber Security without Economic Security." Join us this week on The Tech Leaders Podcast, where Gareth Davies sits down with The Rt Hon Stephen McPartland, former MP and author of the McPartland Review into Cyber Security. Stephen talks about his time in Parliament, the impact of AI on Cyber Security, and why the UK is both uniquely well prepared and uniquely vulnerable. On this episode, Stephen and Gareth discuss what it's like to work with Prime Ministers, how to prevent the widespread adoption of AI leading to “digital exclusion”, why we need to automate processes rather than jobs, and how a Scouser became Tory MP for Stevenage…

Timestamps:
Intro and good leadership (1:33)
Proudest achievements and lessons learned in Politics (7:54)
Ministerial role, and working with Prime Ministers (10:09)
Cyber Security and the Digital Economy (17:50)
AI, Government and Cyber Security (23:23)
Fostering a Cyber workforce (29:35)
LLMs and Agentic AI (33:14)
Cryptocurrencies and Post-Quantum Cryptography (38:28)
AI concerns – Digital exclusion and Rules of Engagement (46:25)
Stephen's advice to his younger self (49:00)

https://www.bedigitaluk.com/

In AI We Trust?
AI Literacy Series Ep. 10: Angie Cooper's Call to Action for the Heartland

In AI We Trust?

Play Episode Listen Later Jun 24, 2025 37:20


In Episode 10 of the In AI We Trust? AI Literacy series, Angie Cooper's Call to Action for the Heartland, Miriam Vogel talks with Angie Cooper, President and Chief Operating Officer of Heartland Forward, to explore how artificial intelligence (AI) can accelerate economic growth across America's Heartland. The discussion follows Heartland Forward's recent annual Heartland Summit, the data-driven insights that inform their mission, and their partnership with Stemuli to create a first-of-its-kind AI literacy video game to promote AI learning for rural students. Angie stresses the importance of increasing access to AI by expanding affordable, high-speed internet and building trust with AI platforms through education initiatives and open conversations with teachers and employers. This episode explores how AI can be utilized as a tool to benefit small businesses, prepare students for the workforce, and advance jobs throughout the Heartland and beyond. 

NatSec Tech
Episode 78: Margaret Busse on Balancing Innovation and Protection

NatSec Tech

Play Episode Listen Later Jun 18, 2025 21:34


Margaret Woolley Busse, Executive Director of the Utah Department of Commerce, joins host Jeanne Meserve to discuss Utah's establishment of an Office of AI Policy, Utah's thriving tech sector, and regulations and protections on AI. Busse explains the office's three core objectives—encouraging innovation, protecting the public, and building a continuous learning function within government. The discussion highlights the office's successful work on mental health chatbots and its future plans to tackle deepfakes and AI companions. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit scsp222.substack.com

Technology for Business
Your AI Policy guide

Technology for Business

Play Episode Listen Later Jun 18, 2025 35:44


Ann, CIT's Quality Assurance Analyst, joins us to discuss the critical topic of AI policy. With the rapid evolution and adoption of AI technology, organizations need to establish clear guidelines and ethical considerations. Ann shares insights on how AI can be leveraged within different industries, defining AI in the context of policy, and the importance of collaborative efforts in policy creation. Learn about the critical foundational elements, the role of training, transparency, and how to ensure ethical use in your AI initiatives. Whether your company is just starting or looking to refine its AI policy, this episode offers invaluable guidance and expert advice.

00:00 Introduction to AI Policy
00:23 Defining AI in Policy Context
03:24 Collaborative Efforts in Policy Making
06:37 Effective Communication of AI Policies
08:27 Ethical Considerations in AI Policy
13:49 Transparency in AI Use
25:53 Starting Points for Creating AI Policies
31:03 Common Missteps and Foundational Elements
35:10 Conclusion and Final Thoughts

Cato Event Podcast
What Is the Opportunity Cost of State AI Policy?

Cato Event Podcast

Play Episode Listen Later Jun 12, 2025 59:43


Proposals to regulate artificial intelligence (AI) at the state level continue to increase. Unfortunately, these proposals could disrupt advances in this important technology, even where strong federal policy exists. This policy forum, related to an upcoming policy analysis on the topic, will explore the potential economic costs of state-level AI regulation as well as the barriers it creates in the market for both consumers and innovators. How might state AI policy conversations discourage or encourage the important policy debates around AI innovation?

The Road to Accountable AI
Brenda Leong: Building AI Law Amid Legal Uncertainty

The Road to Accountable AI

Play Episode Listen Later Jun 12, 2025 36:52 Transcription Available


Kevin Werbach interviews Brenda Leong, Director of the AI division at boutique technology law firm ZwillGen, to explore how legal practitioners are adapting to the rapidly evolving landscape of artificial intelligence. Leong explains why meaningful AI audits require deep collaboration between lawyers and data scientists, arguing that legal systems have not kept pace with the speed and complexity of technological change. Drawing on her experience at Luminos.Law—one of the first AI-specialist law firms—she outlines how companies can leverage existing regulations, industry-specific expectations, and contextual risk assessments to build practical, responsible AI governance frameworks. Leong emphasizes that many organizations now treat AI oversight not just as a legal compliance issue, but as a critical business function. As AI tools become more deeply embedded in legal workflows and core operations, she highlights the growing need for cautious interpretation, technical fluency, and continuous adaptation within the legal field.

Brenda Leong is Director of ZwillGen's AI Division, where she leads legal-technical collaboration on AI governance, risk management, and model audits. Formerly Managing Partner at Luminos.Law, she pioneered many of the audit practices now used at ZwillGen. She serves on the Advisory Board of the IAPP AI Center, teaches AI law at IE University, and previously led AI and ethics work at the Future of Privacy Forum.

Transcript
AI Audits: Who, When, How...Or Even If?
Why Red Teaming Matters Even More When AI Starts Setting Its Own Agenda

Federal Drive with Tom Temin
2019 executive order began a trend toward White House-centered AI policy

Federal Drive with Tom Temin

Play Episode Listen Later Jun 12, 2025 9:03


A 2019 executive order was a significant landmark in the nation's regulation and management of artificial intelligence technologies in government, but it was just the first of many. Among other things, that first executive order on AI focused on building research and development and the nation's AI workforce, and on those topics it shared a lot in common with five subsequent orders on AI. That first order is one of 25 significant moments Federal News Network is marking this year as part of our 25th anniversary. Federal News Network's Jared Serbu has been writing about it this week, and he's here with more. See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

In AI We Trust?
AI Literacy Series Ep. 9: Robbie Torney of Common Sense Media & Special Co-host Nuala O'Connor

In AI We Trust?

Play Episode Listen Later Jun 10, 2025 69:49


In this episode of In AI We Trust?, Miriam & Nuala speak with Common Sense Senior Director of AI Programs Robbie Torney to discuss AI's impact on children, families, and schools, focusing on AI literacy, which builds upon media and digital literacy. Robbie advises parents to engage in tech conversations with curiosity and empathy and encourages educators to view AI as a tool to enhance learning, noting students' prevalent use. Common Sense Media provides AI training and risk assessments for educators. Torney aims to bridge digital divides and supports AI implementation in underserved schools, highlighting risks of AI companions for vulnerable youth and developing resources for school AI readiness and risk assessments. The episode stresses the importance of AI literacy and critical thinking to navigate AI's complexities and minimize harm.

The EqualAI AI Literacy podcast series builds on In AI We Trust?'s global reach, focusing specifically on AI literacy. Featuring prominent leaders in the technology, education, and governance fields, this special series provides listeners with valuable insights and discussions around AI's impact on society, leading efforts in this area of AI literacy, and how listeners can benefit from these experts and tools.

Related Resources
Episode Blog Post
AI Risk Assessments
AI Basics for K–12 Teachers
Parents' Ultimate Guide to AI Companions and Relationships
2025: The Common Sense Census
2024: The Dawn of the AI Era

Driven by Data: The Podcast
S5 | Ep 28 | AI Upskilling: Rethinking Readiness, Responsibility, and Real Impact with McKinley Hyden, Director of Data Value & Strategy at Financial Times.

Driven by Data: The Podcast

Play Episode Listen Later Jun 10, 2025 56:43


In Episode 28 of Season 5 of Driven by Data: The Podcast, Kyle Winterbottom was joined by McKinley Hyden, Director of Data Value & Strategy at the Financial Times, where they discuss McKinley's journey from a background in literature to a career in data, the role of the Financial Times in providing quality journalism, and the importance of data in driving strategic decisions. McKinley shares insights on the challenges of valuing data, the need for cultural change in organisations to embrace data as an asset, and the significance of upskilling in AI. The conversation also touches on the importance of effective communication and knowledge management in data analytics, as well as the future of AI in business. In this conversation, McKinley Hyden and Kyle Winterbottom explore the profound impact of technology, particularly AI, on society, education, and the workforce. They discuss the moral implications of AI, the need for responsible use, and the importance of upskilling to navigate the changing landscape. The conversation emphasizes the necessity of creating effective AI policies to ensure ethical practices and the potential for job transformation in the face of technological advancements.

00:00 Introduction to Data and Storytelling
02:14 Understanding the Financial Times
04:19 The Role of Data Value and Strategy
09:33 Upskilling in Data and AI
10:44 Valuing Data as an Asset
14:12 Overcoming Resistance to Change
19:25 Defining Value in Data
22:20 Communications and Knowledge Management
27:31 The Future of AI in Business
28:44 The Impact of Technology on Society
30:53 Navigating the Moral Hazards of AI
32:44 The Future of Education in an AI World
35:19 Job Transformation and the Role of AI
41:47 Upskilling for the AI Era
44:42 Creating an AI Policy for Responsible...

a16z on Protecting Little Tech: The Techno-Optimist AI Policy Agenda with Matt Perault, Head of AI Policy

Play Episode Listen Later Jun 9, 2025 63:40


In this episode, Matt Perault, Head of AI Policy at a16z, discusses their approach to AI regulation focused on protecting "little tech" startups from regulatory capture that could entrench big tech incumbents. The conversation covers a16z's core principle of regulating harmful AI use rather than the development process, exploring key policy initiatives like the Raise Act and California's SB 813. Perault addresses critical challenges including setting appropriate regulatory thresholds, transparency requirements, and designing dynamic frameworks that balance innovation with safety. The discussion examines both areas of agreement and disagreement within the AI policy landscape, particularly around scaling laws, regulatory timing, and the concentration of AI capabilities. Disclaimer: This information is for general educational purposes only and is not a recommendation to buy, hold, or sell any investment or financial product. Turpentine is an acquisition of a16z Holdings, L.L.C., and is not a bank, investment adviser, or broker-dealer. This podcast may include paid promotional advertisements, individuals and companies featured or advertised during this podcast are not endorsing AH Capital or any of its affiliates (including, but not limited to, a16z Perennial Management L.P.). Similarly, Turpentine is not endorsing affiliates, individuals, or any entities featured on this podcast. All investments involve risk, including the possible loss of capital. Past performance is no guarantee of future results and the opinions presented cannot be viewed as an indicator of future performance. Before making decisions with legal, tax, or accounting effects, you should consult appropriate professionals. Information is from sources deemed reliable on the date of publication, but Turpentine does not guarantee its accuracy. 
SPONSORS: Oracle Cloud Infrastructure: Oracle Cloud Infrastructure (OCI) is the next-generation cloud platform that delivers better, cheaper, and faster solutions for your infrastructure, database, application development, and AI needs. Experience up to 50% savings on compute, 70% on storage, and 80% on networking with OCI's high-performance environment—try it for free with zero commitment at https://oracle.com/cognitive The AGNTCY: The AGNTCY is an open-source collective dedicated to building the Internet of Agents, enabling AI agents to communicate and collaborate seamlessly across frameworks. Join a community of engineers focused on high-quality multi-agent software and support the initiative at https://agntcy.org/?utm_campaign=fy25q4_agntcy_amer_paid-media_agntcy-cognitiverevolution_podcast&utm_channel=podcast&utm_source=podcast NetSuite by Oracle: NetSuite by Oracle is the AI-powered business management suite trusted by over 41,000 businesses, offering a unified platform for accounting, financial management, inventory, and HR. Gain total visibility and control to make quick decisions and automate everyday tasks—download the free ebook, Navigating Global Trade: Three Insights for Leaders, at https://netsuite.com/cognitive PRODUCED BY: https://aipodcast.ing

The National Security Podcast
AI, rights and rules: who's accountable in an automated world?

The National Security Podcast

Play Episode Listen Later Jun 5, 2025 43:14


Can differing global approaches to AI regulation and investment work together, or are we headed toward fragmented, siloed systems? How can AI governance in developing nations be supported as part of regional aid and security agendas? What challenges does Australia face in regulating AI without a national bill of rights or federal human rights charter? Should governments mandate the inclusion of human oversight in all AI-powered decisions? In this episode, Sarah Vallee and Maria O'Sullivan join David Andrews to talk about how AI is impacting national security, with a focus on AI governance models and mass surveillance.

Maria O'Sullivan is an Associate Professor at Deakin Law School. She's a member of the Deakin Cyber Research and Innovation Centre.
Sarah Vallee is a specialist in AI Policy and Governance. She's a Fellow at the UTS Human Technology Institute, sponsored by the French Ministry of Foreign Affairs.
David Andrews is Senior Manager, Policy & Engagement at the ANU National Security College.

TRANSCRIPT
Show notes
NSC academic programs – find out more
Article 8: respect for your private and family life

We'd love to hear from you! Send in your questions, comments, and suggestions to NatSecPod@anu.edu.au. You can tweet us @NSC_ANU and be sure to subscribe so you don't miss out on future episodes.

Chicago's Morning Answer with Dan Proft & Amy Jacobson

0:00 - HeteroAwesomeness Month
13:02 - Elon Musk comes out against the Big Beautiful Bill
27:15 - US Open qualifying
30:14 - Mamet on Maher podcast
55:25 - James A. Gagliano, retired FBI supervisory special agent and a doctoral candidate in homeland security at St. John’s University, on the "unhealthy" direction of college campuses - "we are becoming the architects of our own demise"
01:11:51 - CA 400M champ Clara Adams stripped of title
01:26:04 - Chief Economist at First Trust Portfolios LP, Brian Wesbury, on the Big Beautiful Bill - "the last two years of government spending were some of the most irresponsible budgets we have ever seen" Follow Brian on X @wesbury
01:49:30 - Emeritus professor of law, Harvard Law School, Alan Dershowitz, shares details from his new book The Preventive State: The Challenge of Preventing Serious Harms While Preserving Essential Liberties. For more from Professor Dershowitz, check out his podcast “The Dershow” on Spotify, YouTube and iTunes
02:07:58 - Neil Chilson, former Chief Technologist for the FTC and currently Head of AI Policy at the Abundance Institute, on the risks, rewards and myths of AI. Check out Neil’s substack outofcontrol.substack.com
See omnystudio.com/listener for privacy information.

The EdUp Experience
How the 'Stoplight Approach' Could Solve AI Policy Challenges - with Christian Moriarty, Professor of Ethics & Law, Applied Ethics Institute, St. Petersburg College

The EdUp Experience

Play Episode Listen Later May 29, 2025 42:17


It's YOUR time to #EdUp

In this episode, part of our Academic Integrity Series, sponsored by Pangram Labs,
YOUR guest is Christian Moriarty, Professor of Ethics & Law, Applied Ethics Institute, St. Petersburg College
YOUR cohost is Bradley Emi, Cofounder & CTO, Pangram Labs
YOUR host is Elvin Freytes

How does Christian define academic integrity from both legal & philosophical perspectives?
Why do students often "cheat" even when they have good intentions & strong moral values?
What is the role of faculty in supporting students to act with integrity & resist temptation?
How can institutions implement effective AI policies that respect different teaching contexts?
Why is Christian predicting a return to in-class writing or required keystroke tracking software?

Topics include:
The tension between rules-based & values-based approaches to academic integrity
The importance of empathy in understanding why students make poor choices
The "stoplight approach" to AI use policies (green/yellow/red options for different contexts)
Finding the balance between trusting students & verifying their work
The challenges of time management for community college students
The value of specialized academic integrity offices in educational institutions
Why "difficulty is part of the process" in genuine learning & skill development
The connection between integrity & asking for help when needed

Listen in to #EdUp

Do YOU want to accelerate YOUR professional development? Do YOU want to get exclusive early access to ad-free episodes, extended episodes, bonus episodes, original content, invites to special events, & more? Then BECOME A SUBSCRIBER TODAY - $19.99/month or $199.99/year (Save 17%)! Want to get YOUR organization to pay for YOUR subscription? Email EdUp@edupexperience.com

Thank YOU so much for tuning in. Join us on the next episode for YOUR time to EdUp!

Connect with YOUR EdUp Team - Elvin Freytes & Dr. Joe Sallustio
● Join YOUR EdUp community at The EdUp Experience
We make education YOUR business!

In AI We Trust?
SPECIAL EDITION A Pre-Summit Conversation With Van Jones

In AI We Trust?

Play Episode Listen Later May 28, 2025 16:11


In this special episode of In AI We Trust?, recorded live at the launch of the EqualAI C-Suite Summit in Washington, D.C., host Miriam Vogel sits down with the dynamic Van Jones — acclaimed social entrepreneur, innovator, and tech evangelist. Together, they dive into a thought-provoking conversation about how AI can be a transformative force for opportunity creation. With his trademark clarity and conviction, Van offers a hopeful vision for the future of AI — one that empowers communities and drives societal progress, but only if we lead with the right values and policies at this critical moment.

Related Resources
Dream Machine AI x Library Project

The Shannon Joy Show
Trump's Big Bad Bull-Shit Budget Betrayal Bill Prohibits States From Interfering With Federal AI Programs For Ten Years! Rage Against This America. With Special Guest Dr. William Makis.

The Shannon Joy Show

Play Episode Listen Later May 22, 2025 90:18


SJ Show Notes:
Follow Dr. Makis HERE: https://substack.com/@makismd
https://x.com/MakisMD
makisw79@yahoo.com
Please support Shannon's independent network with your donation HERE: https://www.paypal.com/donate/?hosted_button_id=MHSMPXEBSLVTS

Support Our Sponsors:
You can get 20% off your first order of Blackout Coffee! Just head to http://blackoutcoffee.com/joy and use code joy at checkout.
The Satellite Phone Store has everything you need when the POWER goes OUT. Use the promo code JOY for 10% off your entire order TODAY! www.SAT123.com/Joy
Get 45% OFF Native Path HYDRATE today! Special exclusive deal for the Joy audience only! Check it out HERE: www.nativepathhydrate.com/joy
Colonial Metals Group is the company Shannon trusts for all her metals purchases! Set up a SAFE & Secure IRA or 401k with a company who shares your values! Learn more HERE: https://colonialmetalsgroup.com/joy
Please consider Dom Pullano of PCM & Associates! He has been Shannon's advisor for over a decade and would love to help you grow! Call his toll free number today: 1-800-536-1368 Or visit his website at https://www.pcmpullano.com

Shannon's Top Headlines May 22, 2025:
Trump's 'Big Beautiful Bill' would create 'unfettered abuse' of AI: Business Insider
Trump's 'Big Beautiful Bill' would create 'unfettered abuse' of AI, 141 high-profile orgs warn in letter to Congress
When it Comes to AI Policy, Congress Shouldn't Cut States off at the Knees: https://garymarcus.substack.com/p/when-it-comes-to-ai-policy-congress?r=fuu7w&utm_medium=ios
Ron Johnson: The Ugly Truth About Trump's Big Beautiful Bill: https://x.com/SenRonJohnson/status/1923057940908454239
WATCH: Dr. Peter McCullough's Truth Bombs In Testimony Yesterday: https://x.com/MJTruthUltra/status/1925271018387763352
Dr. William Makis: Scott Adams reveals his Prostate Cancer and our attempts to beat it - my response to Scott's Podcast: https://substack.com/home/post/p-163941944
Renowned Data Analyst Warns Excess Deaths Are Surging 'Off the Charts': https://substack.com/home/post/p-162167578

Stop this bill. Shut the government down. Because it is becoming increasingly clear that every penny given to these psychopaths can and will be used against we the people.

Hidden deep within Trump's budget monstrosity is a clause which threatens every American, our Constitutional Republic, and humanity. Trump's 'big beautiful budget bill' sneaks in a section which prohibits states from interfering with AI programs and development, as well as machine decision making, for ten years.

“No state or political subdivision may enforce any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems during the 10-year period beginning on the date of the enactment of this Act.”

There is SO much wrong with this and frankly, it's the cherry on top of a dumpster fire of a bill which betrays nearly every promise made by Trump in the 2024 election. There is nothing good about this bill and in my opinion the best we can do is dump it completely and shut down the government. Interestingly, 'shutting down the government' actually KEEPS the essential spending in place (like Social Security benefits and Medicare) while suspending all the grift and billionaire benefits. It's exactly what we need.

That's the bad news … but there is GOOD news too! Today we will talk to a frontline medical freedom warrior who is actually saving lives through life saving cancer treatments. Dr. William Makis is living proof that there ARE solutions out there and I cannot wait to talk to him again. We discuss this and more today on the SJ Show!

Join the Rumble LIVE chat and follow my Rumble Page HERE so you never miss an episode: https://rumble.com/c/TheShannonJoyShow

Caveat
The AI policy divide.

Caveat

Play Episode Listen Later May 1, 2025 42:56


Please enjoy this encore episode of Caveat. This week on Caveat, Dave and Ben are thrilled to welcome back N2K's own Ethan Cook for the second installment of our newest policy deep dive segment. As a trusted expert in law, privacy, and surveillance, Ethan is joining the show regularly to provide in-depth analysis on the latest policy developments shaping the cybersecurity and legal landscape. While this show covers legal topics, and Ben is a lawyer, the views expressed do not constitute legal advice. For official legal advice on any of the topics we cover, please contact your attorney. Please take a moment to fill out an audience survey! Let us know how we are doing!

Policy Deep Dive
In this Caveat Policy Deep Dive, we turn our focus to the evolving landscape of artificial intelligence (AI) policy. This month, the Caveat team delves into the key issues shaping political discourse around AI, exploring state-led initiatives, the lack of significant federal action, and the critical areas that still require stronger oversight, offering an in-depth analysis of AI legislation, the varied approaches across states, and the pressing challenges that demand federal attention.

Get the weekly Caveat Briefing delivered to your inbox. Like what you heard? Be sure to check out and subscribe to our Caveat Briefing, a weekly newsletter available exclusively to N2K Pro members on N2K CyberWire's website. N2K Pro members receive our Thursday wrap-up covering the latest in privacy, policy, and research news, including incidents, techniques, compliance, trends, and more. This week's Caveat Briefing covers the story of the Paris AI summit, where French President Emmanuel Macron and EU digital chief Henna Virkkunen announced plans to reduce regulatory barriers to support AI innovation. The summit highlighted the growing pressure on Europe to adopt a lighter regulatory touch in order to remain competitive with the U.S. and China, while also addressing concerns about potential risks and the impact on workers as AI continues to evolve. Curious about the details? Head over to the Caveat Briefing for the full scoop and additional compelling stories.

Got a question you'd like us to answer on our show? You can send your audio file to caveat@thecyberwire.com. Hope to hear from you. Learn more about your ad choices. Visit megaphone.fm/adchoices

The EdUp Experience
What Makes an Effective AI Policy? - with Dr. Elizabeth Skomp, Provost & Vice President of Academic Affairs, Stetson University

The EdUp Experience

Play Episode Listen Later Apr 24, 2025 38:03


It's YOUR time to #EdUp

In this episode, part of our Academic Integrity Series, sponsored by Pangram Labs,
YOUR guest is Dr. Elizabeth Skomp, Provost & Vice President of Academic Affairs, Stetson University
YOUR cohost is Bradley Emi, Cofounder & CTO, Pangram Labs
YOUR host is Elvin Freytes

How does Dr. Skomp define academic integrity & its student-led honor system at Stetson?
What strategies does Stetson use with their honor pledge & code?
How does Stetson integrate AI tools ethically with their 3 syllabus templates?
What approach does faculty take when considering AI in course design?
Why does the university focus on "learning opportunities" rather than punitive measures?

Topics include:
Creating a student-led, faculty-advised honor system
The importance of faculty modeling academic integrity
Developing flexible AI policies that preserve academic freedom
Using AI disclosure as a trust-building approach
Faculty development for AI-adapted teaching methods
The "Hatter Ready" initiative connecting experiential learning & academic integrity

Listen in to #EdUp

Do YOU want to accelerate YOUR professional development? Do YOU want to get exclusive early access to ad-free episodes, extended episodes, bonus episodes, original content, invites to special events, & more? Then BECOME A SUBSCRIBER TODAY - $19.99/month or $199.99/year (Save 17%)! Want to get YOUR organization to pay for YOUR subscription? Email EdUp@edupexperience.com

Thank YOU so much for tuning in. Join us on the next episode for YOUR time to EdUp!

Connect with YOUR EdUp Team - Elvin Freytes & Dr. Joe Sallustio
● Join YOUR EdUp community at The EdUp Experience!
We make education YOUR business!

Deep State Radio
Siliconsciousness: What the Trump Administration Gets Right about AI Policy

Deep State Radio

Play Episode Listen Later Apr 14, 2025 35:52


There's a lot to criticize about US AI policy, but what has the administration been getting right? Senior VP of Government Affairs for Americans for Responsible Innovation Doug Calidas joins David Rothkopf to break down the Trump administration's industrial and AI policies, the role of tariffs, and more. This material is distributed by TRG Advisory Services, LLC on behalf of the Embassy of the United Arab Emirates in the U.S. Additional information is available at the Department of Justice, Washington, DC.

The Lawfare Podcast
Lawfare Daily: Matt Perault on the Little Tech Agenda

The Lawfare Podcast

Play Episode Listen Later Feb 18, 2025 40:10


Matt Perault, Head of AI Policy at Andreessen Horowitz, joins Kevin Frazier, Contributing Editor at Lawfare and Adjunct Professor at Delaware Law, to define the Little Tech Agenda and explore how adoption of the Agenda may shape AI development across the country. The duo also discuss the current AI policy landscape.

We value your feedback! Help us improve by sharing your thoughts at lawfaremedia.org/survey. Your input ensures that we deliver what matters most to you. Thank you for your support—and, as always, for listening!

To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute. Support this show http://supporter.acast.com/lawfare.