CSIS’ Gregory C. Allen, Director of the Wadhwani Center for AI and Advanced Technologies, is joined by cohost H. Andrew Schwartz on a deep dive into the world of AI policy. Every two weeks, tune in for insightful discussions regarding AI policy, regulation, innovation, national security, and geopolitics. The AI Policy Podcast is by the Wadhwani Center for AI and Advanced Technologies at CSIS, a bipartisan think tank in Washington, D.C.
Center for Strategic and International Studies
In this episode, we discuss the U.S. AI Safety Institute's rebrand to the Center for AI Standards and Innovation (00:37), BIS Undersecretary Jeffrey Kessler's testimony on semiconductor export controls (10:36), and Meta's new AI superintelligence lab and accompanying $15 billion investment in Scale AI (22:26).
On June 9, 2025, the CSIS Wadhwani AI Center hosted Ryan Tseng, Co-Founder and President of Shield AI, a company building AI-powered software to enable autonomous capabilities for defense and national security. Mr. Tseng leads strategic partnerships with defense and policy leaders across the United States and internationally. Under his leadership, Shield AI secured major contracts with the U.S. Special Operations Command, Air Force, Marine Corps, and Navy, while expanding internationally with offices opening in Ukraine and the UAE. Watch the full event here.
In this episode, we discuss House Republicans' proposed moratorium on state and local AI laws (00:57), break down AI-related appropriations across the executive branch (18:54), and unpack the safety issues and safeguards of Anthropic's newest model, Claude Opus 4 (26:51). Correction: In this episode, a quote was incorrectly attributed directly to Rep. Laurel Lee (R-Fla.). The statement, "Should the provision be stripped from the Senate reconciliation bill, some Republicans are eyeing separate legislation," was reported by The Hill as a paraphrase of Rep. Lee's comments.
In this episode, we discuss Princeton researcher Kyle Chan's op-ed in the New York Times on China's industrial policy for AI and advanced technologies (0:35), what the Bureau of Industry and Security's new controls on Huawei's Ascend chips mean for China's AI ecosystem (10:09), and our biggest takeaways from President Trump's visit to the Middle East (19:07).
In this episode, we discuss the Trump administration's decision to rescind the AI Diffusion Framework (1:34), the message of top AI executives in their recent Senate testimony (20:03), what AI adoption could mean for the IRS (35:15), the U.S. Copyright Office's latest report on generative AI training (44:44), and what AI policy might look like in the new papacy (49:24).
In this episode, we discuss what the Trump administration's Fiscal Year 2026 budget request means for federal AI spending, what might happen to the AI Diffusion Framework before its May 15 implementation deadline, and what the Chinese Communist Party Politburo's Study Session on AI indicates about China's AI ambitions.
On May 1, 2025, the CSIS Wadhwani AI Center hosted Alexandr Wang, Founder and CEO of Scale AI, a company accelerating AI development by delivering expert-level data and technology solutions to leading AI labs, multinational enterprises, and governments. He shared his insights on key issues shaping the future of AI policy, such as U.S.-China AI competition, international AI governance, and the new administration's approach to AI innovation, regulation, and global standards. Alexandr founded Scale AI in 2016 as a 19-year-old MIT student with the vision of providing the critical data and infrastructure needed for complex AI projects. Under his leadership, Scale AI has grown to a nearly $14 billion valuation, serving hundreds of customers across industries ranging from finance to government agencies, and creating flexible, impactful AI work for hundreds of thousands of people worldwide. Watch the full event at the following link: https://www.csis.org/analysis/scale-ais-alexandr-wang-securing-us-ai-leadership
In this episode, we discuss OSTP Director Michael Kratsios's recent speech on US technology policy at the Endless Frontiers Retreat (0:19), the Trump administration's decision to control the Nvidia H20 chip (10:48), and what Huawei's announcement of the Ascend 920 chip means for U.S.-China AI competition (18:24).
In this episode of the AI Policy Podcast, Wadhwani AI Center Director Gregory C. Allen is joined by Andrew Freedman, Chief Strategic Officer at Fathom, an organization whose mission is to find, build, and scale the solutions needed to help society transition to a world with AI. They discuss the origins and purpose of Fathom, key initiatives shaping AI policy around the country such as California Senate Bill 813, and the new administration's approach to AI governance. They also unpack the concept of “Private AI Governance” and what it means for the future of the U.S. AI ecosystem. Andrew Freedman is the Chief Strategic Officer at Fathom, with over 15 years of expertise in emerging industries and regulatory frameworks. Previously, he was a Partner at Forbes Tate Partners, where he led the firm's coalition work in technology and emerging regulatory sectors. Andrew has advised governments in California, Canada, and Massachusetts, and has been a speaker at major conferences like Code Conference and Aspen Ideas Fest. Earlier in his career, Andrew served as Chief of Staff to Colorado's Lieutenant Governor, where he established the Office of Early Childhood and secured a $45 million Race to the Top Grant. He also managed the Colorado Commits to Kids campaign, raising $11 million in three months for education funding. Andrew holds a J.D. from Harvard Law School and a B.A. from Tufts University.
In this episode, we discuss what the Trump administration's tariffs mean for the US AI ecosystem (2:42), reporting that Nvidia's H20s will be exempt from export controls (8:58), the latest AI guidance from the White House Office of Management and Budget (OMB) (12:48), and the EU's AI Continent Action Plan (17:07).
In this episode, we are joined by Matt Sheehan, fellow at the Carnegie Endowment for International Peace. We discuss the evolution of China's AI policymaking process over the past decade (6:45), the key institutions shaping Chinese AI policy today (44:30), and the changing nature of China's attitude to AI safety (50:55).
In this episode, we discuss AI companies' responses to the White House AI Action Plan Request For Information (RFI) related to key areas like export controls and AI governance (00:51), the release of the Joint California Policy Working Group on AI Frontier Models draft report (24:45), and how AI might be affecting the computer programming job market (40:10).
In this episode of the AI Policy Podcast, Wadhwani AI Center Director Gregory C. Allen is joined by Dean Ball, Research Fellow in the Artificial Intelligence & Progress Project at George Mason University's Mercatus Center. They will discuss how state and local governments are approaching AI regulation, what factors are shaping these efforts, where state and local efforts intersect, and how a fractured approach to governance might affect the AI policy landscape. In addition to his role at the George Mason University's Mercatus Center, Dean Ball is the author of the Substack Hyperdimensional. Previously, he was Senior Program Manager for the Hoover Institution's State and Local Governance Initiative. Prior to his position at the Hoover Institution, he served as Executive Director of the Calvin Coolidge Presidential Foundation, based in Plymouth, Vermont and Washington, D.C. He also worked as the Deputy Director of State and Local Policy at the Manhattan Institute for Policy Research from 2014–2018.
In this episode, we discuss the Wadhwani AI Center's latest publication on the implications of DeepSeek for the future of export controls (0:40), Chinese company Manus AI (9:05), what Secretary Hegseth's memo means for the DOD AI ecosystem (15:27), and xAI's acquisition of 1 million square feet for its new data center in Memphis (21:28).
In this special episode, we are joined by Georgia Adamson, Research Associate at the CSIS Wadhwani AI Center, Lennart Heim, Associate Information Scientist at RAND, and Sam Winter-Levy, Fellow for Technology and International Affairs at the Carnegie Endowment for International Peace. We outline the biggest takeaways from our recent report about the UAE's role in the global AI race (2:34), the details of the Microsoft-G42 deal (17:21), our assessment of the UAE-China relationship when it comes to AI technology (25:45), and the future of export controls (44:07).
In our first video episode, we discuss xAI's release of the Grok 3 family of models, the Department of Government Efficiency's (DOGE) impact on the federal AI workforce, Xi Jinping's meeting with major Chinese AI company executives, and what the Evo-2 model could mean for the future of biology.
In this special episode, Greg breaks down his biggest takeaways from the Paris AI Action Summit. He discusses France's goals for the summit (5:05), Vice President JD Vance's speech about the US vision for AI (12:16), the EU's approach to the convening (17:13), why the US and UK did not sign the summit declaration (20:50), and the rebranded UK AI Security Institute (23:20).
In this crossover episode with Truth of the Matter, we discuss the origins of Chinese AI company DeepSeek (0:55), the release of its DeepSeek R1 model and what it means for the future of U.S.-China AI competition (3:05), why it prompted such a massive reaction by U.S. policymakers and the U.S. stock market (14:04), and the Trump administration's response (24:03).
In this episode, we break down President Trump's repeal of the Biden administration's Executive Order (EO) on AI (1:00), the release of the America First Trade Policy memorandum (9:52), and the Trump administration's own AI EO (15:02). We are then joined by Lennart Heim, Senior Information Scientist at the RAND Corporation, to discuss the Stargate announcement (20:40), how AI company CEOs are talking about AGI (38:36), and why the latest models from DeepSeek matter (52:02).
In this special episode of the AI Policy Podcast, Andrew, Greg, and CSIS Energy Security and Climate Change Program Director Joseph Majkut discuss the Biden administration's Executive Order on Advancing United States Leadership in Artificial Intelligence Infrastructure. They consider the motivation for this measure and its primary goals (1:07), its reception among AI and hyperscaler companies (12:18), and how the Trump administration might approach AI and energy (17:50).
In this pressing episode, we break down the release of the Biden administration's Framework for Artificial Intelligence Diffusion. We discuss the rationale for this latest control (0:52), and its reception among major AI and semiconductor firms (8:14), U.S. allies (17:15), and the incoming administration (19:48).
In this episode, we discuss the December 2nd semiconductor export control update (0:45), the Trump administration's appointment of David Sacks as the White House AI czar (5:35), the OpenAI and Anduril partnership and its implication for national security (9:31), and the latest from China's autonomous fighter aircraft program (16:39).
On this special episode, Wadhwani AI Center director Gregory C. Allen is joined by Dr. Ben Buchanan, the White House Special Advisor for AI. They discuss the Biden administration's biggest AI policy achievements, including the AI Bill of Rights, the AI Safety Institute, the Hiroshima AI process, and the National Security Memorandum on AI.
On this special episode, New York Times reporter Ana Swanson is joined by Neil Chilson, Head of AI Policy at The Abundance Institute, Kara Frederick, Director, Tech Policy Center at The Heritage Foundation, and Brandon Pugh, Director and Senior Fellow, Cybersecurity and Emerging Threats at R Street Institute. They discuss what we can expect from the incoming Trump administration when it comes to AI policy.
In this episode, we are joined by Alondra Nelson, the Harold F. Linder Chair in the School of Social Science at the Institute for Advanced Study, and the former acting director of the White House Office of Science and Technology Policy (OSTP). We discuss her background in AI policy (1:30), the Blueprint for the AI Bill of Rights (9:43), its relationship to the White House Executive Order on AI (23:47), the Senate AI Insight Forums (29:55), the European approach to AI governance (29:55), state-level AI regulation (41:20), and how the incoming administration should approach AI policy (47:04).
In this episode, we discuss recent reporting that so-called "scaling laws" are slowing and the potential implications for the policy community (0:37), the latest models coming out of the Chinese AI ecosystem (12:37), the U.S.-China Economic and Security Review Commission's recommendation for a Manhattan Project for AI (19:02), and the biggest takeaways from the first draft of the European Union's General Purpose AI Code of Practice (25:46).
https://www.csis.org/analysis/eu-code-practice-general-purpose-ai-key-takeaways-first-draft
https://www.csis.org/analysis/ai-safety-institute-international-network-next-steps-and-recommendations
https://www.csis.org/analysis/understanding-military-ai-ecosystem-ukraine
https://www.csis.org/events/international-ai-policy-outlook-2025
Correction: hyperscalers Meta, Microsoft, Google, and Amazon are expected to invest $300 billion in AI and AI infrastructure in 2025.
In this episode we are joined by Vilas Dhar, President and Trustee of the Patrick J. McGovern Foundation, a 21st century $1.5 billion philanthropy advancing AI and data solutions to create a thriving, equitable, and sustainable future for all. We discuss his background (1:26), the foundation and its approach to AI philanthropy (4:11), building public sector capacity in AI (13:00), the definition of AI governance (20:07), ongoing multilateral governance efforts (23:01), how liberal and authoritarian norms affect AI (28:35), and what the future of AI might look like (30:30).
In this episode, we discuss what AI policy might look like under the second Trump administration. We dive into the first Trump administration's achievements (0:50), how the Trump campaign handled AI policy (3:37), and where the new administration might fall on key issue areas like national security (5:59), safety (7:37), export controls (11:27), open-source (14:04), and more.
In this special episode, we discuss the National Security Memorandum on AI that the Biden administration released on October 24th, its primary audience and main objectives, and what the upcoming U.S. election could mean for its implementation.
On this special episode, Wadhwani AI Center director Gregory C. Allen is joined by Schuyler Moore, the first-ever Chief Technology Officer of U.S. Central Command (CENTCOM), Justin Fanelli, the Chief Technology Officer of the Department of the Navy, and Dr. Alex Miller, the Chief Technology Officer for the Chief of Staff of the Army for a discussion on the warfighter's adoption of emerging technologies. They discuss how U.S. Central Command (CENTCOM), in conjunction with the Army and Navy, has been driving the use of AI and other advanced technologies through a series of exercises such as Desert Sentry, Digital Falcon Oasis, Desert Guardian, and Project Convergence.
In this episode, we are joined by former MEP Dragoș Tudorache, co-rapporteur of the EU AI Act and Chair of the Special Committee on AI in the Digital Age. We discuss where we are in the EU AI Act roadmap (2:37), how to balance innovation and regulation (11:20), the future of the EU AI Office (25:00), and the increasing energy infrastructure demands of AI (42:30). Related reading: The European Approach to Regulating Artificial Intelligence.
In this episode, we discuss Nvidia's earnings report and its implications for the AI industry (0:53), the impact of China's Gallium and Germanium export controls on the global semiconductor competition (9:50), and why OpenAI is demonstrating its capabilities for the national security community (18:00).
In this episode, we are joined by Jeff Alstott, expert at the National Science Foundation (NSF) and director of the Center for Technology and Security Policy at RAND, to discuss past technology forecasting across the national security community (20:45) and a new NSF initiative called Assessing and Predicting Technology Outcomes (APTO) (31:30). https://new.nsf.gov/tip/updates/nsf-invests-nearly-52m-align-science-technology
In this episode, we discuss the CSIS Wadhwani Center for AI and Advanced Technologies' latest report on the DOD's Collaborative Combat Aircraft (CCA) program (0:58), what recent news about AI chip smuggling means for U.S. export controls (13:40), how California's SB 1047 might affect AI regulation (23:18), and our biggest takeaways from the EU AI Act going into force (33:52). Related reading: Collaborative Combat Aircraft Program: Good News, Bad News, and Unanswered Questions.
In this episode, we are joined by Andrei Iancu, former Undersecretary of Commerce for Intellectual Property and former Director of the US Patent and Trademark Office (USPTO), to discuss whether AI-generated works can be copyrighted (15:52), what the latest USPTO guidance means for the patent subject matter eligibility of AI systems (22:31), who can claim inventorship for AI-facilitated inventions (36:00), and the use of AI by patent and trademark applicants and the USPTO (53:43).
On this special episode, the CSIS Wadhwani Center for AI and Advanced Technologies is pleased to host Elizabeth Kelly, Director of the United States Artificial Intelligence Safety Institute at the National Institute of Standards and Technology (NIST) at the U.S. Department of Commerce. The U.S. AI Safety Institute (AISI) was announced by Vice President Kamala Harris at the UK AI Safety Summit in November 2023. The institute was established to advance the science, practice, and adoption of AI safety in the face of risks including those to national security, public safety, and individual rights. Director Kelly will discuss the U.S. AISI's recently released Strategic Vision, its activities under President Biden's AI Executive Order, and its approach to the AISI global network announced at the AI Seoul Summit.
In this episode, we discuss what AI policy might look like after the 2024 U.S. presidential election. We dive into the past (1:00), present (9:50), and future (22:50) of both the Trump and Harris campaigns' AI policy positions and where they fall on key issue areas like safety (23:01), open-source (25:17), energy infrastructure (33:27), and more.
In this episode, DOD Chief Digital and AI Officer Dr. Radha Plumb joins Greg Allen to discuss the Chief Digital and Artificial Intelligence Office (CDAO)'s current role at the department and a preview of its upcoming projects. The CDAO was established in 2022 to create, implement, and steer the DOD's digital transformation and adoption of AI. Under Dr. Plumb's leadership, the CDAO recently announced a new initiative, Open Data and Applications Government-owned Interoperable Repositories (DAGIR), which will open a multi-vendor ecosystem connecting DOD end users with innovative software solutions. Dr. Plumb discusses the role of Open DAGIR and a series of other transformative projects currently underway at the CDAO.
In this episode, we discuss the state of autonomous weapons systems adoption in Ukraine (00:55), our takeaways from the Supreme Court's decision to overturn the Chevron Doctrine and the implications for AI regulation (17:35), the delayed deployment of Apple Intelligence in the EU (30:55), and a breakdown of Nvidia's deal to sell its technology to data centers in the Middle East (41:30).
In this episode, we discuss our biggest takeaways from the AI agenda at the G7 Leaders' Summit (0:41), the details of the Apple-OpenAI partnership announcement (8:05), and why Saudi Aramco's investment in Zhipu AI represents a groundbreaking moment in China-Saudi Arabia relations (16:25).
In this episode, we break down the policy outcomes from the AI Seoul Summit (0:22), the latest news from the U.S.-China AI safety talks (7:59), and why the Zhousidun (Zeus's Shield) dataset matters (16:30).
In this episode, we discuss our biggest takeaways from the bipartisan AI policy roadmap led by Senate Majority Leader Chuck Schumer (1:10), what to expect from the U.S.-China AI safety dialogue (9:55), recent updates to the DOD's Replicator Initiative (19:25), and Microsoft's new Intelligence Community AI Tool (29:31).