An impasse is coming to a head. The resolution is unknown. The Department of Defense has made clear that Anthropic has until 5:01 p.m. ET today, February 27th, 2026, to permit its use of Claude for any lawful purpose. CEO Dario Amodei doubled down on his insistence that Anthropic tools should not be used for mass domestic surveillance or the operation of lethal autonomous weapons. The Pentagon's spokesman agrees that such usage would indeed be unlawful, and yet the two parties cannot come to terms. If the DOD is to be taken at its word, the likely result is that Anthropic will be labeled a supply chain risk, an unprecedented decision with huge business ramifications. Alan Rozenshtein, Associate Professor at Minnesota Law and Research Director at Lawfare, joins Kevin Frazier, Senior Fellow at the Abundance Institute and a Senior Editor at Lawfare, to break this all down. You can also read more on this weighty issue via Alan's two recent Lawfare pieces here and here. Hosted on Acast. See acast.com/privacy for more information.
Alan Rozenshtein, research director at Lawfare, spoke with Cullen O'Keefe, research director at the Institute for Law & AI, and Kevin Frazier, AI Innovation and Law Fellow at the University of Texas at Austin School of Law and senior editor at Lawfare, about their paper, "Automated Compliance and the Regulation of AI" (and associated Lawfare article), which argues that AI systems can automate many regulatory compliance tasks, loosening the trade-off between safety and innovation in AI policy. The conversation covered the disproportionate burden of compliance costs on startups versus large firms; the limitations of compute thresholds as a proxy for targeting AI regulation; how AI can automate tasks like transparency reporting, model evaluations, and incident disclosure; the Goodhart's Law objection to automated compliance; the paper's proposal for "automatability triggers" that condition regulation on the availability of cheap compliance tools; analogies to sunrise clauses in other areas of law; incentive problems in developing compliance-automating AI; the speculative future of automated compliance meeting automated governance; and how co-authoring the paper shifted each author's views on the AI regulation debate.
Alan Rozenshtein, research director at Lawfare, and Kevin Frazier, senior editor at Lawfare, speak with Amanda Askell, head of personality alignment at Anthropic, about Claude's Constitution, a 20,000-word document that describes the values, character, and ethical framework of Anthropic's flagship AI model and plays a direct role in its training. The conversation covers how the constitution is used during supervised learning and reinforcement learning to shape Claude's behavior; analogies to constitutional law, including fidelity to text, the potential for a body of "case law," and the principal hierarchy of Anthropic, operators, and users; the decision to ground the constitution in virtue ethics and practical judgment rather than rigid rules; the document's treatment of Claude's potential moral patienthood and the question of AI personhood; whether the constitution's values are too Western and culturally specific; the tension between Anthropic's commercial incentives and its stated mission; and whether the constitutional approach can generalize to specialized domains like cybersecurity and military applications. Find Scaling Laws on the Lawfare website, and subscribe to never miss an episode. To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute. Support this show http://supporter.acast.com/lawfare.
Incredible catch-up with one of my favorite people in the world, BUTTA! Kevin Frazier on his legendary crossover from sports to entertainment, why the Bad Bunny halftime was misunderstood, fatherhood, The FX Sports Show, his Oscars forecast, and so much more. Learn more about your ad choices. Visit podcastchoices.com/adchoices
Kevin Frazier sits down with Andrew Freedman of Fathom and Gillian Hadfield, AI governance scholar, at the Ashby Workshops to examine innovative models for AI regulation. They discuss:
- Why traditional regulation struggles with rapid AI innovation.
- The concept of Regulatory Markets and how it aligns with the unique governance challenges posed by AI.
- Critiques of hybrid governance: concerns about a “race to the bottom,” the limits of soft law on catastrophic risks, and how liability frameworks interact with governance.
- What success looks like for the Ashby Workshops and the future of adaptive AI policy design.
Whether you're a policy wonk, technologist, or governance skeptic, this episode bridges ideas and practice in a time of rapid technological change.
Kevin Frazier analyzes how AI can fail like Western Union, warning that excessive concentration and lack of innovation could doom today's artificial intelligence giants just as the telegraph company declined. 1955
Kevin Frazier warns of regulatory capture in AI governance, cautioning that dominant tech companies may co-opt oversight mechanisms, stifling competition and shaping rules to entrench their market dominance. 1931
SHOW SCHEDULE 1-28-2026
1900 PRINCETON CANE RUSH
1. General Blaine Holt, USAF (Ret.), outlines the mission to rescue Iran from the brutes, detailing strategic options for liberating the Iranian people from the oppressive regime ruling in Tehran.
2. Michael Bernstam of the Hoover Institution explains how Russia prospers with the price of gold, analyzing Moscow's economic resilience as precious metals revenues offset sanctions and sustain Putin's war machine.
3. Bob Zimmerman of Behind the Black explains Blue Origin and SpaceX's next missions, previewing upcoming launches and milestones as both companies push forward with ambitious spaceflight development programs.
4. Bob Zimmerman explains Roscosmos failures without credit, examining how Russia's space agency stumbles through technical setbacks while refusing accountability, diminishing Moscow's once-proud position in space exploration.
5. Victoria Coates and Gordon Chang identify the Baltic states as most vulnerable to Russian annexation, warning that Estonia, Latvia, and Lithuania face persistent threats from Putin's expansionist ambitions.
6. Ann Stevenson-Yang and Gordon Chang comment on the low spirits and isolation of mainland Chinese singles, examining the demographic and social crisis as young people struggle with loneliness and economic pressures.
7. Charles Burton and Gordon Chang observe the contest in Arctic waters, analyzing competing claims and military positioning as Russia, China, and Western nations vie for polar strategic advantage.
8. Charles Burton and Gordon Chang comment on Prime Minister Mark Carney and Canada's future with the United States and PRC, assessing Ottawa's delicate balancing act between its powerful neighbors.
9. Tevi Troy remarks on the new book McNamara at War, exploring Robert McNamara's tenure as Defense Secretary and his controversial management of the Vietnam War under two presidents.
10. Tevi Troy observes McNamara dealing with the rude President Lyndon Johnson, examining the difficult working relationship between the cerebral defense secretary and the domineering, often abusive commander-in-chief.
11. Kevin Frazier analyzes how AI can fail like Western Union, warning that excessive concentration and lack of innovation could doom today's artificial intelligence giants just as the telegraph company declined.
12. Kevin Frazier warns of regulatory capture in AI governance, cautioning that dominant tech companies may co-opt oversight mechanisms, stifling competition and shaping rules to entrench their market dominance.
13. Simon Constable reports from temperate France with commodities analysis, noting copper and gold trading dear as industrial demand and safe-haven buying drive precious and base metals prices higher.
14. Simon Constable faults Prime Minister Starmer's lack of leadership, criticizing the British leader's failure to articulate vision or direction as the United Kingdom drifts through economic and political uncertainty.
15. Astronomer Paul Kalas explains planetary formation in the Fomalhaut system twenty-five light years distant, revealing how observations of this nearby star illuminate the processes that create worlds around young suns.
16. David Livingston explains his twenty-five years hosting The Space Show, reflecting on a quarter century of broadcasting interviews with astronauts, engineers, and visionaries shaping humanity's journey beyond Earth.
Kevin Frazier and Alan Rozenshtein explore how AI is reshaping the legal profession, from “secret cyborg” lawyers using tools like Harvey to the uncertain future of junior associates and access to legal services. They discuss maximalist legal services, AI-written “complete contingent contracts,” and where AI should fall between strict formalism and legal realism, including Claude's virtue-ethics-inspired constitution. The conversation then turns to AI's role in legislation and governance, including outcome-oriented law, the “Unitary Artificial Executive,” and new rights like the Right to Compute and the Right to Share personal data. They close by examining limits on government surveillance and how future debates over AI sentience and welfare could spark social conflict.
LINKS:
- Article on automated AI compliance
- GDPVal dataset lawyer tasks viewer
- Polis online deliberation platform
Sponsors:
- Blitzy: Blitzy is the autonomous code generation platform that ingests millions of lines of code to accelerate enterprise software development by up to 5x with premium, spec-driven output. Schedule a strategy session with their AI solutions consultants at https://blitzy.com
- Framer: Framer is an enterprise-grade website builder that lets business teams design, launch, and optimize their .com with AI-powered wireframing, real-time collaboration, and built-in analytics. Start building for free and get 30% off a Framer Pro annual plan at https://framer.com/cognitive
- Serval: Serval uses AI-powered automations to cut IT help desk tickets by more than 50%, freeing your team from repetitive tasks like password resets and onboarding. Book your free pilot and guarantee 50% help desk automation by week four at https://serval.com/cognitive
- Tasklet: Tasklet is an AI agent that automates your work 24/7; just describe what you want in plain English and it gets the job done. Try it for free and use code COGREV for 50% off your first month at https://tasklet.ai
CHAPTERS:
(00:00) About the Episode
(03:35) Surveying AI-law landscape
(14:56) Legal deserts and demand (Part 1)
(15:02) Sponsors: Blitzy | Framer
(18:06) Legal deserts and demand (Part 2)
(28:25) Sponsors: Serval | Tasklet
(31:14) Legal deserts and demand (Part 3)
(31:14) AI and legal careers
(45:10) AI counsel and self-representation
(59:50) Maximalist law and outcomes
(01:12:30) Rules, principles, and Claude
(01:25:26) New rights and restraints
(01:38:26) Outro
PRODUCED BY: https://aipodcast.ing
Most folks agree that AI is going to drastically change our economy, the nature of work, and the labor market. What's unclear is when those changes will take place and how Americans can best navigate the transition. Brent Orrell, senior fellow at the American Enterprise Institute, joins Kevin Frazier, a Senior Fellow at the Abundance Institute, the AI Innovation and Law Fellow at the University of Texas School of Law, and a Senior Editor at Lawfare, to help tackle these and other weighty questions. Orrell has been studying the future of work since before it was cool. His two cents are very much worth a nickel in this important conversation. Send us your feedback (scalinglaws@lawfaremedia.org) and leave us a review!
Jakub Kraus, a Tarbell Fellow at Lawfare, speaks with Alan Rozenshtein, Associate Professor of Law at the University of Minnesota and Research Director at Lawfare, and Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law, a Senior Fellow at the Abundance Institute, and a Senior Editor at Lawfare, about Anthropic's newly released "constitution" for its AI model, Claude. The conversation covers the lengthy document's principles and underlying philosophical views, what these reveal about Anthropic's approach to AI development, how market forces are shaping the AI industry, and the weighty question of whether an AI model might ever be a conscious or morally relevant being. Mentioned in this episode: Kevin Frazier, "Interpreting Claude's Constitution," Lawfare; Alan Rozenshtein, "The Moral Education of an Alien Mind," Lawfare.
States are racing to regulate AI, creating a patchwork that Kevin Frazier warns could stifle innovation. Now, a sweeping executive order asserts federal preemption and launches a litigation task force to challenge state laws. Frazier joins us to unpack the constitutional stakes and whether Congress will finally step in.See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
Shlomo Klapper, founder of Learned Hand, joins Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law, a Senior Fellow at the Abundance Institute, and a Senior Editor at Lawfare, to discuss the rise of judicial AI, the challenges of scaling technology inside courts, and the implications for legitimacy, due process, and access to justice.
ECONOMIC LIBERTY AND THE LABOR MARKET IN THE AGE OF AI. Colleague Kevin Frazier. Kevin Frazier explores how AI is reshaping the economy, noting that liberal arts graduates may be better positioned than STEM majors to handle new information synthesis. He advises legislators to focus on job creation and a fluid labor market rather than trying to protect obsolete professions through regulation. NUMBER 11. October 1957
EDUCATION REFORM AND THE AVOIDANCE OF A FEDERAL AI DEPARTMENT. Colleague Kevin Frazier. Frazier argues for updating education, starting with teacher training in elementary schools and vocational partnerships in high schools, to prepare students for an AI future. He advises against creating a federal Department of AI, suggesting society should adapt to it as advanced computing rather than a unique threat. NUMBER 12. 1921 FRANCE
SHOW SCHEDULE 1-17-25
1895 PARIS
LAS VEGAS TUNNELS AND THE RELOCATION OF THE ATHLETICS. Colleague Jeff Bliss. Jeff Bliss reports on the expansion of The Boring Company's tunnels in Las Vegas, which use Tesla cars to alleviate traffic congestion. He also discusses the Athletics baseball team's temporary move to Sacramento and the legal complications regarding their team name as they prepare for a permanent move to Las Vegas in 2028. NUMBER 1
BIG SUR REOPENS AND COPPER THEFT PLAGUES CALIFORNIA. Colleague Jeff Bliss. Highway 1 in Big Sur has reopened after landslide repairs featuring new concrete canopies to protect the road. Bliss also details how copper thieves have crippled infrastructure in Sacramento and Los Angeles, contributing to broader political dissatisfaction with Governor Gavin Newsom regarding crime and the state's management. NUMBER 2
FEDERAL IMMUNITY AND THE ICE SHOOTING IN MINNEAPOLIS. Colleague Professor Richard Epstein. Professor Richard Epstein analyzes the legal battle over whether ICE agents have immunity from state prosecution following a fatal shooting in Minneapolis. He explains the complexities of absolute versus qualified immunity, arguing that the agents' aggressive conduct might weaken their defense against state charges in this specific instance. NUMBER 3
SUPREME COURT LIKELY TO STRIKE DOWN TRUMP TARIFFS. Colleague Professor Richard Epstein. Epstein predicts the Supreme Court will invalidate the Trump administration's emergency tariffs, arguing there is no statutory basis for the trade imbalances cited as justification. He anticipates a fractured decision where a centrist bloc of justices joins liberals to rule that the executive branch exceeded its authority. NUMBER 4
MEXICO'S ALIGNMENT WITH DICTATORS AND INFRASTRUCTURE FAILURES. Colleague Mary Anastasia O'Grady. Mary Anastasia O'Grady discusses Mexican President Claudia Sheinbaum's ideological support for the Cuban and Venezuelan regimes, including increased oil shipments to Havana. She also details a recent train derailment on Mexico's interoceanic line, attributing the failure to secrecy and no-bid contracts managed by the military. NUMBER 5
ITALY STABILIZES PENSION COSTS AND CELEBRATES PASTA TARIFF CUTS. Colleague Lorenzo Fiori. Lorenzo Fiori reports that despite high pension costs, Italy's economic reforms under Prime Minister Meloni have stabilized the system by increasing employment. Fiori notes that Italy's deficit and inflation have dropped significantly, and he celebrates the US decision to slash tariffs on Italian pasta imports. NUMBER 6
SPACE STATION RETURNS, NUCLEAR MOON PLANS, AND BOEING STRUGGLES. Colleague Bob Zimmerman. Bob Zimmerman discusses the early return of an ISS crew due to a medical issue and expresses skepticism about NASA's plan for a lunar nuclear reactor by 2030. He also highlights that the Space Force is shifting launches from ULA to SpaceX due to reliability concerns. NUMBER 7
GLOBAL SPACE FAILURES AND CHINA'S REUSABLE CRAFT CLAIMS. Colleague Bob Zimmerman. Zimmerman analyzes a failed Indian rocket launch that lost multiple payloads, though a Spanish prototype survived. He also critiques the European Space Agency for delays in debris removal missions and casts doubt on China's claims regarding a "new" reusable spacecraft, suggesting it relies on older suborbital technology. NUMBER 8
DATA CENTERS STRAIN THE ELECTRICAL GRID. Colleague Henry Sokolski. Henry Sokolski discusses the surging demand for electricity driven by AI data centers and the White House's proposal to auction power access. He argues that tech companies should finance their own off-grid generation, such as nuclear or gas, rather than forcing ratepayers to subsidize new transmission infrastructure. NUMBER 9
ELON MUSK AND THE GOLDEN DOME DEFENSE PROPOSAL. Colleague Henry Sokolski. Sokolski evaluates Elon Musk's proposal to create a "Golden Dome" missile defense system for the US. While the concept involves space-based sensors, Sokolski notes concerns regarding monopoly power, the reliance on a single contractor for national security, and the undefined costs of ground-based interceptors. NUMBER 10
ECONOMIC LIBERTY AND THE LABOR MARKET IN THE AGE OF AI. Colleague Kevin Frazier. Kevin Frazier explores how AI is reshaping the economy, noting that liberal arts graduates may be better positioned than STEM majors to handle new information synthesis. He advises legislators to focus on job creation and a fluid labor market rather than trying to protect obsolete professions through regulation. NUMBER 11
EDUCATION REFORM AND THE AVOIDANCE OF A FEDERAL AI DEPARTMENT. Colleague Kevin Frazier. Frazier argues for updating education, starting with teacher training in elementary schools and vocational partnerships in high schools, to prepare students for an AI future. He advises against creating a federal Department of AI, suggesting society should adapt to it as advanced computing rather than a unique threat. NUMBER 12
SOVIET UNION'S SECRET 1972 LUNAR BASE AMBITIONS AND THE N1 ROCKET FAILURE. Colleague Anatoli Zak, Publisher of RussianSpaceWeb.com. Anatoli Zak explains that in 1972, the Soviet Union pursued the L3M project to establish a permanent lunar base, refusing to concede the moon race immediately. However, repeated failures of the N1 rocket and the financial strain of competing with the US Space Shuttle eventually forced the program's cancellation. NUMBER 13
ISS LAUNCHPAD ACCIDENT AND RUSSIA'S NUCLEAR ROLE IN CHINESE MOON BASE. Colleague Anatoli Zak, Publisher of RussianSpaceWeb.com. A launchpad collapse has halted Russian cargo missions to the ISS, endangering the propellant supply required for critical orbit maintenance. Zak also details Russia's attempt to join China's lunar ambitions, with the Kurchatov Institute developing a nuclear reactor to provide electricity for a future Chinese moon base. NUMBER 14
PERU NAMED NON-NATO PARTNER AS US COUNTERS CHINESE INFLUENCE. Colleague Oscar Sumar, Deputy Vice Chancellor at Universidad Científica del Sur. Oscar Sumar discusses Peru's designation as a US non-NATO partner, a move designed to counter Chinese geopolitical expansion through infrastructure like the Chancay port. Sumar warns that while cultural ties are strong, the Chinese Communist Party poses a threat to Peru's democratic stability and political transparency. NUMBER 15
ECONOMIC SLOWDOWN INDICATORS AND SECRECY AT THE WHITE HOUSE. Colleague Jim McTague, Former Washington Editor of Barron's. Jim McTague observes unusually light traffic and retail activity in Washington, D.C. and Lancaster, signaling a potential economic slowdown. He notes blocked views of White House construction and predicts a recession driven by rising state taxes and the depletion of pandemic-era stimulus funds for local governments. NUMBER 16
PREVIEW FOR LATER: REIMAGINING AI REGULATION BEYOND THE SKYNET MYTH. Colleague Kevin Frazier, University of Texas Law School. Frazier argues against regulating Artificial Intelligence through a fearful "Skynet mentality," suggesting it is better viewed simply as advanced computing known since 1956. He recommends treating AI not as a bespoke technology but as part of a broader portfolio of technological changes, including quantum computing and robotics. JANUARY 1931
Connecticut State Senator James Maroney and Neil Chilson, Head of AI Policy at the Abundance Institute, join Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, and Alan Rozenshtein, Associate Professor at Minnesota Law and Research Director at Lawfare, for a look back at a wild year in AI policy. Neil provides his expert analysis of all that did (and did not) happen at the federal level. Senator Maroney then examines what transpired across the states. The four then offer their predictions for what seems likely to be an even busier 2026.
Ziad Reslan, a member of OpenAI's Product Policy Staff and a Senior Fellow with the Schmidt Program on Artificial Intelligence, Emerging Technologies, and National Power at Yale University, joins Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to talk about iterative deployment, the lab's approach to testing and deploying its models. It's a complex and, at times, controversial approach. Ziad provides the rationale behind iterative deployment and tackles some questions about whether the strategy has always worked as intended.
Today's Lawfare Daily is Lawfare's annual "Ask Us Anything" mailbag episode, where Lawfare contributors answered listener-submitted questions. Scott R. Anderson, Natalie Orpett, Benjamin Wittes, Kevin Frazier, Eric Columbus, Loren Voss, Molly Roberts, Jakub Kraus, Anna Bower, and Roger Parloff address questions on everything from presidential immunity to AI regulation to the domestic deployment of the military. Thank you for your questions. And as always, thank you for listening.
From January 2, 2025: You called in with your questions, and Lawfare contributors have answers! Benjamin Wittes, Kevin Frazier, Quinta Jurecic, Eugenia Lostri, Alan Rozenshtein, Scott R. Anderson, Natalie Orpett, Amelia Wilson, Anna Bower, and Roger Parloff addressed questions on everything from presidential pardons to the risks of AI to the domestic deployment of the military. Thank you for your questions. And as always, thank you for listening.
Every year, Lawfare publishes a retrospective of the year that passed. Today, we're pleased to bring you an audio debrief of that article, The Year That Was: 2025, which you can read in full on our website starting December 31. Lawfare is focused on producing timely, rigorous, and non-partisan analysis of "hard national security choices." And this year, that work was, to use an expression as tired as we are, like drinking from a firehose. We did our best to keep up. We published more than 1,000 articles, podcasts, videos, research papers, and primary source documents. We did livestream round-ups and rapid-response videos. We produced five different podcasts and an investigative video series. We built data visualizations and trackers to make sense of complicated unfolding events. You can find all that and more for free on our website, lawfaremedia.org. It's impossible to capture everything that happened in 2025 in the world of national security. But here's what stood out to the Lawfare team, and what they have to say about it. In this episode, you'll hear from Executive Editor Natalie Orpett on Lawfare's work in 2025 and from Editor in Chief Benjamin Wittes on The Situation. You'll hear from Senior Editors Anna Bower on DOGE, Roger Parloff on the Alien Enemies Act, Molly Roberts on politicization of the Justice Department, Eric Columbus on impoundments, Scott R. Anderson on war powers, and Kevin Frazier on AI and the states. You'll hear from Public Interest Fellows Loren Voss on domestic deployments of the military, and Ariane Tabatabai on foreign policy. You'll hear from our Managing Editor, Tyler McBrien, on our narrative podcast series, Escalation. You'll hear from Associate Editors Katherine Pompilio on the Jan. 6 pardons and Olivia Manes on rolling back internal checks at the Justice Department.
You'll hear from our Fellow Jakub Kraus on AI, and you'll hear from Contributing Editor Renée DiResta on election integrity. And that's just a sampling of Lawfare's work. It's The Year That Was: 2025. We'll see you next year.
SHOW 12-23-25. THE SHOW BEGINS WITH DOUBTS OF THE EU...
1831 BRUSSELS
EU STRUGGLES WITH RUSSIAN ASSETS AND AID. Colleague Judy Dempsey. Judy Dempsey discusses the EU's difficulty in utilizing frozen Russian assets and the "defeat" for Chancellor Merz regarding the funding mechanism for Ukraine. NUMBER 1
THE RISE OF THE AFD IN GERMANY. Colleague Judy Dempsey. Judy Dempsey continues, focusing on the rise of the AfD party in Germany and its connections to elements of the US Republican party. NUMBER 2
STALEMATES IN GAZA AND LEBANON. Colleague Jonathan Schanzer. Jonathan Schanzer discusses the stalemate regarding the last hostage in Gaza, the fragmented control of the territory, and threats in Lebanon and Syria. NUMBER 3
EU REGULATION VS. US GROWTH. Colleague Michael Toth. Michael Toth critiques the European Union's "regulatory imperialism" and contrasts it with the economic growth of the US. NUMBER 4
STATE DEPARTMENT RECALLS AND STRATEGY. Colleague Mary Kissel. Mary Kissel discusses the recall of career ambassadors by the Trump administration and challenges in Panama and Greenland. NUMBER 5
AUSTRALIA'S DEFENSE AND CHINA. Colleague Grant Newsham. Grant Newsham warns about Australia's lack of defense capabilities and the erosion of its influence in the Pacific islands due to Chinese political warfare. NUMBER 6
THE BORING BENEFITS OF AI. Colleague Kevin Frazier. Kevin Frazier advocates for the "boring use cases" of AI, such as in healthcare and traffic management, to save costs and improve efficiency. NUMBER 7
REGULATING ARTIFICIAL INTELLIGENCE. Colleague Kevin Frazier. Kevin Frazier continues, warning against a "waterfall of regulation" by states and advocating for "regulatory sandboxes" to allow experimentation. NUMBER 8
US EXPANSIONISM AND DIPLOMATIC RIFTS. Colleague Gregory Copley. Gregory Copley analyzes US foreign policy moves regarding Greenland, Panama, and Venezuela, describing them as a return to "might is right" expansionism. NUMBER 9
THE MONROE DOCTRINE AND NAVAL POWER. Colleague Gregory Copley. Gregory Copley continues, debating whether the US is a naval or continental power in the context of enforcing the Monroe Doctrine and discussing a proposal for new battleships. NUMBER 10
THE DECLINE OF LITERACY AND CONTEXT. Colleague Gregory Copley. Gregory Copley continues, discussing the decline of literacy and context since the mid-20th century, comparing modern society to the Eloi and Morlocks of H.G. Wells. NUMBER 11
KING CHARLES III AND UK POLITICAL TURMOIL. Colleague Gregory Copley. Gregory Copley continues, analyzing the challenges King Charles III faces under the Keir Starmer government, which Copley compares to the era of Oliver Cromwell. NUMBER 12
THE LEGEND OF THE HESSIANS. Colleague Professor Richard Bell. Professor Richard Bell discusses the American fear of Hessian soldiers and Washington's strategic victory at Trenton. NUMBER 13
FRANCE'S GLOBAL STRATEGY IN THE REVOLUTION. Colleague Professor Richard Bell. Professor Richard Bell continues, highlighting the role of Foreign Minister Vergennes and how French involvement expanded the war globally. NUMBER 14
BENEDICT ARNOLD AND PEGGY SHIPPEN. Colleague Professor Richard Bell. Professor Richard Bell continues, discussing Peggy Shippen's influence on Benedict Arnold's defection and their subsequent life in London. NUMBER 15
THE ACCIDENTAL COLONIZATION OF AUSTRALIA. Colleague Professor Richard Bell. Professor Richard Bell concludes, recounting the story of convict William Murray and the accidental selection of Australia as a penal colony following the loss of the American colonies. NUMBER 16
THE BORING BENEFITS OF AI Colleague Kevin Frazier. Kevin Frazier advocates for the "boring use cases" of AI, such as in healthcare and traffic management, to save costs and improve efficiency. NUMBER 7 JANUARY 1951
REGULATING ARTIFICIAL INTELLIGENCE Colleague Kevin Frazier. Kevin Frazier continues, warning against a "waterfall of regulation" by states and advocating for "regulatory sandboxes" to allow experimentation. NUMBER 8 NOVEMBER 1955
PREVIEW WARNING AGAINST FRAGMENTED STATE-LEVEL AI REGULATION Colleague Kevin Frazier. Kevin Frazier, a University of Texas Law School fellow, warns against fragmented AI regulation by individual states seeking tax revenue. He advocates for a national framework rather than hasty local laws, arguing that allowing technology to develop through "trial and error" is superior to heavy-handed, immediate restrictions.
In this rapid response episode, Lawfare senior editors Alan Rozenshtein and Kevin Frazier and Lawfare Tarbell fellow Jakub Kraus discuss President Trump's new executive order on federal preemption of state AI laws, the politics of AI regulation and the split between Silicon Valley Republicans and MAGA populists, and the administration's decision to allow Nvidia to export H200 chips to China. Mentioned in this episode: Executive Order: Ensuring a National Policy Framework for Artificial Intelligence; Charlie Bullock, "Legal Issues Raised by the Proposed Executive Order on AI Preemption," Institute for Law & AI.
Graham Dufault, General Counsel at ACT | The App Association, joins Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to explore how small- and medium-sized enterprises (SMEs) are navigating the EU's AI regulatory framework. The duo break down the Association's recent survey of SMEs, which gathered the views of more than 1,000 enterprises on AI regulation and adoption. Follow Graham: @GDufault and ACT | The App Association: @actonline
Caleb Withers, a researcher at the Center for a New American Security, joins Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to discuss how frontier models shift the balance in favor of attackers in cyberspace. The two discuss how labs and governments can take steps to address these asymmetries favoring attackers, and the future of cyber warfare driven by AI agents. Jack Mitchell, a student fellow in the AI Innovation and Law Program at the University of Texas School of Law, provided excellent research assistance on this episode. Check out Caleb's recent research here. Find Scaling Laws on the Lawfare website, and subscribe to never miss an episode.
Andrew Prystai, CEO and co-founder of Vesta, and Thomas Bueler-Faudree, co-founder of August Law, join Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to think through AI policy from the startup perspective. Andrew and Thomas are the sorts of entrepreneurs that politicians on both sides of the aisle celebrate at town halls and in press releases. They're creating jobs and pushing the technological frontier. So what do they want AI policy leaders to know as lawmakers across the country weigh regulatory proposals? That's the core question of the episode. Giddy up for a great chat! Learn more about the guests and their companies here: Andrew's LinkedIn, Vesta's LinkedIn; Thomas's LinkedIn, August's LinkedIn
Anton Korinek, a professor of economics at the University of Virginia and newly appointed economist to Anthropic's Economic Advisory Council; Nathan Goldschlag, Director of Research at the Economic Innovation Group; and Bharat Chandar, Economist at the Stanford Digital Economy Lab, join Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to sort through the myths, truths, and ambiguities that shape the important debate around the effects of AI on jobs. They discuss what happens when machines begin to outperform humans in virtually every computer-based task, how that transition might unfold, and what policy interventions could ensure broadly shared prosperity. These three are prolific researchers. Give them a follow to find their latest work: Anton: @akorinek on X; Nathan: @ngoldschlag and @InnovateEconomy on X; Bharat: X: @BharatKChandar, LinkedIn: @bharatchandar, Substack: @bharatchandar
Regulating AI and Protecting Children. Kevin Frazier (Law School Fellow at the University of Texas at Austin) addresses the growing concern over AI chatbots following tragedies, noting that while only 1.9% of ChatGPT conversations relate to "relationships," this fraction still warrants significant attention. He criticizes early state legislative responses, such as Illinois banning AI therapy tools, arguing that such actions risk denying mental health support to children who cannot access human therapists. Frazier advocates against imposing restrictive statutory law on the rapidly evolving technology. Instead, he recommends implementing a voluntary, standardized rating system, similar to the MPA film rating system. This framework would provide consumers with digestible information via labels—like "child safe" or "mental health appropriate"—to make informed decisions and incentivize industry stakeholders to develop safer applications. 1941
PREVIEW. The Crisis of AI Literacy: Protecting Vulnerable Communities from Misusing Chatbots. Kevin Frazier discusses the dangers of young people misusing AI chatbots due to a significant lack of public awareness and basic AI literacy. Designers assume users understand that chatbots are merely engines of optimization, not real people with real opinions. Frazier stresses the need for educating consumers on the proper and improper uses of these tools for responsible innovation. 1951
California State Senator Scott Wiener, author of Senate Bill 53, a frontier AI safety bill signed into law by Governor Newsom earlier this month, joins Alan Rozenshtein, Associate Professor at Minnesota Law and Research Director at Lawfare, and Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to explain the significance of SB 53 in the larger debate about how to govern AI. The trio analyze the lessons that Senator Wiener learned from the battle over SB 1047, a related bill that Newsom vetoed last year, explore SB 53's key provisions, and forecast what may be coming next in Sacramento and D.C.
Mosharaf Chowdhury, Associate Professor at the University of Michigan and Director of the ML Energy Lab, and Dan Zhou, former Senior Research Scientist at the MIT Lincoln Lab, MIT Supercomputing Center, and MIT CSAIL, join Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to discuss the energy costs of AI. They break down exactly how much energy fuels a single ChatGPT query, why this is difficult to figure out, how we might improve energy efficiency, and what kinds of policies might minimize AI's growing energy and environmental costs. Leo Wu provided excellent research assistance on this podcast. Read more from Mosharaf: The ML Energy Initiative; “We did the math on AI's energy footprint. Here's the story you haven't heard,” in MIT Technology Review. Read more from Dan: “From Words to Watts: Benchmarking the Energy Costs of Large Language Model Inference,” in Proc. IEEE High Perform. Extreme Comput. Conf. (HPEC); “A Green(er) World for A.I.,” in IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW).
HEADLINE: AI Regulation Debate: Premature Laws vs. Emerging Norms GUEST NAME: Kevin Frazier SUMMARY: Kevin Frazier critiques the legislative rush to regulate AI, arguing that developing norms might be more effective than premature laws. He notes that bills like California's AB 1047, which demands factual accuracy, fundamentally misunderstand AI's generative nature. Imposing vague standards, as seen in New York's RAISE Act, risks chilling innovation and preventing widespread benefits, like affordable legal or therapy tools. Frazier emphasizes that AI policy should be grounded in empirical data rather than speculative fears. 1960
David Sullivan, Executive Director of the Digital Trust & Safety Partnership, and Ravi Iyer, Managing Director of the Psychology of Technology Institute at USC's Neely Center, join Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to discuss the evolution of the Trust & Safety field and its relevance to ongoing conversations about how best to govern AI. They discuss the importance of thinking about the end user in regulation, debate the differences and similarities between social media and AI companions, and evaluate current policy proposals. Leo Wu provided excellent research assistance to prepare for this podcast. Read more from David: "Why we need to make safety the product to build better bots," from the World Economic Forum Centre for AI Excellence; "Learning from the Past to Shape the Future of Digital Trust and Safety," in Tech Policy Press. Read more from Ravi: "Ravi Iyer on How to Improve Technology Through Design," from Lawfare's Arbiters of Truth series; "Regulate Design, not Speech," from the Designing Tomorrow Substack. Read more from Kevin: "California in Your Chatroom: AB 1064's Likely Constitutional Overreach," from the Cato Institute.
From September 20, 2024: Bob Bauer, Professor of Practice and Distinguished Scholar in Residence at New York University School of Law, and Liza Goitein, Senior Director of Liberty & National Security at the Brennan Center, join Kevin Frazier, Assistant Professor at St. Thomas University College of Law and a Tarbell Fellow at Lawfare, to review the emergency powers afforded to the president under the National Emergencies Act, the International Emergency Economic Powers Act, and the Insurrection Act. The trio also inspect ongoing bipartisan efforts to reform emergency powers.
Neil Chilson, Head of AI Policy at the Abundance Institute, and Gus Hurwitz, Senior Fellow and CTIC Academic Director at Penn Carey Law School and Director of Law & Economics Programs at the International Center for Law & Economics, join Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to explore how academics can overcome the silos and incentives that plague the Ivory Tower and positively contribute to the highly complex, evolving, and interdisciplinary work associated with AI governance. The trio recorded this podcast live at the Institute for Humane Studies's Technology, Liberalism, and Abundance Conference in Arlington, Virginia. Read about Kevin's thinking on the topic here: https://www.civitasinstitute.org/research/draining-the-ivory-tower Learn about the conference: https://www.theihs.org/blog/curated-event/technology-abundance-and-liberalism/
HEADLINE: Russian Spy Ships Target Vulnerable Undersea Communication Cables GUEST NAME: Kevin Frazier 50 WORD SUMMARY: Undersea cables are highly vulnerable to sabotage or accidental breaks. Russia uses sophisticated naval technology, including the spy ship Yantar, to map and potentially break these cables in sensitive locations. The US is less vulnerable due to redundancy. However, protection is fragmented, relying on private owners who often lack incentives to adopt sophisticated defense techniques. 1945 RED SQUARE
Preview: Kevin Frazier discusses the extreme vulnerability and fragmented state of undersea cables, the vast majority of which are privately owned. The Department of Defense relies on these systems, which lack sufficient protection due to high costs. Frazier highlights recent reports that the Russian ship Yantar, under GRU possession, is tracking and mapping these vital cables near Great Britain in the event of conflict.
From September 18, 2024: Jane Bambauer, Professor at Levin College of Law; Ramya Krishnan, Senior Staff Attorney at the Knight First Amendment Institute and a lecturer in law at Columbia Law School; and Alan Rozenshtein, Associate Professor of Law at the University of Minnesota Law School and a Senior Editor at Lawfare, join Kevin Frazier, Assistant Professor at St. Thomas University College of Law and a Tarbell Fellow at Lawfare, to break down the D.C. Circuit Court of Appeals' hearing in TikTok v. Garland, in which a panel of judges assessed the constitutionality of the TikTok bill.
Steven Adler, former OpenAI safety researcher, author of Clear-Eyed AI on Substack, and independent AGI-readiness researcher, joins Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to assess the current state of AI testing and evaluations. The two walk through Steven's views on industry efforts to improve model testing and what he thinks regulators ought to know and do when it comes to preventing AI harms. Thanks to Leo Wu for research assistance!