POPULARITY
Brian Fuller, a member of the Product Policy Team at OpenAI, joins Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to analyze how large AI labs test their models for compliance with internal requirements and various legal obligations. They also cover the ins and outs of what it means to work in product policy and what issues are front of mind for in-house policy teams amid substantial regulatory uncertainty.
AI: Electricity supremacy. Kevin Frazier, Civitas Institute
AI: Electricity supremacy. Kevin Frazier, Civitas Institute, continued
SHOW SCHEDULE 8-7-25
Good evening. The show begins in the future, discussing the AI androids that will dominate the QSRs...
CBS EYE ON THE WORLD WITH JOHN BATCHELOR
FIRST HOUR
9-915 Android AI: How soon? #SCALAREPORT: Chris Riegel, CEO, Scala.com @Stratacache
915-930 Jobs: QSR all androids. #SCALAREPORT: Chris Riegel, CEO, Scala.com @Stratacache
930-945 Research endowments and Trump admin. Eric Jensen, Case Western University, Civitas
945-1000 Research endowments and Trump admin. Eric Jensen, Case Western University, Civitas, continued
SECOND HOUR
10-1015 Putin softens. Anatol Lieven, Quincy Institute
1015-1030 Putin successor. Anatol Lieven, Quincy Institute
1030-1045 AI: Electricity supremacy. Kevin Frazier, Civitas Institute
1045-1100 AI: Electricity supremacy. Kevin Frazier, Civitas Institute, continued
THIRD HOUR
1100-1115 #NewWorldReport: Brazil lawfare. Latin American Research Professor Evan Ellis, U.S. Army War College Strategic Studies Institute. @revanellis #NewWorldReportEllis
1115-1130 #NewWorldReport: Colombia lawfare. Latin American Research Professor Evan Ellis, U.S. Army War College Strategic Studies Institute. @revanellis #NewWorldReportEllis
1130-1145 #NewWorldReport: Mexico Sheinbaum. Latin American Research Professor Evan Ellis, U.S. Army War College Strategic Studies Institute. @revanellis #NewWorldReportEllis
1145-1200 #NewWorldReport: Argentina congress election. Latin American Research Professor Evan Ellis, U.S. Army War College Strategic Studies Institute. @revanellis #NewWorldReportEllis
FOURTH HOUR
12-1215 Fed choice. Veronique de Rugy
1215-1230 Canada: Shy vacationers. Conrad Black
1230-1245 Rubio and Caracas. Mary Anastasia O'Grady
1245-100 AM Hotel Mars: China wins. Rand Simberg, David Livingston
Preview: AI predictions: Kevin Frazier of UT School of Law explains that AI cannot yet predict the future. More later.
Renée DiResta, an Associate Research Professor at the McCourt School of Public Policy at Georgetown, joins Alan Rozenshtein and Kevin Frazier to take a look at the Trump Administration's Woke AI policies, as set forth by a recent EO and explored in the AI Action Plan. This episode unpacks the implications of prohibiting AI models that fail to pursue objective truth or that espouse "DEI" values.
Renée DiResta, an Associate Research Professor at the McCourt School of Public Policy at Georgetown and a Contributing Editor at Lawfare, and Alan Rozenshtein, an Associate Professor at Minnesota Law, Research Director at Lawfare, and, with the exception of today, co-host on the Scaling Laws podcast, join Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to take a look at the Trump Administration's Woke AI policies, as set forth by a recent EO and explored in the AI Action Plan.
Read the Woke AI executive order
Read the AI Action Plan
Read "Generative Baseline Hell and the Regulation of Machine-Learning Foundation Models," by James Grimmelmann, Blake Reid, and Alan Rozenshtein
In this episode of Scaling Laws, Kevin Frazier is joined by Sayash Kapoor, co-author of "AI Snake Oil," to explore the complexities of AI development and its societal implications. They delve into the skepticism surrounding AGI claims, the real bottlenecks in AI adoption, and the transformative potential of AI as a general-purpose technology. Kapoor shares insights on the challenges of integrating AI into various sectors, the importance of empirical research, and the evolving nature of work in the AI era. The conversation also touches on the role of policy in shaping AI's future and the need for a nuanced understanding of AI's capabilities and limitations.
This week, Scott sat down with his Lawfare colleagues Natalie Orpett, Kevin Frazier, and Tyler McBrien to talk through the week's big national security news stories, including:
“Feeding Frenzy.” The crisis in Gaza has reached a new, desperate stage. Months of a near-total blockade on humanitarian assistance have created an imminent risk, if not a reality, of mass starvation among Gazan civilians. And it finally has the world—including President Donald Trump—taking notice and putting pressure on the Israeli government to change tack, including by threatening to recognize a Palestinian state. Now the Israeli government appears to be giving an inch, allowing into the country what experts maintain is the bare minimum level of aid necessary to avoid famine and even pursuing a few (largely symbolic) airlifts, while allowing other states to do the same. But how meaningful is this shift? And what could it mean for the trajectory of the broader conflict?
“Hey, It Beats an AI Inaction Plan.” After months of anticipation, the Trump administration finally released its “AI Action Plan” last week. And despite some serious reservations about its handling of “woke AI” and select other culture war issues, the plan has generally been met with cautious optimism. How should we feel about the AI Action Plan? And what does it tell us about the direction AI policy is headed?
“Pleas and No Thank You.” Earlier this month, the D.C. Circuit upheld then-Secretary of Defense Lloyd Austin's decision to nullify plea deals that several of the surviving 9/11 perpetrators had struck with those prosecuting them in the military commissions. How persuasive is the court's argument? And what does the decision mean for the future of the tribunals?
In object lessons, Kevin highlighted a fascinating breakthrough from University of Texas engineers who developed over 1,500 AI-designed materials that can make buildings cooler and more energy efficient—an innovation that, coming from Texas, proves that necessity really is the mother of invention. Tyler took us on a wild ride into the world of Professional Bull Riders with a piece from The Baffler exploring the sport's current state and terrifying risks. Scott brought a sobering but essential read from the Carnegie Endowment for International Peace about how synthetic imagery and disinformation are shaping the Iran-Israel conflict. And Natalie recommended “Drive Your Plow Over the Bones of the Dead,” by Olga Tokarczuk, assuring us it's not nearly as murder-y as it sounds.
Note: We will be on vacation next week but look forward to being back on August 13!
In this episode, join Kevin Frazier as he delves into the complex world of AI regulation with experts Lauren Wagner of the Abundance Institute and Andrew Freedman, Chief Strategy Officer at Fathom. As the AI community eagerly awaits the federal government's AI action plan, our guests explore the current regulatory landscape and the challenges of implementing effective governance through bills like SB 813, including innovative approaches such as independent verification organizations and public-private partnerships.
Be sure to check out Fathom's Substack here: https://fathomai.substack.com/subscribe
Janet Egan, Senior Fellow with the Technology and National Security Program at the Center for a New American Security; Jessica Brandt, Senior Fellow for Technology and National Security at the Council on Foreign Relations; Neil Chilson, Head of AI Policy at the Abundance Institute; and Tim Fist, Director of Emerging Technology Policy at the Institute for Progress, join Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, for a special version of Scaling Laws.
This episode was recorded just hours after the release of the AI Action Plan. About 180 days ago, President Trump directed his administration to explore ways to achieve AI dominance. His staff has attempted to do just that. This group of AI researchers dives into the plan's extensive recommendations and explores what may come next.
Lt. Gen. (ret.) Jack Shanahan joins Kevin Frazier to explore the nuanced landscape of AI in national security, challenging the prevalent "AI arms race" narrative. The discussion delves into the complexities of AI integration in defense, the cultural shifts required within the Department of Defense, and the critical role of public trust and shared national vision. Tune in to understand how AI is reshaping military strategies and the broader implications for society.
In this Scaling Laws Academy "class," Kevin Frazier, the AI Innovation and Law Fellow at Texas Law and a Senior Editor at Lawfare, speaks with Eugene Volokh, a Senior Fellow at the Hoover Institution and long-time professor of law at UCLA, on libel in the AI context. The two dive into Volokh's paper, “Large Libel Models? Liability for AI Output.” Extra credit for those who give it a full read and explore some of the "homework" below:
“Beyond Section 230: Principles for AI Governance,” 138 Harv. L. Rev. 1657 (2025)
“When Artificial Agents Lie, Defame, and Defraud, Who Is to Blame?,” Stanford HAI (2021)
Kevin Frazier brings on Eugene Volokh, a senior fellow at the Hoover Institution and UCLA law professor, to explore the complexities of libel in the age of AI. Discover how AI-generated content challenges traditional legal frameworks and the implications for platforms under Section 230. This episode is a must-listen for anyone interested in the evolving landscape of AI and law.
My conversation today is on the necessity of adaptive leadership in the coming wave that is artificial intelligence. My guest is Kevin Frazier, the newly minted AI Innovation and Law Fellow at The University of Texas School of Law. His article (here) in Law & Liberty is called Building an AI-Savvy Workforce. His new podcast, Scaling Laws (here), is excellent. Find his other work at Lawfare (here). Cross & Gavel is a production of CHRISTIAN LEGAL SOCIETY. The episode was produced by Josh Deng, with music from Vexento.
Cass Madison, the Executive Director of the Center for Civic Futures, and Zach Boyd, Director of the AI Policy Office at the State of Utah, join Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to discuss how state governments are adjusting to the Age of AI. This conversation explores Cass's work to organize the increasing number of state officials tasked with thinking about AI adoption and regulation as well as Zach's experience leading one of the most innovative state AI offices.
Ethan Mollick, Professor of Management and author of the “One Useful Thing” Substack, joins Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, and Alan Rozenshtein, Associate Professor at Minnesota Law and a Senior Editor at Lawfare, to analyze the latest research in AI adoption, specifically its use by professionals and educators. The trio also analyze the trajectory of AI development and related, ongoing policy discussions.
More of Ethan Mollick's work: https://www.oneusefulthing.org/
On the inaugural episode of Scaling Laws, co-hosts Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, and Alan Rozenshtein, Professor at Minnesota Law and Research Director at Lawfare, speak with Adam Thierer, a senior fellow for the Technology and Innovation team at the R Street Institute, and Helen Toner, the Director of Strategy and Foundational Research Grants at Georgetown's Center for Security and Emerging Technology (CSET).
They discuss the recent overwhelming defeat in the Senate of a proposed moratorium on state and local regulation of artificial intelligence. The conversation explores the moratorium's journey from its inclusion in a House bill to its ultimate failure, examining the procedural hurdles, the confusing legislative language, and the political maneuvering that led to its demise by a 99-to-1 vote. The group discuss the future of U.S. AI governance, covering the Republican party's fragmentation on tech policy and whether Congress's failure to act is a sign of it being broken or a deliberate policy choice.
Mentioned in this episode:
“The Continuing Tech Policy Realignment on the Right” by Adam Thierer on Medium
“1,000 AI Bills: Time for Congress to Get Serious About Preemption” by Kevin Frazier and Adam Thierer in Lawfare
“Congress Should Preempt State AI Safety Legislation” by Dean W. Ball and Alan Z. Rozenshtein in Lawfare
"The Coming Techlash Could Kill AI Innovation Before It Helps Anyone" by Kevin Frazier in Reason
"Unresolved debates about the future of AI" by Helen Toner in Rising Tide
Christina Knight, Machine Learning Safety and Evals Lead at Scale AI and former senior policy adviser at the U.S. AI Safety Institute (AISI), joins Kevin Frazier, the AI Innovation and Law Fellow at Texas Law and a Senior Editor at Lawfare, to break down what it means to test and evaluate frontier AI models, as well as the status of international efforts to coordinate that testing and evaluation work.
This recording took place before the administration changed the name of the U.S. AI Safety Institute to the U.S. Center for AI Standards and Innovation.
Josh Batson, a research scientist at Anthropic, joins Kevin Frazier, AI Innovation and Law Fellow at Texas Law and Senior Editor at Lawfare, to break down two research papers—“Mapping the Mind of a Large Language Model” and “Tracing the thoughts of a large language model”—that uncovered some important insights about how advanced generative AI models work. The two discuss those findings as well as the broader significance of interpretability and explainability research.
This week, Scott sat down with the AI-oriented Lawfare Senior Editors Alan Rozenshtein and Kevin Frazier to talk through the week's top AI-focused news stories, including:
“Oh Sure, Now He's Into Free Trade.” President Trump has repealed the Biden administration's rule setting strict limits on the diffusion of high-end AI technology, opening the door to the global transfer of the technologies powering U.S. AI development, including advanced chipsets. And we're already seeing results of that policy in a recent deal the president signed with the UAE that would work toward the transfer of advanced semiconductors. How should AI diffusion fit into the broader global strategy surrounding the AI industry in the United States? And what approach does the Trump administration seem inclined to take?
“Paving Over the Playing Field.” House Republicans recently included a provision in a House bill that would have preempted state efforts to legislate on and regulate the AI industry for a decade. Is this sort of federal preemption a prudent step given the broader competitive dynamics with China? Or does it go too far in insulating AI companies and users from accountability for their actions, particularly where they put the public interest and safety at risk?
“Speechless.” A federal district court in Florida has issued a notable opinion of first impression in a tragic case involving a teenager who committed suicide, allegedly as a result of encouragement from an AI bot powered by the company character.ai. Among other holdings, the judge concluded that the AI's output was not itself protected speech. Is this holding correct? And what impact will it have on the development of the AI industry?
In Object Lessons, the AI Guys went surprisingly analog. Alan recommended some good, ol' fashioned, 19th-century imperial espionage with “The Great Game,” by Peter Hopkirk. Kevin, meanwhile, is keeping an eye on a different kind of game: the NCAA Division I Baseball Championship, in which he's throwing up some Hook 'em Horns for Texas. And Scott is trying to “Economize” his time with The Economist's Espresso app, a quick, curated read that fits neatly into a busy morning.
President Trump's budget bill, having recently passed the House of Representatives, is headed for the Senate with a proposed 10-year moratorium on state-level AI regulations. How should lawmakers approach this rapidly developing technology without stalling U.S. progress in the AI "arms race," while still prioritizing consumers' data privacy and online safety?
Dr. Scott Babwah Brennen, Kevin Frazier, and Adam Thierer join the RTP Fourth Branch Podcast to discuss and debate AI regulation, innovation, and preemption.
Page Hedley, Senior Advisor at the Forecasting Research Institute and co-author of the Not for Private Gain letter urging state attorneys general to stop OpenAI's planned restructuring, and Gad Weiss, the Wagner Fellow in Law & Business at NYU Law, join Kevin Frazier, AI Innovation and Law Fellow at Texas Law and Senior Editor at Lawfare, and Alan Rozenshtein, Associate Professor at Minnesota Law and Senior Editor at Lawfare, to analyze news of OpenAI once again modifying its corporate governance structure. The group break down the rationale for the proposed modification, the relevant underlying law, and the significance of corporate governance in shaping the direction of AI development.
Cullen O'Keefe, Research Director at the Institute for Law and AI, joins Kevin Frazier, AI Innovation and Law Fellow at Texas Law and a Contributing Editor at Lawfare, and Renée DiResta, Associate Research Professor at the McCourt School of Public Policy at Georgetown and a Contributing Editor at Lawfare, to discuss a novel AI governance framework. They dive into a paper he co-authored on the concept of "Law-Following AI" or LFAI. That paper explores a near-term future. Imagine AI systems capable of tackling complex computer-based tasks with expert human-level skill. The potential for economic growth, scientific discovery, and improving public services is immense. But how do we ensure these powerful tools operate safely and align with our societal values? That's the question at the core of Cullen's paper and this podcast.
Ben Brooks, a fellow at Harvard's Berkman Klein Center and former head of public policy for Stability AI, joins Kevin Frazier, AI Innovation and Law Fellow at Texas Law and Contributing Editor at Lawfare, to discuss a sudden and significant shift toward open-sourcing leading AI models and the ramifications of that pivot for AI governance at home and abroad. Ben and Kevin specifically review OpenAI's announced plans to release a new open-weights model.
Coverage of OpenAI announcement: https://techcrunch.com/2025/03/31/openai-plans-to-release-a-new-open-language-model-in-the-coming-months/
Andrew Bakaj, Chief Legal Counsel at Whistleblower Aid, joins Kevin Frazier, AI Innovation and Law Fellow at Texas Law and Contributing Editor at Lawfare, to discuss a declaration by National Labor Relations Board employee Daniel Berulis alleging that DOGE facilitated the exfiltration of potentially sensitive information to external sources. The two also analyze the merits of whistleblower protections more generally.
Read more about the declaration here: https://www.npr.org/2025/04/15/nx-s1-5355896/doge-nlrb-elon-musk-spacex-security
For a copy of the letter penned by several members of Congress, go here: https://www.npr.org/2025/04/24/nx-s1-5375118/congress-doge-nlrb-whistleblower
This week, Scott sat down with his Lawfare colleagues Anna Bower, Tyler McBrien, and Kevin Frazier to talk through the week's big national security news, including:
“Aliens vs. Predators.” Despite forceful legal pushback—including by the U.S. Supreme Court—the Trump administration is working hard to continue its campaign to remove foreign aliens it accuses of pursuing a “predatory incursion” from the country using the Alien Enemies Act. How far will it go? And to what extent can the courts (or anyone else) stop them?
“Aye Aye Robot.” Both the Biden and Trump administrations were fans of artificial intelligence (AI) and set out policies to incorporate it into government decision-making. But while the Biden administration focused much of its efforts on guardrails, the Trump administration has increasingly torn them down as part of a broader push to expand the nascent technology's role in government. What are the risks and potential benefits of this sort of government by AI?
“For Pete's Sake.” Beleaguered Secretary of Defense Pete Hegseth is more beleaguered than ever this week, after reports that, in addition to inadvertently sharing classified secrets with Atlantic reporter Jeffrey Goldberg, he also passed them to his wife, brother, and personal lawyer on another Signal thread. Meanwhile, a former adviser (and established Trump loyalist) went public with allegations that Hegseth's management has led to chaos at the Defense Department and called for his resignation. Will this be enough for the Trump administration to cut bait and run? Or does his support in the MAGAsphere simply run too deep?
In object lessons, Tyler, fresh from biking adventures abroad, hyped the routes, photos, and resources on bikepacking.com, if physical exertion is your idea of relaxation. Anna, finding other ways to relax, came to the defense of The Big Short in helping to soothe her anxiety amid more current market upheaval. Doubling down on the “no relaxation without tension” theme, Scott's outie binge-watched Severance while on vacation. And Kevin, very on-brand, was quick to bring us a feel-good story of a new community partnership to support AI skill-building in Austin-based nonprofits.
Chris Hughes, author of “Marketcrafters” and co-founder of the Economic Security Project, joins Kevin Frazier, AI Innovation and Law Fellow at Texas Law and Contributing Editor at Lawfare, to discuss his book and its implications at a time of immense economic uncertainty and political upheaval. The duo explore several important historical case studies that Chris suggests may have lessons worth heeding in the ongoing struggle to direct markets toward the public good.
Daniel Kokotajlo, former OpenAI researcher and Executive Director of the AI Futures Project, and Eli Lifland, a researcher with the AI Futures Project, join Kevin Frazier, AI Innovation and Law Fellow at Texas Law and Contributing Editor at Lawfare, to discuss what AI may look like in 2027. The trio explore a report co-authored by Daniel that dives into the hypothetical evolution of AI over the coming years. This novel report has already elicited a lot of attention, with some reviewers celebrating its creativity and others questioning its methodology. Daniel and Eli tackle that feedback and help explain the report's startling conclusion—that superhuman AI will develop within the next decade.
In this Tech Roundup episode of RTP's Fourth Branch podcast, Kevin Frazier and Aram Gavoor sit down to discuss the recent, fast-moving developments in AI policy in the second Trump administration, as well as the importance of innovation and procurement.
Hillary Hartley, the former Chief Digital Officer of Ontario and former Co-Founder and Deputy Executive Director at 18F, and David Eaves, Associate Professor of Digital Government and Co-Deputy Director of the Institute for Innovation and Public Purpose at University College London, join Kevin Frazier, AI Innovation and Law Fellow at Texas Law and Contributing Editor at Lawfare, to discuss the recent closure of 18F, a digital unit within the GSA focused on updating and enhancing government technological systems and public-facing digital services. Hillary and David also published a recent Lawfare article on this topic, “Learning from the Legacy of 18F.”
Adam Thierer, Senior Fellow for the Technology & Innovation team at R Street, joins Kevin Frazier, the AI Innovation and Law Fellow at the UT Austin School of Law and a Contributing Editor at Lawfare, to review public comments submitted in response to the Office of Science and Technology Policy's Request for Information on the AI Action Plan. The pair summarize their own comments and explore those submitted by major labs and civil society organizations. They also dive into recent developments in the AI regulatory landscape, including a major veto by Governor Youngkin in Virginia.
Readings discussed:
Kevin on Vance's America First, America Only Approach to AI
Keegan and Adam on AI Safety Treatises
Kevin on Proposed Firings at NIST
Dean and Alan on Preemption
Dan Hendrycks, Director of the Center for AI Safety, joins Kevin Frazier, the AI Innovation and Law Fellow at the UT Austin School of Law and Contributing Editor at Lawfare, to discuss his recent paper (co-authored with former Google CEO Eric Schmidt and Scale AI CEO Alexandr Wang), “Superintelligence Strategy.”
Derek Thompson, a senior editor at The Atlantic and co-author (with Ezra Klein) of Abundance, joins Renée DiResta, Associate Research Professor at the McCourt School of Public Policy at Georgetown and a Contributing Editor at Lawfare, and Kevin Frazier, AI Innovation and Law Fellow at the UT Austin School of Law and Contributing Editor at Lawfare, to discuss the theory of Abundance and its feasibility in an age of political discord and institutional distrust.
Carla Reyes, Associate Professor of Law at SMU Dedman School of Law, and Drew Hinkes, a Partner at Winston & Strawn with a practice focused on digital assets and advising financial services clients, join Kevin Frazier, Contributing Editor at Lawfare, to discuss the latest in cryptocurrency policy. The trio review the evolution of crypto-related policy since the Obama era, discuss the veracity of dominant crypto narratives, and explore what's next from the Trump administration on this complex, evolving topic.
Read more:
TRM Labs 2025 Crypto Crime Report: https://www.trmlabs.com/2025-crypto-crime-report
2023 FDIC National Survey of Unbanked and Underbanked Households: https://www.fdic.gov/household-survey
Tim Fist, Director of Emerging Technology Policy at the Institute for Progress, and Arnab Datta, Director of Infrastructure Policy at IFP and Managing Director of Policy Implementation at Employ America, join Kevin Frazier, a Contributing Editor at Lawfare and adjunct professor at Delaware Law, to dive into the weeds of their thorough report on building America's AI infrastructure. The duo extensively studied the gulf between the stated goals of America's AI leaders and the practical hurdles to realizing those ambitious aims.
Check out the entire report series here: Compute in America
Alexandra Reeve Givens, CEO of the Center for Democracy & Technology; Courtney Lang, Vice President of Policy for Trust, Data, and Technology at ITI and a Non-Resident Senior Fellow at the Atlantic Council GeoTech Center; and Nema Milaninia, a partner on the Special Matters & Government Investigations team at King & Spalding, join Kevin Frazier, Contributing Editor at Lawfare and Adjunct Professor at Delaware Law, to discuss the Paris AI Action Summit and whether it marks a formal pivot away from AI safety to AI security and, if so, what an embrace of AI security means for domestic and international AI governance.
The Washington Post reported earlier this month that representatives of DOGE — the Department of Government Efficiency — gained access to sensitive data at the Department of Education and fed it into AI software. This has raised red flags over whether it violates federal privacy law. We reached out to DOGE for comment, but didn’t hear back. But there are ways to use AI to improve efficiency without raising privacy concerns. Marketplace’s Stephanie Hughes spoke with Kevin Frazier, contributing editor at the publication Lawfare, about how the government has used AI in the past and how it could use it more responsibly in the future.
Matt Perault, Head of AI Policy at Andreessen Horowitz, joins Kevin Frazier, Contributing Editor at Lawfare and Adjunct Professor at Delaware Law, to define the Little Tech Agenda and explore how adoption of the Agenda may shape AI development across the country. The duo also discuss the current AI policy landscape.
Chris Miller, a professor at the Fletcher School at Tufts University and Nonresident Senior Fellow at the American Enterprise Institute, and Marshall Kosloff, Senior Fellow at the Niskanen Center and co-host of the Realignment Podcast, join Kevin Frazier, a Contributing Editor at Lawfare and adjunct professor at Delaware Law, and Alan Rozenshtein, Senior Editor at Lawfare and associate professor of law at the University of Minnesota, to discuss AI, supply chains, and the Abundance Agenda.
Aram Gavoor, Associate Dean for Academic Affairs at GW Law, joins Kevin Frazier, a Tarbell Fellow at Lawfare, to summarize and analyze the Trump administration's initial moves to pivot the nation's AI policy toward relentless innovation. The duo discuss the significance of Trump rescinding the Biden administration's 2023 executive order on AI as well as the recently announced Stargate Project.
Senior Editor at Lawfare Eugenia Lostri sits down with Kevin Frazier, Lawfare's Tarbell Fellow in Artificial Intelligence, to discuss recent disruptions to undersea cables. They talk about the ongoing investigations; the challenges that weather, cooperation, and jurisdiction can present; and the plans in place to protect the cables from accidents and sabotage.
Janet Egan, Senior Fellow at the Center for a New American Security (CNAS), and Lennart Heim, an AI researcher at RAND, join Kevin Frazier, a Tarbell Fellow at Lawfare, to analyze the interim final rule on AI diffusion announced by the Bureau of Industry and Security on January 13, 2025. This fourth-quarter effort by the Biden Administration to shape AI policy may have major ramifications for the global race for AI dominance.
This week, Scott sat down with his Lawfare colleagues Molly Reynolds and Kevin Frazier to discuss the week's big national security news, including:
“Mike Drop (Almost).” While we are still two weeks away from having a new president, the 119th Congress is already underway. But there are signs of tension in the Republican majority controlling both chambers, with House Republicans (under pressure from former President Trump and adviser Elon Musk) having killed a leadership-negotiated compromise funding bill at the end of the last Congress and Speaker Mike Johnson just barely securing reelection by a single vote after some last-minute wrangling within the Republican caucus. What do these recent events tell us about what we should expect over the next year?
“Will Be Mild.” The Jan. 6 that passed earlier this week went very differently than the one four years ago, with Congress peacefully recognizing former President Trump's election back to the White House. How are the legacies of the Jan. 6 insurrection of 2021 winding to a close in 2025? And which seem likely to persist?
“Missed Connections.” Finland received an unwelcome Christmas present this year, after a major undersea telecommunications cable was damaged by the anchor of a suspected Russian shadow ship, in an act some believe was deliberate. And Taiwan rang in the New Year in similar fashion, with a major undersea cable getting damaged by a China-associated vessel. What is behind this set of attacks? And what tools do the affected states have to defend themselves?
In object lessons, Molly shared an excellent holiday tradition to keep in your back pocket for next year and all the years to come: a family time capsule. Scott shared his newly perfected cocktail recipe, a concoction he is calling the Little Palermo™ (see below). And Kevin went a bit darker with his recommendation of “End Times,” by Peter Turchin.
The Little Palermo™ by Scott R. Anderson
1 oz. brandy
1 oz. cold brew concentrate
3/4 oz. Mr. Black coffee liqueur
3/4 oz. Averna
1/4 oz. rich demerara syrup
2 dashes chicory bitters
Shake vigorously over ice, double strain into a glass, express lemon oil over the top.
It's time for Lawfare's annual "Ask Us Anything" podcast. You called in with your questions, and Lawfare contributors have answers! Benjamin Wittes, Kevin Frazier, Quinta Jurecic, Eugenia Lostri, Alan Rozenshtein, Scott R. Anderson, Natalie Orpett, Amelia Wilson, Anna Bower, and Roger Parloff addressed questions on everything from presidential pardons to the risks of AI to the domestic deployment of the military.
Thank you for your questions. And as always, thank you for listening.