From September 20, 2024: Bob Bauer, Professor of Practice and Distinguished Scholar in Residence at New York University School of Law, and Liza Goitein, Senior Director of Liberty & National Security at the Brennan Center, join Kevin Frazier, Assistant Professor at St. Thomas University College of Law and a Tarbell Fellow at Lawfare, to review the emergency powers afforded to the president under the National Emergencies Act, the International Emergency Economic Powers Act, and the Insurrection Act. The trio also inspects ongoing bipartisan efforts to reform emergency powers. To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute. Support this show http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.
“Starting small, but aspiring to grow” defines the little tech agenda. Big Tech companies often depend on smaller innovators for key components of manufacturing and new technologies. Given this dependence on little tech, what are the “gaps” in its agenda? The U.S. has technological capital waiting to be unlocked by small innovators. What steps can be taken to address this gap and channel little tech's efforts toward our national interests? Can we strike a balance between Big Tech and little tech to further the goals of the United States’ technological development? Join the Federalist Society’s Regulatory Transparency Project and host Prof. Kevin Frazier for an in-depth discussion of the “Little Tech Agenda” with special guest Sam Hammond of the Foundation for American Innovation.
Over the past 30 years, the United States has experienced rapid technological change. Yet in recent years, innovation appears to have plateaued. The iPhone of four years ago is nearly identical to today’s model, and the internet has changed little over the same period. Little tech companies play a significant role in generating new ideas and technological development. In this episode, experts discuss the financial gains and risks of incentivizing little tech innovation and offer policy recommendations that encourage investment in the "littlest tech" firms to drive future breakthroughs. Join the Federalist Society’s Regulatory Transparency Project and host Prof. Kevin Frazier for an in-depth discussion of the “Little Tech Agenda” with special guest Dave Karpf, Associate Professor at the George Washington University School of Media and Public Affairs.
Neil Chilson, Head of AI Policy at the Abundance Institute, and Gus Hurwitz, Senior Fellow and CTIC Academic Director at Penn Carey Law School and Director of Law & Economics Programs at the International Center for Law & Economics, join Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to explore how academics can overcome the silos and incentives that plague the Ivory Tower and positively contribute to the highly complex, evolving, and interdisciplinary work associated with AI governance. The trio recorded this podcast live at the Institute for Humane Studies' Technology, Liberalism, and Abundance Conference in Arlington, Virginia. Read about Kevin's thinking on the topic here: https://www.civitasinstitute.org/research/draining-the-ivory-tower Learn about the Conference: https://www.theihs.org/blog/curated-event/technology-abundance-and-liberalism/ Find Scaling Laws on the Lawfare website, and subscribe to never miss an episode. To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute. Support this show http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.
HEADLINE: Russian Spy Ships Target Vulnerable Undersea Communication Cables. GUEST NAME: Kevin Frazier. 50-WORD SUMMARY: Undersea cables are highly vulnerable to sabotage or accidental breaks. Russia uses sophisticated naval technology, including the spy ship Yantar, to map and potentially break these cables in sensitive locations. The US is less vulnerable due to redundancy. However, protection is fragmented, relying on private owners who often lack incentives to adopt sophisticated defense techniques.
In this Scaling Laws rapid response episode, hosts Kevin Frazier and Alan Rozenshtein talk about SB-53, the frontier AI transparency (and more) law that California Governor Gavin Newsom signed into law on September 29. Hosted on Acast. See acast.com/privacy for more information.
Preview: Kevin Frazier discusses the extreme vulnerability and fragmented state of undersea cables, the vast majority of which are privately owned. The Department of Defense relies on these systems, which lack sufficient protection due to high costs. Frazier highlights recent reports that the Russian ship Yantar, operated by the GRU, is tracking and mapping these vital cables near Great Britain in preparation for a potential conflict.
Kevin Frazier testified that Congress needs a national vision to manage data center infrastructure and mitigate local impacts. He stressed vulnerable undersea cables are neglected and urged academics to prioritize teaching and public-oriented research.
Preview: Kevin Frazier of the University of Texas School of Law and the Civitas Institute discusses congressional concerns over AI regulation, balancing state interests against federal goals of preventing cross-state policy projection and prioritizing national AI innovation and growth.
From September 18, 2024: Jane Bambauer, Professor at Levin College of Law; Ramya Krishnan, Senior Staff Attorney at the Knight First Amendment Institute and a lecturer in law at Columbia Law School; and Alan Rozenshtein, Associate Professor of Law at the University of Minnesota Law School and a Senior Editor at Lawfare, join Kevin Frazier, Assistant Professor at St. Thomas University College of Law and a Tarbell Fellow at Lawfare, to break down the D.C. Circuit Court of Appeals' hearing in TikTok v. Garland, in which a panel of judges assessed the constitutionality of the TikTok bill. To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute. Support this show http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.
What priorities should shape U.S. innovation policy at the national level? Historically, the federal government has adopted a "light touch" approach, with legislation often focused on reducing barriers so that smaller entrepreneurs can prioritize innovation over regulatory compliance. Big Tech companies often hold competitive advantages, including resources, capital, and political influence, that small-scale entrepreneurs lack. How can policymakers design legislation that ensures fair competition between Big Tech and little tech? Do acquisitions of little tech companies by Big Tech promote innovation or constrain the development of emerging ideas? How can policymakers foster innovation for smaller-scale initiatives through legislation, competition regulation, and support for emerging firms? Join the Federalist Society’s Regulatory Transparency Project and host Prof. Kevin Frazier for an in-depth discussion of the “Little Tech Agenda” with special guest Jennifer Huddleston, Senior Fellow in Technology Policy at the Cato Institute.
Steven Adler, former OpenAI safety researcher, author of Clear-Eyed AI on Substack, and independent AGI-readiness researcher, joins Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and Senior Fellow at Lawfare, to assess the current state of AI testing and evaluations. The two walk through Steven's views on industry efforts to improve model testing and what he thinks regulators ought to know and do when it comes to preventing AI harms. You can read Steven's Substack here: https://stevenadler.substack.com/ Thanks to Leo Wu for research assistance! Find Scaling Laws on the Lawfare website, and subscribe to never miss an episode. To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute. Support this show http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.
Joshua Gans, a professor at the University of Toronto and co-author of "Power and Prediction: The Disruptive Economics of Artificial Intelligence," joins Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to evaluate ongoing concerns about AI-induced job displacement, the likely consequences of various regulatory proposals on AI innovation, and how AI tools are already changing higher education. Select works by Gans include: A Quest for AI Knowledge (https://www.nber.org/papers/w33566), Regulating the Direction of Innovation (https://www.nber.org/papers/w32741), and How Learning About Harms Impacts the Optimal Rate of Artificial Intelligence Adoption (https://www.nber.org/papers/w32105). Hosted on Acast. See acast.com/privacy for more information.
Anu Bradford, Professor at Columbia Law School, and Kate Klonick, Senior Editor at Lawfare and Associate Professor at St. John's University School of Law, join Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to assess the ongoing, contrasting, and, at times, conflicting regulatory approaches to Big Tech being pursued by the EU and U.S. The trio start with an assessment of the EU's use of the Brussels Effect, coined by Anu, to shape AI development. Next, they explore the U.S.'s increasingly interventionist industrial policy with respect to key sectors, especially tech. Read more: Anu's op-ed in The New York Times; "The Impact of Regulation on Innovation," by Philippe Aghion, Antonin Bergeaud, and John Van Reenen; and the Draghi Report on the Future of European Competitiveness. Find Scaling Laws on the Lawfare website, and subscribe to never miss an episode. To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute. Support this show http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.
Smaller, advanced technology entrepreneurs are increasingly shaping the U.S. innovation landscape through what some have called the “Little Tech Agenda.” But what exactly is this agenda, and how might it influence policy debates moving forward? America has long celebrated small-scale innovators, yet questions remain about how regulatory frameworks can support entrepreneurship without stifling growth. Some policymakers argue that new parameters are needed to govern emerging technologies, while others caution that overregulation could hinder the nation’s competitive edge in the global power struggle. If “Little Tech” is critical to America’s future, how far should the United States go to defend and promote its development? Join the Federalist Society’s Regulatory Transparency Project and host Prof. Kevin Frazier for an in-depth discussion of the “Little Tech Agenda” with special guest Collin McCune, Head of Government Affairs at Andreessen Horowitz.
From August 23, 2024: Richard Albert, William Stamps Farish Professor in Law, Professor of Government, and Director of Constitutional Studies at the University of Texas at Austin, joins Kevin Frazier, Assistant Professor at St. Thomas University College of Law and a Tarbell Fellow at Lawfare, to conduct a comparative analysis of what helps constitutions withstand political pressures. Richard's extensive study of different means to amend constitutions shapes their conversation about whether the U.S. Constitution has become too rigid. To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute. Support this show http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.
Peter E. Harrell, Adjunct Senior Fellow at the Center for a New American Security, joins Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to examine the White House's announcement that it will take a 10% share of Intel. They dive into the policy rationale for the stake as well as its legality. Peter and Kevin also explore whether this is just the start of such deals given that President Trump recently declared that “there will be more transactions, if not in this industry then other industries.” Find Scaling Laws on the Lawfare website, and subscribe to never miss an episode. To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute. Support this show http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.
MacKenzie Price, co-founder of Alpha School, and Rebecca Winthrop, a senior fellow and director of the Center for Universal Education at the Brookings Institution, join Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to review how AI is being integrated into the classroom at home and abroad. MacKenzie walks through the use of predictive AI in Alpha School classrooms. Rebecca provides a high-level summary of ongoing efforts around the globe to bring AI into the education pipeline. This conversation is particularly timely in the wake of the AI Action Plan, which built on the Trump administration's prior calls for greater use of AI from K to 12 and beyond. Learn more about Alpha School here: https://www.nytimes.com/2025/07/27/us/politics/ai-alpha-school-austin-texas.html and here: https://www.astralcodexten.com/p/your-review-alpha-school Learn about the Brookings Global Task Force on AI in Education here: https://www.brookings.edu/projects/brookings-global-task-force-on-ai-in-education/ Hosted on Acast. See acast.com/privacy for more information.
AI: REGULATING LLM - KEVIN FRAZIER, CIVITAS INSTITUTE
AI: REGULATING LLM - KEVIN FRAZIER, CIVITAS INSTITUTE CONTINUED
Keegan McBride, Senior Policy Advisor in Emerging Technology and Geopolitics at the Tony Blair Institute, and Nathan Lambert, a post-training lead at the Allen Institute for AI, join Alan Rozenshtein, Associate Professor at Minnesota Law and Research Director at Lawfare, and Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to explore the current state of open source AI model development and associated policy questions. The pivot to open source has been swift following initial concerns that the security risks posed by such models outweighed their benefits. What this transition means for the US AI ecosystem and the global AI competition is a topic worthy of analysis by these two experts. Hosted on Acast. See acast.com/privacy for more information.
Preview: AGI regulation. Colleague Kevin Frazier comments on the tentative state of LLMs, which need time to develop before being either judged or derided by lawmakers. More later.
In this episode of Scaling Laws, Dean Ball, Senior Fellow at the Foundation for American Innovation and former Senior Policy Advisor for Artificial Intelligence and Emerging Technology at the White House Office of Science and Technology Policy, joins Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, and Alan Rozenshtein, Associate Professor at Minnesota Law and Research Director at Lawfare, to share an inside perspective of the Trump administration's AI agenda, with a specific focus on the AI Action Plan. The trio also explore Dean's thoughts on the recently released ChatGPT-5 and the ongoing geopolitical dynamics shaping America's domestic AI policy. Find Scaling Laws on the Lawfare website, and subscribe to never miss an episode. To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute. Support this show http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.
Brian Fuller, a member of the Product Policy Team at OpenAI, joins Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to analyze how large AI labs go about testing their models for compliance with internal requirements and various legal obligations. They also cover the ins and outs of what it means to work in product policy and what issues are front of mind for in-house policy teams amid substantial regulatory uncertainty. Find Scaling Laws on the Lawfare website, and subscribe to never miss an episode. To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute. Support this show http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.
AI: Electricity supremacy. Kevin Frazier, Civitas Institute
AI: Electricity supremacy. Kevin Frazier, Civitas Institute continued
SHOW SCHEDULE 8-7-25
Good evening. The show begins in the future, discussing the AI androids that will dominate the QSRs...
CBS EYE ON THE WORLD WITH JOHN BATCHELOR
FIRST HOUR
9-915 Android AI: How soon? #SCALAREPORT: Chris Riegel, CEO, Scala.com @Stratacache
915-930 Jobs: QSR all androids. #SCALAREPORT: Chris Riegel, CEO, Scala.com @Stratacache
930-945 Research endowments and Trump admin. Eric Jensen, Case Western University, Civitas
945-1000 Research endowments and Trump admin. Eric Jensen, Case Western University, Civitas continued
SECOND HOUR
10-1015 Putin softens. Anatol Lieven, Quincy Institute
1015-1030 Putin successor. Anatol Lieven, Quincy Institute
1030-1045 AI: Electricity supremacy. Kevin Frazier, Civitas Institute
1045-1100 AI: Electricity supremacy. Kevin Frazier, Civitas Institute continued
THIRD HOUR
1100-1115 #NewWorldReport: Brazil lawfare. Latin American Research Professor Evan Ellis, U.S. Army War College Strategic Studies Institute. @revanellis #NewWorldReportEllis
1115-1130 #NewWorldReport: Colombia lawfare. Latin American Research Professor Evan Ellis, U.S. Army War College Strategic Studies Institute. @revanellis #NewWorldReportEllis
1130-1145 #NewWorldReport: Mexico Sheinbaum. Latin American Research Professor Evan Ellis, U.S. Army War College Strategic Studies Institute. @revanellis #NewWorldReportEllis
1145-1200 #NewWorldReport: Argentina congress election. Latin American Research Professor Evan Ellis, U.S. Army War College Strategic Studies Institute. @revanellis #NewWorldReportEllis
FOURTH HOUR
12-1215 Fed choice. Veronique de Rugy
1215-1230 Canada: Shy vacationers. Conrad Black
1230-1245 Rubio and Caracas. Mary Anastasia O'Grady
1245-100 AM Hotel Mars: China wins. Rand Simberg, David Livingston
Preview: AI predictions: Kevin Frazier of UT School of Law explains that AI cannot yet predict the future. More later.
Renée DiResta, an Associate Research Professor at the McCourt School of Public Policy at Georgetown, joins Alan Rozenshtein and Kevin Frazier to take a look at the Trump Administration's Woke AI policies, as set forth by a recent EO and explored in the AI Action Plan. This episode unpacks the implications of prohibiting AI models that fail to pursue objective truth or that espouse "DEI" values. Hosted on Acast. See acast.com/privacy for more information.
Renée DiResta, an Associate Research Professor at the McCourt School of Public Policy at Georgetown and a Contributing Editor at Lawfare, and Alan Rozenshtein, an Associate Professor at Minnesota Law, Research Director at Lawfare, and, with the exception of today, co-host on the Scaling Laws podcast, join Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to take a look at the Trump Administration's Woke AI policies, as set forth by a recent EO and explored in the AI Action Plan. Read the Woke AI executive order. Read the AI Action Plan. Read "Generative Baseline Hell and the Regulation of Machine-Learning Foundation Models," by James Grimmelmann, Blake Reid, and Alan Rozenshtein. Find Scaling Laws on the Lawfare website, and subscribe to never miss an episode. To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute. Support this show http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.
In this episode of Scaling Laws, Kevin Frazier is joined by Sayash Kapoor, co-author of "AI Snake Oil," to explore the complexities of AI development and its societal implications. They delve into the skepticism surrounding AGI claims, the real bottlenecks in AI adoption, and the transformative potential of AI as a general-purpose technology. Kapoor shares insights on the challenges of integrating AI into various sectors, the importance of empirical research, and the evolving nature of work in the AI era. The conversation also touches on the role of policy in shaping AI's future and the need for a nuanced understanding of AI's capabilities and limitations. Hosted on Acast. See acast.com/privacy for more information.
This week, Scott sat down with his Lawfare colleagues Natalie Orpett, Kevin Frazier, and Tyler McBrien to talk through the week's big national security news stories, including:
“Feeding Frenzy.” The crisis in Gaza has reached a new, desperate stage. Months of a near total blockade on humanitarian assistance have created an imminent risk, if not a reality, of mass starvation among Gazan civilians. And it finally has the world—including President Donald Trump—taking notice and putting pressure on the Israeli government to change tack, including by threatening to recognize a Palestinian state. Now the Israeli government appears to be giving an inch, allowing what experts maintain is the bare minimum level of aid necessary to avoid famine into the country and even pursuing a few (largely symbolic) airlifts, while allowing other states to do the same. But how meaningful is this shift? And what could it mean for the trajectory of the broader conflict?
“Hey, It Beats an AI Inaction Plan.” After months of anticipation, the Trump administration finally released its “AI Action Plan” last week. And despite some serious reservations about its handling of “woke AI” and select other culture war issues, the plan has generally been met with cautious optimism. How should we feel about the AI Action Plan? And what does it tell us about the direction AI policy is headed?
“Pleas and No Thank You.” Earlier this month, the D.C. Circuit upheld then-Secretary of Defense Lloyd Austin's decision to nullify plea deals that several of the surviving 9/11 perpetrators had struck with those prosecuting them in the military commissions. How persuasive is the court's argument? And what does the decision mean for the future of the tribunals?
In object lessons, Kevin highlighted a fascinating breakthrough from University of Texas engineers who developed over 1,500 AI-designed materials that can make buildings cooler and more energy efficient—an innovation that, coming from Texas, proves that necessity really is the mother of invention. Tyler took us on a wild ride into the world of Professional Bull Riders with a piece from The Baffler exploring the sport's current state and terrifying risks. Scott brought a sobering but essential read from the Carnegie Endowment for International Peace about how synthetic imagery and disinformation are shaping the Iran-Israel conflict. And Natalie recommended “Drive Your Plow Over the Bones of the Dead,” by Olga Tokarczuk, assuring us it's not nearly as murder-y as it sounds.
Note: We will be on vacation next week but look forward to being back on August 13!
To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute. Support this show http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.
In this episode, join Kevin Frazier as he delves into the complex world of AI regulation with experts Lauren Wagner of the Abundance Institute and Andrew Freedman, Chief Strategy Officer at Fathom. As the AI community eagerly awaits the federal government's AI action plan, our guests explore the current regulatory landscape and the challenges of implementing effective governance with bills like SB 813. Innovative approaches are being proposed, including the role of independent verification organizations and the potential for public-private partnerships. Be sure to check out Fathom's Substack here: https://fathomai.substack.com/subscribe Hosted on Acast. See acast.com/privacy for more information.
Janet Egan, Senior Fellow with the Technology and National Security Program at the Center for a New American Security; Jessica Brandt, Senior Fellow for Technology and National Security at the Council on Foreign Relations; Neil Chilson, Head of AI Policy at the Abundance Institute; and Tim Fist, Director of Emerging Technology Policy at the Institute for Progress, join Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, for a special version of Scaling Laws.

This episode was recorded just hours after the release of the AI Action Plan. About 180 days ago, President Trump directed his administration to explore ways to achieve AI dominance, and his staff has attempted to do just that. This group of AI researchers dives into the plan's extensive recommendations and explores what may come next.

Find Scaling Laws on the Lawfare website, and subscribe to never miss an episode.

To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute. Support this show http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.
Lt. Gen. (ret.) Jack Shanahan joins Kevin Frazier to explore the nuanced landscape of AI in national security, challenging the prevalent "AI arms race" narrative. The discussion delves into the complexities of AI integration in defense, the cultural shifts required within the Department of Defense, and the critical role of public trust and a shared national vision. Tune in to understand how AI is reshaping military strategies and the broader implications for society. Hosted on Acast. See acast.com/privacy for more information.
In this Scaling Laws Academy "class," Kevin Frazier, the AI Innovation and Law Fellow at Texas Law and a Senior Editor at Lawfare, speaks with Eugene Volokh, a Senior Fellow at the Hoover Institution and longtime professor of law at UCLA, about libel in the AI context. The two dive into Volokh's paper, “Large Libel Models? Liability for AI Output.” Extra credit for those who give it a full read and explore some of the "homework" below:

“Beyond Section 230: Principles for AI Governance,” 138 Harv. L. Rev. 1657 (2025)
“When Artificial Agents Lie, Defame, and Defraud, Who Is to Blame?,” Stanford HAI (2021)

Find Scaling Laws on the Lawfare website, and subscribe to never miss an episode.

To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute. Support this show http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.
Ethan Mollick, Professor of Management and author of the “One Useful Thing” Substack, joins Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, and Alan Rozenshtein, Associate Professor at Minnesota Law and a Senior Editor at Lawfare, to analyze the latest research on AI adoption, specifically its use by professionals and educators. The trio also analyzes the trajectory of AI development and related, ongoing policy discussions.

More of Ethan Mollick's work: https://www.oneusefulthing.org/

Find Scaling Laws on the Lawfare website, and subscribe to never miss an episode.

To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute. Support this show http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.
Christina Knight, Machine Learning Safety and Evals Lead at Scale AI and former senior policy adviser at the U.S. AI Safety Institute (AISI), joins Kevin Frazier, the AI Innovation and Law Fellow at Texas Law and a Senior Editor at Lawfare, to break down what it means to test and evaluate frontier AI models, as well as the status of international efforts to coordinate those evaluations.

This recording took place before the administration changed the name of the U.S. AI Safety Institute to the U.S. Center for AI Standards and Innovation.

To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute. Support this show http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.
Josh Batson, a research scientist at Anthropic, joins Kevin Frazier, the AI Innovation and Law Fellow at Texas Law and a Senior Editor at Lawfare, to break down two research papers—“Mapping the Mind of a Large Language Model” and “Tracing the thoughts of a large language model”—that uncovered some important insights about how advanced generative AI models work. The two discuss those findings as well as the broader significance of interpretability and explainability research.

To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute. Support this show http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.
This week, Scott sat down with the AI-oriented Lawfare Senior Editors Alan Rozenshtein and Kevin Frazier to talk through the week's top AI-focused news stories, including:

“Oh Sure, Now He's Into Free Trade.” President Trump has repealed the Biden administration's rule setting strict limits on the diffusion of high-end AI technology, opening the door to the global transfer of the technologies powering U.S. AI development, including advanced chipsets. And we're already seeing results of that policy in a recent deal the president signed with the UAE that would work toward the transfer of advanced semiconductors. How should AI diffusion fit into the broader global strategy surrounding the AI industry in the United States? And what approach does the Trump administration seem inclined to take?

“Paving Over the Playing Field.” House Republicans recently included a provision in a House bill that would have preempted state efforts to legislate on and regulate the AI industry for a decade. Is this sort of federal preemption a prudent step given the broader competitive dynamics with China? Or does it go too far in insulating AI companies and users from accountability for their actions, particularly where they put the public interest and safety at risk?

“Speechless.” A federal district court in Florida has issued a notable opinion of first impression in a tragic case involving a teenager who committed suicide, allegedly as a result of encouragement from an AI bot powered by the company character.ai. Among other holdings, the judge concluded that the AI's output was not itself protected speech. Is this holding correct? And what impact will it have on the development of the AI industry?

In Object Lessons, the AI Guys went surprisingly analog. Alan recommended some good, ol'-fashioned, 19th-century imperial espionage with “The Great Game,” by Peter Hopkirk.
Kevin, meanwhile, is keeping an eye on a different kind of game: the NCAA Division I Baseball Championship, in which he's throwing up some Hook 'em Horns for Texas. And Scott is trying to “Economize” his time with The Economist's Espresso app, a quick, curated read that fits neatly into a busy morning.To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute.Support this show http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.
Page Hedley, Senior Advisor at the Forecasting Research Institute and co-author of the Not for Private Gain letter urging state attorneys general to stop OpenAI's planned restructuring, and Gad Weiss, the Wagner Fellow in Law & Business at NYU Law, join Kevin Frazier, AI Innovation and Law Fellow at Texas Law and Senior Editor at Lawfare, and Alan Rozenshtein, Associate Professor at Minnesota Law and Senior Editor at Lawfare, to analyze news of OpenAI once again modifying its corporate governance structure. The group breaks down the rationale for the proposed modification, the relevant underlying law, and the significance of corporate governance in shaping the direction of AI development.

To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute. Support this show http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.