Podcast appearances and mentions of Kevin Frazier

  • 105 podcasts
  • 289 episodes
  • 44m avg. duration
  • 5 weekly new episodes
  • Latest: Sep 12, 2025

POPULARITY

Popularity trend: 2017–2024



Latest podcast episodes about Kevin Frazier

The Lawfare Podcast
Scaling Laws: The State of AI Safety with Steven Adler

Sep 12, 2025 · 49:14


Steven Adler, former OpenAI safety researcher, author of Clear-Eyed AI on Substack, and independent AGI-readiness researcher, joins Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and Senior Fellow at Lawfare, to assess the current state of AI testing and evaluations. The two walk through Steven's views on industry efforts to improve model testing and what he thinks regulators ought to know and do when it comes to preventing AI harms. Thanks to Leo Wu for research assistance! Find Scaling Laws on the Lawfare website, and subscribe to never miss an episode. To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute. Support this show: http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.

Arbiters of Truth
AI and the Future of Work: Joshua Gans on Navigating Job Displacement

Sep 11, 2025 · 57:56


Joshua Gans, a professor at the University of Toronto and co-author of "Power and Prediction: The Disruptive Economics of Artificial Intelligence," joins Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to evaluate ongoing concerns about AI-induced job displacement, the likely consequences of various regulatory proposals on AI innovation, and how AI tools are already changing higher education. Select works by Gans include:
  • A Quest for AI Knowledge (https://www.nber.org/papers/w33566)
  • Regulating the Direction of Innovation (https://www.nber.org/papers/w32741)
  • How Learning About Harms Impacts the Optimal Rate of Artificial Intelligence Adoption (https://www.nber.org/papers/w32105)

Arbiters of Truth
The State of AI Safety with Steven Adler

Sep 9, 2025 · 47:23


Steven Adler, former OpenAI safety researcher, author of Clear-Eyed AI on Substack, and independent AGI-readiness researcher, joins Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law, to assess the current state of AI testing and evaluations. The two walk through Steven's views on industry efforts to improve model testing and what he thinks regulators ought to know and do when it comes to preventing AI harms. You can read Steven's Substack here: https://stevenadler.substack.com/ Thanks to Leo Wu for research assistance!

The Lawfare Podcast
Scaling Laws: Contrasting and Conflicting Efforts to Regulate Big Tech: EU v. U.S.

Sep 5, 2025 · 47:04


Anu Bradford, Professor at Columbia Law School, and Kate Klonick, Senior Editor at Lawfare and Associate Professor at St. John's University School of Law, join Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to assess the ongoing, contrasting, and, at times, conflicting regulatory approaches to Big Tech being pursued by the EU and U.S. The trio start with an assessment of the EU's use of the Brussels Effect, coined by Anu, to shape AI development. Next, they explore the U.S.'s increasingly interventionist industrial policy with respect to key sectors, especially tech. Read more:
  • Anu's op-ed in The New York Times
  • "The Impact of Regulation on Innovation," by Philippe Aghion, Antonin Bergeaud, and John Van Reenen
  • Draghi Report on the Future of European Competitiveness

RTP's Free Lunch Podcast
Law For Little Tech: Part 1 - Breaking Down the Little Tech Agenda

Sep 5, 2025 · 37:16 (transcription available)


Smaller, advanced technology entrepreneurs are increasingly shaping the U.S. innovation landscape through what some have called the “Little Tech Agenda.” But what exactly is this agenda, and how might it influence policy debates moving forward? America has long celebrated small-scale innovators, yet questions remain about how regulatory frameworks can support entrepreneurship without stifling growth. Some policymakers argue that new parameters are needed to govern emerging technologies, while others caution that overregulation could hinder the nation's competitive edge in the global power struggle. If “Little Tech” is critical to America's future, how far should the United States go to defend and promote its development? Join the Federalist Society's Regulatory Transparency Project and host Prof. Kevin Frazier for an in-depth discussion of the “Little Tech Agenda” with special guest Collin McCune, Head of Government Affairs at Andreessen Horowitz.

Arbiters of Truth
Contrasting and Conflicting Efforts to Regulate Big Tech: EU v. US

Sep 2, 2025 · 46:15


Anu Bradford, Professor at Columbia Law School, and Kate Klonick, Senior Editor at Lawfare and Associate Professor at St. John's University School of Law, join Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to assess the ongoing, contrasting, and, at times, conflicting regulatory approaches to Big Tech being pursued by the EU and US. The trio start with an assessment of the EU's use of the Brussels Effect, coined by Anu, to shape AI development. Next, they explore the US's increasingly interventionist industrial policy with respect to key sectors, especially tech. Read more:
  • Anu's op-ed in The New York Times
  • The Impact of Regulation on Innovation, by Philippe Aghion, Antonin Bergeaud, and John Van Reenen
  • Draghi Report on the Future of European Competitiveness

The Lawfare Podcast
Lawfare Archive: Richard Albert on Constitutional Resilience Amid Political Tumult

Aug 31, 2025 · 46:41


From August 23, 2024: Richard Albert, William Stamps Farish Professor in Law, Professor of Government, and Director of Constitutional Studies at the University of Texas at Austin, joins Kevin Frazier, Assistant Professor at St. Thomas University College of Law and a Tarbell Fellow at Lawfare, to conduct a comparative analysis of what helps constitutions withstand political pressures. Richard's extensive study of different means to amend constitutions shapes their conversation about whether the U.S. Constitution has become too rigid.

The Lawfare Podcast
Scaling Laws: Uncle Sam Buys In: Examining the Intel Deal 

Aug 29, 2025 · 48:22


Peter E. Harrell, Adjunct Senior Fellow at the Center for a New American Security, joins Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to examine the White House's announcement that it will take a 10% share of Intel. They dive into the policy rationale for the stake as well as its legality. Peter and Kevin also explore whether this is just the start of such deals given that President Trump recently declared that “there will be more transactions, if not in this industry then other industries.”

Arbiters of Truth
Uncle Sam Buys In: Examining the Intel Deal

Aug 28, 2025 · 47:34


Peter E. Harrell, Adjunct Senior Fellow at the Center for a New American Security, joins Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to examine the White House's announcement that it will take a 10% share of Intel. They dive into the policy rationale for the stake as well as its legality. Peter and Kevin also explore whether this is just the start of such deals given that President Trump recently declared that “there will be more transactions, if not in this industry then other industries.”

Arbiters of Truth
AI in the Classroom with MacKenzie Price, Alpha School co-founder, and Rebecca Winthrop, leader of the Brookings Global Task Force on AI in Education

Aug 26, 2025 · 80:43


MacKenzie Price, co-founder of Alpha School, and Rebecca Winthrop, a senior fellow and director of the Center for Universal Education at the Brookings Institution, join Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to review how AI is being integrated into the classroom at home and abroad. MacKenzie walks through the use of predictive AI in Alpha School classrooms. Rebecca provides a high-level summary of ongoing efforts around the globe to bring AI into the education pipeline. This conversation is particularly timely in the wake of the AI Action Plan, which built on the Trump administration's prior calls for greater use of AI from K to 12 and beyond.
Learn more about Alpha School here: https://www.nytimes.com/2025/07/27/us/politics/ai-alpha-school-austin-texas.html and here: https://www.astralcodexten.com/p/your-review-alpha-school
Learn about the Brookings Global Task Force on AI in Education here: https://www.brookings.edu/projects/brookings-global-task-force-on-ai-in-education/

The John Batchelor Show
AI: REGULATING LLM - KEVIN FRAZIER, CIVITAS INSTITUTE CONTINUED

Aug 21, 2025 · 3:30


AI: REGULATING LLM - KEVIN FRAZIER, CIVITAS INSTITUTE CONTINUED 1952

The John Batchelor Show
AI: REGULATING LLM - KEVIN FRAZIER, CIVITAS INSTITUTE

Aug 21, 2025 · 14:20


AI: REGULATING LLM - KEVIN FRAZIER, CIVITAS INSTITUTE 1941

Arbiters of Truth
The Open Questions Surrounding Open Source AI with Nathan Lambert and Keegan McBride

Aug 21, 2025 · 45:17


Keegan McBride, Senior Policy Advisor in Emerging Technology and Geopolitics at the Tony Blair Institute, and Nathan Lambert, a post-training lead at the Allen Institute for AI, join Alan Rozenshtein, Associate Professor at Minnesota Law and Research Director at Lawfare, and Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to explore the current state of open source AI model development and associated policy questions. The pivot to open source has been swift following initial concerns that the security risks posed by such models outweighed their benefits. What this transition means for the US AI ecosystem and the global AI competition is a topic worthy of analysis by these two experts.

The John Batchelor Show
Preview: AGI Regulation Colleague Kevin Frazier comments on the tentative state of LLM that needs time to develop before it is either judged or derided by lawmakers. More later.

Aug 20, 2025 · 1:52


Preview: AGI Regulation Colleague Kevin Frazier comments on the tentative state of LLM that needs time to develop before it is either judged or derided by lawmakers. More later.

The Lawfare Podcast
Scaling Laws: What's Next in AI Policy (and for Dean Ball)?

Aug 15, 2025 · 59:14


In this episode of Scaling Laws, Dean Ball, Senior Fellow at the Foundation for American Innovation and former Senior Policy Advisor for Artificial Intelligence and Emerging Technology at the White House Office of Science and Technology Policy, joins Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, and Alan Rozenshtein, Associate Professor at Minnesota Law and Research Director at Lawfare, to share an inside perspective of the Trump administration's AI agenda, with a specific focus on the AI Action Plan. The trio also explore Dean's thoughts on the recently released ChatGPT-5 and the ongoing geopolitical dynamics shaping America's domestic AI policy.

The Lawfare Podcast
Scaling Laws: What Keeps OpenAI's Product Policy Staff Up at Night? A Conversation with Brian Fuller

Aug 8, 2025 · 51:16


Brian Fuller, a member of the Product Policy Team at OpenAI, joins Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to analyze how large AI labs go about testing their models for compliance with internal requirements and various legal obligations. They also cover the ins and outs of what it means to work in product policy and what issues are front of mind for in-house policy teams amid substantial regulatory uncertainty.

The John Batchelor Show
AI: Electricity supremacy. Kevin Frazier, Civitas Institute

Aug 8, 2025 · 9:44


AI: Electricity supremacy. Kevin Frazier, Civitas Institute JUNE 1957

The John Batchelor Show
AI: Electricity supremacy. Kevin Frazier, Civitas Institute continued

Aug 8, 2025 · 9:56


AI: Electricity supremacy. Kevin Frazier, Civitas Institute continued JANUARY 1959

The John Batchelor Show
SHOW SCHEDULE 8-7-25 Good evening. The show begins in the future, discussing the AI androids that will dominate the QSRs...

Aug 8, 2025 · 5:34


SHOW SCHEDULE 8-7-25. Good evening. The show begins in the future, discussing the AI androids that will dominate the QSRs... NOVEMBER 1957 CBS EYE ON THE WORLD WITH JOHN BATCHELOR

FIRST HOUR
9-915 Android AI: How soon? #SCALAREPORT: Chris Riegel, CEO, Scala.com @Stratacache
915-930 Jobs: QSR all androids. #SCALAREPORT: Chris Riegel, CEO, Scala.com @Stratacache
930-945 Research endowments and Trump admin. Eric Jensen, Case Western University, Civitas
945-1000 Research endowments and Trump admin. Eric Jensen, Case Western University, Civitas (continued)

SECOND HOUR
10-1015 Putin softens. Anatol Lieven, Quincy Institute
1015-1030 Putin successor. Anatol Lieven, Quincy Institute
1030-1045 AI: Electricity supremacy. Kevin Frazier, Civitas Institute
1045-1100 AI: Electricity supremacy. Kevin Frazier, Civitas Institute (continued)

THIRD HOUR
1100-1115 #NewWorldReport: Brazil lawfare. Latin American Research Professor Evan Ellis, U.S. Army War College Strategic Studies Institute. @revanellis #NewWorldReportEllis
1115-1130 #NewWorldReport: Colombia lawfare. Latin American Research Professor Evan Ellis, U.S. Army War College Strategic Studies Institute. @revanellis #NewWorldReportEllis
1130-1145 #NewWorldReport: Mexico Sheinbaum. Latin American Research Professor Evan Ellis, U.S. Army War College Strategic Studies Institute. @revanellis #NewWorldReportEllis
1145-1200 #NewWorldReport: Argentina congress election. Latin American Research Professor Evan Ellis, U.S. Army War College Strategic Studies Institute. @revanellis #NewWorldReportEllis

FOURTH HOUR
12-1215 Fed choice. Veronique de Rugy
1215-1230 Canada: Shy vacationers. Conrad Black
1230-1245 Rubio and Caracas. Mary Anastasia O'Grady
1245-100 AM Hotel Mars: China wins. Rand Simberg, David Livingston

The John Batchelor Show
Preview: AI predictions: Kevin Frazier of UT School of Law explains that AI cannot yet predict the future. More later.

Aug 7, 2025 · 1:18


Preview: AI predictions: Kevin Frazier of UT School of Law explains that AI cannot yet predict the future. More later. FEBRUARY 1962

Arbiters of Truth
Because of Woke: Renée DiResta and Alan Rozenshtein on the ‘Woke AI' Executive Order

Aug 5, 2025 · 46:48


Renée DiResta, an Associate Research Professor at the McCourt School of Public Policy at Georgetown, joins Alan Rozenshtein and Kevin Frazier to take a look at the Trump Administration's Woke AI policies, as set forth by a recent EO and explored in the AI Action Plan. This episode unpacks the implications of prohibiting AI models that fail to pursue objective truth and espouse "DEI" values.

The Lawfare Podcast
Scaling Laws: Renée DiResta and Alan Rozenshtein on the ‘Woke AI' Executive Order

Aug 1, 2025 · 46:48


Renée DiResta, an Associate Research Professor at the McCourt School of Public Policy at Georgetown and a Contributing Editor at Lawfare, and Alan Rozenshtein, an Associate Professor at Minnesota Law, Research Director at Lawfare, and, with the exception of today, co-host on the Scaling Laws podcast, join Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to take a look at the Trump Administration's Woke AI policies, as set forth by a recent EO and explored in the AI Action Plan. Read more:
  • The Woke AI executive order
  • The AI Action Plan
  • "Generative Baseline Hell and the Regulation of Machine-Learning Foundation Models," by James Grimmelmann, Blake Reid, and Alan Rozenshtein

Arbiters of Truth
Moving the AGI Goal Posts: AI Skepticism with Sayash Kapoor

Jul 31, 2025 · 58:32


In this episode of Scaling Laws, Kevin Frazier is joined by Sayash Kapoor, co-author of "AI Snake Oil," to explore the complexities of AI development and its societal implications. They delve into the skepticism surrounding AGI claims, the real bottlenecks in AI adoption, and the transformative potential of AI as a general-purpose technology. Kapoor shares insights on the challenges of integrating AI into various sectors, the importance of empirical research, and the evolving nature of work in the AI era. The conversation also touches on the role of policy in shaping AI's future and the need for a nuanced understanding of AI's capabilities and limitations.

The Lawfare Podcast
Rational Security: The “SkrillEx Parte” Edition

Jul 30, 2025 · 74:03


This week, Scott sat down with his Lawfare colleagues Natalie Orpett, Kevin Frazier, and Tyler McBrien to talk through the week's big national security news stories, including:

“Feeding Frenzy.” The crisis in Gaza has reached a new, desperate stage. Months of a near total blockade on humanitarian assistance has created an imminent risk, if not a reality, of mass starvation among Gazan civilians. And it finally has the world—including President Donald Trump—taking notice and putting pressure on the Israeli government to change tack, including by threatening to recognize a Palestinian state. Now the Israeli government appears to be giving an inch, allowing what experts maintain is the bare minimum level of aid necessary to avoid famine into the country and even pursuing a few (largely symbolic) airlifts, while allowing other states to do the same. But how meaningful is this shift? And what could it mean for the trajectory of the broader conflict?

“Hey, It Beats an AI Inaction Plan.” After months of anticipation, the Trump administration finally released its “AI Action Plan” last week. And despite some serious reservations about its handling of “woke AI” and select other culture war issues, the plan has generally been met with cautious optimism. How should we feel about the AI Action Plan? And what does it tell us about the direction AI policy is headed?

“Pleas and No Thank You.” Earlier this month, the D.C. Circuit upheld then-Secretary of Defense Lloyd Austin's decision to nullify plea deals that several of the surviving 9/11 perpetrators had struck with those prosecuting them in the military commissions. How persuasive is the court's argument? And what does the decision mean for the future of the tribunals?

In object lessons, Kevin highlighted a fascinating breakthrough from University of Texas engineers who developed over 1,500 AI-designed materials that can make buildings cooler and more energy efficient—an innovation that, coming from Texas, proves that necessity really is the mother of invention. Tyler took us on a wild ride into the world of Professional Bull Riders with a piece from The Baffler exploring the sport's current state and terrifying risks. Scott brought a sobering but essential read from the Carnegie Endowment for International Peace about how synthetic imagery and disinformation are shaping the Iran-Israel conflict. And Natalie recommended “Drive Your Plow Over the Bones of the Dead,” by Olga Tokarczuk, assuring us it's not nearly as murder-y as it sounds.

Note: We will be on vacation next week but look forward to being back on August 13!

Rational Security
The “SkrillEx Parte” Edition

Jul 30, 2025 · 74:03


This week, Scott sat down with his Lawfare colleagues Natalie Orpett, Kevin Frazier, and Tyler McBrien to talk through the week's big national security news stories, including:

“Feeding Frenzy.” The crisis in Gaza has reached a new, desperate stage. Months of a near total blockade on humanitarian assistance has created an imminent risk, if not a reality, of mass starvation among Gazan civilians. And it finally has the world—including President Donald Trump—taking notice and putting pressure on the Israeli government to change tack, including by threatening to recognize a Palestinian state. Now the Israeli government appears to be giving an inch, allowing what experts maintain is the bare minimum level of aid necessary to avoid famine into the country and even pursuing a few (largely symbolic) airlifts, while allowing other states to do the same. But how meaningful is this shift? And what could it mean for the trajectory of the broader conflict?

“Hey, It Beats an AI Inaction Plan.” After months of anticipation, the Trump administration finally released its “AI Action Plan” last week. And despite some serious reservations about its handling of “woke AI” and select other culture war issues, the plan has generally been met with cautious optimism. How should we feel about the AI Action Plan? And what does it tell us about the direction AI policy is headed?

“Pleas and No Thank You.” Earlier this month, the D.C. Circuit upheld then-Secretary of Defense Lloyd Austin's decision to nullify plea deals that several of the surviving 9/11 perpetrators had struck with those prosecuting them in the military commissions. How persuasive is the court's argument? And what does the decision mean for the future of the tribunals?

In object lessons, Kevin highlighted a fascinating breakthrough from University of Texas engineers who developed over 1,500 AI-designed materials that can make buildings cooler and more energy efficient—an innovation that, coming from Texas, proves that necessity really is the mother of invention. Tyler took us on a wild ride into the world of Professional Bull Riders with a piece from The Baffler exploring the sport's current state and terrifying risks. Scott brought a sobering but essential read from the Carnegie Endowment for International Peace about how synthetic imagery and disinformation are shaping the Iran-Israel conflict. And Natalie recommended “Drive Your Plow Over the Bones of the Dead,” by Olga Tokarczuk, assuring us it's not nearly as murder-y as it sounds.

Note: We will be on vacation next week but look forward to being back on August 13!

Arbiters of Truth
A New AI Regulatory Regime? SB 813 with Lauren Wagner and Andrew Freedman

Jul 30, 2025 · 55:30


In this episode, join Kevin Frazier as he delves into the complex world of AI regulation with experts Lauren Wagner of the Abundance Institute and Andrew Freedman, Chief Strategy Officer at Fathom. As the AI community eagerly awaits the federal government's AI action plan, our guests explore the current regulatory landscape and the challenges of implementing effective governance with bills like SB 813. Innovative approaches are being proposed, including the role of independent verification organizations and the potential for public-private partnerships. Be sure to check out Fathom's Substack here: https://fathomai.substack.com/

The Lawfare Podcast
Scaling Laws: Rapid Response to the AI Action Plan

Jul 25, 2025 · 64:09


Janet Egan, Senior Fellow with the Technology and National Security Program at the Center for a New American Security; Jessica Brandt, Senior Fellow for Technology and National Security at the Council on Foreign Relations; Neil Chilson, Head of AI Policy at Abundance Institute; and Tim Fist, Director of Emerging Technology Policy at the Institute for Progress, join Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, for a special version of Scaling Laws. This episode was recorded just hours after the release of the AI Action Plan. About 180 days ago, President Trump directed his administration to explore ways to achieve AI dominance. His staff has attempted to do just that. This group of AI researchers dives into the plan's extensive recommendations and explores what may come next.

Arbiters of Truth
AI Action Plan: Janet Egan, Jessica Brandt, Neil Chilson, and Tim Fist

Jul 24, 2025 · 63:21


Janet Egan, Senior Fellow with the Technology and National Security Program at the Center for a New American Security; Jessica Brandt, Senior Fellow for Technology and National Security at the Council on Foreign Relations; Neil Chilson, Head of AI Policy at Abundance Institute; and Tim Fist, Director of Emerging Technology Policy at the Institute for Progress, join Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, for a special version of Scaling Laws. This episode was recorded just hours after the release of the AI Action Plan. About 180 days ago, President Trump directed his administration to explore ways to achieve AI dominance. His staff has attempted to do just that. This group of AI researchers dives into the plan's extensive recommendations and explores what may come next.

Arbiters of Truth
Lt. Gen Jack Shanahan: Defense's AI Integration

Jul 22, 2025 · 55:45


Lt. Gen. (ret.) Jack Shanahan joins Kevin Frazier to explore the nuanced landscape of AI in national security, challenging the prevalent "AI arms race" narrative. The discussion delves into the complexities of AI integration in defense, the cultural shifts required within the Department of Defense, and the critical role of public trust and shared national vision. Tune in to understand how AI is reshaping military strategies and the broader implications for society.

The Lawfare Podcast
Scaling Laws: Eugene Volokh on Libel and AI

Jul 18, 2025 · 59:17


In this Scaling Laws Academy "class," Kevin Frazier, the AI Innovation and Law Fellow at Texas Law and a Senior Editor at Lawfare, speaks with Eugene Volokh, a Senior Fellow at the Hoover Institution and long-time professor of law at UCLA, on libel in the AI context. The two dive into Volokh's paper, “Large Libel Models? Liability for AI Output.” Extra credit for those who give it a full read and explore some of the "homework" below:
  • "Beyond Section 230: Principles for AI Governance," 138 Harv. L. Rev. 1657 (2025)
  • "When Artificial Agents Lie, Defame, and Defraud, Who Is to Blame?," Stanford HAI (2021)

Arbiters of Truth
Eugene Volokh: Navigating Libel and Liability in the AI Age

Arbiters of Truth

Play Episode Listen Later Jul 17, 2025 58:29


Kevin Frazier sits down with Eugene Volokh, a senior fellow at the Hoover Institution and UCLA law professor, to explore the complexities of libel in the age of AI. Discover how AI-generated content challenges traditional legal frameworks and the implications for platforms under Section 230. This episode is a must-listen for anyone interested in the evolving landscape of AI and law.

Cross & Gavel Audio
194. Building an AI-Savvy Workforce — Kevin Frazier

Cross & Gavel Audio

Play Episode Listen Later Jul 16, 2025 38:06


My conversation today is on the necessity of adaptive leadership in the coming wave that is artificial intelligence. My guest is Kevin Frazier, the newly minted AI Innovation and Law Fellow at The University of Texas School of Law. His article (here) in Law & Liberty is called Building an AI-Savvy Workforce. His new podcast, Scaling Laws (here), is excellent. Find his other work at Lawfare (here). Cross & Gavel is a production of CHRISTIAN LEGAL SOCIETY. The episode was produced by Josh Deng, with music from Vexento.

Arbiters of Truth
Cass Madison and Zach Boyd: State Level AI Regulation

Arbiters of Truth

Play Episode Listen Later Jul 15, 2025 41:32


Cass Madison, the Executive Director of the Center for Civic Futures, and Zach Boyd, Director of the AI Policy Office at the State of Utah, join Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to discuss how state governments are adjusting to the Age of AI. This conversation explores Cass's work to organize the increasing number of state officials tasked with thinking about AI adoption and regulation as well as Zach's experience leading one of the most innovative state AI offices.

The Lawfare Podcast
Scaling Laws: Ethan Mollick: Navigating the Uncertainty of AI Development

The Lawfare Podcast

Play Episode Listen Later Jul 10, 2025 66:21


Ethan Mollick, Professor of Management and author of the “One Useful Thing” Substack, joins Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, and Alan Rozenshtein, Associate Professor at Minnesota Law and a Senior Editor at Lawfare, to analyze the latest research in AI adoption, specifically its use by professionals and educators. The trio also analyze the trajectory of AI development and related, ongoing policy discussions.
More of Ethan Mollick's work: https://www.oneusefulthing.org/
Find Scaling Laws on the Lawfare website, and subscribe to never miss an episode.

Arbiters of Truth
The AI Moratorium Goes Down in Flames

Arbiters of Truth

Play Episode Listen Later Jul 2, 2025 55:32


On the inaugural episode of Scaling Laws, co-hosts Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, and Alan Rozenshtein, Professor at Minnesota Law and Research Director at Lawfare, speak with Adam Thierer, a senior fellow for the Technology and Innovation team at the R Street Institute, and Helen Toner, the Director of Strategy and Foundational Research Grants at Georgetown's Center for Security and Emerging Technology (CSET). They discuss the recent overwhelming defeat in the Senate of a proposed moratorium on state and local regulation of artificial intelligence. The conversation explores the moratorium's journey from its inclusion in a House bill to its ultimate failure, examining the procedural hurdles, the confusing legislative language, and the political maneuvering that led to its demise by a 99-to-1 vote. The group discuss the future of U.S. AI governance, covering the Republican party's fragmentation on tech policy and whether Congress's failure to act is a sign of it being broken or a deliberate policy choice.
Mentioned in this episode:
“The Continuing Tech Policy Realignment on the Right” by Adam Thierer in Medium
“1,000 AI Bills: Time for Congress to Get Serious About Preemption” by Kevin Frazier and Adam Thierer in Lawfare
“Congress Should Preempt State AI Safety Legislation” by Dean W. Ball and Alan Z. Rozenshtein in Lawfare
“The Coming Techlash Could Kill AI Innovation Before It Helps Anyone” by Kevin Frazier in Reason
“Unresolved debates about the future of AI” by Helen Toner in Rising Tide

The Lawfare Podcast
Lawfare Daily: Christina Knight on AI Safety Institutes

The Lawfare Podcast

Play Episode Listen Later Jun 11, 2025 38:53


Christina Knight, Machine Learning Safety and Evals Lead at Scale AI and former senior policy adviser at the U.S. AI Safety Institute (AISI), joins Kevin Frazier, the AI Innovation and Law Fellow at Texas Law and a Senior Editor at Lawfare, to break down what it means to test and evaluate frontier AI models as well as the status of international efforts to coordinate that work. This recording took place before the administration changed the name of the U.S. AI Safety Institute to the U.S. Center for AI Standards and Innovation.

The Lawfare Podcast
Lawfare Daily: Josh Batson on Understanding How and Why AI Works

The Lawfare Podcast

Play Episode Listen Later May 30, 2025 41:15


Josh Batson, a research scientist at Anthropic, joins Kevin Frazier, AI Innovation and Law Fellow at Texas Law and Senior Editor at Lawfare, to break down two research papers—“Mapping the Mind of a Large Language Model” and “Tracing the thoughts of a large language model”—that uncovered some important insights about how advanced generative AI models work. The two discuss those findings as well as the broader significance of interpretability and explainability research.

The Lawfare Podcast
Rational Security: The “Hi, Robot!” Edition

The Lawfare Podcast

Play Episode Listen Later May 28, 2025 83:33


This week, Scott sat down with the AI-oriented Lawfare Senior Editors Alan Rozenshtein and Kevin Frazier to talk through the week's top AI-focused news stories, including:“Oh Sure, Now He's Into Free Trade.” President Trump has repealed the Biden administration's rule setting strict limits on the diffusion of high-end AI technology, opening the door to the global transfer of the technologies powering U.S. AI development, including advanced chipsets. And we're already seeing results of that policy in a recent deal the president signed with the UAE that would work toward the transfer of advanced semiconductors. How should AI diffusion fit into the broader global strategy surrounding the AI industry in the United States? And what approach does the Trump administration seem inclined to take?“Paving Over the Playing Field.” House Republicans recently included a provision in a House bill that would have preempted state efforts to legislate on and regulate the AI industry for a decade. Is this sort of federal preemption a prudent step given the broader competitive dynamics with China? Or does it go too far in insulating AI companies and users from accountability for their actions, particularly where they put the public interest and safety at risk?“Speechless.” A federal district court in Florida has issued a notable opinion of first impression in a tragic case involving a teenager who committed suicide, allegedly as a result of encouragement from an AI bot powered by the company character.ai. Among other holdings, the judge concluded that the AI's output was not itself protected speech. Is this holding correct? And what impact will it have on the development of the AI industry?In Object Lessons, the AI Guys went surprisingly analog. Alan recommended some good, ol' fashioned, 19th-century imperial espionage with “The Great Game,” by Peter Hopkirk. 
Kevin, meanwhile, is keeping an eye on a different kind of game: the NCAA Division I Baseball Championship, in which he's throwing up some Hook 'em Horns for Texas. And Scott is trying to “Economize” his time with The Economist's Espresso app, a quick, curated read that fits neatly into a busy morning.

Rational Security
The “Hi, Robot!” Edition

Rational Security

Play Episode Listen Later May 28, 2025 83:33


This week, Scott sat down with the AI-oriented Lawfare Senior Editors Alan Rozenshtein and Kevin Frazier to talk through the week's top AI-focused news stories, including:“Oh Sure, Now He's Into Free Trade.” President Trump has repealed the Biden administration's rule setting strict limits on the diffusion of high-end AI technology, opening the door to the global transfer of the technologies powering U.S. AI development, including advanced chipsets. And we're already seeing results of that policy in a recent deal the president signed with the UAE that would work toward the transfer of advanced semiconductors. How should AI diffusion fit into the broader global strategy surrounding the AI industry in the United States? And what approach does the Trump administration seem inclined to take?“Paving Over the Playing Field.” House Republicans recently included a provision in a House bill that would have preempted state efforts to legislate on and regulate the AI industry for a decade. Is this sort of federal preemption a prudent step given the broader competitive dynamics with China? Or does it go too far in insulating AI companies and users from accountability for their actions, particularly where they put the public interest and safety at risk?“Speechless.” A federal district court in Florida has issued a notable opinion of first impression in a tragic case involving a teenager who committed suicide, allegedly as a result of encouragement from an AI bot powered by the company character.ai. Among other holdings, the judge concluded that the AI's output was not itself protected speech. Is this holding correct? And what impact will it have on the development of the AI industry?In Object Lessons, the AI Guys went surprisingly analog. Alan recommended some good, ol' fashioned, 19th-century imperial espionage with “The Great Game,” by Peter Hopkirk. 
Kevin, meanwhile, is keeping an eye on a different kind of game: the NCAA Division I Baseball Championship, in which he's throwing up some Hook 'em Horns for Texas. And Scott is trying to “Economize” his time with The Economist's Espresso app, a quick, curated read that fits neatly into a busy morning.

RTP's Free Lunch Podcast
Tech Roundup Episode 27 - AI on the Senate Floor: Is it Time for a Moratorium?

RTP's Free Lunch Podcast

Play Episode Listen Later May 28, 2025 40:42


President Trump's budget bill, having recently passed the House of Representatives, is headed for the Senate with a proposed 10-year moratorium on state-level AI regulation. How should lawmakers approach this rapidly developing technology without stalling U.S. progress in the AI "arms race," while still prioritizing consumers' data privacy and online safety? Dr. Scott Babwah Brennen, Kevin Frazier, and Adam Thierer join the RTP Fourth Branch Podcast to discuss and debate AI regulation, innovation, and preemption.

The Lawfare Podcast
Lawfare Daily: Page Hedley and Gad Weiss on OpenAI's Latest Corporate Governance Pivot

The Lawfare Podcast

Play Episode Listen Later May 22, 2025 47:02


Page Hedley, Senior Advisor at Forecasting Research Institute and co-author of the Not for Private Gain letter urging state attorneys general to stop OpenAI's planned restructuring, and Gad Weiss, the Wagner Fellow in Law & Business at NYU Law, join Kevin Frazier, AI Innovation and Law Fellow at Texas Law and Senior Editor at Lawfare, and Alan Rozenshtein, Associate Professor at Minnesota Law and Senior Editor at Lawfare, to analyze news of OpenAI once again modifying its corporate governance structure. The group break down the rationale for the proposed modification, the relevant underlying law, and the significance of corporate governance in shaping the direction of AI development.

The Lawfare Podcast
Lawfare Daily: Cullen O'Keefe on the Impending Wave of AI Agents

The Lawfare Podcast

Play Episode Listen Later May 14, 2025 37:52


Cullen O'Keefe, Research Director at the Institute for Law and AI, joins Kevin Frazier, AI Innovation and Law Fellow at Texas Law and a Contributing Editor at Lawfare, and Renée DiResta, Associate Research Professor at the McCourt School of Public Policy at Georgetown and a Contributing Editor at Lawfare, to discuss a novel AI governance framework. They dive into a paper he co-authored on the concept of "Law-Following AI" or LFAI. That paper explores a near-term future. Imagine AI systems capable of tackling complex computer-based tasks with expert human-level skill. The potential for economic growth, scientific discovery, and improving public services is immense. But how do we ensure these powerful tools operate safely and align with our societal values? That's the question at the core of Cullen's paper and this podcast.

The Lawfare Podcast
Lawfare Daily: Ben Brooks on the Rise of Open Source AI

The Lawfare Podcast

Play Episode Listen Later May 9, 2025 44:36


Ben Brooks, a fellow at Harvard's Berkman Klein Center and former head of public policy for Stability AI, joins Kevin Frazier, AI Innovation and Law Fellow at Texas Law and Contributing Editor at Lawfare, to discuss a sudden and significant shift toward open-sourcing leading AI models and the ramifications of that pivot for AI governance at home and abroad. Ben and Kevin specifically review OpenAI's announced plans to release a new open-weights model.
Coverage of OpenAI announcement: https://techcrunch.com/2025/03/31/openai-plans-to-release-a-new-open-language-model-in-the-coming-months/

The Lawfare Podcast
Lawfare Daily: Andrew Bakaj on the Whistleblowing and DOGE's Activities at the NLRB

The Lawfare Podcast

Play Episode Listen Later Apr 30, 2025 34:07


Andrew Bakaj, Chief Legal Counsel at Whistleblower Aid, joins Kevin Frazier, AI Innovation and Law Fellow at Texas Law and Contributing Editor at Lawfare, to discuss a declaration by National Labor Relations Board employee Daniel Berulis that DOGE facilitated the exfiltration of potentially sensitive information to external sources. The two also analyze the merits of whistleblower protections more generally.
Read more about the declaration here: https://www.npr.org/2025/04/15/nx-s1-5355896/doge-nlrb-elon-musk-spacex-security
For a copy of the letter penned by several members of Congress, go here: https://www.npr.org/2025/04/24/nx-s1-5375118/congress-doge-nlrb-whistleblower

The Lawfare Podcast
Rational Security: “The More You DOGE” Edition

The Lawfare Podcast

Play Episode Listen Later Apr 23, 2025 82:19


This week, Scott sat down with his Lawfare colleagues Anna Bower, Tyler McBrien, and Kevin Frazier to talk through the week's big national security news, including:“Aliens vs. Predators.” Despite forceful legal pushback—including by the U.S. Supreme Court—the Trump administration is working hard to continue its campaign to remove foreign aliens it accuses of pursuing a “predatory incursion” from the country using the Alien Enemies Act. How far will it go? And to what extent can the courts (or anyone else) stop them?“Aye Aye Robot.” Both the Biden and Trump administrations were fans of artificial intelligence (AI) and set out policies to incorporate it into government decision-making. But while the Biden administration focused much of its efforts on guardrails, the Trump administration has increasingly torn them down as part of a broader push to incorporate the nascent technology into government decision-making. What are the risks and potential benefits of this sort of government by AI? “For Pete's Sake.” Beleaguered Secretary of Defense Pete Hegseth is more beleaguered than ever this week, after reports that, in addition to inadvertently sharing classified secrets with Atlantic reporter Jeffrey Goldberg, he also passed them to his wife, brother, and personal lawyer on another Signal thread. Meanwhile, a former adviser (and established Trump loyalist) went public with allegations that Hegseth's management has led to chaos at the Defense Department and called for his resignation. Will this be enough for the Trump administration to cut bait and run? Or does his support in the MAGAsphere simply run too deep?In object lessons, Tyler, fresh from biking adventures abroad, hyped the routes, photos, and resources on bikepacking.com, if physical exertion is your idea of relaxation. Anna, finding other ways to relax, came to the defense of The Big Short in helping to soothe her anxiety amid more current market upheaval. 
Doubling down on the “no relaxation without tension” theme, Scott's outie binge-watched Severance while on vacation. And Kevin, very on-brand, was quick to bring us a feel-good story of a new community partnership to support AI skill-building in Austin-based nonprofits.

Rational Security
“The More You DOGE” Edition

Rational Security

Play Episode Listen Later Apr 23, 2025 82:19


This week, Scott sat down with his Lawfare colleagues Anna Bower, Tyler McBrien, and Kevin Frazier to talk through the week's big national security news, including:“Aliens vs. Predators.” Despite forceful legal pushback—including by the U.S. Supreme Court—the Trump administration is working hard to continue its campaign to remove foreign aliens it accuses of pursuing a “predatory incursion” from the country using the Alien Enemies Act. How far will it go? And to what extent can the courts (or anyone else) stop them?“Aye Aye Robot.” Both the Biden and Trump administrations were fans of artificial intelligence (AI) and set out policies to incorporate it into government decision-making. But while the Biden administration focused much of its efforts on guardrails, the Trump administration has increasingly torn them down as part of a broader push to incorporate the nascent technology into government decision-making. What are the risks and potential benefits of this sort of government by AI? “For Pete's Sake.” Beleaguered Secretary of Defense Pete Hegseth is more beleaguered than ever this week, after reports that, in addition to inadvertently sharing classified secrets with Atlantic reporter Jeffrey Goldberg, he also passed them to his wife, brother, and personal lawyer on another Signal thread. Meanwhile, a former adviser (and established Trump loyalist) went public with allegations that Hegseth's management has led to chaos at the Defense Department and called for his resignation. Will this be enough for the Trump administration to cut bait and run? Or does his support in the MAGAsphere simply run too deep?In object lessons, Tyler, fresh from biking adventures abroad, hyped the routes, photos, and resources on bikepacking.com, if physical exertion is your idea of relaxation. Anna, finding other ways to relax, came to the defense of The Big Short in helping to soothe her anxiety amid more current market upheaval. 
Doubling down on the “no relaxation without tension” theme, Scott's outie binge-watched Severance while on vacation. And Kevin, very on-brand, was quick to bring us a feel-good story of a new community partnership to support AI skill-building in Austin-based nonprofits.

The Lawfare Podcast
Lawfare Daily: Chris Hughes on His New Book, ‘Marketcrafters'

The Lawfare Podcast

Play Episode Listen Later Apr 22, 2025 41:44


Chris Hughes, author of “Marketcrafters” and co-founder of the Economic Security Project, joins Kevin Frazier, AI Innovation and Law Fellow at Texas Law and Contributing Editor at Lawfare, to discuss his book and its implications at a time of immense economic uncertainty and political upheaval. The duo explore several important historical case studies that Chris suggests may have lessons worth heeding in the ongoing struggle to direct markets toward the public good.

The Lawfare Podcast
Lawfare Daily: Daniel Kokotajlo and Eli Lifland on Their AI 2027 Report

The Lawfare Podcast

Play Episode Listen Later Apr 15, 2025 37:41


Daniel Kokotajlo, former OpenAI researcher and Executive Director of the AI Futures Project, and Eli Lifland, a researcher with the AI Futures Project, join Kevin Frazier, AI Innovation and Law Fellow at Texas Law and Contributing Editor at Lawfare, to discuss what AI may look like in 2027. The trio explore a report co-authored by Daniel that dives into the hypothetical evolution of AI over the coming years. This novel report has already elicited a lot of attention with some reviewers celebrating its creativity and others questioning its methodology. Daniel and Eli tackle that feedback and help explain the report's startling conclusion—that superhuman AI will develop within the next decade.

The Lawfare Podcast
Lawfare Daily: Hillary Hartley and David Eaves on 18F, Its Origin, Legacy, and Lessons

The Lawfare Podcast

Play Episode Listen Later Apr 4, 2025 42:10


Hillary Hartley, the former Chief Digital Officer of Ontario and former Co-Founder and Deputy Executive Director at 18F, and David Eaves, Associate Professor of Digital Government and Co-Deputy Director of the Institute for Innovation and Public Purpose at University College London, join Kevin Frazier, AI Innovation and Law Fellow at Texas Law and Contributing Editor at Lawfare, to discuss the recent closure of 18F, a digital unit within the GSA focused on updating and enhancing government technological systems and public-facing digital services. Hillary and David also published a recent Lawfare article on this topic, “Learning from the Legacy of 18F.”

The Lawfare Podcast
Lawfare Daily: Adam Thierer on the AI Regulatory Landscape

The Lawfare Podcast

Play Episode Listen Later Apr 1, 2025 38:03


Adam Thierer, Senior Fellow for the Technology & Innovation team at R Street, joins Kevin Frazier, the AI Innovation and Law Fellow at the UT Austin School of Law and a Contributing Editor at Lawfare, to review public comments submitted in response to the Office of Science and Technology Policy's Request for Information on the AI Action Plan. The pair summarize their own comments and explore those submitted by major labs and civil society organizations. They also dive into recent developments in the AI regulatory landscape, including a major veto by Governor Youngkin in Virginia.
Readings discussed:
Kevin on Vance's America First, America Only Approach to AI
Keegan and Adam on AI Safety Treatises
Kevin on Proposed Firings at NIST
Dean and Alan on Preemption