Anton Korinek, a professor of economics at the University of Virginia and newly appointed economist to Anthropic's Economic Advisory Council; Nathan Goldschlag, Director of Research at the Economic Innovation Group; and Bharat Chandar, Economist at the Stanford Digital Economy Lab, join Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to sort through the myths, truths, and ambiguities that shape the important debate around the effects of AI on jobs. They discuss what happens when machines begin to outperform humans in virtually every computer-based task, how that transition might unfold, and what policy interventions could ensure broadly shared prosperity.

These three are prolific researchers. Give them a follow to find their latest works:
Anton: @akorinek on X
Nathan: @ngoldschlag and @InnovateEconomy on X
Bharat: X: @BharatKChandar, LinkedIn: @bharatchandar, Substack: @bharatchandar

Find Scaling Laws on the Lawfare website, and subscribe to never miss an episode.

To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute.

Support this show http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.
Regulating AI and Protecting Children. Kevin Frazier (Law School Fellow at the University of Texas at Austin) addresses the growing concern over AI chatbots following tragedies, noting that while only 1.9% of ChatGPT conversations relate to "relationships," this fraction still warrants significant attention. He criticizes early state legislative responses, such as Illinois banning AI therapy tools, arguing that such actions risk denying mental health support to children who cannot access human therapists. Frazier advocates against imposing restrictive statutory law on the rapidly evolving technology. Instead, he recommends implementing a voluntary, standardized rating system, similar to the MPA film rating system. This framework would provide consumers with digestible information via labels—like "child safe" or "mental health appropriate"—to make informed decisions and incentivize industry stakeholders to develop safer applications. 1919
PREVIEW. The Crisis of AI Literacy: Protecting Vulnerable Communities from Misusing Chatbots. Kevin Frazier discusses the dangers of young people misusing AI chatbots due to a significant lack of public awareness and basic AI literacy. Designers assume users understand that chatbot outputs are merely the product of objectives and optimization, not real opinions or real people. Frazier stresses the need to educate consumers on the proper and improper uses of these tools to foster responsible innovation. 1951
Gabriel Nicholas, a member of the Product Public Policy team at Anthropic, joins Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to introduce the policy problems (and some solutions) posed by AI agents. AI agents, defined as AI tools capable of autonomously completing tasks on your behalf, are widely expected to soon become ubiquitous. The integration of AI agents into sensitive tasks presents a slew of technical, social, economic, and political questions. Gabriel walks through the weighty questions that labs are thinking through as AI agents finally become “a thing.” Hosted on Acast. See acast.com/privacy for more information.
Artificial intelligence isn't just transforming industries—it's redefining freedom, opportunity, and the future of human work. This week on the Let People Prosper Show, I talk with Kevin Frazier, the inaugural AI Innovation and Law Fellow at the University of Texas School of Law, where he leads their groundbreaking new AI Innovation and Law Program.

Kevin's at the center of the national conversation on how to balance innovation with accountability—and how to make sure regulation doesn't crush the technological progress that drives prosperity. With degrees from UC Berkeley Law, Harvard Kennedy School, and the University of Oregon, Kevin brings both a legal and policy lens to today's most pressing questions about AI, federalism, and the economy. Before joining UT, he served as an Assistant Professor at St. Thomas University College of Law and conducted research for the Institute for Law and AI. His scholarship has appeared in the Tennessee Law Review, MIT Technology Review, and Lawfare. He also co-hosts the Scaling Laws Podcast, bridging the gap between innovation and regulation.

This episode goes deep into how we can harness AI to promote human flourishing, not government dependency—how we can regulate based on reality, not fear—and how federalism can help America remain the global leader in technological innovation.

For more insights, visit vanceginn.com. You can also get even greater value by subscribing to my Substack newsletter at vanceginn.substack.com. Please share with your friends, family, and broader social media network.
California State Senator Scott Wiener, author of Senate Bill 53—a frontier AI safety bill signed into law by Governor Newsom earlier this month—joins Alan Rozenshtein, Associate Professor at Minnesota Law and Research Director at Lawfare, and Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to explain the significance of SB 53 in the larger debate about how to govern AI.

The trio analyze the lessons that Senator Wiener learned from the battle over SB 1047, a related bill that Newsom vetoed last year, explore SB 53's key provisions, and forecast what may be coming next in Sacramento and D.C.

Find Scaling Laws on the Lawfare website, and subscribe to never miss an episode.

To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute.

Support this show http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.
Mosharaf Chowdhury, Associate Professor at the University of Michigan and Director of the ML Energy Lab, and Dan Zhao, former Senior Research Scientist at MIT Lincoln Laboratory, the MIT Supercomputing Center, and MIT CSAIL, join Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to discuss the energy costs of AI. They break down exactly how much energy fuels a single ChatGPT query, why this is difficult to figure out, how we might improve energy efficiency, and what kinds of policies might minimize AI's growing energy and environmental costs. Leo Wu provided excellent research assistance on this podcast.

Read more from Mosharaf:
The ML Energy Initiative
“We did the math on AI's energy footprint. Here's the story you haven't heard,” in MIT Technology Review

Read more from Dan:
“From Words to Watts: Benchmarking the Energy Costs of Large Language Model Inference,” in Proc. IEEE High Perform. Extreme Comput. Conf. (HPEC)
“A Green(er) World for A.I.,” in IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW)

Find Scaling Laws on the Lawfare website, and subscribe to never miss an episode.

To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute.

Support this show http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.
HEADLINE: AI Regulation Debate: Premature Laws vs. Emerging Norms GUEST NAME: Kevin Frazier SUMMARY: Kevin Frazier critiques the legislative rush to regulate AI, arguing that developing norms might be more effective than premature laws. He notes that bills like California's AB 1047, which demands factual accuracy, fundamentally misunderstand AI's generative nature. Imposing vague standards, as seen in New York's RAISE Act, risks chilling innovation and preventing widespread benefits, like affordable legal or therapy tools. Frazier emphasizes that AI policy should be grounded in empirical data rather than speculative fears. 1960
David Sullivan, Executive Director of the Digital Trust & Safety Partnership, and Ravi Iyer, Managing Director of the Psychology of Technology Institute at USC's Neely Center, join Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to discuss the evolution of the Trust & Safety field and its relevance to ongoing conversations about how best to govern AI. They discuss the importance of thinking about the end user in regulation, debate the differences and similarities between social media and AI companions, and evaluate current policy proposals.

Leo Wu provided excellent research assistance to prepare for this podcast.

Read more from David:
"Why we need to make safety the product to build better bots," from the World Economic Forum Centre for AI Excellence
"Learning from the Past to Shape the Future of Digital Trust and Safety," in Tech Policy Press

Read more from Ravi:
"Ravi Iyer on How to Improve Technology Through Design," from Lawfare's Arbiters of Truth series
"Regulate Design, not Speech," from the Designing Tomorrow Substack

Read more from Kevin:
"California in Your Chatroom: AB 1064's Likely Constitutional Overreach," from the Cato Institute

Find Scaling Laws on the Lawfare website, and subscribe to never miss an episode.

To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute.

Support this show http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.
From September 20, 2024: Bob Bauer, Professor of Practice and Distinguished Scholar in Residence at New York University School of Law, and Liza Goitein, Senior Director of Liberty & National Security at the Brennan Center, join Kevin Frazier, Assistant Professor at St. Thomas University College of Law and a Tarbell Fellow at Lawfare, to review the emergency powers afforded to the president under the National Emergencies Act, the International Emergency Economic Powers Act, and the Insurrection Act. The trio also inspect ongoing bipartisan efforts to reform emergency powers.

To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute.

Support this show http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.
Startups often struggle to balance financial constraints with the pursuit of innovation, raising questions about how they can effectively advocate for themselves within the tech industry. In Washington, D.C. and abroad, various organizations promote the growth of smaller innovators, yet many "little tech" firms still face challenges meeting regulatory requirements. How do regulatory frameworks affect smaller innovators and their ability to compete? What balance should be struck between oversight and innovation? How can policymakers incentivize little tech companies without creating a disadvantage for Big Tech firms or consumers?

Join the Federalist Society’s Regulatory Transparency Project and host Prof. Kevin Frazier for an in-depth discussion of the “Little Tech Agenda” with special guest Kate Tummarello at Engine | Advocacy & Foundation.
“Starting small, but aspiring to grow” defines the little tech agenda. Big Tech companies often depend on smaller innovators for key components of manufacturing and new technologies. Given this dependence on little tech, what are the “gaps” in its agenda? The U.S. has technological capital waiting to be unlocked by small innovators. What steps can be taken to address this gap and channel little tech's efforts toward our national interests? Can we strike a balance between Big Tech and little tech to further the goals of the United States’ technological development?

Join the Federalist Society’s Regulatory Transparency Project and host Prof. Kevin Frazier for an in-depth discussion of the “Little Tech Agenda” with special guest Sam Hammond of the Foundation for American Innovation.
Over the past 30 years, the United States has experienced rapid technological change. Yet in recent years, innovation appears to have plateaued. The iPhone of four years ago is nearly identical to today’s model, and the internet has changed little over the same period. Little tech companies play a significant role in generating new ideas and technological development. In this episode, experts discuss the financial gains and risks of incentivizing little tech innovation and offer policy recommendations that encourage investment in the "littlest tech" firms to drive future breakthroughs.

Join the Federalist Society’s Regulatory Transparency Project and host Prof. Kevin Frazier for an in-depth discussion of the “Little Tech Agenda” with special guest Dave Karpf, Associate Professor at the George Washington University School of Media and Public Affairs.
Neil Chilson, Head of AI Policy at the Abundance Institute, and Gus Hurwitz, Senior Fellow and CTIC Academic Director at Penn Carey Law School and Director of Law & Economics Programs at the International Center for Law & Economics, join Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to explore how academics can overcome the silos and incentives that plague the Ivory Tower and positively contribute to the highly complex, evolving, and interdisciplinary work associated with AI governance.

The trio recorded this podcast live at the Institute for Humane Studies' Technology, Liberalism, and Abundance Conference in Arlington, Virginia.

Read about Kevin's thinking on the topic here: https://www.civitasinstitute.org/research/draining-the-ivory-tower
Learn about the Conference: https://www.theihs.org/blog/curated-event/technology-abundance-and-liberalism/

Find Scaling Laws on the Lawfare website, and subscribe to never miss an episode.

To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute.

Support this show http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.
HEADLINE: Russian Spy Ships Target Vulnerable Undersea Communication Cables GUEST NAME: Kevin Frazier 50-WORD SUMMARY: Undersea cables are highly vulnerable to sabotage or accidental breaks. Russia uses sophisticated naval technology, including the spy ship Yantar, to map and potentially break these cables in sensitive locations. The US is less vulnerable due to redundancy. However, protection is fragmented, relying on private owners who often lack incentives to adopt sophisticated defense techniques. 1945 RED SQUARE
In this Scaling Laws rapid response episode, hosts Kevin Frazier and Alan Rozenshtein talk about SB 53, the frontier AI transparency (and more) law that California Governor Gavin Newsom signed into law on September 29. Hosted on Acast. See acast.com/privacy for more information.
Preview: Kevin Frazier discusses the extreme vulnerability and fragmented state of undersea cables, the vast majority of which are privately owned. The Department of Defense relies on these systems, which lack sufficient protection due to high costs. Frazier highlights recent reports that the Russian ship Yantar, operated by the GRU, is tracking and mapping these vital cables near Great Britain for potential use in the event of conflict.
Kevin Frazier testified that Congress needs a national vision to manage data center infrastructure and mitigate local impacts. He stressed vulnerable undersea cables are neglected and urged academics to prioritize teaching and public-oriented research. 1939
Preview: Kevin Frazier of University of Texas Law School/Civitas Institute discusses congressional concerns over AI regulation, balancing state interests versus federal goals of preventing cross-state policy projection and prioritizing national AI innovation and growth.
From September 18, 2024: Jane Bambauer, Professor at Levin College of Law; Ramya Krishnan, Senior Staff Attorney at the Knight First Amendment Institute and a lecturer in law at Columbia Law School; and Alan Rozenshtein, Associate Professor of Law at the University of Minnesota Law School and a Senior Editor at Lawfare, join Kevin Frazier, Assistant Professor at St. Thomas University College of Law and a Tarbell Fellow at Lawfare, to break down the D.C. Circuit Court of Appeals' hearing in TikTok v. Garland, in which a panel of judges assessed the constitutionality of the TikTok bill.

To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute.

Support this show http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.
What priorities should shape U.S. innovation policy at the national level? Historically, the federal government has adopted a "light touch" approach, with legislation often focused on reducing barriers so that smaller entrepreneurs can prioritize innovation over regulatory compliance. Big Tech companies often hold competitive advantages, including resources, capital, and political influence, that small-scale entrepreneurs lack. How can policymakers design legislation that ensures fair competition between Big Tech and little tech? Do acquisitions of little tech companies by Big Tech promote innovation or constrain the development of emerging ideas? How can policymakers foster innovation for smaller-scale initiatives through legislation, competition regulation, and support for emerging firms?

Join the Federalist Society’s Regulatory Transparency Project and host Prof. Kevin Frazier for an in-depth discussion of the “Little Tech Agenda” with special guest Jennifer Huddleston, Senior Fellow in Technology Policy at the Cato Institute.
Steven Adler, former OpenAI safety researcher, author of Clear-Eyed AI on Substack, and independent AGI-readiness researcher, joins Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to assess the current state of AI testing and evaluations. The two walk through Steven's views on industry efforts to improve model testing and what he thinks regulators ought to know and do when it comes to preventing AI harms.

Thanks to Leo Wu for research assistance!

Find Scaling Laws on the Lawfare website, and subscribe to never miss an episode.

To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute.

Support this show http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.
Joshua Gans, a professor at the University of Toronto and co-author of "Power and Prediction: The Disruptive Economics of Artificial Intelligence," joins Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to evaluate ongoing concerns about AI-induced job displacement, the likely consequences of various regulatory proposals on AI innovation, and how AI tools are already changing higher education.

Select works by Gans include:
A Quest for AI Knowledge (https://www.nber.org/papers/w33566)
Regulating the Direction of Innovation (https://www.nber.org/papers/w32741)
How Learning About Harms Impacts the Optimal Rate of Artificial Intelligence Adoption (https://www.nber.org/papers/w32105)

Hosted on Acast. See acast.com/privacy for more information.
Anu Bradford, Professor at Columbia Law School, and Kate Klonick, Senior Editor at Lawfare and Associate Professor at St. John's University School of Law, join Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to assess the ongoing, contrasting, and, at times, conflicting regulatory approaches to Big Tech being pursued by the EU and U.S. The trio start with an assessment of the EU's use of the Brussels Effect, coined by Anu, to shape AI development. Next, they explore the U.S.'s increasingly interventionist industrial policy with respect to key sectors, especially tech.

Read more:
Anu's op-ed in The New York Times
"The Impact of Regulation on Innovation," by Philippe Aghion, Antonin Bergeaud, and John Van Reenen
The Draghi Report on the Future of European Competitiveness

Find Scaling Laws on the Lawfare website, and subscribe to never miss an episode.

To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute.

Support this show http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.
Smaller, advanced technology entrepreneurs are increasingly shaping the U.S. innovation landscape through what some have called the “Little Tech Agenda.” But what exactly is this agenda, and how might it influence policy debates moving forward?

America has long celebrated small-scale innovators, yet questions remain about how regulatory frameworks can support entrepreneurship without stifling growth. Some policymakers argue that new parameters are needed to govern emerging technologies, while others caution that overregulation could hinder the nation’s competitive edge in the global power struggle. If “Little Tech” is critical to America’s future, how far should the United States go to defend and promote its development?

Join the Federalist Society’s Regulatory Transparency Project and host Prof. Kevin Frazier for an in-depth discussion of the “Little Tech Agenda” with special guest Collin McCune, Head of Government Affairs at Andreessen Horowitz.
From August 23, 2024: Richard Albert, William Stamps Farish Professor in Law, Professor of Government, and Director of Constitutional Studies at the University of Texas at Austin, joins Kevin Frazier, Assistant Professor at St. Thomas University College of Law and a Tarbell Fellow at Lawfare, to conduct a comparative analysis of what helps constitutions withstand political pressures. Richard's extensive study of different means to amend constitutions shapes their conversation about whether the U.S. Constitution has become too rigid.

To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute.

Support this show http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.
Peter E. Harrell, Adjunct Senior Fellow at the Center for a New American Security, joins Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to examine the White House's announcement that it will take a 10% share of Intel. They dive into the policy rationale for the stake as well as its legality. Peter and Kevin also explore whether this is just the start of such deals given that President Trump recently declared that “there will be more transactions, if not in this industry then other industries.”

Find Scaling Laws on the Lawfare website, and subscribe to never miss an episode.

To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute.

Support this show http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.
AI: REGULATING LLM - KEVIN FRAZIER, CIVITAS INSTITUTE 1941
AI: REGULATING LLM - KEVIN FRAZIER, CIVITAS INSTITUTE CONTINUED 1952
Preview: AGI regulation. Colleague Kevin Frazier comments on the tentative state of LLMs, which need time to develop before being either judged or derided by lawmakers. More later.
In this episode of Scaling Laws, Dean Ball, Senior Fellow at the Foundation for American Innovation and former Senior Policy Advisor for Artificial Intelligence and Emerging Technology at the White House Office of Science and Technology Policy, joins Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, and Alan Rozenshtein, Associate Professor at Minnesota Law and Research Director at Lawfare, to share an inside perspective on the Trump administration's AI agenda, with a specific focus on the AI Action Plan. The trio also explore Dean's thoughts on the recently released GPT-5 and the ongoing geopolitical dynamics shaping America's domestic AI policy.

Find Scaling Laws on the Lawfare website, and subscribe to never miss an episode.

To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute.

Support this show http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.
Brian Fuller, a member of the Product Policy Team at OpenAI, joins Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to analyze how large AI labs go about testing their models for compliance with internal requirements and various legal obligations. They also cover the ins and outs of what it means to work in product policy and what issues are front of mind for in-house policy teams amid substantial regulatory uncertainty.

Find Scaling Laws on the Lawfare website, and subscribe to never miss an episode.

To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute.

Support this show http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.
AI: Electricity supremacy. Kevin Frazier, Civitas Institute JUNE 1957
AI: Electricity supremacy. Kevin Frazier, Civitas Institute continued JANUARY 1959
SHOW SCHEDULE 8-7-25
Good evening. The show begins in the future, discussing the AI androids that will dominate the QSRs... NOVEMBER 1957
CBS EYE ON THE WORLD WITH JOHN BATCHELOR
FIRST HOUR
9-915 Android AI: How soon? #SCALAREPORT: Chris Riegel CEO, Scala.com @Stratacache
915-930 Jobs: QSR all androids. #SCALAREPORT: Chris Riegel CEO, Scala.com @Stratacache
930-945 Research endowments and Trump admin. Eric Jensen, Case Western University, Civitas
945-1000 Research endowments and Trump admin. Eric Jensen, Case Western University, Civitas continued
SECOND HOUR
10-1015 Putin softens. Anatol Lieven, Quincy Institute
1015-1030 Putin successor. Anatol Lieven, Quincy Institute
1030-1045 AI: Electricity supremacy. Kevin Frazier, Civitas Institute
1045-1100 AI: Electricity supremacy. Kevin Frazier, Civitas Institute continued
THIRD HOUR
1100-1115 #NewWorldReport: Brazil lawfare. Latin American Research Professor Evan Ellis, U.S. Army War College Strategic Studies Institute. @revanellis #NewWorldReportEllis
1115-1130 #NewWorldReport: Colombia lawfare. Latin American Research Professor Evan Ellis, U.S. Army War College Strategic Studies Institute. @revanellis #NewWorldReportEllis
1130-1145 #NewWorldReport: Mexico Sheinbaum. Latin American Research Professor Evan Ellis, U.S. Army War College Strategic Studies Institute. @revanellis #NewWorldReportEllis
1145-1200 #NewWorldReport: Argentina congress election. Latin American Research Professor Evan Ellis, U.S. Army War College Strategic Studies Institute. @revanellis #NewWorldReportEllis
FOURTH HOUR
12-1215 Fed choice. Veronique de Rugy
1215-1230 Canada: Shy vacationers. Conrad Black
1230-1245 Rubio and Caracas. Mary Anastasia O'Grady
1245-100 AM Hotel Mars: China wins. Rand Simberg, David Livingston
Preview: AI predictions: Kevin Frazier of UT School of Law explains that AI cannot yet predict the future. More later. FEBRUARY 1962
Renée DiResta, an Associate Research Professor at the McCourt School of Public Policy at Georgetown and a Contributing Editor at Lawfare, and Alan Rozenshtein, an Associate Professor at Minnesota Law, Research Director at Lawfare, and, with the exception of today, co-host of the Scaling Laws podcast, join Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to take a look at the Trump Administration's Woke AI policies, as set forth by a recent EO and explored in the AI Action Plan.

Read the Woke AI executive order
Read the AI Action Plan
Read "Generative Baseline Hell and the Regulation of Machine-Learning Foundation Models," by James Grimmelmann, Blake Reid, and Alan Rozenshtein

Find Scaling Laws on the Lawfare website, and subscribe to never miss an episode.

To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute.

Support this show http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.
This week, Scott sat down with his Lawfare colleagues Natalie Orpett, Kevin Frazier, and Tyler McBrien to talk through the week's big national security news stories, including:

“Feeding Frenzy.” The crisis in Gaza has reached a new, desperate stage. Months of a near-total blockade on humanitarian assistance have created an imminent risk, if not a reality, of mass starvation among Gazan civilians. And it finally has the world—including President Donald Trump—taking notice and putting pressure on the Israeli government to change tack, including by threatening to recognize a Palestinian state. Now the Israeli government appears to be giving an inch, allowing what experts maintain is the bare minimum level of aid necessary to avoid famine into the country and even pursuing a few (largely symbolic) airlifts, while allowing other states to do the same. But how meaningful is this shift? And what could it mean for the trajectory of the broader conflict?

“Hey, It Beats an AI Inaction Plan.” After months of anticipation, the Trump administration finally released its “AI Action Plan” last week. And despite some serious reservations about its handling of “woke AI” and select other culture war issues, the plan has generally been met with cautious optimism. How should we feel about the AI Action Plan? And what does it tell us about the direction AI policy is headed?

“Pleas and No Thank You.” Earlier this month, the D.C. Circuit upheld then-Secretary of Defense Lloyd Austin's decision to nullify plea deals that several of the surviving 9/11 perpetrators had struck with those prosecuting them in the military commissions. How persuasive is the court's argument? And what does the decision mean for the future of the tribunals?

In object lessons, Kevin highlighted a fascinating breakthrough from University of Texas engineers who developed over 1,500 AI-designed materials that can make buildings cooler and more energy efficient—an innovation that, coming from Texas, proves that necessity really is the mother of invention. Tyler took us on a wild ride into the world of Professional Bull Riders with a piece from The Baffler exploring the sport's current state and terrifying risks. Scott brought a sobering but essential read from the Carnegie Endowment for International Peace about how synthetic imagery and disinformation are shaping the Iran-Israel conflict. And Natalie recommended “Drive Your Plow Over the Bones of the Dead,” by Olga Tokarczuk, assuring us it's not nearly as murder-y as it sounds.

Note: We will be on vacation next week but look forward to being back on August 13!

To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute.

Support this show http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.
Janet Egan, Senior Fellow with the Technology and National Security Program at the Center for a New American Security; Jessica Brandt, Senior Fellow for Technology and National Security at the Council on Foreign Relations; Neil Chilson, Head of AI Policy at the Abundance Institute; and Tim Fist, Director of Emerging Technology Policy at the Institute for Progress, join Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, for a special version of Scaling Laws.

This episode was recorded just hours after the release of the AI Action Plan. About 180 days ago, President Trump directed his administration to explore ways to achieve AI dominance. His staff has attempted to do just that. This group of AI researchers dives into the plan's extensive recommendations and explores what may come next.

Find Scaling Laws on the Lawfare website, and subscribe to never miss an episode.

To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute.

Support this show http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.