POPULARITY
Kevin Frazier analyzes how AI can fail like Western Union, warning that excessive concentration and lack of innovation could doom today's artificial intelligence giants just as the telegraph company declined.
1955
Kevin Frazier warns of regulatory capture in AI governance, cautioning that dominant tech companies may co-opt oversight mechanisms, stifling competition and shaping rules to entrench their market dominance.
1931
SHOW SCHEDULE 1-28-2026
1900 PRINCETON CANE RUSH
All 16 segments for January 28, 2026:
1. General Blaine Holt, USAF (Ret.), outlines the mission to rescue Iran from the brutes, detailing strategic options for liberating the Iranian people from the oppressive regime ruling in Tehran.
2. Michael Bernstam of the Hoover Institution explains how Russia prospers with the price of gold, analyzing Moscow's economic resilience as precious metals revenues offset sanctions and sustain Putin's war machine.
3. Bob Zimmerman of Behind the Black explains the next Blue Origin and SpaceX missions, previewing upcoming launches and milestones as both companies push forward with ambitious spaceflight development programs.
4. Bob Zimmerman explains Roscosmos failures without credit, examining how Russia's space agency stumbles through technical setbacks while refusing accountability, diminishing Moscow's once-proud position in space exploration.
5. Victoria Coates and Gordon Chang identify the Baltic states as most vulnerable to Russian annexation, warning that Estonia, Latvia, and Lithuania face persistent threats from Putin's expansionist ambitions.
6. Ann Stevenson-Yang and Gordon Chang comment on the low spirits and isolation of mainland Chinese singles, examining the demographic and social crisis as young people struggle with loneliness and economic pressures.
7. Charles Burton and Gordon Chang observe the contest in Arctic waters, analyzing competing claims and military positioning as Russia, China, and Western nations vie for polar strategic advantage.
8. Charles Burton and Gordon Chang comment on Prime Minister Mark Carney and Canada's future with the United States and PRC, assessing Ottawa's delicate balancing act between its powerful neighbors.
9. Tevi Troy remarks on the new book McNamara at War, exploring Robert McNamara's tenure as Defense Secretary and his controversial management of the Vietnam War under two presidents.
10. Tevi Troy observes McNamara dealing with the rude President Lyndon Johnson, examining the difficult working relationship between the cerebral defense secretary and the domineering, often abusive commander-in-chief.
11. Kevin Frazier analyzes how AI can fail like Western Union, warning that excessive concentration and lack of innovation could doom today's artificial intelligence giants just as the telegraph company declined.
12. Kevin Frazier warns of regulatory capture in AI governance, cautioning that dominant tech companies may co-opt oversight mechanisms, stifling competition and shaping rules to entrench their market dominance.
13. Simon Constable reports from temperate France with commodities analysis, noting copper and gold trading dear as industrial demand and safe-haven buying drive precious and base metals prices higher.
14. Simon Constable faults Prime Minister Starmer's lack of leadership, criticizing the British leader's failure to articulate vision or direction as the United Kingdom drifts through economic and political uncertainty.
15. Astronomer Paul Kalas explains planetary formation in the Fomalhaut system twenty-five light years distant, revealing how observations of this nearby star illuminate the processes that create worlds around young suns.
16. David Livingston explains his twenty-five years hosting The Space Show, reflecting on a quarter century of broadcasting interviews with astronauts, engineers, and visionaries shaping humanity's journey beyond Earth.
Kevin Frazier and Alan Rozenshtein explore how AI is reshaping the legal profession, from “secret cyborg” lawyers using tools like Harvey to the uncertain future of junior associates and access to legal services. They discuss maximalist legal services, AI-written “complete contingent contracts,” and where AI should fall between strict formalism and legal realism, including Claude's virtue-ethics-inspired constitution. The conversation then turns to AI's role in legislation and governance, including outcome-oriented law, the “Unitary Artificial Executive,” and new rights like the Right to Compute and the Right to Share personal data. They close by examining limits on government surveillance and how future debates over AI sentience and welfare could spark social conflict.
LINKS:
Article on automated AI compliance
GDPVal dataset lawyers tasks viewer
Polis online deliberation platform
Sponsors:
Blitzy: Blitzy is the autonomous code generation platform that ingests millions of lines of code to accelerate enterprise software development by up to 5x with premium, spec-driven output. Schedule a strategy session with their AI solutions consultants at https://blitzy.com
Framer: Framer is an enterprise-grade website builder that lets business teams design, launch, and optimize their .com with AI-powered wireframing, real-time collaboration, and built-in analytics. Start building for free and get 30% off a Framer Pro annual plan at https://framer.com/cognitive
Serval: Serval uses AI-powered automations to cut IT help desk tickets by more than 50%, freeing your team from repetitive tasks like password resets and onboarding. Book your free pilot and guarantee 50% help desk automation by week four at https://serval.com/cognitive
Tasklet: Tasklet is an AI agent that automates your work 24/7; just describe what you want in plain English and it gets the job done. Try it for free and use code COGREV for 50% off your first month at https://tasklet.ai
CHAPTERS:
(00:00) About the Episode
(03:35) Surveying AI-law landscape
(14:56) Legal deserts and demand (Part 1)
(15:02) Sponsors: Blitzy | Framer
(18:06) Legal deserts and demand (Part 2)
(28:25) Sponsors: Serval | Tasklet
(31:14) Legal deserts and demand (Part 3)
(31:14) AI and legal careers
(45:10) AI counsel and self-representation
(59:50) Maximalist law and outcomes
(01:12:30) Rules, principles, and Claude
(01:25:26) New rights and restraints
(01:38:26) Outro
PRODUCED BY: https://aipodcast.ing
Most folks agree that AI is going to drastically change our economy, the nature of work, and the labor market. What's unclear is when those changes will take place and how best Americans can navigate the transition. Brent Orrell, senior fellow at the American Enterprise Institute, joins Kevin Frazier, a Senior Fellow at the Abundance Institute, the AI Innovation and Law Fellow at the University of Texas School of Law, and a Senior Editor at Lawfare, to help tackle these and other weighty questions. Orrell has been studying the future of work since before it was cool. His two cents are very much worth a nickel in this important conversation. Send us your feedback (scalinglaws@lawfaremedia.org) and leave us a review! Hosted on Acast. See acast.com/privacy for more information.
Jakub Kraus, a Tarbell Fellow at Lawfare, speaks with Alan Rozenshtein, Associate Professor of Law at the University of Minnesota and Research Director at Lawfare, and Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law, a Senior Fellow at the Abundance Institute, and a Senior Editor at Lawfare, about Anthropic's newly released "constitution" for its AI model, Claude. The conversation covers the lengthy document's principles and underlying philosophical views, what these reveal about Anthropic's approach to AI development, how market forces are shaping the AI industry, and the weighty question of whether an AI model might ever be a conscious or morally relevant being.
Mentioned in this episode:
Kevin Frazier, "Interpreting Claude's Constitution," Lawfare
Alan Rozenshtein, "The Moral Education of an Alien Mind," Lawfare
Find Scaling Laws on the Lawfare website, and subscribe to never miss an episode. To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute. Support this show http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.
Shlomo Klapper, founder of Learned Hand, joins Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law, a Senior Fellow at the Abundance Institute, and a Senior Editor at Lawfare, to discuss the rise of judicial AI, the challenges of scaling technology inside courts, and the implications for legitimacy, due process, and access to justice. Hosted on Acast. See acast.com/privacy for more information.
EDUCATION REFORM AND THE AVOIDANCE OF A FEDERAL AI DEPARTMENT Colleague Kevin Frazier. Frazier argues for updating education, starting with teacher training in elementary schools and vocational partnerships in high schools, to prepare students for an AI future. He advises against creating a federal Department of AI, suggesting society should adapt to it as advanced computing rather than a unique threat. NUMBER 12
1921 FRANCE
SHOW SCHEDULE 1-17-25
1895 PARIS
LAS VEGAS TUNNELS AND THE RELOCATION OF THE ATHLETICS Colleague Jeff Bliss. Jeff Bliss reports on the expansion of The Boring Company's tunnels in Las Vegas, which use Tesla cars to alleviate traffic congestion. He also discusses the Athletics baseball team's temporary move to Sacramento and the legal complications regarding their team name as they prepare for a permanent move to Las Vegas in 2028. NUMBER 1
BIG SUR REOPENS AND COPPER THEFT PLAGUES CALIFORNIA Colleague Jeff Bliss. Highway 1 in Big Sur has reopened after landslide repairs, featuring new concrete canopies to protect the road. Bliss also details how copper thieves have crippled infrastructure in Sacramento and Los Angeles, contributing to broader political dissatisfaction with Governor Gavin Newsom regarding crime and the state's management. NUMBER 2
FEDERAL IMMUNITY AND THE ICE SHOOTING IN MINNEAPOLIS Colleague Professor Richard Epstein. Professor Richard Epstein analyzes the legal battle over whether ICE agents have immunity from state prosecution following a fatal shooting in Minneapolis. He explains the complexities of absolute versus qualified immunity, arguing that the agents' aggressive conduct might weaken their defense against state charges in this specific instance. NUMBER 3
SUPREME COURT LIKELY TO STRIKE DOWN TRUMP TARIFFS Colleague Professor Richard Epstein. Epstein predicts the Supreme Court will invalidate the Trump administration's emergency tariffs, arguing there is no statutory basis for the trade imbalances cited as justification. He anticipates a fractured decision where a centrist block of justices joins liberals to rule that the executive branch exceeded its authority. NUMBER 4
MEXICO'S ALIGNMENT WITH DICTATORS AND INFRASTRUCTURE FAILURES Colleague Mary Anastasia O'Grady. Mary Anastasia O'Grady discusses Mexican President Claudia Sheinbaum's ideological support for the Cuban and Venezuelan regimes, including increased oil shipments to Havana. She also details a recent train derailment on Mexico's interoceanic line, attributing the failure to secrecy and no-bid contracts managed by the military. NUMBER 5
ITALY STABILIZES PENSION COSTS AND CELEBRATES PASTA TARIFF CUTS Colleague Lorenzo Fiori. Lorenzo Fiori reports that despite high pension costs, Italy's economic reforms under Prime Minister Meloni have stabilized the system by increasing employment. Fiori notes that Italy's deficit and inflation have dropped significantly, and he celebrates the US decision to slash tariffs on Italian pasta imports. NUMBER 6
SPACE STATION RETURNS, NUCLEAR MOON PLANS, AND BOEING STRUGGLES Colleague Bob Zimmerman. Bob Zimmerman discusses the early return of an ISS crew due to a medical issue and expresses skepticism about NASA's plan for a lunar nuclear reactor by 2030. He also highlights that the Space Force is shifting launches from ULA to SpaceX due to reliability concerns. NUMBER 7
GLOBAL SPACE FAILURES AND CHINA'S REUSABLE CRAFT CLAIMS Colleague Bob Zimmerman. Zimmerman analyzes a failed Indian rocket launch that lost multiple payloads, though a Spanish prototype survived. He also critiques the European Space Agency for delays in debris removal missions and casts doubt on China's claims regarding a "new" reusable spacecraft, suggesting it relies on older suborbital technology. NUMBER 8
DATA CENTERS STRAIN THE ELECTRICAL GRID Colleague Henry Sokolski. Henry Sokolski discusses the surging demand for electricity driven by AI data centers and the White House's proposal to auction power access. He argues that tech companies should finance their own off-grid generation, such as nuclear or gas, rather than forcing ratepayers to subsidize new transmission infrastructure. NUMBER 9
ELON MUSK AND THE GOLDEN DOME DEFENSE PROPOSAL Colleague Henry Sokolski. Sokolski evaluates Elon Musk's proposal to create a "Golden Dome" missile defense system for the US. While the concept involves space-based sensors, Sokolski notes concerns regarding monopoly power, the reliance on a single contractor for national security, and the undefined costs of ground-based interceptors. NUMBER 10
ECONOMIC LIBERTY AND THE LABOR MARKET IN THE AGE OF AI Colleague Kevin Frazier. Kevin Frazier explores how AI is reshaping the economy, noting that liberal arts graduates may be better positioned than STEM majors to handle new information synthesis. He advises legislators to focus on job creation and a fluid labor market rather than trying to protect obsolete professions through regulation. NUMBER 11
EDUCATION REFORM AND THE AVOIDANCE OF A FEDERAL AI DEPARTMENT Colleague Kevin Frazier. Frazier argues for updating education, starting with teacher training in elementary schools and vocational partnerships in high schools, to prepare students for an AI future. He advises against creating a federal Department of AI, suggesting society should adapt to it as advanced computing rather than a unique threat. NUMBER 12
SOVIET UNION'S SECRET 1972 LUNAR BASE AMBITIONS AND THE N1 ROCKET FAILURE Colleague Anatoli Zak, Publisher of RussianSpaceWeb.com. Anatoli Zak explains that in 1972, the Soviet Union pursued the L3M project to establish a permanent lunar base, refusing to concede the moon race immediately. However, repeated failures of the N1 rocket and the financial strain of competing with the US Space Shuttle eventually forced the program's cancellation. NUMBER 13
ISS LAUNCHPAD ACCIDENT AND RUSSIA'S NUCLEAR ROLE IN CHINESE MOON BASE Colleague Anatoli Zak, Publisher of RussianSpaceWeb.com. A launchpad collapse has halted Russian cargo missions to the ISS, endangering the propellant supply required for critical orbit maintenance. Zak also details Russia's attempt to join China's lunar ambitions, with the Kurchatov Institute developing a nuclear reactor to provide electricity for a future Chinese moon base. NUMBER 14
PERU NAMED NON-NATO PARTNER AS US COUNTERS CHINESE INFLUENCE Colleague Oscar Sumar, Deputy Vice Chancellor at Universidad Científica del Sur. Oscar Sumar discusses Peru's designation as a US non-NATO partner, a move designed to counter Chinese geopolitical expansion through infrastructure like the Chancay port. Sumar warns that while cultural ties are strong, the Chinese Communist Party poses a threat to Peru's democratic stability and political transparency. NUMBER 15
ECONOMIC SLOWDOWN INDICATORS AND SECRECY AT THE WHITE HOUSE Colleague Jim McTague, Former Washington Editor of Barron's. Jim McTague observes unusually light traffic and retail activity in Washington, D.C. and Lancaster, signaling a potential economic slowdown. He notes blocked views of White House construction and predicts a recession driven by rising state taxes and the depletion of pandemic-era stimulus funds for local governments. NUMBER 16
ECONOMIC LIBERTY AND THE LABOR MARKET IN THE AGE OF AI Colleague Kevin Frazier. Kevin Frazier explores how AI is reshaping the economy, noting that liberal arts graduates may be better positioned than STEM majors to handle new information synthesis. He advises legislators to focus on job creation and a fluid labor market rather than trying to protect obsolete professions through regulation. NUMBER 11
October 1957
PREVIEW FOR LATER REIMAGINING AI REGULATION BEYOND THE SKYNET MYTH Colleague Kevin Frazier, University of Texas Law School. Frazier argues against regulating Artificial Intelligence through a fearful "Skynet mentality," suggesting it is better viewed simply as advanced computing known since 1956. He recommends treating AI not as a bespoke technology but as part of a broader portfolio of technological changes, including quantum computing and robotics.
JANUARY 1931
Connecticut State Senator James Maroney and Neil Chilson, Head of AI Policy at the Abundance Institute, join Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, and Alan Rozenshtein, Associate Professor at Minnesota Law and Research Director at Lawfare, for a look back at a wild year in AI policy. Neil provides his expert analysis of all that did (and did not) happen at the federal level. Senator Maroney then examines what transpired across the states. The four then offer their predictions for what seems likely to be an even busier 2026. Find Scaling Laws on the Lawfare website, and subscribe to never miss an episode. To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute. Support this show http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.
Ziad Reslan, a member of OpenAI's Product Policy Staff and a Senior Fellow with the Schmidt Program on Artificial Intelligence, Emerging Technologies, and National Power at Yale University, joins Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to talk about iterative deployment, the lab's approach to testing and deploying its models. It's a complex and, at times, controversial approach. Ziad provides the rationale behind iterative deployment and tackles some questions about whether the strategy has always worked as intended. Hosted on Acast. See acast.com/privacy for more information.
Today's Lawfare Daily is Lawfare's annual "Ask Us Anything" mailbag episode where Lawfare contributors answered listener-submitted questions. Scott R. Anderson, Natalie Orpett, Benjamin Wittes, Kevin Frazier, Eric Columbus, Loren Voss, Molly Roberts, Jakub Kraus, Anna Bower, and Roger Parloff address questions on everything from presidential immunity to AI regulations to the domestic deployment of the military. Thank you for your questions. And as always, thank you for listening. To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute. Support this show http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.
From January 2, 2025: You called in with your questions, and Lawfare contributors have answers! Benjamin Wittes, Kevin Frazier, Quinta Jurecic, Eugenia Lostri, Alan Rozenshtein, Scott R. Anderson, Natalie Orpett, Amelia Wilson, Anna Bower, and Roger Parloff addressed questions on everything from presidential pardons to the risks of AI to the domestic deployment of the military. Thank you for your questions. And as always, thank you for listening. To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute. Support this show http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.
Every year, Lawfare publishes a retrospective of the year that passed. Today, we're pleased to bring you an audio debrief of that article, The Year That Was: 2025, which you can read in full on our website starting December 31. Lawfare is focused on producing timely, rigorous, and non-partisan analysis of “hard national security choices.” And this year, that work was—to use an expression as tired as we are—like drinking from a firehose. We did our best to keep up. We published more than 1,000 articles, podcasts, videos, research papers, and primary source documents. We did livestream round-ups and rapid-response videos. We produced five different podcasts and an investigative video series. We built data visualizations and trackers to make sense of complicated unfolding events. You can find all that and more for free on our website, lawfaremedia.org. It's impossible to capture everything that happened in 2025 in the world of national security. But here's what stood out to the Lawfare team—and what they have to say about it. In this episode, you'll hear from Executive Editor Natalie Orpett on Lawfare's work in 2025 and from Editor in Chief Benjamin Wittes on The Situation. You'll hear from Senior Editors Anna Bower on DOGE, Roger Parloff on the Alien Enemies Act, Molly Roberts on politicization of the Justice Department, Eric Columbus on impoundments, Scott R. Anderson on war powers, and Kevin Frazier on AI and the states. You'll hear from Public Interest Fellows Loren Voss on domestic deployments of the military, and Ariane Tabatabai on foreign policy. You'll hear from our Managing Editor, Tyler McBrien, on our narrative podcast series, Escalation. You'll hear from Associate Editors Katherine Pompilio on the Jan. 6 pardons and Olivia Manes on rolling back internal checks at the Justice Department. 
You'll hear from our Fellow Jakub Kraus on AI, and you'll hear from Contributing Editor Renée DiResta on election integrity capacity. And that's just a sampling of Lawfare's work. It's The Year That Was: 2025. We'll see you next year. To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute. Support this show http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.
SHOW 12-23-25 THE SHOW BEGINS WITH DOUBTS OF THE EU...
1831 BRUSSELS
EU STRUGGLES WITH RUSSIAN ASSETS AND AID Colleague Judy Dempsey. Judy Dempsey discusses the EU's difficulty in utilizing frozen Russian assets and the "defeat" for Chancellor Merz regarding the funding mechanism for Ukraine. NUMBER 1
THE RISE OF THE AFD IN GERMANY Colleague Judy Dempsey. Judy Dempsey continues, focusing on the rise of the AfD party in Germany and its connections to elements of the US Republican party. NUMBER 2
STALEMATES IN GAZA AND LEBANON Colleague Jonathan Schanzer. Jonathan Schanzer discusses the stalemate regarding the last hostage in Gaza, the fragmented control of the territory, and threats in Lebanon and Syria. NUMBER 3
EU REGULATION VS. US GROWTH Colleague Michael Toth. Michael Toth critiques the European Union's "regulatory imperialism" and contrasts it with the economic growth of the US. NUMBER 4
STATE DEPARTMENT RECALLS AND STRATEGY Colleague Mary Kissel. Mary Kissel discusses the recall of career ambassadors by the Trump administration and challenges in Panama and Greenland. NUMBER 5
AUSTRALIA'S DEFENSE AND CHINA Colleague Grant Newsham. Grant Newsham warns about Australia's lack of defense capabilities and the erosion of its influence in the Pacific islands due to Chinese political warfare. NUMBER 6
THE BORING BENEFITS OF AI Colleague Kevin Frazier. Kevin Frazier advocates for the "boring use cases" of AI, such as in healthcare and traffic management, to save costs and improve efficiency. NUMBER 7
REGULATING ARTIFICIAL INTELLIGENCE Colleague Kevin Frazier. Kevin Frazier continues, warning against a "waterfall of regulation" by states and advocating for "regulatory sandboxes" to allow experimentation. NUMBER 8
US EXPANSIONISM AND DIPLOMATIC RIFTS Colleague Gregory Copley. Gregory Copley analyzes US foreign policy moves regarding Greenland, Panama, and Venezuela, describing them as a return to "might is right" expansionism. NUMBER 9
THE MONROE DOCTRINE AND NAVAL POWER Colleague Gregory Copley. Gregory Copley continues, debating whether the US is a naval or continental power in the context of enforcing the Monroe Doctrine and discussing a proposal for new battleships. NUMBER 10
THE DECLINE OF LITERACY AND CONTEXT Colleague Gregory Copley. Gregory Copley continues, discussing the decline of literacy and context since the mid-20th century, comparing modern society to the Eloi and Morlocks of H.G. Wells. NUMBER 11
KING CHARLES III AND UK POLITICAL TURMOIL Colleague Gregory Copley. Gregory Copley continues, analyzing the challenges King Charles III faces under the Keir Starmer government, which Copley compares to the era of Oliver Cromwell. NUMBER 12
THE LEGEND OF THE HESSIANS Colleague Professor Richard Bell. Professor Richard Bell discusses the American fear of Hessian soldiers and Washington's strategic victory at Trenton. NUMBER 13
FRANCE'S GLOBAL STRATEGY IN THE REVOLUTION Colleague Professor Richard Bell. Professor Richard Bell continues, highlighting the role of Foreign Minister Vergennes and how French involvement expanded the war globally. NUMBER 14
BENEDICT ARNOLD AND PEGGY SHIPPEN Colleague Professor Richard Bell. Professor Richard Bell continues, discussing Peggy Shippen's influence on Benedict Arnold's defection and their subsequent life in London. NUMBER 15
THE ACCIDENTAL COLONIZATION OF AUSTRALIA Colleague Professor Richard Bell. Professor Richard Bell concludes, recounting the story of convict William Murray and the accidental selection of Australia as a penal colony following the loss of the American colonies. NUMBER 16
THE BORING BENEFITS OF AI Colleague Kevin Frazier. Kevin Frazier advocates for the "boring use cases" of AI, such as in healthcare and traffic management, to save costs and improve efficiency. NUMBER 7
JANUARY 1951
REGULATING ARTIFICIAL INTELLIGENCE Colleague Kevin Frazier. Kevin Frazier continues, warning against a "waterfall of regulation" by states and advocating for "regulatory sandboxes" to allow experimentation. NUMBER 8
NOVEMBER 1955
PREVIEW WARNING AGAINST FRAGMENTED STATE-LEVEL AI REGULATION Colleague Kevin Frazier. Kevin Frazier, a University of Texas Law School fellow, warns against fragmented AI regulation by individual states seeking tax revenue. He advocates for a national framework rather than hasty local laws, arguing that allowing technology to develop through "trial and error" is superior to heavy-handed, immediate restrictions.
In this rapid response episode, Lawfare senior editors Alan Rozenshtein and Kevin Frazier and Lawfare Tarbell fellow Jakub Kraus discuss President Trump's new executive order on federal preemption of state AI laws, the politics of AI regulation and the split between Silicon Valley Republicans and MAGA populists, and the administration's decision to allow Nvidia to export H200 chips to China.
Mentioned in this episode:
Executive Order: Ensuring a National Policy Framework for Artificial Intelligence
Charlie Bullock, "Legal Issues Raised by the Proposed Executive Order on AI Preemption," Institute for Law & AI
Hosted on Acast. See acast.com/privacy for more information.
Graham Dufault, General Counsel at ACT | The App Association, joins Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to explore how small- and medium-sized enterprises (SMEs) are navigating the EU's AI regulatory framework. The duo breaks down the Association's recent survey of more than 1,000 SMEs, which assessed their views on regulation and adoption of AI.
Follow Graham: @GDufault and ACT | The App Association: @actonline
Hosted on Acast. See acast.com/privacy for more information.
Caleb Withers, a researcher at the Center for a New American Security, joins Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to discuss how frontier models shift the balance in favor of attackers in cyberspace. The two discuss how labs and governments can take steps to address these asymmetries favoring attackers, and the future of cyber warfare driven by AI agents. Jack Mitchell, a student fellow in the AI Innovation and Law Program at the University of Texas School of Law, provided excellent research assistance on this episode. Check out Caleb's recent research here. Find Scaling Laws on the Lawfare website, and subscribe to never miss an episode. To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute. Support this show http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.
Andrew Prystai, CEO and co-founder of Vesta, and Thomas Bueler-Faudree, co-founder of August Law, join Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to think through AI policy from the startup perspective. Andrew and Thomas are the sorts of entrepreneurs that politicians on both sides of the aisle invoke at town halls and in press releases. They're creating jobs and pushing the technological frontier. So what do they want AI policy leaders to know as lawmakers across the country weigh regulatory proposals? That's the core question of the episode. Giddy up for a great chat!
Learn more about the guests and their companies here:
Andrew's LinkedIn, Vesta's LinkedIn
Thomas's LinkedIn, August's LinkedIn
Hosted on Acast. See acast.com/privacy for more information.
Jeff Bleich, General Counsel at Anthropic, former Chief Legal Officer at Cruise, and former Ambassador to Australia during the Obama administration, joins Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to get a sense of how the practice of law looks at the edge of the AI frontier. The two also review how Jeff's prior work in the autonomous vehicle space prepared him for the challenges and opportunities posed by navigating legal uncertainties in AI governance. Hosted on Acast. See acast.com/privacy for more information.
Anton Korinek, a professor of economics at the University of Virginia and newly appointed economist to Anthropic's Economic Advisory Council; Nathan Goldschlag, Director of Research at the Economic Innovation Group; and Bharat Chander, Economist at Stanford Digital Economy Lab, join Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to sort through the myths, truths, and ambiguities that shape the important debate around the effects of AI on jobs. They discuss what happens when machines begin to outperform humans in virtually every computer-based task, how that transition might unfold, and what policy interventions could ensure broadly shared prosperity. These three are prolific researchers. Give them a follow to find their latest works: Anton: @akorinek on X; Nathan: @ngoldschlag and @InnovateEconomy on X; Bharat: @BharatKChandar on X, @bharatchandar on LinkedIn and Substack. Find Scaling Laws on the Lawfare website, and subscribe to never miss an episode. To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute. Support this show at http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.
A fun chat with comedian and actor ARIES SPEARS, who is performing at The Improv in Schaumburg this weekend with 7 and 9:30 shows on Friday; 3:30, 7, and 9:30 on Saturday; and 6:00 on Sunday. ARIES SPEARS Biography: Ever since New York native Aries Spears was 14 years old, he has been a force to be reckoned with on the American comedy scene. His quick wit, charisma, and ferociously aggressive style of comedy have earned him critical acclaim, high accolades, and above all, a busy schedule. From being a regular on Fox's Mad TV, to starring in feature films, appearing on a number of national talk shows, and continually touring the country, where he sells out each and every city, Aries's talents are widespread: he just passed 3 million spins on Pandora, and his podcast is one of the "musts" in the US. This past summer, Aries guest starred in "See It Loud," CNN's new summer series on the history of Black television and one of the network's hit docuseries of 2023, alongside Amanda Seales, Da'Vinchi, Debbie Allen, Deon Cole, Desus & Mero, Gabrielle Union, Jimmie Walker, Judge Greg Mathis, Kevin Frazier, Loni Love, Lynn Whitfield, Mo'Nique, Naturi Naughton, Omari Hardwick, Ray J, Ruben Studdard, Sherri Shepherd, Tatyana Ali, Tiffany Pollard, Tisha Campbell, Vivica A. Fox, and many more. Trailer: https://tinyurl.com/AriesCNNSeeItLoud. As a principal cast member on Fox's hit sketch comedy show Mad TV, Aries brought a fresh, hip style to the already-edgy program from the third through the tenth season.
The producers made full use of Aries's talents, calling on him frequently to create hilarious new characters and write sketches, and he thrived for eight seasons. Some of the many recurring characters Aries is famous for on the show include Belma Buttons, Bill Cosby, Jesse Jackson, Mike Tyson, Walter (Crackheads), Reggie (Erascist), Dollar Bill Montgomery, Shaquille O'Neal, The Klumps, Michael Jackson, Sisqo, Evander Holyfield, El Diablo Negro, and more. Aries also boasts a number of uproarious impersonations on the show, including James Brown, Al Pacino, and his childhood idol Eddie Murphy. The Aries-branded sketch "Talkin' American" was Mad TV's most popular bit, keeping fans tuned in and boosting Mad's Saturday-night ratings impressively. Aries has also found great success in feature films. At the age of 17, after being spotted in a comedy club, Aries landed a part in the movie Home of Angels, which eventually led to a starring role alongside John Leguizamo in TriStar's The Pest, followed by a notable role as Cuba Gooding, Jr.'s brother in Columbia's blockbuster hit Jerry Maguire. Aries maintains a strong connection to his African roots and is involved in many charitable events, including a number of organizations that help abused women. He feels blessed to have been able to achieve everything he has, but he in no way intends to stop there. Given the growing success of action comedies coming out of Hollywood, it's safe to say that Aries Spears will be in demand for quite some time. https://www.ariesspears.com
Regulating AI and Protecting Children. Kevin Frazier (Law School Fellow at the University of Texas at Austin) addresses the growing concern over AI chatbots following tragedies, noting that while only 1.9% of ChatGPT conversations relate to "relationships," this fraction still warrants significant attention. He criticizes early state legislative responses, such as Illinois banning AI therapy tools, arguing that such actions risk denying mental health support to children who cannot access human therapists. Frazier advocates against imposing restrictive statutory law on the rapidly evolving technology. Instead, he recommends implementing a voluntary, standardized rating system, similar to the MPA film rating system. This framework would provide consumers with digestible information via labels—like "child safe" or "mental health appropriate"—to make informed decisions and incentivize industry stakeholders to develop safer applications. 1919
PREVIEW. The Crisis of AI Literacy: Protecting Vulnerable Communities from Misusing Chatbots. Kevin Frazier discusses the dangers of young people misusing AI chatbots amid a significant lack of public awareness and basic AI literacy. Designers assume users understand that chatbots are merely engines of prediction and optimization, not real people with real opinions. Frazier stresses the need to educate consumers on the proper and improper uses of these tools for responsible innovation. 1951
Artificial intelligence isn't just transforming industries—it's redefining freedom, opportunity, and the future of human work. This week on the Let People Prosper Show, I talk with Kevin Frazier, the inaugural AI Innovation and Law Fellow at the University of Texas School of Law, where he leads its groundbreaking new AI Innovation and Law Program. Kevin's at the center of the national conversation on how to balance innovation with accountability—and how to make sure regulation doesn't crush the technological progress that drives prosperity. With degrees from UC Berkeley Law, the Harvard Kennedy School, and the University of Oregon, Kevin brings both a legal and a policy lens to today's most pressing questions about AI, federalism, and the economy. Before joining UT, he served as an Assistant Professor at St. Thomas University College of Law and conducted research for the Institute for Law and AI. His scholarship has appeared in the Tennessee Law Review, MIT Technology Review, and Lawfare. He also co-hosts the Scaling Laws podcast, bridging the gap between innovation and regulation. This episode goes deep into how we can harness AI to promote human flourishing, not government dependency—how we can regulate based on reality, not fear—and how federalism can help America remain the global leader in technological innovation. For more insights, visit vanceginn.com. You can also get even greater value by subscribing to my Substack newsletter at vanceginn.substack.com. Please share with your friends, family, and broader social media network.
California State Senator Scott Wiener, author of Senate Bill 53—a frontier AI safety bill signed into law by Governor Newsom earlier this month—joins Alan Rozenshtein, Associate Professor at Minnesota Law and Research Director at Lawfare, and Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to explain the significance of SB 53 in the larger debate about how to govern AI. The trio analyzes the lessons that Senator Wiener learned from the battle over SB 1047, a related bill that Newsom vetoed last year, explores SB 53's key provisions, and forecasts what may be coming next in Sacramento and D.C. Find Scaling Laws on the Lawfare website, and subscribe to never miss an episode. To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute. Support this show at http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.
Mosharaf Chowdhury, Associate Professor at the University of Michigan and Director of the ML Energy lab, and Dan Zhou, former Senior Research Scientist at the MIT Lincoln Lab, MIT Supercomputing Center, and MIT CSAIL, join Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to discuss the energy costs of AI. They break down exactly how much energy fuels a single ChatGPT query, why this is difficult to figure out, how we might improve energy efficiency, and what kinds of policies might minimize AI's growing energy and environmental costs. Leo Wu provided excellent research assistance on this podcast. Read more from Mosharaf: The ML Energy Initiative; "We did the math on AI's energy footprint. Here's the story you haven't heard," in MIT Technology Review. Read more from Dan: "From Words to Watts: Benchmarking the Energy Costs of Large Language Model Inference," in Proc. IEEE High Perform. Extreme Comput. Conf. (HPEC); "A Green(er) World for A.I.," in IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW). Find Scaling Laws on the Lawfare website, and subscribe to never miss an episode. To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute. Support this show at http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.
HEADLINE: AI Regulation Debate: Premature Laws vs. Emerging Norms GUEST NAME: Kevin Frazier SUMMARY: Kevin Frazier critiques the legislative rush to regulate AI, arguing that developing norms might be more effective than premature laws. He notes that bills like California's AB 1047, which demands factual accuracy, fundamentally misunderstand AI's generative nature. Imposing vague standards, as seen in New York's RAISE Act, risks chilling innovation and preventing widespread benefits, like affordable legal or therapy tools. Frazier emphasizes that AI policy should be grounded in empirical data rather than speculative fears. 1960
David Sullivan, Executive Director of the Digital Trust & Safety Partnership, and Ravi Iyer, Managing Director of the Psychology of Technology Institute at USC's Neely Center, join Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to discuss the evolution of the Trust & Safety field and its relevance to ongoing conversations about how best to govern AI. They discuss the importance of thinking about the end user in regulation, debate the differences and similarities between social media and AI companions, and evaluate current policy proposals. Leo Wu provided excellent research assistance to prepare for this podcast. Read more from David: "Why we need to make safety the product to build better bots," from the World Economic Forum Centre for AI Excellence; "Learning from the Past to Shape the Future of Digital Trust and Safety," in Tech Policy Press. Read more from Ravi: "Ravi Iyer on How to Improve Technology Through Design," from Lawfare's Arbiters of Truth series; "Regulate Design, not Speech," from the Designing Tomorrow Substack. Read more from Kevin: "California in Your Chatroom: AB 1064's Likely Constitutional Overreach," from the Cato Institute. Find Scaling Laws on the Lawfare website, and subscribe to never miss an episode. To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute. Support this show at http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.
From September 20, 2024: Bob Bauer, Professor of Practice and Distinguished Scholar in Residence at New York University School of Law, and Liza Goitein, Senior Director of Liberty & National Security at the Brennan Center, join Kevin Frazier, Assistant Professor at St. Thomas University College of Law and a Tarbell Fellow at Lawfare, to review the emergency powers afforded to the president under the National Emergencies Act, the International Emergency Economic Powers Act, and the Insurrection Act. The trio also inspects ongoing bipartisan efforts to reform emergency powers. To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute. Support this show at http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.
Neil Chilson, Head of AI Policy at the Abundance Institute, and Gus Hurwitz, Senior Fellow and CTIC Academic Director at Penn Carey Law School and Director of Law & Economics Programs at the International Center for Law & Economics, join Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to explore how academics can overcome the silos and incentives that plague the Ivory Tower and positively contribute to the highly complex, evolving, and interdisciplinary work associated with AI governance. The trio recorded this podcast live at the Institute for Humane Studies's Technology, Liberalism, and Abundance Conference in Arlington, Virginia. Read about Kevin's thinking on the topic here: https://www.civitasinstitute.org/research/draining-the-ivory-tower. Learn about the conference: https://www.theihs.org/blog/curated-event/technology-abundance-and-liberalism/. Find Scaling Laws on the Lawfare website, and subscribe to never miss an episode. To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute. Support this show at http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.
HEADLINE: Russian Spy Ships Target Vulnerable Undersea Communication Cables GUEST NAME: Kevin Frazier 50 WORD SUMMARY: Undersea cables are highly vulnerable to sabotage or accidental breaks. Russia uses sophisticated naval technology, including the spy ship Yantar, to map and potentially break these cables in sensitive locations. The US is less vulnerable due to redundancy. However, protection is fragmented, relying on private owners who often lack incentives to adopt sophisticated defense techniques. 1945 RED SQUARE
Preview: Kevin Frazier discusses the extreme vulnerability and fragmented state of undersea cables, the vast majority of which are privately owned. The Department of Defense relies on these systems, which lack sufficient protection due to high costs. Frazier highlights recent reports that the Russian ship Yantar, operated by the GRU, is tracking and mapping these vital cables near Great Britain in preparation for a potential conflict.
Kevin Frazier testified that Congress needs a national vision to manage data center infrastructure and mitigate local impacts. He stressed that vulnerable undersea cables are neglected and urged academics to prioritize teaching and public-oriented research. 1939
Preview: Kevin Frazier of the University of Texas Law School and the Civitas Institute discusses congressional concerns over AI regulation, balancing state interests against the federal goals of preventing cross-state policy projection and prioritizing national AI innovation and growth.
From September 18, 2024: Jane Bambauer, Professor at the Levin College of Law; Ramya Krishnan, Senior Staff Attorney at the Knight First Amendment Institute and a lecturer in law at Columbia Law School; and Alan Rozenshtein, Associate Professor of Law at the University of Minnesota Law School and a Senior Editor at Lawfare, join Kevin Frazier, Assistant Professor at St. Thomas University College of Law and a Tarbell Fellow at Lawfare, to break down the D.C. Circuit Court of Appeals' hearing in TikTok v. Garland, in which a panel of judges assessed the constitutionality of the TikTok bill. To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute. Support this show at http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.
Steven Adler, former OpenAI safety researcher, author of Clear-Eyed AI on Substack, and independent AGI-readiness researcher, joins Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and Senior Fellow at Lawfare, to assess the current state of AI testing and evaluations. The two walk through Steven's views on industry efforts to improve model testing and what he thinks regulators ought to know and do when it comes to preventing AI harms. Thanks to Leo Wu for research assistance! Find Scaling Laws on the Lawfare website, and subscribe to never miss an episode. To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute. Support this show at http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.