Connecticut State Senator James Maroney and Neil Chilson, Head of AI Policy at the Abundance Institute, join Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, and Alan Rozenshtein, Associate Professor at Minnesota Law and Research Director at Lawfare, for a look back at a wild year in AI policy. Neil provides his expert analysis of all that did (and did not) happen at the federal level. Senator Maroney then examines what transpired across the states. The four then offer their predictions for what seems likely to be an even busier 2026. Find Scaling Laws on the Lawfare website, and subscribe to never miss an episode. To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute. Support this show: http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.
Ziad Reslan, a member of OpenAI's Product Policy Staff and a Senior Fellow with the Schmidt Program on Artificial Intelligence, Emerging Technologies, and National Power at Yale University, joins Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to talk about iterative deployment--the lab's approach to testing and deploying its models. It's a complex and, at times, controversial approach. Ziad provides the rationale behind iterative deployment and tackles some questions about whether the strategy has always worked as intended. Hosted on Acast. See acast.com/privacy for more information.
University of Texas School of Law professor Mechele Dickerson details her new book The Middle-Class New Deal: Restoring Upward Mobility and the American Dream. Then The Associated Press' Mary Clare Jalonick examines her new book Storm at the Capitol: An Oral History of January 6th. See omnystudio.com/listener for privacy information.
Plus the bizarre story out of Michigan, and making sure Texas keeps the lights on in an AI world
Every so often, I'll re-publish some of my favorite How I Write interviews. This classic episode is with Ward Farnsworth, a law professor and former dean at the University of Texas School of Law who has written popular books about clear thinking, language, and philosophy. His books include Classical English Style and works on rhetoric and legal writing.
Get 60 days free of Readwise Reader at https://readwise.io/davidperell/
Check out Ward's website: http://wardfarnsworth.com/
00:05:09 Example 1 (King James Bible)
00:07:25 Example 2 (Winston Churchill)
00:12:01 Example 3 (Winston Churchill)
00:15:35 Example 4 (King James Bible)
00:18:34 Example 5 (Abraham Lincoln)
00:23:13 Example 6 (Oliver Wendell Holmes, Jr.)
00:26:56 Classical English Rhetoric
00:27:56 Example 7 (Abraham Lincoln)
00:30:15 Example 8 (Abraham Lincoln)
00:32:04 The only app I use to read articles [Readwise Reader]
00:33:31 Example 9 (Winston Churchill)
00:36:09 Example 10 (Lloyd Bentsen)
00:38:50 Example 11 (JFK)
00:42:16 Example 12 (Abraham Lincoln)
00:43:56 Example 13 (Henry Fielding, Tom Jones)
00:45:48 Example 14 (King James Bible)
00:47:58 The 3 Techniques, explained
00:52:40 Practical advice for everyone
00:56:24 The ideal writing curriculum
About the host: Hey! I'm David Perell and I'm a writer, teacher, and podcaster. I believe writing online is one of the biggest opportunities in the world today. For the first time in human history, everybody can freely share their ideas with a global audience. I seek to help as many people publish their writing online as possible.
Follow me:
Apple: https://podcasts.apple.com/us/podcast/how-i-write/id1700171470
YouTube: https://www.youtube.com/@DavidPerellChannel
Spotify: https://open.spotify.com/show/2DjMSboniFAeGA8v9NpoPv
X: https://x.com/david_perell
Learn more about your ad choices. Visit megaphone.fm/adchoices
Graham Dufault, General Counsel at ACT | The App Association, joins Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to explore how small- and medium-sized enterprises (SMEs) are navigating the EU's AI regulatory framework. The duo break down the Association's recent survey of SMEs, which gathered the views of more than 1,000 enterprises on AI regulation and adoption. Follow Graham: @GDufault and ACT | The App Association: @actonline Hosted on Acast. See acast.com/privacy for more information.
Caleb Withers, a researcher at the Center for a New American Security, joins Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to discuss how frontier models shift the balance in favor of attackers in cyberspace. The two discuss how labs and governments can take steps to address these asymmetries favoring attackers, and the future of cyber warfare driven by AI agents. Jack Mitchell, a student fellow in the AI Innovation and Law Program at the University of Texas School of Law, provided excellent research assistance on this episode. Check out Caleb's recent research here. Find Scaling Laws on the Lawfare website, and subscribe to never miss an episode. To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute. Support this show: http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.
Should Camp Mystic re-open? Would you send your child there? Can the Cowboys win in Detroit?
Randy Schaffer is a criminal defense attorney. He received his bachelor's degree from the University of Texas in 1970 and earned his JD from the University of Texas School of Law in 1973. Licensed to practice before the US Supreme Court, multiple US Courts of Appeals, multiple US District Courts, and the Texas state courts, he is a member of the American Board of Criminal Lawyers, National Association of Criminal Defense Lawyers, Texas Criminal Defense Lawyers Association, Harris County Criminal Defense Lawyers Association, American Bar Association, and Houston Bar Association. He has written numerous published articles and papers, spoken at seminars for various organizations, and been featured in many newspapers and magazines. Randy and his wife Mollie have two sons and live in Houston, Texas.
In today's episode, we're addressing an often-overlooked topic in dentistry—managing dental patients with Sickle Cell Disease and Sickle Cell Trait. These patients face unique challenges, and unfortunately, many are turned away from dental practices simply because their condition isn't well understood. Our guest, Linda Chandler, RDH, is a graduate of the University of Texas School of Dental Hygiene, a past President of the Southeast National Dental Hygiene Association, an author and educator, and was voted Top Hygienist of the Year in 2012.
This week Steve and Yvonne interview Brian Beckcom of VB Attorneys (https://www.vbattorneys.com/). Remember to rate and review GTP in iTunes.
Case Details: The crew of the Maersk Alabama was attacked and taken hostage by Somali pirates. The events were depicted in the Hollywood blockbuster "Captain Phillips," starring Tom Hanks. Brian and his firm countered a massive Hollywood publicity campaign that portrayed Captain Phillips as a hero and the crew as lazy layabouts, when in fact the exact opposite was true. The case resulted in a confidential settlement for each crew member. It also changed the way the shipping industry provides security to the men and women who travel in dangerous, pirate-infested waters.
Guest Bio: Brian Beckcom is one of the leading lawyers of his generation. Brian's peers have voted him a Texas Super Lawyer 14 years in a row, every year he has been eligible. Brian is also a Board Certified Expert in Personal Injury Trial Law by the Texas Board of Legal Specialization, a recognition shared by less than 2% of lawyers. Brian has obtained hundreds of millions of dollars for his clients, and he and his law firm have obtained record-setting settlements and verdicts in a wide variety of significant legal cases. Brian is also a computer scientist and philosopher. He created and hosts the popular podcast "Lessons from Leaders with Brian Beckcom." Brian is a military "brat," a purple belt in Brazilian jiu-jitsu, a single-digit golfer, a former college basketball player at Texas A&M, a four-year member of the Texas A&M Corps of Cadets, and an accomplished freshwater and saltwater fly fisherman. Brian is an honors graduate of the University of Texas School of Law. He is the author of 6 books and hundreds of articles on a wide variety of topics. Brian has successfully prosecuted many high-profile cases.
The national and international media have covered Brian's work on these complex cases.
Show Sponsors: Legal Technology Services; Harris Lowry Manton LLP (hlmlawfirm.com)
Production Team: Dee Daniels Media Podcast Production
Free Resources: Stages of a Jury Trial, Part 1 and Part 2
Andrew Prystai, CEO and co-founder of Vesta, and Thomas Bueler-Faudree, co-founder of August Law, join Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to think through AI policy from the startup perspective. Andrew and Thomas are the sorts of entrepreneurs that politicians on both sides of the aisle talk about at town halls and in press releases. They're creating jobs and pushing the technological frontier. So what do they want AI policy leaders to know as lawmakers across the country weigh regulatory proposals? That's the core question of the episode. Giddy up for a great chat!
Learn more about the guests and their companies here: Andrew's LinkedIn, Vesta's LinkedIn, Thomas's LinkedIn, August's LinkedIn.
Hosted on Acast. See acast.com/privacy for more information.
Jeff Bleich, General Counsel at Anthropic, former Chief Legal Officer at Cruise, and former Ambassador to Australia during the Obama administration, joins Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to get a sense of how the practice of law looks at the edge of the AI frontier. The two also review how Jeff's prior work in the autonomous vehicle space prepared him for the challenges and opportunities posed by navigating legal uncertainties in AI governance. Hosted on Acast. See acast.com/privacy for more information.
Anton Korinek, a professor of economics at the University of Virginia and newly appointed economist on Anthropic's Economic Advisory Council; Nathan Goldschlag, Director of Research at the Economic Innovation Group; and Bharat Chandar, Economist at the Stanford Digital Economy Lab, join Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to sort through the myths, truths, and ambiguities that shape the important debate around the effects of AI on jobs. They discuss what happens when machines begin to outperform humans in virtually every computer-based task, how that transition might unfold, and what policy interventions could ensure broadly shared prosperity.
These three are prolific researchers. Give them a follow to find their latest work:
Anton: @akorinek on X
Nathan: @ngoldschlag and @InnovateEconomy on X
Bharat: @BharatKChandar on X, @bharatchandar on LinkedIn and Substack
Find Scaling Laws on the Lawfare website, and subscribe to never miss an episode. To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute. Support this show: http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.
Show Notes
As artificial intelligence begins generating music from vast datasets of human art, a fundamental question emerges: who truly owns the sound of AI? This episode of Music Evolves brings together law student and former musician Chandler Lawn, music industry executive and professor Drew Thurlow, Michael Sheldrick, Co-Founder of Global Citizen, and intellectual property attorney Puya Partow-Navid, alongside hosts Sean Martin and Marco Ciappelli, to examine how AI is reshaping authorship, licensing, and the meaning of originality.
The panel explores how AI democratizes creation while exposing deep ethical and economic gaps. Lawn raises the issue of whether artists whose works trained AI models deserve compensation, asking if innovation can be ethical when built on uncompensated labor. Thurlow highlights how, despite fears of automation, generative AI music accounts for less than 1% of streaming royalties—suggesting opportunity, not replacement.
Sheldrick connects the conversation to a broader global context, describing how music's economic potential could drive sustainable development if nations modernize copyright frameworks. He views this shift as a rare chance to position creative industries as engines for jobs and growth.
Partow-Navid grounds the discussion in legal precedent, pointing to landmark cases—from 2 Live Crew to George R. R. Martin—as markers of how courts may interpret fair use, causality, and global jurisdiction in AI-driven creation.
Together, the guests agree that the debate extends beyond legality. It's about the emotional authenticity that makes music human.
As Chandler notes, "We connect through imperfection." Marco adds that live performance may ultimately anchor value in a world saturated by digital replication. This conversation captures the tension, and the promise, of a future where music, technology, and law must learn to play in harmony.
Guests:
Chandler Lawn, AI Innovation and Law Fellow at The University of Texas School of Law | On LinkedIn: https://www.linkedin.com/in/chandlerlawn/
Drew Thurlow, Adjunct Professor at Berklee College of Music | On LinkedIn: https://www.linkedin.com/in/drewthurlow/
Michael Sheldrick, Co-Founder and Chief Policy, Impact and Government Affairs Officer at Global Citizen | On LinkedIn: https://www.linkedin.com/in/michael-sheldrick-30364051/
Puya Partow-Navid, Partner at Seyfarth Shaw LLP | On LinkedIn: https://www.linkedin.com/in/puyapartow/
Marco Ciappelli, Co-Founder, ITSPmagazine and Studio C60 | Website: https://www.marcociappelli.com
Host:
Sean Martin, Co-Founder at ITSPmagazine, Studio C60, and Host of Redefining CyberSecurity Podcast & Music Evolves Podcast | Website: https://www.seanmartin.com/
Resources:
Legal Publication: "You Can't Always Get What You Want: A Survey of AI-Related Copyright Considerations for the Music Industry," published in Vol. 32, No. 3 of the Texas State Bar Entertainment and Sports Law Journal.
BOOK: Machine Music: How AI Is Transforming Music's Next Act by Drew Thurlow: https://www.routledge.com/Machine-Music-How-AI-is-Transforming-Musics-Next-Act/Thurlow/p/book/9781032425242
BOOK: From Ideas to Impact: A Playbook for Influencing and Implementing Change in a Divided World by Michael Sheldrick: https://www.fromideastoimpact.com/
AI and Copyright Blogs:
https://www.gadgetsgigabytesandgoodwill.com/category/ai/
https://www.gadgetsgigabytesandgoodwill.com/2025/11/dr-thaler-is-right-in-part/
https://www.gadgetsgigabytesandgoodwill.com/2025/07/californias-ai-law-has-set-rules-for-generative-ai-are-you-ready/
https://www.gadgetsgigabytesandgoodwill.com/2025/06/copyright-office-firings-spark-constitutional-concerns-amid-ai-policy-tensions/
Newsletter (Article, Video, Podcast): The Human Touch in a Synthetic Age: Why AI-Created Music Raises More Than Just Eyebrows: https://www.linkedin.com/pulse/human-touch-synthetic-age-why-ai-created-music-raises-martin-cissp-s9m7e/
Article: Universal and Sony Music partner with new platform to detect AI music copyright theft using 'groundbreaking neural fingerprinting' technology: https://www.musicbusinessworldwide.com/universal-and-sony-music-partner-with-new-platform-to-detect-ai-music-copyright-theft-using-groundbreaking-neural-fingerprinting-technology/
Article: When Virtual Reality Is a Commodity, Will True Reality Come at a Premium: https://sean-martin.medium.com/when-virtual-reality-is-a-commodity-will-true-reality-come-at-a-premium-4a97bccb4d72
Global Citizen: https://www.globalcitizen.org/
Gallo Music (Gallo Records, South Africa): https://www.gallo.co.za/
Global Citizen Festival: https://www.globalcitizen.org/en/festival/
Andy Warhol Foundation v. Goldsmith (fair use of a Prince photograph): https://supreme.justia.com/cases/federal/us/598/21-869/case.pdf
George R. R. Martin / Authors Guild v. OpenAI (AI training lawsuit): https://authorsguild.org/news/ag-and-authors-file-class-action-suit-against-openai/
Campbell v. Acuff-Rose Music, Inc. (2 Live Crew, "Pretty Woman"): https://supreme.justia.com/cases/federal/us/510/569/
Vanilla Ice / "Under Pressure" Sampling Case: https://blogs.law.gwu.edu/mcir/case/queen-david-bowie-v-vanilla-ice/
MIDiA Research AI in Music Reports: https://www.midiaresearch.com/reports/ai-and-the-future-of-music-the-future-is-already-here
Merlin (Global Independent Rights Organization): https://www.merlinnetwork.org/
Instagram Reel re: Spotify Terms: https://www.instagram.com/reel/DOrgbUNCYj_/
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Gabriel Nicholas, a member of the Product Public Policy team at Anthropic, joins Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to introduce the policy problems (and some solutions) posed by AI agents. AI agents, defined as tools capable of autonomously completing tasks on your behalf, are widely expected to soon become ubiquitous. The integration of AI agents into sensitive tasks presents a slew of technical, social, economic, and political questions. Gabriel walks through the weighty questions that labs are thinking through as AI agents finally become "a thing." Hosted on Acast. See acast.com/privacy for more information.
Artificial intelligence isn't just transforming industries—it's redefining freedom, opportunity, and the future of human work. This week on the Let People Prosper Show, I talk with Kevin Frazier, the inaugural AI Innovation and Law Fellow at the University of Texas School of Law, where he leads their groundbreaking new AI Innovation and Law Program.
Kevin's at the center of the national conversation on how to balance innovation with accountability—and how to make sure regulation doesn't crush the technological progress that drives prosperity. With degrees from UC Berkeley Law, Harvard Kennedy School, and the University of Oregon, Kevin brings both a legal and policy lens to today's most pressing questions about AI, federalism, and the economy. Before joining UT, he served as an Assistant Professor at St. Thomas University College of Law and conducted research for the Institute for Law and AI. His scholarship has appeared in the Tennessee Law Review, MIT Technology Review, and Lawfare. He also co-hosts the Scaling Laws podcast, bridging the gap between innovation and regulation.
This episode goes deep into how we can harness AI to promote human flourishing, not government dependency—how we can regulate based on reality, not fear—and how federalism can help America remain the global leader in technological innovation.
For more insights, visit vanceginn.com. You can also get even greater value by subscribing to my Substack newsletter at vanceginn.substack.com. Please share with your friends, family, and broader social media network.
This week, we're heading to Texas! My guest is Cris Duncan, the new head of the legendary Texas School of Professional Photography, one of the biggest and most loved PPA affiliate schools in the country.
Cris and his wife, Deanna, run CJ Duncan Photography in Lubbock, creating beautiful portrait and commercial work. He's also a PPA Juror, CPP Instructor, founder of Find Your Focus Photographic Education™, and recipient of the PPA Education Award.
We chat about his journey into photography, his passion for teaching, and why in-person education still makes all the difference. Cris shares what makes Texas School so special and how connection and community fuel creativity.
Here's what we cover:
- Cris's start in photography and the mentors who shaped his path
- What Texas School offers: 32 week-long classes, 8 pre-cons, and a one-of-a-kind community
- Why live education matters: the difference between information, understanding, and true wisdom
- How competition and critique help photographers grow and build credibility
If you've ever thought about attending Texas School or you're ready to reignite your passion for learning, this episode will inspire you to keep growing. Registration for Texas School 2026 opens January 3rd, and spots fill fast! Learn more at texasschool.org and visit cjduncan.com.
PS… Check out my TWO gifts on my website about marketing your photography: www.lucidumascoaching.com
Connect with Photography Business Coach Luci Dumas: Website | Email: luci@lucidumas.com | Instagram | Facebook | YouTube
New episodes drop every week — make sure to subscribe so you never miss an inspiring guest or a powerful solo episode designed to help you grow your photography business.
Jim Curry is the co-founder and CEO of BuildGroup, a venture firm based in Austin that has raised $330 million since its founding in 2015 and backed companies like Anaconda, Vidmob, DigniFi, and Benefitfocus. He brings more than two decades of experience in product, strategy, and corporate development from roles at Rackspace and Dell, and he co-founded OpenStack, one of the most widely used open source cloud computing platforms. Jim serves on the boards of Generation Serve and the University of Texas School of Undergraduate Studies. He holds degrees from UT Austin and Harvard Business School.
In this conversation, we discuss:
- Jim's journey from Rackspace to launching BuildGroup and why he believes in "longer, slower capital" to support mission-driven founders
- How his experience co-founding OpenStack shaped his thinking on community-driven innovation and open-source software
- What AI startups can learn from the cloud era, and why infrastructure still matters in the age of foundation models
- Why Jim believes VCs often push startups to scale too fast and what sustainable growth looks like in practice
- The impact of AI on venture capital and how BuildGroup thinks about investing in software companies that solve real problems
- How founders can balance product vision with pragmatism, especially when building in volatile markets
Resources:
- Subscribe to the AI & The Future of Work Newsletter
- Connect with Jim on LinkedIn
- AI fun fact article
- On How to Develop NLP and AI Data Harvesting Using Games and Blockchains To Earn NFTs
California State Senator Scott Wiener, author of Senate Bill 53—a frontier AI safety bill signed into law by Governor Newsom earlier this month—joins Alan Rozenshtein, Associate Professor at Minnesota Law and Research Director at Lawfare, and Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to explain the significance of SB 53 in the larger debate about how to govern AI. The trio analyze the lessons that Senator Wiener learned from the battle over SB 1047, a related bill that Newsom vetoed last year, explore SB 53's key provisions, and forecast what may be coming next in Sacramento and D.C. Find Scaling Laws on the Lawfare website, and subscribe to never miss an episode. To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute. Support this show: http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.
California's push to regulate artificial intelligence could reshape national markets and sideline smaller developers. But critics say no single state should dictate the future of AI policy—and that Congress must act now to establish clear standards that protect innovation and prevent a patchwork of conflicting mandates. Here to explain why that matters is Professor Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law.See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
Sarah Isgur and David French kick off Free Speech Week at the University of Texas School of Law to talk about the confusion around the Callais oral arguments, "Let's Go Brandon," and the John Bolton indictment.
The Agenda:
- Let's Go Brandon
- "Everybody seems to be having a good time"
- No one understands Callais
- Lawfare and selective prosecutions
- Q&A!
Advisory Opinions is a production of The Dispatch, a digital media company covering politics, policy, and culture from a non-partisan, conservative perspective. To access all of The Dispatch's offerings—including access to all of our articles, members-only newsletters, and bonus podcast episodes—click here. If you'd like to remove all ads from your podcast experience, consider becoming a premium Dispatch member by clicking here. Learn more about your ad choices. Visit megaphone.fm/adchoices
Mosharaf Chowdhury, Associate Professor at the University of Michigan and Director of the ML Energy lab, and Dan Zhao, former Senior Research Scientist at the MIT Lincoln Lab, MIT Supercomputing Center, and MIT CSAIL, join Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to discuss the energy costs of AI. They break down exactly how much energy fuels a single ChatGPT query, why this is difficult to figure out, how we might improve energy efficiency, and what kinds of policies might minimize AI's growing energy and environmental costs. Leo Wu provided excellent research assistance on this podcast.
Read more from Mosharaf:
- The ML Energy Initiative
- "We did the math on AI's energy footprint. Here's the story you haven't heard," in MIT Technology Review
Read more from Dan:
- "From Words to Watts: Benchmarking the Energy Costs of Large Language Model Inference," in Proc. IEEE High Perform. Extreme Comput. Conf. (HPEC)
- "A Green(er) World for A.I.," in IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW)
Find Scaling Laws on the Lawfare website, and subscribe to never miss an episode. To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute. Support this show: http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.
David Sullivan, Executive Director of the Digital Trust & Safety Partnership, and Ravi Iyer, Managing Director of the Psychology of Technology Institute at USC's Neely Center, join Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to discuss the evolution of the Trust & Safety field and its relevance to ongoing conversations about how best to govern AI. They discuss the importance of thinking about the end user in regulation, debate the differences and similarities between social media and AI companions, and evaluate current policy proposals. Leo Wu provided excellent research assistance to prepare for this podcast.
Read more from David:
- "Why we need to make safety the product to build better bots," from the World Economic Forum Centre for AI Excellence
- "Learning from the Past to Shape the Future of Digital Trust and Safety," in Tech Policy Press
Read more from Ravi:
- "Ravi Iyer on How to Improve Technology Through Design," from Lawfare's Arbiters of Truth series
- "Regulate Design, not Speech," from the Designing Tomorrow Substack
Read more from Kevin:
- "California in Your Chatroom: AB 1064's Likely Constitutional Overreach," from the Cato Institute
Find Scaling Laws on the Lawfare website, and subscribe to never miss an episode. To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute. Support this show: http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.
Support the show: http://www.newcountry963.com/hawkeyeinthemorningSee omnystudio.com/listener for privacy information.
In this week's episode of the NAWL Podcast, host Ashley Carlisle—NAWL member and Co-Chair of NAWL's Startups Affinity Group—sits down with Liz Federowicz, General Counsel at Expa, for a compelling conversation about her multifaceted legal career. Liz shares her journey from the entertainment industry to big law, and ultimately to the fast-paced world of startups. Together, they explore the transition from traditional law firms to startup culture, the unique challenges of working in emerging tech, and how legal professionals are adapting to the rise of AI and other transformative technologies. Whether you're a seasoned attorney or just starting out, this episode offers valuable insights into the future of the legal profession. Learn more about Liz and her work at Expa here! $25 off a prenup at First with the code NAWL25. Bios: Liz Federowicz serves as General Counsel at Expa, a venture capital firm and venture studio that builds and invests in early-stage technology companies. She oversees all legal aspects of fund investments, formation, operations, and newly incorporated startups within the venture studio. As the sole in-house legal counsel for an organization with over 100 portfolio companies and 5-10 studio-incubated companies, Liz provides comprehensive guidance across venture deals, legal strategy, and business affairs. Her path to law was unconventional, beginning with independent films and co-founding a film production company in the early 2000s. Inspired by entertainment attorneys, she focused on intellectual property and business during law school, but ultimately transitioned to the tech sector after graduating. Liz began her legal career in-house at a Los Angeles tech company before joining Fenwick & West, where she developed her expertise in Silicon Valley and Silicon Beach practices before stepping into her role as Expa's GC.
Colleagues recognize Liz for her innovative thinking, deal-making acumen, and hands-on approach with Expa's incubated companies, including legal tech company First, where she additionally serves on the founding team as Head of Legal Product. Ashley Carlisle is a corporate attorney and entrepreneur focused on transforming the legal industry with technology. As a founding team member and CMO of HyperDraft, she helps real estate and financial institutions use AI and automation to streamline their legal documentation. A graduate of the University of Texas School of Law, Ashley practiced corporate law at two global firms before joining HyperDraft and is a frequent voice on AI and automation in industry publications and podcasts.
Public schools across Texas are being put to the test. From Fort Worth to Grapevine, Austin to San Antonio, we'll look at how communities are reacting – and in some cases, pushing back – on plans to shutter schools and shuffle the map for many students. Also, a big fight over redistricting with huge consequences – […] The post Texas school closures raise tough questions appeared first on KUT & KUTX Studios -- Podcasts.
David Sullivan, Executive Director of the Digital Trust & Safety Partnership, and Ravi Iyer, Managing Director of the Psychology of Technology Institute at USC's Neely Center, join Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to discuss the evolution of the Trust & Safety field and its relevance to ongoing conversations about how best to govern AI. They discuss the importance of thinking about the end user in regulation, debate the differences and similarities between social media and AI companions, and evaluate current policy proposals. You'll “like” (bad pun intended) this one. Leo Wu provided excellent research assistance to prepare for this podcast. Read more from David: https://www.weforum.org/stories/2025/08/safety-product-build-better-bots/ and https://www.techpolicy.press/learning-from-the-past-to-shape-the-future-of-digital-trust-and-safety/ Read more from Ravi: https://shows.acast.com/arbiters-of-truth/episodes/ravi-iyer-on-how-to-improve-technology-through-design and https://open.substack.com/pub/psychoftech/p/regulate-value-aligned-design-not?r=2alyy0&utm_campaign=post&utm_medium=web&showWelcomeOnShare=false Read more from Kevin: https://www.cato.org/blog/california-chatroom-ab-1064s-likely-constitutional-overreach Hosted on Acast. See acast.com/privacy for more information.
Less than 2,000 people live in Sunray, Texas, but with our help, they could win $1 million. Cast your daily vote and help this community!
Neil Chilson, Head of AI Policy at the Abundance Institute, and Gus Hurwitz, Senior Fellow and CTIC Academic Director at Penn Carey Law School and Director of Law & Economics Programs at the International Center for Law & Economics, join Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to explore how academics can overcome the silos and incentives that plague the Ivory Tower and positively contribute to the highly complex, evolving, and interdisciplinary work associated with AI governance. The trio recorded this podcast live at the Institute for Humane Studies' Technology, Liberalism, and Abundance Conference in Arlington, Virginia. Read about Kevin's thinking on the topic here: https://www.civitasinstitute.org/research/draining-the-ivory-tower Learn about the Conference: https://www.theihs.org/blog/curated-event/technology-abundance-and-liberalism/ Find Scaling Laws on the Lawfare website, and subscribe to never miss an episode. To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute. Support this show http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.
Most disappearances leave echoes—missing persons flyers, TV reports, police pleas for tips. But when James Robert “Jimmy” Farenthold vanished in the spring of 1989, there was only silence. No bulletin. No headlines. No public outcry. Just absence. Jimmy wasn't just anyone. He was the youngest son of one of Texas's most prominent dynasties, a family bound by oil, politics, and power. But behind the legacy was a private story of grief and dysfunction. Jimmy had been born a twin—and when his brother Vincent died suddenly, Jimmy became the “one who lived,” carrying scars that shaped the rest of his life. Charming yet reckless, Jimmy drifted through addiction, rehab programs, and cities across the South. In April 1989, he promised a fresh start. Bags packed, ticket in hand, he was set to enter a Florida treatment program. Instead, he disappeared. His car, his passport, even his clothes—left behind. What followed was not the frantic search you'd expect for the son of a famous family. Instead, his disappearance became another fracture inside an already divided household. A father chasing rumors. A mother haunted by silence. A family dynasty unraveling. Part 3 of 3 of our series follows Jimmy's apparent final days, the dead ends that followed, and the generational weight of a name built on both power and tragedy. If you have information about the disappearance of James Robert “Jimmy” Farenthold, please contact the San Antonio Police Department at 210-207-8939.
Sources: The Corpus Christi Caller-Times, The Port Aransas South Jetty, The Houston Chronicle, The San Antonio Express-News, Texas Monthly, Texas Observer, texashistory.unt.edu, The Los Angeles Times, The University of Texas School of Law – Frances Tarlton “Sissy” Farenthold Archives Project. You can support gone cold and listen to the show ad-free at https://patreon.com/gonecoldpodcast Find us at https://www.gonecold.com For Gone Cold merch, visit https://gonecold.dashery.com Follow gone cold on Facebook, Instagram, Threads, TikTok, YouTube, and X. Search @gonecoldpodcast on all, or just click https://linknbio.com/gonecoldpodcast #WhereIsJimmyFarenthold #CorpusChristi #CCTX #TX #Texas #TrueCrime #TexasTrueCrime #ColdCase #TrueCrimePodcast #Podcast #Unsolved #Murder #UnsolvedMurder #UnsolvedMysteries #Homicide #CrimeStories #PodcastRecommendations #CrimeJunkie #MysteryPodcast #TrueCrimeObsessed #CrimeDocs #InvestigationDiscovery #PodcastAddict #TrueCrimeFan #CriminalJustice #ForensicFiles Become a supporter of this podcast: https://www.spreaker.com/podcast/gone-cold-texas-true-crime--3203003/support.
John is joined by William T. Reid IV, Senior Founding Partner of Reid Collins & Tsai LLP, and author of Fighting Bullies: The Case for a Career in Plaintiff's Law. They discuss Bill's view that young lawyers are too often funneled into BigLaw careers before they understand the full range of options available in the legal profession—particularly plaintiffs' work. The impetus for Bill's book came from his experience teaching at the University of Texas School of Law and advising students who often expressed frustration at the lack of career guidance and exposure to alternative paths. The law school hiring process, particularly the On-Campus Interview (OCI) process, now often takes place in January of the students' first year—rather than the fall of the students' second year. This, Bill believes, is too soon for students to have meaningful legal experience or career insights. The result is a “conveyor belt” that locks students into BigLaw roles primarily for the salary, often at the expense of passion, fulfillment, and long-term satisfaction. Bill's book makes the case for the personal and professional rewards of plaintiffs' practice. He emphasizes that his firm, Reid Collins, generally brings cases only after extensive pre-suit investigation. This selectivity allows him to accept cases he believes in, which brings deep meaning and satisfaction to his work. He argues that plaintiffs' lawyers, especially those focused on commercial and institutional wrongdoing, play a vital societal role by holding wrongdoers accountable, especially when government agencies fail to act. While not every case—or plaintiff's lawyer—meets a high moral bar, the ability to choose meaningful work and act on principle often leads to a highly satisfying career in law. Finally, John and Bill also discuss the evolution of the legal profession, including how artificial intelligence may reshape law firm structures by increasing efficiency and altering the traditional BigLaw pyramid.
These changes may lead firms to pursue alternatives to traditional hourly billing. Podcast Link: Law-disrupted.fm Host: John B. Quinn Producer: Alexis Hyde Music and Editing by: Alexander Rossi
Steven Adler, former OpenAI safety researcher, author of Clear-Eyed AI on Substack, and independent AGI-readiness researcher, joins Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and Senior Fellow at Lawfare, to assess the current state of AI testing and evaluations. The two walk through Steven's views on industry efforts to improve model testing and what he thinks regulators ought to know and do when it comes to preventing AI harms. Thanks to Leo Wu for research assistance! Find Scaling Laws on the Lawfare website, and subscribe to never miss an episode. To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute. Support this show http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.
Over the past 25 years, the rapid growth of Big Tech has raised questions about competition, innovation, and the ability of smaller startups to thrive. At the same time, regulatory approaches can create uncertainty that affects entrepreneurs in different ways. With Congress hesitant to act decisively, the debate continues: how can policymakers strike a balance that encourages innovation, ensures fair competition, and protects consumers? And when it comes to regulation, should the path forward involve more, or less? Join the Federalist Society's Regulatory Transparency Project for the second episode of the Law for Little Tech series, featuring special guest Samuel Levine, Senior Fellow at the Berkeley Center for Consumer Law & Economic Justice, and led by host Professor Kevin Frazier, AI Innovation & Law Fellow at the University of Texas School of Law.
Joe Stephens operates at the intersection of cutting-edge technology and traditional legal practice. As both a trial consultant and public defender, he helps lawyers harness AI to transform their litigation strategies and case preparation. Joe discusses how he guides other attorneys through AI integration, from analyzing mountains of depositions to crafting more effective motions. In his own public defender work, Joe has implemented AI tools to efficiently process vast amounts of digital discovery and pinpoint crucial moments in hours of police camera footage, allowing him to provide more thorough representation to indigent clients with limited resources. Joe Stephens is a graduate of the University of Texas School of Law. This episode is hosted by Katya Valasek. Mentioned in this episode: Learn more about Vermont Law. Learn more about Rutgers Law. Access LawHub today!
#591 Cris Duncan is an experienced studio owner from Lubbock, Texas, well-known for his expertise in lighting and in-person photography education. The conversation dives into the transformative value of hands-on, in-person learning experiences for photographers at all stages, shining a spotlight on Cris' role with the Texas School of Professional Photography—a weeklong, immersive event designed to provide deep dives into specific areas of photography, rather than the surface-level "popcorn" approach of larger conferences.
KEY TOPICS COVERED
In-Person Education vs. Online Learning - Cris highlights the unique advantages of in-person workshops (like Texas School) over online resources. While online tools are valuable for information, in-person learning provides real-time feedback, hands-on practice, and the chance to safely make and correct mistakes.
Structure and Tracks at Texas School of Professional Photography - Texas School offers deep-dives in four main tracks—digital post-production, successful studio/business growth, lighting and camera skills, and specialty/technique. Students select a single class for the week, fostering mastery and focused learning.
Building Community and Lifelong Learning - Success is fueled by connecting with others, sharing experiences, and forming lasting friendships. Community learning fosters motivation, accountability, and creative problem-solving.
Cris describes students returning year after year, advancing in their journeys, and forming strong professional networks.
IMPORTANT DEFINITIONS & CONCEPTS
Concierge Family Portrait Experience: A premium, tailored photography service that guides clients in everything from wardrobe choices to matching artwork with home décor, ensuring the final portrait complements the client's personality and environment.
Texas School of Professional Photography: A weeklong educational event near Dallas, Texas, offering immersive, instructor-led classes to help photographers build foundational and advanced skills in a collaborative, hands-on environment.
DISCUSSION & REFLECTION QUESTIONS
In what ways does hands-on, in-person instruction help you understand and retain new photography skills better than online learning?
How might joining a creative community or peer group accelerate your own photography journey?
What skills or business goals would you most want to “deep-dive” into if given a week-long immersive class?
Reflect on a time you learned best by failing—and how feedback helped you improve.
RESOURCES:
Visit Texas School's Website - https://texasschool.org/
Follow Cris Duncan on Instagram - https://www.instagram.com/cjduncan
Sign up for your free CloudSpot Account today at www.DeliverPhotos.com
Connect with Raymond! Join the free Beginner Photography Podcast Community at https://beginnerphotopod.com/group
Get your Photo Questions Answered on the show - https://beginnerphotopod.com/qa
Grab your free camera setting cheatsheet - https://perfectcamerasettings.com/
Thanks for listening & keep shooting!
Anu Bradford, Professor at Columbia Law School, and Kate Klonick, Senior Editor at Lawfare and Associate Professor at St. John's University School of Law, join Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to assess the ongoing, contrasting, and, at times, conflicting regulatory approaches to Big Tech being pursued by the EU and U.S. The trio start with an assessment of the EU's use of the Brussels Effect, coined by Anu, to shape AI development. Next, they explore the U.S.'s increasingly interventionist industrial policy with respect to key sectors, especially tech. Read more: Anu's op-ed in The New York Times; "The Impact of Regulation on Innovation," by Philippe Aghion, Antonin Bergeaud, and John Van Reenen; and the Draghi Report on the Future of European Competitiveness. Find Scaling Laws on the Lawfare website, and subscribe to never miss an episode. To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute. Support this show http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.
Peter E. Harrell, Adjunct Senior Fellow at the Center for a New American Security, joins Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to examine the White House's announcement that it will take a 10% share of Intel. They dive into the policy rationale for the stake as well as its legality. Peter and Kevin also explore whether this is just the start of such deals given that President Trump recently declared that “there will be more transactions, if not in this industry then other industries.” Find Scaling Laws on the Lawfare website, and subscribe to never miss an episode. To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute. Support this show http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.
A poll on Texas’ 2026 Senate race finds a tightening GOP battle between Ken Paxton and John Cornyn, and a Democratic preference for Beto O'Rourke – who isn’t yet in the race. This morning, a federal judge temporarily blocked 11 Texas school districts from displaying the Ten Commandments in classrooms. We’ll bring you the latest. Texans are […] The post 11 Texas school districts temporarily blocked from displaying Ten Commandments appeared first on KUT & KUTX Studios -- Podcasts.
In this episode of Scaling Laws, Dean Ball, Senior Fellow at the Foundation for American Innovation and former Senior Policy Advisor for Artificial Intelligence and Emerging Technology at the White House Office of Science and Technology Policy, joins Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, and Alan Rozenshtein, Associate Professor at Minnesota Law and Research Director at Lawfare, to share an inside perspective on the Trump administration's AI agenda, with a specific focus on the AI Action Plan. The trio also explore Dean's thoughts on the recently released ChatGPT-5 and the ongoing geopolitical dynamics shaping America's domestic AI policy. Find Scaling Laws on the Lawfare website, and subscribe to never miss an episode. To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute. Support this show http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.
Brian Fuller, a member of the Product Policy Team at OpenAI, joins Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to analyze how large AI labs go about testing their models for compliance with internal requirements and various legal obligations. They also cover the ins and outs of what it means to work in product policy and what issues are front of mind for in-house policy teams amid substantial regulatory uncertainty. Find Scaling Laws on the Lawfare website, and subscribe to never miss an episode. To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute. Support this show http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.
Renée DiResta, an Associate Research Professor at the McCourt School of Public Policy at Georgetown and a Contributing Editor at Lawfare, and Alan Rozenshtein, an Associate Professor at Minnesota Law, Research Director at Lawfare, and, with the exception of today, co-host of the Scaling Laws podcast, join Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to take a look at the Trump administration's Woke AI policies, as set forth by a recent EO and explored in the AI Action Plan. Read the Woke AI executive order. Read the AI Action Plan. Read "Generative Baseline Hell and the Regulation of Machine-Learning Foundation Models," by James Grimmelmann, Blake Reid, and Alan Rozenshtein. Find Scaling Laws on the Lawfare website, and subscribe to never miss an episode. To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute. Support this show http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.
Janet Egan, Senior Fellow with the Technology and National Security Program at the Center for a New American Security; Jessica Brandt, Senior Fellow for Technology and National Security at the Council on Foreign Relations; Neil Chilson, Head of AI Policy at the Abundance Institute; and Tim Fist, Director of Emerging Technology Policy at the Institute for Progress, join Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, for a special version of Scaling Laws. This episode was recorded just hours after the release of the AI Action Plan. About 180 days ago, President Trump directed his administration to explore ways to achieve AI dominance. His staff has attempted to do just that. This group of AI researchers dives into the plan's extensive recommendations and explores what may come next. Find Scaling Laws on the Lawfare website, and subscribe to never miss an episode. To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute. Support this show http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.
Ethan Mollick, Professor of Management and author of the “One Useful Thing” Substack, joins Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, and Alan Rozenshtein, Associate Professor at Minnesota Law and a Senior Editor at Lawfare, to analyze the latest research on AI adoption, specifically its use by professionals and educators. The trio also analyze the trajectory of AI development and related, ongoing policy discussions. More of Ethan Mollick's work: https://www.oneusefulthing.org/ Find Scaling Laws on the Lawfare website, and subscribe to never miss an episode. To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute. Support this show http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.