POPULARITY
Neil Chilson, Head of AI Policy at the Abundance Institute, and Gus Hurwitz, Senior Fellow and CTIC Academic Director at Penn Carey Law School and Director of Law & Economics Programs at the International Center for Law & Economics, join Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to explore how academics can overcome the silos and incentives that plague the Ivory Tower and positively contribute to the highly complex, evolving, and interdisciplinary work associated with AI governance. The trio recorded this podcast live at the Institute for Humane Studies' Technology, Liberalism, and Abundance Conference in Arlington, Virginia.
Read about Kevin's thinking on the topic here: https://www.civitasinstitute.org/research/draining-the-ivory-tower
Learn about the Conference: https://www.theihs.org/blog/curated-event/technology-abundance-and-liberalism/
Find Scaling Laws on the Lawfare website, and subscribe to never miss an episode. To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute. Support this show: http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.
Jake Sullivan was the US National Security Advisor from 2021 to 2025. He joined our friends on The Cognitive Revolution podcast in August to discuss AI as a critical national security issue. We thought it was such a good interview and we wanted more people to see it, so we're cross-posting it here on The 80,000 Hours Podcast.
Jake and host Nathan Labenz discuss:
Jake's four-category framework for thinking about AI risks and opportunities: security, economics, society, and existential.
Why Jake advocates for "managed competition" with China — where the US and China "compete like hell" while maintaining sufficient guardrails to prevent conflict.
Why Jake thinks competition is a "chronic condition" of the US-China relationship that cannot be solved with "grand bargains."
How current conflicts are providing "glimpses of the future," with lessons about scale, attritability, and the potential for autonomous weapons as AI gets integrated into modern warfare.
Why Jake worries that Pentagon bureaucracy prevents rapid AI adoption while China's People's Liberation Army may be better positioned to integrate AI capabilities.
And why we desperately need private sector leadership: AI is "the first technology with such profound national security applications that the government really had very little to do with."
Check out more of Nathan's interviews on The Cognitive Revolution YouTube channel: https://www.youtube.com/@CognitiveRevolutionPodcast
Originally produced by: https://aipodcast.ing
This edit by: Simon Monsour, Dominic Armstrong, and Milo McGuire | 80,000 Hours
Chapters:
Cold open (00:00:00)
Luisa's intro (00:01:06)
Jake's AI worldview (00:02:08)
What Washington gets — and doesn't — about AI (00:04:43)
Concrete AI opportunities (00:10:53)
Trump's AI Action Plan (00:19:36)
Middle East AI deals (00:23:26)
Is China really a threat? (00:28:52)
Export controls strategy (00:35:55)
Managing great power competition (00:54:51)
AI in modern warfare (01:01:47)
Economic impacts in people's daily lives (01:04:13)
In a special Future of Everything podcast episode recorded live before a studio audience in New York, host Russ Altman talks to three authorities on the innovation economy. His guests – Fei-Fei Li, professor of computer science and co-director of the Stanford Institute for Human-Centered AI (HAI); Susan Athey, professor and authority on the economics of technology; and Neale Mahoney, Trione Director of the Stanford Institute for Economic Policy Research – bring their distinct-but-complementary perspectives to a discussion on how artificial intelligence is reshaping our economy.
Athey emphasizes that both AI broadly and AI-based coding tools specifically are general-purpose technologies, like electricity or the personal computer, whose impact may be felt quickly in certain sectors but much more slowly in aggregate. She tells how solving one bottleneck to implementation often reveals others – whether in digitization, adoption costs, or the need to restructure work and organizations. Mahoney draws on economic history to say we are in a "veil of ignorance" moment with regard to societal impacts. We cannot know whose jobs will be disrupted, he says, but we can invest in safety nets now to ease the transition. Li cautions against assuming AI will replace people. Instead, she speaks of AI as a "horizontal technology" that could supercharge human creativity – but only if it is properly rooted in science, not science fiction.
Collectively, the panel calls on policymakers, educators, researchers, and entrepreneurs to steer AI toward what they call "human-centered goals" – protecting workers, growing opportunities, and supercharging education and medicine – to deliver broad and shared prosperity. It's the future of the innovation economy on this episode of Stanford Engineering's The Future of Everything podcast.
Have a question for Russ? Send it our way in writing or via voice memo, and it might be featured on an upcoming episode. Please introduce yourself, let us know where you're listening from, and share your question. You can send questions to thefutureofeverything@stanford.edu.
Episode Reference Links:
Stanford Profile: Fei-Fei Li
Stanford Profile: Susan Athey
Stanford Profile: Neale Mahoney
Connect With Us:
Episode Transcripts >>> The Future of Everything Website
Connect with Russ >>> Threads / Bluesky / Mastodon
Connect with School of Engineering >>> Twitter/X / Instagram / LinkedIn / Facebook
Chapters:
(00:00:00) Introduction
Russ Altman introduces live guests Fei-Fei Li, Susan Athey, and Neale Mahoney, professors from Stanford University.
(00:02:37) Lessons from Past Technology
Comparing AI with past technologies and the bottlenecks to their adoption.
(00:06:29) Jobs & Safety Nets
The uncertainty of AI's labor impact and investing in social protections.
(00:08:29) Augmentation vs. Replacement
Using AI as a tool to enhance, not replace, human work and creativity.
(00:11:41) Human-Centered AI & Policy
Shaping AI through universities, government, and global collaboration.
(00:15:58) Education Revolution
The potential for AI to revolutionize education by focusing on human capital.
(00:18:58) Balancing Regulation & Innovation
Balancing pragmatic, evidence-based AI policy with entrepreneurship.
(00:22:22) Competition & Market Power
The risks of monopolies and the role of open models in fair pricing.
(00:25:22) America's Economic Funk
How social media and innovation are shaping America's declining optimism.
(00:27:05) Future in a Minute
The panel shares what gives them hope and what they'd study today.
(00:30:49) Conclusion
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
"I bleed purple at work. I don't bleed Republican red or Democrat blue—I bleed purple, the color of my company." When two of healthcare technology's most influential policy voices join forces, you get the unvarnished truth about how healthcare transformation really happens in Washington and beyond. Leigh Burchell (VP Policy and Public Affairs at Altera Digital Health) and Leslie Krigstein (VP Communication & Government Affairs at Transcarent) have spent decades translating between Silicon Valley innovation and Capitol Hill regulation. Their combined influence has shaped everything from meaningful use to digital health adoption. In this revealing episode, Leigh and Leslie discuss: Why they're still counting clicks in 2025 Humanizing corporate interests while maintaining credibility The delicate dance between innovation and regulation in the age of AI Why "pledges" are back under Trump 2.0 How consumerization is revolutionizing healthcare Being "the sharpest person in the room" while staying honest "Every policy maker wants to talk about digital health," Leigh notes. "It's massively exploding at the state level too." With AI "bullet training down the tracks," both women navigate the balance between enabling innovation and avoiding regulation that could "cut us off at the knees." Their secret to influence? Collaboration and genuine relationships. "We all want the same thing. People can sense that, so we hold hands and run in the same direction," says Leigh. Leslie adds: "There are lasting relationships with folks on Capitol Hill that started with simple coffee." Both have stood up to CEOs, defended patient interests over profits, and maintained integrity when commercial pressures mounted. For aspiring policy influencers: Be an advocate in all facets of life. Find your passion. Build trust through honesty. Chapters 03:45 - From Hill to Healthcare Tech: Finding Your Policy Passion 06:29 - Making Complex Policy Personal for Lawmakers 10:06 - Bleeding Purple: Navigating Bipartisan Corporate Advocacy 13:16 - The Deregulation Cycle and State-Level Explosion 15:00 - AI and the Consumerization Revolution in Healthcare 21:44 - Building Collaborative Networks for Policy Impact 24:42 - The Power of Being the Trusted Expert in the Room 29:20 - Finding Passion in Policy: Career Advice for Advocates Guest & Host Links Connect with Laurie McGraw on LinkedIn Connect with Leigh Burchell on LinkedIn Connect with Leslie Krigstein on LinkedIn Connect with Inspiring Women Browse Episodes | LinkedIn | Instagram | Apple | Spotify
In this episode of In AI We Trust?, cohosts Miriam Vogel and Nuala O'Connor speak with Daniel Dobrygowski, Head of Governance and Trust at the World Economic Forum (WEF), and Karla Yee Amezaga, Initiatives Lead for AI and Data Governance with WEF's Centre for AI Excellence, about the importance of building trust in technology, strengthening AI and digital literacy, and modernizing boards to be fit for purpose in the modern era. They discuss EqualAI's and WEF's new playbooks, including WEF's Playbook on Advancing Responsible AI Innovation and EqualAI's new AI Governance Playbook. Find the WEF Playbook here and EqualAI's AI Governance Playbook here.
Step into the future of media with the creator of Virtually Parkinson, the world's first podcast hosted by an AI recreation of legendary broadcaster Sir Michael Parkinson. In this episode of Virtually Anything Goes, Deep Fusion Films CEO and Co-founder Ben Field reveals how he and his team brought the iconic interviewer back to life through cutting-edge AI and groundbreaking production workflows.
AI That Thinks, Listens, and Interviews
Ben takes us behind the scenes of building "AI Parky," trained on more than 100 hours of classic interviews to hold entirely unscripted conversations with celebrity guests. He shares how the custom-built "Squawk" software allows the virtual host to react in real time and creates surprisingly personal, even therapeutic, discussions that feel authentically Parkinson.
Setting the Standard for Ethical AI
Beyond the show's wow factor, Ben is shaping the global conversation on responsible AI. From writing ethical frameworks for the BBC's Gerry Anderson: A Life Uncharted to advising broadcasters and policymakers, he's helping define how AI can enhance creativity while protecting intellectual property and artist rights.
The Journey Behind the Innovation
Ben also reflects on the career twists, from acting and BBC comedy writing to award-winning directing, that prepared him to lead at the frontier of technology and storytelling. His message: bold experimentation, trust in your team, and clear ethical guardrails can turn big ideas into reality.
If you're curious about where AI and entertainment collide, or how policy and creativity can coexist, this conversation delivers inspiration and insight in equal measure. This episode is part of our Leadership Stories Series, where we speak to leaders from a variety of different backgrounds, including AI, Strategy, Marketing, Executive Coaching, Healthcare and others!
Subscribe and check out our other episodes on YouTube at @madetoseemedia
Connect with Benjamin Field on LinkedIn or find out more at https://www.deepfusionfilms.com/
Connect with Lev Cribb on LinkedIn
For more information, content, and podcast episodes go to our YouTube channel or https://www.madetosee.com
What happens when innovation runs headfirst into big government? On this week's Let People Prosper Show, I'm joined by Jake Morabito, Senior Director at the American Legislative Exchange Council (ALEC), where he leads both the Communications and Technology Task Force and the Energy, Environment, and Agriculture Task Force. Jake is working directly with state lawmakers to make sure freedom—not federal micromanagement—drives the future of innovation. From AI and broadband expansion to smart cities and age verification, Jake has been at the center of some of the most pressing debates in technology policy. His background—from Capitol Hill with Rep. Darrell Issa to his work with Software.org—gives him a unique perspective on how lawmakers handle innovation and how often they get it wrong. Together, we explore how states can lead in AI without replicating California's regulatory overreach, how to bridge the digital divide without fostering dependency, and how free-market principles can guide a more prosperous digital future. For more insights, visit vanceginn.com. You can also get even greater value by subscribing to my Substack newsletter at vanceginn.substack.com. Please share with your friends, family, and broader social media network.
Presenters:
Evgeniy Kharam, Cybersecurity Architect | Evangelist | Consultant | Advisor | Podcaster | Visionary | Speaker
Nim Nadarajah, C.CISO, Cyber Security, Compliance & Transformation Expert | Executive Board Member | Keynote Speaker
Julian Lee, Publisher, Community Builder, Speaker, Channel Ecosystem Developer with a focus on cybersecurity, AI and Digital Transformation
Adam Bennett, Co-Founder & CEO at SureStack, CEO at Crosshair Cyber
The Cybersecurity Defense Ecosystem aims to assist Managed Service Providers (MSPs) in becoming more cybersecurity-oriented amidst industry disruptions caused by AI and regulatory changes.
In this discussion, we focused on the challenges posed by shadow IT and shadow AI within organizations. Evgeniy shared insights on how employees often resort to unauthorized applications due to strict IT policies, indicating a disconnect between user needs and IT support. Nim emphasized the importance of IT leaders understanding these needs rather than simply denying requests, while Adam highlighted the necessity of identifying user goals and exploring secure alternatives. The group recognized the critical need for improved communication between IT departments and employees to effectively address these challenges.
The discussion also delved into the risks associated with AI usage among employees, with Julian noting that many companies lack clear AI policies, leading to unregulated use of various applications. Evgeniy suggested that organizations should focus on classifying and securing sensitive information while allowing knowledge workers to leverage AI for efficiency. Adam emphasized the importance of aligning AI strategies with business goals and reviewing the privacy policies of AI platforms. The group acknowledged the challenges of building secure AI environments and the associated costs, recognizing that not all companies may find this feasible.
Click here to watch previous episodes on Cybersecurity Defense Ecosystem.
Don't miss our Cybersecurity Defense Ecosystem Summit on Nov. 26th in Toronto, aimed at fostering collaboration among managed service providers, chief information security officers, and other stakeholders. The event will feature a unique "Shark Tank" format for vendor presentations and expert-led discussions on various topics.
To learn more about the Cybersecurity Defense Ecosystem, visit: https://cybersecuritydefenseecosystem.com/
In this episode of The Responsive Lab, co-hosts Carly Berna and Scott Holthaus sit down with Javan Van Gronigen, founder and creative director of Fifty & Fifty and Donately, to unpack the insights behind the 2025 Nonprofit Peer Report. Javan shares how surveying over 160 nonprofit leaders identified the top challenges nonprofit teams are facing today: misalignment, siloed data, brand confusion, and the inability to confidently execute digital strategies. From the power of donor-centric messaging to the rise of personalized, AI-powered donor journeys, this conversation dives deep into the core disconnects that hinder generosity, and how organizations can close the gap.
Who's speaking up for startups in Washington, D.C.? In this episode, Matt Perault (Head of AI Policy, a16z) and Collin McCune (Head of Government Affairs, a16z) unpack the "Little Tech Agenda" for AI: why AI rules should regulate harmful use, not model development; how to keep open source open; the roles of the federal government vs. the states in regulating AI; and how the U.S. can compete globally without shutting out new founders.
Timecodes:
0:00 – Introduction
1:12 – Defining the Little Tech Agenda
4:40 – Challenges for Startups vs. Big Tech
6:37 – Principles of Smart AI Regulation
9:55 – History of AI Policy & Regulatory Fears
19:26 – The Role of Open Source and Global Competition
23:45 – Motivations Behind Policy Approaches
26:40 – Debates on Regulating Use vs. Development
35:15 – Federal vs. State Roles in AI Policy
39:24 – AI Policy and U.S.–China Competition
40:45 – Current Policy Landscape & Action Plans
42:47 – Moratoriums, Preemption, and Political Dynamics
50:00 – Looking Forward: The Future of AI Policy
56:16 – Conclusion & Disclaimers
Resources:
Read the Little Tech Agenda: https://a16z.com/the-little-tech-agenda/
Read ‘Regulate AI Use, Not AI Development’: https://a16z.com/regulate-ai-use-not-ai-development/
Read Martin's article ‘Base AI Policy on Evidence, Not Existential Angst’: https://a16z.com/base-ai-policy-on-evidence-not-existential-angst/
Read ‘Setting the Agenda for Global AI Leadership’: https://a16z.com/setting-the-agenda-for-global-ai-leadership-assessing-the-roles-of-congress-and-the-states/
Read ‘The Commerce Clause in the Age of AI’: https://a16z.com/the-commerce-clause-in-the-age-of-ai-guardrails-and-opportunities-for-state-legislatures/
Find Matt on X: https://x.com/MattPerault
Find Collin on X: https://x.com/Collin_McCune
In this Texas Talks interview, Senator Tan Parker of District 12 joins Brad Swail to discuss his decades-long fight against human trafficking, including the passage of Senate Bill 11 to protect victims and prosecute traffickers. Parker also breaks down disaster relief efforts following the July 4th floods, new legislation to safeguard Texas children, and the state's leadership in AI innovation and regulation. A deep dive into protecting the vulnerable, strengthening public safety, and keeping Texas at the forefront of progress.
Dinis Guarda citiesabc openbusinesscouncil Thought Leadership Interviews
Lord Timothy Clement-Jones, CBE is a Liberal Democrat House of Lords spokesperson for Science, Innovation and Technology. He is the former Chair of the House of Lords Select Committee on Artificial Intelligence, which reported in 2018 with "AI in the UK: Ready, Willing and Able?" and its follow-up report in 2020, "AI in the UK: No Room for Complacency". Lord Clement-Jones is also the Co-Chair of the All-Party Parliamentary Group on Artificial Intelligence, which he co-founded in 2017, and a Consultant on AI Policy and Regulation at DLA Piper since 2018.
To read more about Timothy Clement-Jones, please visit https://businessabc.net/wiki/timothy-francis-clement-jones
Lord Tim Clement-Jones interview questions
00:00 - 08:29 Introduction
08:30 - 12:56 Career
12:57 - 17:41 Politics
17:42 - 22:41 Regulation leads to innovation
22:40 - 29:42 Agentic AI vs AGI
29:43 - 34:49 Trust in AI is prime
34:50 - 39:27 Risks if AI is not used as a tool
39:26 - 45:24 Theft and cybersecurity
45:25 - 50:41 Education for children with autism
50:42 - 53:49 Living with the algorithm
53:50 - 01:00:22 Change is not changing
01:00:23 - 01:04:47 The AI governance framework
01:04:48 - 01:07:39 Make an impact
01:07:40 - 01:12:27 Closure
Useful Links and Resources
https://members.parliament.uk/member/3396/contact
https://www.turing.ac.uk/people/guest-speakers/tim-clement-jones
https://www.linkedin.com/in/tim-clement-jones-59a3254?originalSubdomain=uk
About businessabc.net
https://www.businessabc.net/
About citiesabc.com
https://www.citiesabc.com/
About fashionabc.org
https://www.fashionabc.org/
About Dinis Guarda
https://www.dinisguarda.com/
https://businessabc.net/wiki/dinis-guarda
Business Inquiries - info@ztudium.com
Support the show
As the next AI cycle begins, state and national governments are trying to keep up. And AI policy now matters for energy, health, education, foreign, and economic development policy as well. What can we learn from the early AI legislation? Chinnu Parinandi finds that partisan alignments and institutional capacity shape where and how consumer protection versus economic development AI policies appear in the states. Heonuk Ha finds an AI boom in congressional legislation with key thematic clusters—from innovation and security to data governance and healthcare.
The United States and China are locked in a race for dominance in artificial intelligence, including its applications and diffusion. American and Chinese AI firms like OpenAI and DeepSeek, respectively, have captured global attention, and major companies like Google and Microsoft have been actively investing in AI development. While the US currently boasts world-leading AI models, China is ahead in some areas of AI research and application. With the release of US and Chinese AI action plans in July, we may be on the cusp of a new phase in US-China AI competition.
Why is AI so important for a country's global influence? What are the strengths of China's AI strategy? And what does China's new AI action plan tell us about its AI ambitions? To discuss these questions, we are joined by Owen Daniels. Owen is the Associate Director of Analysis at Georgetown's Center for Security and Emerging Technology and a Non-Resident Fellow at the Atlantic Council. His recently published article in Foreign Affairs, co-authored with Hanna Dohmen and titled "China's Overlooked AI Strategy," provides insights into how Beijing is utilizing AI to gain global dominance and what the US can and should do to sustain and bolster its lead.
Timestamps
[00:00] Start
[02:05] US Policy Risks to Chinese AI Leadership
[05:28] DeepSeek and Kimi's Newest Models
[07:54] US vs. China's Approach to AI
[10:42] Limitations to China's AI Strategy
[13:08] Using AI as a Soft Power Tool
[16:10] AI Action Plans
[19:34] Trump's Approach to AI Competition
[22:30] Can China Lead Global AI Governance?
[25:10] Evolving US Policy for Open Models
In this episode of Scaling Laws, Dean Ball, Senior Fellow at the Foundation for American Innovation and former Senior Policy Advisor for Artificial Intelligence and Emerging Technology at the White House Office of Science and Technology Policy, joins Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, and Alan Rozenshtein, Associate Professor at Minnesota Law and Research Director at Lawfare, to share an inside perspective on the Trump administration's AI agenda, with a specific focus on the AI Action Plan. The trio also explores Dean's thoughts on the recently released GPT-5 and the ongoing geopolitical dynamics shaping America's domestic AI policy.
Find Scaling Laws on the Lawfare website, and subscribe to never miss an episode.
To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute. Support this show: http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.
a16z General Partners Martin Casado and Anjney Midha join Erik Torenberg to unpack one of the most dramatic shifts in tech policy in recent memory: the move from "pause AI" to "win the AI race." They trace the evolution of U.S. AI policy—from executive orders that chilled innovation to the recent AI Action Plan that puts scientific progress and open source at the center. The discussion covers how technologists were caught off guard, why open source was wrongly equated with nuclear risk, and what changed the narrative—including China's rapid progress.
The conversation also explores:
How and why the AI discourse got captured by doomerism
What "marginal risk" really means—and why it matters
Why open source AI is not just ideology, but business strategy
How government, academia, and industry are realigning after a fractured few years
The effect of bad legislation—and what comes next
Whether you're a founder, policymaker, or just trying to make sense of AI's regulatory future, this episode breaks it all down.
Timecodes:
0:00 Introduction & Setting the Stage
0:39 The Shift in AI Regulation Discourse
2:10 Historical Context: Tech Waves & Policy
6:39 The Open Source Debate
13:39 The Chilling Effect & Global Competition
15:00 Changing Sentiments on Open Source
21:06 Open Source as Business Strategy
28:50 The AI Action Plan: Reflections & Critique
32:45 Alignment, Marginal Risk, and Policy
41:30 The Future of AI Regulation & Closing Thoughts
Resources
Find Martin on X: https://x.com/martin_casado
Find Anjney on X: https://x.com/anjneymidha
Stay Updated:
Let us know what you think: https://ratethispodcast.com/a16z
Find a16z on Twitter: https://twitter.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Subscribe on your favorite podcast app: https://a16z.simplecast.com/
Follow our host: https://x.com/eriktorenberg
Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.
This week on Breaking Battlegrounds, we kick things off with Northwestern University's Gary Saul Morson, co-author of Cents and Sensibility, who joins us to explore why revolutions never truly end, Dostoevsky's warnings about nihilism, and what economist Friedrich Hayek might think about artificial intelligence. We wrap up with Satya Thallam, senior advisor at Americans for Responsible Innovation, for an inside look at the political and national security implications of AI policy, from the White House's export control changes to the GOP's divide over state regulation, and what it all means for the future of innovation in America.
Peter Campbell is the principal consultant at Techcafeteria, a micro-consulting firm dedicated to helping nonprofits make more affordable and effective use of technology to support their missions. He recently published a free PowerPoint download on Managing AI Risk and had time to talk with Carolyn about his thoughts on developing AI policies with an eye to risk, where the greatest risks lie for nonprofits using AI, and how often to review your policies as the technology changes rapidly.
The takeaways:
AI tools are like GPS (which is itself an AI). You are the expert; they are not able to critically analyze their own output even though they can mimic authority. Using AI tools for subjects where you have subject expertise allows you to correct the output. Using AI tools for subjects where you have no knowledge adds risk.
Common AI tasks at nonprofits move from low-level risks, such as searching your own inbox for an important email, to higher-risk activities more prone to consequential errors, such as automation and analysis.
Common AI risks include inaccuracy, lack of authenticity, reputational damage, and copyright and privacy violations.
AI also has risk factors associated with audience: your personal use probably has pretty low risk that you will be fooled or divulge sensitive information to yourself, but when you use AI to communicate with the public, the risk increases for your nonprofit.
How to manage AI risks at nonprofits?
Start with an AI policy. Review it often, as the technology and tools are changing rapidly.
Use your own judgement. A good rule of thumb is to use AI tools to create things that you are already knowledgeable about, so that you can easily assess the accuracy of the AI output.
Transparency matters. Let people know AI was used and how it was used. Use an "Assisted by AI" disclaimer when appropriate.
Require a human third-party review before sharing AI-created materials with the public. State this in your transparency policy/disclaimers.
Be honest about the roles of AI and humans in your nonprofit work.
Curate data sources, and always know what your AI is using to create materials or analysis. Guard against bias and harm to communities you care about.
"I've been helping clients develop Artificial Intelligence (AI) policies lately. AI has lots of innovative uses and every last one of them has some risk associated with it, so I regularly urge my clients to get the policies and training in place before they let staff loose with the tools. Here is a generic version of a powerpoint explaining AI risks and policies for nonprofits." Peter Campbell, Techcafeteria
Start a conversation :)
Register to attend a webinar in real time, and find all past transcripts at https://communityit.com/webinars/
Email Carolyn at cwoodard@communityit.com
On LinkedIn
Thanks for listening.
As a senior policy adviser in the Office of Science and Technology Policy, Dean Ball helped write President Donald Trump's recently released AI Action Plan. This week, Ball left the administration and plans to continue shaping AI policy from outside the White House. On POLITICO Tech, Ball joins host Steven Overly to discuss the government's role in regulating artificial intelligence, Trump allowing China to buy American microchips, and whether the rush of AI investment will lead to a market bubble. Steven Overly is the host of POLITICO Tech and covers the intersection of politics and technology. Nirmal Mulaikal is the co-host and producer of POLITICO Energy and producer of POLITICO Tech. Music courtesy of www.epidemicsound.com Intro: https://www.epidemicsound.com/track/0KEjTXFuS0/ Outro: https://www.epidemicsound.com/track/MHh0nBFuwg/ Learn more about your ad choices. Visit megaphone.fm/adchoices
America’s new AI Action Plan — announced by the White House in July and framed by three pillars of accelerating innovation, building national AI infrastructure, and projecting U.S. leadership abroad — promises more than 90 separate federal actions, from fast-tracking approvals for medical-AI tools to revising international export controls on advanced chips. Supporters hail its light-touch approach, its push for swift domestic and foreign deployment of AI, and its explicit warnings against “ideological bias” in AI systems. In contrast, some critics say the plan removes guardrails, favors big tech, and is overshadowed by other actions disinvesting in research. How will the Plan impact AI in America? Join us for a candid discussion that will unpack the Plan’s major levers and ask whether the “innovation-first” framing clarifies or obscures deeper constitutional and economic questions.
Featuring:
Neil Chilson, Head of AI Policy, Abundance Institute
Mario Loyola, Senior Research Fellow, Environmental Policy and Regulation, Center for Energy, Climate, and Environment, The Heritage Foundation
Asad Ramzanali, Director of Artificial Intelligence & Technology Policy, Vanderbilt Policy Accelerator, Vanderbilt University
Moderator: Kevin Frazier, AI Innovation and Law Fellow, University of Texas School of Law
Join us on Scaling Laws as we delve into the intricate world of AI policy with Dean Ball, former senior policy advisor at the White House's Office of Science and Technology Policy. Discover the behind-the-scenes insights into the Trump administration's AI Action Plan, the challenges of implementing AI policy at the federal level, and the evolving political landscape surrounding AI on the right. Dean shares his unique perspective on the opportunities and hurdles in shaping AI's future, offering a candid look at the intersection of technology, policy, and politics. Tune in for a thought-provoking discussion that explores the strategic steps America can take to lead in the AI era. Hosted on Acast. See acast.com/privacy for more information.
AI ethics expert Sam Sammane challenges Silicon Valley's artificial intelligence hype in this controversial entrepreneurship interview. The TheoSym founder and nanotechnology PhD reveals why current AI regulations only help wealthy tech giants while blocking innovation for small businesses. Sam exposes the truth about ChatGPT privacy risks, demonstrates how personalized AI systems running locally protect your data better than cloud-based solutions, and shares his revolutionary context engineering approach that transforms generic chatbots into custom AI employees. Sam's contrarian take on AI policy, trustworthy AI development, and why schools must teach cognitive ethics now will reshape how you think about augmenting human intelligence. The future of AI belongs to businesses that act today, not tomorrow.
From the algorithms that curate your social media feed to the recommendation systems that influence what you buy, artificial intelligence is quietly reshaping every aspect of our daily lives. Yet most of us remain in the dark about how these powerful technologies are governed—and that's a problem we can't afford to ignore. Artificial Intelligence (or AI) policy isn't just about tech regulation; it's about who gets to shape the future of work, privacy, and power in our increasingly digital world. The rules being written today will determine whether AI serves all of society or just a privileged few. In this episode of Talking Indonesia, Dr Elisabeth Kramer dives into Indonesia's approach to AI governance, taking its cues from the private sector, with guest Diah Angendari. Diah Angendari is a PhD Candidate at Leiden University and her dissertation examines the interplay between imaginaries, power, and interests in policymaking. She's using the case study of AI in Indonesia to understand the factors that shape these policies. Prior to joining the PhD program, Diah was a lecturer in the Department of Communication Science at Gadjah Mada University.
On this episode of the Self-Publishing News Podcast, Dan Holloway reports on a coordinated bot attack that hit indie authors using Shopify, leaving some with unexpected fees and limited recourse. He also covers new and proposed legislation across the UK, EU, and US, including the UK's Online Safety Act, concerns over enforcement of the EU AI Act, and the US White House's pro-tech AI action plan—all with implications for author rights and content access. Sponsors Self-Publishing News is proudly sponsored by Bookvault. Sell high-quality, print-on-demand books directly to readers worldwide and earn maximum royalties selling directly. Automate fulfillment and create stunning special editions with BookvaultBespoke. Visit Bookvault.app today for an instant quote. Self-Publishing News is also sponsored by book cover design company Miblart. They offer unlimited revisions, take no deposit to start work and you pay only when you love the final result. Get a book cover that will become your number-one marketing tool. Find more author advice, tips, and tools at our Self-publishing Author Advice Center, with a huge archive of nearly 2,000 blog posts and a handy search box to find key info on the topic you need. And, if you haven't already, we invite you to join our organization and become a self-publishing ally. About the Host Dan Holloway is a novelist, poet, and spoken word artist. He is the MC of the performance arts show The New Libertines, He competed at the National Poetry Slam final at the Royal Albert Hall. His latest collection, The Transparency of Sutures, is available on Kindle.
On this episode of Compliance Unfiltered, the CU guys delve into the critical need for AI policies within organizations. As AI technology rapidly evolves, many companies find themselves unprepared, risking exposure of sensitive data through platforms like ChatGPT. Adam emphasizes the urgency of implementing AI policies to protect against potential data breaches and compliance issues. Discover why having a robust AI policy is not just a best practice but a necessity in today's digital landscape. All this, and more, on this episode of Compliance Unfiltered.
In this episode of In AI We Trust?, cohosts Miriam Vogel and Nuala O'Connor are joined by Adam Thierer, resident senior fellow on R Street's Tech & Innovation team. Adam weighs in on the Trump Administration's AI Action Plan, the importance of Congress in developing AI policy, and existing legal principles and practices that help define the new digital and AI age. They focus on the mandate for AI literacy, as well as the necessity of AI technologies being regulated in a transparent and trustworthy way that end users, and particularly consumers, can understand.
In Episode 262 of the House of #EdTech, Chris Nesi explores the timely and necessary topic of creating a responsible AI policy for your classroom. With artificial intelligence tools becoming more integrated into educational spaces, the episode breaks down why teachers need to set clear expectations and how they can do it with transparency, collaboration, and flexibility. Chris offers a five-part framework that educators can use to guide students toward ethical and effective AI use. Before the featured content, Chris reflects on a growing internal debate: is it time to step back from tech-heavy classrooms and return to more analog methods? He also shares three edtech recommendations, including tools for generating copyright-free images, discovering daily AI tool capabilities, and randomizing seating charts for better classroom dynamics.
Topics Discussed:
EdTech Thought: Chris debates the "Tech or No Tech" question in modern classrooms
EdTech Recommendations:
https://nomorecopyright.com/ - Upload an image to transform it into a unique, distinct version designed solely for inspiration and creative exploration.
https://www.shufflebuddy.com/ - Never worry about seating charts again. Foster a strong classroom community by frequently shuffling your seating charts while respecting your students' individual needs.
https://whataicandotoday.com/ - We've analysed 16362 AI tools and identified their capabilities with OpenAI GPT-4.1, to bring you a free list of 83054 tasks of what AI can do today.
Why classrooms need a responsible AI policy
A five-part framework to build your AI classroom policy:
1. Define What AI Is (and Isn't)
2. Clarify When and How AI Can Be Used
3. Promote Transparency and Attribution
4. Include Privacy and Tool Approval Guidelines
5. Make It Collaborative and Flexible
The importance of modeling digital citizenship and AI literacy
Free editable AI policy template by Chris for grades K–12
Mentions:
Mike Brilla – The Inspired Teacher podcast
Jake Miller – Educational Duct Tape podcast // Educational Duct Tape Book
AI Applied: Covering AI News, Interviews and Tools - ChatGPT, Midjourney, Runway, Poe, Anthropic
Dive into the Executive AI Action Plan with Conor Grennan and Jaeden Schafer as they explore its potential impact on AI development, energy policies, and international competition. Discover how this policy could shape the future of AI in the U.S. and beyond.
AI Applied YouTube Channel: https://www.youtube.com/@AI-Applied-Podcast
Try AI Box: https://aibox.ai
Conor's AI Course: https://www.ai-mindset.ai/courses
Conor's AI Newsletter: https://www.ai-mindset.ai/
Jaeden's AI Hustle Community: https://www.skool.com/aihustle/about
Chapters
00:00 The Executive AI Action Plan: A New Direction
02:51 Geopolitical Implications & Energy Focus
05:55 DEI in AI: Balancing Bias & Representation
08:31 Data Centers & AI Regulation
11:09 Market Dynamics & AI Safety
July 28th, 2025
Newt talks with Neil Chilson, current head of AI Policy at the Abundance Institute, about President Trump’s “Winning the Race: America’s AI Action Plan,” which aims to accelerate AI innovation, build American AI infrastructure, and lead in international AI diplomacy and security. Chilson highlights the importance of AI for U.S. global dominance, emphasizing its potential in various sectors like healthcare and defense. Their conversation also touches on the strategic significance of Taiwan in chip production and the challenges of AI regulation, particularly in Europe. The Abundance Institute focuses on emerging technologies, advocating for a culture that embraces innovation and a regulatory environment that enables it. They conclude with optimism about AI's role in medicine and the potential for a future with greater technological advancements.See omnystudio.com/listener for privacy information.
From July 23, 2024: Alan Rozenshtein, Associate Professor at the University of Minnesota Law School and Senior Editor at Lawfare, and Matt Perault, the Director of the Center on Technology Policy at the University of North Carolina at Chapel Hill, sat down with Alexander Macgillivray, known to all as "amac," the former Principal Deputy Chief Technology Officer of the United States in the Biden Administration and General Counsel at Twitter.
amac recently wrote a piece for Lawfare about making AI policy in a world of technological uncertainty, and Matt and Alan talked to him about how to do just that.
To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute. Support this show: http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.
ChatGPT: News on Open AI, MidJourney, NVIDIA, Anthropic, Open Source LLMs, Machine Learning
Discover how President Trump is influencing the next chapter of AI development. We evaluate the implications of new regulations and support structures. Tune in to get expert perspectives on what's coming next in AI governance. Special focus is given to tech company responses. Our discussion includes expert opinions and recent statements from key stakeholders.
Try AI Box: https://aibox.ai
AI Chat YouTube Channel: https://www.youtube.com/@JaedenSchafer
Join my AI Hustle Community: https://www.skool.com/aihustle/about
Janet Egan, Senior Fellow with the Technology and National Security Program at the Center for a New American Security; Jessica Brandt, Senior Fellow for Technology and National Security at the Council on Foreign Relations; Neil Chilson, Head of AI Policy at Abundance Institute; and Tim Fist, Director of Emerging Technology Policy at the Institute for Progress join Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, for a special version of Scaling Laws.
This episode was recorded just hours after the release of the AI Action Plan. About 180 days ago, President Trump directed his administration to explore ways to achieve AI dominance. His staff has attempted to do just that. This group of AI researchers dives into the plan's extensive recommendations and explores what may come next.
Find Scaling Laws on the Lawfare website, and subscribe to never miss an episode.
To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute. Support this show: http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.
Artificial intelligence is changing the way we do business, but it's still the Wild West. This week Dan and Donnie welcome Bennett Borden, CEO of Clarion AI Partners, who shares his expertise as an AI lawyer and data scientist. He covers how to responsibly implement policies for using AI — which he calls the most transformative technology since electricity — in pest and lawn companies.
Guest: Bennett Borden, CEO, Clarion AI Partners
Hosts: Dan Gordon, PCO Bookkeepers & M&A Specialists; Donnie Shelton, Triangle Home Services
White House Senior Policy Advisor for AI Sriram Krishnan joined Bloomberg's Caroline Hyde and Ed Ludlow to discuss the latest on AI policy and what to expect as it evolves.See omnystudio.com/listener for privacy information.
President Trump unveiled his approach to the development of AI. Surrounded by some of the biggest names in tech, he signed three executive orders. One targets what Trump called "ideological bias" in AI chatbots, another aims to make it easier to build massive AI data centers and the third encourages the export of American AI tech. Amna Nawaz discussed the implications with Will Oremus. PBS News is supported by - https://www.pbs.org/newshour/about/funders
Key Takeaways for local government AI:
Why policy might not be the best starting point... start doing!
Get in the figurative sandbox to play and test AI with small teams for real outcomes.
Official policy documents and templates can be found (and copied!) via the GovAI Coalition.
Featured Guest:
Parth Shah – CEO and Co-Founder, Polimorphic
Voices in Local Government Podcast Hosts:
Joe Supervielle and Angelica Wedell
Resources:
ICMA Annual Conference, October 25-29 in Tampa. Multiple AI trainings on the ICMA Learning Lab.
AI policy, templates, and more tools from the GovAI Coalition
Voices in Local Gov Episode: GovAI Coalition - Your Voice in Shaping the Future of AI
In the United States, state legislatures are key players in shaping artificial intelligence policy, as lawmakers attempt to navigate a thicket of politics surrounding complex issues ranging from AI safety, deepfakes, and algorithmic discrimination to workplace automation and government use of AI. The decision by the US Senate to exclude a moratorium on the enforcement of state AI laws from the budget reconciliation package passed by Congress and signed by President Donald Trump over the July 4 weekend leaves the door open for more significant state-level AI policymaking.
To take stock of where things stand on state AI policymaking, Tech Policy Press associate editor Cristiano Lima-Strong spoke to two experts: Scott Babwah Brennen, director of NYU's Center on Technology Policy, and Hayley Tsukayama, associate director of legislative activism at the Electronic Frontier Foundation (EFF).
In this episode, Lightspeed Partner Michael Mignano sits down with the Foundation for American Innovation's Chief Economist Sam Hammond to talk AI policy. Sam breaks down the key infrastructure needed for AI development and how policymakers are adapting to rapid technological change. He also shares insights on AI training data and fair use, workforce disruption, and how, when it comes to AI, everything can change in just a few months.
Episode Chapters:
00:00 Introduction
00:55 Meet Sam Hammond: Background and Role
03:06 The Big AI Policy Issues
05:09 Energy and Chip Policy
06:47 Fair Use and Copyright in AI
13:37 The Urgency of AI Regulation
17:03 Potential AI Crisis and Legislative Response
20:25 Challenges in AI Regulation
21:39 Acceleration vs. Regulation in AI Development
22:34 AI Safety and National Security
23:51 Fair Use and Copyright in AI Training Data
25:39 AI-Induced Labor Disruptions
33:36 State-Level AI Regulation
36:02 Global Cooperation on AI Safety
37:29 Advice for AI Startups
38:34 Optimism for AI and Policy Advancements
41:07 Conclusion
Stay in touch:
www.lsvp.com
X: https://twitter.com/lightspeedvp
LinkedIn: https://www.linkedin.com/company/lightspeed-venture-partners/
Instagram: https://www.instagram.com/lightspeedventurepartners/
Subscribe on your favorite podcast app: generativenow.co
Email: generativenow@lsvp.com
The content here does not constitute tax, legal, business or investment advice or an offer to provide such advice, should not be construed as advocating the purchase or sale of any security or investment or a recommendation of any company, and is not an offer, or solicitation of an offer, for the purchase or sale of any security or investment product. For more details please see lsvp.com/legal.
This week on “Paul, Weiss Waking Up With AI,” Katherine Forrest and Anna Gressel discuss the Senate's removal of a proposed AI moratorium from the “One Big Beautiful Bill Act,” and examine new state-level AI legislation in Colorado, Texas, New York, California and others.
Learn more about Paul, Weiss's Artificial Intelligence practice: https://www.paulweiss.com/industries/artificial-intelligence
The episode begins with Kapoor explaining the origins of AI Snake Oil, tracing it back to his PhD research at Princeton on AI's limited predictive capabilities in social science domains. He shares how he and co-author Arvind Narayanan uncovered major methodological flaws in civil war prediction models, which later extended to other fields misapplying machine learning.
The conversation then turns to the disconnect between academic findings and media narratives. Kapoor critiques the hype cycle around AI, emphasizing how its real-world adoption is slower, more fragmented, and often augmentative rather than fully automating human labor. He cites the enduring demand for radiologists as a case in point.
Kapoor introduces the concept of "AI as normal technology," which rejects both the notion of imminent superintelligence and the dismissal of AI as a passing fad. He argues that, like other general-purpose technologies (electricity, the internet), AI will gradually reshape industries, mediated by social, economic, and organizational factors—not just technical capabilities.
The episode also examines the speculative worldviews put forth by documents like AI 2027, which warn of AGI-induced catastrophe. Kapoor outlines two key disagreements: current AI systems are not technically on track to achieve general intelligence, and even capable systems require human and institutional choices to wield real-world power.
On policy, Kapoor emphasizes the importance of investing in AI complements—such as education, workforce training, and regulatory frameworks—to enable meaningful and equitable AI integration. He advocates for resilience-focused policies, including cybersecurity preparedness, unemployment protection, and broader access to AI tools.
The episode concludes with a discussion on recalibrating expectations. Kapoor urges policymakers to move beyond benchmark scores and collaborate with domain experts to measure AI's real impact. In a rapid-fire segment, he names the myth of AI predicting the future as the most misleading, and humorously imagines a superintelligent AI fixing global cybersecurity first if it ever emerged.
Episode Contributors
Sayash Kapoor is a computer science Ph.D. candidate at Princeton University's Center for Information Technology Policy. His research focuses on the societal impact of AI. He previously worked on AI in industry and academia at Facebook, Columbia University, and EPFL Switzerland. He is a recipient of a best paper award at ACM FAccT and an impact recognition award at ACM CSCW.
Nidhi Singh is a senior research analyst and program manager at Carnegie India. Her current research interests include data governance, artificial intelligence, and emerging technologies. Her work focuses on the implications of information technology law and policy from a Global Majority and Asian perspective.
Suggested Readings
AI as Normal Technology by Arvind Narayanan and Sayash Kapoor.
Every two weeks, Interpreting India brings you diverse voices from India and around the world to explore the critical questions shaping the nation's future.
We delve into how technology, the economy, and foreign policy intertwine to influence India's relationship with the global stage. Hosted by Carnegie scholars, Interpreting India, a Carnegie India production, provides insightful perspectives and cutting-edge analysis, tackling the defining questions that chart India's course through the next decade. Stay tuned for thought-provoking discussions, expert insights, and a deeper understanding of India's place in the world. Don't forget to subscribe, share, and leave a review to join the conversation and be part of Interpreting India's journey.
Join us at the Cato Institute for an in-depth fireside chat featuring Congressman Rich McCormick and Matt Mittelsteadt, Cato policy fellow in technology. This timely conversation will explore the evolving landscape of artificial intelligence (AI) and cybersecurity policy, and the state of AI in Congress.Join us for a discussion on the current state of AI governance at the federal and state levels, the proposal for a 10-year moratorium on state and local AI regulations (what it means, and what's at stake), and the long-term vision for responsible, innovation-friendly AI policy in the United States.Whether you're a policymaker, tech professional, academic, or simply interested in the future of AI regulation, this is a must-attend conversation on how to balance innovation, security, and civil liberties in the age of artificial intelligence. Hosted on Acast. See acast.com/privacy for more information.
In Episode 10 of the In AI We Trust? AI Literacy series, Angie Cooper's Call to Action for the Heartland, Miriam Vogel talks with Angie Cooper, President and Chief Operating Officer of Heartland Forward, to explore how artificial intelligence (AI) can accelerate economic growth across America's Heartland. The discussion follows Heartland Forward's recent annual Heartland Summit, the data-driven insights that inform their mission, and their partnership with Stemuli to create a first-of-its-kind AI literacy video game to promote AI learning for rural students. Angie stresses the importance of increasing access to AI by expanding affordable, high-speed internet and building trust with AI platforms through education initiatives and open conversations with teachers and employers. This episode explores how AI can be utilized as a tool to benefit small businesses, prepare students for the workforce, and advance jobs throughout the Heartland and beyond.
Margaret Woolley Bussey, Executive Director of the Utah Department of Commerce, joins host Jeanne Meserve to discuss Utah's establishment of an Office of AI Policy, Utah's thriving tech sector, and regulations and protections on AI. Bussey explains the office's three core objectives—encouraging innovation, protecting the public, and building a continuous learning function within government. The discussion highlights the office's successful work on mental health chatbots and its future plans to tackle deepfakes and AI companions. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit scsp222.substack.com
Proposals to regulate artificial intelligence (AI) at the state level continue to increase. Unfortunately, these proposals could potentially disrupt advances in this important technology, even if there is strong federal policy. This policy forum, which is related to an upcoming policy analysis on the topic, will explore the potential economic costs of state-level AI regulation, as well as the potential barriers it creates in the market for both consumers and innovators. Are there ways state AI policy conversations may discourage or encourage the important policy debates around AI innovation? Hosted on Acast. See acast.com/privacy for more information.
Kevin Werbach interviews Brenda Leong, Director of the AI division at boutique technology law firm ZwillGen, to explore how legal practitioners are adapting to the rapidly evolving landscape of artificial intelligence. Leong explains why meaningful AI audits require deep collaboration between lawyers and data scientists, arguing that legal systems have not kept pace with the speed and complexity of technological change. Drawing on her experience at Luminos.Law—one of the first AI-specialist law firms—she outlines how companies can leverage existing regulations, industry-specific expectations, and contextual risk assessments to build practical, responsible AI governance frameworks. Leong emphasizes that many organizations now treat AI oversight not just as a legal compliance issue, but as a critical business function. As AI tools become more deeply embedded in legal workflows and core operations, she highlights the growing need for cautious interpretation, technical fluency, and continuous adaptation within the legal field.
Brenda Leong is Director of ZwillGen's AI Division, where she leads legal-technical collaboration on AI governance, risk management, and model audits. Formerly Managing Partner at Luminos.Law, she pioneered many of the audit practices now used at ZwillGen. She serves on the Advisory Board of the IAPP AI Center, teaches AI law at IE University, and previously led AI and ethics work at the Future of Privacy Forum.
Transcript
AI Audits: Who, When, How... Or Even If?
Why Red Teaming Matters Even More When AI Starts Setting Its Own Agenda
0:00 - HeteroAwesomeness Month
13:02 - Elon Musk comes out against the Big Beautiful Bill
27:15 - US Open qualifying
30:14 - Mamet on Maher podcast
55:25 - James A. Gagliano, retired FBI supervisory special agent and a doctoral candidate in homeland security at St. John’s University, on the "unhealthy" direction of college campuses - "we are becoming the architects of our own demise"
01:11:51 - CA 400M champ Clara Adams stripped of title
01:26:04 - Chief Economist at First Trust Portfolios LP, Brian Wesbury, on the Big Beautiful Bill - "the last two years of government spending were some of the most irresponsible budgets we have ever seen" Follow Brian on X @wesbury
01:49:30 - Emeritus professor of law, Harvard Law School, Alan Dershowitz, shares details from his new book The Preventive State: The Challenge of Preventing Serious Harms While Preserving Essential Liberties. For more from Professor Dershowitz, check out his podcast “The Dershow” on Spotify, YouTube and iTunes
02:07:58 - Neil Chilson, former Chief Technologist for the FTC and currently Head of AI Policy at the Abundance Institute, on the risks, rewards and myths of AI. Check out Neil’s substack outofcontrol.substack.com
See omnystudio.com/listener for privacy information.