In this episode of In AI We Trust?, cohosts Miriam Vogel and Nuala O'Connor are joined by Adam Thierer, resident senior fellow on R Street's Tech & Innovation team. Adam weighs in on the Trump Administration's AI Action Plan, the importance of Congress in developing AI policy, and existing legal principles and practices that help define the new digital and AI age. They also discuss the mandate for AI literacy, as well as the necessity of regulating AI technologies in a transparent and trustworthy way that end users, and particularly consumers, can understand.
In Episode 262 of the House of #EdTech, Chris Nesi explores the timely and necessary topic of creating a responsible AI policy for your classroom. With artificial intelligence tools becoming more integrated into educational spaces, the episode breaks down why teachers need to set clear expectations and how they can do it with transparency, collaboration, and flexibility. Chris offers a five-part framework that educators can use to guide students toward ethical and effective AI use. Before the featured content, Chris reflects on a growing internal debate: is it time to step back from tech-heavy classrooms and return to more analog methods? He also shares three edtech recommendations, including tools for generating copyright-free images, discovering daily AI tool capabilities, and randomizing seating charts for better classroom dynamics.

Topics Discussed:
- EdTech Thought: Chris debates the "Tech or No Tech" question in modern classrooms
- EdTech Recommendations:
  - https://nomorecopyright.com/ - Upload an image to transform it into a unique, distinct version designed solely for inspiration and creative exploration.
  - https://www.shufflebuddy.com/ - Never worry about seating charts again. Foster a strong classroom community by frequently shuffling your seating charts while respecting your students' individual needs.
  - https://whataicandotoday.com/ - We've analysed 16,362 AI tools and identified their capabilities with OpenAI GPT-4.1, to bring you a free list of 83,054 tasks AI can do today.
- Why classrooms need a responsible AI policy
- A five-part framework to build your AI classroom policy:
  1. Define What AI Is (and Isn't)
  2. Clarify When and How AI Can Be Used
  3. Promote Transparency and Attribution
  4. Include Privacy and Tool Approval Guidelines
  5. Make It Collaborative and Flexible
- The importance of modeling digital citizenship and AI literacy
- Free editable AI policy template by Chris for grades K–12

Mentions:
- Mike Brilla – The Inspired Teacher podcast
- Jake Miller – Educational Duct Tape podcast // Educational Duct Tape Book
AI Applied: Covering AI News, Interviews and Tools - ChatGPT, Midjourney, Runway, Poe, Anthropic
Dive into the Executive AI Action Plan with Conor Grennan and Jaeden Schafer as they explore its potential impact on AI development, energy policies, and international competition. Discover how this policy could shape the future of AI in the U.S. and beyond.

AI Applied YouTube Channel: https://www.youtube.com/@AI-Applied-Podcast
Try AI Box: https://aibox.ai
Conor's AI Course: https://www.ai-mindset.ai/courses
Conor's AI Newsletter: https://www.ai-mindset.ai/
Jaeden's AI Hustle Community: https://www.skool.com/aihustle/about

Chapters:
00:00 The Executive AI Action Plan: A New Direction
02:51 Geopolitical Implications & Energy Focus
05:55 DEI in AI: Balancing Bias & Representation
08:31 Data Centers & AI Regulation
11:09 Market Dynamics & AI Safety
July 28th, 2025
Newt talks with Neil Chilson, current head of AI Policy at the Abundance Institute, about President Trump's "Winning the Race: America's AI Action Plan," which aims to accelerate AI innovation, build American AI infrastructure, and lead in international AI diplomacy and security. Chilson highlights the importance of AI for U.S. global dominance, emphasizing its potential in various sectors like healthcare and defense. Their conversation also touches on the strategic significance of Taiwan in chip production and the challenges of AI regulation, particularly in Europe. The Abundance Institute focuses on emerging technologies, advocating for a culture that embraces innovation and a regulatory environment that enables it. They conclude with optimism about AI's role in medicine and the potential for a future with greater technological advancements. See omnystudio.com/listener for privacy information.
From July 23, 2024: Alan Rozenshtein, Associate Professor at the University of Minnesota Law School and Senior Editor at Lawfare, and Matt Perault, the Director of the Center on Technology Policy at the University of North Carolina at Chapel Hill, sat down with Alexander Macgillivray, known to all as "amac," the former Principal Deputy Chief Technology Officer of the United States in the Biden Administration and General Counsel at Twitter. amac recently wrote a piece for Lawfare about making AI policy in a world of technological uncertainty, and Matt and Alan talked to him about how to do just that. To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute. Support this show http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.
ChatGPT: News on Open AI, MidJourney, NVIDIA, Anthropic, Open Source LLMs, Machine Learning
Discover how President Trump is influencing the next chapter of AI development. We evaluate the implications of new regulations and support structures. Tune in to get expert perspectives on what's coming next in AI governance. Special focus is given to tech company responses. Our discussion includes expert opinions and recent statements from key stakeholders.
Try AI Box: https://aibox.ai
AI Chat YouTube Channel: https://www.youtube.com/@JaedenSchafer
Join my AI Hustle Community: https://www.skool.com/aihustle/about
Janet Egan, Senior Fellow with the Technology and National Security Program at the Center for a New American Security; Jessica Brandt, Senior Fellow for Technology and National Security at the Council on Foreign Relations; Neil Chilson, Head of AI Policy at Abundance Institute; and Tim Fist, Director of Emerging Technology Policy at the Institute for Progress join Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, for a special version of Scaling Laws. This episode was recorded just hours after the release of the AI Action Plan. About 180 days ago, President Trump directed his administration to explore ways to achieve AI dominance. His staff has attempted to do just that. This group of AI researchers dives into the plan's extensive recommendations and explores what may come next. Find Scaling Laws on the Lawfare website, and subscribe to never miss an episode. To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute. Support this show http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.
Artificial intelligence is changing the way we do business, but it's still the Wild West. This week Dan and Donnie welcome Bennett Borden, CEO of Clarion AI Partners, who shares his expertise as an AI lawyer and data scientist. He covers how to responsibly implement policies for using AI — which he calls the most transformative technology since electricity — in pest and lawn companies.
Guest: Bennett Borden, CEO, Clarion AI Partners
Hosts: Dan Gordon, PCO Bookkeepers & M&A Specialists; Donnie Shelton, Triangle Home Services
White House Senior Policy Advisor for AI Sriram Krishnan joined Bloomberg's Caroline Hyde and Ed Ludlow to discuss the latest on AI policy and what to expect as it evolves. See omnystudio.com/listener for privacy information.
President Trump unveiled his approach to the development of AI. Surrounded by some of the biggest names in tech, he signed three executive orders. One targets what Trump called "ideological bias" in AI chatbots, another aims to make it easier to build massive AI data centers and the third encourages the export of American AI tech. Amna Nawaz discussed the implications with Will Oremus. PBS News is supported by - https://www.pbs.org/newshour/about/funders
Key Takeaways for local government AI:
- Why policy might not be the best starting point... start doing!
- Get in the figurative sandbox to play and test AI with small teams for real outcomes.
- Official policy documents and templates can be found (and copied!) via the GovAI Coalition.

Featured Guest: Parth Shah – CEO and Co-Founder, Polimorphic
Voices in Local Government Podcast Hosts: Joe Supervielle and Angelica Wedell

Resources:
- ICMA Annual Conference, October 25-29 in Tampa. Multiple AI trainings on the ICMA Learning Lab.
- AI policy, templates, and more tools from the GovAI Coalition
- Voices in Local Gov Episode: GovAI Coalition - Your Voice in Shaping the Future of AI
In the United States, state legislatures are key players in shaping artificial intelligence policy, as lawmakers attempt to navigate a thicket of politics surrounding complex issues ranging from AI safety, deepfakes, and algorithmic discrimination to workplace automation and government use of AI. The decision by the US Senate to exclude a moratorium on the enforcement of state AI laws from the budget reconciliation package passed by Congress and signed by President Donald Trump over the July 4 weekend leaves the door open for more significant state-level AI policymaking. To take stock of where things stand on state AI policymaking, Tech Policy Press associate editor Cristiano Lima-Strong spoke to two experts: Scott Babwah Brennen, director of NYU's Center on Technology Policy, and Hayley Tsukayama, associate director of legislative activism at the Electronic Frontier Foundation (EFF).
I think the 2003 invasion of Iraq has some interesting lessons for the future of AI policy. (Epistemic status: I've read a bit about this, talked to AIs about it, and talked to one natsec professional about it who agreed with my analysis (and suggested some ideas that I included here), but I'm not an expert.)

For context, the story is: Iraq was sort of a rogue state after invading Kuwait and then being repelled in 1990-91. After that, they violated the terms of the ceasefire, e.g. by ceasing to allow inspectors to verify that they weren't developing weapons of mass destruction (WMDs). (For context, they had previously developed biological and chemical weapons, and used chemical weapons in war against Iran and against various civilians and rebels.) So the US was sanctioning and intermittently bombing them. After the war, it became clear that Iraq actually wasn't producing [...]

First published: July 10th, 2025
Source: https://www.lesswrong.com/posts/PLZh4dcZxXmaNnkYE/lessons-from-the-iraq-war-about-ai-policy
Narrated by TYPE III AUDIO.
In this episode, Lightspeed Partner Michael Mignano sits down with the Foundation for American Innovation's Chief Economist Sam Hammond to talk AI policy. Sam breaks down the key infrastructure needed for AI development and how policymakers are adapting to rapid technological change. He also shares insights on AI training data and fair use, workforce disruption, and how, when it comes to AI, everything can change in just a few months.

Episode Chapters:
00:00 Introduction
00:55 Meet Sam Hammond: Background and Role
03:06 The Big AI Policy Issues
05:09 Energy and Chip Policy
06:47 Fair Use and Copyright in AI
13:37 The Urgency of AI Regulation
17:03 Potential AI Crisis and Legislative Response
20:25 Challenges in AI Regulation
21:39 Acceleration vs. Regulation in AI Development
22:34 AI Safety and National Security
23:51 Fair Use and Copyright in AI Training Data
25:39 AI-Induced Labor Disruptions
33:36 State-Level AI Regulation
36:02 Global Cooperation on AI Safety
37:29 Advice for AI Startups
38:34 Optimism for AI and Policy Advancements
41:07 Conclusion

Stay in touch:
www.lsvp.com
X: https://twitter.com/lightspeedvp
LinkedIn: https://www.linkedin.com/company/lightspeed-venture-partners/
Instagram: https://www.instagram.com/lightspeedventurepartners/
Subscribe on your favorite podcast app: generativenow.co
Email: generativenow@lsvp.com

The content here does not constitute tax, legal, business or investment advice or an offer to provide such advice, should not be construed as advocating the purchase or sale of any security or investment or a recommendation of any company, and is not an offer, or solicitation of an offer, for the purchase or sale of any security or investment product. For more details please see lsvp.com/legal.
This week on “Paul, Weiss Waking Up With AI,” Katherine Forrest and Anna Gressel discuss the Senate's removal of a proposed AI moratorium from the “One Big Beautiful Bill Act,” and examine new state-level AI legislation in Colorado, Texas, New York, California, and others.
Learn more about Paul, Weiss's Artificial Intelligence practice: https://www.paulweiss.com/industries/artificial-intelligence
The episode begins with Kapoor explaining the origins of AI Snake Oil, tracing it back to his PhD research at Princeton on AI's limited predictive capabilities in social science domains. He shares how he and co-author Arvind Narayanan uncovered major methodological flaws in civil war prediction models, which later extended to other fields misapplying machine learning.

The conversation then turns to the disconnect between academic findings and media narratives. Kapoor critiques the hype cycle around AI, emphasizing how its real-world adoption is slower, more fragmented, and often augmentative rather than fully automating human labor. He cites the enduring demand for radiologists as a case in point.

Kapoor introduces the concept of "AI as normal technology," which rejects both the notion of imminent superintelligence and the dismissal of AI as a passing fad. He argues that, like other general-purpose technologies (electricity, the internet), AI will gradually reshape industries, mediated by social, economic, and organizational factors, not just technical capabilities.

The episode also examines the speculative worldviews put forth by documents like AI 2027, which warn of AGI-induced catastrophe. Kapoor outlines two key disagreements: current AI systems are not technically on track to achieve general intelligence, and even capable systems require human and institutional choices to wield real-world power.

On policy, Kapoor emphasizes the importance of investing in AI complements, such as education, workforce training, and regulatory frameworks, to enable meaningful and equitable AI integration. He advocates for resilience-focused policies, including cybersecurity preparedness, unemployment protection, and broader access to AI tools.

The episode concludes with a discussion on recalibrating expectations. Kapoor urges policymakers to move beyond benchmark scores and collaborate with domain experts to measure AI's real impact. In a rapid-fire segment, he names the myth of AI predicting the future as the most misleading and humorously imagines a superintelligent AI fixing global cybersecurity first if it ever emerged.

Episode Contributors:
Sayash Kapoor is a computer science Ph.D. candidate at Princeton University's Center for Information Technology Policy. His research focuses on the societal impact of AI. He previously worked on AI in industry and academia at Facebook, Columbia University, and EPFL Switzerland. He is a recipient of a best paper award at ACM FAccT and an impact recognition award at ACM CSCW.
Nidhi Singh is a senior research analyst and program manager at Carnegie India. Her current research interests include data governance, artificial intelligence, and emerging technologies. Her work focuses on the implications of information technology law and policy from a Global Majority and Asian perspective.

Suggested Readings: AI as Normal Technology by Arvind Narayanan and Sayash Kapoor.

Every two weeks, Interpreting India brings you diverse voices from India and around the world to explore the critical questions shaping the nation's future.
We delve into how technology, the economy, and foreign policy intertwine to influence India's relationship with the global stage. Hosted by Carnegie scholars, Interpreting India, a Carnegie India production, provides insightful perspectives and cutting-edge analysis, tackling the defining questions that chart India's course through the next decade. Stay tuned for thought-provoking discussions, expert insights, and a deeper understanding of India's place in the world. Don't forget to subscribe, share, and leave a review to join the conversation and be part of Interpreting India's journey.
Join us at the Cato Institute for an in-depth fireside chat featuring Congressman Rich McCormick and Matt Mittelsteadt, Cato policy fellow in technology. This timely conversation explores the evolving landscape of artificial intelligence (AI) and cybersecurity policy, and the state of AI in Congress. The discussion covers the current state of AI governance at the federal and state levels, the proposal for a 10-year moratorium on state and local AI regulations (what it means, and what's at stake), and the long-term vision for responsible, innovation-friendly AI policy in the United States. Whether you're a policymaker, tech professional, academic, or simply interested in the future of AI regulation, this is a must-attend conversation on how to balance innovation, security, and civil liberties in the age of artificial intelligence. Hosted on Acast. See acast.com/privacy for more information.
"You can't have good Cyber Security without Economic Security.'' Join us this week on The Tech Leaders Podcast, where Gareth Davies sits down with The Rt Hon Stephen McPartland, former MP and author of the McPartland Review into Cyber Security. Stephen talks about his time in Parliament, the impact of AI on Cyber Security, and why the UK is both uniquely well prepared and uniquely vulnerable. On this episode, Stephen and Gareth discuss what it's like to work with Prime Ministers, how to prevent the widespread adoption of AI leading to “digital exclusion”, why we need to automate processes rather than jobs, and how a Scouser became Tory MP for Stevenage…Timestamps: Intro and good leadership (1:33) Proudest achievements and lessons learned in Politics (7:54) Ministerial role, and working with Prime Ministers (10:09) Cyber Security and the Digital Economy (17:50) AI, Government and Cyber Security (23:23) Fostering a Cyber workforce (29:35) LLMs and Agentic AI (33:14) Cryptocurrencies and Post-Quantum Cryptography (38:28) AI concerns – Digital exclusion and Rules of Engagement (46:25) Stephen's advice to his younger self (49:00) https://www.bedigitaluk.com/
In Episode 10 of the In AI We Trust? AI Literacy series, Angie Cooper's Call to Action for the Heartland, Miriam Vogel talks with Angie Cooper, President and Chief Operating Officer of Heartland Forward, to explore how artificial intelligence (AI) can accelerate economic growth across America's Heartland. The discussion follows Heartland Forward's recent annual Heartland Summit, the data-driven insights that inform their mission, and their partnership with Stemuli to create a first-of-its-kind AI literacy video game to promote AI learning for rural students. Angie stresses the importance of increasing access to AI by expanding affordable, high-speed internet and building trust with AI platforms through education initiatives and open conversations with teachers and employers. This episode explores how AI can be utilized as a tool to benefit small businesses, prepare students for the workforce, and advance jobs throughout the Heartland and beyond.
Margaret Woolley Bussey, Executive Director of the Utah Department of Commerce, joins host Jeanne Meserve to discuss Utah's establishment of an Office of AI Policy, Utah's thriving tech sector, and regulations and protections on AI. Bussey explains the office's three core objectives—encouraging innovation, protecting the public, and building a continuous learning function within government. The discussion highlights the office's successful work on mental health chatbots and its future plans to tackle deepfakes and AI companions. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit scsp222.substack.com
Ann, CIT's Quality Assurance Analyst, joins us to discuss the critical topic of AI policy. With the rapid evolution and adoption of AI technology, organizations need to establish clear guidelines and ethical considerations. Ann shares insights on how AI can be leveraged within different industries, defining AI in the context of policy, and the importance of collaborative efforts in policy creation. Learn about the critical foundational elements, the role of training, transparency, and how to ensure ethical use in your AI initiatives. Whether your company is just starting or looking to refine its AI policy, this episode offers invaluable guidance and expert advice.

00:00 Introduction to AI Policy
00:23 Defining AI in Policy Context
03:24 Collaborative Efforts in Policy Making
06:37 Effective Communication of AI Policies
08:27 Ethical Considerations in AI Policy
13:49 Transparency in AI Use
25:53 Starting Points for Creating AI Policies
31:03 Common Missteps and Foundational Elements
35:10 Conclusion and Final Thoughts
Proposals to regulate artificial intelligence (AI) at the state level continue to increase. Unfortunately, these proposals could disrupt advances in this important technology, even where strong federal policy exists. This policy forum, which is related to an upcoming policy analysis on the topic, will explore the potential economic costs of state-level AI regulation as well as the barriers it may create in the market for both consumers and innovators. How might state AI policy conversations discourage, or encourage, the important policy conversations around AI innovation? Hosted on Acast. See acast.com/privacy for more information.
Kevin Werbach interviews Brenda Leong, Director of the AI division at boutique technology law firm ZwillGen, to explore how legal practitioners are adapting to the rapidly evolving landscape of artificial intelligence. Leong explains why meaningful AI audits require deep collaboration between lawyers and data scientists, arguing that legal systems have not kept pace with the speed and complexity of technological change. Drawing on her experience at Luminos.Law—one of the first AI-specialist law firms—she outlines how companies can leverage existing regulations, industry-specific expectations, and contextual risk assessments to build practical, responsible AI governance frameworks. Leong emphasizes that many organizations now treat AI oversight not just as a legal compliance issue, but as a critical business function. As AI tools become more deeply embedded in legal workflows and core operations, she highlights the growing need for cautious interpretation, technical fluency, and continuous adaptation within the legal field.

Brenda Leong is Director of ZwillGen's AI Division, where she leads legal-technical collaboration on AI governance, risk management, and model audits. Formerly Managing Partner at Luminos.Law, she pioneered many of the audit practices now used at ZwillGen. She serves on the Advisory Board of the IAPP AI Center, teaches AI law at IE University, and previously led AI and ethics work at the Future of Privacy Forum.

Related:
- Transcript
- AI Audits: Who, When, How...Or Even If?
- Why Red Teaming Matters Even More When AI Starts Setting Its Own Agenda
A 2019 executive order was a significant landmark in the nation's regulation and management of artificial intelligence technologies in the government, but it was just the first of many. Among other things, that first executive order on AI focused on building research and development and the nation's AI workforce, and on those topics it shared a lot in common with five subsequent orders on AI. That first order is one of 25 significant moments Federal News Network is marking this year as part of our 25th anniversary. Federal News Network's Jared Serbu has been writing about it this week, and he's here with more. See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
In this episode of In AI We Trust?, Miriam & Nuala speak with Common Sense Senior Director of AI Programs Robbie Torney to discuss AI's impact on children, families, and schools, focusing on AI literacy, which builds upon media and digital literacy. Robbie advises parents to engage in tech conversations with curiosity and empathy and encourages educators to view AI as a tool to enhance learning, noting students' prevalent use. Common Sense Media provides AI training and risk assessments for educators. Torney aims to bridge digital divides and supports AI implementation in underserved schools, highlighting risks of AI companions for vulnerable youth and developing resources for school AI readiness and risk assessments. The episode stresses the importance of AI literacy and critical thinking to navigate AI's complexities and minimize harm.

The EqualAI AI Literacy podcast series builds on In AI We Trust?'s global reach, focusing specifically on AI literacy. Featuring prominent leaders in the technology, education, and governance fields, this special series provides listeners with valuable insights and discussions around AI's impact on society, leading efforts in this area of AI literacy, and how listeners can benefit from these experts and tools.

Related Resources:
- Episode Blog Post
- AI Risk Assessments
- AI Basics for K–12 Teachers
- Parents' Ultimate Guide to AI Companions and Relationships
- 2025: The Common Sense Census
- 2024: The Dawn of the AI Era
In Episode 28 of Season 5 of Driven by Data: The Podcast, Kyle Winterbottom was joined by McKinley Hyden, Director of Data Value & Strategy at the Financial Times, where they discuss McKinley's journey from a background in literature to a career in data, the role of the Financial Times in providing quality journalism, and the importance of data in driving strategic decisions. McKinley shares insights on the challenges of valuing data, the need for cultural change in organisations to embrace data as an asset, and the significance of upskilling in AI. The conversation also touches on the importance of effective communication and knowledge management in data analytics, as well as the future of AI in business. They also explore the profound impact of technology, particularly AI, on society, education, and the workforce, discussing the moral implications of AI, the need for responsible use, and the importance of upskilling to navigate the changing landscape. The conversation emphasizes the necessity of creating effective AI policies to ensure ethical practices and the potential for job transformation in the face of technological advancements.

00:00 Introduction to Data and Storytelling
02:14 Understanding the Financial Times
04:19 The Role of Data Value and Strategy
09:33 Upskilling in Data and AI
10:44 Valuing Data as an Asset
14:12 Overcoming Resistance to Change
19:25 Defining Value in Data
22:20 Communications and Knowledge Management
27:31 The Future of AI in Business
28:44 The Impact of Technology on Society
30:53 Navigating the Moral Hazards of AI
32:44 The Future of Education in an AI World
35:19 Job Transformation and the Role of AI
41:47 Upskilling for the AI Era
44:42 Creating an AI Policy for Responsible...
AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store
Key discussions include OpenAI's legal battle over retaining user conversation data, raising crucial questions about user privacy and data retention precedents. The sources also address the evolving social impact of AI, with concerns about deep human-AI emotional bonds and the need for regulation in this area. Additionally, they showcase AI's diverse applications, from historical research aiding in the re-dating of the Dead Sea Scrolls to anticipated overhauls of user interfaces like Apple's 'Liquid Glass' design. Challenges remain, as Apple research suggests current AI models struggle with true logical reasoning, and the rollout of autonomous systems faces public backlash, as seen in protests against Waymo robotaxis. Finally, the podcast points to the growing influence of AI in various sectors, including major investments by companies like Meta in AI development and its increasing adoption by billionaires and institutions such as Ohio State University.
In this episode, Matt Perault, Head of AI Policy at a16z, discusses the firm's approach to AI regulation, focused on protecting "little tech" startups from regulatory capture that could entrench big tech incumbents. The conversation covers a16z's core principle of regulating harmful AI use rather than the development process, exploring key policy initiatives like the Raise Act and California's SB 813. Perault addresses critical challenges including setting appropriate regulatory thresholds, transparency requirements, and designing dynamic frameworks that balance innovation with safety. The discussion examines both areas of agreement and disagreement within the AI policy landscape, particularly around scaling laws, regulatory timing, and the concentration of AI capabilities.

Disclaimer: This information is for general educational purposes only and is not a recommendation to buy, hold, or sell any investment or financial product. Turpentine is an acquisition of a16z Holdings, L.L.C., and is not a bank, investment adviser, or broker-dealer. This podcast may include paid promotional advertisements; individuals and companies featured or advertised during this podcast are not endorsing AH Capital or any of its affiliates (including, but not limited to, a16z Perennial Management L.P.). Similarly, Turpentine is not endorsing affiliates, individuals, or any entities featured on this podcast. All investments involve risk, including the possible loss of capital. Past performance is no guarantee of future results and the opinions presented cannot be viewed as an indicator of future performance. Before making decisions with legal, tax, or accounting effects, you should consult appropriate professionals. Information is from sources deemed reliable on the date of publication, but Turpentine does not guarantee its accuracy.

SPONSORS:
Oracle Cloud Infrastructure: Oracle Cloud Infrastructure (OCI) is the next-generation cloud platform that delivers better, cheaper, and faster solutions for your infrastructure, database, application development, and AI needs. Experience up to 50% savings on compute, 70% on storage, and 80% on networking with OCI's high-performance environment—try it for free with zero commitment at https://oracle.com/cognitive
The AGNTCY: The AGNTCY is an open-source collective dedicated to building the Internet of Agents, enabling AI agents to communicate and collaborate seamlessly across frameworks. Join a community of engineers focused on high-quality multi-agent software and support the initiative at https://agntcy.org/?utm_campaign=fy25q4_agntcy_amer_paid-media_agntcy-cognitiverevolution_podcast&utm_channel=podcast&utm_source=podcast
NetSuite by Oracle: NetSuite by Oracle is the AI-powered business management suite trusted by over 41,000 businesses, offering a unified platform for accounting, financial management, inventory, and HR. Gain total visibility and control to make quick decisions and automate everyday tasks—download the free ebook, Navigating Global Trade: Three Insights for Leaders, at https://netsuite.com/cognitive

PRODUCED BY: https://aipodcast.ing
Can differing global approaches to AI regulation and investment work together, or are we headed toward fragmented, siloed systems? How can AI governance in developing nations be supported as part of regional aid and security agendas? What challenges does Australia face in regulating AI without a national bill of rights or federal human rights charter? Should governments mandate the inclusion of human oversight in all AI-powered decisions? In this episode, Sarah Vallee and Maria O'Sullivan join David Andrews to talk about how AI is impacting national security, with a focus on AI governance models and mass surveillance.

Maria O'Sullivan is an Associate Professor at Deakin Law School. She's a member of the Deakin Cyber Research and Innovation Centre.
Sarah Vallee is a specialist in AI Policy and Governance. She's a Fellow at the UTS Human Technology Institute, sponsored by the French Ministry of Foreign Affairs.
David Andrews is Senior Manager, Policy & Engagement at the ANU National Security College.

Show notes:
- Transcript
- NSC academic programs – find out more
- Article 8: respect for your private and family life

We'd love to hear from you! Send in your questions, comments, and suggestions to NatSecPod@anu.edu.au. You can tweet us @NSC_ANU and be sure to subscribe so you don't miss out on future episodes. Hosted on Acast. See acast.com/privacy for more information.
0:00 - HeteroAwesomeness Month
13:02 - Elon Musk comes out against the Big Beautiful Bill
27:15 - US Open qualifying
30:14 - Mamet on Maher podcast
55:25 - James A. Gagliano, retired FBI supervisory special agent and a doctoral candidate in homeland security at St. John's University, on the "unhealthy" direction of college campuses - "we are becoming the architects of our own demise"
01:11:51 - CA 400M champ Clara Adams stripped of title
01:26:04 - Chief Economist at First Trust Portfolios LP, Brian Wesbury, on the Big Beautiful Bill - "the last two years of government spending were some of the most irresponsible budgets we have ever seen." Follow Brian on X @wesbury
01:49:30 - Emeritus professor of law, Harvard Law School, Alan Dershowitz, shares details from his new book The Preventive State: The Challenge of Preventing Serious Harms While Preserving Essential Liberties. For more from Professor Dershowitz, check out his podcast "The Dershow" on Spotify, YouTube and iTunes
02:07:58 - Neil Chilson, former Chief Technologist for the FTC and currently Head of AI Policy at the Abundance Institute, on the risks, rewards and myths of AI. Check out Neil's substack: outofcontrol.substack.com
See omnystudio.com/listener for privacy information.
It's YOUR time to #EdUp!
In this episode, part of our Academic Integrity Series, sponsored by Pangram Labs:
YOUR guest is Christian Moriarty, Professor of Ethics & Law, Applied Ethics Institute, St. Petersburg College
YOUR cohost is Bradley Emi, Cofounder & CTO, Pangram Labs
YOUR host is Elvin Freytes

How does Christian define academic integrity from both legal & philosophical perspectives? Why do students often "cheat" even when they have good intentions & strong moral values? What is the role of faculty in supporting students to act with integrity & resist temptation? How can institutions implement effective AI policies that respect different teaching contexts? Why is Christian predicting a return to in-class writing or required keystroke tracking software?

Topics include:
- The tension between rules-based & values-based approaches to academic integrity
- The importance of empathy in understanding why students make poor choices
- The "stoplight approach" to AI use policies (green/yellow/red options for different contexts)
- Finding the balance between trusting students & verifying their work
- The challenges of time management for community college students
- The value of specialized academic integrity offices in educational institutions
- Why "difficulty is part of the process" in genuine learning & skill development
- The connection between integrity & asking for help when needed

Listen in to #EdUp!

Do YOU want to accelerate YOUR professional development? Do YOU want to get exclusive early access to ad-free episodes, extended episodes, bonus episodes, original content, invites to special events, & more? Then BECOME A SUBSCRIBER TODAY - $19.99/month or $199.99/year (Save 17%)! Want to get YOUR organization to pay for YOUR subscription? Email EdUp@edupexperience.com

Thank YOU so much for tuning in. Join us on the next episode for YOUR time to EdUp! Connect with YOUR EdUp Team - Elvin Freytes & Dr. Joe Sallustio ● Join YOUR EdUp community at The EdUp Experience. We make education YOUR business!
Canada has a chance to lead on AI policy and data governance at the G7 Leaders' Summit. Learn more about your ad choices. Visit megaphone.fm/adchoices
In this special episode of In AI We Trust?, recorded live at the launch of the EqualAI C-Suite Summit in Washington, D.C., host Miriam Vogel sits down with the dynamic Van Jones — acclaimed social entrepreneur, innovator, and tech evangelist. Together, they dive into a thought-provoking conversation about how AI can be a transformative force for opportunity creation. With his trademark clarity and conviction, Van offers a hopeful vision for the future of AI, one that empowers communities and drives societal progress, but only if we lead with the right values and policies at this critical moment.
Related Resources: Dream Machine AI x Library Project
SJ Show Notes:
Follow Dr. Makis HERE: https://substack.com/@makismd
https://x.com/MakisMD
makisw79@yahoo.com
Please support Shannon's independent network with your donation HERE: https://www.paypal.com/donate/?hosted_button_id=MHSMPXEBSLVTS

Support Our Sponsors:
You can get 20% off your first order of Blackout Coffee! Just head to http://blackoutcoffee.com/joy and use code joy at checkout.
The Satellite Phone Store has everything you need when the POWER goes OUT. Use the promo code JOY for 10% off your entire order TODAY! www.SAT123.com/Joy
Get 45% OFF Native Path HYDRATE today! Special exclusive deal for the Joy audience only! Check it out HERE: www.nativepathhydrate.com/joy
Colonial Metals Group is the company Shannon trusts for all her metals purchases! Set up a SAFE & Secure IRA or 401k with a company who shares your values! Learn more HERE: https://colonialmetalsgroup.com/joy
Please consider Dom Pullano of PCM & Associates! He has been Shannon's advisor for over a decade and would love to help you grow! Call his toll free number today: 1-800-536-1368. Or visit his website at https://www.pcmpullano.com

Shannon's Top Headlines May 22, 2025:
- Trump's 'Big Beautiful Bill' would create 'unfettered abuse' of AI, 141 high-profile orgs warn in letter to Congress: Business Insider
- When it Comes to AI Policy, Congress Shouldn't Cut States off at the Knees: https://garymarcus.substack.com/p/when-it-comes-to-ai-policy-congress?r=fuu7w&utm_medium=ios
- Ron Johnson: The Ugly Truth About Trump's Big Beautiful Bill: https://x.com/SenRonJohnson/status/1923057940908454239
- WATCH: Dr. Peter McCullough's Truth Bombs In Testimony Yesterday: https://x.com/MJTruthUltra/status/1925271018387763352
- Dr. William Makis: Scott Adams reveals his Prostate Cancer and our attempts to beat it - my response to Scott's Podcast: https://substack.com/home/post/p-163941944
- Renowned Data Analyst Warns Excess Deaths Are Surging 'Off the Charts': https://substack.com/home/post/p-162167578

Stop this bill. Shut the government down. Because it is becoming increasingly clear that every penny given to these psychopaths can and will be used against we the people.

Hidden deep within Trump's budget monstrosity is a clause which threatens every American, our Constitutional Republic, and humanity. Trump's 'big beautiful budget bill' sneaks in a section which prohibits states from interfering with AI programs and development, as well as machine decision making, for ten years:

"No state or political subdivision may enforce any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems during the 10-year period beginning on the date of the enactment of this Act."

There is SO much wrong with this and frankly, it's the cherry on top of a dumpster fire of a bill which betrays nearly every promise made by Trump in the 2024 election. There is nothing good about this bill and in my opinion the best we can do is dump it completely and shut down the government. Interestingly, 'shutting down the government' actually KEEPS the essential spending in place (like Social Security benefits and Medicare) while suspending all the grift and billionaire benefits. It's exactly what we need.

That's the bad news ... but there is GOOD news too! Today we will talk to a frontline medical freedom warrior who is actually saving lives through life saving cancer treatments. Dr. William Makis is living proof that there ARE solutions out there and I cannot wait to talk to him again. We discuss this and more today on the SJ Show!

Join the Rumble LIVE chat and follow my Rumble Page HERE so you never miss an episode: https://rumble.com/c/TheShannonJoyShow
See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
In this final part of the Katie Smith trilogy on the Customer Success Playbook, we enter the brave new world of AI and marketing. Host Kevin Metzger explores the promises and pitfalls of generative tools with Katie, who brings her thoughtful and grounded take on how businesses can embrace automation while fiercely protecting their authenticity. If you've ever wondered how to use AI effectively without sounding like a robot, this one's for you.

Detailed Analysis: AI isn't going anywhere, and that's exactly why it's time to get strategic. Katie Smith walks listeners through the essentials of adopting AI in a way that enhances rather than dilutes your marketing. The episode kicks off with her advice on building internal AI policies: what your team will use AI for, what it won't, and how to protect sensitive data along the way. Her mantra? Be proactive, not reactive.

Katie also shares her go-to applications of AI in the creative process:
- Use AI as a co-creator to spark content ideas and draft early versions
- Train AI with your brand's voice and tone to maintain consistency
- Stay vigilant about hallucinations and homogenized content

She emphasizes the importance of human review at every stage, especially when publishing customer-facing materials. AI is a brilliant assistant, but not a final authority.

The discussion evolves into deeper insights on lead generation and real-time responsiveness. Kevin adds his own tricks for applying brand tone through prompt engineering and post-processing, offering a compelling use case that blends Claude, GPTs, and content repurposing magic.

Finally, the two zoom out to a broader question: How do you optimize your brand for AI-driven search and recommendations? It's an emerging discipline with massive implications, and Katie teases what's to come from leaders in B2B and digital strategy.

Whether you're testing the AI waters or already building internal GPTs, Katie's thoughtful approach provides the guardrails needed to preserve quality and trust in a world of automation.

Now you can interact with us directly by leaving a voice message at https://www.speakpipe.com/CustomerSuccessPlaybook
Please Like, Comment, Share and Subscribe.

You can also find the CS Playbook Podcast:
YouTube - @CustomerSuccessPlaybookPodcast
Twitter - @CS_Playbook

You can find Kevin at:
Metzgerbusiness.com - Kevin's personal website
Kevin Metzger on LinkedIn

You can find Roman at:
Roman Trebon on LinkedIn
Please enjoy this encore episode of Caveat. This week on Caveat, Dave and Ben are thrilled to welcome back N2K's own Ethan Cook for the second installment of our newest policy deep dive segment. As a trusted expert in law, privacy, and surveillance, Ethan is joining the show regularly to provide in-depth analysis on the latest policy developments shaping the cybersecurity and legal landscape. While this show covers legal topics, and Ben is a lawyer, the views expressed do not constitute legal advice. For official legal advice on any of the topics we cover, please contact your attorney. Please take a moment to fill out an audience survey! Let us know how we are doing! Policy Deep Dive In this Caveat Policy Deep Dive, we turn our focus to the evolving landscape of artificial intelligence (AI) policy. This month, the Caveat team delves into the key issues shaping political discourse around AI, exploring state-led initiatives, the lack of significant federal action, and the critical areas that still require stronger oversight, offering an in-depth analysis of AI legislation, the varied approaches across states, and the pressing challenges that demand federal attention. Get the weekly Caveat Briefing delivered to your inbox. Like what you heard? Be sure to check out and subscribe to our Caveat Briefing, a weekly newsletter available exclusively to N2K Pro members on N2K CyberWire's website. N2K Pro members receive our Thursday wrap-up covering the latest in privacy, policy, and research news, including incidents, techniques, compliance, trends, and more. This week's Caveat Briefing covers the story of the Paris AI summit, where French President Emmanuel Macron and EU digital chief Henna Virkkunen announced plans to reduce regulatory barriers to support AI innovation. The summit highlighted the growing pressure on Europe to adopt a lighter regulatory touch in order to remain competitive with the U.S. and China, while also addressing concerns about potential risks and the impact on workers as AI continues to evolve. Curious about the details? Head over to the Caveat Briefing for the full scoop and additional compelling stories. Got a question you'd like us to answer on our show? You can send your audio file to caveat@thecyberwire.com. Hope to hear from you. Learn more about your ad choices. Visit megaphone.fm/adchoices
It's YOUR time to #EdUp!
In this episode, part of our Academic Integrity Series, sponsored by Pangram Labs:
YOUR guest is Dr. Elizabeth Skomp, Provost & Vice President of Academic Affairs, Stetson University
YOUR cohost is Bradley Emi, Cofounder & CTO, Pangram Labs
YOUR host is Elvin Freytes

How does Dr. Skomp define academic integrity & its student-led honor system at Stetson? What strategies does Stetson use with their honor pledge & code? How does Stetson integrate AI tools ethically with their 3 syllabus templates? What approach does faculty take when considering AI in course design? Why does the university focus on "learning opportunities" rather than punitive measures?

Topics include:
- Creating a student-led, faculty-advised honor system
- The importance of faculty modeling academic integrity
- Developing flexible AI policies that preserve academic freedom
- Using AI disclosure as a trust-building approach
- Faculty development for AI-adapted teaching methods
- The "Hatter Ready" initiative connecting experiential learning & academic integrity

Listen in to #EdUp!

Do YOU want to accelerate YOUR professional development? Do YOU want to get exclusive early access to ad-free episodes, extended episodes, bonus episodes, original content, invites to special events, & more? Then BECOME A SUBSCRIBER TODAY - $19.99/month or $199.99/year (Save 17%)! Want to get YOUR organization to pay for YOUR subscription? Email EdUp@edupexperience.com

Thank YOU so much for tuning in. Join us on the next episode for YOUR time to EdUp! Connect with YOUR EdUp Team - Elvin Freytes & Dr. Joe Sallustio ● Join YOUR EdUp community at The EdUp Experience! We make education YOUR business!
(0:00) Intro
(1:26) About the podcast sponsor: The American College of Governance Counsel
(2:13) Start of interview
(2:45) Robin's origin story
(3:55) About the AI Law and Innovation Institute
(5:02) On AI governance: "AI is critical for boards, both from a risk management perspective and from a regulatory management perspective." Boards should: 1) get regular updates on safety and regulatory issues, 2) document the attention that they're paying to it to have a record of meaningful oversight, and 3) most importantly, not just rely on feedback from the folks in charge of the AI tools. They need a red team of skeptics.
(9:58) Boards and AI Ethics. Robin's Rules of Order for AI. Rule #1: Distinguish Real-time Dangers from Distant Dangers
(15:21) Antitrust Concerns in AI
(18:10) Geopolitical Tensions in the AI Race (US v China). "Winning the AI race is essential for the US, both from an economic and from a national security perspective."
(23:30) Regulatory Framework for AI. "It really isn't one size fits all for AI regulation. Europe, for the most part, is a consumer nation of AI. We are a producer nation of AI, and California in particular is a producer of AI." "There must be strong partnerships in this country between those developing cutting-edge technology and the government—because while the government holds the power, Silicon Valley holds the expertise to understand what this technology truly means."
(26:46) California's AI Regulation Efforts. "I do believe that over time, at some point, we will need a more comprehensive system that probably overshadows what the individual states will do, or at least cabins to some extent what the individual states will do. It will be a problem to have 50 different approaches to this, or even 20 different approaches to this within the country."
(29:03) AI in the Financial Industry
(33:13) Future Trends in AI. "I think the key for boards and companies is to be alert and to be nimble" and "as hard as it is, brush up a bit on your math and science, if that's not your area of expertise." "My point is simply, you have to understand these things under the hood if you're going to be able to think about what to do with them."
(35:43) Her new book "AI vs IP. Rewriting Creativity" (coming out July 2025)
(37:12) Key Considerations for Board Members: "It's about being nimble, staying proactive and having a proven track record of it. Most importantly, you need a red team approach."
(38:26) Books that have greatly influenced her life: Rashi's Commentary on the Bible; the Talmud
(39:06) Her mentors: Professor Robert Weisberg and Professor Gerald Gunther
(41:39) Quotes that she thinks of often or lives her life by: "The cover-up's always worse than the crime."
(42:34) An unusual habit or an absurd thing that she loves

Robin Feldman is the Arthur J. Goldberg Distinguished Professor of Law, Albert Abramson '54 Distinguished Professor of Law Chair, and Director of the Center for Innovation at UC Law SF.

You can follow Evan on social media at:
X: @evanepstein
LinkedIn: https://www.linkedin.com/in/epsteinevan/
Substack: https://evanepstein.substack.com/

To support this podcast you can join as a subscriber of the Boardroom Governance Newsletter at https://evanepstein.substack.com/

Music/Soundtrack (found via Free Music Archive): Seeing The Future by Dexter Britain is licensed under an Attribution-Noncommercial-Share Alike 3.0 United States License
In this Tech Roundup Episode of RTP's Fourth Branch podcast, Kevin Frazier and Aram Gavoor sit down to discuss the recent, fast-moving developments in AI policy in the second Trump administration, as well as the importance of innovation and procurement.
In this episode of The AI Report, Christine Walker joins Arturo Ferreira to launch a new series on the legal side of artificial intelligence. Christine is a practicing attorney helping businesses understand how to navigate AI risk, compliance, and governance in a rapidly changing policy environment.

They explore how the shift from the Biden to the Trump administration is changing the tone on AI regulation, what the EU AI Act means for U.S. companies, and why many of the legal frameworks we need for AI already exist. Christine breaks down how lawyers apply traditional legal principles to today's AI challenges, from intellectual property and employment law to bias and defamation.

Also in this episode:
• The risk of waiting for regulation to catch up
• How companies can conduct internal AI audits
• What courts are already doing with AI tools
• Why even lawyers are still figuring this out in real time
• What businesses should be doing now to reduce liability

Christine offers a grounded, practical view of what it means to use AI responsibly, even when the law seems unclear.

Subscribe to The AI Report: theaireport.ai
Join our community: skool.com/the-ai-report-community/about

Chapters:
(00:00) The Legal Risks of AI and Why It's Still a Black Box
(01:13) Christine Walker's Background in Law and Tech
(03:07) Biden vs Trump: Competing AI Governance Philosophies
(04:53) What Governance Means and Why It Matters
(06:26) Comparing the EU AI Act with the U.S. Legal Vacuum
(08:14) Case Law on IP, Bias, and Discrimination
(10:50) Why the Fear Around AI May Be Misplaced
(13:15) Legal Precedents: What Tech History Teaches Us
(16:06) The GOP's AI Stance and Regulatory Philosophy
(18:35) Most AI Use Cases Already Fall Under Existing Law
(21:11) Why Precedents Take So Long, and What That Means
(23:08) Will AI Accelerate the Legal System?
(25:24) AI + Lawyers: A Collaborative Model
(27:15) Hallucinations, Case Law, and Legal Responsibility
(28:36) Building Policy Now to Avoid Legal Pain Later
(30:59) Christine's Final Advice for Businesses and Builders
There's a lot to criticize about US AI policy, but what has the administration been getting right? Senior VP of Government Affairs for Americans for Responsible Innovation Doug Calidas joins David Rothkopf to break down the Trump administration's industrial and AI policies, the role of tariffs, and more. This material is distributed by TRG Advisory Services, LLC on behalf of the Embassy of the United Arab Emirates in the U.S.. Additional information is available at the Department of Justice, Washington, DC. Learn more about your ad choices. Visit megaphone.fm/adchoices
Or maybe 2028, it's complicated

In 2021, a researcher named Daniel Kokotajlo published a blog post called "What 2026 Looks Like", where he laid out what he thought would happen in AI over the next five years. The world delights in thwarting would-be prophets. The sea of possibilities is too vast for anyone to ever really chart a course. At best, we vaguely gesture at broad categories of outcome, then beg our listeners to forgive us the inevitable surprises. Daniel knew all this and resigned himself to it. But even he didn't expect what happened next. He got it all right.

Okay, not literally all. The US restricted chip exports to China in late 2022, not mid-2024. AI first beat humans at Diplomacy in late 2022, not 2025. And of course the mid-2025 to 2026 period remains to be seen. But to put its errors in context, Daniel's document was written two years before ChatGPT existed. Nobody except researchers and a few hobbyists had ever talked to an AI. In fact, talking to AI was a misnomer. There was no way to make them continue the conversation; they would free associate based on your prompt, maybe turning it into a paragraph-length short story. If you pulled out all the stops, you could make an AI add single digit numbers and get the right answer more than 50% of the time. Yet if you read Daniel's blog post without checking the publication date, you could be forgiven for thinking it was a somewhat garbled but basically reasonable history of the last four years.

I wasn't the only one who noticed. A year later, OpenAI hired Daniel to their policy team. While he worked for them, he was limited in his ability to speculate publicly. "What 2026 Looks Like" promised a sequel about 2027 and beyond, but it never materialized. Unluckily for Sam Altman but luckily for the rest of us, Daniel broke with OpenAI mid-2024 in a dramatic split covered by the New York Times and others. He founded the AI Futures Project to produce the promised sequel, including:

- Eli Lifland, a superforecaster who is ranked first on RAND's Forecasting Initiative. You can read more about him and his forecasting team here. He cofounded and advises AI Digest and co-created TextAttack, an adversarial attack framework for language models.
- Jonas Vollmer, a VC at Macroscopic Ventures, which has done its own, more practical form of successful AI forecasting: they made an early stage investment in Anthropic, now worth $60 billion.
- Thomas Larsen, the former executive director of the Center for AI Policy, a group which advises policymakers on both sides of the aisle.
- Romeo Dean, a leader of Harvard's AI Safety Student Team and budding expert in AI hardware.
- ...and me! Since October, I've been volunteering part-time, doing some writing and publicity work. I can't take credit for the forecast itself - or even for the lion's share of the writing and publicity - but it's been an immense privilege to work alongside some of the smartest and most epistemically virtuous people I know, trying to absorb their worldview on a level deep enough to do it justice.

We have no illusions that we'll get as lucky as last time, but we still think it's a valuable contribution to the discussion.

https://www.astralcodexten.com/p/introducing-ai-2027
https://ai-2027.com/
In this episode of our new podcast series, The AI Workplace, where we explore the latest advancements in integrating artificial intelligence (AI) into the workplace, Sam Sedaei (associate, Chicago) shares his insights on crafting and implementing effective AI policies. Sam, who is a member of the firm's Cybersecurity and Privacy and Technology practice groups, discusses the rapid rise of generative AI tools and highlights their potential to boost productivity, spark innovation, and deliver valuable insights. He also addresses the critical risks associated with AI, such as inaccuracies, bias, privacy concerns, and intellectual property issues, while emphasizing the importance of legal and regulatory guidance to ensure the responsible and effective use of AI in various workplace functions. Join us for a compelling discussion on navigating the AI-driven future of work.
Matt Perault, Head of AI Policy at Andreessen Horowitz, joins Kevin Frazier, Contributing Editor at Lawfare and Adjunct Professor at Delaware Law, to define the Little Tech Agenda and explore how adoption of the Agenda may shape AI development across the country. The duo also discuss the current AI policy landscape.We value your feedback! Help us improve by sharing your thoughts at lawfaremedia.org/survey. Your input ensures that we deliver what matters most to you. Thank you for your support—and, as always, for listening!To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute.Support this show http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.
On this week’s Marketplace “Tech Bytes: Week in Review,” we’ll explore OpenAI’s inroads in higher education. Plus, how passengers can get on a waitlist to hail a driverless car in Austin, Texas. But first, a look at how Google is changing its approach to artificial intelligence. In 2018, the company published its “AI principles,” guidelines for how it believed AI should be built and used. Google originally included language that said it would not design or deploy AI to be used in weapons or surveillance. That language has now gone away. Google didn’t respond to our request for comment, but it did say in a blog post this week that companies and governments should work together to create AI that, among other things, supports national security. Marketplace’s Stephanie Hughes spoke with Natasha Mascarenhas, reporter at The Information, about these topics for this week's “Tech Bytes.”