Podcasts about AI policy

  • 235 podcasts
  • 335 episodes
  • 35m avg. duration
  • 5 weekly new episodes
  • Latest: Jun 18, 2025

Popularity trend, 2017–2024


Best podcasts about AI policy

Latest podcast episodes about AI policy

NatSec Tech
Episode 78: Margaret Busse on Balancing Innovation and Protection

Jun 18, 2025 · 21:34


Margaret Woolley Busse, Executive Director of the Utah Department of Commerce, joins host Jeanne Meserve to discuss Utah's establishment of an Office of AI Policy, Utah's thriving tech sector, and regulations and protections on AI. Busse explains the office's three core objectives: encouraging innovation, protecting the public, and building a continuous learning function within government. The discussion highlights the office's successful work on mental health chatbots and its future plans to tackle deepfakes and AI companions. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit scsp222.substack.com

Cato Event Podcast
What Is the Opportunity Cost of State AI Policy?

Jun 12, 2025 · 59:43


Proposals to regulate artificial intelligence (AI) at the state level continue to increase. Unfortunately, these proposals could disrupt advances in this important technology, even if there is strong federal policy. This policy forum, tied to an upcoming policy analysis on the topic, explores the potential economic costs of state-level AI regulation as well as the barriers it creates in the market for both consumers and innovators. How might state AI policy debates discourage or encourage the important conversations around AI innovation? Hosted on Acast. See acast.com/privacy for more information.

The Road to Accountable AI
Brenda Leong: Building AI Law Amid Legal Uncertainty

Jun 12, 2025 · 36:52 · Transcription available


Kevin Werbach interviews Brenda Leong, Director of the AI division at boutique technology law firm ZwillGen, to explore how legal practitioners are adapting to the rapidly evolving landscape of artificial intelligence. Leong explains why meaningful AI audits require deep collaboration between lawyers and data scientists, arguing that legal systems have not kept pace with the speed and complexity of technological change. Drawing on her experience at Luminos.Law—one of the first AI-specialist law firms—she outlines how companies can leverage existing regulations, industry-specific expectations, and contextual risk assessments to build practical, responsible AI governance frameworks. Leong emphasizes that many organizations now treat AI oversight not just as a legal compliance issue, but as a critical business function. As AI tools become more deeply embedded in legal workflows and core operations, she highlights the growing need for cautious interpretation, technical fluency, and continuous adaptation within the legal field.

Brenda Leong is Director of ZwillGen's AI Division, where she leads legal-technical collaboration on AI governance, risk management, and model audits. Formerly Managing Partner at Luminos.Law, she pioneered many of the audit practices now used at ZwillGen. She serves on the Advisory Board of the IAPP AI Center, teaches AI law at IE University, and previously led AI and ethics work at the Future of Privacy Forum.

Related: Transcript · AI Audits: Who, When, How... Or Even If? · Why Red Teaming Matters Even More When AI Starts Setting Its Own Agenda

Federal Drive with Tom Temin
2019 executive order began a trend toward White House-centered AI policy

Jun 12, 2025 · 9:03


A 2019 executive order was a significant landmark in the nation's regulation and management of artificial intelligence technologies in government, but it was just the first of many. Among other things, that first executive order on AI focused on building research and development and the nation's AI workforce, and on those topics it shared a lot in common with five subsequent orders on AI. That first order is one of 25 significant moments Federal News Network is marking this year as part of its 25th anniversary. Federal News Network's Jared Serbu has been writing about it this week, and he's here with more. See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

In AI We Trust?
AI Literacy Series Ep. 9: Robbie Torney of Common Sense Media & Special Co-host Nuala O'Connor

Jun 10, 2025 · 69:49


In this episode of In AI We Trust?, Miriam & Nuala speak with Robbie Torney, Senior Director of AI Programs at Common Sense Media, to discuss AI's impact on children, families, and schools, focusing on AI literacy, which builds upon media and digital literacy. Robbie advises parents to engage in tech conversations with curiosity and empathy and encourages educators to view AI as a tool to enhance learning, noting students' prevalent use. Common Sense Media provides AI training and risk assessments for educators. Torney aims to bridge digital divides and supports AI implementation in underserved schools, highlighting risks of AI companions for vulnerable youth and developing resources for school AI readiness and risk assessments. The episode stresses the importance of AI literacy and critical thinking to navigate AI's complexities and minimize harm.

The EqualAI AI Literacy podcast series builds on In AI We Trust?'s global reach, focusing specifically on AI literacy. Featuring prominent leaders in the technology, education, and governance fields, this special series provides listeners with valuable insights and discussions around AI's impact on society, leading efforts in this area of AI literacy, and how listeners can benefit from these experts and tools.

Related resources: Episode Blog Post · AI Risk Assessments · AI Basics for K–12 Teachers · Parents' Ultimate Guide to AI Companions and Relationships · 2025: The Common Sense Census · 2024: The Dawn of the AI Era

Driven by Data: The Podcast
S5 | Ep 28 | AI Upskilling: Rethinking Readiness, Responsibility, and Real Impact with McKinley Hyden, Director of Data Value & Strategy at Financial Times.

Jun 10, 2025 · 56:43


In Episode 28 of Season 5 of Driven by Data: The Podcast, Kyle Winterbottom was joined by McKinley Hyden, Director of Data Value & Strategy at the Financial Times, where they discuss McKinley's journey from a background in literature to a career in data, the role of the Financial Times in providing quality journalism, and the importance of data in driving strategic decisions. McKinley shares insights on the challenges of valuing data, the need for cultural change in organisations to embrace data as an asset, and the significance of upskilling in AI. The conversation also touches on the importance of effective communication and knowledge management in data analytics, as well as the future of AI in business. McKinley and Kyle then explore the profound impact of technology, particularly AI, on society, education, and the workforce. They discuss the moral implications of AI, the need for responsible use, and the importance of upskilling to navigate the changing landscape. The conversation emphasizes the necessity of creating effective AI policies to ensure ethical practices and the potential for job transformation in the face of technological advancements.

00:00 Introduction to Data and Storytelling
02:14 Understanding the Financial Times
04:19 The Role of Data Value and Strategy
09:33 Upskilling in Data and AI
10:44 Valuing Data as an Asset
14:12 Overcoming Resistance to Change
19:25 Defining Value in Data
22:20 Communications and Knowledge Management
27:31 The Future of AI in Business
28:44 The Impact of Technology on Society
30:53 Navigating the Moral Hazards of AI
32:44 The Future of Education in an AI World
35:19 Job Transformation and the Role of AI
41:47 Upskilling for the AI Era
44:42 Creating an AI Policy for Responsible...

AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store
AI Daily Rundown June 09 2025: ⚖️OpenAI Fights Court to Preserve ChatGPT Conversation Data

Jun 10, 2025 · 19:15


Key discussions include OpenAI's legal battle over retaining user conversation data, raising crucial questions about user privacy and data-retention precedents. The sources also address the evolving social impact of AI, with concerns about deep human-AI emotional bonds and the need for regulation in this area. Additionally, they showcase AI's diverse applications, from historical research aiding in the re-dating of the Dead Sea Scrolls to anticipated overhauls of user interfaces like Apple's 'Liquid Glass' design. Challenges remain, as Apple research suggests current AI models struggle with true logical reasoning, and the rollout of autonomous systems faces public backlash, as seen in protests against Waymo robotaxis. Finally, the podcast points to the growing influence of AI in various sectors, including major investments by companies like Meta in AI development and its increasing adoption by billionaires and institutions such as Ohio State University.

a16z on Protecting Little Tech: The Techno-Optimist AI Policy Agenda with Matt Perault, Head of AI Policy

Jun 9, 2025 · 63:40


In this episode, Matt Perault, Head of AI Policy at a16z, discusses their approach to AI regulation, focused on protecting "little tech" startups from regulatory capture that could entrench big tech incumbents. The conversation covers a16z's core principle of regulating harmful AI use rather than the development process, exploring key policy initiatives like the RAISE Act and California's SB 813. Perault addresses critical challenges including setting appropriate regulatory thresholds, transparency requirements, and designing dynamic frameworks that balance innovation with safety. The discussion examines both areas of agreement and disagreement within the AI policy landscape, particularly around scaling laws, regulatory timing, and the concentration of AI capabilities. Disclaimer: This information is for general educational purposes only and is not a recommendation to buy, hold, or sell any investment or financial product. Turpentine is an acquisition of a16z Holdings, L.L.C., and is not a bank, investment adviser, or broker-dealer. This podcast may include paid promotional advertisements; individuals and companies featured or advertised during this podcast are not endorsing AH Capital or any of its affiliates (including, but not limited to, a16z Perennial Management L.P.). Similarly, Turpentine is not endorsing affiliates, individuals, or any entities featured on this podcast. All investments involve risk, including the possible loss of capital. Past performance is no guarantee of future results, and the opinions presented cannot be viewed as an indicator of future performance. Before making decisions with legal, tax, or accounting effects, you should consult appropriate professionals. Information is from sources deemed reliable on the date of publication, but Turpentine does not guarantee its accuracy.
SPONSORS:
Oracle Cloud Infrastructure: Oracle Cloud Infrastructure (OCI) is the next-generation cloud platform that delivers better, cheaper, and faster solutions for your infrastructure, database, application development, and AI needs. Experience up to 50% savings on compute, 70% on storage, and 80% on networking with OCI's high-performance environment—try it for free with zero commitment at https://oracle.com/cognitive
The AGNTCY: The AGNTCY is an open-source collective dedicated to building the Internet of Agents, enabling AI agents to communicate and collaborate seamlessly across frameworks. Join a community of engineers focused on high-quality multi-agent software and support the initiative at https://agntcy.org/?utm_campaign=fy25q4_agntcy_amer_paid-media_agntcy-cognitiverevolution_podcast&utm_channel=podcast&utm_source=podcast
NetSuite by Oracle: NetSuite by Oracle is the AI-powered business management suite trusted by over 41,000 businesses, offering a unified platform for accounting, financial management, inventory, and HR. Gain total visibility and control to make quick decisions and automate everyday tasks—download the free ebook, Navigating Global Trade: Three Insights for Leaders, at https://netsuite.com/cognitive
PRODUCED BY: https://aipodcast.ing

The National Security Podcast
AI, rights and rules: who's accountable in an automated world?

Jun 5, 2025 · 43:14


Can differing global approaches to AI regulation and investment work together, or are we headed toward fragmented, siloed systems? How can AI governance in developing nations be supported as part of regional aid and security agendas? What challenges does Australia face in regulating AI without a national bill of rights or federal human rights charter? Should governments mandate the inclusion of human oversight in all AI-powered decisions? In this episode, Sarah Vallee and Maria O'Sullivan join David Andrews to talk about how AI is impacting national security, with a focus on AI governance models and mass surveillance.

Maria O'Sullivan is an Associate Professor at Deakin Law School. She's a member of the Deakin Cyber Research and Innovation Centre. Sarah Vallee is a specialist in AI Policy and Governance. She's a Fellow at the UTS Human Technology Institute, sponsored by the French Ministry of Foreign Affairs. David Andrews is Senior Manager, Policy & Engagement at the ANU National Security College.

Transcript · Show notes · NSC academic programs – find out more · Article 8: respect for your private and family life

We'd love to hear from you! Send in your questions, comments, and suggestions to NatSecPod@anu.edu.au. You can tweet us @NSC_ANU and be sure to subscribe so you don't miss out on future episodes. Hosted on Acast. See acast.com/privacy for more information.

Chicago's Morning Answer with Dan Proft & Amy Jacobson

0:00 - HeteroAwesomeness Month
13:02 - Elon Musk comes out against the Big Beautiful Bill
27:15 - US Open qualifying
30:14 - Mamet on Maher podcast
55:25 - James A. Gagliano, retired FBI supervisory special agent and a doctoral candidate in homeland security at St. John's University, on the "unhealthy" direction of college campuses - "we are becoming the architects of our own demise"
01:11:51 - CA 400M champ Clara Adams stripped of title
01:26:04 - Chief Economist at First Trust Portfolios LP, Brian Wesbury, on the Big Beautiful Bill - "the last two years of government spending were some of the most irresponsible budgets we have ever seen." Follow Brian on X @wesbury
01:49:30 - Emeritus professor of law, Harvard Law School, Alan Dershowitz, shares details from his new book The Preventive State: The Challenge of Preventing Serious Harms While Preserving Essential Liberties. For more from Professor Dershowitz, check out his podcast "The Dershow" on Spotify, YouTube and iTunes
02:07:58 - Neil Chilson, former Chief Technologist for the FTC and currently Head of AI Policy at the Abundance Institute, on the risks, rewards and myths of AI. Check out Neil's substack outofcontrol.substack.com

See omnystudio.com/listener for privacy information.

The EdUp Experience
How the 'Stoplight Approach' Could Solve AI Policy Challenges - with Christian Moriarty, Professor of Ethics & Law, Applied Ethics Institute, St. Petersburg College

May 29, 2025 · 42:17


It's YOUR time to #EdUp. In this episode, part of our Academic Integrity Series, sponsored by Pangram Labs, YOUR guest is Christian Moriarty, Professor of Ethics & Law, Applied Ethics Institute, St. Petersburg College. YOUR cohost is Bradley Emi, Cofounder & CTO, Pangram Labs. YOUR host is Elvin Freytes.

How does Christian define academic integrity from both legal & philosophical perspectives? Why do students often "cheat" even when they have good intentions & strong moral values? What is the role of faculty in supporting students to act with integrity & resist temptation? How can institutions implement effective AI policies that respect different teaching contexts? Why is Christian predicting a return to in-class writing or required keystroke tracking software?

Topics include: the tension between rules-based & values-based approaches to academic integrity; the importance of empathy in understanding why students make poor choices; the "stoplight approach" to AI use policies (green/yellow/red options for different contexts); finding the balance between trusting students & verifying their work; the challenges of time management for community college students; the value of specialized academic integrity offices in educational institutions; why "difficulty is part of the process" in genuine learning & skill development; and the connection between integrity & asking for help when needed.

Listen in to #EdUp. Do YOU want to accelerate YOUR professional development? Do YOU want to get exclusive early access to ad-free episodes, extended episodes, bonus episodes, original content, invites to special events, & more? Then BECOME A SUBSCRIBER TODAY - $19.99/month or $199.99/year (Save 17%)! Want to get YOUR organization to pay for YOUR subscription? Email EdUp@edupexperience.com

Thank YOU so much for tuning in. Join us on the next episode for YOUR time to EdUp! Connect with YOUR EdUp Team - Elvin Freytes & Dr. Joe Sallustio · Join YOUR EdUp community at The EdUp Experience. We make education YOUR business!

Shaye Ganam
Canada has a chance to lead on AI policy and data governance at G7 Leaders' Summit

May 29, 2025 · 8:06


Canada has a chance to lead on AI policy and data governance at G7 Leaders' Summit Learn more about your ad choices. Visit megaphone.fm/adchoices

In AI We Trust?
SPECIAL EDITION: A Pre-Summit Conversation With Van Jones

May 28, 2025 · 16:11


In this special episode of In AI We Trust?, recorded live at the launch of the EqualAI C-Suite Summit in Washington, D.C., host Miriam Vogel sits down with the dynamic Van Jones: acclaimed social entrepreneur, innovator, and tech evangelist. Together, they dive into a thought-provoking conversation about how AI can be a transformative force for opportunity creation. With his trademark clarity and conviction, Van offers a hopeful vision for the future of AI, one that empowers communities and drives societal progress, but only if we lead with the right values and policies at this critical moment.

Related resources: Dream Machine AI x Library Project

The Shannon Joy Show
Trump's Big Bad Bull-Shit Budget Betrayal Bill Prohibits States From Interfering With Federal AI Programs For Ten Years! Rage Against This America. With Special Guest Dr. William Makis.

May 22, 2025 · 90:18


SJ Show Notes:
Follow Dr. Makis HERE: https://substack.com/@makismd
https://x.com/MakisMD
makisw79@yahoo.com
Please support Shannon's independent network with your donation HERE: https://www.paypal.com/donate/?hosted_button_id=MHSMPXEBSLVT

Support Our Sponsors:
You can get 20% off your first order of Blackout Coffee! Just head to http://blackoutcoffee.com/joy and use code joy at checkout.
The Satellite Phone Store has everything you need when the POWER goes OUT. Use the promo code JOY for 10% off your entire order TODAY! www.SAT123.com/Joy
Get 45% OFF Native Path HYDRATE today! Special exclusive deal for the Joy audience only! Check it out HERE: www.nativepathhydrate.com/joy
Colonial Metals Group is the company Shannon trusts for all her metals purchases! Set up a SAFE & Secure IRA or 401k with a company who shares your values! Learn more HERE: https://colonialmetalsgroup.com/joy
Please consider Dom Pullano of PCM & Associates! He has been Shannon's advisor for over a decade and would love to help you grow! Call his toll free number today: 1-800-536-1368 Or visit his website at https://www.pcmpullano.com

Shannon's Top Headlines May 22, 2025:
Trump's 'Big Beautiful Bill' would create 'unfettered abuse' of AI: Business Insider
Trump's 'Big Beautiful Bill' would create 'unfettered abuse' of AI, 141 high-profile orgs warn in letter to Congress
When it Comes to AI Policy, Congress Shouldn't Cut States off at the Knees: https://garymarcus.substack.com/p/when-it-comes-to-ai-policy-congress?r=fuu7w&utm_medium=ios
Ron Johnson: The Ugly Truth About Trump's Big Beautiful Bill: https://x.com/SenRonJohnson/status/1923057940908454239
WATCH: Dr. Peter McCullough's Truth Bombs In Testimony Yesterday: https://x.com/MJTruthUltra/status/1925271018387763352
Dr. William Makis: Scott Adams reveals his Prostate Cancer and our attempts to beat it - my response to Scott's Podcast: https://substack.com/home/post/p-163941944
Renowned Data Analyst Warns Excess Deaths Are Surging 'Off the Charts': https://substack.com/home/post/p-162167578

Stop this bill. Shut the government down. Because it is becoming increasingly clear that every penny given to these psychopaths can and will be used against we the people. Hidden deep within Trump's budget monstrosity is a clause which threatens every American, our Constitutional Republic, and humanity. Trump's 'big beautiful budget bill' sneaks in a section which prohibits states from interfering with AI programs and development, and also with machine decision making, for ten years: "No state or political subdivision may enforce any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems during the 10-year period beginning on the date of the enactment of this Act."

There is SO much wrong with this and frankly, it's the cherry on top of a dumpster fire of a bill which betrays nearly every promise made by Trump in the 2024 election. There is nothing good about this bill, and in my opinion the best we can do is dump it completely and shut down the government. Interestingly, 'shutting down the government' actually KEEPS the essential spending in place (like Social Security benefits and Medicare) while suspending all the grift and billionaire benefits. It's exactly what we need.

That's the bad news ... but there is GOOD news too! Today we will talk to a frontline medical freedom warrior who is actually saving lives through life-saving cancer treatments. Dr. William Makis is living proof that there ARE solutions out there and I cannot wait to talk to him again. We discuss this and more today on the SJ Show!

Join the Rumble LIVE chat and follow my Rumble Page HERE so you never miss an episode: https://rumble.com/c/TheShannonJoyShow

See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

Millásreggeli • Gazdasági Muppet Show
Millásreggeli podcast: who can move to a new town, and how; Shadow AI; the US capital market - 2025-05-15, 6 o'clock hour

May 15, 2025


Wednesday, May 15, 2025, 6:30-8:00. NAME DAYS, EVENTS, BIRTHDAYS, PRESS REVIEW, MARKET SNAPSHOT.

BUDAPEST, YOU WONDER: News from the capital and its surroundings. Lázár János announced a major renovation, expected to cost 200 billion forints (Infostart.hu). Tarr Béla will give the opening speech at the 30th Budapest Pride. Attention! A bear was spotted 40 km from Budapest: everyone be careful (Pénzcentrum). A new campus express train between Debrecen and Budapest. The Budapest e-sports championship could bring tens of thousands of tourists and billions of forints (Világgazdaság). An angry protest is being organized against the transparency law (Propeller). The Városliget is free for children.

WAKE-UP TOPIC: The bill has been submitted: municipalities may restrict who can move into a town. Compared to the draft, the bill is milder: the provision under which a municipality could directly ban property purchases by non-residents was removed, but municipalities still receive important powers (24.hu). Guest: Balla Ákos, owner-manager of Balla Ingatlan.

ZERO FACTOR: Shadow AI. Last week the topic was Shadow IT; now we turn to a similar but slightly more specific subject. AI is flowing from the taps, the entire IT profession is buzzing with it, while it raises numerous IT-security questions, among them the newly emerging term "Shadow AI". What is Shadow AI? Stepping back, a short general summary of cloud usage and Shadow IT, explaining that Shadow AI is a slice of Shadow IT, mentioned as a separate concept only because of AI's spread and relevance. What are the main security concerns around Shadow AI in a corporate environment? Lack of visibility, extremely fast spread among users, and the data being uploaded and processed. How can it be managed? The most important step is ensuring visibility, along with continuous user education and establishing a corporate "AI policy"; it cannot be stopped, and perhaps it is not even worth trying. Are there notable security incidents tied to AI or Shadow AI? A few international examples. Where can you learn more? biztonsagosfelho.hu: write-ups and webinar registration. Guest: Angyal Dániel, co-founder of Scirge.

WEEKLY PRIMER: What has happened in the capital markets since Trump was elected president? Guest: Czachesz Gábor, head of the Multi-Asset desk at VIG Befektetési Alapkezelő.

The Customer Success Playbook
Customer Success Playbook Podcast S3 E54 - Katie Smith - How to Use AI in Marketing Without Losing Your Brand Voice

May 9, 2025 · 16:24 · Transcription available


In this final part of the Katie Smith trilogy on the Customer Success Playbook, we enter the brave new world of AI and marketing. Host Kevin Metzger explores the promises and pitfalls of generative tools with Katie, who brings her thoughtful and grounded take on how businesses can embrace automation while fiercely protecting their authenticity. If you've ever wondered how to use AI effectively without sounding like a robot, this one's for you.

Detailed Analysis: AI isn't going anywhere, and that's exactly why it's time to get strategic. Katie Smith walks listeners through the essentials of adopting AI in a way that enhances rather than dilutes your marketing. The episode kicks off with her advice on building internal AI policies: what your team will use AI for, what it won't, and how to protect sensitive data along the way. Her mantra? Be proactive, not reactive.

Katie also shares her go-to applications of AI in the creative process: use AI as a co-creator to spark content ideas and draft early versions; train AI with your brand's voice and tone to maintain consistency; and stay vigilant about hallucinations and homogenized content. She emphasizes the importance of human review at every stage, especially when publishing customer-facing materials. AI is a brilliant assistant, but not a final authority.

The discussion evolves into deeper insights on lead generation and real-time responsiveness. Kevin adds his own tricks for applying brand tone through prompt engineering and post-processing, offering a compelling use case that blends Claude, GPTs, and content repurposing magic. Finally, the two zoom out to a broader question: how do you optimize your brand for AI-driven search and recommendations? It's an emerging discipline with massive implications, and Katie teases what's to come from leaders in B2B and digital strategy. Whether you're testing the AI waters or already building internal GPTs, Katie's thoughtful approach provides the guardrails needed to preserve quality and trust in a world of automation.

Now you can interact with us directly by leaving a voice message at https://www.speakpipe.com/CustomerSuccessPlaybook

Please Like, Comment, Share and Subscribe. You can also find the CS Playbook Podcast: YouTube - @CustomerSuccessPlaybookPodcast · Twitter - @CS_Playbook. You can find Kevin at Metzgerbusiness.com (Kevin's personal website) and Kevin Metzger on LinkedIn. You can find Roman at Roman Trebon on LinkedIn.

Caveat
The AI policy divide.

May 1, 2025 · 42:56


Please enjoy this encore episode of Caveat. This week on Caveat, Dave and Ben are thrilled to welcome back N2K's own Ethan Cook for the second installment of our newest policy deep dive segment. As a trusted expert in law, privacy, and surveillance, Ethan is joining the show regularly to provide in-depth analysis on the latest policy developments shaping the cybersecurity and legal landscape. While this show covers legal topics, and Ben is a lawyer, the views expressed do not constitute legal advice. For official legal advice on any of the topics we cover, please contact your attorney. Please take a moment to fill out an audience survey! Let us know how we are doing!

Policy Deep Dive: In this Caveat policy deep dive, we turn our focus to the evolving landscape of artificial intelligence (AI) policy. This month, the Caveat team delves into the key issues shaping political discourse around AI, exploring state-led initiatives, the lack of significant federal action, and the critical areas that still require stronger oversight, offering an in-depth analysis of AI legislation, the varied approaches across states, and the pressing challenges that demand federal attention.

Get the weekly Caveat Briefing delivered to your inbox. Like what you heard? Be sure to check out and subscribe to our Caveat Briefing, a weekly newsletter available exclusively to N2K Pro members on N2K CyberWire's website. N2K Pro members receive our Thursday wrap-up covering the latest in privacy, policy, and research news, including incidents, techniques, compliance, trends, and more. This week's Caveat Briefing covers the story of the Paris AI summit, where French President Emmanuel Macron and EU digital chief Henna Virkkunen announced plans to reduce regulatory barriers to support AI innovation. The summit highlighted the growing pressure on Europe to adopt a lighter regulatory touch in order to remain competitive with the U.S. and China, while also addressing concerns about potential risks and the impact on workers as AI continues to evolve. Curious about the details? Head over to the Caveat Briefing for the full scoop and additional compelling stories.

Got a question you'd like us to answer on our show? You can send your audio file to caveat@thecyberwire.com. Hope to hear from you. Learn more about your ad choices. Visit megaphone.fm/adchoices

The EdUp Experience
What Makes an Effective AI Policy? - with Dr. Elizabeth Skomp, Provost & Vice President of Academic Affairs, Stetson University

Apr 24, 2025 · 38:03


It's YOUR time to #EdUp. In this episode, part of our Academic Integrity Series, sponsored by Pangram Labs, YOUR guest is Dr. Elizabeth Skomp, Provost & Vice President of Academic Affairs, Stetson University. YOUR cohost is Bradley Emi, Cofounder & CTO, Pangram Labs. YOUR host is Elvin Freytes.

How does Dr. Skomp define academic integrity & its student-led honor system at Stetson? What strategies does Stetson use with their honor pledge & code? How does Stetson integrate AI tools ethically with their 3 syllabus templates? What approach does faculty take when considering AI in course design? Why does the university focus on "learning opportunities" rather than punitive measures?

Topics include: creating a student-led, faculty-advised honor system; the importance of faculty modeling academic integrity; developing flexible AI policies that preserve academic freedom; using AI disclosure as a trust-building approach; faculty development for AI-adapted teaching methods; and the "Hatter Ready" initiative connecting experiential learning & academic integrity.

Listen in to #EdUp. Do YOU want to accelerate YOUR professional development? Do YOU want to get exclusive early access to ad-free episodes, extended episodes, bonus episodes, original content, invites to special events, & more? Then BECOME A SUBSCRIBER TODAY - $19.99/month or $199.99/year (Save 17%)! Want to get YOUR organization to pay for YOUR subscription? Email EdUp@edupexperience.com

Thank YOU so much for tuning in. Join us on the next episode for YOUR time to EdUp! Connect with YOUR EdUp Team - Elvin Freytes & Dr. Joe Sallustio · Join YOUR EdUp community at The EdUp Experience! We make education YOUR business!

Boardroom Governance with Evan Epstein
Robin Feldman: On the Evolution and Regulation of AI

Boardroom Governance with Evan Epstein

Play Episode Listen Later Apr 21, 2025 43:48


(0:00) Intro
(1:26) About the podcast sponsor: The American College of Governance Counsel
(2:13) Start of interview
(2:45) Robin's origin story
(3:55) About the AI Law and Innovation Institute
(5:02) On AI governance: "AI is critical for boards, both from a risk management perspective and from a regulatory management perspective." Boards should: 1) get regular updates on safety and regulatory issues, 2) document the attention that they're paying to it to have a record of meaningful oversight, and 3) most importantly, boards can't just rely on feedback from the folks in charge of the AI tools. They need a red team of skeptics.
(9:58) Boards and AI Ethics. Robin's Rules of Order for AI. Rule #1: Distinguish Real-time Dangers from Distant Dangers
(15:21) Antitrust Concerns in AI
(18:10) Geopolitical Tensions in AI Race (US v China). "Winning the AI race is essential for the US, both from an economic and from a national security perspective."
(23:30) Regulatory Framework for AI. "It really isn't one size fits all for AI regulation. Europe, for the most part, is a consumer nation of AI. We are a producer nation of AI, and California in particular is a producer of AI." "There must be strong partnerships in this country between those developing cutting-edge technology and the government—because while the government holds the power, Silicon Valley holds the expertise to understand what this technology truly means."
(26:46) California's AI Regulation Efforts. "I do believe that over time, at some point, we will need a more comprehensive system that probably overshadows what the individual states will do, or at least cabins to some extent what the individual states will do. It will be a problem to have 50 different approaches to this, or even 20 different approaches to this within the country."
(29:03) AI in the Financial Industry
(33:13) Future Trends in AI. "I think the key for boards and companies is to be alert and to be nimble" and "as hard as it is, brush up a bit on your math and science, if that's not your area of expertise." "My point is simply, you have to understand these things under the hood if you're going to be able to think about what to do with them."
(35:43) Her new book "AI vs IP. Rewriting Creativity" (coming out July 2025)
(37:12) Key Considerations for Board Members: "It's about being nimble, staying proactive and having a proven track record of it. Most importantly, you need a red team approach."
(38:26) Books that have greatly influenced her life:
Rashi's Commentary on the Bible
Talmud
(39:06) Her mentors:
Professor Robert Weisberg
Professor Gerald Gunther
(41:39) Quotes that she thinks of often or lives her life by: "The cover-up's always worse than the crime."
(42:34) An unusual habit or an absurd thing that she loves

Robin Feldman is the Arthur J. Goldberg Distinguished Professor of Law, Albert Abramson '54 Distinguished Professor of Law Chair, and Director of the Center for Innovation at UC Law SF.

You can follow Evan on social media at:
X: @evanepstein
LinkedIn: https://www.linkedin.com/in/epsteinevan/
Substack: https://evanepstein.substack.com/

To support this podcast you can join as a subscriber of the Boardroom Governance Newsletter at https://evanepstein.substack.com/

Music/Soundtrack (found via Free Music Archive): Seeing The Future by Dexter Britain is licensed under a Attribution-Noncommercial-Share Alike 3.0 United States License

The Elon Musk Podcast
OpenAI Offers Economic Guide to Shape AI Policy

The Elon Musk Podcast

Play Episode Listen Later Apr 19, 2025 14:54


Aiming to balance governance and growth, OpenAI has shared an economic framework for AI regulation. It's a strategic step toward global policy influence. Could it become the industry standard?

AI Chat YouTube Channel: https://www.youtube.com/@JaedenSchafer
My Podcast Course: https://podcaststudio.com/courses/
Try AI Box: https://AIBox.ai/
Join my AI Hustle Community: https://www.skool.com/aihustle/about

RTP's Free Lunch Podcast
Tech Roundup Episode 26 - Making Sense of White House AI Policy

RTP's Free Lunch Podcast

Play Episode Listen Later Apr 15, 2025 37:34


In this Tech Roundup Episode of RTP's Fourth Branch podcast, Kevin Frazier and Aram Gavoor sit down to discuss the recent, fast-moving developments in AI policy in the second Trump administration, as well as the importance of innovation and procurement.

AI Tool Report Live
Biden vs Trump: How U.S. AI Policy Is Shifting

AI Tool Report Live

Play Episode Listen Later Apr 15, 2025 31:40


In this episode of The AI Report, Christine Walker joins Arturo Ferreira to launch a new series on the legal side of artificial intelligence. Christine is a practicing attorney helping businesses understand how to navigate AI risk, compliance, and governance in a rapidly changing policy environment.

They explore how the shift from the Biden to the Trump administration is changing the tone on AI regulation, what the EU AI Act means for U.S. companies, and why many of the legal frameworks we need for AI already exist. Christine breaks down how lawyers apply traditional legal principles to today's AI challenges, from intellectual property and employment law to bias and defamation.

Also in this episode:
• The risk of waiting for regulation to catch up
• How companies can conduct internal AI audits
• What courts are already doing with AI tools
• Why even lawyers are still figuring this out in real time
• What businesses should be doing now to reduce liability

Christine offers a grounded, practical view of what it means to use AI responsibly, even when the law seems unclear.

Subscribe to The AI Report: theaireport.ai
Join our community: skool.com/the-ai-report-community/about

Chapters:
(00:00) The Legal Risks of AI and Why It's Still a Black Box
(01:13) Christine Walker's Background in Law and Tech
(03:07) Biden vs Trump: Competing AI Governance Philosophies
(04:53) What Governance Means and Why It Matters
(06:26) Comparing the EU AI Act with the U.S. Legal Vacuum
(08:14) Case Law on IP, Bias, and Discrimination
(10:50) Why the Fear Around AI May Be Misplaced
(13:15) Legal Precedents: What Tech History Teaches Us
(16:06) The GOP's AI Stance and Regulatory Philosophy
(18:35) Most AI Use Cases Already Fall Under Existing Law
(21:11) Why Precedents Take So Long—and What That Means
(23:08) Will AI Accelerate the Legal System?
(25:24) AI + Lawyers: A Collaborative Model
(27:15) Hallucinations, Case Law, and Legal Responsibility
(28:36) Building Policy Now to Avoid Legal Pain Later
(30:59) Christine's Final Advice for Businesses and Builders

Deep State Radio
Siliconsciousness: What the Trump Administration Gets Right about AI Policy

Deep State Radio

Play Episode Listen Later Apr 14, 2025 35:52


There's a lot to criticize about US AI policy, but what has the administration been getting right? Senior VP of Government Affairs for Americans for Responsible Innovation Doug Calidas joins David Rothkopf to break down the Trump administration's industrial and AI policies, the role of tariffs, and more.  This material is distributed by TRG Advisory Services, LLC on behalf of the Embassy of the United Arab Emirates in the U.S.. Additional information is available at the Department of Justice, Washington, DC. Learn more about your ad choices. Visit megaphone.fm/adchoices

Slate Star Codex Podcast
Introducing AI 2027

Slate Star Codex Podcast

Play Episode Listen Later Apr 14, 2025 8:10


Or maybe 2028, it's complicated

In 2021, a researcher named Daniel Kokotajlo published a blog post called “What 2026 Looks Like”, where he laid out what he thought would happen in AI over the next five years. The world delights in thwarting would-be prophets. The sea of possibilities is too vast for anyone to ever really chart a course. At best, we vaguely gesture at broad categories of outcome, then beg our listeners to forgive us the inevitable surprises. Daniel knew all this and resigned himself to it. But even he didn't expect what happened next.

He got it all right.

Okay, not literally all. The US restricted chip exports to China in late 2022, not mid-2024. AI first beat humans at Diplomacy in late 2022, not 2025. And of course the mid-2025 to 2026 period remains to be seen. But to put its errors in context, Daniel's document was written two years before ChatGPT existed. Nobody except researchers and a few hobbyists had ever talked to an AI. In fact, talking to AI was a misnomer. There was no way to make them continue the conversation; they would free associate based on your prompt, maybe turning it into a paragraph-length short story. If you pulled out all the stops, you could make an AI add single digit numbers and get the right answer more than 50% of the time.

Yet if you read Daniel's blog post without checking the publication date, you could be forgiven for thinking it was a somewhat garbled but basically reasonable history of the last four years. I wasn't the only one who noticed. A year later, OpenAI hired Daniel to their policy team. While he worked for them, he was limited in his ability to speculate publicly. “What 2026 Looks Like” promised a sequel about 2027 and beyond, but it never materialized. Unluckily for Sam Altman but luckily for the rest of us, Daniel broke with OpenAI mid-2024 in a dramatic split covered by the New York Times and others.

He founded the AI Futures Project to produce the promised sequel, including:

Eli Lifland, a superforecaster who is ranked first on RAND's Forecasting initiative. You can read more about him and his forecasting team here. He cofounded and advises AI Digest and co-created TextAttack, an adversarial attack framework for language models.

Jonas Vollmer, a VC at Macroscopic Ventures, which has done its own, more practical form of successful AI forecasting: they made an early stage investment in Anthropic, now worth $60 billion.

Thomas Larsen, the former executive director of the Center for AI Policy, a group which advises policymakers on both sides of the aisle.

Romeo Dean, a leader of Harvard's AI Safety Student Team and budding expert in AI hardware.

…and me! Since October, I've been volunteering part-time, doing some writing and publicity work. I can't take credit for the forecast itself - or even for the lion's share of the writing and publicity - but it's been an immense privilege to work alongside some of the smartest and most epistemically virtuous people I know, trying to absorb their worldview on a level deep enough to do it justice.

We have no illusions that we'll get as lucky as last time, but we still think it's a valuable contribution to the discussion.

https://www.astralcodexten.com/p/introducing-ai-2027
https://ai-2027.com/

The Dynamist
Tech and the Family: Building for the Next Generation w/Katherine Boyle and Neil Chilson

The Dynamist

Play Episode Listen Later Apr 14, 2025 41:50


In this week's episode of The Dynamist, guest host Jon Askonas is joined by Katherine Boyle (General Partner at a16z) and Neil Chilson (AI Policy at the Abundance Institute) to tackle a critical yet often overlooked question: How is technology reshaping the American family? As tech giants like TikTok and Instagram come under scrutiny for their effects on children's mental health, and remote work continues to redefine domestic life, the conversation around technology's role in family dynamics has never been more urgent.

Katherine shares insights from her recent keynote at the American Enterprise Institute, highlighting how the core objective of technological innovation, which she calls "American Dynamism," should be empowering the family rather than centralizing state control. Neil provides a fresh perspective on how decentralized systems and emergent technologies can enhance—not hinder—family autonomy and resilience. Amid rising debates about homeschooling, screen time, and the shift toward a remote-first lifestyle, the guests discuss whether tech-driven changes ultimately strengthen or undermine families as society's fundamental institution.

Together, they explore the possibility of a new era in which technology revitalizes family autonomy, reshapes education, and reignites productive home economies.

Federal Drive with Tom Temin
Trump's AI policy shifts focus to ‘high impact' use cases

Federal Drive with Tom Temin

Play Episode Listen Later Apr 10, 2025 8:33


The White House is out with new artificial intelligence guidance for federal agencies. The Trump administration says it wants agencies to focus on accelerating the use of AI, but former officials say ongoing federal layoffs could hamper the administration's AI ambitions. For the latest, Federal News Network's Justin Doubleday joins me. Learn more about your ad choices. Visit podcastchoices.com/adchoices. See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

Federal Drive with Tom Temin
Trump's AI policy shifts focus to ‘high impact' use cases

Federal Drive with Tom Temin

Play Episode Listen Later Apr 10, 2025 9:18


The White House is out with new artificial intelligence guidance for federal agencies. The Trump administration says it wants agencies to focus on accelerating the use of AI, but former officials say ongoing federal layoffs could hamper the administration's AI ambitions. For the latest, Federal News Network's Justin Doubleday joins me. Learn more about your ad choices. Visit podcastchoices.com/adchoices

The Shared Security Show
Understanding Privacy Changes: eBay’s AI Policy and The Future of Data Privacy

The Shared Security Show

Play Episode Listen Later Mar 31, 2025 9:57


In this episode, host Tom Eston discusses recent privacy changes on eBay related to AI training and the implications for user data. He highlights the hidden opt-out feature for AI data usage and questions the transparency of such policies, especially in regions without strict privacy laws like the United States. The host also explores how […] The post Understanding Privacy Changes: eBay's AI Policy and The Future of Data Privacy appeared first on Shared Security Podcast.

The AI Policy Podcast
AI Action Plan RFI, California's AI Policy Working Group Report, and Why Programming Jobs Are Disappearing

The AI Policy Podcast

Play Episode Listen Later Mar 26, 2025 48:16


In this episode, we discuss AI companies' responses to the White House AI Action Plan Request For Information (RFI) related to key areas like export controls and AI governance (00:51), the release of the Joint California Policy Working Group on AI Frontier Models draft report (24:45), and how AI might be affecting the computer programming job market (40:10). 

Ogletree Deakins Podcasts
The AI Workplace: A Guide on AI Policy Essentials

Ogletree Deakins Podcasts

Play Episode Listen Later Mar 25, 2025 27:58


In this episode of our new podcast series, The AI Workplace, where we explore the latest advancements in integrating artificial intelligence (AI) into the workplace, Sam Sedaei (associate, Chicago) shares his insights on crafting and implementing effective AI policies. Sam, who is a member of the firm's Cybersecurity and Privacy and Technology practice groups, discusses the rapid rise of generative AI tools and highlights their potential to boost productivity, spark innovation, and deliver valuable insights. He also addresses the critical risks associated with AI, such as inaccuracies, bias, privacy concerns, and intellectual property issues, while emphasizing the importance of legal and regulatory guidance to ensure the responsible and effective use of AI in various workplace functions. Join us for a compelling discussion on navigating the AI-driven future of work.

The Bayesian Conspiracy
233 – AI Policy in D.C., with Dave Kasten

The Bayesian Conspiracy

Play Episode Listen Later Mar 19, 2025 91:01


Dave Kasten joins us to discuss how AI is being discussed in the US government and gives a rather inspiring and hopeful take.

LINKS
Narrow Path
Center for AI Policy
Dave Kasten's Essay on the Essay Meta on his Substack

Edtech Insiders
Week in Edtech 3/12/2025: U.S. Department of Education Massive Layoffs, AI in Schools, Cyber Attacks on K-12, Stanford's AI Policy Shift, Duolingo's Playbook, AI-Driven Procurement, Sanoma's Global EdTech Expansion, and More!

Edtech Insiders

Play Episode Listen Later Mar 14, 2025 48:28 Transcription Available


This Week in Edtech, Ben Kornell and guest co-host Matthew Rascoff, Vice Provost for Digital Education at Stanford University, break down the latest headlines shaping education, technology, and policy. From major federal education cuts to AI's growing role in schools and higher ed, this episode unpacks the forces reshaping learning today.

✨ Episode Highlights:
[00:06:02] U.S. Department of Education lays off 30% of staff.
[00:12:07] Education research funding faces uncertainty.
[00:17:26] 80% of teachers now use AI in lesson planning.
[00:18:57] 82% of K-12 schools report cyber incidents.
[00:21:26] Stanford's new policy bans banning AI in student work.
[00:28:07] AI's future in education—Matthew Rascoff's “Humanity in the Loop” framework.
[00:36:11] Duolingo shares its company playbook.
[00:42:12] AI-driven public education data and procurement.
[00:44:11] Sanoma's latest acquisition in AI-powered learning.

AI, Government, and the Future by Alan Pentz
Harnessing AI for Economic Growth While Ensuring Equality with Julian Jacobs: Episode Rerun

AI, Government, and the Future by Alan Pentz

Play Episode Listen Later Mar 12, 2025 33:50


Julian Jacobs, a Research Lead for the Oxford Group on AI Policy, Artificial Intelligence, Inequality and Society at Oxford Martin School, joins this episode of AI, Government, and the Future to explore the economic effects of AI, the potential inequalities that AI may bring, and the need to address job displacement. They also navigate the importance of government support in creating a strong middle class and the significance of human skills in the AI age.

Future U Podcast
Building an AI-Ready College

Future U Podcast

Play Episode Listen Later Mar 4, 2025 36:58


AI is evolving at a rapid rate, and its implications for higher ed are changing in lock step. So, Michael and Jeff dove back into the topic at the Google Public Sector GenAI Live & Labs Conference with a panel of experts: Ann Kirschner of CUNY and ASU, Pace University's Marvin Krislov, and Google's Chris Hein. They discussed the necessity of an institutional AI strategy, the tech's implications for the future of work, and why university partnerships will be essential to equity in the age of AI. This episode is made with support from Google.

0:00 - Intro
1:40 - Campus-Wide AI Strategy
6:02 - Skills in the Age of AI
9:52 - AI Policy and Faculty Training
11:49 - The Dislocation of Entry Level Jobs
15:33 - Teaching AI
18:39 - Mounting the Liberal Arts Comeback
21:25 - The Future of Academic Research
24:37 - Building Access through Partnership
31:12 - Questionable Assumptions

Connect with Michael Horn:
Sign Up for the The Future of Education Newsletter
Website
LinkedIn
X (Twitter)
Threads

Connect with Jeff Selingo:
Sign Up for the Next Newsletter
Website
X (Twitter)
Threads
LinkedIn

Connect with Future U.:
Twitter
YouTube
Threads
Instagram
Facebook
LinkedIn

Submit a question and if we answer it on air we'll send you Future U. swag!
Sign up for Future U. emails to get special updates and behind-the-scenes content.

Unstuckkd
EP 88: Engineering the Future: AI, Policy, and Inclusive Technology with Ishvaraus Davis

Unstuckkd

Play Episode Listen Later Mar 3, 2025 20:37


In my conversation with Ish, we explored his journey from Southeast Georgia to becoming an engineering leader in Silicon Valley. Starting with computer science at the University of Pittsburgh, Ish shared how a college startup project led to his recruitment at Meta, where he worked on B2B advertising algorithms. His career path included roles at IBM, where he learned business fundamentals, and later a shift to tech policy work in Congress, focusing on emerging technologies like quantum computing and AI.

Links
Connect with me on LinkedIn
Work with my fractional firm
Work with my executive search and advisory firm
Connect with Ish Davis

Tech Policy Podcast
398: AI Policy Potpourri (Part Two)

Tech Policy Podcast

Play Episode Listen Later Feb 24, 2025 50:39


TechFreedom's Corbin Barthold, Andy Jung, and Santana Boulton continue their discussion of the many, many things going on in AI innovation, competition, and regulation.

Topics include:
The Lina Khan AI crackdown (that we averted)
What's next for the FTC and AI?
(More) AI culture war
600 state AI bills (might be 600 too many)
Blackpilled about Europe
Micromanaged deregulation (is not a thing)
Will the EU become unaligned?

Links:
Don't California My Texas: Stargate Edition
The European Commission's (Anti)Competitiveness Compass
How Liberal Democracy Might Lose the 21st Century
Tech Policy Podcast 394: Tech and Trump 2.0

The Lawfare Podcast
Lawfare Daily: Matt Perault on the Little Tech Agenda

The Lawfare Podcast

Play Episode Listen Later Feb 18, 2025 40:10


Matt Perault, Head of AI Policy at Andreessen Horowitz, joins Kevin Frazier, Contributing Editor at Lawfare and Adjunct Professor at Delaware Law, to define the Little Tech Agenda and explore how adoption of the Agenda may shape AI development across the country. The duo also discuss the current AI policy landscape.

We value your feedback! Help us improve by sharing your thoughts at lawfaremedia.org/survey. Your input ensures that we deliver what matters most to you. Thank you for your support—and, as always, for listening!

To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute. Support this show at http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.

Intangiblia™
A Conversation with AI about Artificial Intelligence and Intellectual Property

Intangiblia™

Play Episode Listen Later Feb 18, 2025 9:32 Transcription Available


Who truly owns the creations of artificial intelligence? Explore this compelling question as Leticia Caminero (AI version) and Artemisa, her delightful AI co-host, navigate the intriguing intersection of AI and intellectual property law. Uncover the legal complexities when AI is the creator, questioning if these digital minds should be granted the same rights as human inventors. From dissecting the Dabus patent saga to the enigmatic Zarya of the Dawn comic book case, you'll gain a comprehensive understanding of how these legal battles are challenging traditional notions of ownership and creativity.

Join us for a thought-provoking journey that questions if the absence of IP rights might stifle AI advancements and innovation. We ponder the implications of AI-generated works in an ever-evolving legal landscape and draw historical parallels, such as the disruption caused by the printing press. Whether you're a tech aficionado, legal enthusiast, or simply curious about the future, this episode promises to expand your perspective on AI's profound impact on innovation and intellectual property. Tune in and rethink the future of creativity and ownership in an AI-driven world.

Tech Policy Podcast
397: AI Policy Potpourri (Part One)

Tech Policy Podcast

Play Episode Listen Later Feb 17, 2025 42:51


TechFreedom's Corbin Barthold, Andy Jung, and Santana Boulton take you on a tour of the many, many things going on in AI innovation, competition, and regulation.

Topics include:
First signs of AI escape velocity?
Automated luxury libertarianism
The Trumpian vibe shift
AI culture war
The AI $$$$ bonfire
The one-week DeepSeek freakout
Is regulation futile?

Links:
Tyler Cowen on OpenAI's Deep Research
Dwarkesh Patel on fully automated firms
Trump Signs AI and Tech Executive Orders
Some Simple Lessons From China's Big AI Breakthrough
Tech Giants Double Down on Their Massive AI Spending

Caveat
The AI policy divide.

Caveat

Play Episode Listen Later Feb 13, 2025 42:56


This week on Caveat, Dave and Ben are thrilled to welcome back N2K's own Ethan Cook for the second installment of our newest policy deep dive segment. As a trusted expert in law, privacy, and surveillance, Ethan is joining the show regularly to provide in-depth analysis on the latest policy developments shaping the cybersecurity and legal landscape.

While this show covers legal topics, and Ben is a lawyer, the views expressed do not constitute legal advice. For official legal advice on any of the topics we cover, please contact your attorney. Please take a moment to fill out an audience survey! Let us know how we are doing!

Policy Deep Dive
In this Caveat Policy Deep Dive, we turn our focus to the evolving landscape of artificial intelligence (AI) policy. This month, the Caveat team delves into the key issues shaping political discourse around AI, exploring state-led initiatives, the lack of significant federal action, and the critical areas that still require stronger oversight, offering an in-depth analysis of AI legislation, the varied approaches across states, and the pressing challenges that demand federal attention.

Get the weekly Caveat Briefing delivered to your inbox. Like what you heard? Be sure to check out and subscribe to our Caveat Briefing, a weekly newsletter available exclusively to N2K Pro members on N2K CyberWire's website. N2K Pro members receive our Thursday wrap-up covering the latest in privacy, policy, and research news, including incidents, techniques, compliance, trends, and more.

This week's Caveat Briefing covers the story of the Paris AI summit, where French President Emmanuel Macron and EU digital chief Henna Virkkunen announced plans to reduce regulatory barriers to support AI innovation. The summit highlighted the growing pressure on Europe to adopt a lighter regulatory touch in order to remain competitive with the U.S. and China, while also addressing concerns about potential risks and the impact on workers as AI continues to evolve. Curious about the details? Head over to the Caveat Briefing for the full scoop and additional compelling stories.

Got a question you'd like us to answer on our show? You can send your audio file to caveat@thecyberwire.com. Hope to hear from you.

Learn more about your ad choices. Visit megaphone.fm/adchoices

Star Spangled Gamblers
The MAGA Civil War over H-1B Visas

Star Spangled Gamblers

Play Episode Listen Later Feb 13, 2025 49:57


Rule3O3 returns to discuss how Trump will resolve the debate on the right over H-1B visas.

Timestamps
0:00: Intro begins
0:59: Modi-Trump meeting
2:00: Intro ends
4:00: Episode begins
5:13: Why the H1-B debate erupted
5:43: Sriram Krishnan
9:39: Laura Loomer
14:05: Right-wing debate
16:28: Indians

Follow Star Spangled Gamblers on X @ssgamblers. Bet on the H1-B debate at Polymarket.com, the world's largest prediction market.

Will Trump eliminate H-1B visa in first 100 days? https://polymarket.com/event/will-trump-eliminate-h-1b-visa-in-first-100-days?tid=1739471241781
Will Trump increase cap on H-1B visas in first 100 days? https://polymarket.com/event/will-trump-increase-cap-on-h-1b-visas-in-first-100-days?tid=1739471369669

Forecasting Meetup Network. Help us grow the forecasting community to positively influence the future by supporting us with an upvote, comment, or pledge on Manifund: https://manifund.org/projects/forecasting-meetup-network---washington-dc-pilot-4-meetups

Get notified whenever a new meetup is scheduled and learn more about the Forecasting Meetup Network here: https://bit.ly/forecastingmeetupnetwork

Join our Discord to connect with others in the community between monthly meetups: https://discord.com/invite/hFn3yukSwv

The Economist Morning Briefing
Israel's warning to Hamas; Vance criticises EU's AI policy, and more

The Economist Morning Briefing

Play Episode Listen Later Feb 12, 2025 3:48


Binyamin Netanyahu, Israel's prime minister, warned that the country's army would “return to intense fighting” in Gaza unless Hamas released hostages by midday on Saturday.

AI, Government, and the Future by Alan Pentz
AI Trustworthiness and Cybersecurity: Navigating Enterprise Risks with Candy Alexander of NeuEon

AI, Government, and the Future by Alan Pentz

Play Episode Listen Later Feb 12, 2025 39:20


In this episode of AI, Government, and the Future, host Marc Leh is joined by Candy Alexander, a prominent cybersecurity leader with over 35 years of experience and current Chief Information Security Officer at NeuEon. As a two-time President of the Information Systems Security Association (ISSA) International and founding President of the ISSA Education and Research Foundation, Candy brings her extensive expertise to discuss AI trustworthiness, data governance, and enterprise security challenges.

Alter Everything
178: From White House Advisory to AI Entrepreneurship

Alter Everything

Play Episode Listen Later Feb 12, 2025 25:56


In this episode of Alter Everything, we sit down with Eric Daimler, CEO and co-founder of Conexus, and the first AI advisor to the White House under President Obama. Eric explores how AI-driven data consolidation is transforming industries, the critical role of neuro-symbolic AI, and the evolving landscape of AI regulation. He shares insights on AI's impact across sectors like healthcare and defense, highlighting the importance of inclusive discussions on AI safety and governance. Discover how responsible AI implementation can drive innovation while ensuring ethical considerations remain at the forefront.

Panelists:
Eric Daimler, Chair, CEO & Co-Founder @ Conexus - LinkedIn
Megan Bowers, Sr. Content Manager @ Alteryx - @MeganBowers, LinkedIn

Show notes:
SB 1047: Safe and Secure Innovation for Frontier Artificial Intelligence Models Act
Neuro-symbolic AI
Uber Data Consolidation

Interested in sharing your feedback with the Alter Everything team? Take our feedback survey here!

This episode was produced by Megan Bowers, Mike Cusic, and Matt Rotundo. Special thanks to Andy Uttley for the theme music and Mike Cusic for our album artwork.

Marketplace Tech
Bytes: Week in Review — Google’s AI policy pivot, OpenAI teams up with California colleges, and robotaxis arrive in Austin

Marketplace Tech

Play Episode Listen Later Feb 7, 2025 14:31


On this week’s Marketplace “Tech Bytes: Week in Review,” we’ll explore OpenAI’s inroads in higher education. Plus, how passengers can get on a waitlist to hail a driverless car in Austin, Texas. But first, a look at how Google is changing its approach to artificial intelligence. In 2018, the company published its “AI principles,” guidelines for how it believed AI should be built and used. Google originally included language that said it would not design or deploy AI to be used in weapons or surveillance. That language has now gone away. Google didn’t respond to our request for comment, but it did say in a blog post this week that companies and governments should work together to create AI that, among other things, supports national security. Marketplace’s Stephanie Hughes spoke with Natasha Mascarenhas, reporter at The Information, about these topics for this week's “Tech Bytes.”

Pod Save the UK
Facts versus F-wits: how to fight the Alt Reich

Pod Save the UK

Play Episode Listen Later Jan 16, 2025 51:09


Artificial intelligence will be “mainlined into the veins” of the nation, Prime Minister Keir Starmer has declared, with a multibillion-pound investment and huge public rollout. Amid bond market chaos and a falling pound, the Government is pinning its hopes for growth on the AI revolution - what could possibly go wrong?

To unpick the techno-gobbledygook with Nish and Coco is technology journalist Chris Stokel-Walker. They drill into the Government's AI action plan and the potential pitfalls of this silver bullet for society, jobs and the environment.

With the far right surging across the world, it's important to know when you can call a fascist a fascist. No one knows this better than Nafeez Ahmed, investigative journalist and author of “Alt Reich: The Network War to Destroy the West from Within”. He breaks down the scale of the threat we are facing and how we can fight it.

And with so much Billionaire BS flying about, we debut “Facts versus Fuckwits”, debunking some prime fuckwittery from a painfully out-of-touch Tony Blair.

Useful Links
“Alt-Reich: The Network War to Destroy the West from Within” by Nafeez Ahmed
https://www.waterstones.com/book/alt-reich/nafeez-ahmed//9781916754140

Mental Health Services:
https://www.nhs.uk/mental-health/
https://www.mind.org.uk/information-support/guides-to-support-and-services/seeking-help-for-a-mental-health-problem/where-to-start/
https://www.samaritans.org/
https://www.youngminds.org.uk/
https://www.rethink.org/advice-and-information/

Guests
Chris Stokel-Walker
Nafeez Ahmed

Audio Credits
Sky News
Jimmy's Jobs for the Future Podcast

Pod Save the UK is a Reduced Listening production for Crooked Media.
Contact us via email: PSUK@reducedlistening.co.uk
Insta: https://instagram.com/podsavetheuk
Twitter: https://twitter.com/podsavetheuk
TikTok: https://www.tiktok.com/@podsavetheuk
Facebook: https://facebook.com/podsavetheuk
Youtube: https://www.youtube.com/@PodSavetheUK

Daily Tech News Show
YouTube AI Policy Changes – DTNS 4920

Daily Tech News Show

Play Episode Listen Later Dec 20, 2024 36:12


YouTube announced a number of new AI-focused policies. How will these help creators? Plus, product leaker Evan Blass posted images of what they claim is Lenovo's sixth-generation ThinkBook Plus with a rollable display. And Zelle, a major peer-to-peer payment platform operated by bank-owned fintech firm Early Warning Services, is under scrutiny for inadequate safeguards for transferring funds.

Starring Tom Merritt, Sarah Lane, Shannon Morse, Len Peralta, Roger Chang, Joe.

Link to the Show Notes.