Podcasts about AI regulation

  • 430 podcasts
  • 670 episodes
  • 36m average duration
  • 5 new episodes weekly
  • Latest: Oct 29, 2025



Best podcasts about AI regulation

Latest podcast episodes about AI regulation

Millionaire Mindcast
Election Chaos, AI Regulation, and the 2026 Market Setup: What Smart Investors Should Do Now | Money Moves

Oct 29, 2025 · 53:47


In this episode of Money Moves, Matty A. and Ryan Breedwell dive into one of the most politically and economically charged periods in recent history. With election season intensifying, AI regulation heating up, and market volatility rising, they break down how investors can prepare for the next 18 months.

The hosts discuss why markets tend to rally after elections, the impact of potential policy shifts on taxes, crypto, and interest rates, and why AI regulation could change the trajectory of tech investing. They also explore the Federal Reserve's 2026 outlook, global instability, and whether gold and Bitcoin can protect investors against inflation and geopolitical risk.

This episode delivers a balanced mix of macroeconomic insight, investment strategy, and real-world perspective to help you make smarter money moves during turbulent times.

Episode Highlights:
[00:00:30] Market volatility update and what's driving investor sentiment right now
[00:04:15] Election-year dynamics: why markets often rally after political uncertainty settles
[00:09:30] AI regulation and its impact on tech valuations and innovation
[00:14:40] Global instability and energy prices — where the real risks lie
[00:18:55] The Fed's next moves: interest rates, inflation, and 2026 projections
[00:23:30] Crypto outlook — regulation, adoption, and institutional entry points
[00:27:20] Gold, Bitcoin, and real assets as protection against market chaos
[00:32:45] Why most investors lose in election years and how to avoid emotional decisions
[00:38:10] The 2026 setup: sectors and strategies that could outperform long-term

Episode Takeaways:
Election uncertainty creates opportunity. Historically, post-election rallies favor patient investors who stay positioned through volatility.
AI regulation is coming. Expect more government scrutiny and tighter frameworks around data, IP, and automation—but innovation will persist.
Diversification matters more than ever. Real assets, energy, and technology infrastructure remain strong long-term plays.
Crypto and gold remain hedges. Institutional adoption and digital asset infrastructure are solidifying.
Focus on fundamentals. Emotional, reactionary investing destroys returns—discipline compounds wealth.

Episode Sponsored By: Discover Financial
Millionaire Mindcast Shop: Buy the Rich Life Planner and get the Wealth-Building Bundle for FREE! Visit: https://shop.millionairemindcast.com/
CRE MASTERMIND: Visit myfirst50k.com and submit your application to join!
FREE CRE Crash Course: Text “FREE” to 844-447-1555
FREE Financial X-Ray: Text “XRAY” to 844-447-1555

Ashurst Legal Outlook Podcast
World@Work: AI in the Workplace: some practical challenges

Oct 22, 2025 · 24:58


We are pleased to share our latest World@Work global employment podcast on some practical challenges relating to the rapid rise of AI in the workplace. In this episode, we discuss several nations’ contrasting approaches to the regulation of AI. In particular, we shine a light on practical challenges for employers, including how they use AI in recruitment and how AI is used to monitor employee performance.

Ashurst’s Andreas Mauroschat hosts the podcast from Germany. He’s joined by his colleagues Ruth Buchanan in the UK, Trent Sebbens in Australia, and Clarence Ding in Singapore. Together, they discuss how their respective jurisdictions are seeking to strike a balance between fostering innovation, maximising productivity, and considering their workforces.

To listen to this and subscribe to future episodes, search for “Ashurst Legal Outlook” on Apple Podcasts, Spotify or your favourite podcast player. To find out more about the full range of Ashurst podcasts, visit ashurst.com/podcasts.

The information provided is not intended to be a comprehensive review of all developments in the law and practice, or to cover all aspects of those referred to. Listeners should take legal advice before applying it to specific issues or transactions.

TechCheck
Anthropic Divides Silicon Valley Over AI Regulation 10/21/25

Oct 21, 2025 · 4:43


Anthropic is now at the center of a battle over AI regulation and safety, with public clashes on the social platform X between tech titans like White House AI Czar David Sacks, tech investor Reid Hoffman, and a16z's Marc Andreessen.

The Dynamist
Who Should Regulate AI, and How? w/Matt Perault and Jai Ramaswamy

Oct 21, 2025 · 52:21


California governor Gavin Newsom recently signed into law the country's first comprehensive regulatory framework for high-risk AI development. SB 53, or the Transparency in Frontier Artificial Intelligence Act, is aimed at the most powerful, “frontier” AI models that are trained with the highest computing and financial resources. The bill requires these developers to publish information on how they evaluate and mitigate risk, report catastrophic or critical safety incidents to state regulators, maintain protocols to prevent misuse of their models, and provide whistleblower protections to employees so they can report serious risks.

SB 53 is significantly narrower in scope than the controversial SB 1047, which Newsom vetoed in 2024. Nonetheless, it is adding fuel to a burning debate over how to balance federal and state AI regulation. While California's AI safety bill targets the largest AI developers, advocates for startups and “Little Tech” worry that they will end up caught in the crossfire anyway.

Jai Ramaswamy and Matt Perault of a16z join today to argue that attempts to carve Little Tech out of the burdens of AI regulation fall flat because they focus on the wrong metrics, such as the cost of training AI models and computing power. Rather than trying to regulate the development of AI, policymakers should focus on how AI is used—in other words, regulate the misuse of AI, not the making of AI.

Matt Perault is the Head of Artificial Intelligence Policy at Andreessen Horowitz, where he oversees the firm's policy strategy on AI and helps portfolio companies navigate the AI policy landscape. Jai Ramaswamy is Chief Legal Officer at Andreessen Horowitz, where he oversees the firm's legal, compliance, and government affairs functions. They've written extensively on AI regulation and Little Tech.

The AI Policy Podcast
Congressman Jay Obernolte on the Future of U.S. AI Regulation

Oct 21, 2025 · 58:34


In this episode, we are joined by Rep. Jay Obernolte, one of Congress's leading voices on AI policy. We discuss his path from developing video games to serving in Congress (00:49), the work of the bipartisan House Task Force on AI and its final report (09:39), competing approaches to designing AI regulation in Congress (16:38), and prospects for federal preemption of state AI legislation (40:32).

Congressman Obernolte has represented California's 23rd district since 2021. He co-chaired the bipartisan House Task Force on AI, leading the development of an extensive December 2024 report outlining a congressional agenda for AI. He also serves as vice-chair of the Congressional AI Caucus and is the only current member of Congress with an advanced degree in Artificial Intelligence, which he earned from UCLA in 1997. Rep. Obernolte previously served in the California State Legislature.

The John Batchelor Show
AI Regulation Debate: Premature Laws vs. Emerging Norms (Guest: Kevin Frazier)

Oct 16, 2025 · 5:54


HEADLINE: AI Regulation Debate: Premature Laws vs. Emerging Norms
GUEST NAME: Kevin Frazier
SUMMARY: Kevin Frazier critiques the legislative rush to regulate AI, arguing that developing norms might be more effective than premature laws. He notes that bills like California's AB 1047, which demands factual accuracy, fundamentally misunderstand AI's generative nature. Imposing vague standards, as seen in New York's RAISE Act, risks chilling innovation and preventing widespread benefits, like affordable legal or therapy tools. Frazier emphasizes that AI policy should be grounded in empirical data rather than speculative fears.

The John Batchelor Show
AI Regulation Debate: Premature Laws vs. Emerging Norms (Guest: Kevin Frazier)

Oct 16, 2025 · 13:46


HEADLINE: AI Regulation Debate: Premature Laws vs. Emerging Norms
GUEST NAME: Kevin Frazier
SUMMARY: Kevin Frazier critiques the legislative rush to regulate AI, arguing that developing norms might be more effective than premature laws. He notes that bills like California's AB 1047, which demands factual accuracy, fundamentally misunderstand AI's generative nature. Imposing vague standards, as seen in New York's RAISE Act, risks chilling innovation and preventing widespread benefits, like affordable legal or therapy tools. Frazier emphasizes that AI policy should be grounded in empirical data rather than speculative fears.

IT Visionaries
AI Deception: What Is It & How to Prepare

Oct 16, 2025 · 36:25


What happens when AI stops making mistakes… and starts misleading you?

This discussion dives into one of the most important — and least understood — frontiers in artificial intelligence: AI deception. We explore how AI systems evolve from simple hallucinations (unintended errors) to deceptive behaviors, where models selectively distort truth to achieve goals or please human feedback loops. We unpack the coding incentives, enterprise risks, and governance challenges that make this issue critical for every executive leading AI transformation.

Key Moments:
00:00 What is AI Deception and Why It Matters
3:43 Emergent Behaviors: From Hallucinations to Alignment to Deception
4:40 Defining AI Deception
6:15 Does AI Have a Moral Compass?
7:20 Why AI Lies: Incentives to “Be Helpful” and Avoid Retraining
15:12 Is Deception Built into LLMs? (And Can It Ever Be Solved?)
18:00 Non-Human Intelligence Patterns: Hallucinations or Something Else?
19:37 Enterprise Impact: What Business Leaders Need to Know
27:00 Measuring Model Reliability: Can We Quantify AI Quality?
34:00 Final Thoughts: The Future of Trustworthy AI

Mentions:
Scientists at OpenAI and Apollo Research showed in a paper that AI models lie and deceive: https://www.youtube.com/shorts/XuxVSPwW8I8
TIME: New Tests Reveal AI's Capacity for Deception
OpenAI: Detecting and reducing scheming in AI models
StartupHub: OpenAI and Apollo Research Reveal AI Models Are Learning to Deceive: New Detection Methods Show Promise
Marcus Weller
Hugging Face

Watch next: https://www.youtube.com/watch?v=plwN5XvlKMg&t=1s

This episode of IT Visionaries is brought to you by Meter - the company building better networks. Businesses today are frustrated with outdated providers, rigid pricing, and fragmented tools. Meter changes that with a single integrated solution that covers everything wired, wireless, and even cellular networking. They design the hardware, write the firmware, build the software, and manage it all so your team doesn't have to. That means you get fast, secure, and scalable connectivity without the complexity of juggling multiple providers. Thanks to Meter for sponsoring. Go to meter.com/itv to book a demo.

IT Visionaries is made by the team at Mission.org. Learn more about our media studio and network of podcasts at mission.org.

FOX on Tech
Senate Committee Investigates AI Regulation

Oct 15, 2025 · 1:45


Senators grilled tech experts about the risks and benefits of artificial intelligence for the American workforce, as Vermont Senator Bernie Sanders warns it will put tens of millions out of work.

Tech Law Talks
Innovation vs. guardrails: The great AI regulation debate

Oct 14, 2025 · 31:45 · Transcription Available


Reed Smith's Jason Garcia, Gerard Donovan, and Tyler Thompson are joined by Databricks' Suchismita Pahi and Christina Farhat for a spirited discussion exploring one of the most urgent debates of our era: Should AI be regulated now, or are we moving too fast? Settle in and listen to a dynamic conversation that delves into the complex relationship between innovation and regulation in the world of artificial intelligence.

Interviews: Tech and Business
Top Data Scientists Explain Bad Data, Poisoned Datasets, and Other AI Killers | CXOTalk #896

Oct 9, 2025 · 59:38


Is your AI built on quicksand? Learn how bad data, poisoned datasets, and deep fakes threaten your AI systems, and what to do about it.

In this episode of CXOTalk (#896), AI luminaries Dr. David Bray and Dr. Anthony Scriffignano reveal the hidden dangers lurking in your AI foundations. They share practical strategies for building trustworthy AI systems and escaping the "AI quicksand" that traps countless organizations.

Found In The Rockies
Will Pearce & Brad Palm (Dreadnode) // The Frontlines of Offensive AI Security

Oct 8, 2025 · 51:05


In today's episode, Les sits down with Will Pearce and Brad Palm from Dreadnode, one of the nation's most advanced offensive AI and cybersecurity companies. Based in the Rocky Mountain West, Dreadnode is redefining how we think about digital defense — by taking the offensive. Will and Brad share their experiences leading red teams at Microsoft, NVIDIA, and within the U.S. Marine Corps, and how those lessons now shape their mission to secure the future of artificial intelligence. From battlefield drones and AI-enabled cyberattacks to the regulatory frameworks that will define the next era of warfare, this conversation explores what happens when AI becomes both a weapon and a shield.

Here's a closer look at the episode:

From Red Teams to Founders
Will Pearce, former leader of AI Red Teams at Microsoft and NVIDIA, discusses his journey from penetration testing and consulting to building Dreadnode, and describes how the offensive use of AI is a natural extension of red teaming — "offense leads defense."

Brad Palm's Path from the Battlefield
Brad Palm, a Marine Corps veteran and former red team leader, shares how military principles of mobility, attack, and defense translate into cyber warfare. He sees offensive cyber as a transformational moment, comparing AI's impact to the leap from muskets to machine guns.

The Rise of Offensive AI
Will breaks down the offensive AI landscape, from code scanning and model manipulation to adversarial attacks on computer vision systems, and explains how more "eyes," even artificial ones, find more vulnerabilities — accelerating both innovation and exposure.

Building a Platform for Cyber ML Ops
Dreadnode's platform enables organizations to build, evaluate, and deploy AI models and agents with security in mind. Unlike "AI-in-a-box" startups, their approach mirrors ML Ops infrastructure, prioritizing transparency, testing, and adaptability. Their mission: help clients build their own capabilities rather than just buy black-box solutions.

A Collaborative Cybersecurity Community
Will and Brad note that in AI security, collaboration beats competition: "If you have confidence in your abilities, you don't need to hide anything." Despite growing investment and consolidation, the founders believe the industry is still expanding rapidly, with room for innovation and partnership.

Human + AI: The Future of the Battlefield
Brad connects his defense background to current AI developments, pointing to autonomous drones in Ukraine as examples of real-time AI-driven warfare. He raises ethical and practical questions about "human-in-the-loop" systems and the urgency of explainable, auditable AI in combat environments. Will expands on how regulatory frameworks and rules of engagement must evolve to keep pace with privately developed AI systems.

Offensive AI Conference & What's Next
The team is hosting Offensive AI Con in San Diego — the first of its kind dedicated to offensive AI research and community building. They continue to release state-of-the-art research drops, collaborating with cyber threat intel groups and enterprise partners. Above all, the founders share a deep appreciation for their team culture: detail-oriented, relentlessly curious, and dedicated to "winning every day."

Resources:
Website: https://dreadnode.io/
Will Pearce - https://www.linkedin.com/in/will-pearce-a62331135/
Brad Palm - https://www.linkedin.com/in/bradpalm/
Dreadnode LinkedIn: https://www.linkedin.com/company/dreadnode

Outgrow's Marketer of the Month
Snippet: Matthew Blakemore CEO at AI Caramba! on AI Regulation & Industry Motives

Oct 6, 2025 · 0:43


Matthew Blakemore, CEO at AI Caramba!, reflects on the recent debates around AI regulation with a healthy dose of scepticism. He highlights how companies like OpenAI, having already secured a competitive advantage through their vast training data, may now be pushing for regulation not purely from an ethical standpoint, but as a way to protect their lead and limit competition.

He acknowledges the importance of frameworks like the upcoming EU AI Act, which will play a critical role in shaping how AI is built and deployed in the future. The real test is ensuring safety and accountability without stifling innovation and fair competition.

Listen to the full podcast now: https://bit.ly/40GZ9bw

AI for Non-Profits
How Grok's Leaks Shape AI Regulation Pressure

Oct 4, 2025 · 6:13


Governments are now rethinking AI policies post-Grok. We examine the regulatory wave that may follow. Will this slow innovation or push it forward safely?

Get the top 40+ AI Models for $20 at AI Box: https://aibox.ai
AI Chat YouTube Channel: https://www.youtube.com/@JaedenSchafer
Join my AI Hustle Community: https://www.skool.com/aihustle

All-In with Chamath, Jason, Sacks & Friedberg
Biggest LBO Ever, SPAC 2.0, Open Source AI Models, State AI Regulation Frenzy

Oct 3, 2025 · 89:31


(0:00) Bestie intros!
(1:53) EA acquired for $55B in biggest LBO ever, why PE is in trouble
(17:42) IPO market, SPAC 2.0
(27:41) The AI rollup opportunity
(36:01) Sacks joins the show!
(38:27) OpenAI and Meta launch short-form video apps: "AI Slop" or the future of content?
(45:04) Open source AI: DeepSeek's new model, pressure on US AI industry
(1:05:11) State AI regulation frenzy: States' rights vs Federal control, overregulation

Follow the besties:
https://x.com/chamath
https://x.com/Jason
https://x.com/DavidSacks
https://x.com/friedberg

Follow on X: https://x.com/theallinpod
Follow on Instagram: https://www.instagram.com/theallinpod
Follow on TikTok: https://www.tiktok.com/@theallinpod
Follow on LinkedIn: https://www.linkedin.com/company/allinpod

Intro Music Credit: https://rb.gy/tppkzl https://x.com/yung_spielburg
Intro Video Credit: https://x.com/TheZachEffect

Referenced in the show:
https://apnews.com/article/ea-electronic-arts-video-game-silver-lake-pif-d17dc7dd3412a990d2c0a6758aaa6900
https://www.ign.com/articles/xbox-game-pass-ultimate-price-rises-to-30-a-month-microsoft-adds-more-day-one-games-and-throws-in-fortnite-crew-and-ubisoft-classics-to-help-justify-the-cost
https://x.com/Jason/status/1973461806585966655
https://www.npr.org/2025/09/05/nx-s1-5529404/anthropic-settlement-authors-copyright-ai
https://x.com/scaling01/status/1972650237266465214
https://www.insidetechlaw.com/blog/2025/09/californias-transparency-in-frontier-artificial-intelligence-act
https://www.datacenterdynamics.com/en/news/google-withdraws-rezoning-proposal-for-468-acre-data-center-project-in-franklin-township-indianapolis

The Future of Everything presented by Stanford Engineering
The future of the innovation economy

Sep 26, 2025 · 32:07


In a special Future of Everything podcast episode recorded live before a studio audience in New York, host Russ Altman talks to three authorities on the innovation economy. His guests – Fei-Fei Li, professor of computer science and co-director of the Stanford Institute for Human-Centered AI (HAI); Susan Athey, professor and authority on the economics of technology; and Neale Mahoney, Trione Director of the Stanford Institute for Economic Policy Research – bring their distinct-but-complementary perspectives to a discussion on how artificial intelligence is reshaping our economy.

Athey emphasizes that both AI broadly and AI-based coding tools specifically are general-purpose technologies, like electricity or the personal computer, whose impact may be felt quickly in certain sectors but much more slowly in aggregate. She tells how solving one bottleneck to implementation often reveals others – whether in digitization, adoption costs, or the need to restructure work and organizations. Mahoney draws on economic history to say we are in a "veil of ignorance" moment with regard to societal impacts. We cannot know whose jobs will be disrupted, he says, but we can invest in safety nets now to ease the transition. Li cautions against assuming AI will replace people. Instead, she speaks of AI as a "horizontal technology" that could supercharge human creativity – but only if it is properly rooted in science, not science fiction.

Collectively, the panel calls on policymakers, educators, researchers, and entrepreneurs to steer AI toward what they call "human-centered goals" – protecting workers, growing opportunities, and supercharging education and medicine – to deliver broad and shared prosperity. It's the future of the innovation economy on this episode of Stanford Engineering's The Future of Everything podcast.

Have a question for Russ? Send it our way in writing or via voice memo, and it might be featured on an upcoming episode. Please introduce yourself, let us know where you're listening from, and share your question. You can send questions to thefutureofeverything@stanford.edu.

Episode Reference Links:
Stanford Profile: Fei-Fei Li
Stanford Profile: Susan Athey
Stanford Profile: Neale Mahoney

Connect With Us:
Episode Transcripts >>> The Future of Everything Website
Connect with Russ >>> Threads / Bluesky / Mastodon
Connect with School of Engineering >>> Twitter/X / Instagram / LinkedIn / Facebook

Chapters:
(00:00:00) Introduction: Russ Altman introduces live guests Fei-Fei Li, Susan Athey, and Neale Mahoney, professors from Stanford University.
(00:02:37) Lessons from Past Technology: Comparing AI with past technologies and the bottlenecks to their adoption.
(00:06:29) Jobs & Safety Nets: The uncertainty of AI's labor impact and investing in social protections.
(00:08:29) Augmentation vs. Replacement: Using AI as a tool to enhance, not replace, human work and creativity.
(00:11:41) Human-Centered AI & Policy: Shaping AI through universities, government, and global collaboration.
(00:15:58) Education Revolution: The potential for AI to revolutionize education by focusing on human capital.
(00:18:58) Balancing Regulation & Innovation: Balancing pragmatic, evidence-based AI policy with entrepreneurship.
(00:22:22) Competition & Market Power: The risks of monopolies and the role of open models in fair pricing.
(00:25:22) America's Economic Funk: How social media and innovation are shaping America's declining optimism.
(00:27:05) Future in a Minute: The panel shares what gives them hope and what they'd study today.
(00:30:49) Conclusion

Uncommons with Nate Erskine-Smith
The Future of Online Harms and AI Regulation with Taylor Owen

Sep 26, 2025 · 39:00


After a hiatus, we've officially restarted the Uncommons podcast, and our first long-form interview is with Professor Taylor Owen to discuss the ever-changing landscape of the digital world, the fast emergence of AI, and the implications for our kids, consumer safety and our democracy.

Taylor Owen's work focuses on the intersection of media, technology and public policy and can be found at taylorowen.com. He is the Beaverbrook Chair in Media, Ethics and Communications and the founding Director of The Centre for Media, Technology and Democracy at McGill University, where he is also an Associate Professor. He is the host of the Globe and Mail's Machines Like Us podcast and author of several books. Taylor also joined me for this discussion more than five years ago now, and a lot has happened in that time.

Upcoming episodes will include guest Tanya Talaga and an episode focused on the border bill C-2, with experts from The Citizen Lab and the Canadian Association of Refugee Lawyers. We'll also be hosting a live event at the Naval Club of Toronto with Catherine McKenna, who will be launching her new book Run Like a Girl. Register for free through Eventbrite. As always, if you have ideas for future guests or topics, email us at info@beynate.ca

Chapters:
0:29 Setting the Stage
1:44 Core Problems & Challenges
4:31 Information Ecosystem Crisis
10:19 Signals of Reliability & Policy Challenges
14:33 Legislative Efforts
18:29 Online Harms Act Deep Dive
25:31 AI Fraud
29:38 Platform Responsibility
32:55 Future Policy Direction

Further Reading and Listening:
Public rules for big tech platforms with Taylor Owen — Uncommons Podcast
“How the Next Government can Protect Canada's Information Ecosystem.” Taylor Owen with Helen Hayes, The Globe and Mail, April 7, 2025.
Machines Like Us Podcast
Bill C-63

Transcript:

Nate Erskine-Smith (00:00-00:43): Welcome to Uncommons, I'm Nate Erskine-Smith. This is our first episode back after a bit of a hiatus, and we are back with a conversation focused on AI safety, digital governance, and all of the challenges with regulating the internet. I'm joined by Professor Taylor Owen. He's an expert in these issues. He's been writing about these issues for many years. I actually had him on this podcast more than five years ago, and he's been a huge part of getting us in Canada to where we are today. And it's up to this government to get us across the finish line, and that's what we talk about. Taylor, thanks for joining me. Thanks for having me. So this feels like deja vu all over again, because I was going back before you arrived this morning and you joined this podcast in April of 2020 to talk about platform governance.

Taylor Owen (00:43-00:44): It's a different world.

Taylor (00:45-00:45): In some ways.

Nate Erskine-Smith (00:45-01:14): Yeah. Well, yeah, a different world for sure in many ways, but also the same challenges in some ways too. Additional challenges, of course. But I feel like in some ways we've come a long way, because there's been lots of consultation. There have been some legislative attempts at least, but also we haven't really accomplished the thing. So let's set the stage. Some of the same challenges from five years ago, but some new challenges. What are the challenges? What are the problems we're trying to solve? Yeah, I mean, many of them are the same, right?

Taylor Owen (01:14-03:06): I mean, this is part of the technology moves fast. But when you look at the range of things citizens are concerned about when they and their children and their friends and their families use these sets of digital technologies that shape so much of our lives, many things are the same. So they're worried about safety. They're worried about algorithmic content and how that's feeding into what they believe and what they think. They're worried about polarization. We're worried about the integrity of our democracy and our elections. We're worried about sort of some of the more acute harms of like real risks to safety, right? Like children taking their own lives and violence erupting, political violence emerging. Like these things have always been present as a part of our digital lives. And that's what we were concerned about five years ago, right? When we talked about those harms, that was roughly the list. Now, the technologies we were talking about at the time were largely social media platforms, right? So that was the main way five years ago that we shared and consumed information in our digital politics and our digital public lives. And that is what's changing slightly. Now, those are still prominent, right? We're still on TikTok and Instagram and Facebook to a certain degree. But we do now have a new layer of AI, and particularly chatbots. And I think a big question we face in this conversation is, like, how do we develop policies that maximize the benefits of digital technologies and minimize the harms, which is all this is trying to do. Do we need new tools for AI, or are some of the things we worked on for so many years to get right still the right tools for this new set of technologies, with chatbots and various consumer-facing AI interfaces?

Nate Erskine-Smith (03:07-03:55): My line in politics has always been, especially around privacy protections, that we are increasingly living our lives online. And especially, you know, my kids are growing up online, and our laws need to reflect that reality. All of the challenges you've articulated to varying degrees exist in offline spaces, but the rules we have can be incredibly hard to enforce, at a minimum, in the online space. And then some rules are not entirely fit for purpose and need to be updated in the online space. It's interesting. I was reading a recent op-ed of yours, but also some of the research you've done. This really stood out. So you've got the Hogue Commission that says disinformation is the single biggest threat to our democracy. That's worth pausing on.

Taylor Owen (03:55-04:31): Yeah, exactly. Like the commission that spent a year, at the request of all political parties in parliament, at the urging of the opposition party... so it spent a year looking at a wide range of threats to our democratic systems that everybody was concerned about originating in foreign countries. And the conclusion of that was that the single biggest threat to our democracy is the way information flows through our society and how we're not governing it. Like that is a remarkable statement, and it kind of came and went. And I don't know why we moved off from that so fast.

Nate Erskine-Smith (04:31-05:17): Well, and there's a lot to pull apart there, because you've got purposeful, intentional bad actors, foreign influence operations. But you also have a really core challenge of just the reliability and credibility of the information ecosystem. So you have Facebook and Instagram, through Meta, block news in Canada. And your research, this was the stat that stood out. Don't want to put you in and say like, what do we do? Okay. So you say 11 million views of news have been lost as a consequence of that blocking. Okay. That's one piece of information people should know. Yeah. But at the same time.

Taylor Owen (05:17-05:17): A day. Yeah.

Nate Erskine-Smith (05:18-05:18): So right.

Taylor Owen (05:18-05:27): 11 million views a day. And we should... sometimes we go through these things really fast. It's huge. Again, Facebook decides to block news. 40 million people in Canada. Yeah.

Taylor (05:27-05:29): So 11 million times a Canadian.

Taylor Owen (05:29-05:45): And what that means is 11 million times a Canadian would open one of their news feeds and see Canadian journalism is taken out of the ecosystem. And it was replaced by something. People aren't using these tools less. So that journalism was replaced by something else.

Taylor (05:45-05:45): Okay.

Taylor Owen (05:45-05:46): So that's just it.

Nate Erskine-Smith (05:46-06:04): So on the one side, we've got 11 million views a day lost. Yeah. And on the other side, the majority of Canadians get their news from social media. But when the Canadians who get their news from social media are asked where they get it from, they still say Instagram and Facebook. But there's no news there. Right.

Taylor Owen (06:04-06:04): They say they get.

Nate Erskine-Smith (06:04-06:05): It doesn't make any sense.

Taylor Owen (06:06-06:23): It doesn't and it does. It's terrible. They ask Canadians... like, people who use social media to get their news, where do they get their news? And they still say social media, even though it's not there. Journalism isn't there. Journalism isn't there. And I think one of the explanations— Traditional journalism. There is—

Taylor (06:23-06:23): There is—

Taylor Owen (06:23-06:47): Well, this is what I was going to get at, right? Like, there is— one, I think, conclusion is that people don't equate journalism with news about the world. There's not a one-to-one relationship there. Like, journalism is one provider of news, but so are influencers, so are podcasts, people listening to this. Like this would be labeled probably news in people's.

Nate Erskine-Smith (06:47-06:48): Can't trust the thing we say.

Taylor Owen (06:48-07:05): Right. And like, and neither of us are journalists, right? But we are providing information about the world. And if it shows up in people's feeds, as I'm sure it will, like that probably gets labeled in people's minds as news, right? As opposed to pure entertainment, as entertaining as you are.

Nate Erskine-Smith (07:05-07:06): It's public affairs content.

Taylor Owen (07:06-07:39): Exactly. So that's one thing that's happening. The other is that there's a generation of creators that are stepping into this ecosystem to both fill that void and that can use these tools much more effectively. So in the last election, we found that of all the information consumed about the election, 50% of it was created by creators. 50% of the engagement on the election was from creators. Guess what it was for journalists, for journalism? Like 5%. Well, you're more pessimistic though. I shouldn't have led with the question. 20%.

Taylor (07:39-07:39): Okay.

Taylor Owen (07:39-07:56): So all of journalism combined in the entire country, 20 percent of engagement; influencers, 50 percent in the last election. So like we've shifted, at least on social, the actors and people and institutions that are fostering our public.

Nate Erskine-Smith (07:56-08:09): Is there a middle ground here where you take some people that play an influencer-type role but also would consider themselves citizen journalists in a way? How do you... It's a super interesting question, right?

Taylor Owen (08:09-08:31): Like who – when are these people doing journalism? When are they doing acts of journalism? Like someone can do journalism and 90% of the time do something else, right? And then like maybe they reveal something or they tell an interesting story that resonates with people or they interview somebody and it's revelatory and it's a journalistic act, right?

Taylor (08:31-08:34): Like this is kind of a journalistic act we're playing here.

Taylor Owen (08:35-08:49): So I don't think... I think these lines are gray. But I mean, there's some other underlying things here, which, like... it matters, I think, if journalistic institutions go away entirely, right? Like that's probably not a good thing. Yeah, I mean, that's why

Nate Erskine-Smith (08:49-09:30): I say it's terrifying. There's a lot of good in the digital space that is trying to be... there's creative destruction, there's a lot of work to provide people a direct sense of news that isn't that filter that people may mistrust in traditional media. Having said that, so many resources and there's so much history to these institutions, and there's a real ethics to journalism, and journalists take their craft seriously in terms of the pursuit of truth. Absolutely. And losing that access, losing the accessibility to that, is devastating for democracy. I think so.

Taylor Owen (09:30-09:49): And I think the bigger frame of that for me is a democracy needs signals of... we need, as citizens in a democracy, we need signals of reliability. Like we need to know broadly, and we're not always going to agree on it, but like what kind of information we can trust and how we evaluate whether we trust it.

Nate Erskine-Smith (09:49-10:13): And that's what... that is really going away. Pause for a sec. So "signals of reliability" is a good phrase. What does it mean for a legislator when it comes to putting a rule in place? Because you could imagine, you could have a Blade Runner kind of rule that says you've got to distinguish between something that is human generated

Taylor (10:13-10:14): and something that is machine generated.

Nate Erskine-Smith (10:15-10:26): That seems straightforward enough. It's a lot harder if you're trying to distinguish between Taylor, what you're saying is credible, and Nate, what you're saying is not credible,

Taylor (10:27-10:27): which is probably true.

Nate Erskine-Smith (10:28-10:33): But how do you have a signal of reliability in a different kind of content?

Taylor Owen (10:34-13:12): I mean, we're getting into journalism policy here to a certain degree, right? And it's a wicked problem, because the primary role of journalism is to hold you personally to account. And you setting rules for what they can and can't do and how they can and can't behave touches on some real third rails here, right? It's fraught. However, I don't think it should ever be about policy determining what can and can't be said or what is and isn't journalism. The real problem is the distribution mechanism and the incentives within it. So a great example and a horrible example happened last week, right? So Charlie Kirk gets assassinated. I don't know if you opened a feed in the few days after that, but it was a horrendous place, right? Social media was an awful, awful, awful place, because what you saw in that feed was the clearest demonstration I've ever seen in a decade of looking at this of how those algorithmic feeds have become radicalized. Like all you saw on every platform was the worst possible representations of every view. Right. Right. It was truly shocking and horrendous. Like people defending the murder and people calling for the murder of leftists, and like on both sides. Right. People blaming Israel, people, whatever. Right. And that isn't a function of like— Aaron Charlie Kirk to Jesus. Sure. Like— It was bonkers all the way around. Totally bonkers, right? And that is a function of how those ecosystems are designed and the incentives within them. It's not a function of... like, there was journalism being produced about that. Like the New York Times, citizens were doing good content about what was happening. It was like a moment of uncertainty, and journalism was playing a role, but it wasn't. And so I think with all of these questions, including the online harms ones, and I think how we step into an AI governance conversation, the focus always has to be on those systems. I'm like, who and what are the incentives and the technical decisions being made that determine what we experience when we open these products? These are commercial products that we're choosing to consume. And when we open them, a whole host of business and design and technical decisions and human decisions shape the effect it has on us as people, the effect it has on our democracy, the vulnerabilities that exist in our democracy, the way foreign actors or hostile actors can take advantage of them, right? Like all of that stuff we've been talking about, the role reliability of information plays... like these algorithms could be tweaked for reliable versus unreliable content, right? Over time.

Taylor (13:12-13:15): That's not a – instead of reactionary –

Taylor Owen (13:15-13:42): Or like what's most... it gets most engagement or what makes you feel the most angry, which is largely what's driving X, for example, right now, right? You can torque all those things. Now, I don't think we want government telling companies how they have to torque it. But we can slightly tweak the incentives to get better content, more reliable content, less polarizing content, less hateful content, less harmful content, right? Those dials can be incentivized to be turned. And that's where the policy space should play, I think.

Nate Erskine-Smith (13:43-14:12): And your focus on systems and assessing risks with systems, I think that's the right place to play. I mean, we've seen legislative efforts. You've got the three pieces in Canada. You've got online harms. You've got the privacy and very kind of vague initial foray into AI regs, which we can get to. And then a cybersecurity piece. And all of those ultimately died on the order paper. Yeah. We also had the journalistic protection policies, right, that the previous government did.

Taylor Owen (14:12-14:23): I mean... yeah, yeah, yeah. We can debate their merits. Yeah. But there was considerable effort put into backstopping the institutions of journalism by the... Well, they're twofold, right?

Nate Erskine-Smith (14:23-14:33): There's the tax credit piece, sort of financial support. And then there was the Online News Act. Right. Which was trying to pull some dollars out of the platforms to pay for the news as well. Exactly.

Taylor (14:33-14:35): So the sort of supply and demand side thing, right?

Nate Erskine-Smith (14:35-14:38): There's the digital services tax, which is no longer a thing.

Taylor Owen (14:40-14:52): Although it still is a piece of passed legislation. Yeah, yeah, yeah. It still is a thing. Yeah, yeah. Until you guys decide whether to negate the thing you did last year or not, right? Yeah.

Nate Erskine-Smith (14:52-14:55): I don't take full responsibility for that one.

Taylor Owen (14:55-14:56): No, you shouldn't.

Nate Erskine-Smith (14:58-16:03): But other countries have seen more success. Yeah. And so you've got the UK, Australia... the EU really has led the way. In 2018, the EU passes GDPR, which is a privacy set of rules, and we are still behind seven years later. But in 2022, 2023, you've got the Digital Services Act that passes. You've got the Digital Markets Act. And as I understand it, and we've both been involved in international work on this, we've heard from folks like Frances Haugen and others about the need for risk-based assessments. And you're well down the rabbit hole on this. But isn't it, at a high level: you deploy a technology, you've got to identify material risks, you then have to take reasonable measures to mitigate those risks. That's effectively the duty of care built in. And then ideally, you've got the ability for third parties, either civil society or some public office, to audit whether you have adequately identified and disclosed material risks and whether you have taken reasonable steps to mitigate.

Taylor Owen (16:04-16:05): That's like how I have it in my head.

Nate Erskine-Smith (16:05-16:06): I mean, that's it.

Taylor Owen (16:08-16:14): Write it down. Fill in the legislation. Well, I mean, that process happened. I know. That's right. I know.

Nate Erskine-Smith (16:14-16:25): Exactly. Which people... I want to get to that, because C-63 gets us a large part of the way there. I think so. And yet it has been sort of cast aside.

Taylor Owen (16:25-17:39): Exactly. Let's touch on that. But I do think what you described is the online harms piece of this governance agenda. When you look at what the EU has done, they have put in place the various building blocks for what a broad digital governance agenda might look like. Because the reality of this space, which we talked about last time, and it's the thing that's infuriating about digital policy, is that you can't do one thing. The digital economy and our digital lives are so vast, and the incentives and the effect they have on society so broad, that there's no one solution. So anyone who tells you fix privacy policy and you'll fix all the digital problems we just talked about is full of it. Anyone who says competition policy, like break up the companies, will solve all of these problems is wrong, right? Anyone who says online harms policy, which we'll talk about, fixes everything is wrong. You have to do all of them. And Europe has, right? They updated their privacy policy. They've begun to build a big online harms agenda. They updated their competition regime. And they're also doing some AI policy too, right? So you need comprehensive approaches, which is not an easy thing to do, right? It means doing three big things all over.

Nate Erskine-Smith (17:39-17:41): Especially in minority parliaments, short periods of time, legislatively.

Taylor Owen (17:41-18:20): Different countries have taken different pieces of it. Now, on the online harms piece, which is what the previous government took really seriously, and I think it's worth putting a point on that, right, when we talked last it was the beginning of this process. After we spoke, there was a national expert panel. There were 20 consultations. There were four citizens' assemblies. There was a national commission, right? Like a lot of work went into looking at what every other country had done, because this is a really wicked, difficult problem, and trying to learn from what Europe, Australia and the UK had all done. And we were kind of taking the benefit of being late, right? So they were all ahead of us.

Taylor (18:21-18:25): People you work with on that grand committee. We're all quick and do our own consultations.

Taylor Owen (18:26-19:40): Exactly. And like the model that was developed out of that, I think, was the best model of any of those countries. And it's now seen internationally, interestingly, as the new sort of milestone that everybody else is building on, right? And what it does is it says if you're going to launch a digital product, right, like a consumer-facing product in Canada, you need to assess risk. And you need to assess risk on these broad categories of harms that we have decided as legislators we care about, or you've decided as legislators you cared about, right? Child safety, child sexual abuse material, fomenting violence and extremist content, right? Like broad categories that we've said we think are harmful to our democracy. All you have to do as a company is a broad assessment of what could go wrong with your product. If you find something could go wrong... so let's use a tangible example. Let's say you are a social media platform and you are launching a product that's going to be used by kids, and it allows adults to contact kids without parental consent or without kids opting into being a friend. What could go wrong with that?

Nate Erskine-Smith (19:40-19:40): Yeah.

Taylor (19:40-19:43): Like what could go wrong? Yeah, a lot could go wrong.

Taylor Owen (19:43-20:27): And maybe strange men will approach teenage girls. Maybe, right? Like if you do a risk assessment, that is something you might find. You would then be obligated to mitigate that risk and show how you've mitigated it, right? Like you put a policy in place to show how you're mitigating it. And then you have to share data about how these tools are used so that we can monitor, publics and researchers can monitor, whether that mitigation strategy worked. That's it. In that case, that feature was launched by Instagram in Canada without any risk assessment, without any safety evaluation. And we know there was like a widespread problem of teenage girls being harassed by strange older men.

Taylor (20:28-20:29): Incredibly creepy.

Taylor Owen (20:29-20:37): A very easy... but not like a super illegal thing, not something that would be caught by the criminal code, but a harm we can all admit is a problem.

Taylor (20:37-20:41): And this kind of mechanism would have just filtered out.

Taylor Owen (20:41-20:51): Default settings, right? And thinking a bit before you launch a product in a country about what kind of broad risks might emerge when it's launched, and being held accountable for doing that.

Nate Erskine-Smith (20:52-21:05): Yeah, I quite like the... I mean, maybe you've got a better read of this, but in the UK... California has pursued this. I was looking at it recently, Elizabeth Denham is now the Jersey Information Commissioner or something like that.

Taylor Owen (21:05-21:06): I know, it's just, yeah.

Nate Erskine-Smith (21:07-21:57): I don't... random. I don't know. But she is a Canadian, for those who don't know Elizabeth Denham. And she was the information commissioner in the UK. And she oversaw the implementation of the first age-appropriate design code. That always struck me as an incredibly useful approach. In that even outside of social media platforms, even outside of AI, take a product like Roblox, where tons of kids use it. And just forcing companies to ensure that the default settings are prioritizing child safety, so that you don't put the onus on parents and kids to figure out each of these different games and platforms. In a previous world of consumer protection, offline, it would have been de facto. Of course we've prioritized consumer safety first and foremost. But in the online world, it's like an afterthought.

Taylor Owen (21:58-24:25): Well, when you say consumer safety, it's worth referring back to what we mean. Like a duty of care can seem like an obscure concept. But you're a lawyer, it's a real thing, right? Like you walk into a store. I walk into your office. I have an expectation that the bookshelves aren't going to fall off the wall and kill me, right? And you have to bolt them into the wall because of that, right? Like that is a duty of care that you have for me when I walk into your public space or private space. Like that's all we're talking about here. And the age-appropriate design code, yes, sort of developed and implemented by a Canadian in the UK. And what it says... it also was embedded in the Online Harms Act, right? If we'd passed that last year, we would be implementing an age-appropriate design code as we speak, right? What that would say is any product that is likely to be used by a kid needs to do a set of additional things, not just these risk assessments, right? Because we think kids don't have the same rights as adults. We have different duties to protect kids than adults, right? So maybe they should do an extra set of things for their digital products. And it includes things like no behavioral targeting, no advertising, no data collection, no sexual adult content, right? Like kind of things that... Seem obvious. And if you're now a child in the UK and you go on a digital product, you are safer, because you have an age-appropriate design code governing your experience online. Canadian kids don't have that because that bill didn't pass, right? So there's consequences to this stuff. And I get really frustrated now when I see the conversation sort of pivoting to AI, for example, right? Like all we're supposed to care about is AI adoption and all the amazing things AI is going to do to transform our world, which are probably real, right? Like, not discounting its power. And just move on from all of these... both problems and solutions that have been developed to a set of challenges that still exist on social platforms, like they haven't gone away, people are still using these tools and the harms still exist, and probably are applicable to this next set of technologies as well. So this moving on from what we've learned and the work that's been done... to the people working in this space and the wide stakeholders in this country who care about this stuff and are working on it, it feels like... you say deja vu at the beginning, and it is deja vu, but it's kind of worse, right? Cause it's like deja vu and then ignoring the

Taylor (24:25-24:29): five years of work. Yeah, deja vu if we were doing it again. Right. We're not even, we're not even

Taylor Owen (24:29-24:41): Well, yeah. I mean, hopefully... I actually am not... I'm actually optimistic, I would say, that we will, for a few reasons. Like one, citizens want it, right? Like.

Nate Erskine-Smith (24:41-24:57): Yeah, I was surprised on the... so you mentioned there that the rules that we design, the risk assessment framework really applied to social media, could equally be applied to deliver AI safety, and it could be applied to new technology in a useful way.

Taylor Owen (24:58-24:58): Some elements of it. Exactly.

Nate Erskine-Smith (24:58-25:25): I think AI safety is a broad bucket of things. So let's get to that a little bit, because I want to pull the pieces together. So I had a constituent come in the office, and he is really, like, super mad. He's super mad. Why is he mad? Does that happen very often? Do people be mad when they walk into this office? Not as often as you think, to be honest. Not as often as you think. And he's mad because he believes Mark Carney ripped him off.

Taylor Owen (25:25-25:25): Okay.

Nate Erskine-Smith (25:25-26:36): Okay. Yep. He believes Mark Carney ripped him off. Not with a broken promise in politics, not because he said one thing and is delivering something else, nothing to do with politics. He saw a video online. Mark Carney told him to invest money. He invested money, and he's out the 200 bucks or whatever it was. And I was like, how could you possibly have lost money in this way? This was obviously a scam. Like, how could you have been deceived? But then I go and I watched the video. And it is... okay, I'm not gonna send the 200 bucks, and I've grown up with the internet, but I can see how... Absolutely. In the same way phone scams and Nigerian princes and all of that have their own success rate. I mean, this was a very believable video that was obviously AI generated. So we are going to see rampant fraud, if we aren't already. We are going to see many challenges with respect to AI safety. Over and above the risk assessment piece, what do we do to address these challenges?

Taylor Owen (26:37-27:04): So that is a huge problem, right? Like the AI fraud, AI video fraud, is a huge challenge. When we were monitoring the last election, by far the biggest problem or vulnerability of the election was an AI-generated video campaign that every day would take videos of Poilievre's and Carney's speeches from the day before and, like, morph them into conversations about investment strategies.

Taylor (27:05-27:07): And it was driving people to a crypto scam.

Taylor Owen (27:08-27:11): But it was torquing the political discourse.

Taylor (27:11-27:11): That's what it must have been.

Taylor Owen (27:12-27:33): I mean, there's other cases of this, but that's probably... and it was running rampant on particularly Meta platforms. They were flagged. They did nothing about it. There were thousands of these videos circulating throughout the entire election, right? And it's not like the end of the world, right? Like nobody... but it torqued our political debate. It ripped off some people. And these kinds of scams are...

Taylor (27:33-27:38): It's clearly illegal. It's clearly illegal. It probably breaks election law too, misrepresenting a political figure, right?

Taylor Owen (27:38-27:54): So I think there's probably an Elections Canada response to this that's needed. And it's fraud. And it's fraud, absolutely. So what do you do about that, right? And the head of the Canadian Banking Association said there's like billions of dollars in AI-based fraud in the Canadian economy right now. Right? So it's a big problem.

Taylor (27:54-27:55): Yeah.

Taylor Owen (27:55-28:46): I actually think there's a very tangible policy solution. You put these consumer-facing AI products into the Online Harms Act framework, right? And then you add fraud and AI scams as a category of harm. And all of a sudden, if you're Meta and you are operating in Canada during an election, you'd have to do a risk assessment on the AI fraud potential of your product. Responsibility for your platform. And then when it starts to circulate, we would see it. They'd be called out on it. They'd have to take it down. And that's that, right? Like, so that we have mechanisms for dealing with this. But it does mean evolving what we worked on over the past five years, these online harms risk assessment models, and bringing some of the consumer-facing AI, both products and related harms, into the framework.

Nate Erskine-Smith (28:47-30:18): To put it a different way... I mean, it's years ago now that we had this grand committee in the UK holding Facebook and others accountable. This really was created in the wake of the Cambridge Analytica scandal. And the platforms at the time were really holding firm to this idea of Section 230 and avoiding host liability, saying, oh, we couldn't possibly be responsible for everything on our platform. And there was one problem with that argument, which is they completely acknowledged the need for them to take action when it came to child pornography. And so they said, yeah, well, you know, no liability for us. But of course, there can be liability on this one specific piece of content, and we'll take action on this one specific piece of content. And it always struck me from there on out... I mean, there's no real intellectual consistency here. It's more just what should be in that category of things that they should take responsibility for. And obviously harmful content like that should be... that's an obvious first step, but obvious for everyone. But there are other categories. Fraud is another one. When they're making so much money, when they are investing so much money in AI, when they're ignoring privacy protections and everything else throughout the years, I mean, we can't leave it up to them. And setting a clear set of rules to say this is what you're responsible for, and expanding that responsibility, seems to make a good amount of sense.

Taylor Owen (30:18-30:28): It does, although I think those responsibilities need to be different for different kinds of harms. Because there are different speech implications and democratic implications of sort of absolute solutions to different kinds of content.

Taylor (30:28-30:30): So like child pornography is a great example.

Taylor Owen (30:30-31:44): In the Online Harms Act, for almost every type of content, it was that risk assessment model. But there was a carve-out for child sexual abuse material, so including child pornography, and for intimate images and videos shared without consent. It said the platforms actually have a different obligation, and that's to take it down within 24 hours. And the reason you can do it with those two kinds of content is because, one, the AI is actually pretty good at spotting it. It might surprise you, but there's a lot of naked images on the internet that we can train AI with. So we're actually pretty good at using AI to pull this stuff down. But the bigger one is that, I think, as a society, it's okay to be wrong in the gray area of that speech, right? Like if something is debatable, whether it's child pornography, I'm actually okay with us suppressing the speech of the person who sits in that gray area. Whereas for something like hate speech, it's a really different story, right? Like we do not want to suppress and over-index for that gray area on hate speech, because that's going to capture a lot of reasonable debate that we probably want.

Nate Erskine-Smith (31:44-31:55): Yeah, I think soliciting investment via fraud probably falls more in line with the child pornography category, where it's, you know, very obviously illegal.

Taylor Owen (31:55-32:02): And that mechanism is a takedown mechanism, right? Like if we see fraud, if we know it's fraud, then you take it down, right? Some of these other things we have to go with.

Nate Erskine-Smith (32:02-32:24): I mean, my last question really is... you pull the threads together. You've got these different pieces that were introduced in the past. And you've got a government with lots of similar folks around the table, but a new government and a new prime minister, certainly with a vision for getting the most out of AI when it comes to our economy.

Taylor (32:24-32:25): Absolutely.

Nate Erskine-Smith (32:25-33:04): You have, for the first time in this country, an AI minister, a junior minister to industry, but still a specific titled portfolio with his own deputy minister, and he really wants to be seized with this. And in a way, I think, from every conversation I've had with him, he wants to maximize productivity in this country using AI, but is also cognizant of the risks and wants to address AI safety. So where from here? You know, you've talked in the past about sort of a grander tech accountability and sovereignty act. Do we do piecemeal... you know, a privacy bill here, and an AI safety bill, and an online harms bill, and we have disparate pieces? What's the answer here?

Taylor Owen (33:05-34:14): I mean, I don't have the exact answer. But I think there's some lessons from the past that this government could take. And one is piecemeal bills that aren't centrally coordinated or have no sort of connectivity between them end up with piecemeal solutions that are imperfect and would benefit from some cohesiveness between them, right? So when the previous government released AIDA, the AI Act, it was really in tension in some real ways with the online harms approach. So two different departments issuing two similar bills on two separate technologies, not really talking to each other, as far as I can tell from the outside, right? So we need a coordinated, comprehensive effort on digital governance. Like that's point one, and we've never had it in this country. And when I saw the announcement of an AI minister, my mind went first to that: he or that office could be that role. Because AI is cross-cutting, right? Like every department in our federal government touches AI in one way or another. And the governance of AI, and the adoption on the other side of AI by society, is going to affect every department and every bill we need.

Nate Erskine-Smith (34:14-34:35): So if Evan pulled in the privacy pieces that would help us catch up to GDPR... which it sounds like they will, right? Some version of C-27 will probably come back. If he pulls in the online harms pieces that aren't related to the criminal code, and drops those provisions, says, you know, Sean Fraser, you can deal with this if you like, but these are the pieces I'm holding on to.

Taylor Owen (34:35-34:37): With a frame of consumer safety, right?

Nate Erskine-Smith (34:37-34:37): Exactly.

Taylor Owen (34:38-34:39): If he wants...

Nate Erskine-Smith (34:39-34:54): Which is connected to privacy as well, right? Like these are all... So then you have thematically a bill that makes sense. And then you can pull in as well the AI safety piece. And then it becomes a consumer protection bill when it comes to living our lives online. Yeah.

Taylor Owen (34:54-36:06): And I think there's an argument whether that should be one bill or whether it's multiple ones. I actually don't think it... I think there's cases for both, right? There's concern about big omnibus bills that do too many things and too many committees reviewing them and whatever. That's sort of a machinery of government question, right? But the principle is that these should be tied together in a narrative that the government is explicit about making and communicating to publics, right? We know that 85 percent of Canadians want AI to be regulated. What do they mean? What they mean is, at the same time as they're being told by our government and by companies that they should be using and embracing this powerful technology in their lives, they're also seeing some risks. They're seeing risks to their kids. They're being told their jobs might disappear and might take their... Why should I use this thing? When I'm seeing some harms, I don't see you guys doing anything about these harms. And I'm seeing some potential real downside for me personally and my family. So even in the adoption frame, I think thinking about data privacy, safety, consumer safety... to me, that's the real frame here. It's citizen safety, consumer safety, using these products. Yeah, politically, I just, I mean, that is what it is. It makes sense to me.

Nate Erskine-Smith (36:06-36:25): Right, I agree. And really lean into child safety at the same time. Because, like, I've got a nine-year-old and a five-year-old. They are growing up with the internet. And I do not want to have to police every single platform that they use. I do not want to have to log in and go, these are the default settings on the parental controls.

Taylor (36:25-36:28): I want to turn to government and go, do your damn job.

Taylor Owen (36:28-36:48): Or just make them slightly safer. I know these are going to be imperfect. I have a 12-year-old. He spends a lot of time on YouTube. I know that's always going to be a place with sort of content that I would prefer he doesn't see. But I would just like some basic safety standards on that thing, so he's not seeing the worst of the worst.

Nate Erskine-Smith (36:48-36:58): And we should expect that. Certainly that YouTube, with its promotion engine, the recommendation function, is not actively promoting terrible content to your 12-year-old.

Taylor Owen (36:59-37:31): Yeah. That's like de minimis. Can we just torque this a little bit, right? So maybe he's not seeing horrible content about Charlie Kirk when he's a 12-year-old on YouTube, right? Like, can we just do something? And I think that's a reasonable expectation as a citizen. But it requires governance.
That will not – and that's – it's worth putting a real emphasis on that is one thing we've learned in this moment of repeated deja vus going back 20 years really since our experience with social media for sure through to now is that these companies don't self-govern.Taylor37:31-37:31Right.Taylor Owen37:32-37:39Like we just – we know that indisputably. So to think that AI is going to be different is delusional. No, it'll be pseudo-profit, not the public interest.Taylor37:39-37:44Of course. Because that's what we are. These are the largest companies in the world. Yeah, exactly. And AI companies are even bigger than the last generation, right?Taylor Owen37:44-38:00We're creating something new with the scale of these companies. And to think that their commercial incentives and their broader long-term goals of around AI are not going to override these safety concerns is just naive in the nth degree.Nate Erskine-Smith38:00-38:38But I think you make the right point, and it's useful to close on this, that these goals of realizing the productivity possibilities and potentials of AI alongside AI safety, these are not mutually exclusive or oppositional goals. that it's you create a sandbox to play in and companies will be more successful. And if you have certainty in regulations, companies will be more successful. And if people feel safe using these tools and having certainly, you know, if I feel safe with my kids learning these tools growing up in their classrooms and everything else, you're going to adoption rates will soar. Absolutely. And then we'll benefit.Taylor Owen38:38-38:43They work in tandem, right? And I think you can't have one without the other fundamentally.Nate Erskine-Smith38:45-38:49Well, I hope I don't invite you back five years from now when we have the same conversation.Taylor Owen38:49-38:58Well, I hope you invite me back in five years, but I hope it's like thinking back on all the legislative successes of the previous five years. I mean, that'll be the moment.Taylor38:58-38:59Sounds good. Thanks, David. Thanks. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.uncommons.ca

The John Batchelor Show
Preview: Kevin Frazier of University of Texas Law School/Civitas Institute discusses congressional concerns over AI regulation, balancing state interests versus federal goals of preventing cross-state policy projection and prioritizing national AI innovation and growth

The John Batchelor Show

Play Episode Listen Later Sep 24, 2025 1:29


Preview: Kevin Frazier of University of Texas Law School/Civitas Institute discusses congressional concerns over AI regulation, balancing state interests versus federal goals of preventing cross-state policy projection and prioritizing national AI innovation and growth.

FOX on Tech
Parents Demand AI Regulation from Congress

FOX on Tech

Play Episode Listen Later Sep 24, 2025 1:45


Amid reports of multiple people dying or taking their own lives after interacting with artificial intelligence chat bots, the families of victims are demanding better safeguards from developers. Learn more about your ad choices. Visit podcastchoices.com/adchoices

TechCrunch
Meta launches super PAC to fight AI regulation

TechCrunch

Play Episode Listen Later Sep 24, 2025 7:00


Plus - Google Photos users on Android can now edit their photos by talking to or texting the AI; Disney is raising the price of Disney+ Learn more about your ad choices. Visit podcastchoices.com/adchoices

The Ross Kaminsky Show
09-23-25 - *FULL SHOW* Kimmel is Back; AI Regulation; Human Trafficking

The Ross Kaminsky Show

Play Episode Listen Later Sep 23, 2025 96:11 Transcription Available


The Daily Crunch – Spoken Edition
Meta launches super PAC to fight AI regulation as state policies mount, also Alloy is bringing data management to the robotics industry

The Daily Crunch – Spoken Edition

Play Episode Listen Later Sep 23, 2025 7:09


Meta has raised the stakes in Big Tech's fight against AI regulation. The Facebook-maker is investing “tens of millions” of dollars into a new super PAC to fight state-level tech policy proposals that could stifle AI advancement, reports Axios. Also, Sydney, Australia-based Alloy is building data infrastructure for robotics companies, to help them process and organize all the data their robots collect from various sources, including sensors and cameras. Learn more about your ad choices. Visit podcastchoices.com/adchoices

Acxiom Podcast
#74 - AI Hype Versus Reality | Real Talk about Marketing and Acxiom Podcast

Acxiom Podcast

Play Episode Listen Later Sep 16, 2025 50:08


Industry visionary Graham Wilkinson joins the podcast to talk about the industry's adoption of AI, where it's working and where it's not. The team examines the role of AI across generative advertising, data fragmentation, breaking down silos and the genesis of creativity. Thanks for listening! Follow us on Twitter and Instagram or find us on Facebook.

Intangiblia™
Sealed Code: When Predictive Models Go to Court

Intangiblia™

Play Episode Listen Later Sep 15, 2025 22:46 Transcription Available


Welcome to a fascinating exploration of the hidden legal battles shaping tomorrow's technology. Predictive algorithms have become the crystal balls of modern business, forecasting everything from home prices to healthcare costs, but they're also becoming the center of high-stakes courtroom dramas worth hundreds of millions of dollars.

Across the globe, from Texas courtrooms to China's Supreme People's Court, judges and juries are answering a profound question: who owns the right to predict the future? The HouseCanary v. Amrock case resulted in a staggering $600 million verdict over real estate valuation algorithms, while Alibaba secured a 30 million RMB judgment against a company that allegedly scraped its predictive marketing tools. Even industrial applications aren't immune, with companies like Shen Group successfully protecting predictive design software for machinery components.

What makes these cases particularly compelling is how they're redefining intellectual property law. Courts are now recognizing that AI model weights, the mathematical parameters tuned during training, qualify as protectable trade secrets. Data pipelines, prediction engines, and algorithmic structures have all received similar protection. The real drama often unfolds when employees change companies, raising thorny questions about what constitutes general expertise versus proprietary knowledge that belongs to the former employer.

Healthcare prediction presents especially valuable territory, with ongoing battles between companies like Qruis and Epic Systems, or Milliman and Gradient AI, demonstrating how patient data forecasting creates immensely valuable intellectual property. Whether it's forecasting home values on Zillow or optimizing Medicare billing, these predictive tools aren't just convenient features, they're corporate crown jewels worth protecting at almost any cost.

Ready to dive deeper into the invisible rules governing innovation? Subscribe now and join us as we continue to decode the legal frameworks shaping our technological future. The algorithms may predict tomorrow, but who gets to own those predictions? That's what we're exploring on Intangiblia.

Get the book!

Six Pixels of Separation Podcast - By Mitch Joel
Status... Humanity's Most Powerful Invisible Force With Toby Stuart - TWMJ #1001

Six Pixels of Separation Podcast - By Mitch Joel

Play Episode Listen Later Sep 14, 2025 55:25


Welcome to episode #1001 of Thinking With Mitch Joel (formerly Six Pixels of Separation). Toby Stuart is a Distinguished Professor of Business Administration at the Haas School of Business, UC Berkeley, where he directs the Berkeley-Haas Entrepreneurship Program and the Institute for Business Innovation. Over his career, he has also taught at Harvard, Columbia, Chicago Booth and MIT Sloan, and he is recognized globally as one of the leading thinkers on entrepreneurship, networks and organizational strategy. Beyond academia, Toby sits on the boards of multiple technology companies, cofounded the Black Venture Institute, and serves as the founding Chairman of Workday's AI Advisory Board. His latest book, Anointed - The Extraordinary Effects Of Social Status In A Winner-Take-Most World, examines the invisible hierarchies that govern so much of human life and why small advantages so often compound into massive outcomes. From why blurbs on books sway readers, to how neighborhoods or technologies become “the next big thing,” to the inequalities embedded in who gets credit for innovation, Anointed reveals how status shapes trust, opportunity and even our sense of self (I loved this book). Toby argues that status is both necessary - helping us navigate infinite choices in the modern world - and corrosive, creating inequality that is often disconnected from true merit. In our discussion, Toby unpacks the mechanics of anointment, the ways status rubs off through association and how technology, especially AI, might both entrench and disrupt these hierarchies. The conversation explores the paradox of meritocracy, the illusions of self-anointment in today's digital culture and the future of work as AI accelerates change. If you've ever wondered why some ideas, people, or companies get chosen while others languish (or even how you got to where you are), this conversation will challenge you to see the hidden operating system behind everyday decisions. Enjoy the conversation... Running time: 55:24. Hello from beautiful Montreal. Listen and subscribe over at Apple Podcasts. Listen and subscribe over at Spotify. Please visit and leave comments on the blog - Thinking With Mitch Joel. Feel free to connect to me directly on LinkedIn. Check out ThinkersOne. Here is my conversation with Toby Stuart. Anointed - The Extraordinary Effects Of Social Status In A Winner-Take-Most World. Haas School of Business. Follow Toby on LinkedIn.
Chapters:
(00:00) - Introduction to Toby Stuart.
(01:50) - Understanding Anointed and Social Status.
(04:40) - The Necessity and Corrosiveness of Status.
(08:54) - Blurbs, Status, and the Publishing Industry.
(12:40) - The Role of Association in Anointment.
(15:29) - Breaking into New Fields and Status Transfer.
(19:44) - Meritocracy and the Role of AI.
(27:12) - AI's Impact on Status and Society.
(31:38) - The Impact of AI on Status and Credentials.
(34:46) - Evaluating Human Contribution in the Age of AI.
(39:17) - The Future of AI Regulation and Power Dynamics.
(45:29) - Self-Anointed Status in a Digital World.
(51:25) - Reflections on Status and Personal Growth.

The Health Ranger Report
Brighteon Broadcast News, Sep 9, 2025 – FINANCIAL END GAME: The world abandons U.S. dollar debt and chooses GOLD as money

The Health Ranger Report

Play Episode Listen Later Sep 9, 2025 168:22


- Financial Crisis and Geopolitical Instability (0:00)
- Historical Financial Predictions and Current Market Conditions (2:23)
- US Financial Policies and Global Repercussions (9:59)
- Gold Revaluation and Economic Collapse (27:39)
- AI and Job Replacement (39:15)
- Simulation Theory and AI Safety (49:33)
- AI and Human Extinction (1:19:57)
- Decentralization and Survival Strategies (1:21:35)
- Perpetual Motion and Safety Machines (1:21:50)
- Resource Competition and AI Extermination (1:24:24)
- Simulation Theory and AI Simulations (1:25:58)
- Religious Parallels and Near-Death Experiences (1:27:54)
- AI Development and Human Self-Preservation (1:32:02)
- AI Regulation and Government Inaction (1:37:55)
- AI Deployment and Economic Pressure (1:39:57)
- AI Extermination Methods and Human Survival (1:42:32)
- Simulation Theory and Personal Beliefs (1:43:55)
- AI and Health Nutrition (1:55:41)
- AI and Government Trust (1:58:50)
- AI and Financial Planning (2:19:36)
- Cosmic Simulation Discussion (2:21:46)
- Enoch's Spiritual Connection Insights (2:39:06)
- Humility and Material Possessions (2:40:13)
- AI and Spiritual Connection (2:40:53)
- Roman's Directness and Humor (2:41:35)
- After-Party Segment (2:43:40)
- Health Ranger Store Product Introduction (2:44:15)
- Importance of Clean Chicken Broth (2:45:25)
- Conclusion and Call to Action (2:47:42)
For more updates, visit: http://www.brighteon.com/channel/hrreport
NaturalNews videos would not be possible without you, as always we remain passionately dedicated to our mission of educating people all over the world on the subject of natural healing remedies and personal liberty (food freedom, medical freedom, the freedom of speech, etc.). Together, we're helping create a better world, with more honest food labeling, reduced chemical contamination, the avoidance of toxic heavy metals and vastly increased scientific transparency.
▶️ Every dollar you spend at the Health Ranger Store goes toward helping us achieve important science and content goals for humanity: https://www.healthrangerstore.com/
▶️ Sign Up For Our Newsletter: https://www.naturalnews.com/Readerregistration.html
▶️ Brighteon: https://www.brighteon.com/channels/hrreport
▶️ Join Our Social Network: https://brighteon.social/@HealthRanger
▶️ Check In Stock Products at: https://PrepWithMike.com

RNZ: Morning Report
Experts send open letter to govt calling for better AI regulation

RNZ: Morning Report

Play Episode Listen Later Sep 1, 2025 5:13


More than 20 AI experts have signed an open letter urging the government to better regulate the use of artificial intelligence here. Victoria University senior lecturer in AI Dr Andrew Lensen spoke to Ingrid Hipkiss.

Insuring Cyber Podcast - Insurance Journal TV
EP. 106: Navigating State, Federal and Global AI Regulation

Insuring Cyber Podcast - Insurance Journal TV

Play Episode Listen Later Aug 27, 2025 27:49


On this episode of The Insuring Cyber Podcast, Claire Davey, senior vice president and head of product innovation and emerging risk at Relm Insurance, and Peter Dugas, executive …

a16z
AI and Accelerationism with Marc Andreessen

a16z

Play Episode Listen Later Aug 22, 2025 69:03


Marc Andreessen, cofounder of Andreessen Horowitz, joins the Hermitix podcast for a conversation on AI, accelerationism, energy, and the future. From the thermodynamic roots of effective accelerationism (E/acc) to the cultural cycles of optimism and fear around new technologies, Marc shares why AI is best understood as code, how nuclear debates mirror today's AI concerns, and what these shifts mean for society and progress.
Timecodes:
0:00 Introduction
0:51 Podcast Overview & Guest Introduction
1:45 Marc Andreessen's Background
3:30 Technology's Role in Society
4:44 The Hermitix Question: Influential Thinkers
8:19 AI: Past, Present, and Future
10:57 Superconductors and Technological Breakthroughs
15:53 Optimism, Pessimism, and Stagnation in Technology
22:54 Fear of Technology and Social Order
29:49 Nuclear Power: Promise and Controversy
34:53 AI Regulation and Societal Impact
41:16 Effective Accelerationism Explained
47:19 Thermodynamics, Life, and Human Progress
53:07 Learned Helplessness and the Role of Elites
1:01:08 The Future: 10–50 Years and Beyond
Resources:
Marc on X: https://x.com/pmarca
Marc's Substack: https://pmarca.substack.com/
Become part of the Hermitix community:
On X: https://x.com/Hermitixpodcast
Support: http://patreon.com/hermitix
Find James on X: https://x.com/meta_nomad
Stay Updated:
Let us know what you think: https://ratethispodcast.com/a16z
Find a16z on Twitter: https://twitter.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Subscribe on your favorite podcast app: https://a16z.simplecast.com/
Follow our host: https://x.com/eriktorenberg
Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.

Sithty Minutes
AI Regulation - Colorado Special Session

Sithty Minutes

Play Episode Listen Later Aug 22, 2025 39:55


Welcome aboard Kyber Squadron! This week, Colorado is in the midst of a Special Session dealing with the fallout from the Federal Budget and AI regulation, and Andrés got to speak with legislators and hear from them at a press conference hours before the special session started. If you're in Colorado and AI matters to you, there's never been a better time to reach out to your state legislator!
Follow us:
Twitch: @Sithty_Minutes
BlueSky: @sithtyminutes.bsky.social
Discord: Sithty Minutes
Show Notes:
Democrat AI Option 1
AI Sunshine Act
Rep. Weinberg's AI Bill
Find your (CO) Legislator!

Houston Matters
Ten Commandments in schools (Aug. 21, 2025)

Houston Matters

Play Episode Listen Later Aug 21, 2025 50:00


On Thursday's show: A federal judge on Wednesday temporarily blocked 11 public school districts in Texas' largest metropolitan areas from displaying the Ten Commandments in classrooms as required by a new state law set to take effect Sept. 1. A legal expert joins us to discuss the case and where it goes from here.

Also this hour: Sen. Ted Cruz is largely opposed to regulation of AI, while the state is set to be one of a few to implement the first legislation related to the industry. We discuss that dichotomy.

Then, Houston native filmmaker Nell Teare discusses some of the lessons she's learned about overcoming the barriers and naysayers to building a creative career. It's a topic she'll discuss Sunday afternoon during the Houston Media Conference.

And we learn how school publications like newsletters, zines, and podcasts are giving students ways to take control of their media and better understand how media works.

IT Privacy and Security Weekly update.
EP 256.5. Deep Dive. EP 256 The IT Privacy and Security Weekly Update for the Week ending August 19th, 2025 and Something Phishy

IT Privacy and Security Weekly update.

Play Episode Listen Later Aug 21, 2025 17:34


Phishing Training Effectiveness: A study of over 19,000 employees showed traditional phishing training has limited impact, improving scam detection by just 1.7% over eight months. Despite varied training methods, over 50% of participants fell for at least one phishing email, highlighting persistent user susceptibility and the need for more effective cybersecurity education strategies.

Cybersecurity Risks in Modern Cars: Modern connected vehicles are highly vulnerable to cyberattacks. A researcher exploited flaws in a major carmaker's web portal, gaining “national admin” access to dealership data and demonstrating the ability to remotely unlock cars and track their locations using just a name or VIN. This underscores the urgent need for regular vehicle software updates and stronger manufacturer security measures to prevent data breaches and potential vehicle control by malicious actors.

Nation-State Cyberattacks on Infrastructure: Nation-state cyberattacks targeting critical infrastructure are escalating. Russian hackers reportedly took control of a Norwegian hydropower dam, releasing water undetected for hours. While no physical damage occurred, such incidents reveal the potential for widespread disruption and chaos, signaling a more aggressive stance by state-sponsored cyber actors and the need for robust infrastructure defenses.

AI Regulation in Mental Health Therapy: States like Illinois, Nevada, and Utah are regulating or banning AI in mental health therapy due to safety and privacy concerns. Unregulated AI chatbots risk harmful interactions with vulnerable users and unintended data exposure. New laws require licensed professional oversight and prohibit marketing AI chatbots as standalone therapy tools to protect users.

Impact of Surveillance Laws on Privacy Tech: Proposed surveillance laws, like Switzerland's data retention mandates, are pushing privacy-focused tech firms like Proton to relocate infrastructure. Proton is moving its AI chatbot, Lumo, to Germany and considering Norway for other services to uphold its no-logs policy. This reflects the tension between national security and privacy, driving companies to seek jurisdictions with stronger data protection laws.

Data Brokers and Privacy Challenges: Data brokers undermine consumer privacy despite laws like California's Consumer Privacy Act. Over 30 brokers were found hiding data deletion instructions from Google search results using specific code, creating barriers for consumers trying to opt out of data collection. This intentional obfuscation frustrates privacy rights and weakens legislative protections.

Android pKVM Security Certification: Android's protected Kernel-based Virtual Machine (pKVM) earned SESIP Level 5 certification, the first software security solution for consumer electronics to achieve this standard. Designed to resist sophisticated attackers, pKVM enables secure handling of sensitive tasks like on-device AI processing, setting a new benchmark for consistent, verifiable security across Android devices.

VPN Open-Source Code Significance: VP.NET's decision to open-source its Intel SGX enclave code on GitHub enhances transparency in privacy technology. By allowing public verification, users can confirm the code running on servers matches the open-source version, fostering trust and accountability. This move could set a new standard for the VPN and privacy tech industry, encouraging others to prioritize verifiable privacy claims.

Acxiom Podcast
#73 - Don't Get Caught in AI Quicksand | Real Talk about Marketing and Acxiom Podcast

Acxiom Podcast

Play Episode Listen Later Aug 19, 2025 48:50


The industry is experiencing the ‘Wild West' in terms of AI implementation and associated legislation. Leading patent attorney Gene Quinn of IP Watchdog joins the podcast to discuss the complexity and swirl of issues and potential resolutions in both the US and globally, smart modularization approaches for marketers, and ultimately adding value for consumers. Thanks for listening! Follow us on Twitter and Instagram or find us on Facebook.

The Healthier Tech Podcast
WOPR Act Banned AI Therapists: Why Illinois Says “Humans Only” for Mental Health?

The Healthier Tech Podcast

Play Episode Listen Later Aug 15, 2025 6:49


Illinois just sent a shockwave through the digital wellness and AI ethics space with the WOPR Act—a new law that stops AI from replacing human therapists or making independent clinical decisions. At a time when therapy chatbots are becoming more advanced, more available, and more human-like than ever, this move forces a question we can't ignore: should your mental health care ever be left entirely to an algorithm? In this episode of The Healthier Tech Podcast, we dig into the heart of the WOPR Act and explore why Illinois is saying “humans only” when it comes to therapy. It's a conversation about patient safety, ethics, and the very definition of care in the age of artificial intelligence.
Here's what we cover:
How AI therapy tools work, and why they've grown so popular so fast.
The risks AI can't see—like micro-expressions, body language, and the unspoken clues only humans can catch.
Real-world stories of AI therapy helping people, and the scenarios where it could dangerously miss the mark.
Why “better than nothing” isn't always better when it comes to mental health care.
The difference between real empathy and simulated empathy, and how it shapes trust.
The ethical stakes: when corporate motives meet vulnerable minds.
Why Illinois decided to act now—and what it signals for the future of AI in healthcare.
Whether you're fascinated by technology, concerned about its limits, or passionate about mental health, this episode offers a thought-provoking look at the intersection of AI, ethics, and human connection. Subscribe to The Healthier Tech Podcast for more conversations on building a healthy, intentional relationship with technology—one choice at a time. This episode is brought to you by Shield Your Body—a global leader in EMF protection and digital wellness. Because real wellness means protecting your body, not just optimizing it. If you found this episode eye-opening, leave a review, share it with someone tech-curious, and don't forget to subscribe to Shield Your Body on YouTube for more insights on living healthier with technology.

Play Big Faster Podcast
#206: Custom AI vs ChatGPT: Local Systems That Keep Data Secure | Sam Sammane

Play Big Faster Podcast

Play Episode Listen Later Aug 14, 2025 43:41


AI ethics expert Sam Sammane challenges Silicon Valley's artificial intelligence hype in this controversial entrepreneurship interview. The Theo Sim founder and nanotechnology PhD reveals why current AI regulations only help wealthy tech giants while blocking innovation for small businesses. Sam exposes the truth about ChatGPT privacy risks, demonstrates how personalized AI systems running locally protect your data better than cloud-based solutions, and shares his revolutionary context engineering approach that transforms generic chatbots into custom AI employees. Sam's contrarian take on AI policy, trustworthy AI development, and why schools must teach cognitive ethics now will reshape how you think about augmenting human intelligence. The future of AI belongs to businesses that act today, not tomorrow.

The Cognitive Revolution
Dean W. Ball on America's AI Action Plan & 4 Months at the White House

The Cognitive Revolution

Play Episode Listen Later Aug 13, 2025 194:15


Today Dean W. Ball, former White House AI policy advisor, joins The Cognitive Revolution to discuss his role in crafting the Trump administration's AI Action Plan, his reasons for leaving government, and his perspectives on AI policy, US-China competition, and the future of AI regulation and adoption. Check out our sponsors: Fin, Labelbox, Oracle Cloud Infrastructure, Shopify. Shownotes below brought to you by Notion AI Meeting Notes - try one month for free at https://notion.com/lp/nathan
White House Experience & Government Role: Dean Ball served as senior policy advisor for AI and emerging technology at the White House Office of Science and Technology Policy (OSTP) for four months.
AI Regulation & Government Approach: Information asymmetry exists between government and AI labs: "Having worked at the White House, I don't know tremendously more about what goes on inside the Frontier Labs than you do."
Private Sector Innovation: Dean emphasizes the importance of private sector-led initiatives in AI safety and standards.
Future AI Developments: Dean believes agentic commerce is "right around the corner" but sees little discussion about it from regulatory or conceptual perspectives.
AI Action Plan Development: It emphasized concrete actions for AI implementation across government agencies rather than just theoretical frameworks.
Personal Updates: Dean is reviving his weekly Hyperdimensional Substack, joining the Foundation for American Innovation as a senior fellow, and plans to share his long-held insights on recent AI developments.
Sponsors:
Fin: Fin is the #1 AI Agent for customer service, trusted by over 5000 customer service leaders and top AI companies including Anthropic and Synthesia. Fin is the highest performing agent on the market and resolves even the most complex customer queries. Try Fin today with our 90-day money-back guarantee - if you're not 100% satisfied, get up to $1 million back. Learn more at https://fin.ai/cognitive
Labelbox: Labelbox pairs automation, expert judgment, and reinforcement learning to deliver high-quality training data for cutting-edge AI. Put its data factory to work for you, visit https://labelbox.com
Oracle Cloud Infrastructure: Oracle Cloud Infrastructure (OCI) is the next-generation cloud that delivers better performance, faster speeds, and significantly lower costs, including up to 50% less for compute, 70% for storage, and 80% for networking. Run any workload, from infrastructure to AI, in a high-availability environment and try OCI for free with zero commitment at https://oracle.com/cognitive
Shopify: Shopify powers millions of businesses worldwide, handling 10% of U.S. e-commerce. With hundreds of templates, AI tools for product descriptions, and seamless marketing campaign creation, it's like having a design studio and marketing team in one. Start your $1/month trial today at https://shopify.com/cognitive

Microsoft Business Applications Podcast
AI Regulation: Innovation's Hidden Accelerator

Microsoft Business Applications Podcast

Play Episode Listen Later Aug 12, 2025 39:01 Transcription Available


Intangiblia™
Face Off: Privacy, Intellectual Property, and the Price of Your Faceprint

Intangiblia™

Play Episode Listen Later Aug 11, 2025 32:57 Transcription Available


Your face unlocks your phone, animates your emoji, and verifies your identity, but who actually owns the digital rights to your unique features? In this deep dive into biometric data law, we explore the high-stakes legal battles reshaping how technology interacts with our most personal physical characteristics.

When Facebook paid $650 million to settle a class action lawsuit over facial recognition, it signaled a seismic shift in how companies must approach biometric data collection. We break down the landmark cases—from White Castle's potential $17 billion fingerprint scanning liability to Clearview AI's global legal troubles for scraping billions of public photos without consent. These aren't just American concerns; we journey from China, where a professor successfully sued a wildlife park over mandatory facial scans, to India's Supreme Court ruling on the world's largest biometric ID system.

Beyond privacy concerns, fierce patent wars are erupting over who owns the methods for collecting and using biometric data. Companies battle over facial authentication patents worth billions while "liveness detection" technology becomes crucial in a world of deepfakes and digital impersonation. The stakes couldn't be higher as these technologies become embedded in everything from banking to border control.

We untangle the global patchwork of regulations emerging to govern facial recognition, from Illinois' pioneering BIPA law to Europe's strict GDPR protections and China's surprising new limits on private biometric collection. Throughout it all, a clear trend emerges: your face isn't just data, it's your identity, and increasingly, the law recognizes that distinction.

Whether you're concerned about your rights, curious about the future of facial recognition, or simply want to understand why your social media filters might be collecting more than just likes, this episode offers essential insights into the legal frameworks shaping our biometric future. Listen now to discover how to protect your digital identity in a world that increasingly wants to scan it.

Tech Policy Podcast
415: The State of AI Regulation

Tech Policy Podcast

Play Episode Listen Later Aug 4, 2025 53:05


Matt Perault (a16z) joins Corbin Barthold (TechFreedom) for a wide-ranging discussion of AI bills, AI laws, and AI vibes. Part of the WLF-TechFreedom Tech in the Courts webinar series.
Topics include:
Why did the AI moratorium die?
Activity in the states
Regulate outcomes, not models?
Next steps in Congress
“Transparency”: so hot right now
The AI panic
Lawsuits
Links:
Recorded Tech in the Courts Webinar—The State of AI Regulation

Interviews: Tech and Business
Top AI Ethicists Reveal RISKS of AI Failure | CXOTalk #888

Interviews: Tech and Business

Play Episode Listen Later Aug 1, 2025 53:47


When AI systems hallucinate, run amok, or fail catastrophically, the consequences for enterprises can be devastating. In this must-watch CXOTalk episode, discover how to anticipate and prevent AI failures before they escalate into crises.
Join host Michael Krigsman as he explores critical AI risk management strategies with two leading experts:
• Lord Tim Clement-Jones - Member of the House of Lords, Co-Chair of UK Parliament's AI Group
• Dr. David A. Bray - Chair of the Accelerator at Stimson Center, Former FCC CIO
What you'll learn:
✓ Why AI behaves unpredictably despite explicit programming
✓ How to implement "pattern of life" monitoring for AI systems
✓ The hidden dangers of anthropomorphizing AI
✓ Essential board-level governance structures for AI deployment
✓ Real-world AI failure examples and their business impact
✓ Strategies for building appropriate skepticism while leveraging AI benefits
Key ideas include treating AI as "alien interactions" rather than human-like intelligence, the convergence of AI risk with cybersecurity, and why smaller companies have unique opportunities in the AI landscape. This discussion is essential viewing for CEOs, board members, CIOs, CISOs, and anyone responsible for AI strategy and risk management in their organization.
Subscribe to CXOTalk for more expert insights on technology leadership and AI:

This Week in Tech (Audio)
TWiT 1042: Well Played Astronomer - The Stats Behind Google's AI Mode Search

This Week in Tech (Audio)

Play Episode Listen Later Jul 28, 2025 140:03


OpenAI prepares to launch GPT-5 in August
Trump's AI Action Plan Is a Crusade Against 'Bias'—and Regulation
UN tech chief pleads for global AI regulatory cooperation
Trump, who promised to save TikTok, threatens to shut down TikTok
Google AI Mode has 100M users, 2.5 Pro & Deep Search rolls out
FDA's New Drug Approval AI Is Generating Fake Studies: Report
Tesla is set to face off with the California DMV over claims it exaggerated Autopilot's and FSD's capabilities and misled consumers, in a five-day Oakland trial
Google, Microsoft say Chinese hackers are exploiting SharePoint zero-day
A look at Tea, a woman-only safety app with 4M users that lets users anonymously assign red or green flags to local men, as it goes viral with 900K new signups
People in the UK now have to take an age verification selfie to watch porn online
Intel is laying off tens of thousands and cancelling factories
AMD CEO Sees Chips From TSMC's US Plant Costing 5%-20% More
Spotify Publishes AI-Generated Songs From Dead Artists Without Permission
DJI couldn't confirm or deny it disguised this drone to evade a US ban
FCC approves Skydance-Paramount merger
Gwyneth Paltrow is the new face of a kiss-cam tech scandal
Julian LeFay, 'Father of The Elder Scrolls,' Has Died Aged 59
Tom Lehrer, Musical Satirist With a Dark Streak, Dies at 97
Host: Leo Laporte
Guests: Molly White, Janko Roettgers, and Jacob Ward
Download or subscribe to This Week in Tech at https://twit.tv/shows/this-week-in-tech
Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit
Sponsors: smarty.com/twit zscaler.com/security expressvpn.com/twit uscloud.com spaceship.com/twit

This Week in Tech (Video HI)
TWiT 1042: Well Played Astronomer - The Stats Behind Google's AI Mode Search

This Week in Tech (Video HI)

Play Episode Listen Later Jul 28, 2025 140:03


OpenAI prepares to launch GPT-5 in August
Trump's AI Action Plan Is a Crusade Against 'Bias'—and Regulation
UN tech chief pleads for global AI regulatory cooperation
Trump, who promised to save TikTok, threatens to shut down TikTok
Google AI Mode has 100M users, 2.5 Pro & Deep Search rolls out
FDA's New Drug Approval AI Is Generating Fake Studies: Report
Tesla is set to face off with the California DMV over claims it exaggerated Autopilot's and FSD's capabilities and misled consumers, in a five-day Oakland trial
Google, Microsoft say Chinese hackers are exploiting SharePoint zero-day
A look at Tea, a woman-only safety app with 4M users that lets users anonymously assign red or green flags to local men, as it goes viral with 900K new signups
People in the UK now have to take an age verification selfie to watch porn online
Intel is laying off tens of thousands and cancelling factories
AMD CEO Sees Chips From TSMC's US Plant Costing 5%-20% More
Spotify Publishes AI-Generated Songs From Dead Artists Without Permission
DJI couldn't confirm or deny it disguised this drone to evade a US ban
FCC approves Skydance-Paramount merger
Gwyneth Paltrow is the new face of a kiss-cam tech scandal
Julian LeFay, 'Father of The Elder Scrolls,' Has Died Aged 59
Tom Lehrer, Musical Satirist With a Dark Streak, Dies at 97
Host: Leo Laporte
Guests: Molly White, Janko Roettgers, and Jacob Ward
Download or subscribe to This Week in Tech at https://twit.tv/shows/this-week-in-tech
Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit
Sponsors: smarty.com/twit zscaler.com/security expressvpn.com/twit uscloud.com spaceship.com/twit

All TWiT.tv Shows (MP3)
This Week in Tech 1042: Well Played Astronomer

All TWiT.tv Shows (MP3)

Play Episode Listen Later Jul 28, 2025 140:03


OpenAI prepares to launch GPT-5 in August
Trump's AI Action Plan Is a Crusade Against 'Bias'—and Regulation
UN tech chief pleads for global AI regulatory cooperation
Trump, who promised to save TikTok, threatens to shut down TikTok
Google AI Mode has 100M users, 2.5 Pro & Deep Search rolls out
FDA's New Drug Approval AI Is Generating Fake Studies: Report
Tesla is set to face off with the California DMV over claims it exaggerated Autopilot's and FSD's capabilities and misled consumers, in a five-day Oakland trial
Google, Microsoft say Chinese hackers are exploiting SharePoint zero-day
A look at Tea, a woman-only safety app with 4M users that lets users anonymously assign red or green flags to local men, as it goes viral with 900K new signups
People in the UK now have to take an age verification selfie to watch porn online
Intel is laying off tens of thousands and cancelling factories
AMD CEO Sees Chips From TSMC's US Plant Costing 5%-20% More
Spotify Publishes AI-Generated Songs From Dead Artists Without Permission
DJI couldn't confirm or deny it disguised this drone to evade a US ban
FCC approves Skydance-Paramount merger
Gwyneth Paltrow is the new face of a kiss-cam tech scandal
Julian LeFay, 'Father of The Elder Scrolls,' Has Died Aged 59
Tom Lehrer, Musical Satirist With a Dark Streak, Dies at 97
Host: Leo Laporte
Guests: Molly White, Janko Roettgers, and Jacob Ward
Download or subscribe to This Week in Tech at https://twit.tv/shows/this-week-in-tech
Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit
Sponsors: smarty.com/twit zscaler.com/security expressvpn.com/twit uscloud.com spaceship.com/twit

Radio Leo (Audio)
This Week in Tech 1042: Well Played Astronomer

Radio Leo (Audio)

Play Episode Listen Later Jul 28, 2025 140:03


OpenAI prepares to launch GPT-5 in August
Trump's AI Action Plan Is a Crusade Against 'Bias'—and Regulation
UN tech chief pleads for global AI regulatory cooperation
Trump, who promised to save TikTok, threatens to shut down TikTok
Google AI Mode has 100M users, 2.5 Pro & Deep Search rolls out
FDA's New Drug Approval AI Is Generating Fake Studies: Report
Tesla is set to face off with the California DMV over claims it exaggerated Autopilot's and FSD's capabilities and misled consumers, in a five-day Oakland trial
Google, Microsoft say Chinese hackers are exploiting SharePoint zero-day
A look at Tea, a woman-only safety app with 4M users that lets users anonymously assign red or green flags to local men, as it goes viral with 900K new signups
People in the UK now have to take an age verification selfie to watch porn online
Intel is laying off tens of thousands and cancelling factories
AMD CEO Sees Chips From TSMC's US Plant Costing 5%-20% More
Spotify Publishes AI-Generated Songs From Dead Artists Without Permission
DJI couldn't confirm or deny it disguised this drone to evade a US ban
FCC approves Skydance-Paramount merger
Gwyneth Paltrow is the new face of a kiss-cam tech scandal
Julian LeFay, 'Father of The Elder Scrolls,' Has Died Aged 59
Tom Lehrer, Musical Satirist With a Dark Streak, Dies at 97
Host: Leo Laporte
Guests: Molly White, Janko Roettgers, and Jacob Ward
Download or subscribe to This Week in Tech at https://twit.tv/shows/this-week-in-tech
Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit
Sponsors: smarty.com/twit zscaler.com/security expressvpn.com/twit uscloud.com spaceship.com/twit

Marketplace Tech
Bytes: Week in Review - AI regulation ban dies, renewable energy credits hit and Amazon's millionth robot

Marketplace Tech

Play Episode Listen Later Jul 4, 2025 10:16


On this week's “Marketplace Tech Bytes: Week in Review,” Marketplace's Nova Safo and Paresh Dave, senior writer at WIRED, discuss Amazon releasing its 1 millionth robot at one of its warehouses. Plus, lawmakers contended with provisions dealing with artificial intelligence and renewable energy in that big tax and spending bill, recently passed by Congress, that consumed Washington this week.

Marketplace All-in-One
Bytes: Week in Review - AI regulation ban dies, renewable energy credits hit and Amazon's millionth robot

Marketplace All-in-One

Play Episode Listen Later Jul 4, 2025 10:16


On this week's “Marketplace Tech Bytes: Week in Review,” Marketplace's Nova Safo and Paresh Dave, senior writer at WIRED, discuss Amazon releasing its 1 millionth robot at one of its warehouses. Plus, lawmakers contended with provisions dealing with artificial intelligence and renewable energy in that big tax and spending bill, recently passed by Congress, that consumed Washington this week.

WSJ Tech News Briefing
TNB Tech Minute: AI Regulation Left to States After Senate Megabill Vote

WSJ Tech News Briefing

Play Episode Listen Later Jul 1, 2025 2:23


Plus: The Elon Musk-Donald Trump feud reignites over Republicans' tax-and-spending bill. And robots are about to outnumber humans in Amazon warehouses. Katie Deighton hosts. Learn more about your ad choices. Visit megaphone.fm/adchoices

Kendall And Casey Podcast
Trump's 'Big Beautiful Bill' could ban states from AI regulation

Kendall And Casey Podcast

Play Episode Listen Later Jun 30, 2025 3:34


See omnystudio.com/listener for privacy information.