Podcasts about AI innovation

  • 461 podcasts
  • 666 episodes
  • 34m avg. duration
  • 1 new episode daily
  • Latest: Oct 1, 2025

POPULARITY (2017–2024)


Best podcasts about AI innovation

Latest podcast episodes about AI innovation

The Lawfare Podcast
Scaling Laws: The Ivory Tower and AI (Live from IHS's Technology, Liberalism, and Abundance Conference)

Oct 1, 2025 · 43:22


Neil Chilson, Head of AI Policy at the Abundance Institute, and Gus Hurwitz, Senior Fellow and CTIC Academic Director at Penn Carey Law School and Director of Law & Economics Programs at the International Center for Law & Economics, join Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to explore how academics can overcome the silos and incentives that plague the Ivory Tower and positively contribute to the highly complex, evolving, and interdisciplinary work associated with AI governance. The trio recorded this podcast live at the Institute for Humane Studies's Technology, Liberalism, and Abundance Conference in Arlington, Virginia.

Read about Kevin's thinking on the topic here: https://www.civitasinstitute.org/research/draining-the-ivory-tower
Learn about the Conference: https://www.theihs.org/blog/curated-event/technology-abundance-and-liberalism/

Find Scaling Laws on the Lawfare website, and subscribe to never miss an episode. To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute. Support this show: http://supporter.acast.com/lawfare.

Hosted on Acast. See acast.com/privacy for more information.

A World of Difference
Leading the AI Revolution: Essential Strategies for Human-Centered, Ethical Leadership with Lori Adams-Brown

Oct 1, 2025 · 25:47


Does this sound familiar? You've been told that “just learning every new AI tool” is the secret to staying relevant and leading your team into the future. But after hours tinkering with the latest apps and features, you're still feeling overwhelmed and wondering if you're actually making any real progress—or just adding more noise to your workflow. If you're tired of chasing shiny objects and still feeling uncertain about how to lead responsibly with AI, you're not alone. Let's talk about what actually works for ethical, human-centered leadership in this AI-driven world.

In this episode, you will be able to:
Explore how navigating AI and ethical considerations can build trust and long-term success in your business or industry.
Discover AI tools that can boost your productivity and sharpen your leadership skills in today's fast-paced world.
Embrace human-centered leadership with AI to create more meaningful connections and drive better team results.
Understand AI's reshaping of the future of work and position yourself ahead of the curve.
Leverage AI's role in ESG and sustainability to lead your organization toward a greener, more responsible future.

The key moments in this episode are:
00:01:00 - Navigating AI's Role in Leadership and the Future of Work
00:03:25 - How MBA Graduates Can Amplify Their Value with AI
00:07:54 - Future-Proofing Mid-Career Leadership in an AI-Driven World
00:10:06 - Practical AI Tools and Human-Centered Leadership for Enhanced Productivity
00:13:37 - Embracing Human-Centered Leadership in the AI Revolution
00:14:12 - Integrating ESG Perspectives with AI Innovation
00:16:32 - Scaling Global Knowledge with AI and Ensuring Equitable Access
00:18:05 - Leading Ethically in a Future of Human-AI Collaboration
00:20:23 - The Responsibility of Leadership in AI's Impact and Future

Share this episode with an MBA student, someone in their mid-career, or someone considering what to study in college. Give a five-star rating and write a review for the podcast to help increase its reach. Sign up for BetterHelp and get 10% off your first month at www.betterhelp.com/difference. Check out Stanford HAI (Human-Centered Artificial Intelligence) on LinkedIn to learn more about their work. Visit Botipedia to explore its multilingual knowledge and education resources. Learn more about your ad choices. Visit megaphone.fm/adchoices

AWS for Software Companies Podcast
Ep152: Balancing AI Innovation with Financial Compliance - Lessons Learned from Lucanet's VP of Engineering

Oct 1, 2025 · 30:56


Vice President of Engineering James Musson reveals how Lucanet integrated multiple acquired solutions into a unified platform, achieving 3-month integration timelines while serving 6,000+ customers.

Topics Include:
Lucanet evolved from financial consolidation tool to comprehensive CFO solution platform
Platform covers consolidation, planning, ESG reporting, tax compliance, and cash management
Three key differentiators: easy to use, fast time-to-value, innovative AI features
AI-powered XBRL tagging reduces days of manual work to minutes with 90% accuracy
Complex challenge: integrating multiple acquired tech stacks with cloud-native platform development
Built micro front-end architecture and platform services for seamless user experience
Custom control plane automates customer onboarding and manages rolling upgrades safely
Latest acquisition integrated into platform within three months, unprecedented speed
Strong company culture focuses on innovation, hackathons, and continuous learning
AI bootcamps and tech lunch sessions keep 6,000+ customer engineering teams engaged
Balances AI innovation with regulatory compliance using deterministic core processes
Heavy AWS adoption with serverless technologies handles peaky financial reporting workloads

Participants: James Musson – Vice President, Engineering, Lucanet

Further Links: Lucanet: Website – LinkedIn

See how Amazon Web Services gives you the freedom to migrate, innovate, and scale your software company at https://aws.amazon.com/isv/

Arbiters of Truth
The Ivory Tower and AI (Live from IHS's Technology, Liberalism, and Abundance Conference)

Sep 30, 2025 · 42:33


Neil Chilson, Head of AI Policy at the Abundance Institute, and Gus Hurwitz, Senior Fellow and CTIC Academic Director at Penn Carey Law School and Director of Law & Economics Programs at the International Center for Law & Economics, join Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to explore how academics can overcome the silos and incentives that plague the Ivory Tower and positively contribute to the highly complex, evolving, and interdisciplinary work associated with AI governance. The trio recorded this podcast live at the Institute for Humane Studies's Technology, Liberalism, and Abundance Conference in Arlington, Virginia.

Read about Kevin's thinking on the topic here: https://www.civitasinstitute.org/research/draining-the-ivory-tower
Learn about the Conference: https://www.theihs.org/blog/curated-event/technology-abundance-and-liberalism/

Hosted on Acast. See acast.com/privacy for more information.

AI for Non-Profits
AI Innovation in Sleep Tech with Eight Sleep Revolution

Sep 30, 2025 · 13:34


Can artificial intelligence unlock better sleep? We break down Eight Sleep's latest advances in AI-powered health and wellness systems.

Get the top 40+ AI Models for $20 at AI Box: https://aibox.ai
AI Chat YouTube Channel: https://www.youtube.com/@JaedenSchafer
Join my AI Hustle Community: https://www.skool.com/aihustle

CLOC Talk
Reimagining Legal: AI, Innovation, & the New Playbook

Sep 25, 2025 · 29:31


Legal services are evolving fast, and AI is playing a major role in reshaping how teams work. In this episode, Mark Ross, Kami Paulsen, and Rich Levine join host Jeremiah Kincannon to break down the shifting landscape, from the rise of multidisciplinary expertise to the practical impact of AI on workflows, compliance, and hiring. The discussion covers what legal teams need to do to stay competitive, including adopting new tools, rethinking traditional practices, and preparing for the future. Tune in for a deep dive into the forces driving change and how legal professionals can adapt.

Thank you to our 2025 CLOC Global Institute CLOC Talk sponsor, DeepL.

GREY Journal Daily News Podcast
Can Boards Balance AI Innovation with Ethical Responsibility?

Sep 24, 2025 · 3:25


Boards are integrating artificial intelligence and technology to remain competitive, while addressing ethical implications and maintaining trust. AI tools provide benefits such as predictive analytics and faster decision-making, but can introduce risks like bias in recruitment. Boards are implementing recurring AI ethics discussions, oversight mechanisms, and director education to manage these risks. Diverse board composition helps identify ethical blind spots. Technology governance includes transparency, data privacy policies, and ongoing monitoring. Boards are adopting continuous learning and structured oversight to ensure responsible AI use and compliance.

Learn more on this news by visiting us at: https://greyjournal.net/news/

Hosted on Acast. See acast.com/privacy for more information.

The Big Unlock
AI Innovation Across Healthcare and Pharma

Sep 23, 2025 · 27:36


In this episode, Thomas Fuchs, Chief AI Officer at Eli Lilly, shares his journey from early machine learning research to spearheading transformative AI initiatives across the pharmaceutical value chain. Lilly is running over a thousand AI projects—from drug discovery and development to manufacturing and commercial operations—leveraging innovations like language models for chatbots, computer vision for quality control, and biomarker development. Thomas emphasizes trust, transparency, and collaboration as critical to AI adoption, supported by AI certification programs for all Lilly technology employees, as well as AI education resources being made available for everyone at Lilly. He also highlights the company's unique “lab-in-the-loop” setups that generate synthetic data to accelerate innovation. Thomas shares real-world successes, including AI-assisted drug discovery and intelligent chatbots, and how AI is building confidence across the organization. With robust data infrastructure and scalable strategies for synthesizing large datasets, Lilly is driving rapid innovation. Thomas predicts faster drug design, broader access to medicines, and continuous education as defining trends for AI in healthcare. Take a listen.

The Beijing Hour
China-ASEAN Expo opens with focus on AI, innovation, enhanced regional cooperation

Sep 17, 2025 · 59:45


The 22nd China-ASEAN Expo in Nanning gathers 3,200 enterprises from 60 countries, spotlighting New Quality Productive Forces through new themed exhibitions and trade activities (01:02). Israel has expanded a ground offensive in Gaza City, while the international community, including China, the EU, and several countries, condemns the attacks and urges ceasefire (20:17). China has entered the top 10 of the 2025 Global Innovation Index, ranking 10th for the first time (39:57).

Inside Intercom Podcast
Lessons from Vanta: Scaling support through trust, education, and AI innovation

Sep 16, 2025 · 30:24


In this episode of The Ticket, Vanta's Director of Customer Support, Margarita Wilshire, joins Intercom's Senior Director of Human Support, Bobby Stapleton, to share how her team blends human expertise, customer education, and AI innovation to deliver fast, trustworthy, and scalable support – while keeping customer experience at the heart of everything they do.

Watch this episode on YouTube: https://youtu.be/72V2C-QR6pw?si=BnGXQQNR-mYLAriO
Follow the people: https://www.linkedin.com/in/bobbystapleton/ and https://www.linkedin.com/in/margaritawilshire/
Newsletter: Sign up for The Ticket on LinkedIn, a newsletter bursting with insights and advice for support leaders who are navigating the shift to AI-first CS: https://www.linkedin.com/newsletters/the-ticket-7158151857616355328/
Say hi on LinkedIn: https://www.linkedin.com/company/intercom/ or X: https://x.com/intercom
https://www.fin.ai

See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

Predictable B2B Success
How This CEO Built AI Innovation Framework Delivering 23,000% ROI

Sep 16, 2025 · 57:26


What happens when a digital innovator decides to reimagine how companies harness the power of AI and innovation? In this episode of Predictable B2B Success, host Vinay Koshy speaks with Matt Leta, CEO of FutureWorks, bestselling author of The Leap Guide, and a trailblazer in building digital products for giants like Apple and Google. Matt shares the gripping journey behind FutureWorks, revealing how a dramatic startup exit, a global adventure, and experiments with remote teams reshaped his vision for the future of work. Discover why Matt believes innovation has lost its meaning—and how he's redefining it to drive tangible results. You'll hear fascinating case studies, including how a solar tech company used AI-powered systems to save thousands of work hours, and why the real key to successful digital transformation starts with empowering every employee—not just the R&D team. Curious about how the Leap framework builds a culture of innovation in just one hour a week? Or why Matt thinks real competitive edge comes from integrating trust and diversity into your AI strategy? Tune in for practical insights, stories from the field, and provocative ideas to accelerate revenue growth through next-gen innovation. Don't miss this episode if you want your business to thrive in the age of AI.

Some areas we explore in this episode include:
Founding and Evolution of FutureWorks – Matt shares how and why he started FutureWorks and navigated company pivots.
Understanding and Defining Innovation – How innovation is defined, distinguished from other terms, and why clarity is important.
Integrating AI into Business Operations – Practical ways companies can embrace AI and next-gen digital transformation.
The LEAP Framework – Detailed explanation of the LEAP methodology and its components.
Creating Innovation Culture – Why innovation should be organization-wide rather than just an R&D initiative.
Client Success Stories – Examples like Next Tracker showing the business impact of digital innovation.
Leadership and Execution – The importance of innovation champions and starting small but persistently.
Measuring ROI and Results – How to track success, and Matt's emphasis on substantial, not incremental, ROI.
Learning from Failure – Embracing and learning from setbacks as part of the innovation journey.
Global Experience and Community Building – How Matt's worldwide experiences and his Future Horizon community shape his innovation insights.
And much, much more...

Every Day’s a Saturday - USMC Veteran
Interview 136 - Chas Sampson on Veteran Empowerment, AI Innovation & NFL Dreams

Sep 15, 2025 · 74:41


Discover how Army veteran Chas Sampson is transforming veteran transitions through the Seven Principles app—an AI-powered tool for résumés and benefits. From helping 25,000+ vets to breaking into real estate, private equity, and the NFL, this episode of Every Day's a Saturday is a masterclass in purpose-driven ambition.

AI Breakdown
AI Innovation Meets Legal Boundaries

Sep 15, 2025 · 10:00


Anthropic's settlement shows that innovation cannot ignore legal boundaries. We explore the tension between pushing the limits of technology and respecting intellectual property rights. This balance may define the next decade of AI.

Try AI Box: https://aibox.ai
AI Chat YouTube Channel: https://www.youtube.com/@JaedenSchafer
Join my AI Hustle Community: https://www.skool.com/aihustle

Open Tech Talks : Technology worth Talking| Blogging |Lifestyle
AI Security and Agentic Risks Every Business Needs to Understand with Alexander Schlager

Sep 13, 2025 · 27:42


In this episode of Open Tech Talks, we delve into the critical topics of AI security, explainability, and the risks associated with agentic AI. As organizations adopt Generative AI and Large Language Models (LLMs), ensuring safety, trust, and responsible usage becomes essential. This conversation covers how runtime protection works as a proxy between users and AI models, why explainability is key to user trust, and how cybersecurity teams are becoming central to AI innovation. Chapters 00:00 Introduction to AI Security and eIceberg 02:45 The Evolution of AI Explainability 05:58 Runtime Protection and AI Safety 07:46 Adoption Patterns in AI Security 10:51 Agentic AI: Risks and Management 13:47 Building Effective Agentic AI Workflows 16:42 Governance and Compliance in AI 19:37 The Role of Cybersecurity in AI Innovation 22:36 Lessons Learned and Future Directions Episode # 166 Today's Guest: Alexander Schlager, Founder and CEO of AIceberg.ai He's founded a next-generation AI cybersecurity company that's revolutionizing how we approach digital defense. With a strong background in enterprise tech and a visionary outlook on the future of AI, Alexander is doing more than just developing tools — he's restoring trust in an era of automation. Website: AIceberg.ai Linkedin: Alexander Schlager  What Listeners Will Learn: Why real-time AI security and runtime protection are essential for safe deployments How explainable AI builds trust with users and regulators The unique risks of agentic AI and how to manage them responsibly Why AI safety and governance are becoming strategic priorities for companies How education, awareness, and upskilling help close the AI skills gap Why natural language processing (NLP) is becoming the default interface for enterprise technology Keywords: AI security, generative AI, agentic AI, explainability, runtime protection, cybersecurity, compliance, AI governance, machine learning   Resources: AIceberg.ai

The Lawfare Podcast
Scaling Laws: The State of AI Safety with Steven Adler

Sep 12, 2025 · 49:14


Steven Adler, former OpenAI safety researcher, author of Clear-Eyed AI on Substack, and independent AGI-readiness researcher, joins Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and Senior Fellow at Lawfare, to assess the current state of AI testing and evaluations. The two walk through Steven's views on industry efforts to improve model testing and what he thinks regulators ought to know and do when it comes to preventing AI harms.

Thanks to Leo Wu for research assistance!

Find Scaling Laws on the Lawfare website, and subscribe to never miss an episode. To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute. Support this show: http://supporter.acast.com/lawfare.

Hosted on Acast. See acast.com/privacy for more information.

RTP's Free Lunch Podcast
Law For Little Tech: Part 2 - Examining the Little Tech Agenda's Approach to Regulations

Sep 12, 2025 · 37:09 · Transcript available


Over the past 25 years, the rapid growth of Big Tech has raised questions about competition, innovation, and the ability of smaller startups to thrive. At the same time, regulatory approaches can create uncertainty that affects entrepreneurs in different ways. With Congress hesitant to act decisively, the debate continues: how can policymakers strike a balance that encourages innovation, ensures fair competition, and protects consumers? And when it comes to regulation should the path forward involve more, or less? Join the Federalist Society's Regulatory Transparency Project for the 2nd episode of Law for Little Tech series, featuring special guest Samuel Levine, Senior Fellow at the Berkeley Center for Consumer Law & Economic Justice and led by host Professor Kevin Frazier, AI Innovation & Law Fellow at the University of Texas School of Law.

Arbiters of Truth
AI and the Future of Work: Joshua Gans on Navigating Job Displacement

Sep 11, 2025 · 57:56


Joshua Gans, a professor at the University of Toronto and co-author of "Power and Prediction: The Disruptive Economics of Artificial Intelligence," joins Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to evaluate ongoing concerns about AI-induced job displacement, the likely consequences of various regulatory proposals on AI innovation, and how AI tools are already changing higher education.

Select works by Gans include:
A Quest for AI Knowledge (https://www.nber.org/papers/w33566)
Regulating the Direction of Innovation (https://www.nber.org/papers/w32741)
How Learning About Harms Impacts the Optimal Rate of Artificial Intelligence Adoption (https://www.nber.org/papers/w32105)

Hosted on Acast. See acast.com/privacy for more information.

UMBC Mic'd Up
From UMBC to AI Innovation - Harish's Data Science Success Story

Sep 11, 2025 · 27:11 · Transcript available


In this inspiring episode of UMBC Mic'd Up Podcast, host Dennise Cardona, M.A. sits down with Harish Reddy Manyam, M.P.S. '24, a graduate of UMBC's Data Science program. Harish shares his remarkable path from arriving in the U.S. with nothing but a dream to becoming an AI governance advisor and healthcare data scientist. He reflects on: • Early struggles as an international student • Key milestones: internships, research, and graduate assistantship • Publishing research with UMBC faculty • Building a no-code AI chatbot with real-world impact • Winning entrepreneurial competitions and launching a startup through BW Tech at UMBC • How UMBC's supportive community and hands-on learning shaped his confidence and career Harish's story is one of resilience, community, and the power of preparation meeting opportunity. 

The CMO Podcast
Ariel Kelman (Salesforce) | Leading the Charge in AI Innovation

Sep 10, 2025 · 51:54


Fall is here, which means back to school, football season, crisp apples, and the world's biggest blend of tech conference and music festival: Dreamforce. Now in its 22nd year, Dreamforce 2025 returns to San Francisco on October 16–18, featuring headline speakers like the CEOs of Google and Starbucks, plus musical guests Metallica and Benson Boone.

In this week's episode, Jim welcomes Ariel Kelman, President and Chief Marketing Officer of Salesforce, to talk about the power of Dreamforce, what it's like working under visionary founders like Marc Benioff and Jeff Bezos, and why rethinking organizational design with agentic AI at the core is critical for the future. With Salesforce leading the charge in cloud-based CRM and now AI innovation, Ariel shares his unique journey—from his early days at Salesforce, to Amazon Web Services, to Oracle, and back again—offering lessons in marketing leadership at scale.

This week's episode is brought to you by Deloitte.

See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

Arbiters of Truth
The State of AI Safety with Steven Adler

Sep 9, 2025 · 47:23


Steven Adler, former OpenAI safety researcher, author of Clear-Eyed AI on Substack, and independent AGI-readiness researcher, joins Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law, to assess the current state of AI testing and evaluations. The two walk through Steven's views on industry efforts to improve model testing and what he thinks regulators ought to know and do when it comes to preventing AI harms. You can read Steven's Substack here: https://stevenadler.substack.com/ Thanks to Leo Wu for research assistance! Hosted on Acast. See acast.com/privacy for more information.

The Lawfare Podcast
Scaling Laws: Contrasting and Conflicting Efforts to Regulate Big Tech: EU v. U.S.

Sep 5, 2025 · 47:04


Anu Bradford, Professor at Columbia Law School, and Kate Klonick, Senior Editor at Lawfare and Associate Professor at St. John's University School of Law, join Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to assess the ongoing, contrasting, and, at times, conflicting regulatory approaches to Big Tech being pursued by the EU and U.S. The trio start with an assessment of the EU's use of the Brussels Effect, coined by Anu, to shape AI development. Next, they explore the U.S.'s increasingly interventionist industrial policy with respect to key sectors, especially tech.

Read more:
Anu's op-ed in The New York Times
"The Impact of Regulation on Innovation," by Philippe Aghion, Antonin Bergeaud, and John Van Reenen
Draghi Report on the Future of European Competitiveness

Find Scaling Laws on the Lawfare website, and subscribe to never miss an episode. To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute. Support this show: http://supporter.acast.com/lawfare.

Hosted on Acast. See acast.com/privacy for more information.

Inside The Vault with Ash Cash
ITV #180: AI Will Make More Millionaires Than The Internet (ft. Queen of AI Alicia Lyttle)

Sep 5, 2025 · 68:25


Arbiters of Truth
Contrasting and Conflicting Efforts to Regulate Big Tech: EU v. US

Sep 2, 2025 · 46:15


Anu Bradford, Professor at Columbia Law School, and Kate Klonick, Senior Editor at Lawfare and Associate Professor at St. John's University School of Law, join Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to assess the ongoing, contrasting, and, at times, conflicting regulatory approaches to Big Tech being pursued by the EU and US. The trio start with an assessment of the EU's use of the Brussels Effect, coined by Anu, to shape AI development. Next, they explore the US's increasingly interventionist industrial policy with respect to key sectors, especially tech.

Read more:
Anu's op-ed in The New York Times
The Impact of Regulation on Innovation by Philippe Aghion, Antonin Bergeaud & John Van Reenen
Draghi Report on the Future of European Competitiveness

Hosted on Acast. See acast.com/privacy for more information.

The Lawfare Podcast
Scaling Laws: Uncle Sam Buys In: Examining the Intel Deal 

Aug 29, 2025 · 48:22


Peter E. Harrell, Adjunct Senior Fellow at the Center for a New American Security, joins Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to examine the White House's announcement that it will take a 10% share of Intel. They dive into the policy rationale for the stake as well as its legality. Peter and Kevin also explore whether this is just the start of such deals given that President Trump recently declared that “there will be more transactions, if not in this industry then other industries.”

Find Scaling Laws on the Lawfare website, and subscribe to never miss an episode. To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute. Support this show: http://supporter.acast.com/lawfare.

Hosted on Acast. See acast.com/privacy for more information.

Wolfe Admin Podcast
AWP: AI, Innovation, and the Future of Patient Care with Dr. Masoud Nafey

Aug 29, 2025 · 49:16


Aaron sits down with Dr. Masoud Nafey, one of the keynote presenters at the Gateway Tour, presented by Vision Source, to explore how artificial intelligence is reshaping healthcare. They revisit key concepts from the Gateway Tour, including the difference between true AI and simple software, why the “win-win-win” approach matters for patients, providers, and businesses, and how optometrists can stay ahead by embracing technology without losing the irreplaceable human connection at the heart of care. If you'd like to dive deeper, join us at an upcoming Gateway Tour event. Gateway Tour - gatewaytour.ai   ------------------------ Go to MacuHealth.com and use the coupon code PODCAST2024 at checkout for special discounts Let's Connect! Follow and join the conversation! Instagram: @aaron_werner_vision

Arbiters of Truth
Uncle Sam Buys In: Examining the Intel Deal

Aug 28, 2025 · 47:34


Peter E. Harrell, Adjunct Senior Fellow at the Center for a New American Security, joins Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to examine the White House's announcement that it will take a 10% share of Intel. They dive into the policy rationale for the stake as well as its legality. Peter and Kevin also explore whether this is just the start of such deals given that President Trump recently declared that “there will be more transactions, if not in this industry then other industries.” Hosted on Acast. See acast.com/privacy for more information.

Arbiters of Truth
AI in the Classroom with MacKenzie Price, Alpha School co-founder, and Rebecca Winthrop, leader of the Brookings Global Task Force on AI in Education

Aug 26, 2025 · 80:43


MacKenzie Price, co-founder of Alpha School, and Rebecca Winthrop, a senior fellow and director of the Center for Universal Education at the Brookings Institution, join Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to review how AI is being integrated into the classroom at home and abroad. MacKenzie walks through the use of predictive AI in Alpha School classrooms. Rebecca provides a high-level summary of ongoing efforts around the globe to bring AI into the education pipeline. This conversation is particularly timely in the wake of the AI Action Plan, which built on the Trump administration's prior calls for greater use of AI from K to 12 and beyond.

Learn more about Alpha School here: https://www.nytimes.com/2025/07/27/us/politics/ai-alpha-school-austin-texas.html and here: https://www.astralcodexten.com/p/your-review-alpha-school
Learn about the Brookings Global Task Force on AI in Education here: https://www.brookings.edu/projects/brookings-global-task-force-on-ai-in-education/

Hosted on Acast. See acast.com/privacy for more information.

MONEY FM 89.3 - Weekend Mornings
Saturday Mornings: "AI, Innovation, and the Future of Law at TechLaw.Fest 2025"

Aug 24, 2025 · 20:02


In this Wide World segment, we speak with Yeong Zee Kin, Chief Executive of the Singapore Academy of Law, ahead of TechLaw.Fest 2025 - a legal technology conference returning for its 10th edition with over 2,000 legal experts from around the globe. This year’s theme, "Reimagining Legal in the Digital Age", looks at the current challenges facing the legal profession: rising demand, shrinking budgets, and the race to adopt AI and other tech.

Yeong is a leading voice in legal innovation and digital governance. In the Wide World segment on the “Saturday Mornings Show” with host Glenn van Zutphen and co-host Neil Humphreys, he shares insights into how TechLaw.Fest is shaping the future of law through thought leadership, cross-sector collaboration, and practical solutions. From AI regulation and online safety to digital infrastructure and crypto assets, the event dives deep into the challenges and opportunities of tech adoption in legal practice.

We explore whether AI is truly the answer to modern legal pressures—and what it means for access to justice, talent development, and global compliance, with event objectives ranging from boosting legal tech adoption to promoting provider visibility.

See omnystudio.com/listener for privacy information.

Electric Perspectives
EEI 2025 Highlights: Mission-Critical Networks, Meeting Demand Growth, and AI Innovation

Aug 22, 2025 · 18:53


This episode is part of our EEI 2025 Highlights series. In this episode, you will hear conversations about demand growth, mission-critical networks, and the intersection of energy and artificial intelligence.

The speakers are:
Chris Rogers, Partner, Communities, Energy, & Infrastructure, Guidehouse
Michelle Fay, Global Energy Providers Practice Leader, Guidehouse
Charlie Sanchez, President, Infrastructure Advisory, Black & Veatch
Hans Kobler, Founder and Managing Partner, Energy Impact Partners

You can also visit EEI's website to read EEI 2025 recap newsletters, see photos from our annual thought leadership forum, and watch videos from some of the keynotes.

Arbiters of Truth
The Open Questions Surrounding Open Source AI with Nathan Lambert and Keegan McBride

Aug 21, 2025 · 45:17


Keegan McBride, Senior Policy Advisor in Emerging Technology and Geopolitics at the Tony Blair Institute, and Nathan Lambert, a post-training lead at the Allen Institute for AI, join Alan Rozenshein, Associate Professor at Minnesota Law and Research Director at Lawfare, and Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to explore the current state of open source AI model development and associated policy questions.The pivot to open source has been swift following initial concerns that the security risks posed by such models outweighed their benefits. What this transition means for the US AI ecosystem and the global AI competition is a topic worthy of analysis by these two experts. Hosted on Acast. See acast.com/privacy for more information.

The Sure Shot Entrepreneur
Don't Ditch the Pitch Deck

Aug 19, 2025 · 27:48


Zach Noorani, Partner at Foundation Capital, shares hard-earned insights from more than a decade in venture capital. He talks about why the most important question investors wrestle with is differentiation, why authentic conversations with founders matter more than polished pitches, and how the best founders demonstrate velocity of learning. Zach also reflects on the evolving nature of venture—how the industry has become more professionalized, what excites him about the new wave of innovation, and why empathy and due process matter in society as much as in business.

In this episode, you'll learn:
[01:40] From Salem to Stanford to Silicon Valley: Zach's journey into venture capital
[03:38] Why venture is a "blank canvas" and not a zero-sum game
[05:46] Foundation Capital's early-stage focus and Zach's typical check size
[06:58] "There's no time too early": When founders should reach out—and what Zach looks for
[09:43] What really happens in the first meeting (and why it's about participation, not just Q&A)
[12:54] Should founders ditch the pitch deck? Zach's advice might surprise you
[16:40] The Push Cash story: How one founder showed true differentiation from day one
[19:34] Why Zach says no—even after initial interest—and what makes a $10B story possible
[22:42] How venture has professionalized—and why AI brings back the excitement

About Zach Noorani
Zach Noorani is a Partner at Foundation Capital, where he invests in early-stage fintech and enterprise startups. With over 14 years of experience in venture capital, Zach has a background that spans corporate VC at Capital One and a passion for supporting founders who demonstrate velocity of learning and unique insight. Based in Los Angeles, Zach focuses on inception-to-Series A companies and has led investments in several innovative fintech ventures.

About Foundation Capital
Foundation Capital is a Silicon Valley-based venture capital firm with over 30 years of experience backing early-stage companies. The firm partners with founders from inception through Series A, providing hands-on support to help build transformative businesses. Foundation Capital's portfolio includes category-defining companies across fintech, enterprise, and emerging technologies.

Subscribe to our podcast and stay tuned for our next episode.

Lead on Purpose with James Laughlin
One NZ CEO Jason Paris on Elon Musk, AI Innovation & the Power of Creativity in Business

Aug 17, 2025 · 65:14


Pre-order my new book Habits of High Performers here - www.thehabitbook.com

What if the secret to high performance isn't balance—but choice? In this episode, I sit down with Jason Paris, CEO of One New Zealand, to unpack leadership, creativity, and building partnerships at a world-class level. Jason's led one of NZ's largest organisations through rapid change—while refusing to miss a single moment that matters with his family. From turning Vodafone into One NZ to forging game-changing deals with Starlink and Salesforce, his approach blends bold moves with deep human values.

Here's what we cover:
Why work-life balance is a myth—and what to focus on instead
The role creativity plays in high-performing teams
The pitch that landed a world-first partnership with Elon Musk's Starlink
How to adopt AI with speed, experimentation, and purpose
Handling public criticism while staying grounded and human

If you're ready to rethink leadership, scale impact, and stay true to your values, this conversation is packed with lessons from the top—and the heart.

You can follow Jason on LinkedIn here - https://www.linkedin.com/in/jason-paris-3404565/?originalSubdomain=nz
You can grab your copy of See How They Fall by Rachel Paris here - https://www.paperplus.co.nz/shop/books/fiction/crime-thrillers/see-how-they-fall
If you're interested in having me deliver a keynote or workshop for your team contact Caroline at caroline@jjlaughlin.com

Website: https://www.jjlaughlin.com
YouTube: https://www.youtube.com/channel/UC6GETJbxpgulYcYc6QAKLHA
Facebook: https://www.facebook.com/JamesLaughlinOfficial
Instagram: https://www.instagram.com/jameslaughlinofficial/
Apple Podcast: https://podcasts.apple.com/nz/podcast/life-on-purpose-with-james-laughlin/id1547874035
Spotify: https://open.spotify.com/show/3WBElxcvhCHtJWBac3nOlF?si=hotcGzHVRACeAx4GvybVOQ
LinkedIn: https://www.linkedin.com/in/jameslaughlincoaching/

James Laughlin is a High Performance Leadership Coach, Former 7-Time World Champion, Host of the Lead On Purpose Podcast and an Executive Coach to high performers and leaders. James is based in Christchurch, New Zealand.

Send me a personal text message - If you're interested in booking me for a keynote or workshop, contact Caroline at caroline@jjlaughlin.com

Support the show

The Lawfare Podcast
Scaling Laws: What's Next in AI Policy (and for Dean Ball)?

Aug 15, 2025 · 59:14


In this episode of Scaling Laws, Dean Ball, Senior Fellow at the Foundation for American Innovation and former Senior Policy Advisor for Artificial Intelligence and Emerging Technology, White House Office of Science and Technology Policy, joins Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, and Alan Rozenshtein, Associate Professor at Minnesota Law and Research Director at Lawfare, to share an inside perspective of the Trump administration's AI agenda, with a specific focus on the AI Action Plan. The trio also explore Dean's thoughts on the recently released ChatGPT-5 and the ongoing geopolitical dynamics shaping America's domestic AI policy.

Find Scaling Laws on the Lawfare website, and subscribe to never miss an episode. To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute. Support this show: http://supporter.acast.com/lawfare.

Hosted on Acast. See acast.com/privacy for more information.

Mommy Dentists in Business
321: Transforming Dental Practices Through AI Innovation

Aug 15, 2025 · 32:16


This week on the Mommy Dentists in Business Podcast, Dr. Grace Yum sits down with Jonathan Rat, CEO of Archy, to explore how AI is transforming dental practice management. If you've ever felt weighed down by staffing challenges, outdated software, or endless administrative headaches, this episode is for you! Episode highlights: How staffing shortages, inefficiencies, and outdated software inspired Archy The top pain points in dentistry today—and how Archy addresses them Practical AI tools that reduce admin burdens, streamline workflows, and boost patient trust Why gradual tech adoption matters and what switching to Archy looks like Ready to thrive as a dentist and a mom? Join a supportive community of like-minded professionals at Mommy Dentists in Business. Whether you're looking to grow your practice, find balance, or connect with others who understand your journey, MDIB is here to help. Visit mommydibs.com to learn more and become a part of this empowering network today!

The Tech Blog Writer Podcast
3383: Denodo Technologies on Balancing AI Innovation with Data Integrity

Aug 14, 2025 · 33:39


In this episode, I am joined by Charles Southwood, Regional GM of Denodo Technologies, a company recognised globally for its leadership in data management. With revenues of $288M and a customer base that includes Hitachi, Informa, Engie, and Walmart, Denodo sits at the heart of how enterprises access, trust, and act on their data. Charles brings over 35 years of experience in the tech industry, offering both a long-term view of how the data landscape has evolved and sharp insights into the challenges businesses face today. Our conversation begins with a pressing issue for any organisation exploring generative AI: data reliability. With many AI models trained on vast amounts of internet content, there is a real risk of false information creeping into business outputs. Charles explains why mitigating hallucinations and inaccuracies is essential not just for technical quality, but for protecting brand reputation and avoiding costly missteps. We explore alternative approaches that allow enterprises to benefit from AI innovation while maintaining data integrity and control. We also examine the broader enterprise pressures AI has created. The promise of reduced IT costs and improved agility is enticing, but how much of this is achievable today and how much is inflated by hype? Charles shares why he believes 2025 is a tipping point for achieving true business agility through data virtualisation, and what a virtualised data layer can deliver for teams across IT, marketing, and beyond. Along the way, Charles reflects on the industry shifts that caught him most by surprise, the decisions he would make differently with the benefit of hindsight, and the one golden rule he would share with younger tech professionals starting their careers now. This is a conversation for anyone who wants to understand how to harness AI and advanced data integration without falling into the traps of poor data quality, overblown expectations, or short-term thinking.

RTP's Free Lunch Podcast
Deep Dive 307 - America's AI Action Plan: Green Lights or Guardrails?

Aug 14, 2025 · 51:23 · Transcript available


America’s new AI Action Plan — announced by the White House in July and framed by three pillars of accelerating innovation, building national AI infrastructure, and projecting U.S. leadership abroad — promises more than 90 separate federal actions, from fast-tracking approvals for medical-AI tools to revising international export controls on advanced chips. Supporters hail its light-touch approach, swift development of domestic and foreign deployment of AI, and explicit warnings against “ideological bias” in AI systems. In contrast, some critics say the plan removes guardrails, favors big tech, and is overshadowed by other actions disinvesting in research. How will the Plan impact AI in America? Join us for a candid discussion that will unpack the Plan’s major levers and ask whether the “innovation-first” framing clarifies or obscures deeper constitutional and economic questions.

Featuring:
Neil Chilson, Head of AI Policy, Abundance Institute
Mario Loyola, Senior Research Fellow, Environmental Policy and Regulation, Center for Energy, Climate, and Environment, The Heritage Foundation
Asad Ramzanali, Director of Artificial Intelligence & Technology Policy, Vanderbilt Policy Accelerator, Vanderbilt University
(Moderator) Kevin Frazier, AI Innovation and Law Fellow, University of Texas School of Law

The Progress Report
Future-ready: How curiosity powers AI innovation

Aug 13, 2025 · 12:24


As the AI landscape rapidly evolves, organizations are striving to balance a bold, long-term vision with agile, short-term actions that drive innovation and business transformation. But success does not just depend on strategy; it requires a fundamental shift in organizational behavior, mindset, and culture. What steps do technology leaders need to take to prepare today's workforce for tomorrow's AI-powered roles? Join this discussion on how organizations can enable employees to grow with AI, cultivating a culture of curiosity that drives business transformation and prepares the workforce for future challenges.

Featured experts: Borika Vucinic, President of the Board, North American Broadcasters Association

Eye On A.I.
#278 Julia Peyre: How Schneider Electric is Pioneering Enterprise AI at Scale

Aug 10, 2025 · 55:35


AGNTCY - Unlock agents at scale with an open Internet of Agents. Visit https://agntcy.org/ and add your support.

How does a 150,000-employee global leader make AI work at scale? In this episode of Eye on AI, host Craig Smith sits down with Julia Peyre, Head of AI Strategy & Innovation at Schneider Electric, to explore how the company is pioneering enterprise AI adoption through its AI Hub, hybrid AI systems, and real-world digital twin applications. From breaking data silos and embedding AI into hardware, to partnering with startups and building predictive maintenance solutions, Julia shares a blueprint for bringing AI from pilot programs to full-scale deployment, across both internal processes and customer-facing products.

Stay Updated:
Craig Smith on X: https://x.com/craigss
Eye on A.I. on X: https://x.com/EyeOn_AI

(00:00) Preview and Intro
(01:58) Meet Julia Peyre & Her Role
(03:19) Inside Schneider's AI Hub
(05:51) AI in Industrial Automation & Robotics
(08:43) Internal vs External AI Applications
(13:54) Why Schneider Is Ahead in AI Adoption
(15:57) Centralized AI Hub Model
(19:38) Hybrid AI: Combining Physics & Data
(25:44) Early Steps in Multi-Agent Systems
(29:54) Breaking Data Silos for AI at Scale
(32:02) Predictive Maintenance with Hybrid AI
(37:03) Long-Term View on AI Automation
(42:37) Advice for Young Professionals in AI
(44:21) Framework for Evaluating AI Solutions
(50:19) Involving End Users in AI Testing

The Lawfare Podcast
Scaling Laws: What Keeps OpenAI's Product Policy Staff Up at Night? A Conversation with Brian Fuller

Aug 8, 2025 · 51:16


Brian Fuller, a member of the Product Policy Team at OpenAI, joins Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to analyze how large AI labs go about testing their models for compliance with internal requirements and various legal obligations. They also cover the ins and outs of what it means to work in product policy and what issues are front of mind for in-house policy teams amid substantial regulatory uncertainty.

Find Scaling Laws on the Lawfare website, and subscribe to never miss an episode. To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute. Support this show: http://supporter.acast.com/lawfare.

Hosted on Acast. See acast.com/privacy for more information.

Everyday AI Podcast – An AI and ChatGPT Podcast
EP 584: ChatGPT's New Open Source Model gpt-oss: What it means, the risks, and more

Aug 7, 2025 · 42:51


This ChatGPT announcement is more important than GPT-5. Seriously. This week, OpenAI (kinda) quietly released its first open-source model since 2019. Us AI dorks are talking about it… but the business landscape is crickets. (As everyone gets hyped for GPT-5 today.) But…. Hot take on a Thursday shorties: ChatGPT's new Open Source Model will be a bigger step forward for AI tech than GPT-5 and it's not even close. Join us to find out why, how business development could change, and who will be the winners and losers.

Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Thoughts on this? Join the convo and connect with other AI leaders on LinkedIn.
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn

Topics Covered in This Episode:
OpenAI Releases GPT OSS Open Source Model
Comparison: GPT OSS vs GPT-4 Level Reasoning
Impact on AI Industry Competitors & Strategy
Apache 2.0 License vs Meta Llama Restrictions
Business Benefits: Local, Secure, Free AI Deployment
Technical Specs: 20B and 120B Parameter Versions
AI Model Customization, Fine-Tuning, and Edge Use
Winners and Losers: Nvidia, Google, API Providers
Edge Computing and On-Device AI Future
Open Source AI Risks and Safety Concerns
Global AI Race: US vs China Open Source
Acceleration of AI Innovation and Model Development

Timestamps:
00:00 "ChatGPT's Game-Changing Open Source"
05:27 Open Source AI Models Explained
07:23 OpenAI's New Open-Source Model
11:30 Affordable High-Performance Language Models
15:54 Meta's Shift Toward Proprietary Models
17:56 "AI Model Customization and Deployment"
20:38 Leveraging AI for Cost Efficiency
26:13 OpenAI's Strategic Competitive Advantage
27:58 OpenAI's Strategic Dominance Forecast
31:33 "Anticipating Google's Gemma 4 Impact"
35:18 Apple's Future in AI-Powered Phones
39:11 AGI: The New Global Superpower

Keywords: GPT OSS, OpenAI, ChatGPT open source, GPT-OSS, GPT4O level reasoning, Open source AI model, Apache 2.0 license, Reasoning model, Local AI models, AI edge computing, On-device AI, Downloadable AI model, 21B parameter model, 120B parameter model, AI model fine tuning, Commercial use AI, Chain of thought, Agentic tasks, Tool use AI, Secure AI deployment, Data privacy, API providers, AI innovation, Chinese open source AI, Meta Llama, MMLU benchmark, Nvidia GPU, Microsoft Azure, AWS Bedrock, Hugging Face, Cloud AI, AI business strategy, AI market disruption, AI mid tier competitors, AI scalability, Patent protection, Cybersecurity, Bioweapon risks, Global AI race, AGI acceleration, Model weights relea

Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info)

Ready for ROI on GenAI? Go to youreverydayai.com/partner

In AI We Trust?
In AI We Trust? Ep. 11: Adam Thierer on AI, Innovation & Tech Policy

Aug 5, 2025 · 58:40


In this episode of In AI We Trust?, cohosts Miriam Vogel and Nuala O'Connor are joined by Adam Thierer, resident senior fellow @ R Street's Tech & Innovation team. Adam weighs in on the Trump Administration's AI Action Plan, the importance of Congress in developing AI policy, and existing legal principles and practices that help define the new digital and AI age. They focused on the mandate for AI literacy, as well as the necessity of AI technologies being regulated in a transparent and trustworthy way that end users, and particularly consumers, can understand.

The Lawfare Podcast
Scaling Laws: Renée DiResta and Alan Rozenshtein on the ‘Woke AI' Executive Order

Aug 1, 2025 · 46:48


Renée DiResta, an Associate Research Professor at the McCourt School of Public Policy at Georgetown and a Contributing Editor at Lawfare, and Alan Rozenshtein, an Associate Professor at Minnesota Law, Research Director at Lawfare, and, with the exception of today, co-host on the Scaling Laws podcast, join Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to take a look at the Trump Administration's Woke AI policies, as set forth by a recent EO and explored in the AI Action Plan.

Read the Woke AI executive order
Read the AI Action Plan
Read "Generative Baseline Hell and the Regulation of Machine-Learning Foundation Models," by James Grimmelmann, Blake Reid, and Alan Rozenshtein

Find Scaling Laws on the Lawfare website, and subscribe to never miss an episode. To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute. Support this show: http://supporter.acast.com/lawfare.

Hosted on Acast. See acast.com/privacy for more information.

Interviews: Tech and Business
Top AI Ethicists Reveal RISKS of AI Failure | CXOTalk #888

Interviews: Tech and Business

Play Episode Listen Later Aug 1, 2025 53:47


When AI systems hallucinate, run amok, or fail catastrophically, the consequences for enterprises can be devastating. In this must-watch CXOTalk episode, discover how to anticipate and prevent AI failures before they escalate into crises.
Join host Michael Krigsman as he explores critical AI risk management strategies with two leading experts:
• Lord Tim Clement-Jones - Member of the House of Lords, Co-Chair of UK Parliament's AI Group
• Dr. David A. Bray - Chair of the Accelerator at Stimson Center, Former FCC CIO
What you'll learn:
✓ Why AI behaves unpredictably despite explicit programming
✓ How to implement "pattern of life" monitoring for AI systems
✓ The hidden dangers of anthropomorphizing AI
✓ Essential board-level governance structures for AI deployment
✓ Real-world AI failure examples and their business impact
✓ Strategies for building appropriate skepticism while leveraging AI benefits
Key ideas include treating AI as "alien interactions" rather than human-like intelligence, the convergence of AI risk with cybersecurity, and why smaller companies have unique opportunities in the AI landscape.
This discussion is essential viewing for CEOs, board members, CIOs, CISOs, and anyone responsible for AI strategy and risk management in their organization.
Subscribe to CXOTalk for more expert insights on technology leadership and AI:

The Lawfare Podcast
Scaling Laws: Rapid Response to the AI Action Plan

The Lawfare Podcast

Play Episode Listen Later Jul 25, 2025 64:09


Janet Egan, Senior Fellow with the Technology and National Security Program at the Center for a New American Security; Jessica Brandt, Senior Fellow for Technology and National Security at the Council on Foreign Relations; Neil Chilson, Head of AI Policy at Abundance Institute; and Tim Fist, Director of Emerging Technology Policy at the Institute for Progress, join Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, for a special version of Scaling Laws.
This episode was recorded just hours after the release of the AI Action Plan. About 180 days ago, President Trump directed his administration to explore ways to achieve AI dominance. His staff has attempted to do just that. This group of AI researchers dives into the plan's extensive recommendations and explores what may come next.
Find Scaling Laws on the Lawfare website, and subscribe to never miss an episode.
To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute.
Support this show http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.

Primary Technology
Apple Sues Leaker Jon Prosser, AI Browsers Are Actually Useful, AppleCare One Bundle

Primary Technology

Play Episode Listen Later Jul 24, 2025 67:00 Transcription Available


iOS 26 Beta 4 update brings back glass effects, AppleCare One live signup and walkthrough, OpenAI ChatGPT Agent and why AI browsers can be incredibly useful, Apple's lawsuit against Jon Prosser over iOS 26 leaks, and more!
Bonus Episode: CBS Cancels Stephen Colbert - Listen here
------------------------------
Show Notes via Email
Sign up to get exactly one email per week from the Primary Tech guys with the full episode show notes for your perusal. Click here to subscribe.
------------------------------
Watch on YouTube!
Subscribe and watch our weekly episodes plus bonus clips at: https://youtu.be/FOkeI9z-hZo
------------------------------
Join the Community
Discuss new episodes, start your own conversation, and join the Primary Tech community here: social.primarytech.fm
------------------------------
Support the show
Get ad-free versions of the show plus exclusive bonus episodes every week! Subscribe directly in Apple Podcasts or here if you want chapters: primarytech.memberful.com/join
------------------------------
Reach out:
Stephen's YouTube Channel
@stephenrobles on Threads
Stephen on Bluesky
Stephen on Mastodon
@stephenrobles on X
Jason's Inc.com Articles
@jasonaten on Threads
@JasonAten on X
Jason on Bluesky
Jason on Mastodon
------------------------------
We would also appreciate a 5-star rating and review in Apple Podcasts and Spotify
Podcast artwork with help from Basic Apple Guy.
Those interested in sponsoring the show can reach out to us at: podcast@primarytech.fm
------------------------------
Links from the show
Everything New in iOS 26 Beta 4 - MacRumors
iOS 26 beta 4 re-enables Apple Intelligence notification summaries for news apps - 9to5Mac
iOS 26 beta 4 brings Silence Unknown Callers to Call Screening - 9to5Mac
Apple introduces AppleCare One, streamlining coverage into a single plan - Apple
Sonos Names Tom Conrad Permanent CEO as He Pursues Comeback - Bloomberg
OpenAI's new ChatGPT Agent can control an entire computer and do tasks for you | The Verge
Apple Sues Jon Prosser Over iOS 26 Leaks - MacRumors
Trump unveils AI Action Plan | The Verge
Perplexity Comet
(00:00) - Intro
(01:41) - iOS 26 Glass is Back
(07:57) - AppleCare One Subscription
(16:07) - Sonos Appoints CEO
(22:37) - ChatGPT Agent
(24:57) - Apple Sues Jon Prosser
(44:39) - Trump on AI Innovation
(48:34) - AI Video and Trust
(53:49) - AI Browsers!!
★ Support this podcast ★

The Lawfare Podcast
Scaling Laws: Eugene Volokh on Libel and AI

The Lawfare Podcast

Play Episode Listen Later Jul 18, 2025 59:17


In this Scaling Laws Academy "class," Kevin Frazier, the AI Innovation and Law Fellow at Texas Law and a Senior Editor at Lawfare, speaks with Eugene Volokh, a Senior Fellow at the Hoover Institution and long-time professor of law at UCLA, on libel in the AI context. The two dive into Volokh's paper, "Large Libel Models? Liability for AI Output." Extra credit for those who give it a full read and explore some of the "homework" below:
"Beyond Section 230: Principles for AI Governance," 138 Harv. L. Rev. 1657 (2025)
"When Artificial Agents Lie, Defame, and Defraud, Who Is to Blame?," Stanford HAI (2021)
Find Scaling Laws on the Lawfare website, and subscribe to never miss an episode.
To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute.
Support this show http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.

This Week in Google (MP3)
IM 828: Stochastic Carrots - Navigating the Future of AI

This Week in Google (MP3)

Play Episode Listen Later Jul 17, 2025 140:26 Transcription Available


Interview with Anil Dash
egirl Grok x Claude
Zuckerberg announces Meta's new AI data centers for superintelligence
Meta Hires Two More OpenAI Researchers
Reflections on OpenAI
RSS is (not) dead (yet) (NED #3) – audra mcnamee
Perplexity AI browser
Perplexity CEO says its browser will track everything users do online to sell 'hyper personalized' ads
Stewart Holbrook: Portland Mythmaker
How I Learned to Stop Worrying and Have Fun With A.I.
Hosts: Leo Laporte, Jeff Jarvis, and Paris Martineau
Guest: Anil Dash
Download or subscribe to Intelligent Machines at https://twit.tv/shows/intelligent-machines.
Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit
Sponsors: helixsleep.com/twit agntcy.org Melissa.com/twit

All TWiT.tv Shows (MP3)
Intelligent Machines 828: Stochastic Carrots

All TWiT.tv Shows (MP3)

Play Episode Listen Later Jul 17, 2025 140:26 Transcription Available


Interview with Anil Dash
egirl Grok x Claude
Zuckerberg announces Meta's new AI data centers for superintelligence
Meta Hires Two More OpenAI Researchers
Reflections on OpenAI
RSS is (not) dead (yet) (NED #3) – audra mcnamee
Perplexity AI browser
Perplexity CEO says its browser will track everything users do online to sell 'hyper personalized' ads
Stewart Holbrook: Portland Mythmaker
How I Learned to Stop Worrying and Have Fun With A.I.
Hosts: Leo Laporte, Jeff Jarvis, and Paris Martineau
Guest: Anil Dash
Download or subscribe to Intelligent Machines at https://twit.tv/shows/intelligent-machines.
Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit
Sponsors: helixsleep.com/twit agntcy.org Melissa.com/twit

The Lawfare Podcast
Scaling Laws: Ethan Mollick: Navigating the Uncertainty of AI Development

The Lawfare Podcast

Play Episode Listen Later Jul 10, 2025 66:21


Ethan Mollick, Professor of Management and author of the "One Useful Thing" Substack, joins Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, and Alan Rozenshtein, Associate Professor at Minnesota Law and a Senior Editor at Lawfare, to analyze the latest research on AI adoption, specifically its use by professionals and educators. The trio also analyzes the trajectory of AI development and related, ongoing policy discussions.
More of Ethan Mollick's work: https://www.oneusefulthing.org/
Find Scaling Laws on the Lawfare website, and subscribe to never miss an episode.
To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute.
Support this show http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.