Podcasts about large language models

  • 1,208 PODCASTS
  • 2,580 EPISODES
  • 43m AVG DURATION
  • 1 DAILY NEW EPISODE
  • Dec 3, 2025 LATEST

POPULARITY

[Popularity chart, 2017–2024]


Best podcasts about large language models

Latest podcast episodes about large language models

Everyday AI Podcast – An AI and ChatGPT Podcast
Beginner's Guide: How to visualize data with AI in ChatGPT, Gemini and Claude

Dec 3, 2025 · 42:07


This is Vibe Coding 001. Have you ever wanted to build your own software or apps that can just kinda do your work for you inside of the LLM you use but don't know where to start? Start here. We're giving it all away and making it as simple as possible, while also hopefully challenging how you think about work. Join us.

Beginner's Guide: How to visualize data with AI in ChatGPT, Gemini and Claude -- An Everyday AI Chat with Jordan Wilson

Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Thoughts on this? Join the convo and connect with other AI leaders on LinkedIn.
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn

Topics Covered in This Episode:
Combining Multiple Features in Large Language Models
Visualizing Data in ChatGPT, Gemini, and Claude
Creating Custom GPTs, Gems, and Projects
Uploading Files for Automated Data Dashboards
Comparing ChatGPT Canvas, Gemini Canvas, and Claude Artifacts
Using Agentic Capabilities for Problem Solving
Visualizing Meeting Transcripts and Unstructured Data
One-Shot Mini App Creation with AI (a minimal illustration follows this summary)

Timestamps:
00:00 "Unlocking Superhuman LLM Capabilities"
04:12 Custom AI Model and Testing
07:18 "Multi-Mode Control for LLMs"
12:33 "Intro to Vibe Coding"
13:19 "Streamlined AI for Simplification"
19:59 Podcast Analytics Simplified
21:27 "ChatGPT vs. Google Gemini"
26:55 "Handling Diverse Data Efficiently"
28:50 "AI for Actionable Task Automation"
33:12 "Personalized Dashboard for Meetings"
36:21 Personalized Automated Workflow Solution
40:00 "AI Data Visualization Guide"
40:38 "Everyday AI Wrap-Up"

Keywords: ChatGPT, Gemini, Claude, data visualization with AI, visualize data using AI, Large Language Models, LLM features, combining LLM modes, custom instructions, GPTs, Gems, Anthropic projects, canvas mode, interactive dashboards, agentic models, code rendering, meeting transcripts visualization, SOP visualization, document analysis, unstructured data, structured insights, generative AI workflows, personalized dashboards, automated reporting, chain of thought reasoning, one-shot visualizations, data-driven decision-making, non-technical business leaders, micro apps, AI-powered interfaces, action items extraction, iterative improvement, multimodal AI, Opus 4.5, GPT-5.1 Thinking, Gemini 3 Pro, artifacts, demos over memos, bespoke software, digital transformation, automated analytics

Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info)

Ready for ROI on GenAI? Go to youreverydayai.com/partner
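The episode's "upload a file, get a dashboard" workflow boils down to the LLM writing and running ordinary plotting code against your data. As a minimal illustration (the file name and columns are hypothetical), this is the kind of one-shot script a canvas or artifact feature might generate from an uploaded CSV:

```python
# Hypothetical one-shot dashboard of the sort an LLM canvas might produce
# from an uploaded CSV with columns: date, episode, downloads.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("podcast_analytics.csv")  # hypothetical upload

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
df.groupby("date")["downloads"].sum().plot(ax=ax1, title="Downloads over time")
df.groupby("episode")["downloads"].sum().nlargest(10).plot.barh(ax=ax2, title="Top 10 episodes")
fig.tight_layout()
plt.show()
```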

Mystery AI Hype Theater 3000
You Talked to Workers for This Labor Research... Right? (with Sophie Song), 2025.11.17

Dec 2, 2025 · 53:14


Last month, Senate Democrats warned that "Automation Could Destroy Nearly 100 Million U.S. Jobs in a Decade." Ironically, they used ChatGPT to come to that conclusion. DAIR Research Associate Sophie Song joins us to unpack the issues when self-professed worker advocates use chatbots for "research."

Sophie Song is a researcher, organizer, and advocate working at the intersection of tech and social justice. They're a research associate at DAIR, where they're working with Alex on building the Luddite Lab Resource Hub.

References:
Senate report: AI and Automation Could Destroy Nearly 100 Million U.S. Jobs in a Decade
Senator Sanders' AI Report Ignores the Data on AI and Inequality

Also referenced:
MAIHT3k Episode 25: An LLM Says LLMs Can Do Your Job
Humlum paper: Large Language Models, Small Labor Market Effects
Emily's blog post: Scholarship should be open, inclusive and slow

Fresh AI Hell:
Tech companies compelling vibe coding
arXiv is overwhelmed by LLM slop
'Godfather of AI' says tech giants can't profit from their astronomical investments unless human labor is replaced
If you want to satiate AI's hunger for power, Google suggests going to space
AI pioneers claim human-level general intelligence is already here
Gen AI campaign against ranked choice voting
Chaser: Workplace AI Implementation Bingo

Check out future streams on Twitch. Meanwhile, send us any AI Hell you see.
Our book, 'The AI Con,' is out now! Get your copy now.
Subscribe to our newsletter via Buttondown.

Follow us!
Emily Bluesky: emilymbender.bsky.social Mastodon: dair-community.social/@EmilyMBender
Alex Bluesky: alexhanna.bsky.social Mastodon: dair-community.social/@alex Twitter: @alexhanna

Music by Toby Menon. Artwork by Naomi Pleasure-Park. Production by Ozzy Llinas Goodman.

The ERP Advisor
Leaders in ERP: Paul Farrell, Senior VP, ECI - The ERP Advisor Podcast Episode 127

Dec 2, 2025 · 40:03


On this episode of our "Leaders in ERP" series, Shawn Windle speaks with Paul Farrell, Senior Vice President at ECI. Windle and Farrell discuss the projected evolution of the ERP market over the next decade, how Large Language Models (LLMs) and Agentic AI are changing the way companies utilize ERP, and how ECI is framing their AI strategy around industry specialization.

Connect with us!
https://www.erpadvisorsgroup.com
866-499-8550
LinkedIn: https://www.linkedin.com/company/erp-advisors-group
Twitter: https://twitter.com/erpadvisorsgrp
Facebook: https://www.facebook.com/erpadvisors
Instagram: https://www.instagram.com/erpadvisorsgroup
Pinterest: https://www.pinterest.com/erpadvisorsgroup
Medium: https://medium.com/@erpadvisorsgroup

Free Speech Unmuted
Defamation Law in the Age of AI with Lyrissa Lidsky | Eugene Volokh and Jane Bambauer | Hoover Institution

Dec 2, 2025 · 55:03


What happens when 1970s defamation law collides with the Internet, social media, and AI? University of Florida Law School legal scholar Lyrissa Lidsky — who is also a co-reporter for the American Law Institute's Restatement (Third) of Torts: Defamation and Privacy — explains how the law of libel and slander is being rewritten for the digital age. Lyrissa, Jane, and Eugene discuss why the old line between libel and slander no longer makes sense; how Section 230 upended defamation doctrine; the future of New York Times v. Sullivan and related First Amendment doctrines; Large Libel Models (when Large Language Models meet libel law); and more. Subscribe for the latest on free speech, censorship, social media, AI, and the evolving role of the First Amendment in today's proverbial town square. 

Génération Do It Yourself
#507 - Laurent Alexandre - Vers la fin des études supérieures ?

Nov 30, 2025 · 134:48


We must react, and fast.

Laurent Alexandre's thesis is simple but worrying: our educational and political systems are, for now, incapable of adapting to the unprecedented technological revolution that AI represents.

Since Large Language Models all passed an IQ of 150, the game has changed radically. For the first time since it appeared, humankind is no longer the most intelligent species on planet Earth. And the colossal investments that tech giants are pouring into AI only widen the gap that now separates us from the machine.

A new paradigm calls for a new book: Laurent and his co-author Olivier Babeau argue that, outside the 20 most prestigious schools in the world, higher education is no longer worth pursuing. And that "the real capital today is action." In other words: the elite will be made up of those who adopt AI earliest (starting in kindergarten) and use it most intensively, not those who spend 10 years in higher education.

For his third appearance on GDIY, Laurent, true to form, spares nothing and no one:
Why AI amplifies intellectual inequality at scale, and how to remedy it
How to build your own I-Aristotle
Why every minister and senior civil servant who doesn't understand AI should be dismissed
How life expectancy could double by 2030

A crucial episode so you don't get left behind, and so you know how to end up on the winning side of this revolution.

You can contact Laurent on X and follow him on his other networks: Instagram and LinkedIn.

"Ne faites plus d'études : Apprendre autrement à l'ère de l'IA" is available in all good bookstores or right here: https://amzn.to/4ahLYEB

TIMELINE:
00:00:00 : The social divide created by the technological revolution
00:12:32 : Why politicians' overall understanding of AI is disastrous
00:20:06 : Prompting well and minimizing model hallucinations
00:25:49 : "The world of AI is not made for slackers"
00:36:46 : The biggest amplifier of inequality in history
00:43:04 : Providing every child with a personalized AI tutor
00:53:41 : Do LLMs really have cognitive biases?
01:03:16 : 1 versus 2,900 billion: the staggering investment gap between the US and Europe
01:14:36 : What are books written entirely by AI worth?
01:20:39 : The era of the first robot plumbers
01:27:45 : Laurent and Olivier's 4 key pieces of advice
01:35:33 : How to help our children make good use of these tools
01:44:20 : Why are higher-education schools less and less selective?
02:01:28 : The "dead internet" theory

Previous GDIY episodes mentioned:
#327 - Laurent Alexandre - Auteur - ChatGPT & IA : "Dans 6 mois, il sera trop tard pour s'y intéresser"
#165 - Laurent Alexandre - Doctissimo - La nécessité d'affirmer ses idées
#487 - VO - Anton Osika - Lovable - Internet, Business, and AI: Nothing Will Ever Be the Same Again
#500 - Reid Hoffman - LinkedIn, Paypal - How to master humanity's most powerful invention
#501 - Delphine Horvilleur - Rabbin, Écrivaine - Dialoguer quand tout nous divise
#506 - Matthieu Ricard - Moine bouddhiste - Se libérer du chaos extérieur sans se couper du monde
#450 - Karim Beguir - InstaDeep - L'IA Générale ? C'est pour 2025
#397 - Yann Le Cun - Chief AI Scientist chez Meta - l'Intelligence Artificielle Générale ne viendra pas de Chat GPT

We talked about:
Olivier Babeau, Laurent's co-author
Introduction to the thought of Teilhard de Chardin
The "dead internet" theory

Reading recommendations:
La Dette sociale de la France : 1974 - 2024 - Nicolas Dufourcq
Ne faites plus d'études : Apprendre autrement à l'ère de l'IA - Laurent Alexandre and Olivier Babeau
L'identité de la France - Fernand Braudel
Grammaire des civilisations - Fernand Braudel
Chat GPT va nous rendre immortel - Laurent Alexandre

A big THANK YOU to our sponsors:
SquareSpace: squarespace.com/doit
Qonto: https://qonto.com/r/2i7tk9
Brevo: brevo.com/doit
eToro: https://bit.ly/3GTSh0k
Payfit: payfit.com
Club Med: clubmed.fr
Cuure: https://cuure.com/product-onely

Want to sponsor Génération Do It Yourself or propose a partnership? Contact my label Orso Media via this form.

Hosted by Audiomeans. Visit audiomeans.fr/politique-de-confidentialite for more information.

Let's Talk AI
#226 - Gemini 3, Claude Opus 4.5, Nano Banana Pro, LeJEPA

Nov 30, 2025 · 71:11


Our 226th episode with a summary and discussion of last week's big AI news!

Recorded on 11/24/2025

Hosted by Andrey Kurenkov and co-hosted by Michelle Lee

Feel free to email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai

Read our text newsletter and comment on the podcast at https://lastweekin.ai/

In this episode:
New AI model releases include Google's Gemini 3 Pro, Anthropic's Opus 4.5, and OpenAI's GPT-5.1, each showcasing significant advancements in AI capabilities and applications.
Robotics innovations feature Sunday Robotics' new robot Memo and a $600M funding round for Physical Intelligence, highlighting growth and investment in the robotics sector.
AI safety and policy updates include Europe's proposed changes to GDPR and AI Act regulations, and reports of AI-assisted cyber espionage by a Chinese state-sponsored group.
AI-generated content and legal highlights involve a settlement between Warner Music Group and AI music platform Udio, reflecting evolving dynamics in the field of synthetic media.

Timestamps:
(00:00:10) Intro / Banter
(00:01:32) News Preview
(00:02:10) Response to listener comments

Tools & Apps
(00:02:34) Google launches Gemini 3 with new coding app and record benchmark scores | TechCrunch
(00:05:49) Google launches Nano Banana Pro powered by Gemini 3
(00:10:55) Anthropic releases Opus 4.5 with new Chrome and Excel integrations | TechCrunch
(00:15:34) OpenAI releases GPT-5.1-Codex-Max to handle engineering tasks that span twenty-four hours
(00:18:26) ChatGPT launches group chats globally | TechCrunch
(00:20:33) Grok Claims Elon Musk Is More Athletic Than LeBron James — and the World's Greatest Lover

Applications & Business
(00:24:03) What AI bubble? Nvidia's strong earnings signal there's more room to grow
(00:26:26) Alphabet stock surges on Gemini 3 AI model optimism
(00:28:09) Sunday Robotics emerges from stealth with launch of 'Memo' humanoid house chores robot
(00:32:30) Robotics Startup Physical Intelligence Valued at $5.6 Billion in New Funding - Bloomberg
(00:34:22) Waymo permitted areas expanded by California DMV - CBS Los Angeles - Waymo enters 3 more cities: Minneapolis, New Orleans, and Tampa | TechCrunch

Projects & Open Source
(00:37:00) Meta AI Releases Segment Anything Model 3 (SAM 3) for Promptable Concept Segmentation in Images and Videos - MarkTechPost
(00:40:18) [2511.16624] SAM 3D: 3Dfy Anything in Images
(00:42:51) [2511.13998] LoCoBench-Agent: An Interactive Benchmark for LLM Agents in Long-Context Software Engineering

Research & Advancements
(00:45:10) [2511.08544] LeJEPA: Provable and Scalable Self-Supervised Learning Without the Heuristics
(00:50:08) [2511.13720] Back to Basics: Let Denoising Generative Models Denoise

Policy & Safety
(00:52:08) Europe is scaling back its landmark privacy and AI laws | The Verge
(00:54:13) From shortcuts to sabotage: natural emergent misalignment from reward hacking
(00:58:24) [2511.15304] Adversarial Poetry as a Universal Single-Turn Jailbreak Mechanism in Large Language Models
(01:01:43) Disrupting the first reported AI-orchestrated cyber espionage campaign
(01:04:36) OpenAI Locks Down San Francisco Offices Following Alleged Threat From Activist | WIRED

Synthetic Media & Art
(01:07:02) Warner Music Group Settles AI Lawsuit With Udio

See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

Cyber Crime Junkies
CHAOS: AI Jailbreaks, Cloud Meltdowns & The Fish-Tank Casino Hack That Shocked the World

Nov 29, 2025 · 48:36 · Transcription Available


Question? Text our Studio direct.

In this shocking monthly cyber update, the Cyber Crime Junkies (David, Dr. Sergio E. Sanchez, and Zack Moscow) expose the craziest, must-know stories in tech and security.

What's Inside This Episode:

The AI Threat is Real: Dr. Sergio reveals how Chinese threat actors manipulated Anthropic's Claude AI system to stage cyber attacks against nearly 30 companies globally. Learn how powerful Large Language Models (LLMs) are leveling the field for malicious coders.

The Casino Fish Tank Hack (True Story!): David tells the unbelievable story of how hackers breached a casino's main network by exploiting a smart thermostat inside an exotic fish tank, accessing high-roller financials. This proves critical network segmentation is non-negotiable.

The New Scam: ClickFix: David breaks down the terrifying new ClickFix attack, where hackers trick you into literally copying and pasting malicious code into your own computer. Learn the golden rule to protect yourself from this massive, 500% spike in attacks.

The Cloudflare Outage: Zack discusses the massive Cloudflare outage that took down major services like ChatGPT, revealing how a seemingly minor configuration error caused massive ripple effects across the entire internet.

The iPhone Scam Laundry: Dr. Sergio shares a wild anecdote from his time at Apple about a global scammer laundering stolen or damaged iPhones for new ones, using a loophole caused by a business decision.

In Your Shoes
On Philosophy, the Future of Intelligence, Large Language Models & How to Be Effective in the Age of AI

Nov 29, 2025 · 97:46


In conversation with Patricia Ferreiro on the In Your Shoes podcast about the philosophy of language, Large Language Models, the future of intelligence, and how to be effective in the age of information abundance.

Patricia is the founder and CEO of Luar Labs, based out of Zurich, Switzerland. Prior to this, she held technical and strategy roles at Microsoft, Databricks, and IBM. At Luar Labs, she trains executives in AI literacy and critical thinking for the digital age, cultivating resilient mental models for meaningful collaboration with present and future technologies.

Connect with Patricia: https://www.linkedin.com/in/patriciaferreiro/

Definitely, Maybe Agile
AI Tools for Product Managers: Beyond Just Writing User Stories

Nov 27, 2025 · 19:43 · Transcription Available


Product managers and product owners are drowning in documentation, vision statements, roadmaps, and backlogs. But what if AI could handle the heavy lifting, freeing you up to actually talk to customers?

In this episode, Dave and Peter explore how large language models are changing product management. They go beyond the obvious use cases (like generating user stories) to discuss upstream opportunities: building product strategy, validating market positioning, and testing ideas against competitors.

You'll learn:
Why documenting your product strategy matters (and why most PMs skip it)
How to prompt AI to be critical, not just complimentary
The danger of accepting AI outputs without evaluation
Temperature settings, context windows, and other practical techniques (see the sketch after this summary)
What to do with the time you get back (hint: talk to real customers)

Dave and Peter also share a key practice: write down what you expect before you prompt. This simple step helps you critically evaluate AI responses instead of accepting them at face value.

If you're a product manager, product owner, or anyone building digital products, this conversation will help you use AI as a tool for better thinking, not just faster output.
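Temperature is one of the concrete knobs behind the techniques discussed here. A minimal sketch, assuming the OpenAI Python SDK (the model name and prompts are illustrative, not from the episode), of prompting for criticism at a low temperature:

```python
# Minimal sketch: low temperature for repeatable critiques, a system prompt
# that asks for criticism rather than compliments. Model name is an assumption.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def critique_user_story(story: str, temperature: float = 0.2) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        temperature=temperature,  # ~0.0-0.3 keeps critiques consistent; ~1.0 brainstorms
        messages=[
            {"role": "system", "content": "You are a skeptical product reviewer. Point out gaps; do not flatter."},
            {"role": "user", "content": f"Critique this user story and list its three biggest weaknesses:\n{story}"},
        ],
    )
    return response.choices[0].message.content
```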

Flying High with Flutter
The AI Pocket Book with Emmanuel Maggiori

Nov 26, 2025 · 70:06


AI is everywhere, from coding assistants to chatbots, but what's really happening under the hood? It often feels like a "black box," but it doesn't have to be.

In this episode, Allen sits down with Manning author and AI expert Emmanuel Maggiori to demystify the core concepts behind Large Language Models (LLMs). Emmanuel, author of "The AI Pocket Book," breaks down the entire pipeline, from the moment you type a prompt to the second you get a response. He explains complex topics like tokens, embeddings, context windows, and the controversial training methods that make these powerful tools possible. (A small tokenization sketch follows the episode list below.)

IN THIS EPISODE
00:00 - Welcome & Why "The AI Pocket Book" is a Must-Read
8:05 - What Are Tokens?
15:20 - The Basic LLM Pipeline Explained
21:30 - Understanding the Context Window
25:50 - How Embeddings Represent Meaning
35:45 - Controlling Creativity with Temperature
39:30 - How LLMs Learn From Internet Data
45:25 - Fine-Tuning with Human Feedback (RLHF)
51:15 - Why AI Hallucinates
56:45 - When Not to Use
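Tokens, the first stop in the pipeline Emmanuel describes, are easy to see for yourself. A minimal sketch assuming the tiktoken library (the encoding name matches several OpenAI models; other model families tokenize differently):

```python
# The model never sees raw text, only integer token IDs like these.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

text = "Large Language Models predict one token at a time."
ids = enc.encode(text)

print(f"{len(text.split())} words -> {len(ids)} tokens")
print(ids[:8])                               # what the model actually consumes
print([enc.decode([i]) for i in ids[:8]])    # the text fragment behind each ID
```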

Problem Solved: The IISE Podcast
Large Language Models: How Far We've Come with Dr. Joe Wilck

Nov 25, 2025 · 49:00


Large language models aren't just improving — they're transforming how we work, learn, and make decisions. In this episode of Problem Solved, IISE's David Brandt talks with Bucknell University's Dr. Joe Wilck about the true state of LLMs after the first 1,000 days: what's gotten better, what's still broken, and why critical thinking matters more than ever.

Thank you to this episode's sponsor, Autodesk FlexSim
https://www.autodesk.com/
https://www.flexsim.com/

Learn more about The Institute of Industrial and Systems Engineers (IISE)
Problem Solved on LinkedIn
Problem Solved on YouTube
Problem Solved on Instagram
Problem Solved on TikTok

Problem Solved Executive Producer: Elizabeth Grimes

Interested in contributing to the podcast or sponsoring an episode? Email egrimes@iise.org

AI + a16z
From Code Search to AI Agents: Inside Sourcegraph's Transformation with CTO Beyang Liu

Nov 25, 2025 · 46:35


Sourcegraph's CTO just revealed why 90% of his code now comes from agents—and why the Chinese models powering America's AI future should terrify Washington. While Silicon Valley obsesses over AGI apocalypse scenarios, Beyang Liu's team discovered something darker: every competitive open-source coding model they tested traces back to Chinese labs, and US companies have gone silent after releasing Llama 3. The regulatory fear that killed American open-source development isn't hypothetical anymore—it's already handed the infrastructure layer of the AI revolution to Beijing, one fine-tuned model at a time.

Resources:
Follow Beyang Liu on X: https://x.com/beyang
Follow Martin Casado on X: https://x.com/martin_casado
Follow Guido Appenzeller on X: https://x.com/appenz

Stay Updated:
If you enjoyed this episode, be sure to like, subscribe, and share with your friends!
Find a16z on X: https://x.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Listen to the a16z Podcast on Spotify: https://open.spotify.com/show/5bC65RDvs3oxnLyqqvkUYX
Listen to the a16z Podcast on Apple Podcasts: https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711
Follow our host: https://x.com/eriktorenberg

Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts.

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

Die Flowgrade Show mit Max Gotzler
#282: Next Level Longevity: So hilft KI bei Supplements, Routinen & Regeneration | mit Dr. Reiner Kraft

Nov 23, 2025 · 70:04


In this episode of the Flowgrade Show I speak with Dr. Reiner Kraft, one of Europe's leading technology experts, a biohacker, founder of the EverHealth platform, and someone who has worked for decades at the intersection of artificial intelligence, mindfulness, and health.

Reiner spent 20 years in Silicon Valley, co-developed more than 120 US patents, and was named one of MIT Technology Review's Top Innovators under 30. Today he combines his high-tech background with functional medicine and develops tools for the next level of preventive health.

We talk about how AI can influence our health, what Large Language Models (like ChatGPT) can really deliver, and where their limits lie. Reiner explains why data-driven prevention is the key to healthy longevity and gives deep insights into his new platform EverHealth, with which he wants to make "Functional Longevity" accessible to as many people as possible.

If you want to know how technology can support you in living healthier (without making you dependent on it), this episode is for you.

Enjoy listening.

Go for Flow!

ADmire!
Media Entrepreneur John Pasmore of Latimer.ai

Nov 22, 2025 · 26:59 · Transcription Available


In this episode, host Larry D. Woodard interviews John Pasmore, founder and CEO of Latimer.ai, an inclusive large language model designed to address bias in AI by being trained on a diverse dataset that includes the experiences, cultures, and histories of Black and Brown communities.

Thanks for listening. Don't forget to subscribe.

We Don't PLAY
Google Search Console (GSC) New! Branded and Non-Branded Queries + Annotation Filters | Marketing Talk with Favour Obasi-ike

Nov 21, 2025 · 64:58


Google Search Console (GSC) New! Branded and Non-Branded Queries + Annotation Filters | Marketing Talk with Favour Obasi-Ike | Sign up for exclusive SEO insights.

This episode focuses on Search Engine Optimization (SEO) and the new features within Google Search Console (GSC). Favour discusses the recently introduced brand queries and annotations features in GSC, highlighting their importance for understanding both branded and non-branded search behavior. The conversation also emphasizes the broader strategic use of GSC data, comparing it to a car's dashboard for website performance, and explores how this data can be leveraged to create valuable content, such as FAQ-based blog posts and multimedia assets, often with the aid of Artificial Intelligence (AI) tools. A key theme is the shift from traditional keyword ranking to ranking for user experience and the interconnectedness of various digital tools in modern marketing strategy.

--------------------------------------------------------------------------------

Next Steps for Digital Marketing + SEO Services:
>> Need SEO Services? Book a Complimentary SEO Discovery Call with Favour Obasi-Ike
>> Visit our Work and PLAY Entertainment website to learn about our digital marketing services.
>> Visit our Official website for the best digital marketing, SEO, and AI strategies today!
>> Join our exclusive SEO Marketing community
>> Read SEO Articles
>> Subscribe to the We Don't PLAY Podcast

--------------------------------------------------------------------------------

As a content strategist, you live with a fundamental uncertainty. You create content you believe your audience needs, but a nagging question always remains: are you hitting the mark? It often feels like you're operating with a blind spot, focusing on concepts while, as the experts say, "you don't even know the intention behind why they're asking or searching."

What if you could close that gap? What if your audience could tell you, explicitly, what they need you to create next?

That's the paradigm shift happening right now inside Google Search Console (GSC). Long seen as a technical tool, recent updates are transforming GSC into a strategic command center. It's no longer just for SEO specialists; it's the dashboard for your entire content operation. These new developments are a game-changer, revealing direct intelligence from your audience that will change how you plan, create, and deliver content.

Here are the five truths these new GSC features reveal—and how they give you a powerful competitive edge.

1. Stop Driving Your Website Blind: The Dashboard Analogy

Managing a website without GSC is like driving a car without a dashboard. You're moving, but you have no idea how fast you're going or if you're about to run out of fuel. GSC is that free, indispensable dashboard providing direct intelligence straight from Google. But the analogy runs deeper. As one strategist put it, driving isn't passive: "when you're driving, you got to hit the gas, you got to... hit the brakes... when do you stop, when do you go, what do you tweak? Do you go to a pit stop?"

"You wouldn't drive your car without looking at the dashboard. So you shouldn't have a website and drive traffic and do all the things we do without looking at GSC, right?"

Your content strategy requires the same active management: knowing when to accelerate, when to pivot, and when to optimize. The new features make this "dashboard" more intuitive than ever, giving you the controls you need to navigate with precision.

2. The Goldmine in Your Search Queries: Branded vs. Non-Branded

The first game-changing update is the new "brand queries" filter. For the first time, GSC allows you to easily separate searches for your specific brand name (branded) from searches for the topics and solutions you offer (non-branded). This is the first step in a powerful new workflow: Discovery.

Think of your non-branded queries as raw, unfiltered intelligence from your potential audience. These aren't just keywords; they're direct expressions of need. Instead of an abstract concept, you see tangible examples like:

• "best practices for washing dishes"
• "best pet shampoo"
• "best Thanksgiving turkey meal"

When you see more non-branded than branded queries, it's a powerful signal. It means you have access to a goldmine of raw material you can build content on to attract a wider audience that doesn't know your brand... yet. This isn't just data; it's a direct trigger for your next move (see the sketch after this description).

3. From Keyword to "Keynote": Creating Content with Context

Once you've discovered this raw material, the next step is Development. This is where you transform an unstructured keyword into a strategic asset by adding structure and meaning. It's a progression: a raw keyword becomes a more defined keyphrase, which can be built into a keystone concept, and ultimately refined into a keynote.

What's a keynote? Think about its real-world meaning: "when somebody sends you a note, it has context, right? It's supposed to mean something and it's supposed to say something specific." A keynote isn't just a search term; it's that term fully developed into a structured piece of content that delivers a specific, meaningful answer.

This strategic asset can take many forms:

• Blogs
• Podcast episodes
• Articles
• Newsletters
• Videos/Reels
• eBooks

4. The Most Underrated SEO Tactic: Your New Secret Weapon

You've discovered the query and developed it into a keynote. Now it's time for Execution. The single most effective format for executing on this strategy is one of the most powerful, yet underrated, SEO tactics in history: creating content around Frequently Asked Questions (FAQs).

The rise of Large Language Models (LLMs) has fundamentally changed search behavior. People are asking full, conversational questions, and search engines are prioritizing direct, authoritative answers. A "one blog per FAQ" strategy is the perfect response. It's a secret weapon that's almost shockingly effective.

"FAQ is the new awesome, the most awesome ever. I said that on purpose."

How awesome? By creating a single, targeted blog post for the long-tail question, "full roof replacement cost [city]," one site ranked number one on Google for that exact phrase in just 30 minutes. That's the power of directly answering a question your audience is already asking.

5. It's Not About New Features, It's About New Actions

The real purpose of these GSC updates isn't to give you more charts to observe; it's to prompt decisive action. Every non-branded query is a signal for what content to create next, feeding a powerful strategic loop that builds your authority over time.

This is where it all comes together in a professional content framework. As the source material notes, "That's why you have content pillars and you have content clusters." Your non-branded queries show you what clusters your audience needs, and your FAQ-style "keynotes" become the assets that build out those clusters around your core content pillars.

This data-driven approach empowers you to:

• Recreate outdated content with new, relevant insights.
• Repurpose core ideas into different formats to reach wider audiences.
• Re-evaluate which topics are truly resonating.
• Reemphasize your most valuable messages with fresh content.

Conclusion: What Does Your Dashboard Say?

Google Search Console is no longer just a reporting tool. It has evolved into an essential strategic partner that closes the gap between the content you produce and the value your audience is searching for. It's your direct line to understanding intent, allowing you to move from guessing what people want to knowing what they need.

Now that you know how to read your website's dashboard, what's the first turn you're going to make?

See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
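The branded/non-branded split described above can also be reproduced offline on a query export. A minimal sketch, assuming a pandas environment; the file name, column names, and brand pattern are all placeholders for your own data:

```python
# A sketch of the branded vs. non-branded split on a hypothetical GSC export
# with 'query' and 'clicks' columns. The brand terms are placeholders.
import pandas as pd

df = pd.read_csv("gsc_queries.csv")  # hypothetical export from Search Console

# Branded = searches that already contain your name; non-branded = topic demand.
df["branded"] = df["query"].str.contains(r"we don'?t play|favour obasi", case=False, regex=True)
non_branded = df[~df["branded"]]

# Question-shaped, non-branded queries are raw material for FAQ "keynote" posts.
faq = non_branded[non_branded["query"].str.match(r"(how|what|why|best|cost)", case=False)]
print(faq.sort_values("clicks", ascending=False).head(10))
```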

SlatorPod
#270 AI Translation State of the Art with Tom Kocmi and Alon Lavie

Nov 21, 2025 · 53:57


Tom Kocmi, Researcher at Cohere, and Alon Lavie, Distinguished Career Professor at Carnegie Mellon University, join Florian and Slator language AI Research Analyst Maria Stasimioti on SlatorPod to talk about the state of the art in AI translation and what the latest WMT25 results reveal about progress and remaining challenges.

Tom outlines how the WMT conference has become a crucial annual benchmark for assessing AI translation quality and ensuring systems are tested on fresh, demanding datasets. He notes that systems now face literary text, social-media language, ASR-noisy speech transcripts, and data selected through a difficulty-sampling algorithm. He stresses that these harder inputs expose far more system weaknesses than in previous years.

He adds that human translators also struggle, facing fatigue, time pressure, and constraints such as not being allowed to post-edit. He emphasizes that human-parity claims are unreliable and highlights the need for improved human evaluation design.

Alon underscores that harder test data also challenges evaluators. He explains that segment-level scoring is now more difficult, and even human evaluators miss different subsets of errors. He highlights that automated metrics built on earlier-era training data underperformed, particularly COMET, because they absorbed their own biases.

He reports that the strongest performers in the evaluation task were reasoning-capable large language models (LLMs), either lightly prompted or submitted with elaborate evaluation-specific prompting. He notes that while these LLM-as-judge setups outperformed traditional neural metrics overall, their segment-level performance varied.

Tom points out that the translation task also revealed notable progress from smaller academic models around 9B parameters, some ranking near trillion-parameter frontier models. He sees this as a sign that competitive research is still widely accessible.

The duo concludes that evaluators must choose evaluation methods carefully, avoid assessing models with the same metric used during training, and adopt LLM-based judging for more reliable assessments.
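For readers curious what an "LLM-as-judge" setup looks like in practice, here is a minimal sketch assuming the OpenAI Python SDK; the model choice and rubric are illustrative, not the WMT25 submissions themselves:

```python
# Minimal LLM-as-judge sketch: a reasoning-capable chat model scores one
# translation segment instead of a trained neural metric like COMET.
from openai import OpenAI

client = OpenAI()

def judge_translation(source: str, translation: str) -> str:
    prompt = (
        "Rate the following translation from 0 (unusable) to 100 (perfect).\n"
        "List any accuracy or fluency errors first, then give the final score.\n\n"
        f"Source: {source}\nTranslation: {translation}"
    )
    response = client.chat.completions.create(
        model="gpt-4o",        # assumption: any strong general-purpose model
        temperature=0,         # deterministic scoring for repeatability
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```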

This Week in Google (MP3)
IM 846: Chivelord - From Leather-Bound to Cloud Powered

Nov 20, 2025 · 178:20


What does it take to build tech the world actually trusts? Wikipedia founder Jimmy Wales joins the crew to dig into the real crisis behind AI, social networks, and the web: trust, and how to build it when the stakes are global.

Teen founders raise $6M to reinvent pesticides using AI — and convince Paul Graham to join in
Introducing SlopStop: Community-driven AI slop detection in Kagi Search
Part 1: How I Found Out
$1 billion AI company co-founder admits that its $100 a month transcription service was originally 'two guys surviving on pizza' and typing out notes by hand
His announcement leaving Meta
White House Working on Executive Order to Foil State AI Regulations
Nvidia stock soars after results, forecasts top estimates with sales for AI chips 'off the charts'
Jeff Bezos Creates A.I. Start-Up Where He Will Be Co-Chief Executive
Jack Conte: I'm Building an Algorithm That Doesn't Rot Your Brain
AI love, actually
Cat island road trip: liquidator's warehouse
Gentype
The Carpenter's Son...
My excerpt from the Q&A
Image of the paper

Hosts: Leo Laporte, Jeff Jarvis, and Paris Martineau
Guest: Jimmy Wales

Download or subscribe to Intelligent Machines at https://twit.tv/shows/intelligent-machines.

Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit

Sponsors:
ventionteams.com/twit
zapier.com/machines
agntcy.org
spaceship.com/twit

ESC TV Today – Your Cardiovascular News
Season 3 - Ep.27: Extended interview on 'ChatGPT, MD?': large language models at the bedside

Nov 20, 2025 · 10:00


Host: Emer Joyce
Guest: Folkert Asselbergs

Want to watch that episode? Go to: https://esc365.escardio.org/event/2179
Want to watch the extended interview on 'ChatGPT, MD?': Large Language Models at the Bedside? Go to: https://esc365.escardio.org/event/2179?resource=interview

Disclaimer: ESC TV Today is supported by Bristol Myers Squibb and Novartis through independent funding. The programme has not been influenced in any way by its funding partners. This programme is intended for health care professionals only and is to be used for educational purposes. The European Society of Cardiology (ESC) does not aim to promote medicinal products nor devices. Any views or opinions expressed are the presenters' own and do not reflect the views of the ESC. The ESC is not liable for any translated content of this video. The English language always prevails.

Declarations of interests:
Stephan Achenbach, Folkert Asselbergs, Yasmina Bououdina, Emer Joyce, and Nicolle Kraenkel have declared to have no potential conflicts of interest to report.
Carlos Aguiar has declared to have potential conflicts of interest to report: personal fees for consultancy and/or speaker fees from Abbott, AbbVie, Alnylam, Amgen, AstraZeneca, Bayer, BiAL, Boehringer-Ingelheim, Daiichi-Sankyo, Ferrer, Gilead, GSK, Lilly, Novartis, Pfizer, Sanofi, Servier, Takeda, Tecnimede.
John-Paul Carpenter has declared to have potential conflicts of interest to report: stockholder MyCardium AI.
Davide Capodanno has declared to have potential conflicts of interest to report: Bristol Myers Squibb, Daiichi Sankyo, Sanofi Aventis, Novo Nordisk, Terumo.
Konstantinos Koskinas has declared to have potential conflicts of interest to report: honoraria from MSD, Daiichi Sankyo, Sanofi.
Steffen Petersen has declared to have potential conflicts of interest to report: consultancy for Circle Cardiovascular Imaging Inc., Calgary, Alberta, Canada.
Emma Svennberg has declared to have potential conflicts of interest to report: Abbott, Astra Zeneca, Bayer, Bristol-Myers Squibb-Pfizer, Johnson & Johnson.

ESC TV Today – Your Cardiovascular News
Season 3 - Ep.27: 'ChatGPT, MD?': large language models at the bedside - Management decisions in myocarditis

Nov 20, 2025 · 23:24


This episode covers:
Cardiology This Week: A concise summary of recent studies
'ChatGPT, MD?': Large Language Models at the Bedside
Management decisions in myocarditis
Statistics Made Easy: Mendelian randomisation

Host: Emer Joyce
Guests: Carlos Aguiar, Folkert Asselbergs, Massimo Imazio

Want to watch that episode? Go to: https://esc365.escardio.org/event/2179
Want to watch the extended interview on 'ChatGPT, MD?': Large Language Models at the Bedside? Go to: https://esc365.escardio.org/event/2179?resource=interview

Disclaimer: ESC TV Today is supported by Bristol Myers Squibb and Novartis through independent funding. The programme has not been influenced in any way by its funding partners. This programme is intended for health care professionals only and is to be used for educational purposes. The European Society of Cardiology (ESC) does not aim to promote medicinal products nor devices. Any views or opinions expressed are the presenters' own and do not reflect the views of the ESC. The ESC is not liable for any translated content of this video. The English language always prevails.

Declarations of interests:
Stephan Achenbach, Folkert Asselbergs, Yasmina Bououdina, Massimo Imazio, Emer Joyce, and Nicolle Kraenkel have declared to have no potential conflicts of interest to report.
Carlos Aguiar has declared to have potential conflicts of interest to report: personal fees for consultancy and/or speaker fees from Abbott, AbbVie, Alnylam, Amgen, AstraZeneca, Bayer, BiAL, Boehringer-Ingelheim, Daiichi-Sankyo, Ferrer, Gilead, GSK, Lilly, Novartis, Pfizer, Sanofi, Servier, Takeda, Tecnimede.
John-Paul Carpenter has declared to have potential conflicts of interest to report: stockholder MyCardium AI.
Davide Capodanno has declared to have potential conflicts of interest to report: Bristol Myers Squibb, Daiichi Sankyo, Sanofi Aventis, Novo Nordisk, Terumo.
Konstantinos Koskinas has declared to have potential conflicts of interest to report: honoraria from MSD, Daiichi Sankyo, Sanofi.
Steffen Petersen has declared to have potential conflicts of interest to report: consultancy for Circle Cardiovascular Imaging Inc., Calgary, Alberta, Canada.
Emma Svennberg has declared to have potential conflicts of interest to report: Abbott, Astra Zeneca, Bayer, Bristol-Myers Squibb-Pfizer, Johnson & Johnson.

This Podcast is a Ritual
What is a Large Language Model?

Nov 19, 2025 · 44:20


Large Language Models, or LLMs, are infiltrating every facet of our society, but do we even understand what they are? In this fascinating deep dive into the intersection of technology, language, and consciousness, the Wizard offers a few new ways of perceiving these revolutionary—and worrying—systems.

Got a question for the Wizard? Call the Wizard Hotline at 860-415-6009 and have it answered in a future episode!

Join the ritual: www.patreon.com/thispodcastisaritual

Health and Explainable AI Podcast
Dennis Wei from IBM on In-Context Explainability and the Future of Trustworthy AI

Nov 19, 2025 · 24:50


Dennis Wei, Senior Research Scientist at IBM specializing in human-centered trustworthy AI, speaks with Pitt's HexAI podcast host, Jordan Gass-Pooré, about his work focusing on trustworthy machine learning, including interpretability of machine learning models, algorithmic fairness, robustness, causal inference, and graphical models.

Concentrating on explainable AI, they speak in depth about the explainability of Large Language Models (LLMs), the field of in-context explainability, and IBM's new In-Context Explainability 360 (ICX360) toolkit. They explore research project ideas for students and touch on the personalization of explainability outputs for different users and on leveraging explainability to help guide and optimize LLM reasoning. They also discuss IBM's interest in collaborating with university labs around explainable AI in healthcare, and related work at IBM on the steerability of LLMs and on combining explainability and steerability to evaluate model modifications.

This episode provides a deep dive into explainable AI, exploring how the field's cutting-edge research is contributing to more trustworthy applications of AI in healthcare. The discussion also highlights emerging research directions ideal for stimulating new academic projects and university-industry collaborations.

Guest profile: https://research.ibm.com/people/dennis-wei
ICX360 Toolkit: https://github.com/IBM/ICX360

Problem Solved: The IISE Podcast
Trailer | Large Language Models: How Far We've Come

Nov 18, 2025 · 1:00


Large language models aren't just improving — they're transforming how we work, learn, and make decisions. In this upcoming episode of Problem Solved, IISE's David Brandt talks with Bucknell University's Dr. Joe Wilck about the true state of LLMs after the first 1,000 days: what's gotten better, what's still broken, and why critical thinking matters more than ever.

Sponsored by Autodesk FlexSim

Learn more about The Institute of Industrial and Systems Engineers (IISE)
Problem Solved on LinkedIn
Problem Solved on YouTube
Problem Solved on Instagram
Problem Solved on TikTok

Problem Solved Executive Producer: Elizabeth Grimes

Interested in contributing to the podcast or sponsoring an episode? Email egrimes@iise.org

Mystery AI Hype Theater 3000
Drag It All To Hell, 2025.10.27

Nov 18, 2025 · 56:44 · Transcription Available


It's been six months since our last all-Hell episode! In honor of Halloween season, we take a long journey into the very scary Fresh AI Hell mines. Topics include terrifying uses of AI in education, scientific research, and politics — plus, some delicious palate cleansers along the way.

AI bubble: bigger than dot-com bust?
No one wants to pay for ChatGPT
Meta lays off 600 from AI unit
AI data centers: an even bigger disaster than we thought
Public universities anticipate data center-driven power outages
Chaser: Deloitte has to pay back Albanese government after using AI in report

"AI" schools are "dead classrooms"
Fake sources in "ethical AI" education report
Parents letting kids play with AI
Startup sells 'synthetic influencers'
AI-powered textbooks fail to make the grade
Chaser: "High-reliability" AI slop

Nature offers "AI-powered research assistant"
AI bots wrote all papers at this conference
"AI" reviewing at AAAI
AI medical tools downplay symptoms in women and POC
Therapists are secretly using ChatGPT
Chaser: Microsoft blocks Israel's use of its technology

German initiative uses "AI" for voter education
Police gunshot detection mics will listen for human voices
SF's AI chatbot for RV dwellers
Cuomo campaign posts racist AI slop
DHS Ordered OpenAI To Share User Data
Chaser: LA County moves to limit license plate tracking

A new form of eugenics
"AI Superintelligence" prohibition letter
Emad Mostaque's LLM blurbs
Prizes must recognize machine contributions to discovery
Chaser: The hot new trend in marketing: hating on AI

Check out future streams on Twitch. Meanwhile, send us any AI Hell you see.
Our book, 'The AI Con,' is out now! Get your copy now.
Subscribe to our newsletter via Buttondown.

Follow us!
Emily Bluesky: emilymbender.bsky.social Mastodon: dair-community.social/@EmilyMBender
Alex Bluesky: alexhanna.bsky.social Mastodon: dair-community.social/@alex Twitter: @alexhanna

Music by Toby Menon. Artwork by Naomi Pleasure-Park. Production by Ozzy Llinas Goodman.

The Agile Embedded Podcast
Crossover with Embedded AI Podcast

Nov 18, 2025 · 55:02


In this special crossover episode with the brand-new Embedded AI Podcast, Luca and Jeff are joined by Ryan Torvik, Luca's co-host on the Embedded AI podcast, to explore the intersection of AI-powered development tools and agile embedded systems engineering. The hosts discuss practical strategies for using Large Language Models (LLMs) effectively in embedded development workflows, covering topics like context management, test-driven development with AI, and maintaining code quality standards in safety-critical systems. The conversation addresses common anti-patterns that developers encounter when first adopting LLM-assisted coding, such as "vibe coding" yourself off a cliff by letting the AI generate too much code at once, losing control of architectural decisions, and failing to maintain proper test coverage. The hosts emphasize that while LLMs can dramatically accelerate prototyping and reduce boilerplate coding, they require even more rigorous engineering discipline - not less. They discuss how traditional agile practices like small commits, continuous integration, test-driven development, and frequent context resets become even more critical when working with AI tools.

For embedded systems engineers working in safety-critical domains like medical devices, automotive, and aerospace, the episode provides valuable guidance on integrating AI tools while maintaining deterministic quality processes. The hosts stress that LLMs should augment, not replace, static analysis tools and human code reviews, and that developers remain fully responsible for AI-generated code. Whether you're just starting with AI-assisted development or looking to refine your approach, this episode offers actionable insights for leveraging LLMs effectively while keeping the reins firmly in hand.

## Key Topics

* [03:45] LLM Interface Options: Web, CLI, and IDE Plugins - Choosing the Right Tool for Your Workflow
* [08:30] Prompt Engineering Fundamentals: Being Specific and Iterative with LLMs
* [12:15] Building Effective Base Prompts: Learning from Experience vs. Starting from Templates
* [16:40] Context Window Management: Avoiding Information Overload and Hallucinations
* [22:10] Understanding LLM Context: Files, Prompts, and Conversation History
* [26:50] The Nature of Hallucinations: Why LLMs Always Generate, Never Judge
* [29:20] Test-Driven Development with AI: More Critical Than Ever
* [35:45] Avoiding 'Vibe Coding' Disasters: The Importance of Small, Testable Increments
* [42:30] Requirements Engineering in the AI Era: Becoming More Specific About What You Want
* [48:15] Extreme Programming Principles Applied to LLM Development: Small Steps and Frequent Commits
* [52:40] Context Reset Strategies: When and How to Start Fresh Sessions
* [56:20] The V-Model Approach: Breaking Down Problems into Manageable LLM-Sized Chunks
* [01:01:10] AI in Safety-Critical Systems: Augmenting, Not Replacing, Deterministic Tools
* [01:06:45] Code Review in the AI Age: Maintaining Standards Despite Faster Iteration
* [01:12:30] Prototyping vs. Production Code: The Superpower and the Danger
* [01:16:50] Shifting Left with AI: Empowering Product Owners and Accelerating Feedback Loops
* [01:19:40] Bootstrapping New Technologies: From Zero to One in Minutes Instead of Weeks
* [01:23:15] Advice for Junior Engineers: Building Intuition in the Age of AI-Assisted Development

## Notable Quotes

> "All of us are new to this experience. Nobody went to school back in the 80s and has been doing this for 40 years. We're all just running around, bumping into things and seeing what works for us." — Ryan Torvik

> "An LLM is just a token generator. You stick an input in, and it returns an output, and it has no way of judging whether this is correct or valid or useful. It's just whatever it generated. So it's up to you to give it input data that will very likely result in useful output data." — Luca Ingianni

> "Tests tell you how this is supposed to work. You can have it write the test first and then evaluate the test. Using tests helps communicate - just like you would to another person - no, it needs to function like this, it needs to have this functionality and behave in this way." — Ryan Torvik

> "I find myself being even more aggressively biased towards test-driven development. While I'm reasonably lenient about the code that the LLM writes, I am very pedantic about the tests that I'm using. I will very thoroughly review them and really tweak them until they have the level of detail that I'm interested in." — Luca Ingianni

> "It's really forcing me to be a better engineer by using the LLM. You have to go and do that system level understanding of the problem space before you actually ask the LLM to do something. This is what responsible people have been saying - this is how you do engineering." — Ryan Torvik

> "I can use LLMs to jumpstart me or bootstrap me from zero to one. Once there's something on the screen that kind of works, I can usually then apply my general programming skill, my general engineering taste to improve it. Getting from that zero to one is now not days or weeks of learning - it's 20 minutes of playing with it." — Jeff Gable

> "LLMs are fantastic at small-scale stuff. They will be wonderful at finding better alternatives for how to implement a certain function. But they are absolutely atrocious at large-scale stuff. They will gleefully mess up your architecture and not even notice because they cannot fit it into their tiny electronic brains." — Luca Ingianni

> "Don't be afraid to try it out. We're all noobs to this. This is the brave noob world of AI exploration. Be curious about it, but also be cautious about it. Don't ever take your hands off the reins. Trust your engineering intuition - even young folks that are just starting, trust your engineering intuition." — Ryan Torvik

> "As the saying goes, good judgment comes from experience. Experience comes from bad judgment. You'll find spectacular ways of messing up - that is how you become a decent engineer. LLMs do not change that. Junior engineers will still be necessary, will still be around, and they will still evolve into senior engineers eventually after they've fallen on their faces enough times." — Luca Ingianni

You can find Jeff at https://jeffgable.com.
You can find Luca at https://luca.engineer.
Want to join the agile Embedded Slack? Click here
Are you looking for embedded-focused trainings? Head to https://agileembedded.academy/
Ryan Torvik and Luca have started the Embedded AI podcast, check it out at https://embeddedaipodcast.com/
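As a concrete illustration of the test-first workflow quoted above, here is a minimal sketch: the human writes and pedantically reviews a small test suite first, then asks the LLM to produce an implementation that makes it pass. The ring buffer (a staple of embedded firmware) and all names below are hypothetical examples, not code from the episode.

```python
# A minimal sketch of the test-first LLM workflow the hosts describe.
# The human owns the tests; the implementation is what you would ask the
# LLM to generate, and then review, once the tests below are pinned down.

class RingBuffer:
    """Fixed-size FIFO buffer, as commonly used in embedded firmware.

    In the workflow discussed, this class is the LLM-generated part:
    it must satisfy the human-authored tests, and it still gets reviewed.
    """

    def __init__(self, capacity: int):
        self._buf = [None] * capacity
        self._head = 0   # index of the next item to read
        self._size = 0   # number of items currently stored

    def push(self, item):
        if self._size == len(self._buf):
            raise OverflowError("buffer full")
        tail = (self._head + self._size) % len(self._buf)
        self._buf[tail] = item
        self._size += 1

    def pop(self):
        if self._size == 0:
            raise IndexError("buffer empty")
        item = self._buf[self._head]
        self._head = (self._head + 1) % len(self._buf)
        self._size -= 1
        return item


# Human-authored tests: small, specific, reviewed line by line before any
# LLM-generated implementation is accepted.
def test_fifo_order():
    rb = RingBuffer(3)
    rb.push(1)
    rb.push(2)
    rb.push(3)
    assert [rb.pop(), rb.pop(), rb.pop()] == [1, 2, 3]


def test_overflow_is_an_error():
    rb = RingBuffer(1)
    rb.push("a")
    try:
        rb.push("b")
        assert False, "expected OverflowError"
    except OverflowError:
        pass


if __name__ == "__main__":
    test_fifo_order()
    test_overflow_is_an_error()
    print("all tests pass")
```

The tests double as the "requirements" the hosts talk about: they communicate to the LLM, just as they would to another person, exactly how the code needs to behave.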

Warrior Mindset
Bushido vs. The Algorithm: A Warrior's Guide to AI

Warrior Mindset

Play Episode Listen Later Nov 17, 2025 75:46


Can we use AI to enhance human connection rather than replace it? In Part 2 of our AI series, Shekeese and I explore how humanist technology, rooted in ethics, education, and responsible use, can transform our relationship with AI. From the dangers of “AI slop” to the potential for real, inquiry-based interaction, we highlight why teaching students to treat AI like a tool, not a replacement, is crucial. We dig into tech company accountability, regulation, social media's societal role, and even the environmental toll of data centers. This isn't a tech utopia; it's a call for wiser integration, grounded in understanding, purpose, and human values.

--------- EPISODE CHAPTERS ---------

(0:00:02) - Navigating Humanist Technology and AI
(0:04:53) - AI Technology Education and Misuse
(0:11:53) - Tech Education and Government Regulation
(0:16:25) - The Impact of Social Media Regulation
(0:23:12) - Regulating Technology and Online Behavior
(0:32:58) - Tech Founders Impact Regulation
(0:41:05) - Navigating Internet Regulation Debate
(0:53:08) - The Impact of Data Centers
(0:59:29) - Tech Regulation and Future Development
(1:14:10) - Promoting a Warrior Mindset

Send us a text

New Money Review podcast
Unseen Money 14—the AI malware threat

New Money Review podcast

Play Episode Listen Later Nov 13, 2025 28:50


Last week, Google's threat intelligence group warned that artificial intelligence (AI) is making malware attacks more dangerous. [Malware is malicious software—programmes designed to disrupt, damage or gain unauthorised access to computer systems—usually delivered via phishing emails, compromised websites or infected downloads.]

“Adversaries are no longer leveraging artificial intelligence (AI) just for productivity gains, they are deploying novel AI-enabled malware in active operations,” Google said in a 5,000-word blog.

Are malware programmes using Large Language Models (LLMs) to dynamically generate malicious scripts, obfuscate their own code to evade detection, and leverage AI models to create malicious functions on demand, as Google warns? Or is this yet another case of tech firms selling solutions to a problem they have created themselves?

Listen to the latest episode of Unseen Money from New Money Review, featuring co-hosts Timur Yunusov and Paul Amery, to hear more about the effect of AI malware.

In the podcast, we cover:

Google's warning about the rise of AI malware – reality or hype? (2' 35”)
Why LLMs were originally protected from harmful behaviour (4' 10”)
How criminals learned to develop LLMs without guardrails (4' 55”)
Model context protocols (MCPs) and AI agents as offensive tools (5' 30”)
Malicious payloads and web application firewalls (7' 35”)
Tricking LLMs by exploiting the wide range of input variables (8' 30”)
The state of the art for fraudsters when using LLMs (10' 10”)
Timur used AI to learn how to drain funds from a stolen phone (11' 05”)
How worried is Timur about the rise of AI malware? (14' 20”)
AI has dramatically reduced the cost and increased the speed of producing malware (15')
AI, teenage suicides and protecting users (16' 50”)
AI for good: using AI to combat AI malware (19')
How a Russian bank used AI chatbots to divert fraudsters (19' 40”)
Data poisoning—manipulating the training data for AI models (22' 10”)
Techniques for tricking LLMs (23')
Only state actors can manipulate AI models at scale (25' 40”)
The use of SMS blasters by fraudsters is exploding! (27')

Product for Product Management
EP 141 - AI Tools Kickoff with Matt and Moshe

Product for Product Management

Play Episode Listen Later Nov 12, 2025 33:20


We're excited to launch a brand-new series on the Product for Product Podcast, with Matt and Moshe diving deep into the world of AI tools for product managers. In this special episode, we set the stage for upcoming conversations by exploring how AI is becoming an indispensable partner in every stage of the product management journey.

Join us as Matt and Moshe discuss:

The rapidly evolving role of AI throughout the product management workflow, from idea generation and discovery to strategy, prioritization, delivery, launch, and ongoing monitoring
The importance of using AI as a tool for knowledge and insight, rather than replacing critical thinking and understanding
How product managers can leverage Large Language Models (LLMs) for research, writing, and scenario planning
The realities and limitations of today's AI tools, including the challenges of ensuring accuracy and context in product work
Exploring the promise of AI platforms for rapid prototyping and MVP testing
How AI can help bridge the gap between prototyping and actually building production-ready products
Using AI to inform strategic decisions, pricing, packaging, prioritization, and risk assessment
Integrating AI into your board and backlog systems for smarter feedback synthesis and decision-making
Enhancing sprint-based development with AI-generated user stories, acceptance criteria, and more
Upcoming content around data consolidation, go-to-market strategies, and ways AI is changing the PM discipline
And much more!

Whether you're just starting to experiment with AI or looking to deepen how you use it in your product practice, this series is for you. Stay tuned for practical examples, case studies, and discussions that will help you harness the latest AI tools, while remembering that the best PMs know how to balance tech innovation with human judgment.

Connect with us and follow the rest of the series:
Product for Product Podcast: http://linkedin.com/company/product-for-product-podcast
Matt Green: https://www.linkedin.com/in/mattgreenproduct
Moshe Mikanovsky: http://www.linkedin.com/in/mikanovsky

Note: Any views mentioned in the podcast are the sole views of our hosts and guests, and do not represent the products mentioned in any way.

Please leave us a review and feedback ⭐️⭐️⭐️⭐️⭐️

SlatorPod
#269 Milestone Localization Founder on Automated Glossaries, LSI Leadership, AI Fatigue

SlatorPod

Play Episode Listen Later Nov 10, 2025 30:49


Nikita Agarwal, Founder of Milestone Localization, joins SlatorPod to talk about her journey founding a language solutions integrator (LSI) and launching Cavya.ai, a platform designed to streamline translation project preparation.Nikita began Milestone Localization in 2020 after discovering the language industry while working in international sales. She was drawn to the field's global scope and low barrier to entry. She emphasizes that sales experience played a crucial role in landing early clients and understanding the value of hiring people from within the industry. The founder reflects on the past 16 months as a period of intense change marked by AI disruption, client pressure on pricing, and shifting expectations. She highlights how regulated sectors like life sciences have helped stabilize the company amid volatility. She details how the LSI specializes in medical device translations and regulatory submissions across Europe.Nikita explains that her new platform, Cavya.ai, emerged from internal needs to improve project preparation. She says the tool automates glossaries, style guides, and document analysis, reducing time and boosting consistency for small and mid-sized projects.The founder shares her observations on India's evolving language technology landscape, noting significant progress in AI for major Indian languages. She says increased internet access and AI-driven localization are expanding education and job opportunities across the country.Nikita concludes that she sees the future in expanding life sciences work, refining Cavya, and developing an AI-powered QA tool. She notes that some clients are showing “AI fatigue” and returning to human-led workflows.

a16z
Amjad Masad & Adam D'Angelo: How Far Are We From AGI?

a16z

Play Episode Listen Later Nov 7, 2025 62:44


Adam D'Angelo (Quora/Poe) thinks we're 5 years from automating remote work. Amjad Masad (Replit) thinks we're brute-forcing intelligence without understanding it.

In this conversation, two technical founders who are building the AI future disagree on almost everything: whether LLMs are hitting limits, if we're anywhere close to AGI, and what happens when entry-level jobs disappear but experts remain irreplaceable. They dig into the uncomfortable reality that AI might create a "missing middle" in the job market, why everyone in SF is suddenly too focused on getting rich to do weird experiments, and whether consciousness research has been abandoned for prompt engineering.

Plus: Why coding agents can now run for 20+ hours straight, the return of the "sovereign individual" thesis, and the surprising sophistication of everyday users juggling multiple AIs.

Resources:
Follow Amjad on X: https://x.com/amasad
Follow Adam on X: https://x.com/adamdangelo

Stay Updated: If you enjoyed this episode, be sure to like, subscribe, and share with your friends!
Find a16z on X: https://x.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Listen to the a16z Podcast on Spotify: https://open.spotify.com/show/5bC65RDvs3oxnLyqqvkUYX
Listen to the a16z Podcast on Apple Podcasts: https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711
Follow our host: https://x.com/eriktorenberg

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

The Next 100 Days Podcast
#500 - John Bush - Answer Engine Optimisation

The Next 100 Days Podcast

Play Episode Listen Later Nov 7, 2025 51:14


John Bush has created a way for businesses to be seen by Large Language Models, called Answer Engine Optimisation.

Summary of Podcast

Podcast milestone and background
Kevin and Graham discuss reaching their 500th episode of The Next 100 Days podcast. They reflect on their journey over the past 10 years and how the podcast has evolved. They introduce their guest, John Bush, an expert in "Answer Engine Optimisation" (AEO). John will discuss how businesses can optimise their content for AI-powered search engines like ChatGPT.

John's background and AEO concept
John shares his background, including his experience in telecom, startups, and cloud infrastructure. He explains how he became interested in AEO after seeing the impact of AI on his marketing consultant friend's business. John describes the process of developing an AEO analysis tool. The tool evaluates websites on factors like visibility, accessibility, and authority. The outcome is that businesses can make their content more searchable and usable by large language models.

The changing landscape of search and AI
Kevin and John discuss the declining importance of traditional Google search and the growing prominence of AI-powered search tools like ChatGPT. They explore how businesses need to adapt their content and website structure to be more easily understood and referenced by these new search engines, rather than just optimising for Google.

Practical applications of AEO
John demonstrates a tool his team has developed that can automatically analyse a company's competitive landscape and provide insights based on the data, without relying on the company to manually gather and synthesise the information. He explains how this type of AI-powered analysis can be applied to various business functions, such as RFP responses and lifetime value calculations.

Challenges and considerations around AI-generated content
Kevin raises concerns about the potential risks of using AI-generated content, such as the ability to verify the accuracy and provenance of the information. John discusses efforts to address these issues, including watermarking content and providing audit trails for AI-powered decisions.

The future of AI in business
John and Kevin discuss the broader implications of AI in the enterprise. They cover the importance of data stewardship, security, and the role of human expertise in augmenting AI capabilities. They explore how AI can be used to automate and enhance various business processes, while also highlighting the need to carefully manage the integration of these technologies.

Wrap-up and reflections on the podcast
Kevin and Graham reflect on the evolution of The Next 100 Days podcast over the past 10 years, noting the shift in focus towards AI and technology. They express their enthusiasm for continuing the podcast and exploring the latest developments in this rapidly changing landscape.

The Next 100 Days Podcast Co-Hosts

Graham Arrowsmith
Graham founded Finely Fettled ten years ago to help business owners and marketers market to affluent and high-net-worth customers. He's the founder of MicroYES, a Partner for MeclabsAI, where he introduces AI Agents that you can talk to, that increase engagement, dwell time, leads and conversions. Now, Graham is offering Answer Engine Optimisation that gets you...
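To make the "accessibility" factor described above concrete, here is a minimal sketch (an assumed illustration, not John's actual tool) of one check an AEO audit might run: whether a site's robots.txt allows known AI crawlers to fetch a page at all. It uses Python's standard urllib.robotparser; GPTBot, ClaudeBot and Google-Extended are real crawler user-agent tokens, while the domain is a placeholder.

```python
# A minimal sketch of an AEO "accessibility" check: if AI crawlers are blocked
# by robots.txt, answer engines may never see your content in the first place.
from urllib import robotparser

# Real AI crawler user-agent tokens; extend the list as needed.
AI_CRAWLERS = ["GPTBot", "ClaudeBot", "Google-Extended"]


def check_ai_access(site: str, page: str = "/") -> dict:
    """Return {crawler: allowed?} for `page`, per the site's robots.txt."""
    base = site.rstrip("/")
    rp = robotparser.RobotFileParser()
    rp.set_url(f"{base}/robots.txt")
    rp.read()  # fetches and parses robots.txt over the network
    return {bot: rp.can_fetch(bot, f"{base}{page}") for bot in AI_CRAWLERS}


if __name__ == "__main__":
    # Hypothetical domain; swap in your own site to audit it.
    print(check_ai_access("https://example.com"))
```

A real audit would of course go further (structured data, crawlable HTML, authority signals), but blocked crawlers are the cheapest problem to find and fix.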

a16z
ElevenLabs CEO: Why Voice is the Next AI Interface

a16z

Play Episode Listen Later Nov 4, 2025 30:50


ElevenLabs CEO and co‑founder Mati Staniszewski joins Jennifer Li to explain how the team ships research‑grade AI at lightning speed—from text‑to‑speech and fully licensed AI music to real‑time voice agents—and why voice is the next interface for human‑computer interaction. He shares the small, autonomous team model, global hiring approach, and how the Voice Marketplace has paid creators over $10M while evolving into an enterprise platform.

Resources:
Follow Mati on X: https://x.com/matistanis
Follow Jennifer on X: https://x.com/JenniferHli

Stay Updated: If you enjoyed this episode, be sure to like, subscribe, and share with your friends!
Find a16z on X: https://x.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Listen to the a16z Podcast on Spotify: https://open.spotify.com/show/5bC65RDvs3oxnLyqqvkUYX
Listen to the a16z Podcast on Apple Podcasts: https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711
Follow our host: https://x.com/eriktorenberg

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

Causal Bandits Podcast
The Causal Gap: Truly Responsible AI Needs to Understand the Consequences | Zhijing Jin S2E7

Causal Bandits Podcast

Play Episode Listen Later Oct 30, 2025 63:17


Send us a text

The Causal Gap: Truly Responsible AI Needs to Understand the Consequences

Why do LLMs systematically drive themselves to extinction, and what does it have to do with evolution, moral reasoning, and causality? In this brand-new episode of Causal Bandits, we meet Zhijing Jin (Max Planck Institute for Intelligent Systems, University of Toronto) to answer these questions and look into the future of automated causal reasoning.

In this episode, we discuss:
- Zhijing's new work on the "causal scientist"
- What's missing in responsible AI
- Why ethics matter for agentic systems
- Is causality a necessary element of moral reasoning?

------------------------------------------------------------------------------------------------------

Video version available on YouTube: https://youtu.be/Frb6eTW2ywk
Recorded on Aug 18, 2025 in Tübingen, Germany.

------------------------------------------------------------------------------------------------------

About The Guest

Zhijing Jin is a research scientist at the Max Planck Institute for Intelligent Systems and an incoming Assistant Professor at the University of Toronto. Her work focuses on causality, natural language, and ethics, in particular in the context of large language models and multi-agent systems. Her work has received multiple awards, including a NeurIPS Best Paper award, and has been featured in CHIP Magazine, WIRED, and MIT News. She grew up in Shanghai and is currently preparing to open her new research lab at the University of Toronto.

Support the show

Causal Bandits Podcast
Causal AI || Causal Machine Learning || Causal Inference & Discovery
Web: https://causalbanditspodcast.com
Connect on LinkedIn: https://www.linkedin.com/in/aleksandermolak/
Join Causal Python Weekly: https://causalpython.io
The Causal Book: https://amzn.to/3QhsRz4

The Critical Thinking Initiative
Free Your Brain from ChatGPT "Thinking"

The Critical Thinking Initiative

Play Episode Listen Later Oct 27, 2025 10:25


If you're someone who values being able to think independently, then you should be troubled by the fact that your brain operates all too much like ChatGPT. I'm going to explain how that undermines your ability to think for yourself, but I'll also give you a key way to change it.

How ChatGPT “Thinks”

Let's first understand how ChatGPT “thinks.” ChatGPT is one of several artificial intelligences called a Large Language Model, or LLM. All LLMs use bulk sources of language—like articles and blogs they find on the internet—to find trends in what words are most likely to follow other words. To do so, they identify key words that stand out as most likely to lead to other words. Those key words are called “tokens.” Tokens are the words that cue the LLM to look for other words.

So, as a simple example for the sake of argument, let's say we ask an LLM, “what do politicians care about most?” When the LLM receives that question, it creates two tokens: “politicians” and “care.” The rest of the words are irrelevant. Then, the LLM scours the internet for its two tokens. Though I did not run this through an LLM, it might find that the words most likely to follow the sequence [politicians]>[care] are: “constituents,” “money,” and “good publicity.”

But because LLMs only return what is probabilistically likely to follow what they identify as their tokens, an LLM probably would not come up with [politicians]>[care about] moon rocks, because the internet does not already have many sentences where the words “moon rocks” follow the token sequence “politicians” and “care.”

Thus, LLMs, though referred to as Artificial Intelligence, really are not intelligent at all, at least not in this particular respect. They really just quickly scour the internet for words that are statistically likely to follow other “token” words, and they really cannot determine the particular value, correctness, or importance of the words that follow those tokens. In other words, they cannot drum up smart, clever, unique, or original ideas. They can only lumber their way toward identifying statistically likely word patterns. If we were to write enough articles that said “politicians care about moon rocks,” the LLMs would return “moon rocks” as the answer even though that's really nonsensical.

So, in a nutshell, LLMs just connect words that are statistically likely to follow one another. There's more to how LLMs work, of course, but this understanding is enough for our discussion today.

How Your Brain Operates Like ChatGPT

You're probably glad that your brain doesn't function like some LLM dullard that just fills in word gaps with ready-made phrases, but I have bad news: our brains actually function all too much like LLMs.

The good news is that one of the primary ways your brain keeps you alive is by constantly functioning as a prediction engine. Based on whatever is happening now, it is literally charging up the neurons it thinks it will need to use next.

Here's an example: The other day, my son and I were hiking in the woods. It was a rainy day, so as we were hiking up a steep hill, my son tripped over a great white shark.

When you read that, it actually took your brain longer to process the words “great white shark” than the other words. That's because when your brain saw the word “tripped,” it charged up neurons for other words like “log” and “rock,” but did not charge up neurons for the words “great white shark.” In fact, your brain is constantly predicting in so many ways that it is impossible to define them all here. But one additional way is in terms of the visual cues words give it. So, if you read the word “math,” your brain actually charges up networks to read words that look similar, such as “mat,” “month,” and “mast,” but it does not charge up networks for words that look very different, like “engineer.”

Ultimately, you've probably seen the brain's power as a prediction engine meet utter failure. If you've ever been to a surprise party where the guest of honor was momentarily speechless, then you've seen what happens to the prediction engine when it's unprepared for what comes next. The guest of honor walked into their house expecting, for the sake of argument, to be greeted by their dog or to go to the bathroom, but not by a house full of people. So, their brain literally had to switch functions, and it took it a couple of seconds to do it.

But the greater point about how your brain operates like ChatGPT should be becoming clear: If we return to my hiking example where I said, “my son and I were hiking and he tripped over a ___,” then we see that your brain also essentially used “tokens” like ChatGPT to predict the words that would come next. Seeing “hiking” and “tripped,” it cued up words like “log” and “rock,” but not words like “great white shark,” and it did so for the same reason ChatGPT does so: it prepared for the words likely to follow its tokens.

The Danger of “Thinking” Like ChatGPT

The good thing about the fact that your brain operates as a prediction engine is that you're not surprised by every word you hear or read. Imagine if every time a server approached you at a restaurant, you had absolutely no idea what they would say. Imagine if they were just as likely to say “hi, Roman soldiers wore shoes called Caliga” as “Hi, may I take your order?” Every conversation would be chaos. Neither person would have any idea what the other person would say next.

So, the fact that your brain charges up neurons for words it expects to use is good in one sense, but have you stopped to ask what makes it charge up certain words instead of others? Where does it get the words it charges up to use next?

If I ask you to complete this sentence: “All politicians are ____,” what words immediately spring to mind? Where did those words come from? If you reflect for a moment, you'll probably realize that most of the words that follow come from things you've heard on social media or major media. You might even be able to identify the particular sources from which you heard those words.

So, if your brain operates as a prediction engine, and if that prediction engine charges up neurons for words that it expects to follow other word “tokens,” and if the words it charges up come from sources like social media, then how can you think that you're really thinking for yourself? In many ways, you're actually not. Your brain, like every brain, adopts ready-made word sequences that it regularly hears.

If you engage a lot of conservative sources, then you'll finish “All progressives are ___” differently than if you engage a lot of progressive sources. And vice versa. And doing that means that you're not thinking independently.

How to Think (More) Independently

Even though it's not possible to fully break free of our brain's reliance on the words it frequently engages, it is possible to think much more independently than most people. And it's not even all that difficult.

To do that, start challenging the pre-prepared language and ideas that your brain generates. Remember, whenever you hear some words, your brain has already prepared other words. So, if you want to think better, then do not accept the words that your own brain wants to use. Instead, challenge your brain's selection of words by consciously considering other words instead.

For example, let's say I ask you to complete this sentence: “All politicians are ____.” And let's say, to keep it very simple, that your next word is “liars.” “Liars” is the word your brain hands you, and let's say, as well, that you generally—in broad strokes—think that's true; you think that politicians generally are liars.

But if you want to think for yourself, then you can't just let your brain fill in the blank with the easiest word. If you do, you'll be using the phrases given to you by outside sources.

Instead, start to challenge that exact word, “liars,” with other words. For instance, you might ask yourself, “is ‘liars' really the right word, or is it more accurate to say that I think politicians are ‘dishonest'?” After all, they might be dishonest in ways that do not involve lies. Or do I mean that they aren't so much dishonest or liars as narcissists, or opportunists who exist in a corrupt system?

See, even though we find some general truth in the idea that politicians are “liars,” when we consider other words, we actually think. We think for ourselves. We question the words that outside sources have given us. So, maybe, even though we do think that politicians sometimes lie, we also realize that, as we consider it more deeply, “liars” isn't the best word. Instead, we figure out that “narcissists” might be even more accurate, or at least be another word needed to complete the thought.

So, don't let your brain just follow its tokens to ready-made language from social media. Take the words that your prediction engine generates, and consciously challenge those words with other words. Then you'll think for yourself amidst a world of people who not only use ChatGPT but who also “think” like it.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit pearlmanactualintelligence.substack.com
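For readers who want to see the essay's "statistically likely next word" idea in action, here is a toy sketch. It is an illustration of the essay's description, not the author's code or a real LLM, and the mini corpus is invented; but it shows the core behavior: the model returns whatever is frequent, with no judgment of whether it is true or useful.

```python
# A toy "predictor" in the spirit of the essay: count which word follows each
# two-word context in a tiny corpus, then "predict" by frequency alone.
# Real LLMs use neural networks over subword tokens, but the essay's point
# survives the simplification: generation is statistics, not judgment.
from collections import Counter, defaultdict

corpus = [
    "politicians care about money",
    "politicians care about good publicity",
    "politicians care about constituents",
    "politicians care about money",
]

# Count which word follows each two-word context (a tiny "context window").
follows = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for i in range(len(words) - 2):
        follows[(words[i], words[i + 1])][words[i + 2]] += 1


def predict(w1: str, w2: str) -> str:
    """Return the most frequent continuation of the pair (w1, w2)."""
    options = follows[(w1, w2)]
    return options.most_common(1)[0][0] if options else "<no prediction>"


print(predict("politicians", "care"))  # -> "about"
print(predict("care", "about"))        # -> "money": simply the most common continuation
# "moon rocks" is never predicted because the corpus never says it; flood the
# corpus with "politicians care about moon rocks" and it happily would be.
```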

Radio Free Humanity: The Marxist-Humanist Podcast
RFH 145 Artificial Intelligence and Marx's View of Automation (interview with Gavin Mueller)

Radio Free Humanity: The Marxist-Humanist Podcast

Play Episode Listen Later Oct 25, 2025 63:06


Co-hosts Andrew Kliman and Gabriel Donnelly speak with guest Gavin Mueller, an assistant professor of New Media and Digital Culture at the University of Amsterdam. Mueller researches the politics of digital culture, and much of our discussion centers on the realities of artificial intelligence in our time. They consider the huge amount of money being spent on Large Language Models, how they work, and what they can actually do as opposed to what all the hype says that they can (or will be able to???) do. Additionally, the discussants consider how workers can fight the encroachment of this new, automated technology into the workplace. Our discussion leans on parts of Gavin's book Breaking Things at Work: The Luddites Are Right About Why You Hate Your Job. They consider what Marx said about technology and automation, and how it applies to this situation. Plus, in the current-events segment, the co-hosts discuss the political indictments handed down from the Trumpist Department of Justice that have targeted personal foes of Trump: James Comey, Letitia James, and John Bolton. Radio Free Humanity is co-hosted by Gabriel Donnelly and Andrew Kliman, and sponsored by Marxist-Humanist Initiative (https://www.marxisthumanistinitiative.org/).

Thinking Faith with Eric Gurash and Dr. Brett Salkeld
Machines, Minds, and Meaning: Catholic Reflections on Artificial Intelligence - Part 1

Thinking Faith with Eric Gurash and Dr. Brett Salkeld

Play Episode Listen Later Oct 24, 2025 37:24


| S03 E08 | This week on Thinking Faith: The Catholic Podcast, Deacon Eric Gurash and Dr. Brett Salkeld discuss the intersection of artificial intelligence and personhood through the lens of a Catholic anthropology. Drawing on personal experiences with ChatGPT and what the Large Language Model has to say about itself, they reflect on what these interactions reveal about what it means to be human and what it doesn't. Together, they unpack the theological, ethical, and philosophical implications of rapidly advancing AI technology in light of Catholic teaching on the human person, reason, and the soul.

The John Batchelor Show
18: AI Competition: US Leads China in Data Center Race; Europe Is a 'Non-Factor' Chris Riegel, Stratacache, with John Batchelor Riegel discussed the global race involving data center building and the growth of large language models for AI. Riegel assert

The John Batchelor Show

Play Episode Listen Later Oct 23, 2025 1:34


AI Competition: US Leads China in Data Center Race; Europe Is a 'Non-Factor' Chris Riegel, Stratacache, with John Batchelor Riegel discussed the global race involving data center building and the growth of large language models for AI. Riegel asserts that the competition is a "two-horse race" between the U.S. and China. The U.S. currently leads by maybe one to two years due to its focus on development, capital, and infrastructure. The European Union, conversely, is described as a "non-factor" and "nowhere" in this technological competition. Most top engineering talent in this space comes specifically to the United States for opportunity. Riegel noted that the capital developed by an individual like Elon Musk easily out-competes all of Europe's governmental funding toward advanced AI and data centers.

a16z
Marc Andreessen and Amjad Masad: English As the New Programming Language

a16z

Play Episode Listen Later Oct 23, 2025 71:38


Amjad Masad, founder and CEO of Replit, joins a16z's Marc Andreessen and Erik Torenberg to discuss the new world of AI agents, the future of programming, and how software itself is beginning to build software.

They trace the history of computing to the rise of AI agents that can now plan, reason, and code for hours without breaking, and explore how Replit is making it possible for anyone to create complex applications in natural language. Amjad explains how RL unlocked reasoning for modern models, why verification loops changed everything, whether LLMs are hitting diminishing returns — and if “good enough” AI might actually block progress toward true general intelligence.

Resources:
Follow Amjad on X: https://x.com/amasad
Follow Marc on X: https://x.com/pmarca
Follow Erik on X: https://x.com/eriktorenberg

Stay Updated: If you enjoyed this episode, be sure to like, subscribe, and share with your friends!
Find a16z on X: https://x.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Listen to the a16z Podcast on Spotify: https://open.spotify.com/show/5bC65RDvs3oxnLyqqvkUYX
Listen to the a16z Podcast on Apple Podcasts: https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711
Follow our host: https://x.com/eriktorenberg

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

Everyday AI Podcast – An AI and ChatGPT Podcast
Ep 636: Uber paying drivers $1 to train AI models? A sign of what's next

Everyday AI Podcast – An AI and ChatGPT Podcast

Play Episode Listen Later Oct 21, 2025 37:23


Colombia Calling - The English Voice in Colombia
589: Unlocking Colombia's Historical Memory with Data

Colombia Calling - The English Voice in Colombia

Play Episode Listen Later Oct 21, 2025 58:51


In this episode of the Colombia Calling podcast, host Richard McColl engages with academics David Anderson (Associate Professor in Analytics at Villanova University in PA) and Galia Benitez (Associate Professor of International Relations at Michigan State University) to discuss their research on using Large Language Models (LLMs) to analyse violence in Colombia. They explore the challenges of data collection, the human impact of their findings, and the importance of interdisciplinary collaboration in social science research. The conversation delves into the complexities of measuring violence, the relationship between coca eradication and violence, and the future of research in this area amidst funding challenges. Read the full report entitled: "Using LLMs to create analytical datasets: A case study of reconstructing the historical memory of Colombia." https://arxiv.org/abs/2509.04523   Tune in to this and the Colombia Briefing with Emily Hart. Only for subscribers this week.   https://harte.substack.com/

a16z
Columbia CS Professor: Why LLMs Can't Discover New Science

a16z

Play Episode Listen Later Oct 13, 2025 50:54


From GPT-1 to GPT-5, LLMs have made tremendous progress in modeling human language. But can they go beyond that to make new discoveries and move the needle on scientific progress?

We sat down with distinguished Columbia CS professor Vishal Misra to discuss this, plus why chain-of-thought reasoning works so well, what real AGI would look like, and what actually causes hallucinations.

Resources:
Follow Dr. Misra on X: https://x.com/vishalmisra
Follow Martin on X: https://x.com/martin_casado

Stay Updated: If you enjoyed this episode, be sure to like, subscribe, and share with your friends!
Find a16z on X: https://x.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Listen to the a16z Podcast on Spotify: https://open.spotify.com/show/5bC65RDvs3oxnLyqqvkUYX
Listen to the a16z Podcast on Apple Podcasts: https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711
Follow our host: https://x.com/eriktorenberg

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

The John Batchelor Show
HEADLINE: DeepSeek AI: Chinese LLM Performance and Security Flaws Revealed Amid Semiconductor Export Circumvention GUEST NAME: Jack Burnham SUMMARY: John Batchelor speaks with Jack Burnham about competition in Large Language Models between the US and Chi

The John Batchelor Show

Play Episode Listen Later Oct 9, 2025 13:07


HEADLINE: DeepSeek AI: Chinese LLM Performance and Security Flaws Revealed Amid Semiconductor Export Circumvention GUEST NAME: Jack Burnham SUMMARY: John Batchelor speaks with Jack Burnham about competition in Large Language Models between the US and China's DeepSeek. A NIST study found US models superior in software engineering, though DeepSeek showed parity in scientific questions. Critically, DeepSeek models exhibited significant security flaws. China attempts to circumvent US export controls on GPUs by smuggling and using cloud computing centers in Southeast Asia. Additionally, China aims to dominate global telecommunications through control of supply chains and legal mechanisms granting the CCP access to firm data. 1959

The John Batchelor Show
1: CBS EYE ON THE WORLD WITH JOHN BATCHELOR THE SHOW BEGINS IN THE DOUBTS THAT CONGRESS IS CAPABLE OF CUTTING SPENDING..... 10-8-25 FIRST HOUR 9-915 HEADLINE: Arab Intellectuals Fail Palestinians by Prioritizing Populism and Victimhood Narrative i

The John Batchelor Show

Play Episode Listen Later Oct 9, 2025 8:50


CBS EYE ON THE WORLD WITH JOHN BATCHELOR
1900 KYIV
THE SHOW BEGINS IN THE DOUBTS THAT CONGRESS IS CAPABLE OF CUTTING SPENDING..... 10-8-25

FIRST HOUR

9-915 HEADLINE: Arab Intellectuals Fail Palestinians by Prioritizing Populism and Victimhood Narrative in Gaza Conflict GUEST NAME: Hussain Abdul-Hussain SUMMARY: John Batchelor speaks with Hussain Abdul-Hussain about Hamas utilizing the power of victimhood to justify atrocities and vilify opponents. Arab and Muslim intellectuals have failed Palestinians by prioritizing populism over introspection and self-critique. Regional actors like Egypt prioritize populist narratives over national interests, exemplified by refusing to open the Sinai border despite humanitarian suffering. The key recommendation is challenging the narrative and fostering a reliable, mature Palestinian government.

915-930 HEADLINE: Arab Intellectuals Fail Palestinians by Prioritizing Populism and Victimhood Narrative in Gaza Conflict GUEST NAME: Hussain Abdul-Hussain SUMMARY: John Batchelor speaks with Hussain Abdul-Hussain about Hamas utilizing the power of victimhood to justify atrocities and vilify opponents. Arab and Muslim intellectuals have failed Palestinians by prioritizing populism over introspection and self-critique. Regional actors like Egypt prioritize populist narratives over national interests, exemplified by refusing to open the Sinai border despite humanitarian suffering. The key recommendation is challenging the narrative and fostering a reliable, mature Palestinian government.

930-945 HEADLINE: Russian Oil and Gas Revenue Squeezed as Prices Drop, Turkey Shifts to US LNG, and China Delays Pipeline GUEST NAME: Michael Bernstam SUMMARY: John Batchelor speaks with Michael Bernstam about Russia facing severe budget pressure due to declining oil prices projected to reach $40 per barrel for Russian oil and global oil surplus. Turkey, a major buyer, is abandoning Russian natural gas after signing a 20-year LNG contract with the US. Russia refuses Indian rupee payments, demanding Chinese renminbi, which India lacks. China has stalled the major Power of Siberia 2 gas pipeline project indefinitely. Russia utilizes stablecoin and Bitcoin via Central Asian banks to circumvent payment sanctions.

945-1000 HEADLINE: UN Snapback Sanctions Imposed on Iran; Debate Over Nuclear Dismantlement and Enrichment GUEST NAME: Andrea Stricker SUMMARY: John Batchelor speaks with Andrea Stricker about the US and Europe securing the snapback of UN sanctions against Iran after 2015 JCPOA restrictions expired. Iran's non-compliance with inspection demands triggered these severe sanctions. The discussion covers the need for full dismantlement of Iran's nuclear program, including both enrichment and weaponization capabilities, to avoid future conflict. Concerns persist about Iran potentially retaining enrichment capabilities through low-level enrichment proposals and its continued non-cooperation with IAEA inspections.

SECOND HOUR

10-1015 HEADLINE: Commodities Rise and UK Flag Controversy: French Weather, Market Trends, and British Politics GUEST NAME: Simon Constable SUMMARY: John Batchelor speaks with Simon Constable about key commodities like copper up 16% and steel up 15% signaling strong economic demand. Coffee prices remain very high at 52% increase. The conversation addresses French political turmoil, though non-citizens cannot vote. In the UK, the St. George's flag has become highly controversial, viewed by some as associated with racism, unlike the Union Jack. This flag controversy reflects a desire among segments like the white working class to assert English identity.

1015-1030 HEADLINE: Commodities Rise and UK Flag Controversy: French Weather, Market Trends, and British Politics GUEST NAME: Simon Constable SUMMARY: John Batchelor speaks with Simon Constable about key commodities like copper up 16% and steel up 15% signaling strong economic demand. Coffee prices remain very high at 52% increase. The conversation addresses French political turmoil, though non-citizens cannot vote. In the UK, the St. George's flag has become highly controversial, viewed by some as associated with racism, unlike the Union Jack. This flag controversy reflects a desire among segments like the white working class to assert English identity.

1030-1045 HEADLINE: China's Economic Contradictions: Deflation and Consumer Wariness Undermine GDP Growth Claims GUEST NAME: Fraser Howie SUMMARY: John Batchelor speaks with Fraser Howie about China facing severe economic contradictions despite high World Bank forecasts. Deflation remains rampant with frequently negative CPI and PPI figures. Consumer wariness and high youth unemployment at one in seven persist throughout the economy. The GDP growth figure is viewed as untrustworthy, manufactured through debt in a command economy. Decreased container ship arrivals point to limited actual growth, exacerbated by higher US tariffs. Economic reforms appear unlikely as centralization under Xi Jinping continues.

1045-1100 HEADLINE: Takaichi Sanae Elected LDP Head, Faces Coalition Challenge to Become Japan's First Female Prime Minister GUEST NAME: Lance Gatling SUMMARY: John Batchelor speaks with Lance Gatling about Takaichi Sanae being elected head of Japan's LDP, positioning her to potentially become the first female Prime Minister. A conservative figure, she supports visits to the controversial Yasukuni Shrine. Her immediate challenge is forming a majority coalition, as the junior partner Komeito disagrees with her conservative positions and social policies. President Trump praised her election, signaling potential for strong bilateral relations.

THIRD HOUR

1100-1115 HEADLINE: DeepSeek AI: Chinese LLM Performance and Security Flaws Revealed Amid Semiconductor Export Circumvention GUEST NAME: Jack Burnham SUMMARY: John Batchelor speaks with Jack Burnham about competition in Large Language Models between the US and China's DeepSeek. A NIST study found US models superior in software engineering, though DeepSeek showed parity in scientific questions. Critically, DeepSeek models exhibited significant security flaws. China attempts to circumvent US export controls on GPUs by smuggling and using cloud computing centers in Southeast Asia. Additionally, China aims to dominate global telecommunications through control of supply chains and legal mechanisms granting the CCP access to firm data.

1115-1130 HEADLINE: DeepSeek AI: Chinese LLM Performance and Security Flaws Revealed Amid Semiconductor Export Circumvention GUEST NAME: Jack Burnham SUMMARY: John Batchelor speaks with Jack Burnham about competition in Large Language Models between the US and China's DeepSeek. A NIST study found US models superior in software engineering, though DeepSeek showed parity in scientific questions. Critically, DeepSeek models exhibited significant security flaws. China attempts to circumvent US export controls on GPUs by smuggling and using cloud computing centers in Southeast Asia. Additionally, China aims to dominate global telecommunications through control of supply chains and legal mechanisms granting the CCP access to firm data.

1130-1145 HEADLINE: Taiwanese Influencer Charged for Threatening President; Mainland Chinese Influence Tactics Exposed GUEST NAME: Mark Simon SUMMARY: John Batchelor speaks with Mark Simon about internet personality Holger Chen under investigation in Taiwan for calling for President William Lai's decapitation. This highlights mainland Chinese influence operations utilizing influencers who push themes of military threat and Chinese greatness. Chen is suspected of having a mainland-affiliated paymaster due to lack of local commercial support. Taiwan's population primarily identifies as Taiwanese and is unnerved by constant military threats. A key propaganda goal is convincing Taiwan that the US will not intervene.

1145-1200 HEADLINE: Sentinel ICBM Modernization is Critical and Cost-Effective Deterrent Against Great Power Competition GUEST NAME: Peter Huessy SUMMARY: John Batchelor speaks with Peter Huessy about the Sentinel program replacing aging 55-year-old Minuteman ICBMs, aiming for lower operating costs and improved capabilities. Cost overruns stem from necessary infrastructure upgrades, including replacing thousands of miles of digital command and control cabling and building new silos. Maintaining the ICBM deterrent is financially and strategically crucial, saving hundreds of billions compared to relying solely on submarines. The need for modernization reflects the end of the post-Cold War "holiday from history," requiring rebuilding against threats from China and Russia.

FOURTH HOUR

12-1215 HEADLINE: Supreme Court Battles Over Presidential Impoundment Authority and the Separation of Powers GUEST NAME: Josh Blackman SUMMARY: John Batchelor speaks with Josh Blackman about Supreme Court eras focusing on the separation of powers. Currently, the court is addressing presidential impoundment—the executive's authority to withhold appropriated funds. Earlier rulings, particularly 1975's Train v. City of New York, constrained this power. The Roberts Court appears sympathetic to reclaiming presidential authority lost during the Nixon era. The outcome of this ongoing litigation will determine the proper balance between executive and legislative branches.

1215-1230 HEADLINE: Supreme Court Battles Over Presidential Impoundment Authority and the Separation of Powers GUEST NAME: Josh Blackman SUMMARY: John Batchelor speaks with Josh Blackman about Supreme Court eras focusing on the separation of powers. Currently, the court is addressing presidential impoundment—the executive's authority to withhold appropriated funds. Earlier rulings, particularly 1975's Train v. City of New York, constrained this power. The Roberts Court appears sympathetic to reclaiming presidential authority lost during the Nixon era. The outcome of this ongoing litigation will determine the proper balance between executive and legislative branches.

1230-1245 HEADLINE: Space Force Awards Contracts to SpaceX and ULA; Juno Mission Ending, Launch Competition Heats Up GUEST NAME: Bob Zimmerman SUMMARY: John Batchelor speaks with Bob Zimmerman about Space Force awarding over $1 billion in launch contracts to SpaceX for five launches and ULA for two launches, highlighting growing demand for launch services. ULA's non-reusable rockets contrast with SpaceX's cheaper, reusable approach, while Blue Origin continues to lag behind. Other developments include Firefly entering defense contracting through its Scitec acquisition, Rocket Lab securing additional commercial launches, and the likely end of the long-running Juno Jupiter mission due to budget constraints.

1245-100 AM HEADLINE: Space Force Awards Contracts to SpaceX and ULA; Juno Mission Ending, Launch Competition Heats Up GUEST NAME: Bob Zimmerman SUMMARY: John Batchelor speaks with Bob Zimmerman about Space Force awarding over $1 billion in launch contracts to SpaceX for five launches and ULA for two launches, highlighting growing demand for launch services. ULA's non-reusable rockets contrast with SpaceX's cheaper, reusable approach, while Blue Origin continues to lag behind. Other developments include Firefly entering defense contracting through its Scitec acquisition, Rocket Lab securing additional commercial launches, and the likely end of the long-running Juno Jupiter mission due to budget constraints.

The John Batchelor Show
HEADLINE: DeepSeek AI: Chinese LLM Performance and Security Flaws Revealed Amid Semiconductor Export Circumvention GUEST NAME: Jack Burnham SUMMARY: John Batchelor speaks with Jack Burnham about competition in Large Language Models between the US and Chi

The John Batchelor Show

Play Episode Listen Later Oct 9, 2025 4:43


HEADLINE: DeepSeek AI: Chinese LLM Performance and Security Flaws Revealed Amid Semiconductor Export Circumvention GUEST NAME: Jack Burnham SUMMARY: John Batchelor speaks with Jack Burnham about competition in Large Language Models between the US and China's DeepSeek. A NIST study found US models superior in software engineering, though DeepSeek showed parity in scientific questions. Critically, DeepSeek models exhibited significant security flaws. China attempts to circumvent US export controls on GPUs by smuggling and using cloud computing centers in Southeast Asia. Additionally, China aims to dominate global telecommunications through control of supply chains and legal mechanisms granting the CCP access to firm data. 1942