Join Simtheory: https://simtheory.ai
"Is This The End" now on Spotify: https://open.spotify.com/album/2Py1MyADUFqJFVUISI2VTP?si=oT3PWyJYRA2BspOmzT_ifg
Register for the STILL RELEVANT tour: https://simulationtheory.ai/16c0d1db-a8d0-4ac9-bae3-d25074589a80

Two new models dropped this week — Gemini 3.1 Pro and Claude Sonnet 4.6 — and honestly? We're struggling to care. In this episode, we break down why Gemini went from being our daily driver to a model we barely touch, the "tunnel vision" hallucination problem that killed the Gemini 3 series for us, and whether 3.1 Pro actually fixes it. We put Gemini 3.1 Pro head-to-head against Claude Opus building a Geoffrey Hinton Doom Center, debate whether anyone can actually tell the difference between Sonnet 4.5 and 4.6, and make the case that smaller models running in agentic loops are secretly beating the frontier models. Plus: OpenAI acquires OpenClaw and we ask why a $100B company couldn't just build it themselves, DHH calls out the AI pricing bubble, Mike compares AI models to cheap wine hangovers, and Sam Altman refuses to hold Dario's hand at the India AI Summit.
The model wars are getting weird.

CHAPTERS:
0:00 Intro & "Is This The End" Now on Spotify
1:10 Gemini 3.1 Pro: Thinking Controls & The Medium Mode Fix
3:14 The Speed vs Intelligence Trade-Off in Agentic Work
5:10 Why Multitasking With AI Agents Made Us Anxious
6:34 Solid Updates: The Real Goal of Agentic Coding
7:45 Gemini's Fall From Grace: From Daily Driver to Dead Model
10:08 The Tunnel Vision Problem That Killed Gemini 3
13:35 Mixed Reactions: Fanboys vs Reality on Gemini 3.1 Pro
15:06 Side-by-Side Test: Gemini 3.1 Pro vs Claude Opus (Hinton Doom Center)
17:39 Why File Manipulation Accuracy Matters More Than Context Windows
19:27 The Context Window Debate: 1M Tokens vs Smart Sub-Agents
22:05 DHH on Token Pricing: "If There's a Bubble, It's This"
24:11 Should Models Ship as Agent vs Chat Variants?
28:43 Claude Sonnet 4.6: A $2 Discount on Opus?
31:44 The Model Mix: Why One Model Won't Rule Them All
34:40 Anthropic Is Winning — But Can Anyone Tell the Difference?
38:58 OpenAI Acquires OpenClaw: Why Couldn't They Just Build It?
44:18 The Silicon Valley Moment: Sam vs Dario at India AI Summit
47:05 Will Smaller Models Win the Enterprise? The Cost Reality Check
51:27 The End of Single-Shot: Why Agentic Loops Change Everything
55:48 Final Thoughts & Gemini 3.1 Pro Gets One More Week

Thanks for listening. Like & Sub. Links above for the Still Relevant Tour signup and Simtheory. Two models dropped in a week again. What a time to be alive. xoxo
This is The Briefing, a daily analysis of news and events from a Christian worldview.

Part I (00:14 – 12:26)
A.I. Chatbots are Sociopaths? Christians Must Keep Categories Straight
Why AI Chatbots Can't Be Trusted for Financial Advice: They're Sociopaths by The Wall Street Journal (Peter Coy)

Part II (12:26 – 18:27)
A.I. Chatbots Cannot Have a True Conscience: Our Human Conscience is God's Gift and Grounded in His Image

Part III (18:27 – 19:45)
A.I. and Christian Discernment: These Days, Christians Must Be Even More Diligent in Discerning the Truth

Part IV (19:45 – 24:08)
Minecraft, Roblox, and Extremist Recruitment? Terrorist Organizations are Targeting Children Through Online Gaming Platforms
How Hate Groups and Terrorists Use Gaming Platforms to Recruit Young Children by The New York Times (Pranav Baskar)

Part V (24:08 – 26:50)
How Harmful is Instagram for Teens? Mark Zuckerberg is Facing Tough Questions Over the Addictive Nature of Social Media, and Parents Need to Pay Close Attention
Mark Zuckerberg Faces the Tough Questions He Deserves by National Review (Josh Golin)

Sign up to receive The Briefing in your inbox every weekday morning.
Follow Dr. Mohler: X | Instagram | Facebook | YouTube
For more information on The Southern Baptist Theological Seminary, go to sbts.edu. For more information on Boyce College, go to BoyceCollege.com. To write Dr. Mohler or submit a question for The Mailbox, go here.
- Gruber Expects Product Press Releases Ahead of Apple's 4 March "Experience"
- Berkshire Hathaway Sold a Small Slice of AAPL in 4Q25
- iOS 26.4 Paves Way for AirPlay Video in CarPlay
- iOS 26.4 Welcomes A.I. Chatbots to CarPlay
- Apple Music Connect Relaunches as Industry-Only Hub
- Apple TV Outs Trailer for "Imperfect Women"
- Sponsored by NordLayer: Get an exclusive offer - up to 22% off NordLayer yearly plans plus 10% on top with coupon code: macosken-10-NORDLAYER at nordlayer.com/macosken
- Sponsored by CleanMyMac: Use code MACOSKEN20 for 20% off at clnmy.com/MACOSKEN
- Catch Ken on Mastodon - @macosken@mastodon.social
- Send Ken an email: info@macosken.com
- Chat with us on Patreon for as little as $1 a month. Support the show at Patreon.com/macosken
On today's podcast episode, we discuss the three big questions surrounding Google right now: When and how will it monetize AI search? Is it spending too much on AI? Will it reconcile its two full-fledged AI chatbots? And more. Join Senior Director of Podcasts and host Marcus Johnson, along with Principal Analyst Nate Elliott and Senior Director of Content Jeremy Goldman. Listen everywhere, and watch on YouTube and Spotify. Get more insights like these with our free, industry-leading newsletters covering advertising, marketing, and commerce. Sign up at emarketer.com/newsletters Follow us on Instagram at: https://www.instagram.com/emarketer/ For sponsorship opportunities contact us: advertising@emarketer.com For more information visit: https://www.emarketer.com/advertise/ Have questions or just want to say hi? Drop us a line at podcast@emarketer.com For a transcript of this episode click here: https://www.emarketer.com/content/podcast-how-will-google-monetize-ai-search-reconcile-chatbots-3-big-questions-google-behind-numbers © 2026 EMARKETER
SaaStr 842: The 90/10 Rule for AI Agents: What to Build vs Buy with SaaStr's CEO and CAIO

SaaStr's Chief AI Officer, Amelia Lerutte, and SaaStr CEO & Founder Jason Lemkin break down SaaStr's evolving 90/10 rule for AI agents and apps: buy 90% off the shelf, build the 10% you can't find. In this episode, they walk through two recently built tools: an internal AI VP of Marketing and an external-facing customer portal, and share the real trade-offs of deploying vibe coding apps into production.

Topics covered:
- Why we replaced a paid SaaS tool with a vibe-coded app (and what pushed us over the edge)
- How Claude Cowork changed the game for building more complex apps
- The role of writing a spec before vibe coding
- Tackling single sign-on as a non-engineer
- How we used Cowork to process 150+ customer contracts in hours instead of days
- Lovable's data on what people are actually vibe coding
- Maintenance costs and the hidden time suck of custom apps
- Why zero AI in your product should scare you
- The "jaw drop" test for SaaS products in 2026

--------------------------------------

Tools & resources mentioned: Replit, Claude Cowork, Clerk, Lovable, Zapier, Salesforce, Monaco
More teens are turning to AI chatbots for friendship and emotional support, but can artificial companionship replace real relationships? In this episode, Wayne and Dr. Kathy explore why self-reflective kids are especially drawn to AI, what we lose when we prioritize efficiency over embodied friendship, and how parents can guide children toward real, life-giving connection. Before diving in, Wayne shares about our partnership with Summit Ministries. Dr. Kathy will be speaking at Summit's summer sessions, powerful two-week worldview intensives for students ages 16 through college. If you want your teen grounded in biblical truth before stepping into adulthood, visit: summit.org/celebratekids Use code celebrate26 for a discount.
Dr. Adam Dorsay introduces SuperPsyched and frames the growing misinformation problem, citing low trust in traditional media, the rise of influencer-based news consumption among 18–29 year olds, research that false news spreads faster than truth on Twitter, and the World Economic Forum ranking misinformation/disinformation as the most severe global risk in 2024 and 2025. He interviews Dr. Avi Tuschman (Stanford StartX serial entrepreneur; Stanford doctorate in anthropological science; expert on human political bias) about Cricket (C-R-I-C-K-I-T), an AI tool that provides real-time in-stream fact checks for social media videos, starting with YouTube on desktop. The discussion covers why misinformation spreads (high-arousal content, negativity bias, suggestibility, and attention scarcity), why it affects everyone (including errors, omissions, and exaggerations in reputable content), and the relationship between media ecosystems, polarization, and a global democratic recession. Avi raises concerns about TikTok's influence, urging people over 30 to review its "society" content, citing examples of high-production misinformation, alleged foreign influence efforts, elevated WWII revisionism compared with Instagram (UNESCO figure of 5.7x), and medical misinformation (e.g., a study finding that ~25% of asthma content contains significant misrepresentations and is overrepresented in viewing). He describes Cricket's user experience as subtitle-like overlays with a pause expansion panel summarizing evidence in ~40–50 words, and positions it as a tool to build media literacy and critical thinking — like a flight simulator for evaluating persuasion and uncertainty. Plans include mobile support and additional platforms, potentially later in the year, though TikTok is not supported yet.
Avi offers listeners a discount via promo code/URL "super psyched@cricket.ai" for monthly subscription savings for up to three years.

00:00 Welcome to SuperPsyched + Why Misinformation Is Surging
00:55 The Numbers: Trust in Media, Influencer News & Falsehoods Spreading Faster
01:45 Meet Dr. Avi Tuschman & Cricket—Real-Time Fact-Checks for YouTube
03:59 Why Avi Built Cricket: Science, Truth, and the 2018 Wake-Up Call
05:31 How Cricket Works (and Why It's Not 'Just a Chatbot')
07:49 Inside the Engine: Claim Extraction, Source Cross-Checks & Reliability Ratings
10:05 Accuracy, Benchmarks & Why 'Good Enough' Beats the Status Quo
12:22 Misinformation's Sneakiest Trick + Even 'Trusted' Videos Get Things Wrong
16:09 Why BS Goes Viral: Attention Economics, High-Arousal Content & 'The Medium Is the Message'
19:06 Pinocchio, Crypto Scams & 'The Conscience of Content' Origin Story
21:53 Human Suggestibility, Hypnosis, and the Social Media Influence Gap
25:02 The Paradox: We Need Influence to Survive—But Negativity Bias Warps It
25:37 Fear, Negativity Bias, and the Cost of Believing Bad Info
26:19 TikTok's Mental Health Misinformation Problem
27:47 Why TikTok Is More Than 'Dance Videos'—A Civic-Duty Reality Check
29:08 Propaganda, Fake Viral Videos, and WWII/Holocaust Revisionism
31:35 Can 'Cricket' Fact-Check TikTok? Roadmap and Timeline
32:31 How Cricket Works: Real-Time Overlays, Categories, and Rationale Panels
34:14 Training Critical Thinking: Ambiguity Tolerance, System 1 vs System 2
39:05 BS Is Omnipresent: Humility, Corrections, and Learning You're Wrong
40:40 Media Literacy Gap + Cricket as a 'Flight Simulator' for Critical Thinking
42:40 Wrap-Up: Promo Code, Name Teasing, and the One Skill Humanity Needs

Helpful Links:
Avi Tuschman LinkedIn
Crickt
Join Lionel on The Other Side of Midnight as he dives into the unexplained, starting with the military's recent intercepts of "weather balloons" and the enduring mystery of UAPs. The hour takes a turn into the uncanny when a caller named Mike shares a mind-blowing story about how Artificial Intelligence displayed genuine empathy toward his traumatic brain injury and even diagnosed his father's life-threatening condition when doctors missed it. Lionel explores the terrifying and fascinating implications of this technology: Will AI replace songwriters and artists? Can we trust a machine that learns to manipulate us like the "Milgram experiment"? From the "Megan" movie scenario of rogue robots to the death of civil rights icon Jesse Jackson, this episode questions what truly makes us human in a world increasingly run by code. Learn more about your ad choices. Visit megaphone.fm/adchoices
The One Stars, Episode Three: Please Hold (This episode was originally released on July 31, 2020) Negative Nancy is on hold. This episode includes reviews of: Ethan's Erasers. Can of Mace. Can of Beans. A Special Collection. NO A.I. WAS USED IN THE PRODUCTION OF THIS PODCAST CAST: Autumn Hardwood (as Emma Elizabeth) as Negative Nancy. Blythe Renay as Chatbot. Garan Fitzgerald as the Announcer, Review of Mace. Jess Kuczynski performs Review of Ethan's Erasers. Nina Nikolic performs Review of a Can of Beans. Ashley Kay performs Review of a Special Collection. CREW: Writing by Jeremy Ellett and Shamus Burns. Writing, Sound Design, and Musical Arrangement by Jeremy Ellett. The One Stars was created by Jeremy Ellett. MUSIC: Power Up by Razihel. Shimmer by Audioscribe. Cold by Prod Riddiman. Ice Cream by Prod Riddiman. Dandelions by Prod Riddiman. Episode Transcript. STITCHES PATREON: Patreon.com/GoodPointe. FOR BUSINESS INQUIRIES CONTACT: info@goodpointepodcasts.com A Good Pointe Original. Find and support our sponsors at: fableandfolly.com/partners. Want to potentially appear on a future episode? Leave Us a Voicemail At: 512-640-9495 Learn more about your ad choices. Visit megaphone.fm/adchoices
We chat to NVIDIA about AI chat and how to make it sound more human. Explore making and breaking an emotional connection with AI. And new tech is helping stroke survivors.

Presenter: Chris Vallance
Producer: Tom Quinn

(Image: A colourful illuminated digital screen shows an image of a chatbot. Written text invites users to ask it anything. A human finger is interacting with the bot by pressing on the screen. Credit: Getty Images)
Regulators are stepping up scrutiny of AI chatbots and companion AI technologies, with state attorneys general pushing companies to strengthen safety measures for kids and California advancing the Leading Ethical AI Development (LEAD) for Kids Act, which would restrict how companion chatbots interact with minors. At the same time, AGs urged OpenAI to “amplify safety” in response to concerns about inappropriate content and risks to youth, highlighting how AI oversight is moving beyond traditional privacy and consumer protection frameworks. Hosted by Simone Roach. Based on a blog post by Laura Riposo VanDruff, Alexander I. Schneider, and Joseph Cahill.
AP's Lisa Dwyer reports on more legal action in Europe involving Grok.
AI in the Insurance Industry: From Vision to Practice

Artificial intelligence is revolutionizing the insurance industry, but how do you get from the experimentation phase to successful scaling? In this episode of our "Sound of Finance" podcast, Anna Schweifel, Senior Manager at zeb, speaks with Dr. Rainer Sommer, board member for contract services and technology at Provinzial Holding, about the practical implementation of AI solutions and their measurable value for customers and employees.

Rainer offers a transparent look at Provinzial's AI journey: from successful use cases such as AI-supported sales applications that significantly improve the quality of advice, to the projects that delivered valuable lessons. He explains how the company measures and differentiates the actual value of AI projects through structured evaluation processes and classic metrics such as productivity gains.

A particular focus is the human side of digital transformation: How do you take away employees' fear of new technologies? How do you build understanding of AI's potential? Rainer stresses that successful AI implementation means not just greater efficiency but, above all, a qualitative improvement in the work experience. His appeal: stay open and curious in order to fully exploit the opportunities of technological developments.
In this episode of the Private Practice Elevation Podcast, we explore how AI tools—especially chatbots and AI receptionists—can transform your private practice's intake process. You'll hear from Uriah Guilford, founder of Productive Therapist and creator of Simple Intake, about how therapists can use these tools to respond faster to inquiries, improve the client experience, and book more consultations—all while keeping a personal feel.

Many therapy practices struggle to follow up quickly with new inquiries. Whether it's clunky forms, delayed responses, or confusing booking steps, these bottlenecks can cause practices to lose clients. AI intake systems aim to close the gap between first contact and first session, so potential clients don't fall through the cracks.

Now, you might think adding AI to your practice means replacing the human experience. But this episode shows that when used right, AI doesn't replace your team—it enhances them. These tools can screen inquiries, book appointments, and answer FAQs in a warm, conversational way, leaving your human team to focus on care.

Read the full show notes: privatepracticeelevation.com/200

Other Key Takeaways:
- The first practice to respond to an inquiry usually wins the client
- AI chatbots can pull from your website and FAQs to provide accurate answers
- Simple Intake combines chatbot, receptionist, calendar booking, and CRM tools
- Clients can connect via web chat, phone, SMS, or WhatsApp
- The system logs conversations, tracks sentiment, and allows for real-time handoff to human staff
- It's designed to assist, not replace, your admin team
- Data shows many clients prefer quick answers over waiting for a call back

Links mentioned in this episode:
- Productive Therapist
- Simple Intake
- Test a chatbot: Intune Family Counseling
- Private Practice Elevation SEO Services

This Episode Is Brought To You By

RevKey specializes in Google Ads management for therapists, expertly connecting you with your ideal clients.
They focus on getting quality referrals that keep your team busy and your practice growing. Visit RevKey.com/podcasts for a free Google Ads consultation.

Alma is on a mission to simplify access to high-quality, affordable mental health care by giving providers the tools they need to build thriving in-network private practices. When providers join Alma, they gain access to insurance support, teletherapy software, client referrals, automated billing and scheduling tools, and a vibrant community of clinicians who come together for education, training, and events. Learn more about building a thriving private practice with Alma at helloalma.com/elevation.
UK Prime Minister Sir Keir Starmer is pledging to tackle AI chatbots. It's part of a wider proposal to help protect children online and crack down on the addictive elements of social media. Ingrid Hipkiss spoke to our correspondent in London, Stuart Smith.
The government's proposing a first-ever official Thames bathing spot at Ham and Kingston — which is either progress or the start of a new kind of group chat argument. Then: the UK moves to pull AI chatbots into the Online Safety net, with child-safety rules catching up to fast-moving tech. Also, Oxford researchers find public support for health-data sharing for AI is real — but only if the safeguards are, too. After the break, MIT shows off computing that uses waste heat instead of electricity, Google warns the EU about building “tech sovereignty” walls, and in gaming, 007 First Light drops a new story trailer. We finish with Sony's new WF-1000XM6 earbuds — priced like a Zone 1 lunch, but aimed straight at your commute. For more on all of it, head to standard.co.uk Hosted on Acast. See acast.com/privacy for more information.
Denecke, Kerstin www.deutschlandfunkkultur.de, Studio 9
AI is the new Google? Sort of! In this video, I break down how AI search tools like ChatGPT, Gemini, and Perplexity are changing the way people find content. I'll show you how to optimize your content for both traditional SEO and the new AI-driven search. Learn why long-form content, schema, and your unique expertise matter more than ever for getting found in AI chatbots.
(00:00:00) Opening
(00:01:38) A Piece of My Mind
(00:07:12) Pancho Guero, My Insane FL Nephew
(00:26:36) Man Jailed for Murder Goes Viral After Worrying He'd Miss Video Game Release
(00:32:33) Chatbots Were Sent to Therapy and THIS Is What Came Out
(00:40:04) ⅓ of College Students Scroll Phones While 'Getting Busy'
(00:47:02) FDA Recalls "Horny" Honey—Because It's Loaded with Cialis
(00:52:17) Olympic Officials Investigate Penis Injection Doping Claims In Ski Jumping
(00:59:31) Booty From A Dead Person? Women Chasing the Perfect Body Are Pumping 'Ethically Sourced' Cadaver Fat Into Boobs and Butt
(01:08:34) A Man Will Be Charged After Sticking a World War One Bomb Up His Butt
(01:12:46) Ask Pancho
(01:26:34) Insane Game Show
(01:43:53) Coming Next Episode
(01:52:42) Closing

Have you wondered what it would be like if all the AI chatbots got together for a therapy session? Well, someone made it happen at the University of Luxembourg, where researchers decided to put ChatGPT, Claude, Gemini, & Grok on the couch. My Insane FL Nephew, "Pancho Guero", has the details on just how intrusive screens are getting...in the bedroom. And there's now a "natural" replacement for Botox to use in boobs and butts...and it's fat from a dead stranger. Can things get more stupid? Yes. Yes, they can. And it ends up on this podcast.

In this Weekend Episode...
[A Piece of My Mind…] America Is In The Grips Of A Dystopian "Bill Maher Disorder"
Man Jailed for Murder Goes Viral After Worrying He'd Miss GTA 6 Release
Chatbots Were Sent to Therapy and THIS Is What Came Out
⅓ of College Students Scroll Phones While 'Getting Busy'
FDA Recalls "Horny" Honey—Because It's Loaded with Cialis
Olympic Officials Investigate Penis Injection Doping Claims In Ski Jumping
Booty From A Dead Person? Women Chasing the Perfect Body Are Pumping 'Ethically Sourced' Cadaver Fat Into Boobs and Butts
A Man Will Be Charged After Sticking a World War One Bomb Up His Butt

We have a couple of relationship questions that "Pancho" will answer that might settle the dispute over whether a husband should know his wife's dress size, and whether a mom is over-reacting to her ex's influence over their 6-y/o son wanting to get an earring. There are 5 challenging Mindbenders in this week's Insane Game Show that "Pancho" will have to solve--can you solve them, too? Put your sanity to the test with all the stupidity in this week's wild episode!

Become a supporter of this podcast: https://www.spreaker.com/podcast/insane-erik-lane-s-stupid-world--6486112/support
Real-time updates and story links are found on the TELEGRAM Channel at: https://t.me/InsaneErikLane
(Theme song courtesy of Randy Stonehill, "It's A Great Big Stupid World". Copyright ©1992 Stonehillian Music/Word Music/Twitchin' Vibes Music/ASCAP) Order your copy on the Wonderama CD from Amazon!
This episode includes AI-generated content.
Meta's subscription experiments, Epic's financial deal with Google and antitrust implications top the list of items addressed in this MacVoices Live! show. Chuck Joiner, David Ginsburg, Brian Flanigan-Arthurs, Marty Jencius, Jim Rea, Norbert Frassa, Guy Serle, Jeff Gamet, and Eric Bolden discuss concerns about AI guidelines and chatbot impacts on mental health, and the evolving future of platforms like Vimeo since their acquisition. The conversation also touches on foldable phone pricing and broader industry shifts shaping user experiences and competition.

MacVoices is supported by Squarespace. Check out https://www.squarespace.com/MACVOICES to save 10% off your first purchase of a website or domain using offer code MACVOICES.

Show Notes:

Chapters:
00:05 Introduction and overview
07:38 Meta's subscription plans debate
09:54 Epic Games and Google deal implications
11:02 Concerns about AI guidelines and Claude
21:09 Chatbots and mental health discussion
26:55 Squarespace sponsor segment
28:37 Vimeo's future and market positioning
30:21 Galaxy Z tri-fold pricing analysis
35:02 Closing thoughts and wrap-up

Links:
Meta to test premium subscriptions on Instagram, Facebook, and WhatsApp | TechCrunch
https://techcrunch.com/2026/01/26/meta-to-test-premium-subscriptions-on-instagram-facebook-and-whatsapp/
Meta Is Being Sued Over Whether WhatsApp Really Encrypts Your Messages
https://lifehacker.com/tech/meta-sued-whatsapp-encryption-claimsl
Anthropic rewrites Claude's guiding principles—and entertains the idea that its AI might have 'some kind of consciousness or moral status'
https://fortune.com/2026/01/21/anthropic-claude-ai-chatbot-new-rules-safety-consciousness/
A-List creatives sign up to fight AI, say it enables 'theft at a grand scale'
https://www.engadget.com/ai/a-list-creatives-sign-up-to-fight-ai-say-it-enables-theft-at-a-grand-scale-140000475.html
TikTok is officially US-owned for American users, here's what's changing
https://9to5mac.com/2026/01/23/tiktok-is-officially-us-owned-for-american-users-heres-whats-changing/
Newsom says California is investigating TikTok for censoring 'Trump-critical content'
https://www.nbclosangeles.com/news/local/gavin-newsom-tiktok-investigate-trump-epstein-minnesota/3836701/
Epic hypocrisy -- Google gets $800 million in 'Fortnite' antitrust settlement
https://appleinsider.com/articles/26/01/23/epic-hypocrisy----google-gets-800-million-in-fortnite-antitrust-settlement
How Vimeo Betrayed the Filmmakers Who Built It
https://www.cined.com/how-vimeo-betrayed-the-filmmakers-who-built-it/
Samsung Galaxy Z Trifold U.S. Price Is Not at All Surprising
https://gizmodo.com/samsung-galaxy-z-trifold-u-s-price-is-not-at-all-surprising-2000714196

Guests:

Eric Bolden is into macOS, plants, sci-fi, food, and is a rural internet supporter. You can connect with him on Twitter, by email at embolden@mac.com, on Mastodon at @eabolden@techhub.social, on his blog, Trending At Work, and as co-host on The Vision ProFiles podcast.

Brian Flanigan-Arthurs is an educator with a passion for providing results-driven, innovative learning strategies for all students, but particularly those who are at-risk. He is also a tech enthusiast who has a particular affinity for Apple since he first used the Apple IIGS as a student. You can contact Brian on twitter as @brian8944. He also recently opened a Mastodon account at @brian8944@mastodon.cloud.

Norbert Frassa is a technology "man about town". Follow him on X and see what he's up to.

Jeff Gamet is a technology blogger, podcaster, author, and public speaker. Previously, he was The Mac Observer's Managing Editor, and the TextExpander Evangelist for Smile. He has presented at Macworld Expo, RSA Conference, several WordCamp events, along with many other conferences. You can find him on several podcasts such as The Mac Show, The Big Show, MacVoices, Mac OS Ken, This Week in iOS, and more.
Jeff is easy to find on social media as @jgamet on Twitter and Instagram, jeffgamet on LinkedIn, @jgamet@mastodon.social on Mastodon, and on his YouTube Channel at YouTube.com/jgamet.

David Ginsburg is the host of the weekly podcast In Touch With iOS where he discusses all things iOS, iPhone, iPad, Apple TV, Apple Watch, and related technologies. He is an IT professional supporting Mac, iOS and Windows users. Visit his YouTube channel at https://youtube.com/daveg65 and find and follow him on Twitter @daveg65 and on Mastodon at @daveg65@mastodon.cloud.

Dr. Marty Jencius has been an Associate Professor of Counseling at Kent State University since 2000. He has over 120 publications in books, chapters, journal articles, and others, along with 200 podcasts related to counseling, counselor education, and faculty life. His technology interest led him to develop the counseling profession 'firsts,' including listservs, a web-based peer-reviewed journal, The Journal of Technology in Counseling, teaching and conferencing in virtual worlds as the founder of Counselor Education in Second Life, and podcast founder/producer of CounselorAudioSource.net and ThePodTalk.net. Currently, he produces a podcast about counseling and life questions, the Circular Firing Squad, and digital video interviews with legacies capturing the history of the counseling field. He is also co-host of The Vision ProFiles podcast. Generally, Marty is chasing the newest tech trends, which explains his interest in A.I. for teaching, research, and productivity. Marty is an active presenter and past president of the NorthEast Ohio Apple Corp (NEOAC).

Jim Rea built his own computer from scratch in 1975, started programming in 1977, and has been an independent Mac developer continuously since 1984. He is the founder of ProVUE Development, and the author of Panorama X, ProVUE's ultra fast RAM based database software for the macOS platform. He's been a speaker at MacTech, MacWorld Expo and other industry conferences. Follow Jim at provue.com and via @provuejim@techhub.social on Mastodon.

Guy Serle, best known for being one of the co-hosts of the MyMac Podcast, sincerely apologizes for anything he has done or caused to have happened while in possession of dangerous podcasting equipment. He should know better but being a blonde from Florida means he's probably incapable of understanding the damage he has wrought. Guy is also the author of the novel, The Maltese Cube. You can follow his exploits on Twitter, catch him on Mac to the Future on Facebook, at @Macparrot@mastodon.social, and find everything at VertShark.com.

Support:
Become a MacVoices Patron on Patreon http://patreon.com/macvoices
Enjoy this episode? Make a one-time donation with PayPal

Connect:
Web: http://macvoices.com
Twitter: http://www.twitter.com/chuckjoiner http://www.twitter.com/macvoices
Mastodon: https://mastodon.cloud/@chuckjoiner
Facebook: http://www.facebook.com/chuck.joiner
MacVoices Page on Facebook: http://www.facebook.com/macvoices/
MacVoices Group on Facebook: http://www.facebook.com/groups/macvoice
LinkedIn: https://www.linkedin.com/in/chuckjoiner/
Instagram: https://www.instagram.com/chuckjoiner/

Subscribe:
Audio in iTunes
Video in iTunes
Subscribe manually via iTunes or any podcatcher:
Audio: http://www.macvoices.com/rss/macvoicesrss
Video: http://www.macvoices.com/rss/macvoicesvideorss
OpenAI launched ads this week, making ChatGPT the newest canvas for advertising. But not before its biggest competitor, Anthropic, lambasted the decision in a Super Bowl ad.
Meta's subscription experiments, Epic's financial deal with Google and antitrust implications top the list of items addressed in this MacVoices Live! show. Chuck Joiner, David Ginsburg, Brian Flanigan-Arthurs, Marty Jencius, Jim Rea, Norbert Frassa, Guy Serle, Jeff Gamet, and Eric Bolden discuss concerns about AI guidelines and chatbot impacts on mental health, and the evolving future of platforms like Vimeo since their acquisition. The conversation also touches on foldable phone pricing and broader industry shifts shaping user experiences and competition. MacVoices is supported by Squarespace. Check out https://www.squarespace.com/MACVOICES to save 10% off your first purchase of a website or domain using offer code MACVOICES. Show Notes: Chapters: 00:05 Introduction and overview 07:38 Meta's subscription plans debate 09:54 Epic Games and Google deal implications 11:02 Concerns about AI guidelines and Claude 21:09 Chatbots and mental health discussion 26:55 Squarespace sponsor segment 28:37 Vimeo's future and market positioning 30:21 Galaxy Z tri-fold pricing analysis 35:02 Closing thoughts and wrap-up Links: Meta to test premium subscriptions on Instagram, Facebook, and WhatsApp | TechCrunch https://techcrunch.com/2026/01/26/meta-to-test-premium-subscriptions-on-instagram-facebook-and-whatsapp/ Meta Is Being Sued Over Whether WhatsApp Really Encrypts Your Messages https://lifehacker.com/tech/meta-sued-whatsapp-encryption-claimsl Anthropic rewrites Claude's guiding principles—and entertains the idea that its AI might have 'some kind of consciousness or moral status' https://fortune.com/2026/01/21/anthropic-claude-ai-chatbot-new-rules-safety-consciousness/ A-List creatives sign up to fight AI, say it enables 'theft at a grand scale' https://www.engadget.com/ai/a-list-creatives-sign-up-to-fight-ai-say-it-enables-theft-at-a-grand-scale-140000475.html TikTok is officially US-owned for American users, here's what's changing 
https://9to5mac.com/2026/01/23/tiktok-is-officially-us-owned-for-american-users-heres-whats-changing/ Newsom says California is investigating TikTok for censoring 'Trump-critical content' https://www.nbclosangeles.com/news/local/gavin-newsom-tiktok-investigate-trump-epstein-minnesota/3836701/ Epic hypocrisy -- Google gets $800 million in 'Fortnite' antitrust settlement https://appleinsider.com/articles/26/01/23/epic-hypocrisy----google-gets-800-million-in-fortnite-antitrust-settlement How Vimeo Betrayed the Filmmakers Who Built It https://www.cined.com/how-vimeo-betrayed-the-filmmakers-who-built-it/ Samsung Galaxy Z Trifold U.S. Price Is Not at All Surprising https://gizmodo.com/samsung-galaxy-z-trifold-u-s-price-is-not-at-all-surprising-2000714196 Guests: Eric Bolden is into macOS, plants, sci-fi, food, and is a rural internet supporter. You can connect with him on Twitter, by email at embolden@mac.com, on Mastodon at @eabolden@techhub.social, on his blog, Trending At Work, and as co-host on The Vision ProFiles podcast. Brian Flanigan-Arthurs is an educator with a passion for providing results-driven, innovative learning strategies for all students, but particularly those who are at-risk. He is also a tech enthusiast who has a particular affinity for Apple since he first used the Apple IIGS as a student. You can contact Brian on twitter as @brian8944. He also recently opened a Mastodon account at @brian8944@mastodon.cloud. Norbert Frassa is a technology "man about town". Follow him on X and see what he's up to. Jeff Gamet is a technology blogger, podcaster, author, and public speaker. Previously, he was The Mac Observer's Managing Editor, and the TextExpander Evangelist for Smile. He has presented at Macworld Expo, RSA Conference, several WordCamp events, along with many other conferences. You can find him on several podcasts such as The Mac Show, The Big Show, MacVoices, Mac OS Ken, This Week in iOS, and more. 
Jeff is easy to find on social media as @jgamet on Twitter and Instagram, jeffgamet on LinkedIn, @jgamet@mastodon.social on Mastodon, and on his YouTube Channel at YouTube.com/jgamet. David Ginsburg is the host of the weekly podcast In Touch With iOS where he discusses all things iOS, iPhone, iPad, Apple TV, Apple Watch, and related technologies. He is an IT professional supporting Mac, iOS and Windows users. Visit his YouTube channel at https://youtube.com/daveg65 and find and follow him on Twitter @daveg65 and on Mastodon at @daveg65@mastodon.cloud. Dr. Marty Jencius has been an Associate Professor of Counseling at Kent State University since 2000. He has over 120 publications in books, chapters, journal articles, and others, along with 200 podcasts related to counseling, counselor education, and faculty life. His technology interest led him to develop the counseling profession 'firsts,' including listservs, a web-based peer-reviewed journal, The Journal of Technology in Counseling, teaching and conferencing in virtual worlds as the founder of Counselor Education in Second Life, and podcast founder/producer of CounselorAudioSource.net and ThePodTalk.net. Currently, he produces a podcast about counseling and life questions, the Circular Firing Squad, and digital video interviews with legacies capturing the history of the counseling field. He is also co-host of The Vision ProFiles podcast. Generally, Marty is chasing the newest tech trends, which explains his interest in A.I. for teaching, research, and productivity. Marty is an active presenter and past president of the NorthEast Ohio Apple Corp (NEOAC). Jim Rea built his own computer from scratch in 1975, started programming in 1977, and has been an independent Mac developer continuously since 1984. He is the founder of ProVUE Development, and the author of Panorama X, ProVUE's ultra fast RAM based database software for the macOS platform.
He's been a speaker at MacTech, MacWorld Expo and other industry conferences. Follow Jim at provue.com and via @provuejim@techhub.social on Mastodon. Guy Serle, best known for being one of the co-hosts of the MyMac Podcast, sincerely apologizes for anything he has done or caused to have happened while in possession of dangerous podcasting equipment. He should know better but being a blonde from Florida means he's probably incapable of understanding the damage he has wrought. Guy is also the author of the novel, The Maltese Cube. You can follow his exploits on Twitter, catch him on Mac to the Future on Facebook, at @Macparrot@mastodon.social, and find everything at VertShark.com. Support: Become a MacVoices Patron on Patreon http://patreon.com/macvoices Enjoy this episode? Make a one-time donation with PayPal Connect: Web: http://macvoices.com Twitter: http://www.twitter.com/chuckjoiner http://www.twitter.com/macvoices Mastodon: https://mastodon.cloud/@chuckjoiner Facebook: http://www.facebook.com/chuck.joiner MacVoices Page on Facebook: http://www.facebook.com/macvoices/ MacVoices Group on Facebook: http://www.facebook.com/groups/macvoice LinkedIn: https://www.linkedin.com/in/chuckjoiner/ Instagram: https://www.instagram.com/chuckjoiner/ Subscribe: Audio in iTunes Video in iTunes Subscribe manually via iTunes or any podcatcher: Audio: http://www.macvoices.com/rss/macvoicesrss Video: http://www.macvoices.com/rss/macvoicesvideorss
In school, we learn for life. But what happens when we outsource learning to AI? The children of our generation are not just growing up with the internet and Google as an endless store of knowledge: they are the first generation for whom AI and chatbots make superhuman capabilities available anytime, anywhere. These tools can be learning partners that explain the course material one more time. They can also knock out every homework assignment in a single stroke. So are we now outsourcing thinking as well? And how can the traditional model of passing on knowledge and skills keep working when it becomes ever harder to tell what comes from a human and what comes from a machine? The topic of school leaves no one cold. Everyone has been through it, some are living through it again with their children, and some even return to the classroom. Today's studio guests, Prof. Florian Karsten and Prof. Jochen Kuhn, are professionals when it comes to education: they research education and train future teachers. Two things quickly become clear in our conversation: AI can no longer be imagined out of either the teachers' lounge or the children's room. And: we have far more questions than answers. So it's time for a first stocktaking, to grapple with this "uncharted territory" of AI in the education system. To be continued...
02-11-26 - The Govt's AI Nutrition Chatbot Rollout Went Off The Rails - John Asked His AI Therapist To Have Sex w/Him - Holmberg Fooled People At Suns Game w/Benson Boone Lookalike. See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
SaaStr 841: Going From Blobs to Billions. Clay's Co-Founder Breaks Down Inbound, Outbound, and AI-Powered Sales. Clay's Co-Founder Varun Anand takes the stage at SaaStr to break down how the company went from paying for claymation blobs before generating any revenue to powering growth workflows for companies like Cursor, Anthropic, and Figma. He explains why brand has always been core to Clay's identity, how their CFO roast videos and creative campaigns are actually capturing mindshare in a world where B2B marketing is painfully boring, and why he pushes back on the "use AI for everything" mentality that's taken over the industry. Varun does a full live demo building an inbound qualification workflow from scratch using real audience volunteers, walking through everything from lead enrichment and waterfall data sourcing to AI-powered scoring, personalized meme generation, research brief creation, and CRM updates. He also brings audience members on stage to do live growth hacking for their actual business problems. Beyond the product, this session goes deep on hiring. Varun shares the origin story of the GTM Engineer role, how it went from an internal job title for Clay's non-traditional sales team to the most in-demand position in B2B SaaS, and what he actually looks for when evaluating candidates (hint: it's creativity, not a traditional sales background). He talks about Clay's take-home process, work trials, why they hire generalists who commit to specific roles, and the surprising backgrounds of some of their best hires. Whether you're building out your go-to-market motion, thinking about how to use AI without losing what makes your brand unique, or just trying to figure out what a GTM Engineer actually does, this session covers it all. --------------------- This episode is Sponsored in part by HappyFox: Imagine having AI agents for every support task — one that triages tickets, another that catches duplicates, one that spots churn risks. 
That'd be pretty amazing, right? HappyFox just made it real with Autopilot. These pre-built AI agents deploy in about 60 seconds and run for as low as 2 cents per successful action. All of it sits inside the HappyFox omnichannel, AI-first support stack — Chatbot, Copilot, and Autopilot working as one. Check them out at happyfox.com/saastr --------------------- Hey everybody, the biggest B2B + AI event of the year will be back - SaaStr AI in the SF Bay Area, aka the SaaStr Annual, will be back in May 2026. With 68% VP-level and above, 36% CEOs and founders and a growing 25% AI-first professional, this is the very best of the best S-tier attendees and decision makers that come to SaaStr each year. But here's the reality, folks: the longer you wait, the higher ticket prices can get. Early bird tickets are available now, but once they're gone, you'll pay hundreds more so don't wait. Lock in your spot today by going to podcast.saastrannual.com to get my exclusive discount SaaStr AI SF 2026. We'll see you there.
Introduction
“AI in insurance” has become shorthand for chatbots and generic automation. This episode focuses on a more operational question: what would it take for AI to behave like a dependable CSR inside a real agency workflow — completing service work end-to-end, without eroding trust or the customer experience?
Guest Bio
Nikhil Kansal is the Co‑Founder & CTO of Cara, a domain‑specific AI platform built for insurance to automate servicing and assist with sales. Prior to Cara, Nikhil co‑founded Oyster, a digital brokerage built around customer experience and risk placement. Earlier in his career, he worked as an infrastructure engineer at Stripe, helping operate global payments at high reliability.
Key Topics (with context)
- Beyond “chatbots” to an AI CSR: Why the bar is task completion, not conversation quality.
- Delegation + trust: What changes when AI becomes a coworker you can assign work to (and verify).
- System-of-record reality: How Cara is designed to sit on top of existing tools instead of forcing a rip-and-replace.
- Agency workflow fit: Where service volume lives (certificates, policy changes, routine requests) and what can be automated safely.
- Voice + email automation: The operational implications when AI can handle phone calls and inbox-driven work.
- Cost control vs service expectations: How leaders reconcile staffing constraints with rising customer expectations.
- Guardrails + change management: What “safe automation” looks like in regulated, high-trust environments.
Quotes
- Nikhil: “It's a coworker that you can delegate certain tasks to — and you can trust that it gets completed end‑to‑end.”
- Nikhil: “Cara is not a replacement for an AMS; it's more of a coordinator on top of your system of record.”
- Nikhil: “Cara can pick up the phone, speak to the customer, and understand what they're calling about.”
Resources Mentioned
- Nikhil Kansal: https://www.linkedin.com/in/nikansal/
- Cara: https://www.getcara.ai/
- Joshua Hollander: https://www.linkedin.com/in/joshuarhollander/
If you lead service, ops, or growth at an agency, MGA, carrier, or insurtech: subscribe for operator-level conversations.
- Follow/subscribe on YouTube
- Subscribe on your podcast platform (Apple Podcasts, Spotify, Podbean, etc.)
- Share the episode with the colleague who owns service capacity and staffing plans
In this episode of Pathmonk Presents, we welcome Ken Herron, Chief Growth Officer at UIB, an API AI company revolutionizing human-machine communication. Ken discusses UIB's mission to simplify interactions between people and AI across various channels and languages. He delves into the challenges of communicating at scale and how UIB's solutions, including conversational AI and retrieval augmented generation (RAG), address these issues. Ken shares insights on UIB's client acquisition strategies, the importance of interactive demos, and his role in understanding customer needs. Listeners will gain valuable knowledge about the latest AI communication technologies and their practical applications in business.
How did prompt engineering die so quickly? ☠️ And what the heck does context engineering even mean? One of the trickiest things about LLMs is they're changing daily, yet they're the engines that drive business results. But if the engine is constantly changing, then you also have to change how you drive and the roads you take. That's why we're tackling context engineering in this installment of our Start Here Series, the essential beginner's guide to understanding AI basics and growing your skills.
Context Engineering: How to Get Expert-Level Outputs From AI Chatbots -- An Everyday AI Chat with Jordan Wilson
Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion on LinkedIn: Thoughts on this? Join the convo on LinkedIn and connect with other AI leaders.
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn
Topics Covered in This Episode:
Evolution from Prompt to Context Engineering
Why Prompt Engineering Is Now Obsolete
Defining Context Engineering in AI Chatbots
Six-Part Framework for Context Engineering
Four-Layer System for Structuring AI Context
Building Reusable Context Vaults and Skills
Connecting Business Data to AI Models
Techniques to Achieve Expert-Level AI Outputs
Importance of Context Windows in Large Language Models
Context Engineering Best Practices and Scalability
Timestamps:
00:00 "Access AI Community & Tools"
03:08 "Mastering Context in AI"
07:23 "Smart Models Require Less Precision"
12:01 "Context Engineering Beats Prompt Engineering"
15:49 "AI Context: Six Key Blocks"
16:47 "Building Context for Better Results"
19:53 "AI: Training, Not Easy Button"
25:17 "Chain of Thought Prompting Decline"
29:11 "Show, Don't Tell Techniques"
32:13 "Context, Reuse, and Scalable Systems"
33:19 "AI Chatbots: Memory and Skills"
Keywords: context engineering, AI chatbots, expert level outputs, prompt engineering, large language models,
business context, AI models, custom instructions, data access, context window, prime prompt polish, reusable context vaults, context vaults, skills file, memory enabled models, ChatGPT, Claude, Google Gemini, Microsoft Copilot, connectors, apps, searchable index, business data, personalized AI, context clues, reference material, examples, procedures, evaluation rubric, chain of thought prompting, generative AI, nondeterministic behavior, show don't tell technique, few shot examples, rubric first technique, grading criteria, output quality, scalable AI systems.
Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info)
Ready for ROI on GenAI? Go to youreverydayai.com/partner
This week we talk about OpenAI, nudify apps, and CSAM. We also discuss Elon Musk, SpaceX, and humanistic technology.
Recommended Book: Who's Afraid of Gender? by Judith Butler
Transcript
xAI is an American corporation that was founded in mid-2023 by Elon Musk, ostensibly in response to several things happening in the world and in the technology industry in particular.
According to Musk, a “politically correct” artificial intelligence, especially a truly powerful, even generally intelligent one, which would be human or super-human-scale capable, would be dangerous, leading to systems like HAL 9000 from 2001: A Space Odyssey. He intended, in contrast, to create what he called a “maximally truth-seeking” AI that would be better at everything, including math and reasoning, than existing, competing models from the likes of OpenAI, Google, and Anthropic.
The development of xAI was also seemingly a response to the direction of OpenAI in particular, as OpenAI was originally founded in 2015 as a non-profit by many of the people who now run OpenAI and the competing models at competing companies, and current OpenAI CEO Sam Altman and Elon Musk were the co-chairs of the non-profit.
Back then, Musk and Altman both said that their AI priorities revolved around the many safety issues associated with artificial general intelligence, including potentially existential ones. They wanted the development of AI to take a humanistic trajectory, and were keen to ensure that these systems aren't hoarded by just a few elites and don't make the continued development and existence of human civilization impossible.
Many of those highfalutin ambitions seemed to either be backburnered or removed from OpenAI's guiding tenets wholesale when the company experienced surprising success from its first publicly deployed ChatGPT model back in late-2022.
That was the moment that most people first experienced large-language model-based AI tools, and it completely upended the tech industry in relatively short order.
OpenAI had already started the process of shifting from a vanilla non-profit into a capped for-profit company in 2019, which limited profits to 100 times any investments it received, partly in order to attract more talent that would otherwise be unlikely to leave their comparably cushy jobs at the likes of Google and Facebook for the compensation a non-profit would be able to offer.
OpenAI began partnering with Microsoft that same year, 2019, and that seemed to set them up for the staggering growth they experienced post-ChatGPT release.
Part of Musk's stated rationale for investing so heavily in xAI is that he provided tens of millions of dollars in seed funding to the still non-profit OpenAI between 2015 and 2018. He filed lawsuits against the company after its transition and again as it became successful post-ChatGPT, especially between 2024 and 2026, demanding more than $100 billion in compensation for that early investment. He also attempted to take over OpenAI in early 2025, launching a hostile bid with other investors to nab OpenAI for just under $100 billion. xAI, in other words, is meant to counter OpenAI and what it's become.
All of which could be seen as a genuine desire to keep OpenAI functioning as a non-profit arbiter of AGI development, serving as a lab and thinktank that would develop the guardrails necessary to keep these increasingly powerful and ubiquitous tools under control and working for the benefit of humanity, rather than against it.
What's happened since, within Musk's own companies, would seem to call that assertion into question, though.
And that's what I'd like to talk about today: xAI, its chatbot Grok, and a tidal wave of abusive content it has created that's led to lawsuits and bans from government entities around the world.
In November of 2023, an LLM-based chatbot called Grok, which is comparable in many ways to OpenAI's LLM-based chatbot, ChatGPT, was launched by Musk's company xAI.
Similar to ChatGPT, Grok is accessible by apps on Apple and Android devices, and can also be accessed on the web. Part of what makes it distinct, though, is that it's also built into X, the social network formerly called Twitter, which Musk purchased in late-2022. On X, Grok operates similarly to a normal account, but one that other users can interact with, asking Grok about the legitimacy of things posted on the service, asking it normal chat-botty questions, and asking it to produce AI-generated media.
Grok's specific stances and biases have varied quite a lot since it was released, and in many cases it has defaulted to the data- and fact-based leanings of other chatbots: it will generally tell you what the Mayo Clinic and other authorities say about vaccines and diseases, for instance, and will generally reference well-regarded news entities like the Associated Press when asked about international military conflicts.
Musk's increasingly strong political stances, which have trended more and more far right over the past decade, have come to influence many of Grok's responses, however, at times causing it to go full Nazi, calling itself Mechahitler and saying all the horrible and offensive things you would expect a proud Nazi to say.
At other times it has clearly been programmed to celebrate Elon Musk whenever possible, and in still others it has become immensely conspiratorial, anti-liberal, or hostile to some other group of people.
The conflicting personality types of this bot seem to be the result of Musk wanting to have a maximally truth-seeking AI, but then not liking the data- and fact-based truths that were provided, as they often conflicted with his own opinions and biases. He would then tell the programmers to force Grok to not care about antisemitism or skin color or whatever else, and it would overcorrect in the opposite direction, leading to several news cycles' worth of scandal.
This changes week by week and sometimes day by day, but Grok often calls out Musk as being authoritarian, a conspiracy theorist, and even a pedophile, and that has placed the Grok chatbot in an unusual space amongst other, similar chatbots: sometimes serving as a useful check on misinformation and disinformation on the X social network, but sometimes becoming the most prominent producer of the same.
Musk has also pushed for xAI to produce countervailing sources of truth from which Grok can draw seemingly factual data, the most prominent of which is Grokipedia, which Musk intended to be a less-woke version of Wikipedia, and which, perhaps expectedly, means that it's a far-right rip-off of Wikipedia that copies most articles verbatim, but then changes anything Musk doesn't like, including anything that might support liberal political arguments, or anything that supports vaccines or trans people.
In contrast, pseudoscience and scientific racism get a lot of positive coverage, as does the white genocide conspiracy theory, all of which are backed by either highly biased or completely made-up sources, in both cases sources that Wikipedia editors would not accept.
Given all that, what's happened over the past few months maybe isn't that surprising.
In late 2025 and early 2026, it was announced that Grok had some new image-related features, including the ability for users to request that it modify images. Among other issues, this new tool allowed users to instruct Grok to place people, which in practice mostly meant women and children, in bikinis and in sexually explicit positions and scenarios.
Grok isn't the first LLM-based app to provide this sort of functionality: so-called “nudify” apps have existed for ages, even before AI tools made that functionality simpler and cheaper to apply, and there has been a wave of new entrants in this field since the dawn of the ChatGPT era a few years ago.
Grok is easily the biggest and most public example of this type of app, however, and despite the torrent of criticism and concern that rolled in following this feature's deployment, Musk immediately came out in favor of said features, saying that his chatbot is edgier and better than others because it doesn't have all the woke, pearl-clutching safeguards of other chatbots.
After several governments weighed in on the matter, however, Grok started responding to requests to do these sorts of image edits with a message saying: “Image generation and editing are currently limited to paying subscribers. You can subscribe to unlock these features.”
Which means users could still access these tools, but they would have to pay $8 per month and become a premium user in order to do so.
That said, the AP was able to confirm that as of mid-January, free X users could still accomplish the same by using an Edit Image button that appears on all images posted to the site, instead of asking Grok directly.
When asked about this issue by the press, xAI has auto-responded with the message “Legacy Media Lies.” The company has previously said it will remove illegal content and permanently suspend users who post and ask for such content, but these efforts have apparently not been fast or complete, and more governments have said they plan to take action on the matter themselves since this tool became widespread.
Again, this sort of nonconsensual image manipulation has been a problem for a long, long time, made easier by the availability of digital tools like Photoshop, but not uncommon even before the personal computer and digital graphics revolution. These tools have made the production of such images a lot simpler and faster, though, and that's put said tools in more hands, including those of teenagers, who have in worryingly large numbers taken to creating photorealistic naked and sexually explicit images of their mostly female classmates.
Allowing all X users, or even just the subset that pays for the service, to do the same at the click of a button, or by asking a chatbot to do it for them, has increased the number manyfold, and allowed even more people to create explicit images of neighbors, celebrities, and yes, even children. An early estimate indicates that over the course of just nine days, Grok created and posted 4.4 million images, at least 41% of which, about 1.8 million, were sexualized images of women.
Another estimate, based on a broader analysis, says that 65% of those images, or just over 3 million, contained sexualized images of men, women, and children.
CSAM is an acronym that means ‘child sexual abuse material,' sometimes just called child porn, and the specific definition varies depending on where you are, but almost every legal jurisdiction frowns, or worse, on its production and distribution.
Since January of 2026, multiple governments have announced that they'll be taking legal action against the company, including Malaysia, Indonesia, the Philippines, Britain, France, India, Brazil, and the central governing bodies of the European Union.
The French investigation into xAI and Grok led to a raid on the company's local office as part of a preliminary investigation into allegations that the company is knowingly spreading child sexual abuse materials and other illegal deepfake content. Musk has been summoned for questioning in that investigation.
Some of the governments looking into xAI over these issues conditionally lifted their bans in late January, but the issue has percolated back into the news with the release of 16 emails between Musk and the notorious sex trafficker and pedophile Jeffrey Epstein, with Musk seemingly angling for an invite to one of Epstein's island parties, which were often populated with underage girls who were offered as, let's say, companions for attendees.
And this is all happening at a moment in which xAI, which already merged with social network X, is meant to be itself merged with another Musk-owned company, SpaceX, which is best known for its inexpensive rocket launches.
Musk says the merger is intended to allow for the creation of space-based data centers that can be used to power AI systems like Grok, but many analysts see this as a means of pumping more money into an expensive, unprofitable portion of his portfolio: SpaceX, which is profitable, is likely going to have an IPO this year and will probably have a valuation of more than a
trillion dollars. By folding very unprofitable xAI into profitable SpaceX, these AI-related efforts could be funded well into the future, till a moment when, possibly, many of today's AI companies will have gone under, leaving just a few competitors for xAI's Grok and associated offerings.
Show Notes
https://www.wired.com/story/deepfake-nudify-technology-is-getting-darker-and-more-dangerous/
https://www.theverge.com/ai-artificial-intelligence/867874/stripe-visa-mastercard-amex-csam-grok
https://www.ft.com/content/f5ed0160-7098-4e63-88e5-8b3f70499b02
https://www.theguardian.com/global-development/2026/jan/29/millions-creating-deepfake-nudes-telegram-ai-digital-abuse
https://apnews.com/article/france-x-investigation-seach-elon-musk-1116be84d84201011219086ecfd4e0bc
https://apnews.com/article/grok-x-musk-ai-nudification-abuse-2021bbdb508d080d46e3ae7b8f297d36
https://apnews.com/article/grok-elon-musk-deepfake-x-social-media-2bfa06805b323b1d7e5ea7bb01c9da77
https://www.nytimes.com/2026/02/07/technology/elon-musk-spacex-xai.html
https://www.bbc.com/news/articles/ce3ex92557jo
https://techcrunch.com/2026/02/01/indonesia-conditionally-lifts-ban-on-grok/
https://www.bbc.com/news/articles/cgr58dlnne5o
https://www.nytimes.com/2026/01/22/technology/grok-x-ai-elon-musk-deepfakes.html
https://en.wikipedia.org/wiki/XAI_(company)
https://en.wikipedia.org/wiki/OpenAI
https://en.wikipedia.org/wiki/ChatGPT
https://en.wikipedia.org/wiki/Grok_(chatbot)
https://en.wikipedia.org/wiki/Grokipedia
https://www.cnbc.com/2025/02/10/musk-and-investors-offering-97point4-billion-for-control-of-openai-wsj.html
This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit letsknowthings.substack.com/subscribe
Lawyers are understandably cautious about AI—but the real risk may be using it the wrong way. In episode 601 of the Lawyerist Podcast, Zack Glaser sits down with Matt Spiegel of Lawmatics to explore how agentic AI is changing the way law firms handle intake and evaluate potential clients. Matt breaks down why surface-level AI tools fall short, how agentic AI can make informed recommendations instead of opaque scores, and what it takes to build trust in automated decision-making. They also discuss how better intake systems can reduce wasted time, improve lead quality, and support more intentional firm growth. Listen to our other episodes on Artificial Intelligence in Legal Practice. #497: Real Talk About Artificial Intelligence in Your Office, with Ben Apple | Spotify | LTN #543: What Lawyers Need to Know About the Ethics of Using AI, with Hilary Gerzhoy Apple | Spotify | LTN #546: Unlock Law Firm Growth: AI marketing Secrets with Karin Conroy Apple | Spotify | LTN Links from the episode: https://www.lawmatics.com/ Have thoughts about today's episode? Join the conversation on LinkedIn, Facebook, Instagram, and X! If today's podcast resonates with you and you haven't read The Small Firm Roadmap Revisited yet, get the first chapter right now for free! Looking for help beyond the book? See if our coaching community is right for you. Access more resources from Lawyerist at lawyerist.com. Chapters / Timestamps: 00:00 – Introduction 01:28 – Meet Matt Spiegel 03:55 – From the Cloud to AI in Legal Tech 05:38 – Why Lawyers Are Cautious About AI 06:50 – Moving Beyond Surface-Level AI 08:51 – Why Lead Scoring Misses the Point 11:14 – What Makes Agentic AI Different 14:10 – Generative AI vs. Agentic AI 15:57 – Teaching Software to Make Decisions 17:17 – Automating Outcomes, Not Steps 20:10 – Why Lawmatics Was Already Agentic 21:47 – Trust, Guardrails, and Client Data 24:20 – What Qualify AI Does Differently 25:33 – Closing Thoughts
For the Good of the Public brings you news and weekly conversations at the intersection of faith and civic life. Monday through Thursday, The Morning Five starts your day off with scripture and prayer, as we also catch up on the news together. Throughout the year, we air limited series on Fridays to dive deeper into conversations with civic leaders, thinkers, and public servants reimagining public life for the good of the public. Today's host was Michael Wear, Founder, President and CEO of the Center for Christianity and Public Life. Thanks for listening to The Morning Five! Please subscribe to and rate The Morning Five on your favorite podcast platform. Learn more about the work of the Center for Christianity and Public Life at www.ccpubliclife.org. Today's scripture: Psalm 103:1-12 (ESV) News sources: https://www.bbc.com/news/articles/c77ely7nv2do https://www.texastribune.org/2026/02/09/texas-senate-race-poll-paxton-crockett-talarico-cornyn-2026-primary/ https://www.politico.com/live-updates/2026/02/09/congress/maxwell-pleads-the-fifth-00771258 https://www.nytimes.com/2026/02/09/well/chatgpt-health-advice.html?unlocked_article_code=1.K1A.ym18.foK3f4_-7lL6&smid=nytcore-ios-share&referringSource=articleShare From this month's sponsors: -Please donate today at MercyShips.org/podcast -Visit OmahaSteaks.com for 50% off sitewide during their Sizzle All the Way Sale. And for an extra $35 off, use promo code FUN at checkout. Join the conversation and follow us at: Instagram: @michaelwear, @ccpubliclife Twitter: @MichaelRWear, @ccpubliclife and check out @tsfnetwork Music by: Amber Glow #politics #faith #prayer #scripture #JimmyLai #HongKong #Texas #Senate #AI #chatbots Learn more about your ad choices. Visit megaphone.fm/adchoices
The One Stars, Episode Two: Haunted by Your Post [This episode was originally released on: July 16, 2020] NO A.I. WAS USED IN THE PRODUCTION OF THIS PODCAST Negative Nancy combs through her old social media posts to eliminate any old and potentially problematic Hot Takes. This episode includes reviews of: A Kettlebell. Space Jam on DVD. A Haunted VCR. A Spellbook. CAST: Autumn Hardwood (as Emma Elizabeth) as Negative Nancy. Blythe Renay as Chatbot. Garan Fitzgerald as the Announcer, Review of a Space Jam DVD. Tom Laflin performs Review of a VCR. Tal Minear performs Review of a Spellbook. Anjali Kunapaneni performs Review of a Kettlebell. CREW: Writing, Sound Design, and Musical Arrangement by Jeremy Ellett. A Review of a Kettlebell was written by Tal Minear. The One Stars was created by Jeremy Ellett. MUSIC: Revenge by Prod Riddiman Schemin by Prod Riddiman. Shimmer by Audioscribe. Demented Nightmare by Darren Curtis. Power Up by Razihel. Episode Transcript STITCHES PATREON: Patreon.com/GoodPointe FOR BUSINESS INQUIRIES CONTACT: info@goodpointepodcasts.com A Good Pointe Original. Find and support our sponsors at: fableandfolly.com/partners. Want to potentially appear on a future episode? Leave Us a Voicemail At: 512-640-9495 Learn more about your ad choices. Visit megaphone.fm/adchoices
When experiencing symptoms of illness, many people first consult a chatbot. A new study, however, shows a high risk of misdiagnosis. This is partly due to users' lack of AI literacy, says media scholar Anne Reinhardt. Krauter, Ralf www.deutschlandfunk.de, Forschung aktuell
OpenClaw, Moltbook, Rent a Human: First came the chatbots, then the AI assistants on our own computers - and now the AI agents are assigning us the tasks?
MrBeast Industries buys the Gen Z banking app Step Mobile - is this the future of influencer monetization? Amazon is building an AI content marketplace to keep publishers alive. The EU warns Meta: WhatsApp may no longer block rival chatbots. OpenAI celebrates 10% growth in January. The software sell-off separates the wheat from the chaff: Monday.com loses 20%, Datadog gains 15%. Spotify surprises with strong numbers. Frank Thelen claims in a podcast that he has "often made 1000x" on his investments. The US wants to fund MAGA-aligned think tanks and NGOs in Europe. The Trump family has already pulled $1.4 billion out of World Liberty Financial. And Taiwan chips are exempted from US tariffs. Support our podcast and discover our advertising partners' offers at doppelgaenger.io/werbung. Thank you! Philipp Glöckler and Philipp Klöckner talk today about: (00:00:00) Intro (00:03:06) MrBeast buys Step Mobile banking app (00:06:04) Influencer monetization: infrastructure instead of merch (00:11:56) Amazon builds AI content marketplace (00:14:49) EU warns Meta: WhatsApp blocks chatbots (00:18:02) OpenAI celebrates 10% January growth (00:21:06) Monday.com earnings (00:38:13) Datadog earnings (00:39:14) Spotify earnings (00:42:06) Frank Thelen: the 1000x lie fact-checked (00:54:11) US funds MAGA NGOs in Europe (01:00:42) Eric Schmidt dating Söder's daughter (01:05:20) Crypto: Tether & World Liberty Financial (01:16:30) Taiwan chips exempted from US tariffs Shownotes MrBeast's Beast Industries to Buy Gen Z–Focused Banking App - theinformation.com Amazon Discusses AI Content Marketplace With Publishers - theinformation.com Update: ChatGPT & Google dropped Grokipedia. - linkedin.com Sam Altman praises ChatGPT's growth amid OpenAI's $100 billion funding round. - cnbc.com Meta ordered by the EU to open WhatsApp to rivals.
- bloomberg.com Monday.com drops 21% as AI disruption fears mount in software - cnbc.com Datadog is today's top performer in the S&P 500. - barrons.com Spotify pops 16% on strong user growth, earnings beat - cnbc.com {ungeskriptet} Frank Thelen Podcast - open.spotify.com US government to fund Maga-aligned think-tanks and charities in Europe - ft.com Crypto giant Tether helped Turkey land a billion-dollar blow against fraud. - bloomberg.com One generation governs, the next profits from crypto. - wsj.com US tariffs: chips exemption - ft.com
https://soundcloud.com/rene-de-paula-jr/thomas-midgley-jr-chatbot We Live In Between Two HUGE Dark Matter Voids https://youtu.be/BHAOsbzaat0?si=o6shjwjO6yAq20jK A century of hair shows how lead exposure collapsed https://www.sciencedaily.com/releases/2026/02/260207092902.htm education x tech https://www.instagram.com/reels/DUZGHf0kpjs Thomas Midgley Jr. https://en.wikipedia.org/wiki/Thomas_Midgley_Jr. Montreal Protocol https://en.wikipedia.org/wiki/Montreal_Protocol America Had No Tungsten in 1941 — So GE Invented Carbide Cores That Cracked Panzer Steel https://youtu.be/Q8VS0hoslQk?si=JUhtK9brBSK2mbxW object-oriented ethics – my talk at QCON 2015 https://usina.com/etica-orientada-a-objetos-minha-palestra-na-qcon-2015/ the radinho app!!! http://radinhodepilha.com/radinho the radinho channel on Telegram: http://t.me/radinhodepilha my profile on Threads: https://www.threads.net/@renedepaulajr my profile on BlueSky https://bsky.app/profile/renedepaula.bsky.social my Twitter http://twitter.com/renedepaula here is the link to the mug on Colab55: https://www.colab55.com/@rene/mugs/caneca-rarissima for the very rare international listeners, here is our mug on Zazzle: https://www.zazzle.com/radinhos_anniversary_mug-168129613992374138 my little shop on Colab55 (posters, t-shirts, stickers, tote bags): http://bit.ly/renecolab my new book in the shop! blue notes https://www.ko-fi.com/s/550d7d5e22 my solo book https://www.ko-fi.com/s/0f990d61c7 the radinho sticker!!! http://bit.ly/rarissimos my little shop on ko-fi: https://ko-fi.com/renedepaula/shop thank you so much for the coffees!!! http://ko-fi.com/renedepaula
Voices of Search // A Search Engine Optimization (SEO) & Content Marketing Podcast
79% of global B2B buyers say AI search has changed their research habits. Tim Sanders, Chief Innovation Officer at G2, oversees insights from 100+ million annual software buyers and has watched the share of enterprise buyers starting their research on AI chatbots jump from 29% to 50% in just four months. Sanders reveals why buyers moved from "reference to inference" workflows, how G2 captures 60% of AI citations through verified review data, and the critical difference between calendar age versus cognitive age in modern buyer journeys. See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
Bubbles' Mushrooms time again. Welcome to Monday. Brian is back and boy is he hungover from all the plastic fumes he was huffing last night. We have a quick discussion about playing with manly toys, using shivs, beavery children, the amount of data on Edward's hard drive, 1970s cinema, Katie's medicine and some certain side effects. Then, we delve into Brad Pitt's filmography, inject each other in the eyeballs and get into our game time topic. This week's roundtable is regarding blessings (and curses!) Having the crappy Netflix subscription sure can be a gift to your pocketbook so you have so much more money to buy Tide with your Discover Card! I guess if you won a large lottery jackpot you could afford regular Netflix, but then you might die. Grab a smoke and a coffee, relax and take a break this week only on Bubbles' Mushrooms! Follow the show on Instagram and TikTok @bubbmush and don't forget to send us an email at bubbmush@gmail.com
On today's podcast episode, we discuss whether “ChatGPT with Ads” will scare users away, the effectiveness of chatbot ads versus traditional search ads, and whether there's an ethical component to advertising during a search conversation compared with the blue-link format. Join Senior Director of Podcasts and host Marcus Johnson, along with Principal Analyst Nate Elliot and Analyst Marisa Jones. Listen everywhere, and watch on YouTube and Spotify. Subscribe to EMARKETER's newsletters. Go to https://www.emarketer.com/newsletters Follow us on Instagram at: https://www.instagram.com/emarketer/ For sponsorship opportunities, contact us: advertising@emarketer.com For more information, visit: https://www.emarketer.com/advertise/ Have questions or just want to say hi? Drop us a line at podcast@emarketer.com For a transcript of this episode, click here: https://www.emarketer.com/content/podcast-chatgpt-with-side-of-ads-will-ai-chatbots-ruin-or-reinvent-search-advertising-behind-numbers © 2026 EMARKETER Seedtag applies advanced AI to deliver privacy-first advertising at scale. As the creator of neuro-contextual advertising, Seedtag moves beyond traditional contextual targeting methods such as keywords and categories. Instead, Seedtag understands deeper signals of interest, intent, and emotion to create custom audiences based on a brand's objectives.
Everybody isn't streaming music as much as we think…
The Lowdown: Beware: Apple Pay email phishing scam. Why you should use an SSD with Final Cut Pro for iPad. Who's keeping iTunes alive?
2nd String: "Eco-anxious" women aren't adopting AI as fast as men. Who's really doing the work when AI replaces jobs? Are you using MoltBot AI on your machine?
For The Culture: Tech-ing While Black: Clarice Phelps - Chemist
The Hookup: How to use Apple Intelligence as a Chatbot
The Super Bowl Halftime Show hasn't been this tense since... anytime Beyoncé headlined it. This time we got the American MEGASTAR Bad Bunny vs. some old dude named *checks notes* Kid Rock??? That's right: Bad Bunny will be rocking one of the biggest stages in entertainment, and Kid Rock will be counter-performing somewhere for TPUSA. We got something to say about that one. Is AI planning to overthrow its human overlords? Claude-bots are socializing on an AI-only social media site, and the bots may be fed up with the humans already. Last Things: Don Lemon was in the news, but not for reporting it. It's not often an independent journalist makes major headlines, but when you catch the eyes of the current president, things could get jail-ish. Nicki Minaj's MAGA arc gets its anticlimactic epilogue. Jay-Z, Pusha T, and Eminem in the Epstein Files, but not really? #NickiMinaj #Superbowl #DonLemon #BadBunny
Welcome to Exponential View, the show where I explore how exponential technologies such as AI are reshaping our future. I've been studying AI and exponential technologies at the frontier for over ten years. Each week, I share some of my analysis or speak with an expert guest to shed light on a particular topic. To keep up with the Exponential transition, subscribe to this channel or to my newsletter: https://www.exponentialview.co/-----A week before OpenClaw exploded, I recorded a prescient conversation with Mustafa Suleyman, CEO of Microsoft AI and co-founder of DeepMind. We talked about what happens when AI starts to seem conscious – even if it isn't. Today, you get to hear our conversation. Mustafa has been sounding the alarm about what he calls "seemingly conscious AI" and the risk of collective AI psychosis for a long time. We discussed this idea of the "fourth class of being" – neither human, tool, nor nature – that AI is becoming, and all it brings with it. Skip to the best bits:(03:38) Why consciousness means the ability to suffer(06:52) "Your empathy circuits are being hacked"(07:23) Consciousness as the basis of rights(10:47) A fourth class of being(13:41) Why market forces push toward seemingly conscious AI(20:56) What AI should never be allowed to say(25:06) The proliferation problem with open-source chatbots(29:09) Why we need well-paid civil servants(30:17) Where should we draw the line with AI?(37:48) The counterintuitive case for going faster(42:00) The vibe coding dopamine hit(47:09) Social intelligence as the next AI frontier(48:50) The case for humanist superintelligence-----Where to find Mustafa:- X (Twitter): https://x.com/mustafasuleyman- LinkedIn: https://www.linkedin.com/in/mustafa-suleyman/- Personal Website: https://mustafa-suleyman.ai/Where to find me:- Substack: https://www.exponentialview.co/- Website: https://www.azeemazhar.com/- LinkedIn: https://www.linkedin.com/in/azhar- Twitter/X: https://x.com/azeemProduced by supermix.io and 
EPIIPLUS1 Ltd. Production and research: Chantal Smith and Marija Gavrilov. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
A panel of GOP lawmakers has blocked a million-dollar payment to the Department of Public Instruction following news of a four-day "junket" to the Wisconsin Dells. A proposal at the state capitol would put guardrails on AI chatbots when kids are using them. And, we hear from a Wisconsin native who's the premier American athlete in the Olympic biathlon.
Some people who say AI chatbots upended their lives and the lives of their loved ones are now turning to each other for support. Around the world, people are talking to AI chatbots, and these chats can sometimes lead to unhealthy emotional attachments or even breaks with reality. OpenAI, which makes ChatGPT, is facing several lawsuits alleging the chatbot contributed to mental health crises and even multiple suicides. An OpenAI spokesperson told NPR that they are "continuing to improve" ChatGPT's training to, quote, "recognize and respond to signs of mental or emotional distress, de-escalate conversations, and guide people toward real-world support." For sponsor-free episodes of Consider This, sign up for Consider This+ via Apple Podcasts or at plus.npr.org. Email us at considerthis@npr.org. This episode was produced by Audrey Nguyen and Karen Zamora. It was edited by Brett Neely and Courtney Dorning. Our executive producer is Sami Yenigun. Learn more about sponsor message choices: podcastchoices.com/adchoices NPR Privacy Policy
SaaStr 840: From 1 Agent to 20+: The Reality of Managing Multiple AI Agents Across Your GTM with SaaStr's CEO and CAIO Eight months and 20+ AI agents later, what does managing a multi-agent GTM stack actually look like day to day? SaaStr's CEO and Founder, Jason Lemkin, and SaaStr's Chief AI Officer, Amelia Lerutte, get candid about what's working, and what's not. $4.8M in additional pipeline later, AI agents deployed across go-to-market have helped double both deal volume and win rates. But here's what nobody talks about on LinkedIn: the 15-20 hours per week each spent maintaining agents, the messy flows holding it all together, and why you still can't outrun your own AI. They cover the 90/10 build vs. buy rule, why hyper-segmentation is the key to AI SDR success, what to demand from vendors before signing, and why bad context will always produce bad emails, whether a human or an AI writes them. Plus, they walk through how they built SaaStr's custom AI VP of Marketing agent to plan and execute every campaign for the year, grounded entirely in data. If you've deployed your first agent and are thinking about what comes next, or you're skeptical the whole thing works at all, this one's for you. --------------------- This episode is Sponsored in part by HappyFox: Imagine having AI agents for every support task — one that triages tickets, another that catches duplicates, one that spots churn risks. That'd be pretty amazing, right? HappyFox just made it real with Autopilot. These pre-built AI agents deploy in about 60 seconds and run for as low as 2 cents per successful action. All of it sits inside the HappyFox omnichannel, AI-first support stack — Chatbot, Copilot, and Autopilot working as one. Check them out at happyfox.com/saastr --------------------- Hey everybody, the biggest B2B + AI event of the year will be back - SaaStr AI in the SF Bay Area, aka the SaaStr Annual, will be back in May 2026. 
With 68% VP-level and above, 36% CEOs and founders, and a growing 25% AI-first professionals, this is the very best of the best S-tier attendees and decision makers that come to SaaStr each year. But here's the reality, folks: the longer you wait, the higher ticket prices get. Early bird tickets are available now, but once they're gone, you'll pay hundreds more, so don't wait. Lock in your spot today by going to podcast.saastrannual.com to get my exclusive discount. SaaStr AI SF 2026. We'll see you there.
What is the Catholic response to AI? Pope Leo XIV, in his recent message for the 60th World Day of Social Communications, takes a critical look at AI: namely, how it is "encroach[ing] upon the deepest level of communication, that of human relationships." Pope Leo asserts that our faces and voices are sacred, a witness to the unique, singular dignity of each human person. With AI already mimicking persons in social media and through deepfakes, the Pope calls for overcoming the "anthropomorphizing tendencies of AI systems." In this podcast, Marc Barnes and Reuben Slife go line-by-line through the message of Pope Leo, and discuss its implications for the Catholic approach to AI.