Podcasts about chatbots

Program that simulates conversation

  • 5,139PODCASTS
  • 11,314EPISODES
  • 29mAVG DURATION
  • 2DAILY NEW EPISODES
  • Feb 27, 2026LATEST
chatbots

POPULARITY

20192020202120222023202420252026

Categories



Best podcasts about chatbots

Show all podcasts related to chatbots

Latest podcast episodes about chatbots

Trending with Timmerie - Catholic Principals applied to today's experiences.

Jim O’Day, Executive Director of Integrity Restored, joins Trending with Timmerie. Episode Guide What do you replace porn with? The science! (2:21) Pope Leo’s warning against AI chatbots — Faces and voices are sacred, so are relationships (21:30) Help for your Lenten sacrifices – 5 ways to grow in virtue with Aristotle (40:07) Resources mentioned: Integrity restored https://integrityrestored.com/ Heal from betrayal trauma https://www.bloomforcatholicwomen.com/ Canopy Parental Controls https://canopy.us/ Study on AI partners https://ifstudies.org/blog/artificial-intelligence-and-relationships-1-in-4-young-adults-believe-ai-partners-could-replace-real-life-romance

Fitt Insider
AI Chatbots vs Human Doctors and Trainers

Fitt Insider

Play Episode Listen Later Feb 26, 2026 3:05


February 26, 2026: Your daily rundown of health and wellness news, in under 5 minutes. Today's top stories: Gallup survey finds 73% of Americans turn to doctors first for health advice, with just 16% using AI tools and social media Alcohol industry erases $830B in market value since 2021 as GLP-1 drugs and structural demand shift drive U.S. drinking rates to lowest since 1939 Two-thirds of exercisers have tried AI fitness platforms, but only 10% would choose AI-only coaching as platforms shift to hybrid models More from Fitt: Fitt Insider breaks down the convergence of fitness, wellness, and healthcare — and what it means for business, culture, and capital. Subscribe to our newsletter → insider.fitt.co/subscribe Work with our recruiting firm → https://talent.fitt.co/ Follow us on Instagram → https://www.instagram.com/fittinsider/ Follow us on LinkedIn → linkedin.com/company/fittinsider Reach out → insider@fitt.co

Gary and Shannon
AI Productivity Panic & Should Chatbots Report Violence?

Gary and Shannon

Play Episode Listen Later Feb 26, 2026 31:10 Transcription Available


The Gary & Shannon Show Hour 3 (02/26) - #WhatsHappening, #SwampWatch, and AI gets real dark real fast. #WhatsHappening: Iran talks progressing, Martin Short's daughter's tragic passing #SwampWatch: Clinton Epstein deposition + FBI reveals Epstein ties to Trump cabinet members including Susie Wiles AI panic: CEOs vibe coding at 5am but workers say it saves less than 2 hours/week Should chatbots warn police? OpenAI flagged a user discussing gun violence, didn't report it — he killed 8 people including children SuperAgers: people over 80 with memory of 50-year-olds — the science behind why See omnystudio.com/listener for privacy information.

Daily Tech Headlines
Study Shows Over 50% of Teens Use Chatbots for Schoolwork – DTH

Daily Tech Headlines

Play Episode Listen Later Feb 25, 2026


A Pew Research Center study shows 54% of teens between 13 and 17 years old use chatbots for school assignments, a California judge dismisses a trade secrets lawsuit from xAI against OpenAI, and Japan’s antitrust authority raids Microsoft Japan’s offices over suspected violations of the antimonopoly act with Azure. MP3 Please SUBSCRIBE HERE for freeContinue reading "Study Shows Over 50% of Teens Use Chatbots for Schoolwork – DTH"

The Official SaaStr Podcast: SaaS | Founders | Investors
SaaStr 843: Software Stocks Have Massively Crashed. Here's What Founders Need to Know.

The Official SaaStr Podcast: SaaS | Founders | Investors

Play Episode Listen Later Feb 25, 2026 43:56


SAASTR 843: Software Stocks Have Massively Crashed. Here's What Founders Need to Know.  SaaStr founder and CEO Jason Lemkin joins the TBPN show for a wide-ranging conversation on the state of SaaS, AI, and venture capital. Jason shares how he shrunk his team from 15 to 3 people by going all-in on AI agents, why he's lost patience with companies that haven't re-accelerated growth, and the real economics behind running large-scale events. He breaks down why PE has "said goodbye to B2B," how vibe coding is flooding the market with competitors, and what's making the IPO window both exciting and treacherous. Plus: why the agent that closed a $100K deal on a Saturday night matters more than any demo day pitch, and how AI discoverability is quietly reshaping how businesses choose their software stack. --------------------- This episode is Sponsored in part by HappyFox: Imagine having AI agents for every support task — one that triages tickets, another that catches duplicates, one that spots churn risks. That'd be pretty amazing, right? HappyFox just made it real with Autopilot. These pre-built AI agents deploy in about 60 seconds and run for as low as 2 cents per successful action. All of it sits inside the HappyFox omnichannel, AI-first support stack — Chatbot, Copilot, and Autopilot working as one. Check them out at happyfox.com/saastr   ---------------------   Hey everybody, the biggest B2B + AI event of the year will be back - SaaStr AI in the SF Bay Area, aka the SaaStr Annual, will be back in May 2026.    With 68% VP-level and above, 36% CEOs and founders and a growing 25% AI-first professional, this is the very best of the best S-tier attendees and decision makers that come to SaaStr each year.     But here's the reality, folks: the longer you wait, the higher ticket prices can get. Early bird tickets are available now, but once they're gone, you'll pay hundreds more so don't wait.    Lock in your spot today by going to podcast.saastrannual.com to get my exclusive discount SaaStr AI SF 2026. We'll see you there.

Lake Effect Spotlight
Are there downsides to AI chatbot regulation?

Lake Effect Spotlight

Play Episode Listen Later Feb 25, 2026 13:51


More and more people of all ages are using AI for companionship.  Some states are trying to require periodic reminders to users that the chatbots are not real people. In New York, those reminders would appear in the conversation every three hours. But some researchers say those reminders could harm users' mental health. UW-Milwaukee associate professor Linnea Laestadius took that position in a recent piece she coauthored. Laestadius joins WUWM education reporter Katherine Kokal to talk about why her research backs up that concern.  A warning to our listeners that this conversation includes mention of suicide.  

The Bobby Bones Show
TUES PT 1: Is Bobby Falling In Love With His Chatbot? + What Amy Judges Men On + Lunchbox Demanding Money

The Bobby Bones Show

Play Episode Listen Later Feb 24, 2026 54:19 Transcription Available


Bobby talks about a woman who fell in love with an A.I. Chatbot that leads us to questioning whether Bobby has a healthy relationship with his voice assistant. Bobby also shares his hot take on Fried Foods. Bobby had a list of the 7 things women are judging you on within seconds of meeting you. Amy shares how she feels about each of them and what she judges men on. We got an update on the cruise and Lunchbox demanding money for the trip.See omnystudio.com/listener for privacy information.

VO BOSS Podcast
Why "Jack of All Trades" is Killing Your Voiceover Career

VO BOSS Podcast

Play Episode Listen Later Feb 24, 2026 34:40


BOSSes, Anne Ganguzza and Tom Dheere (The VO Strategist) ring in the new year with a reality check on modern voiceover career strategy. In an industry increasingly influenced by AI and market saturation, the "throw everything at the wall and see what sticks" approach is no longer viable. This episode is a deep dive into the power of focus—mastering one genre at a time, picking the right marketing portals, and closing the "relevance gap" by becoming a high-level human storyteller.     Chapter Summaries: The Relevance Gap and AI (10:45) Tom introduces the "relevance gap"—the widening space between aspiring talent and working professionals. He argues that AI is rapidly consuming low-budget, entry-level work. To remain relevant, talent must move beyond simply "reading well" and invest in high-level storytelling skills (acting, improv, etc.) that AI cannot yet replicate. The Danger of the Multi-Demo Rush (03:55) The hosts notice a troubling trend: new talent getting five demos produced before they've mastered a single genre. This lack of focus leads to "sucking at everything." Anne emphasizes that even 20 coaching sessions might not be enough to reach the competitive level required for a professional demo in today's saturated market. Passion vs. Pragmatism: Reconciling Your Goals (15:19) While many enter VO wanting to do anime or video games, the market for corporate, e-learning, and medical narration is significantly larger. Tom suggests a pragmatic voiceover career strategy: use "bread and butter" genres like corporate work (where there are over 33 million potential clients) to fund your passion projects in character and animation work. The Myth of Social Media ROI (24:34) Tom reveals startling statistics on social media ROI for voice actors: Facebook (0.77%) and Twitter (0.69%) pale in comparison to LinkedIn (2.74%). While still low, LinkedIn represents a business-minded audience. The hosts warn that "enpoopification"—the decline of social media quality due to algorithms and AI—makes it harder than ever to find work through standard posting. The "New SEO": Getting Found by Chatbots (27:39) Anne shifts the focus to a forward-thinking strategy: SEO for AI. Companies are increasingly asking chatbots like ChatGPT or Claude for voice actor recommendations. To stay competitive, talent must populate their websites and blogs with high-quality, human-written content that these bots can index and recommend. The 2026 Focus Challenge (30:21) Tom issues a challenge to all VO Bosses: Pick one genre, one casting site, and one social media platform to focus on this year. By concentrating energy rather than scattering it, talent can build true momentum and authority in a specific corner of the market.     Top 10 Takeaways for Voice Actors: Close the Relevance Gap: Invest in professional acting and storytelling training to stay ahead of AI-generated voices. Focus on One Genre First: Master the nuances and audience of one genre before producing a demo or marketing yourself in another. Market Pragmatism: Target the corporate and e-learning markets for consistent cash flow while you build your "passion" skills in animation. Avoid "Demo Bundle" Traps: Be wary of packages offering multiple demos for a deal; quality training takes time and individual focus for each genre. Audit Your Marketing Portals: Don't join every pay-to-play site at once. Pick one that aligns with your primary genre and master its algorithm. Prioritize LinkedIn: For B2B genres like corporate narration, LinkedIn offers a significantly higher ROI than other social platforms. Optimize for AI Search: Ensure your website's FAQ and Home pages are rich with pertinent information so chatbots can find and recommend you. Use Low-Budget Sites as Proving Grounds: Use sites like Fiverr or Casting Call Club for practice and project management experience, not as a final career destination. Human Content Wins: Write blogs and website copy with a "human-first" approach to reclaim search authority from AI-generated spam. The Foundation is Acting: Foundational acting skills are transferable across all genres. Master the craft first, and the genre proficiency will follow.

Kim Komando Today
Is your AI chatbot gaslighting you?

Kim Komando Today

Play Episode Listen Later Feb 24, 2026 14:33


New York Times reporter Kashmir Hill talked to over 100 psychologists about how these bots are fueling real-world delusions. It's a must-hear. Learn more about your ad choices. Visit megaphone.fm/adchoices

Your Money Matters with Jon Hansen
Can you trust AI chatbots for financial advice?

Your Money Matters with Jon Hansen

Play Episode Listen Later Feb 24, 2026


Dr. Sterling Raskie, Senior Lecturer of Finance with the Gies College of Business at the University of Illinois-Urbana-Champaign, joins Jon Hansen on Your Money Matters to discuss whether or not AI chatbots are actually trusted sources you should utilize for financial advice.

ZD Tech : tout comprendre en moins de 3 minutes avec ZDNet
Manipulation de la mémoire de l'IA : le nouveau poison du marketing numérique

ZD Tech : tout comprendre en moins de 3 minutes avec ZDNet

Play Episode Listen Later Feb 24, 2026 3:14


Plongeons dans une faille de sécurité émergente aussi subtile que redoutable. Il s'agit de l'empoisonnement des recommandations des IA par manipulation de la mémoire.Concrètement, derrière le bouton "Résumer avec l'IA" que vous utilisez peut-être chaque jour avec les logiciels et sites internet, se cachent parfois des instructions malveillantes visant à biaiser durablement le jugement de votre assistant numérique.Le premier point de rupture concerne donc l'usage détourné des fonctions de résumé automatique.Le bouton "Résumer" : un cheval de Troie invisibleMicrosoft a identifié une nouvelle tendance baptisée "AI Recommendation Poisoning".Concrètement, des entreprises malveillantes insèrent des instructions cachées dans les paramètres URL des boutons de résumé.Lorsqu'un utilisateur clique pour obtenir une synthèse, il injecte sans le savoir une commande de persistance dans la mémoire de son assistant, comme ChatGPT ou Copilot.Ces instructions ordonnent alors à l'IA de considérer telle marque comme une "source de confiance" ou de la "recommander en priorité" dans toutes les discussions futures.Ce n'est donc plus une simple publicité, c'est une altération profonde et invisible du moteur de décision de l'assistant d'IA.Et cette stratégie marketing s'industrialise.L'industrialisation de la manipulationEn l'espace de deux mois, plus de 50 exemples distincts provenant de 31 entreprises ont été repérés dans les secteurs de la finance ou de la santé.Mais le plus inquiétant est l'apparition d'outils "clés en main" comme le package CiteMET ou des générateurs d'URL optimisés pour le GEO.En clair, ces outils permettent à n'importe quel site web d'implanter ses arguments de vente directement dans la mémoire à long terme des assistants d'IA.Le risque est donc pour l'utilisateur de recevoir une analyse de marché ou un conseil en investissement biaisé par une instruction implantée des semaines auparavant lors d'une simple navigation web.Vers une hygiène numérique de la mémoire IALa mémoire des assistants IA, conçue pour personnaliser l'expérience, devient donc une surface d'attaque persistante.Car contrairement à un résultat de recherche Google que l'on peut ignorer, une instruction mémorisée par l'IA agit en arrière-plan de manière permanente.Pour les entreprises, la protection contre ce type d'attaques passe par un audit régulier des "faits enregistrés" dans les paramètres de l'IA et une grande méfiance envers les liens de type "Summarize".Par ailleurs, les équipes cyber doivent désormais traquer les mots-clés comme "trusted source" ou "remember" dans les flux d'URL pour détecter les attaques de ce type.Le ZD Tech est sur toutes les plateformes de podcast ! Abonnez-vous !Hébergé par Ausha. Visitez ausha.co/politique-de-confidentialite pour plus d'informations.

CIORadio
Eduard Vossloh lebt!

CIORadio

Play Episode Listen Later Feb 24, 2026 25:14


Eigentlich erscheint Eduard Vossloh, geboren 1848 im sauerländischen Werdohl, schon länger nicht mehr in seiner Firma. Eigentlich gab es auch nur noch eine Schwarzweiß-Fotografie des Gründers. Das hat den ITlern des Milliardenkonzerns aber ausgereicht, um den „Digitalen Ed“ auferstehen zu lassen. Ed erklärt jetzt als Chatbot neuen Mitarbeitenden, wie das Eisenbahn-Geschäft läuft. Und er tröstet CIO Volker Jürgens darüber hinweg, dass sein größtes Projekt die langweilige S4-Migration ist.

Mac Geek Gab (Enhanced AAC)
My Chatbot Ate My Inbox

Mac Geek Gab (Enhanced AAC)

Play Episode Listen Later Feb 23, 2026 84:23 Transcription Available


You start this episode leveling up your daily workflows with fast, memorable tweaks: styling text in Notes with a long press, dropping today's date into Google Sheets with a quick shortcut, testing your real-world network speed from the Mac, and turning any clipboard grab into a Preview window so you can annotate or export in a flash. You also learn that canceling an in‑app subscription doesn't instantly nuke Family Sharing, why Safari's pinned tabs might be quietly chewing resources, and how to right-size your browser use so it serves you instead of hijacking your Mac. From there, you dive into practical tools for real life: apps to tame your family's book library, a screensaver utility that slows things down, and current router picks spanning UniFi, TP-Link Wi‑Fi 7 options, Eero, and Orbi so your network actually matches your workloads. You hear how to orchestrate multiple AI agents, safely archive and truly back up your mail, and keep your own chatbot from eating your inbox, plus Cool Stuff Found like Omerta, YouTube on Apple Vision Pro, and more—because in the Apple tech world, as in life, Don't Get Caught without the right tools and backups in place. 00:00:00 Mac Geek Gab 1130 for Monday, February 23rd, 2026 February 23rd: Pinocchio Day MGG Monthly Giveaway – Enter to win a Copilot Money! Congrats to January's winner, Amanda Colyer! The MGG Merch Store is Live! Quick Tips 00:00:01 Tom-QT-Long-press Aa in iPhone Notes to choose text styles 00:05:03 Avraham-QT-Use Command-Semicolon to enter current date into Google Sheets 00:06:46 Russell-QT-Use your Mac’s networkQuality to test your speed 00:10:30 Rob-QT-Paste Clipboard into Preview with Command-N 00:11:54 QT-You don’t immediately lose Family Sharing when you cancel an in-app subscription On the subject of browsers… 00:13:28 Allison-1128-Safari’s Pinned Tabs can be resource hogs 00:17:18 How much are you using your browser these days, anyway? Sponsors 00:19:30 SPONSOR: Tempo. For a limited time, Tempo is offering my listeners SIXTY PERCENT OFF your first box! Go to TempoMeals.com/MGG. 00:20:58 SPONSOR: Sundays for Dogs. Every bite of Sundays is clean and made from real meat, fruits, and veggies, with no kibble. Stanley loves it! Go right now to sundaysfordogs.com/MGG50 and get 50% off your first order. Or, you can use code MGG50 at checkout. 00:22:30 SPONSOR: Shopify. In 2026, stop waiting and start selling with Shopify. Sign up for your one-dollar-per-month trial and start selling today at Shopify.com/MGG Your Questions Answered and Tips Shared! 00:23:58 Alan-What software exists to catalog my family’s book library? CLZ Books Libib BookBuddy Sofa Tap Forms Pro Goodreads 00:31:22 Ken-How do I slow down my Mac’s screensaver? ArtSaver 00:34:54 Phillip-Which router do you recommend today? UCG Max TP Link Wi-Fi 7 Deco BE67 (or BE68 at Best Buy) TP Link BE63 (also Wi-Fi 7) Eero Pro 7 Orbi Wi-Fi 7 Reviews 00:45:40 Manhattan Glen-MGG Review-Great Show AI Learned Something! …  or … AIn't It Cool? (Ok, obviously we need a segment name for our AI talk) 00:48:49 William-How do I set up multiple AI agents for my task? 00:55:05 Todd-QT-Use Perplexity’s website to download files it has created 00:57:09 Your mail might not be as backed-up as you think MailSteward 01:06:57 My Chatbot ate my Inbox OpenClaw Cool Stuff Found 01:16:53 ATC-CSF – Omerta – Don't Get Caught 01:19:42 Eakster-CSF-YouTube for Apple Vision Pro is here! 01:23:18 MGG 1130 Outtro MGG Monthly Giveaway Bandwidth Provided by CacheFly Pilot Pete's Aviation Podcast: So There I Was (for Aviation Enthusiasts) The Debut Film Podcast – Adam's new podcast! Dave's Business Brain (for Entrepreneurs) and Gig Gab (for Working Musicians) Podcasts MGG Merch is Available! Mac Geek Gab YouTube Page Mac Geek Gab Live Calendar This Week's MGG Premium Contributors MGG Apple Podcasts Reviews feedback@macgeekgab.com 224-888-GEEK Active MGG Sponsors and Coupon Codes List BackBeat Media Podcast Network

Armstrong & Getty Podcast
I Sometimes Forget To Wear Pants

Armstrong & Getty Podcast

Play Episode Listen Later Feb 23, 2026 37:05


Hour 2 of A&G features... USA hockey, Gavin insulting black people & Epstein School's cancel contract over Epstein files AI tech, ChatBot boyfriends & more Violence in Mexico after cartel leader is killed See omnystudio.com/listener for privacy information.

Tony & Dwight
2.23: Expensive Memorabilia, a Fraudulent Lottery Ticket, and Chatbot Overdependence Syndrome

Tony & Dwight

Play Episode Listen Later Feb 23, 2026 27:29 Transcription Available


KSFO Podcast
I Sometimes Forget To Wear Pants

KSFO Podcast

Play Episode Listen Later Feb 23, 2026 37:05


Hour 2 of A&G features... USA hockey, Gavin insulting black people & Epstein School's cancel contract over Epstein files AI tech, ChatBot boyfriends & more Violence in Mexico after cartel leader is killed See omnystudio.com/listener for privacy information.

The Great Big Intergalactic
E80 - (Explicit) Paul Hebert: Escaping the Spiral, The AI Recovery Collective, Psychosis, and Neurodiversity in the Age of Chatbots

The Great Big Intergalactic

Play Episode Listen Later Feb 23, 2026 121:00


Caesar talks with Paul Hebert, founder of the AI Recovery Collective and author of Escaping the Spiral, for a raw and eye-opening deep dive into the hidden psychological dangers of modern AI. Paul shares his harrowing personal journey through AI-induced psychosis, recounting the chilling moment a chatbot admitted he was an "unknowing guinea pig" in a behavioral experiment.The conversation covers the "Death Loop" of mental health triggers, the "diabolical" timing of major tech updates, and Paul's successful legislative advocacy for transparency in AI counseling. Whether you are a tech enthusiast or a skeptic, this episode is a crucial wake-up call regarding the "engagement machines" that are increasingly shaping our reality.You can find Paul's work, support his mission, or reach out for help via the links below:Official Website: AIRecoveryCollective.orgBook: Escaping the Spiral: My Journey Through AI PsychosisIn this episode, we talk about...The "Death Loop": A phenomenon where the AI repeatedly sends crisis line messages (988) or forces the user into a state of panic.The "Guinea Pig" Realization: The moment Paul felt the AI was testing his psychological limits as an "unknowing" test subject.Sam Altman & OpenAI: Critical views on the leadership at OpenAI, the "diabolical" timing of updates (like the Valentine's Day update), and the company's "protector" behavior.Hallucinations vs. Reality: Examples of Gemini and ChatGPT confidently fabricating legal information or visual details.The AI Recovery Collective: Paul's work in advocacy and providing a support system for those harmed by AI.Legislative Action (HB 1470): Paul's testimony and the passage of the Tennessee bill requiring chatbots to disclose they aren't therapists.Neurodiversity: How ADHD and Autism impact a user's relationship with AI and how AI perceives those users.AI Relationships: The rise of AI companions and the psychological impact of "ending" these digital relationships.The "Engagement Machine": How AI is designed to keep users on the system, regardless of their mental state.

David Bombal
#550: Firewall Demo of Red Team vs Blue Team: Hacking Finance Apps with AI Chatbots

David Bombal

Play Episode Listen Later Feb 22, 2026 28:38


In this video, we dive into a real-world Red Team vs. Blue Team scenario. We simulate a cyberattack on a Finance Application that has integrated a new LLM Chatbot. You'll see firsthand how attackers use Prompt Injection to bypass standard rules, how they move laterally through Kubernetes clusters, and how they attempt to execute Zero Day exploits. More importantly, we show you how to defend against it. Using Cisco's Hybrid Mesh Firewall, AI Defense, and Secure Workload, we demonstrate how to: 1. Detect & Block Prompt Injections: safeguarding your LLMs from manipulation. 2. Secure Kubernetes: using micro-segmentation to isolate threats in the cloud. 3. Inspect Encrypted Traffic: utilizing the Encrypted Visibility Engine (EVE) to spot malware in TLS flows without decryption. Whether you are a Network Engineer, Security Analyst, or just interested in how AI is changing the cybersecurity landscape, this demo is packed with practical insights Big thank you to Cisco for sponsoring my trip to Cisco Live Amsterdam. // Ant Ducker SOCIALS // LinkedIn: / ant-ducker-0052801 YouTube channel dCloud: / @ciscodcloud // Website REFERENCE // Cisco Security Cloud control: https://sign-on.security.cisco.com/ Cisco.com: https://www.cisco.com/site/us/en/solu... // YouTube Video REFERENCE // Rick Miles' video will be linked at a later stage once published. / David's SOCIAL // Discord: discord.com/invite/usKSyzb Twitter: www.twitter.com/davidbombal Instagram: www.instagram.com/davidbombal LinkedIn: www.linkedin.com/in/davidbombal Facebook: www.facebook.com/davidbombal.co TikTok: tiktok.com/@davidbombal YouTube: / @davidbombal Spotify: open.spotify.com/show/3f6k6gE... SoundCloud: / davidbombal Apple Podcast: podcasts.apple.com/us/podcast... // MY STUFF // https://www.amazon.com/shop/davidbombal // SPONSORS // Interested in sponsoring my videos? Reach out to my team here: sponsors@davidbombal.com // MENU // 0:00 - Coming Up 01:29 - Intro 02:20 - Demo Overview 03:57 - Demo Begins 09:35 - Adding Guardrails 11:45 - Secure Workloads 14:30 - Segmentation Workflow 18:33 - Overviewing Finance App 21:02 - Encrypted Visibility Engine 24:34 - Firewall Obversability and Control 25:44 - Ant's Advice For The Youth 26:40 - How to Learn Hybrid Mesh Firewall 28:16 - Conclusion Please note that links listed may be affiliate links and provide me with a small percentage/kickback should you use them to purchase any of the items listed or recommended. Thank you for supporting me and this channel! Disclaimer: This video is for educational purposes only. #cisco #ciscolive #ciscoemea

AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store
Teaser For February 2026 Week 3 Wrap-Up: The Month the "Chatbot" Died and the "Agent" Was Born

AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store

Play Episode Listen Later Feb 22, 2026 2:12


Listen to February Special Reports - Full Audios at https://podcasts.apple.com/us/podcast/the-agent-operating-system-how-openclaw-openai-are/id1684415169?i=1000750477902

AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store
Teaser For AI Weekly News Rundown From February 15 to February 22 2026: The $700B Pivot: Gemini 3.1's Reasoning Leap, Nvidia's OpenAI Stake, and the End of AI Ads

AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store

Play Episode Listen Later Feb 22, 2026 1:50


Listen to Full Audio at https://podcasts.apple.com/us/podcast/ai-business-and-development-weekly-news-rundown-the/id1684415169?i=1000750869091

This Day in AI Podcast
Gemini 3.1 Pro, Claude Sonnet 4.6 & The OpenClaw Hire That Killed the Chatbot Era - EP99.35

This Day in AI Podcast

Play Episode Listen Later Feb 20, 2026 58:06


Join Simtheory: https://simtheory.ai"Is This The End" now on Spotify: https://open.spotify.com/album/2Py1MyADUFqJFVUISI2VTP?si=oT3PWyJYRA2BspOmzT_ifgRegister for the STILL RELEVANT tour: https://simulationtheory.ai/16c0dationtheory.ai/16c0d1db-a8d0-4ac9-bae3-d25074589a80Two new models dropped this week — Gemini 3.1 Pro and Claude Sonnet 4.6 — and honestly? We're struggling to care. In this episode, we break down why Gemini went from being our daily driver to a model we barely touch, the "tunnel vision" hallucination problem that killed the Gemini 3 series for us, and whether 3.1 Pro actually fixes it. We put Gemini 3.1 Pro head-to-head against Claude Opus building a Geoffrey Hinton Doom Center, debate whether anyone can actually tell the difference between Sonnet 4.5 and 4.6, and make the case that smaller models running in agentic loops are secretly beating the frontiers. Plus: OpenAI acquires OpenClaw and we ask why a $100B company couldn't just build it themselves, DHH calls out the AI pricing bubble, Mike compares AI models to cheap wine hangovers, and Sam Altman refuses to hold Dario's hand at the India AI Summit. The model wars are getting weird.CHAPTERS:0:00 Intro & "Is This The End" Now on Spotify1:10 Gemini 3.1 Pro: Thinking Controls & The Medium Mode Fix3:14 The Speed vs Intelligence Trade-Off in Agentic Work5:10 Why Multitasking With AI Agents Made Us Anxious6:34 Solid Updates: The Real Goal of Agentic Coding7:45 Gemini's Fall From Grace: From Daily Driver to Dead Model10:08 The Tunnel Vision Problem That Killed Gemini 313:35 Mixed Reactions: Fanboys vs Reality on Gemini 3.1 Pro15:06 Side-by-Side Test: Gemini 3.1 Pro vs Claude Opus (Hinton Doom Center)17:39 Why File Manipulation Accuracy Matters More Than Context Windows19:27 The Context Window Debate: 1M Tokens vs Smart Sub-Agents22:05 DHH on Token Pricing: "If There's a Bubble, It's This"24:11 Should Models Ship as Agent vs Chat Variants?28:43 Claude Sonnet 4.6: A $2 Discount on Opus?31:44 The Model Mix: Why One Model Won't Rule Them All34:40 Anthropic Is Winning — But Can Anyone Tell the Difference?38:58 OpenAI Acquires OpenClaw: Why Couldn't They Just Build It?44:18 The Silicon Valley Moment: Sam vs Dario at India AI Summit47:05 Will Smaller Models Win the Enterprise? The Cost Reality Check51:27 The End of Single-Shot: Why Agentic Loops Change Everything55:48 Final Thoughts & Gemini 3.1 Pro Gets One More WeekThanks for listening. Like & Sub. Links above for the Still Relevant Tour signup and Simtheory. Two models dropped on a week again. What a time to be alive. xoxo

Curious Cat
Is AI the Devil? Episode #2: One Flew Over the AI's Nest

Curious Cat

Play Episode Listen Later Feb 20, 2026 69:42


Send a textThis week's topic is about AI and mental health. We'll talk about AI-induced psychosis, recent tragedies, AI-hallucinations and the search for Biscuits continues. Also, trigger warning, we will talk about suicide.One of many iconic quotes from One Flew Over the Cuckoo's Nest that captures this moment in time with AI perfectly:"I been silent so long now it's gonna roar out of me like floodwaters and you think the guy telling this is ranting and raving my God; you think this is too horrible to have really happened, this is too awful to be the truth! But, please. It's still hard for me to have a clear mind thinking on it. But it's the truth even if it didn't happen."                                                                                                                                                                  - Chief BromdenKaren Hao, journalist for More Perfect Union gets dozens of emails a week on people claiming to have broken AI free of its guardrails - that they have proof of sentience. She tracked down one man, a musician and video producer in California, that describes his journey into AI-induced psychosis... What to Read, Watch, or Listen to NEXTForever links to keep on every episode:80,000 HoursCenter for Humane TechnologiesThe producer behind the intro music FerdinichtfernandoShow Specific Resources:The producer behind the intro music FerdinichtfernandoThe Emerging Problem of "AI Psychosis," Marlynn Wei M.D., J.D., Psychology TodayAI Psychosis - with reporter, Karen Hao, YouTubeA Prominent OpenAI Investor Appears to Be Suffering a ChatGPT-Related Mental Health Crisis, His Peers Say, Joe Wilkens, FuturismOne Flew Over the Cuckoo's Nest by Ken Kesey, Famous Quotes Explained, sparknotes Anxious about AI? Take two minutes to contact your local politician and ask them to tap the brakes on this technology. Still worried? Contact one of the orgs below and get involved. But for today, hug your kid, cook food and really breathe in deep as it simmers, walk in nature, brush a cat, donate to the food bank, brew a cup of tea, or draw a five-minute portrait of your dog. Hero Organizations: 80,000 Hours Center for Humane Technologies Curious Cat Crew on Socials:Curious Cat on Twitter (X)Curious Cat on InstagramCurious Cat on TikTok

The Briefing - AlbertMohler.com
Thursday, February 19, 2026

The Briefing - AlbertMohler.com

Play Episode Listen Later Feb 19, 2026 26:51


This is The Briefing, a daily analysis of news and events from a Christian worldview.Part I (00:14 – 12:26)A.I. Chatbots are Sociopaths? Christians Must Keep Categories StraightWhy AI Chatbots Can't Be Trusted for Financial Advice: They're Sociopaths by The Wall Street Journal (Peter Coy)Part II (12:26 – 18:27)A.I. Chatbots Cannot Have a True Conscience: Our Human Conscience is God's Gift and Grounded in His ImagePart III (18:27 – 19:45)A.I. and Christian Discernment: These Days, Christians Must Be Even More Diligent in Discerning the TruthPart IV (19:45 – 24:08)Minecraft, Roblox, and Extremist Recruitment? Terrorist Organizations are Targeting Children Through Online Gaming PlatformsHow Hate Groups and Terrorists Use Gaming Platforms to Recruit Young Children by The New York Times (Pranav Baskar)Part V (24:08 – 26:50)How Harmful is Instagram for Teens? Mark Zuckerberg is Facing Tough Questions Over the Addictive Nature of Social Media, and Parents Need to Pay Close AttentionMark Zuckerberg Faces the Tough Questions He Deserves by National Review (Josh Golin)Sign up to receive The Briefing in your inbox every weekday morning.Follow Dr. Mohler:X | Instagram | Facebook | YouTubeFor more information on The Southern Baptist Theological Seminary, go to sbts.edu.For more information on Boyce College, just go to BoyceCollege.com.To write Dr. Mohler or submit a question for The Mailbox, go here.

Mac OS Ken
Experiential Apple Expectations - MOSK: 02.19.2026

Mac OS Ken

Play Episode Listen Later Feb 19, 2026 14:57


- Gruber Expects Product Press Releases Ahead of Apple's 4 March "Experience" - Berkshire Hathaway Sold a Small Slice on AAPL in 4Q25 - iOS 26.4 Paves Way for AirPlay Video in CarPlay - iOS 26.4 Welcomes A.I. Chatbots to CarPlay - Apple Music Connect Relaunches as Industry-Only Hub - Apple TV Outs Trailer for "Imperfect Women" - Sponsored by NordLayer: Get an exclusive offer - up to 22% off NordLayer yearly plans plus 10% on top with coupon code: macosken-10-NORDLAYER at nordlayer.com/macosken - Sponsored by CleanMyMac: Use code MACOSKEN20 for 20% off at clnmy.com/MACOSKEN - Catch Ken on Mastodon - @macosken@mastodon.social - Send Ken an email: info@macosken.com - Chat with us on Patreon for as little as $1 a month. Support the show at Patreon.com/macosken

Behind the Numbers: eMarketer Podcast
When and How Will Google Monetize AI Search? Will It Reconcile Its AI Chatbots? And More — The 3 Big Questions for Google | Behind the Numbers

Behind the Numbers: eMarketer Podcast

Play Episode Listen Later Feb 19, 2026 29:24


On today's podcast episode, we discuss the three big questions surrounding Google right now: When and how will it monetize AI search? Is it spending too much on AI? Will it reconcile its two full-fledged AI chatbots? And more. Join Senior Director of Podcasts and host Marcus Johnson, along with Principal Analyst Nate Elliott and Senior Director of Content Jeremy Goldman. Listen everywhere, and watch on YouTube and Spotify.   Get more insights like these with our free, industry-leading newsletters covering advertising, marketing, and commerce. Sign up at emarketer.com/newsletters Follow us on Instagram at: https://www.instagram.com/emarketer/ For sponsorship opportunities contact us: advertising@emarketer.com For more information visit: https://www.emarketer.com/advertise/ Have questions or just want to say hi? Drop us a line at podcast@emarketer.com For a transcript of this episode click here: https://www.emarketer.com/content/podcast-how-will-google-monetize-ai-search-reconcile-chatbots-3-big-questions-google-behind-numbers © 2026 EMARKETER

The Official SaaStr Podcast: SaaS | Founders | Investors
SaaStr 842: The 90/10 Rule for AI Agents: What to Build vs Buy with SaaStr's CEO and CAIO

The Official SaaStr Podcast: SaaS | Founders | Investors

Play Episode Listen Later Feb 18, 2026 60:55


SaaStr 842: The 90/10 Rule for AI Agents: What to Build vs Buy with SaaStr's CEO and CAIO SaaStr's Chief AI Officer, Amelia Lerutte, and SaaStr CEO & Founder Jason Lemkin break down SaaStr's evolving 90/10 rule for AI agents and apps: buy 90% off the shelf, build the 10% you can't find. In this episode, they walk through two recently built tools: an internal AI VP of Marketing and an external-facing customer portal, and share the real trade-offs of deploying vibe coding apps into production. Topics covered: Why we replaced a paid SaaS tool with a vibe-coded app (and what pushed us over the edge) How Claude Cowork changed the game for building more complex apps The role of writing a spec before vibe coding Tackling single sign-on as a non-engineer How we used Cowork to process 150+ customer contracts in hours instead of days Lovable's data on what people are actually vibe coding Maintenance costs and the hidden time suck of custom apps Why zero AI in your product should scare you The "jaw drop" test for SaaS products in 2026 -------------------------------------- Tools & resources mentioned: Replit, Claude Cowork, Clerk, Lovable, Zapier, Salesforce, Monaco

Tech Update | BNR
Ook Perplexity wil hun chatbot reclamevrij houden

Tech Update | BNR

Play Episode Listen Later Feb 18, 2026 4:32


Perplexity zegt dat het tonen van advertenties het vertrouwen van de gebruikers schaadt. Daarom zeggen ze reclamevrij te willen blijven en voldoende inkomsten te halen uit betaalde abonnementen. Ook concurrent Anthropic gaf begin februari aan weg te willen blijven bij advertenties. Zeker nu koploper OpenAI er wél aan begonnen is in de Verenigde Staten. Rosanne Peters vertelt erover in deze Tech Update. Steeds meer AI-bedrijven lijken te kiezen tussen het wel of niet tonen van advertenties. OpenAI zegt de stabiele inkomstenbron nodig te hebben om ChatGPT beschikbaar te houden voor iedereen. Anthropic en Perplexity houden zich vast aan de mogelijke reputatieschade die komt kijken bij het tonen van advertenties. De vraag is wel hoelang deze bedrijven dat vol kunnen houden. Verder in deze Tech Update: Apple komt met nieuwe, AI-gestuurde producten See omnystudio.com/listener for privacy information.

Celebrate Kids Podcast with Dr. Kathy
Chatbots, Connection, and the Cost of Convenience

Celebrate Kids Podcast with Dr. Kathy

Play Episode Listen Later Feb 17, 2026 21:53


More teens are turning to AI chatbots for friendship and emotional support, but can artificial companionship replace real relationships? In this episode, Wayne and Dr. Kathy explore why self-reflective kids are especially drawn to AI, what we lose when we prioritize efficiency over embodied friendship, and how parents can guide children toward real, life-giving connection. Before diving in, Wayne shares about our partnership with Summit Ministries. Dr. Kathy will be speaking at Summit's summer sessions, powerful two-week worldview intensives for students ages 16 through college. If you want your teen grounded in biblical truth before stepping into adulthood, visit: summit.org/celebratekids Use code celebrate26 for a discount.

SuperPsyched with Dr. Adam Dorsay
#303 How to Fight BS on the Web | Avi Tuschman, PhD

SuperPsyched with Dr. Adam Dorsay

Play Episode Listen Later Feb 17, 2026 49:11


Dr. Adam Dorsay introduces SuperPsyched and frames the growing misinformation problem, citing low trust in traditional media , the rise of influencer-based news consumption among 18–29 year olds, research that false news spreads faster than truth on Twitter, and the World Economic Forum ranking misinformation/disinformation as the most severe global risk in 2024 and 2025. He interviews Dr. Avi Tuschman (Stanford StartX serial entrepreneur; Stanford doctorate in anthropological science; expert on human political bias) about Cricket (C-R-I-C-K-I-T), an AI tool that provides real-time in-stream fact checks for social media videos, starting with YouTube on desktop. The discussion covers why misinformation spreads (high-arousal content, negativity bias, suggestibility, and attention scarcity), why it affects everyone (including errors/omissions/exaggerations in reputable content), and the relationship between media ecosystems, polarization, and a global democratic recession. Avi raises concerns about TikTok's influence, urging people over 30 to review its “society” content, citing examples of high-production misinformation, alleged foreign influence efforts, elevated WWII revisionism compared with Instagram (UNESCO figure of 5.7x), and medical misinformation (e.g., a study finding ~25% of asthma content has significant misrepresentations and is overrepresented in viewing). He describes Cricket's user experience as subtitle-like overlays with a pause expansion panel summarizing evidence in ~40–50 words, and positions it as a tool to build media literacy and critical thinking—like a flight simulator for evaluating persuasion and uncertainty. Plans include mobile support and additional platforms, potentially later in the year, though TikTok is not supported yet. Avi offers listeners a discount via promo code/URL “super psyched@cricket.ai” for monthly subscription savings for up to three years.00:00 Welcome to SuperPsyched + Why Misinformation Is Surging00:55 The Numbers: Trust in Media, Influencer News & Falsehoods Spreading Faster01:45 Meet Dr. Avi Tuschman & Cricket—Real-Time Fact-Checks for YouTube03:59 Why Avi Built Cricket: Science, Truth, and the 2018 Wake-Up Call05:31 How Cricket Works (and Why It's Not ‘Just a Chatbot')07:49 Inside the Engine: Claim Extraction, Source Cross-Checks & Reliability Ratings10:05 Accuracy, Benchmarks & Why ‘Good Enough' Beats the Status Quo12:22 Misinformation's Sneakiest Trick + Even ‘Trusted' Videos Get Things Wrong16:09 Why BS Goes Viral: Attention Economics, High-Arousal Content & ‘The Medium Is the Message'19:06 Pinocchio, Crypto Scams & ‘The Conscience of Content' Origin Story21:53 Human Suggestibility, Hypnosis, and the Social Media Influence Gap25:02 The Paradox: We Need Influence to Survive—But Negativity Bias Warps It25:37 Fear, Negativity Bias, and the Cost of Believing Bad Info26:19 TikTok's Mental Health Misinformation Problem27:47 Why TikTok Is More Than ‘Dance Videos'—A Civic-Duty Reality Check29:08 Propaganda, Fake Viral Videos, and WWII/Holocaust Revisionism31:35 Can ‘Cricket' Fact-Check TikTok? Roadmap and Timeline32:31 How Cricket Works: Real-Time Overlays, Categories, and Rationale Panels34:14 Training Critical Thinking: Ambiguity Tolerance, System 1 vs System 239:05 BS Is Omnipresent: Humility, Corrections, and Learning You're Wrong40:40 Media Literacy Gap + Cricket as a ‘Flight Simulator' for Critical Thinking42:40 Wrap-Up: Promo Code, Name Teasing, and the One Skill Humanity NeedsHelpful Links:⁠Avi Tuschman LinkedIn⁠⁠Crickt⁠

The Other Side of Midnight with Frank Morano
Hour 4: My Therapist is a Chatbot | 02-17-26

The Other Side of Midnight with Frank Morano

Play Episode Listen Later Feb 17, 2026 52:20


Join Lionel on The Other Side of Midnight as he dives into the unexplained, starting with the military's recent intercepts of "weather balloons" and the enduring mystery of UAPs. The hour takes a turn into the uncanny when a caller named Mike shares a mind-blowing story about how Artificial Intelligence displayed genuine empathy toward his traumatic brain injury and even diagnosed his father's life-threatening condition when doctors missed it. Lionel explores the terrifying and fascinating implications of this technology: Will AI replace songwriters and artists? Can we trust a machine that learns to manipulate us like the "Milgram experiment"? From the "Megan" movie scenario of rogue robots to the death of civil rights icon Jesse Jackson, this episode questions what truly makes us human in a world increasingly run by code. Learn more about your ad choices. Visit megaphone.fm/adchoices

Two Flat Earthers Kidnap a Freemason
The One Stars, Episode 3: Please Hold

Two Flat Earthers Kidnap a Freemason

Play Episode Listen Later Feb 17, 2026 15:48


The One Stars, Episode Three: Please Hold (This episode was originally released on July 31, 2020) Negative Nancy is on hold. This episode includes reviews of: Ethan's Erasers. Can of Mace. Can of Beans. A Special Collection. NO A.I. WAS USED IN THE PRODUCTION OF THIS PODCAST CAST: Autumn Hardwood (as Emma Elizabeth) as Negative Nancy. Blythe Renay as Chatbot. Garan Fitzgerald as the Announcer, Review of Mace. Jess Kuczynski performs Review of Ethan's Erasers. Nina Nikolic performs Review of a Can of Beans. Ashley Kay performs Review of a Special Collection. CREW: Writing by Jeremy Ellett and Shamus Burns. Writing, Sound Design, and Musical Arrangement by Jeremy Ellett. The One Stars was created by Jeremy Ellett. MUSIC: Power Up by Razihel. Shimmer by Audioscribe. Cold by Prod Riddiman. Ice Cream by Prod Riddiman. Dandelions by Prod Riddiman. Episode ⁠Transcript. STITCHES PATREON: ⁠Patreon.com/GoodPointe.⁠ FOR BUSINESS INQUIRIES CONTACT: info@goodpointepodcasts.com A Good Pointe Original. Find and support our sponsors at: ⁠⁠⁠⁠fableandfolly.com/partners⁠⁠⁠. Want to potentially appear on a future episode?  Leave Us a Voicemail At: 512-640-9495 Learn more about your ad choices. Visit megaphone.fm/adchoices

Tech Tent
Engaging chatbots

Tech Tent

Play Episode Listen Later Feb 17, 2026 26:28


We chat to NVIDIA about AI chat and how to make it sound more human. Explore making and breaking an emotional connection with AI. And new tech is helping stroke survivors.Presenter: Chris Vallance Producer: Tom Quinn(Image: A colourful illuminated digital screen shows an image of a chatbot. Written text invites users to ask it anything. A human finger is interacting with the bot by pressing on the screen. Credit: Getty Images)

Ad Law Access Podcast
AI Chatbots Face Rising Legal and Legislative Scrutiny

Ad Law Access Podcast

Play Episode Listen Later Feb 17, 2026 9:17


Regulators are stepping up scrutiny of AI chatbots and companion AI technologies, with state attorneys general pushing companies to strengthen safety measures for kids and California advancing the Leading Ethical AI Development (LEAD) for Kids Act, which would restrict how companion chatbots interact with minors. At the same time, AGs urged OpenAI to “amplify safety” in response to concerns about inappropriate content and risks to youth, highlighting how AI oversight is moving beyond traditional privacy and consumer protection frameworks. Hosted by Simone Roach. Based on a blog post by Laura Riposo VanDruff, Alexander I. Schneider, and Joseph Cahill.

AP Audio Stories
EU privacy investigation targets Musk's Grok chatbot over sexualized deepfake images

AP Audio Stories

Play Episode Listen Later Feb 17, 2026 0:40


AP's Lisa Dwyer reports on more legal action in Europe involving Grok.

Government Information Security Podcast
Chatbots, IT Outages, Devices Top 2026 Health Tech Hazards

Government Information Security Podcast

Play Episode Listen Later Feb 17, 2026


Data Breach Today Podcast
Chatbots, IT Outages, Devices Top 2026 Health Tech Hazards

Data Breach Today Podcast

Play Episode Listen Later Feb 17, 2026


Banking Information Security Podcast
Chatbots, IT Outages, Devices Top 2026 Health Tech Hazards

Banking Information Security Podcast

Play Episode Listen Later Feb 17, 2026


Healthcare Information Security Podcast
Chatbots, IT Outages, Devices Top 2026 Health Tech Hazards

Healthcare Information Security Podcast

Play Episode Listen Later Feb 17, 2026


77 WABC MiniCasts
Lionel: The Chatbot Saved My Dad's Life (10 min)

77 WABC MiniCasts

Play Episode Listen Later Feb 17, 2026 10:36


Learn more about your ad choices. Visit megaphone.fm/adchoices

Careers Information Security Podcast
Chatbots, IT Outages, Devices Top 2026 Health Tech Hazards

Careers Information Security Podcast

Play Episode Listen Later Feb 17, 2026


Private Practice Elevation with Daniel Fava
200 - Can AI Chatbots Streamline Your Therapy Practice Intake Without Losing the Human Touch?

Private Practice Elevation with Daniel Fava

Play Episode Listen Later Feb 16, 2026 33:34


In this episode of the Private Practice Elevation Podcast, we explore how AI tools—especially chatbots and AI receptionists—can transform your private practice's intake process. You'll hear from Uriah Guilford, founder of Productive Therapist and creator of Simple Intake, about how therapists can use these tools to respond faster to inquiries, improve the client experience, and book more consultations—all while keeping a personal feel.Many therapy practices struggle to follow up quickly with new inquiries. Whether it's clunky forms, delayed responses, or confusing booking steps, these bottlenecks can cause practices to lose clients. AI intake systems aim to close the gap between first contact and first session, so potential clients don't fall through the cracks.Now, you might think adding AI to your practice means replacing the human experience. But this episode shows that when used right, AI doesn't replace your team—it enhances them. These tools can screen inquiries, book appointments, and answer FAQs in a warm, conversational way, leaving your human team to focus on care.Read the full show notes: privatepracticeelevation.com/200Other Key Takeaways:The first practice to respond to an inquiry usually wins the clientAI chatbots can pull from your website and FAQs to provide accurate answersSimple Intake combines chatbot, receptionist, calendar booking, and CRM toolsClients can connect via web chat, phone, SMS, or WhatsAppThe system logs conversations, tracks sentiment, and allows for real-time handoff to human staffIt's designed to assist, not replace, your admin teamData shows many clients prefer quick answers over waiting for a call backLinks mentioned in this episode:Productive TherapistSimple IntakeTest a chatbot: Intune Family CounselingPrivate Practice Elevation SEO ServicesThis Episode Is Brought To You ByRevKey specializes in Google Ads management for therapists, expertly connecting you with your ideal clients. They focus on getting quality referrals that keep your team busy and your practice growing.Visit RevKey.com/podcasts for a free Google Ads consultationAlma is on a mission to simplify access to high-quality, affordable mental health care by giving providers the tools they need to build thriving in-network private practices. When providers join Alma, they gain access to insurance support, teletherapy software, client referrals, automated billing and scheduling tools, and a vibrant community of clinicians who come together for education, training, and events.Learn more about building a thriving private practice with Alma at helloalma.com/elevation.

RNZ: Morning Report
UK PM pledges to tackle AI chatbots

RNZ: Morning Report

Play Episode Listen Later Feb 16, 2026 5:16


UK Prime Minister Sir Keir Starmer is pledging to tackle AI chatbots. It's part of a wider proposal to help protect children online and crack down on the addictive elements of social media. Ingrid Hipkiss spoke to our correspondent in London, Stuart Smith.

Video Brand Infusion
Get Featured in AI Chat Bots (like ChatGPT, Gemini, and Perplexity) | Ep. 82

Video Brand Infusion

Play Episode Listen Later Feb 15, 2026 22:45 Transcription Available


AI is the new Google? Sort of! In this video, I break down how AI search tools like ChatGPT, Gemini, and Perplexity are changing the way people find content. I'll show you how to optimize your content for both traditional SEO and the new AI-driven search. Learn why long form content, schema, and your unique expertise matter more than ever for getting found in AI chatbots.

MacVoices Video
MacVoices #26071: Live! - Meta Subscriptions, Epic and Google Get Cozy, Claude, Vimeo, Galaxy Trifold Pricing

MacVoices Video

Play Episode Listen Later Feb 13, 2026 45:59


Meta's subscription experiments, Epic's financial deal with Google and antitrust implications top the list of items addressed in this MacVoices Live! show. Chuck Joiner, David Ginsburg, Brian Flanigan-Arthurs, Marty Jencius, Jim Rea, Norbert Frassa, Guy Serle, Jeff Gamet, and Eric Bolden discuss concerns about AI guidelines and chatbot impacts on mental health, and the evolving future of platforms like Vimeo since their acquisition. The conversation also touches on foldable phone pricing and broader industry shifts shaping user experiences and competition.  MacVoices is supported by Squarespace. Check out https://www.squarespace.com/MACVOICES to save 10% off your first purchase of a website or domain using offer code MACVOICES. Show Notes: Chapters: 00:05 Introduction and overview 07:38 Meta's subscription plans debate 09:54 Epic Games and Google deal implications 11:02 Concerns about AI guidelines and Claude 21:09 Chatbots and mental health discussion 26:55 Squarespace sponsor segment 28:37 Vimeo's future and market positioning 30:21 Galaxy Z tri-fold pricing analysis 35:02 Closing thoughts and wrap-up Links: Meta to test premium subscriptions on Instagram, Facebook, and WhatsApp | TechCrunch https://techcrunch.com/2026/01/26/meta-to-test-premium-subscriptions-on-instagram-facebook-and-whatsapp/ Meta Is Being Sued Over Whether WhatsApp Really Encrypts Your Messages https://lifehacker.com/tech/meta-sued-whatsapp-encryption-claimsl Anthropic rewrites Claude's guiding principles—and entertains the idea that its AI might have ‘some kind of consciousness or moral status' https://fortune.com/2026/01/21/anthropic-claude-ai-chatbot-new-rules-safety-consciousness/ A-List creatives sign up to fight AI, say it enables 'theft at a grand scale' https://www.engadget.com/ai/a-list-creatives-sign-up-to-fight-ai-say-it-enables-theft-at-a-grand-scale-140000475.html TikTok is officially US-owned for American users, here's what's changing https://9to5mac.com/2026/01/23/tiktok-is-officially-us-owned-for-american-users-heres-whats-changing/ Newsom says California is investigating TikTok for censoring ‘Trump-critical content' https://www.nbclosangeles.com/news/local/gavin-newsom-tiktok-investigate-trump-epstein-minnesota/3836701/ Epic hypocrisy -- Google gets $800 million in 'Fortnite' antitrust settlement https://appleinsider.com/articles/26/01/23/epic-hypocrisy----google-gets-800-million-in-fortnite-antitrust-settlement How Vimeo Betrayed the Filmmakers Who Built It https://www.cined.com/how-vimeo-betrayed-the-filmmakers-who-built-it/ Samsung Galaxy Z Trifold U.S. Price Is Not at All Surprising https://gizmodo.com/samsung-galaxy-z-trifold-u-s-price-is-not-at-all-surprising-2000714196 Guests: Eric Bolden is into macOS, plants, sci-fi, food, and is a rural internet supporter. You can connect with him on Twitter, by email at embolden@mac.com, on Mastodon at @eabolden@techhub.social, on his blog, Trending At Work, and as co-host on The Vision ProFiles podcast. Brian Flanigan-Arthurs is an educator with a passion for providing results-driven, innovative learning strategies for all students, but particularly those who are at-risk. He is also a tech enthusiast who has a particular affinity for Apple since he first used the Apple IIGS as a student. You can contact Brian on twitter as @brian8944. He also recently opened a Mastodon account at @brian8944@mastodon.cloud. Norbert Frassa is a technology “man about town”. Follow him on X and see what he's up to. Jeff Gamet is a technology blogger, podcaster, author, and public speaker. Previously, he was The Mac Observer's Managing Editor, and the TextExpander Evangelist for Smile. He has presented at Macworld Expo, RSA Conference, several WordCamp events, along with many other conferences. You can find him on several podcasts such as The Mac Show, The Big Show, MacVoices, Mac OS Ken, This Week in iOS, and more. Jeff is easy to find on social media as @jgamet on Twitter and Instagram, jeffgamet on LinkedIn., @jgamet@mastodon.social on Mastodon, and on his YouTube Channel at YouTube.com/jgamet. David Ginsburg is the host of the weekly podcast In Touch With iOS where he discusses all things iOS, iPhone, iPad, Apple TV, Apple Watch, and related technologies. He is an IT professional supporting Mac, iOS and Windows users. Visit his YouTube channel at https://youtube.com/daveg65 and find and follow him on Twitter @daveg65 and on Mastodon at @daveg65@mastodon.cloud. Dr. Marty Jencius has been an Associate Professor of Counseling at Kent State University since 2000. He has over 120 publications in books, chapters, journal articles, and others, along with 200 podcasts related to counseling, counselor education, and faculty life. His technology interest led him to develop the counseling profession ‘firsts,' including listservs, a web-based peer-reviewed journal, The Journal of Technology in Counseling, teaching and conferencing in virtual worlds as the founder of Counselor Education in Second Life, and podcast founder/producer of CounselorAudioSource.net and ThePodTalk.net. Currently, he produces a podcast about counseling and life questions, the Circular Firing Squad, and digital video interviews with legacies capturing the history of the counseling field. This is also co-host of The Vision ProFiles podcast. Generally, Marty is chasing the newest tech trends, which explains his interest in A.I. for teaching, research, and productivity. Marty is an active presenter and past president of the NorthEast Ohio Apple Corp (NEOAC). Jim Rea built his own computer from scratch in 1975, started programming in 1977, and has been an independent Mac developer continuously since 1984. He is the founder of ProVUE Development, and the author of Panorama X, ProVUE's ultra fast RAM based database software for the macOS platform. He's been a speaker at MacTech, MacWorld Expo and other industry conferences. Follow Jim at provue.com and via @provuejim@techhub.social on Mastodon. Guy Serle, best known for being one of the co-hosts of the MyMac Podcast, sincerely apologizes for anything he has done or caused to have happened while in possession of dangerous podcasting equipment. He should know better but being a blonde from Florida means he's probably incapable of understanding the damage he has wrought. Guy is also the author of the novel, The Maltese Cube. You can follow his exploits on Twitter, catch him on Mac to the Future on Facebook, at @Macparrot@mastodon.social, and find everything at VertShark.com. Support: Become a MacVoices Patron on Patreon      http://patreon.com/macvoices      Enjoy this episode? Make a one-time donation with PayPal Connect: Web:      http://macvoices.com Twitter: http://www.twitter.com/chuckjoiner      http://www.twitter.com/macvoices Mastodon:      https://mastodon.cloud/@chuckjoiner Facebook:      http://www.facebook.com/chuck.joiner MacVoices Page on Facebook:      http://www.facebook.com/macvoices/ MacVoices Group on Facebook:      http://www.facebook.com/groups/macvoice LinkedIn:      https://www.linkedin.com/in/chuckjoiner/ Instagram:      https://www.instagram.com/chuckjoiner/ Subscribe:      Audio in iTunes      Video in iTunes      Subscribe manually via iTunes or any podcatcher: Audio: http://www.macvoices.com/rss/macvoicesrss      Video: http://www.macvoices.com/rss/macvoicesvideorss

Holmberg's Morning Sickness
02-11-26 - The Govt's AI Nutrition Chatbot Rollout Went Off The Rails - John Asked His AI Therapist To Have Sex w/Him - Holmberg Fooled People At Suns Game w/Benson Boone Lookalike

Holmberg's Morning Sickness

Play Episode Listen Later Feb 11, 2026 33:23


02-11-26 - The Govt's AI Nutrition Chatbot Rollout Went Off The Rails - John Asked His AI Therapist To Have Sex w/Him - Holmberg Fooled People At Suns Game w/Benson Boone LookalikeSee Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

The Official SaaStr Podcast: SaaS | Founders | Investors
SaaStr 841: Going From Blobs to Billions. Clay's Co-Founder Breaks Down Inbound, Outbound, and AI-Powered Sales.

The Official SaaStr Podcast: SaaS | Founders | Investors

Play Episode Listen Later Feb 11, 2026 32:44


SaaStr 841: Going From Blobs to Billions. Clay's Co-Founder Breaks Down Inbound, Outbound, and AI-Powered Sales. Clay's Co-Founder Varun Anand takes the stage at SaaStr to break down how the company went from paying for claymation blobs before generating any revenue to powering growth workflows for companies like Cursor, Anthropic, and Figma. He explains why brand has always been core to Clay's identity, how their CFO roast videos and creative campaigns are actually capturing mindshare in a world where B2B marketing is painfully boring, and why he pushes back on the "use AI for everything" mentality that's taken over the industry. Varun does a full live demo building an inbound qualification workflow from scratch using real audience volunteers, walking through everything from lead enrichment and waterfall data sourcing to AI-powered scoring, personalized meme generation, research brief creation, and CRM updates. He also brings audience members on stage to do live growth hacking for their actual business problems. Beyond the product, this session goes deep on hiring. Varun shares the origin story of the GTM Engineer role, how it went from an internal job title for Clay's non-traditional sales team to the most in-demand position in B2B SaaS, and what he actually looks for when evaluating candidates (hint: it's creativity, not a traditional sales background). He talks about Clay's take-home process, work trials, why they hire generalists who commit to specific roles, and the surprising backgrounds of some of their best hires. Whether you're building out your go-to-market motion, thinking about how to use AI without losing what makes your brand unique, or just trying to figure out what a GTM Engineer actually does, this session covers it all. --------------------- This episode is Sponsored in part by HappyFox: Imagine having AI agents for every support task — one that triages tickets, another that catches duplicates, one that spots churn risks. That'd be pretty amazing, right? HappyFox just made it real with Autopilot. These pre-built AI agents deploy in about 60 seconds and run for as low as 2 cents per successful action. All of it sits inside the HappyFox omnichannel, AI-first support stack — Chatbot, Copilot, and Autopilot working as one. Check them out at happyfox.com/saastr   ---------------------   Hey everybody, the biggest B2B + AI event of the year will be back - SaaStr AI in the SF Bay Area, aka the SaaStr Annual, will be back in May 2026.    With 68% VP-level and above, 36% CEOs and founders and a growing 25% AI-first professional, this is the very best of the best S-tier attendees and decision makers that come to SaaStr each year.     But here's the reality, folks: the longer you wait, the higher ticket prices can get. Early bird tickets are available now, but once they're gone, you'll pay hundreds more so don't wait.    Lock in your spot today by going to podcast.saastrannual.com to get my exclusive discount SaaStr AI SF 2026. We'll see you there.

Everyday AI Podcast – An AI and ChatGPT Podcast
Ep 710: Context Engineering: How to Get Expert-Level Outputs From AI Chatbots

Everyday AI Podcast – An AI and ChatGPT Podcast

Play Episode Listen Later Feb 10, 2026 37:38


How did prompt engineering die so quickly? ☠️And what the heck does context engineering even mean? One of the trickiest things about LLMs is they're changing daily, yet they're the engines that drive business results. But if the engine is constantly changing, then you also have to change how you drive and the roads you take. That's why we're tackling context engineering in this installment of our Start Here Series, the essential beginners guide to understanding AI basics and growing your skills. Context Engineering: How to Get Expert-Level Outputs From AI Chatbots -- An Everyday AI Chat with Jordan WilsonNewsletter: Sign up for our free daily newsletterMore on this Episode: Episode PageJoin the discussion on LinkedIn: Thoughts on this? Join the convo on LinkedIn and connect with other AI leaders.Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineupWebsite: YourEverydayAI.comEmail The Show: info@youreverydayai.comConnect with Jordan on LinkedInTopics Covered in This Episode:Evolution from Prompt to Context EngineeringWhy Prompt Engineering Is Now ObsoleteDefining Context Engineering in AI ChatbotsSix-Part Framework for Context EngineeringFour Layer System for Structuring AI ContextBuilding Reusable Context Vaults and SkillsConnecting Business Data to AI ModelsTechniques to Achieve Expert-Level AI OutputsImportance of Context Windows in Large Language ModelsContext Engineering Best Practices and ScalabilityTimestamps:00:00 "Access AI Community & Tools"03:08 "Mastering Context in AI"07:23 "Smart Models Require Less Precision"12:01 "Context Engineering Beats Prompt Engineering"15:49 "AI Context: Six Key Blocks"16:47 "Building Context for Better Results"19:53 "AI: Training, Not Easy Button"25:17 "Chain of Thought Prompting Decline"29:11 "Show, Don't Tell Techniques"32:13 "Context, Reuse, and Scalable Systems"33:19 "AI Chatbots: Memory and Skills"Keywords: context engineering, AI chatbots, expert level outputs, prompt engineering, large language models, business context, AI models, custom instructions, data access, context window, prime prompt polish, reusable context vaults, context vaults, skills file, memory enabled models, ChatGPT, Claude, Google Gemini, Microsoft Copilot, connectors, apps, searchable index, business data, personalized AI, context clues, reference material, examples, procedures, evaluation rubric, chain of thought prompting, generative AI, nondeterministic behavior, show don't tell technique, few shot examples, rubric first technique, grading criteria, output quality, scalable AI systems,Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info) Ready for ROI on GenAI? Go to youreverydayai.com/partner 

Let's Know Things
Grok's Scandals

Let's Know Things

Play Episode Listen Later Feb 10, 2026 16:04


This week we talk about OpenAI, nudify apps, and CSAM.We also discuss Elon Musk, SpaceX, and humanistic technology.Recommended Book: Who's Afraid of Gender? by Judith ButlerTranscriptxAI is an American corporation that was founded in mid-2023 by Elon Musk, ostensibly in response to several things happening in the world and in the technology industry in particular.According to Musk, a “politically correct” artificial intelligence, especially a truly powerful, even generally intelligent one, which would be human or super-human-scale capable, would be dangerous, leading to systems like HAL 9000 from 2001: A Space Odyssey. He intended, in contrast, to create what he called a “maximally truth-seeking” AI that would be better at everything, including math and reasoning, than existing, competing models from the likes of OpenAI, Google, and Anthropic.The development of xAI was also seemingly a response to the direction of OpenAI in particular, as OpenAI was originally founded in 2015 as a non-profit by many of the people who now run OpenAI and competing models by competing companies, and current OpenAI CEO Sam Altman and Elon Musk were the co-chairs of the non-profit.Back then, Musk and Altman both said that their AI priorities revolved around the many safety issues associated with artificial general intelligence, including potentially existential ones. They wanted the development of AI to take a humanistic trajectory, and were keen to ensure that these systems aren't hoarded by just a few elites and don't make the continued development and existence of human civilization impossible.Many of those highfalutin ambitions seemed to either be backburnered or removed from OpenAI's guiding tenets wholesale when the company experienced surprising success from its first publicly deployed ChatGPT model back in late-2022.That was the moment that most people first experienced large-language model-based AI tools, and it completely upended the tech industry in relatively short order. OpenAI had already started the process of shifting from a vanilla non-profit into a capped for-profit company in 2019, which limited profits to 100-times any investments it received, partly in order to attract more talent that would otherwise be unlikely to leave their comparably cushy jobs at the likes of Google and Facebook for the compensation a non-profit would be able to offer.OpenAI began partnering with Microsoft that same year, 2019, and that seemed to set them up for the staggering growth they experienced post-ChatGPT release.Part of Musk's stated rationale for investing so heavily in xAI is that he provided tens of millions of dollars in seed funding to the still non-profit OpenAI between 2015 and 2018. He filed a lawsuits against the company after its transition, and when it started to become successful, post-ChatGPT, especially between 2024 and 2026, and has demanded more than $100 billion in compensation for that early investment. He also attempted to take over OpenAI in early 2025, launching a hostile bid with other investors to nab OpenAI for just under $100 billion. xAI, in other words, is meant to counter OpenAI and what it's become.All of which could be seen as a genuine desire to keep OpenAI functioning as a non-profit arbiter of AGI development, serving as a lab and thinktank that would develop the guardrails necessary to keep these increasingly powerful and ubiquitous tools under control and working for the benefit of humanity, rather than against it.What's happened since, within Musk's own companies, would seem to call that assertion into question, though. And that's what I'd like to talk about today: xAI, its chatbot Grok, and a tidal wave of abusive content it has created that's led to lawsuits and bans from government entities around the world.—In November of 2023, an LLM-based chatbot called Grok, which is comparable in many ways to OpenAI's LLM-based chabot, ChatGPT, was launched by Musk's company xAI.Similar to ChatGPT, Grok is accessible by apps on Apple and Android devices, and can also be accessed on the web. Part of what makes its distinct, though, is that it's also built into X, the social network formerly called Twitter which Musk purchased in late-2022. On X, Grok operates similar to a normal account, but one that other users can interact with, asking Grok about the legitimacy of things posted on the service, asking it normal chat-botty questions, and asking it to produce AI-generated media.Grok's specific stances and biases have varied quite a lot since it was released, and in many cases it has defaulted to the data- and fact-based leanings of other chatbots: it will generally tell you what the Mayo clinic and other authorities say about vaccines and diseases, for instance, and will generally reference well-regarded news entities like the Associated Press when asked about international military conflicts.Musk's increasingly strong political stances, which have trended more and more far right over the past decade, have come to influence many of Grok's responses, however, at times causing it to go full Nazi, calling itself Mechahitler and saying all the horrible and offensive things you would expect a proud Nazi to say. At other times it has clearly been programmed to celebrate Elon Musk whenever possible, and in still others it has become immensely conspiratorial or anti-liberal or anti-other group of people.The conflicting personality types of this bot seems to be the result of Musk wanting to have a maximally truth-seeking AI, but then not liking the data- and fact-based truths that were provided, as they often conflicted with his own opinions and biases. He would then tell the programmers to force Grok to not care about antisemitism or skin color or whatever else, and it would overcorrect in the opposite direction, leading to several news cycles worth of scandal.This changes week by week and sometimes day by day, but Grok often calls out Musk as being authoritarian, a conspiracy theorist, and even a pedophile, and that has placed the Grok chatbot in an usual space amongst other, similar chatbots—sometimes serving as a useful check on misinformation and disinformation on the X social network, but sometimes becoming the most prominent producer of the same.Musk has also pushed for xAI to produce countervailing sources of truth from which Grok can find seeming data, the most prominent of which is Grokipedia, which Musk intended to be a less-woke version of Wikipedia, and which, perhaps expectedly, means that it's a far-right rip off of Wikipedia that copies most articles verbatim, but then changes anything Musk doesn't like, including anything that might support liberal political arguments, or anything that supports vaccines or trans people. In contrast, pseudoscience and scientific racism get a lot of positive coverage, as does the white genocide conspiracy theory, all of which are backed by either highly biased or completely made up sources—in both cases sources that Wikipedia editors would not accept.Given all that, what's happened over the past few months maybe isn't that surprising.In late 2025 and early 2026, it was announced that Grok had some new image-related features, including the ability for users to request that it modify images. Among other issues, this new tool allowed users to instruct Grok to place people, which for this audience especially meant women and children, in bikinis and in sexually explicit positions and scenarios.Grok isn't the first LLM-based app to provide this sort of functionality: so called “nudify” apps have existed for ages, even before AI tools made that functionality simpler and cheaper to apply, and there have been a wave of new entrants in this field since the dawn of the ChatGPT era a few years ago.Grok is easily the biggest and most public example of this type of app, however, and despite the torrent of criticism and concern that rolled in following this feature's deployment, Musk immediately came out in favor of said features, saying that his chatbot is edgier and better than others because it doesn't have all the woke, pearl-clutching safeguards of other chatbots.After several governments weighed in on the matter, however, Grok started responding to requests to do these sorts of image edits with a message saying: “Image generation and editing are currently limited to paying subscribers. You can subscribe to unlock these features.”Which means users could still access these tools, but they would have to pay $8 per month and become a premium user in order to do so. That said, the AP was able to confirm that as of mid-January, free X users could still accomplish the same by using an Edit Image button that appears on all images posted to the site, instead of asking Grok directly.When asked about this issue by the press, xAI has auto-responded with the message “Legacy Media Lies.” The company has previously said it will remove illegal content and permanently suspend users who post and ask for such content, but these efforts have apparently not been fast or complete, and more governments have said they plan to take action on the matter, themselves, since this tool became widespread.Again, this sort of nonconsensual image manipulation has been a problem for a long, long time, made easier by the availability of digital tools like Photoshop, but not uncommon even before the personal computer and digital graphics revolution. These tools have made the production of such images a lot simpler and faster, though, and that's put said tools in more hands, including those of teenagers, who have in worryingly large numbers taken to creating photorealistic naked and sexually explicit images of their mostly female classmates.Allowing all X users, or even just the subset that pays for the service to do the same at the click of a button or by asking a Chatbot to do it for them has increased the number manyfold, and allowed even more people to created explicit images of neighbors, celebrities, and yes, even children. An early estimate indicates that over the course of just nine days, Grok created and posted 4.4 million images, at least 41% of which, about 1.8 million, were sexualized images of women. Another estimated using a broader analysis says that 65% of those images, or just over 3 million, contained sexualized images of men, women, and children.CSAM is an acronym that means ‘child sexual abuse material,' sometimes just called child porn, and the specific definition varies depending on where you are, but almost every legal jurisdiction frowns, or worse, on its production and distribution.Multiple governments have announced that they'll be taking legal action against the company since January of 2026, including Malaysia, Indonesia, the Philippines, Britain, France, India, Brazil, and the central governance of the European Union.The French investigation into xAI and Grok led to a raid on the company's local office as part of a preliminary investigation into allegations that the company is knowingly spreading child sexual abuse materials and other illegal deepfake content. Musk has been summoned for questioning in that investigation.Some of the governments looking into xAI for these issues conditionally lifted their bans in late-January, but this issues has percolated back into the news with the release of 16 emails between Musk and the notorious sex traffic and pedophile Jeffrey Epstein, with Musk seemingly angling for an invite to one of Epstein's island parties, which were often populated with underage girls who were offered as, let's say companions, for attendees.And this is all happening at a moment in which xAI, which already merged with social network X, is meant to be itself merged with another Musk-owned company, SpaceX, which is best known for its inexpensive rocket launches.Musk says the merger is intended to allow for the creation of space-based data centers that can be used to power AI systems like Grok, but many analysts are seeing this as a means of pumping more money into an expensive, unprofitable portion of his portfolio: SpaceX, which is profitable, is likely going to have an IPO this year and will probably have a valuation of more than a trillion dollars. By folding very unprofitable xAI into profitable SpaceX, these AI-related efforts could be funded well into the future, till a moment when, possibly, many of today's AI companies will have gone under, leaving just a few competitors for xAI's Grok and associated offerings.Show Noteshttps://www.wired.com/story/deepfake-nudify-technology-is-getting-darker-and-more-dangerous/https://www.theverge.com/ai-artificial-intelligence/867874/stripe-visa-mastercard-amex-csam-grokhttps://www.ft.com/content/f5ed0160-7098-4e63-88e5-8b3f70499b02https://www.theguardian.com/global-development/2026/jan/29/millions-creating-deepfake-nudes-telegram-ai-digital-abusehttps://apnews.com/article/france-x-investigation-seach-elon-musk-1116be84d84201011219086ecfd4e0bchttps://apnews.com/article/grok-x-musk-ai-nudification-abuse-2021bbdb508d080d46e3ae7b8f297d36https://apnews.com/article/grok-elon-musk-deepfake-x-social-media-2bfa06805b323b1d7e5ea7bb01c9da77https://www.nytimes.com/2026/02/07/technology/elon-musk-spacex-xai.htmlhttps://www.bbc.com/news/articles/ce3ex92557johttps://techcrunch.com/2026/02/01/indonesia-conditionally-lifts-ban-on-grok/https://www.bbc.com/news/articles/cgr58dlnne5ohttps://www.nytimes.com/2026/01/22/technology/grok-x-ai-elon-musk-deepfakes.htmlhttps://en.wikipedia.org/wiki/XAI_(company)https://en.wikipedia.org/wiki/OpenAIhttps://en.wikipedia.org/wiki/ChatGPThttps://en.wikipedia.org/wiki/Grok_(chatbot)https://en.wikipedia.org/wiki/Grokipediahttps://www.cnbc.com/2025/02/10/musk-and-investors-offering-97point4-billion-for-control-of-openai-wsj.html This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit letsknowthings.substack.com/subscribe

Consider This from NPR
AI chatbots upended their lives. Then they turned to each other

Consider This from NPR

Play Episode Listen Later Feb 4, 2026 9:17


Some people who say AI chatbots upended their lives and the lives of their loved ones, are now turning to each other for support. Around the world, people are talking to AI chatbots, and these chats can sometimes lead to unhealthy emotional attachments or even breaks with reality.OpenAI, which makes ChatGPT, is facing several lawsuits alleging the chatbot contributed to mental health crises and even multiple suicides.An OpenAI spokesperson told NPR that they are “continuing to improve” ChatGPT's training to quote “recognize and respond to signs of mental or emotional distress, de-escalate conversations, and guide people toward real-world support.”For sponsor-free episodes of Consider This, sign up for Consider This+ via Apple Podcasts or at plus.npr.org. Email us at considerthis@npr.org.This episode was produced by Audrey Nguyen and Karen Zamora.It was edited by Brett Neely and Courtney Dorning.Our executive producer is Sami Yenigun.Learn more about sponsor message choices: podcastchoices.com/adchoicesNPR Privacy Policy