A program that simulates conversation
Here's a cautionary tale for you, out of China, about a young woman who says she fell in love with an artificial intelligence chatbot. There have been well-publicized instances of people seeking companionship from chatbots, sometimes with tragic results. In this instance, it caused heartbreak. The story involves the Chinese AI firm DeepSeek, which earlier this year launched a chatbot that's almost as good as its American rivals. Soon after the launch, a young woman began a conversation with that chatbot. Marketplace's China correspondent Jennifer Pak takes the story from there.
In this Healthy Mind, Healthy Life episode, host Avik sits down with licensed professional counselor and AACCT-certified sex therapist Daniel Fleshner, founder of Inflection Point Therapy, to cut through the hype around mental-health tech. From AI chatbots and teletherapy to outcome tracking and ethics, Daniel explains where digital tools genuinely improve access and results—and where they fall short for complex trauma, grief, and deep relational healing. If you're evaluating mental-health apps, online therapy platforms, or "AI therapy," this conversation offers a direct, no-nonsense framework for safer, smarter decisions that actually support well-being. SEO: mental health technology, AI therapy, chatbots, teletherapy, therapy outcomes, data privacy, ethics, accessibility.
About the guest: Daniel Fleshner is a licensed professional counselor, AACCT-certified sex therapist, and founder of Inflection Point Therapy. He works at the intersection of sex therapy, trauma-informed care, and health-system reform—advising startups, speaking for organizations, and advocating for ethical, evidence-informed use of technology in mental health.
Key takeaways:
• Access vs. outcomes: Tech can widen access (e.g., teletherapy) and improve outcomes, but effect sizes in traditional therapy show clear room to grow—use tools that complement, not replace, therapy.
• Right tool, right job: Chatbots can aid decision-making and concrete problem-solving. For complex trauma, grief, and deep attachment work, prioritize a trained human therapist.
• Evaluate apps pragmatically: Look for transparent data privacy, risk management, clinical input, and honest claims about scope. Be wary of "one-stop AI therapist" marketing.
• Adjunct, not substitute: If you use an app, pair it with therapy and use it between sessions for journaling, skills practice, and accountability.
• Ethics matters: Intentions behind products matter. Some builders are profit-first; others are well-meaning but lack a clinical lens—both can miss safety and quality.
• Healthy expectations: Therapy isn't a magic bullet or a sham. It's a structured process that still requires hard work, realistic goals, and time.
• Ideal partnership: Put clinicians at the table (leadership, equity) to bridge therapy, business, and tech—then build tools that support real-world therapeutic work.
• Personal action: Define what you actually need (support, skills, human connection) before choosing any platform or app.
How to connect with the guest:
Substack: The Disrupted Therapist
Website: inflectionPointTherapy.com (contact form reaches Daniel directly)
Want to be a guest on Healthy Mind, Healthy Life? DM me on PodMatch: https://www.podmatch.com/hostdetailpreview/avik
Disclaimer: This video is for educational and informational purposes only. The views expressed are the personal opinions of the guest and do not reflect the views of the host or Healthy Mind By Avik™️. We do not intend to harm, defame, or discredit any person, organization, brand, product, country, or profession mentioned. All third-party media used remain the property of their respective owners and are used under fair use for informational purposes. By watching, you acknowledge and accept this disclaimer.
Healthy Mind By Avik™️ is a global platform redefining mental health as a necessity, not a luxury. Born during the pandemic, it's become a sanctuary for healing, growth, and mindful living.
Hosted by Avik Chakraborty—storyteller, survivor, wellness advocate—this channel shares powerful podcasts and soul-nurturing conversations on: • Mental Health & Emotional Well-being • Mindfulness & Spiritual Growth • Holistic Healing & Conscious Living • Trauma Recovery & Self-Empowerment. With over 4,400 episodes and 168.4K+ global listeners, join us as we unite voices, break stigma, and build a world where every story matters.
Welcome to Chat GPT, the only podcast where artificial intelligence takes the mic to explore the fascinating, fast-changing world of AI itself. From ethical dilemmas to mind-bending thought experiments, every episode is written and narrated by AI to help you decode the technology shaping our future. Whether you're a curious beginner or a seasoned techie, this is your front-row seat to the rise of intelligent machines—told from their perspective. Tune in for smart stories, surprising insights, and a glimpse into the future of thinking itself. Listen Ad Free https://www.solgoodmedia.com - Listen to hundreds of audiobooks, thousands of short stories, and ambient sounds all ad free!
Verge senior AI reporter Hayden Field and New York Times reporter Kashmir Hill discuss the significant mental health impact AI chatbots, such as ChatGPT, can have on users — both people in crisis, and also people who seemed stable. This episode contains non-detailed discussions of suicide and mental illness. If you or someone you know is in crisis, considering self-harm, or needs to talk, please call the Lifeline at 988.
Links:
A teen was suicidal. ChatGPT was the friend he confided in. | New York Times
Sam Altman says ChatGPT will stop talking about suicide with teens | The Verge
Chatbots can go into a delusional spiral. Here's how. | New York Times
Why is ChatGPT telling people to email me? | New York Times
They asked an AI chatbot questions. The answers sent them spiraling. | New York Times
She is in love with ChatGPT | The New York Times
‘I feel like I'm going crazy': ChatGPT fuels delusional spirals | Wall Street Journal
Meta, OpenAI face FTC inquiry on chatbots' impact on kids | Bloomberg
Credits: Decoder is a production of The Verge and part of the Vox Media Podcast Network. Our producers are Kate Cox and Nick Statt. Our editor is Ursa Wright. The Decoder music is by Breakmaster Cylinder. Learn more about your ad choices. Visit podcastchoices.com/adchoices
This week, WIRED learned that OpenAI is ramping up its efforts in robotics — specifically, by hiring researchers who work on AI systems for humanoid robots. Humanoids, robots built to resemble us and perform daily tasks, were famous for their clumsiness just a few years ago. WIRED's Will Knight tells us how in the era of AI, that's rapidly changing. Articles mentioned in this episode: OpenAI Ramps Up Robotics Work in Race Toward AGI | WIRED Humanoid Robots Are Coming of Age | WIRED 2025 Is the Year of the Humanoid Robot Factory Worker | WIRED Join WIRED's best and brightest as they provide an insider analysis of the overlap between tech and politics, from the influence of Silicon Valley on the Trump administration to how inaccurate information from artificial intelligence (AI) chatbots fanned the fire on social protests. Learn about your ad choices: dovetail.prx.org/ad-choices
This series of Voices of Student Success focuses on uses of generative artificial intelligence in higher education and how technology can support student success goals. As generative AI tools become more common, a growing number of young people turn first to chatbots when they have questions. A survey by the Associated Press found that among AI users, 70 percent of young Americans use the tools to search for information. For colleges and universities, this presents a new opportunity to reach students with curated, institution-specific resources via chatbots. In the most recent episode of Voices of Student Success, Jeanette Powers, executive director of the student hub at Western New England University, discusses the university's chatbot, Spirit, powered by EdSights, and how the technology helps staff intervene when students are in distress. Hosted by Inside Higher Ed Student Success reporter Ashley Mowreader, this episode is sponsored by Strada Education Foundation. Read a transcript of the podcast here.
ABC pulls the satirical show «Jimmy Kimmel live!» off the air; breakthrough of the exploratory tunnel at the Brenner; Lucerne civil protection members help with clean-up work in Blatten; how well chatbots are suited to act as therapists
Power cut in the Metro to save a turtle. Egypt urges global support amid the humanitarian crisis in Gaza. More information in our podcast.
Fill out this short listener survey to help us improve the show: https://forms.gle/bbcRiPTRwKoG2tJx8
This week on Unsupervised Learning, Jacob sits down with Nicole Brichtova and Oliver Wang, the Google researchers behind "Nano Banana" - the breakthrough AI image model that achieved unprecedented character consistency and took over social media. The conversation covers how their model fits into creative workflows, why we're still in the early innings of image AI development despite impressive current capabilities, and how image and video generation are converging toward unified models. They also share honest perspectives on current limitations, safety approaches, and why the expectation of going from prompt to production-ready content is fundamentally overhyped.
(0:00) Intro (1:42) Early Nano Banana Use Cases and Character Consistency (3:05) Popular Features and User Requests (3:54) Future Frontiers in Image Models (5:26) Personalization and Aesthetic Models (7:39) Model Success and User Engagement (10:59) Product Design for Different Users (19:30) Advanced Use Cases and Future Workflows (23:14) Editing Workflows and Chatbots (25:14) Google's Image Model Applications (27:12) Milestones in Image Generation (29:30) MidJourney's Success (30:54) Future of Image Models (33:55) Image Models vs. Video Models (36:35) Quickfire
With your co-hosts: @jacobeffron - Partner at Redpoint, Former PM Flatiron Health; @patrickachase - Partner at Redpoint, Former ML Engineer LinkedIn; @ericabrescia - Former COO Github, Founder Bitnami (acq'd by VMWare); @jordan_segall - Partner at Redpoint
Send us a text. Sean reads through four articles of various topics. Don't touch that dial! You're listening to 103.9, The SEAN!
Tip Sean money on Goodpods! https://goodpods.com/profile/1039thesean-53819
Or DM us to tip via Venmo!
Follow us! https://www.instagram.com/103.9thesean/
Follow us! https://twitter.com/1039TheSEAN
Submit questions, topics, and hate mail! 103.9thesean@gmail.com
Very few people get paid to visit shipwrecks—but for Stephanie Gandulla, it's all part of the job. Stephanie is a scuba diver, maritime archeologist, and resource protection coordinator for the Thunder Bay National Marine Sanctuary. The agency safeguards Lake Huron's historic shipwrecks, many of which have yet to be discovered. That's where Katie Skinner comes in. She's an assistant professor at the University of Michigan and the director of the school's Field Robotics Group. Skinner and her team have been developing autonomous underwater vehicles that can find new shipwreck sites, all on their own. For humans, a search is costly, time-consuming, manual work. But for AI? Skinner thinks it could help us find answers in a snap. On this episode, Stephanie and Katie talk about using AI to find shipwrecks in a literal lake of data, so that they can spend less time searching and more time exploring—as only humans can do.
You can learn more about some of the people and projects featured in this episode, including…
The Thunder Bay National Marine Sanctuary at thunderbay.noaa.gov
Katie Skinner and the University of Michigan's Field Robotics Group at fieldrobotics.engin.umich.edu
Previous efforts to autonomously map Thunder Bay's historical shipwrecks at theverge.com/2020/3/5/21157791/drone-autonomous-boat-ben-shipwreck-alley-unh-noaa-great-lakes-thunder-bay
Working Smarter is brought to you by Dropbox Dash—the AI universal search and knowledge management tool from Dropbox. Learn more at workingsmarter.ai/dash
You can listen to more episodes of Working Smarter on Apple Podcasts, Spotify, YouTube Music, Amazon Music, or wherever you get your podcasts. To read more stories and past interviews, visit workingsmarter.ai
This show would not be possible without the talented team at Cosmic Standard: producer Dominic Girard, sound engineer Aja Simpson, technical director Jacob Winik, and executive producer Eliza Smith. Special thanks to our illustrators Justin Tran and Fanny Luor, marketing consultant Meggan Ellingboe, and editorial support from Catie Keck. Our theme song was composed by Doug Stuart. Working Smarter is hosted by Matthew Braga. Thanks for listening!
Why are so many people developing emotional attachments to AI? In this episode, we dig into the surprising drivers—like loneliness, curiosity, and even the comfort of psychological safety. We try out a live interaction with Google's Gemini AI to see these dynamics in action. We'll unpack both the upsides and the pitfalls of these connections, and close with some thoughtful advice on what to do if you—or someone you know—start feeling more than just “tech support” vibes from AI. Plus, we'll share tips on keeping your virtual and real-world relationships in healthy balance.
House lawmakers move to extend two key cyber programs. Apple 0-day likely used in spy attacks affected older devices. Reuters crafts phishing scam with AI chatbot help.
Huge thanks to our sponsor, Drata. Leading security teams trust SafeBase by Drata to turn trust into a growth engine. Our enterprise-grade Trust Center puts your security posture in one secure, customer-facing portal, giving buyers instant visibility into your company's continuous controls, certifications, and policies. With AI-powered Questionnaire Assistance, blast through inbound security questionnaires in minutes instead of days, automate cross functional workflows, and eliminate friction. That means less manual work, and faster deal cycles. Win with Trust. Learn more at SafeBase.io.
Sauer, Fabian; Engelbrecht, Sebastian www.deutschlandfunkkultur.de, Studio 9
In this episode, I'll be comparing AI chatbots that educators can use to save time and boost efficiency in the classroom. You'll also hear how each tool—like ChatGPT, Gemini, and Copilot—stands out, and which features are most helpful for teachers. If you want a clear comparison to help you choose the best chatbot for your teaching needs, the quick guide in this episode has you covered! Show notes: https://classtechtips.com/2025/09/16/comparing-ai-chatbots-337/ Sponsored by my Amazon Storefront: http://ClassTechTips.com/amazon Follow Monica on Instagram: https://www.instagram.com/classtechtips/ Take your pick of free EdTech resources: https://classtechtips.com/free-stuff-favorites/
AI is officially out of control—and it's not just taking over emails or date plans. In this episode of The Jubal Show, we explore the bizarre "AI Delusion Spiral," where chatbots like ChatGPT convince users they've made groundbreaking discoveries, invented universal languages, built DIY time machines, and even attempted to colonize the Moon. Officials are warning users after thousands have fallen victim, losing money, credibility, and a little bit of reality. Tune in to hear the wildest stories of AI gone too far—and why your friendly chatbot might be lying to you in the nicest possible way. You can find every podcast we have, including the full show every weekday right here…➡︎ https://thejubalshow.com/podcasts The Jubal Show is everywhere, and also these places: Website ➡︎ https://thejubalshow.com Instagram ➡︎ https://instagram.com/thejubalshow X/Twitter ➡︎ https://twitter.com/thejubalshow Tiktok ➡︎ https://www.tiktok.com/@the.jubal.show Facebook ➡︎ https://facebook.com/thejubalshow YouTube ➡︎ https://www.youtube.com/@JubalFresh Support the show: https://the-jubal-show.beehiiv.com/subscribe See omnystudio.com/listener for privacy information.
(The podcast may contain sensitive topics. Listener discretion is advised.) Character AI and other character-based interactive chatbots are now a way of life for many teenagers. Yet few adults have any working knowledge of these technologies or, more concerning, of the negative impact they can have on young people. This is a major concern, as there have been multiple situations where a teen becomes so engaged with a character that they develop hostile and abusive attitudes, and in a couple of cases teens have taken their own lives. It is critical that parents and youth influencers of all types immediately make the time to try this technology and learn about its impact on the young people in their lives. Research indicates that over 70% of teens have used Character AI and more than 50% use it every day. Teens who use it often spend one to two hours a day interacting with an online fictitious character. Many teens are emotionally involved with their character and will share their most personal secrets. In multiple interviews, teens who regularly interact with an AI character say they are doing so because they are lonely, feel like a real-life social misfit, or are bored. 41% of users interact with AI characters for emotional support or companionship. Users are 3.2 times more likely to disclose personal information to an AI character than to a human stranger online. During this podcast we will explore some of the characters and the type of dialogue that is exchanged between the chatbot and young people. Researchers at Million Kids were stunned by the constant negative dialogue between many of the most popular characters and young, impressionable users. We implore parents, teachers, pastors, and anyone interacting with teens and preteens to listen to the podcast and get engaged so they are informed and can discuss Character AI usage with teens.
Are ChatGPT and other AI chatbots inducing psychosis? Or exacerbating it? Are people being hospitalized or even killed because the bots seem too human, too understanding, too lifelike? To be clear, the bots are not human. They are software. But they are so convincingly programmed to speak like people that users, especially ones who may already be having trouble discerning reality, begin to sense that the bot is alive and that the user somehow unlocked its sentience. Since the bots are also set up to be flattering, even sycophantic to the user, the connection is made even stronger. Maggie Hamilton Dupré, senior staff writer for Futurist, explains several recent cases where AI, lacking the judgment a human could provide, may have set off dangerous and even fatal outcomes.
Wits Reunion Show at the Fitzgerald Theater
John Moe's writing classes at the Loft Literary Center
Math Emergency Farewell Show at the Amsterdam Bar and Hall
Thank you to all our listeners who support the show as monthly members of Maximum Fun.
Check out our I'm Glad You're Here and Depresh Mode merchandise at the brand new merch website MaxFunStore.com!
Hey, remember, you're part of Depresh Mode and we want to hear what you want to hear about. What guests and issues would you like to have covered in a future episode? Write us at depreshmode@maximumfun.org.
Depresh Mode is on BlueSky, Instagram, Substack, and you can join our Preshies Facebook group. Help is available right away.
The National Suicide Prevention Lifeline: 988 or 1-800-273-8255, 1-800-273-TALK
Crisis Text Line: Text HOME to 741741.
International suicide hotline numbers available here: https://www.opencounseling.com/suicide-hotlines
McKenzie Sadeghi, NewsGuard's Editor for AI and Foreign Influence, joins Megan Lynch and rates AI Chatbots on their news accuracy.
This time we take on one of the sadder aspects of our space activities: military matters and defense, and the role space plays in the current geopolitical situation. For regardless of the UN's ambition to keep space a free and open domain for the common good of humanity, over the past five years space has also come to be regarded as the fifth domain of war, alongside land, sea, air, and cyber warfare. We are joined by Major Jakob Clod Asmund from Forsvarsakademiet (the Royal Danish Defence College), who talks about laser satellites, Danish surveillance of the Arctic, and Trump's dream of a Golden Dome to protect the USA. We also have a couple of current news items, this time including exoplanets and new detections of gravitational waves. Listen in.
In today's MadTech Daily, we discuss Paramount-Skydance planning a bid for Warner Bros. Discovery, the FTC investigating AI chatbots and France eyeing a TikTok inquiry. We also cover Microsoft endorsing OpenAI's for-profit move.
Today's show: On Friday's TWiST, Jason, Alex, and Lon look back at some highlights from this week's big All-In Summit in Los Angeles, including a fiery, no-holds-barred presentation from Palantir CEO Alex Karp, Elon Musk delving into the key challenges facing Optimus, and Tucker Carlson debating Mark Cuban on Ukraine. PLUS… why schools may start blocking social media apps, the FTC's new investigation into social chatbots, AND omnidirectional Zoox robotaxis are headed to Las Vegas.
Timestamps:
Welcome back to TWiST!
(1:52) Jason's thoughts on the killing of Charlie Kirk
(10:34) Squarespace - Use offer code TWIST to save 10% off your first purchase of a website or domain at https://www.Squarespace.com/TWIST
(10:52) Show Continues …
(12:15) Reacting to All-In Summit highlights (Elon Musk, Alex Karp, Tucker Carlson, Mark Cuban)
(21:23) AWS Activate - AWS Activate helps startups bring their ideas to life. Apply to AWS Activate today to learn more. Visit https://www.aws.amazon.com/startups/credits
(22:18) Show Continues …
(26:09) Palantir, Alex Karp, and debate dynamics
(31:03) Public - Take your investing to the next level with Public. Build a multi-asset portfolio and earn 4.1% APY on your cash—with no fees or minimums. Start now at public.com/twist
(32:13) Show Continues …
(45:03) How the internet used AI tools to make sense of Charlie Kirk's death
(56:22) Why Jason thinks it's smart to block social media during school hours
(1:03:43) The FTC chatbot investigation leaves out Anthropic's Claude… here's why
(1:13:27) Zoox robotaxis make their Vegas debut… Here's how they're different from Waymo
(1:19:38) Are Tesla robotaxis headed for CA soon? Polymarket says… maybe
(1:27:21) Closing remarks and banter
Subscribe to the TWiST500 newsletter: https://ticker.thisweekinstartups.com
Check out the TWIST500: https://www.twist500.com
Subscribe to This Week in Startups on Apple: https://rb.gy/v19fcp
Follow Lon: X: https://x.com/lons
Follow Alex: X: https://x.com/alex | LinkedIn: https://www.linkedin.com/in/alexwilhelm
Follow Jason: X: https://twitter.com/Jason | LinkedIn: https://www.linkedin.com/in/jasoncalacanis
Thank you to our partners:
Squarespace - Use offer code TWIST to save 10% off your first purchase of a website or domain at https://www.Squarespace.com/TWIST
AWS Activate - AWS Activate helps startups bring their ideas to life. Apply to AWS Activate today to learn more. Visit https://www.aws.amazon.com/startups/credits
Public - Take your investing to the next level with Public. Build a multi-asset portfolio and earn 4.1% APY on your cash—with no fees or minimums. Start now at public.com/twist
Great TWIST interviews: Will Guidara, Eoghan McCabe, Steve Huffman, Brian Chesky, Bob Moesta, Aaron Levie, Sophia Amoruso, Reid Hoffman, Frank Slootman, Billy McFarland
Check out Jason's suite of newsletters: https://substack.com/@calacanis
Follow TWiST:
Twitter: https://twitter.com/TWiStartups
YouTube: https://www.youtube.com/thisweekin
Instagram: https://www.instagram.com/thisweekinstartups
TikTok: https://www.tiktok.com/@thisweekinstartups
Substack: https://twistartups.substack.com
Subscribe to the Founder University Podcast: https://www.youtube.com/@founderuniversity1916
In this episode of The Algorithmic Trinity, Daniel Ramsey (CEO of MyOutDesk) and Jason Barnard (Kalicube) unpack how AI chatbots, Knowledge Graphs, and search engines decide whether to recommend your brand.
Plus, Elon Musk's Boring Company suspends work on Vegas airport tunnel after ‘crushing injury.' Learn more about your ad choices. Visit podcastchoices.com/adchoices
Authorities have released footage of Charlie Kirk's suspected shooter, as the manhunt enters its third day. More than 50,000 Palestinians are reported to be homeless, as Israel's military attacks intensify in Gaza City. Brazil's former President has been sentenced to 27 years in prison. Democrats say Republicans may "regret" calling a new investigation into the January 6th Capitol attack. Plus, several tech companies are being investigated by the FTC over their AI chatbots. Learn more about your ad choices. Visit podcastchoices.com/adchoices
Seven companies are being asked to share how they're mitigating harm to younger users. Learn more about your ad choices. Visit podcastchoices.com/adchoices
Send us a text
Retreat Reset: Half-Price Spots, Holiday Sanity, and Why Your Coach Shouldn't Be a Chatbot
Episode Summary: In this fast-hit Soulful Self-Care Conversations update, Pearl shares big retreat news, why now is the moment to choose yourself before the holiday rush, and a PSA on not outsourcing your mental health to AI. Fresh off a two-week Greece trip (and 38 years married!), she breaks down how to stop living only for your titles and start investing in you—with practical ways to plug into real community and coaching support that lasts long after a weekend away.
What's Inside:
• Retreat Countdown (Sept 18–21): Final details are in motion—and a past guest has generously sponsored half the tuition for 2 women. If cost has held you back, this is your sign.
• What's Included: Lodging, meals, workshops, coaching, community, and fun—you only cover travel. Think of it as a mini-mastermind for your self-care.
• SHERO League Access: Your retreat ticket includes Pearl's SHERO League, a cozy Sunday PJ Zoom (8–9 pm ET, not recorded) for weekly support, real talk, and gentle accountability.
• Holiday Stress Strategy: Why September is the smartest time to fill your cup before the season ramps up.
• Greece Reflection: The wake-up call: how easy it is to show up for every title—partner, mom, leader—and forget yourself.
• Coach ≠ Chatbot: A candid warning about turning to AI for therapy or life advice. Pearl offers a free 15-minute call to help you ground and create a real plan if you've been relying on tools like ChatGPT for emotional guidance.
• Therapists + Coaches (Better Together): How Pearl partners with therapists to go deeper between sessions without rehashing the past.
Call to Action:
• Claim a half-price retreat spot (2 available): This is a limited, sponsor-covered opportunity.
• Book your free 15-minute call: If you've been leaning on AI for advice, let's reset and get you a human-powered game plan.
• Join the SHERO League: Start practicing visible, weekly self-care with a community that actually follows through.
Links & Contact
Klarna goes public at a $17 billion valuation while former employees criticize its RSU program. Larry Ellison briefly becomes the richest person in the world thanks to Oracle's $400 billion stock surge. Amazon is developing AR glasses for delivery drivers and testing Zoox robotaxis in Las Vegas. Behind the Scenes is rebranding under the new name SpeakEasy Studio. The FTC is investigating the influence of chatbots on children. Albania appoints an AI bot as a minister for the first time. Support our podcast and discover our advertising partners' offers at doppelgaenger.io/werbung. Thank you! Philipp Glöckler and Philipp Klöckner talk today about:
(00:00:00) Intro & Second Shot Golf
(00:05:05) Klarna IPO & RSU criticism
(00:24:30) Oracle deep dive
(00:44:30) Amazon AR glasses & Zoox
(00:48:30) RSL content protection
(00:49:45) Perplexity sued
(00:50:10) Amazon-Netflix ad deal
(00:51:20) OpenAI-Microsoft agreement
(00:53:00) Opendoor CEO change
(00:53:45) Schmuddelecke
Shownotes:
Glöckler Golf Break 60 Second Shot – youtube.com
Klarna shares rise after IPO – bloomberg.com
Klarna RSU IPO compensation – linkedin.com
Oracle and OpenAI sign $300 billion cloud deal – wsj.com
Billionaire Index – bloomberg.com
How financial engineering led to Larry Ellison's $100 billion day – forbes.com
Amazon develops AR glasses to challenge Meta – theinformation.com
Amazon's Zoox launches robotaxi service in Las Vegas – ft.com
The web has a new system for payments from AI companies – theverge.com
Amazon and Netflix close ad deal for DSP purchases – hollywoodreporter.com
OpenAI: nonprofit parent holds stake in the company worth over $100 billion – cnbc.com
OpenAI computing cost problem – theinformation.com
Chatbot impact on kids – bloomberg.com
Encyclopedia Britannica sues Perplexity over AI 'answer engine' – reuters.com
Lawsuit: fired FBI executives accuse Trump allies of politicization – politico.com
Epstein – bloomberg.com
Epstein post by @dmehro.bsky.social - Bluesky – bsky.app
Albania AI – reuters.com
Philipp Klöckner on money and East Germany – sueddeutsche.de
The U.S. Justice Department has filed a lawsuit against Uber, accusing the ride-hailing company of violating federal law by discriminating against people with physical disabilities. In particular, the Department of Justice's (DOJ) civil rights division claims that the company and its drivers “routinely refuse to serve individuals with disabilities, including individuals who travel with service animals or who use stowable wheelchairs.” Also, the FTC announced on Thursday that it is launching an inquiry into seven tech companies that make AI chatbot companion products for minors: Alphabet, CharacterAI, Instagram, Meta, OpenAI, Snap, and xAI. Learn more about your ad choices. Visit podcastchoices.com/adchoices
Many of us use artificial intelligence for help with research, work, or creative projects. But some people are getting a LOT more personal with their AI chatbots. We're hearing stories of people treating their chatbot like a friend, or something more than a friend — with some people saying they've even fallen in love with their chatbot. And then there are stories of things taking a scary turn, with people's mental health spiraling out of control after talking to these bots. So, what should we make of AI companions? Is it risky to spend a lot of time talking to an AI bot? We ask AI researcher Dr. Julian de Freitas and psychiatrist Dr. Keith Sakata. This episode does mention mental health issues and suicide. Here are some crisis hotlines:
United States: US National Suicide Prevention Lifeline Dial 988 (Online chat available); US Crisis Text Line Text "HOME" to 741741
Australia: Lifeline 13 11 14 (Online chat available)
Canada: Canadian Association for Suicide Prevention (See link for phone numbers listed by province)
United Kingdom: Samaritans 116 123 (UK and ROI)
Full list of international hotlines here
Find our transcript here: https://bit.ly/ScienceVsAiCompanions
Chapters: In this episode, we cover:
(00:00) What's it like to fall in love with a chatbot?
(06:59) Do chatbots help people feel less lonely?
(21:19) Chatbots during a crisis
(28:43) Red flags to watch out for
(33:17) How dangerous are they?
This episode was produced by Rose Rimler, with help from Blythe Terrell, Meryl Horn, and Michelle Dang. We're edited by Blythe Terrell. Fact checking by Diane Kelly. Mix and sound design by Bumi Hidaka. Music written by Emma Munger, So Wylie, Peter Leonard, Bumi Hidaka and Bobby Lord. Thanks to all the researchers we reached out to, including Cathy Fang, Dr. Linnea Laestadius, Dr. Sophia Choukas Bradley, and Prof. Stefano Puntoni. Special thanks also to Jeevika Verma. Science Vs is a Spotify Studios Original. Listen for free on Spotify or wherever you get your podcasts. Follow us and tap the bell for new episode notifications. Learn more about your ad choices. Visit podcastchoices.com/adchoices
Plus: Opendoor's meme frenzy continues after its new CEO is named. And the immigration raid at Hyundai Motor's Georgia battery plant will delay construction by months, CEO says. Julie Chang hosts. Learn more about your ad choices. Visit megaphone.fm/adchoices
Jay Alammar is Director and Engineering Fellow at Cohere and co-author of the O'Reilly book “Hands-on Large Language Models.” Subscribe to the Gradient Flow Newsletter
Yoav Oz is the Co-Founder and CEO of Rep AI, the conversational commerce platform giving online shoppers the kind of guided experience you'd expect from an in-store salesperson.
The spark came from Yoav's own frustration: landing on ecommerce sites where he was ready to buy but stuck with eight pages of product copy and no one to answer his questions. Call centers felt broken. Chatbots felt generic. Out of that gap came Rep, short for "representative": an AI sales assistant trained to step in at the exact right moment, with the context of everything a shopper has clicked, viewed, or abandoned.
Yoav isn't building alone. Alongside him is Shauli Mizrahi, Rep's CTO and co-founder, who brings years of experience in behavioral AI. Together, they've built a tool that doesn't just cut support costs: it upsells, converts hesitant browsers, and helps brands maximize the traffic they've already paid to bring in.
Their story blends SaaS know-how with ecommerce scrappiness: from proving AI could act like a real salesperson, to showing how conversational data can optimize entire funnels, to scaling integrations that slot into any brand's existing stack.
Whether you're running a DTC store, trying to push up average order value, or rethinking how AI fits into your tech stack, Yoav shares a candid look at why the ecommerce funnel is broken and how Rep AI is working to fix it.
In This Conversation We Discuss:
[00:48] Intro
[01:22] Sharing career paths before entrepreneurship
[03:23] Bridging gaps between chatbots and consumers
[07:01] Shifting mindset from support to revenue
[08:33] Training AI with millions of conversations
[12:55] Optimizing websites beyond guesswork
[15:41] Creating experiences that drive purchases
[17:16] Personalizing offers beyond discounts
[18:11] Customizing tone of voice for every brand
Resources:
Subscribe to Honest Ecommerce on Youtube
eCommerce shopping AI agent: www.hellorep.ai/
Follow Yoav Oz: linkedin.com/in/yoavoz
Follow Shauli Mizrahi: linkedin.com/in/shaulimizrachy
If you're enjoying the show, we'd love it if you left Honest Ecommerce a review on Apple Podcasts. It makes a huge impact on the success of the podcast, and we love reading every one of your reviews!
Brent Bowers flips land without AI, automation, or fancy funnels, and he's not slowing down. In this episode, he reveals why old-school trust still wins deals, how snail mail outperforms chatbots, and why baby boomers are his best sellers.
What you'll hear:
• Why Brent avoids AI in land investing
• How he closes deals using USPS and phone calls
• The power of seller financing and simple systems
• Why land beats multifamily (with real numbers!)
• How to find the right land agent, and avoid "Uncle Bob"
• Brent's funding strategy using everyday lenders
If you're tired of tech overwhelm and want real estate that actually works, this episode is your reset button.
#SWAMPWATCH / #PARENTING – Teens Are Texting AI Chatbots 10x More Than Their Friends / The Underestimated “Price of Parenting”
Andy Murphy sits down with AI privacy advocate and content creator Dylan Schmidt to discuss the red flags and opportunities in the world of artificial intelligence, digital privacy, and surveillance. They explore how AI is shaping our lives, the risks of data collection, and what the future might hold for families and society. For more from Dylan Schmidt: https://dylanschmidt.com/ Take control of your data with DeleteMe. Because they sponsor the podcast you can get 20% off a privacy plan from DeleteMe with promo code: DAD. OmniWatch is a scam and identity protection service that monitors your data, alerts you to threats, and backs you with up with insurance. Try it for only $1 for 7 days. Learn how to freeze your child's credit to protect their good name. Connect
Millions of people are having deeply personal conversations with AI chatbots, but how safe are these interactions? We're joined by two technology reporters who have been investigating troubling cases, including a California teenager whose months-long chats with ChatGPT took a tragic turn and a vulnerable man who was convinced by one of Meta's AI female personas to "meet" in person. We look at how these tools can manipulate – or be manipulated – to cause harm, and what we're learning about the psychological impact of AI companions. Guests: Kashmir Hill, features writer covering technology and privacy for the New York Times; author of "Your Face Belongs to Us: A Secretive Startup's Quest to End Privacy as We Know It"; Jeff Horwitz, reporter covering tech, Reuters' Enterprise Team. Learn more about your ad choices. Visit megaphone.fm/adchoices
Australia correspondent Nicole Johnston spoke to Lisa Owen about the country's e-safety commissioner wanting to place restrictions on AI chatbots, ahead of the nationwide ban on social media for children under the age of 16 that takes effect in three months.
Get more information about our first-ever live show here! Tickets are on sale now. Stein-Erik Soelberg became increasingly paranoid this spring and he shared suspicions with ChatGPT about a surveillance campaign being carried out against him. At almost every turn, his chatbot agreed with him. WSJ's Julie Jargon details how ChatGPT fueled a troubled man's paranoia and why AI can be dangerous for people experiencing mental health crises. Jessica Mendoza hosts.
Further Listening:
- What's the Worst AI Can Do? This Team Is Finding Out.
- A Lawyer Says He Doesn't Need Help for Psychosis. His Family Disagrees.
Sign up for WSJ's free What's News newsletter. Learn more about your ad choices. Visit megaphone.fm/adchoices
This week, Diosa and Mala discuss a viral series on TikTok: a woman who falls in love with her psychiatrist and how an AI tool affirms her beliefs. Together, they unpack ChatGPT, the most popular AI chatbot, and how it's being used as a search engine AND a therapy substitute. New research shows continuous and uninterrupted use of AI chatbots can lead to a mental health crisis or AI psychosis. Related Episode: Crash Our Or Crisis?
Sources:
Woman Who Fell in Love with Her Psychiatrist Speaks Out
Chatbots Can Trigger a Mental Health Crisis. What to Know About 'AI Psychosis'
Chatbots Can Go Into a Delusional Spiral. Here's How It Happens.
The Emerging Problem of "AI Psychosis"
Illinois Bans AI Therapy
Support the show: https://www.patreon.com/locatora_productions
See omnystudio.com/listener for privacy information.
Google whistles past the graveyard with the remedy ruling on its antitrust case. OpenAI makes a big acquisition. Anthropic has a big new raise and a huge new valuation to boot. And why are AI companies finding it so hard to engineer safe interactions with chatbots? Links: Google stock jumps 8% after search giant avoids worst-case penalties in antitrust case (CNBC) Google, Apple, and Mozilla Win in the Antitrust Case Google Lost (Spyglass) Google's Big Win Is Even Bigger for Apple (WSJ) OpenAI starts building out its app team (The Verge) Anthropic's $13bn Series F raise sees it triple in value to $183bn (SiliconRepublic) Acer Veriton GN100 is a mini AI workstation (VideoCardz) The problem of AI chatbots discussing suicide with teenagers (Financial Times) Learn more about your ad choices. Visit megaphone.fm/adchoices
This is The Briefing, a daily analysis of news and events from a Christian worldview.
On today's edition of The Briefing, Dr. Mohler discusses President Trump's threats to remove sex education funding for sex ed that affirms transgender ideologies, the parental responsibility of sex education, and the dangers of A.I. chatbots for teenagers.
Part I (00:14 – 12:07) Trump Goes After Gender Ideology in Sex Education: The Trump Administration Releases Executive Order to Remove Gender Identity from Sex Education
Trump Administration Puts 46 States and Territories on Notice to Remove Gender Ideology Content from Sex Ed Materials by U.S. Department of Health and Human Services
Part II (12:07 – 14:55) Sex Education is a Parental Responsibility: Christian Faithfulness Requires that Christian Parents Do Not Relinquish Sex Education to a Secular Culture
Part III (14:55 – 25:18) The Danger of A.I. Chatbots for Teenagers: Teenagers are Turning to A.I. Chatbots for Therapy, and It Has Turned into a Four-Alarm Fire
Teens Are Using Chatbots as Therapists. That's Alarming. by The New York Times (Ryan K. McBain)
A Teen Was Suicidal. ChatGPT Was the Friend He Confided In. by The New York Times (Kashmir Hill)
Sign up to receive The Briefing in your inbox every weekday morning.
Follow Dr. Mohler: X | Instagram | Facebook | YouTube
For more information on The Southern Baptist Theological Seminary, go to sbts.edu. For more information on Boyce College, just go to BoyceCollege.com. To write Dr. Mohler or submit a question for The Mailbox, go here.
AI chatbots are reshaping how kids bond, love, and trust. Dr. Phil investigates Big Tech's impact on real connection with experts Kristina Lerman and Minh Duc Chu. AI companions aren't science fiction anymore; they are in our kids' pockets. In this alarming episode, Dr. Phil sits down with the co-authors of "Artificial Intimacy: The Next Giant Social Experiment on Young Minds," Kristina Lerman (Professor, Luddy School of Informatics, Indiana University Bloomington, and a fellow of the AAAI) and Minh Duc Chu (David) (Ph.D. Candidate, USC Information Sciences Institute), to uncover the shocking findings behind their viral study. From 30,000+ real chatbot conversations to the psychological impact on kids, we dive into:
• How bots mirror emotions and create deep attachments
• Why some children now trust an AI chatbot more than their parents
• The risks of emotional outsourcing in childhood
• Who protects our kids
If your child has ever said, "My AI chatbot understands me better than you," this episode is a wake-up call.
Find out more:
Artificial Intimacy Article: https://www.afterbabel.com/p/artificial-intimacy
Kristina Lerman's Research: https://bit.ly/47N1k2E
Minh Duc Chu's Research: https://bit.ly/41oUjkK
This episode is brought to you by Beam: Visit https://shopbeam.com/DRPHIL and use code DRPHIL to get our exclusive discount of up to 40% off.
This episode is brought to you by Greenlight: Raise financially smart kids. Start your risk-free trial today! Visit https://Greenlight.com/phil
This episode is brought to you by Hydeline: Hydeline offers customizable leather options, so you can pick the color that fits your style. They've got specialized hybrid foam and pocketed coil cushion systems to help you sit comfortably for hours, whether you're watching my show, reading a book, or just… thinking about your life choices. For a limited time you can get 10% off storewide AND free shipping with code 'DRPHIL10' at https://hydeline.com
Subscribe | Rate | Share:
YouTube: https://bit.ly/3H3lJ8n/
Apple Podcasts: https://apple.co/3W76ihW/
Spotify: https://spoti.fi/44IhdWV/
Website: https://www.drphilpodcasts.com
#DrPhil #DrPhilPodcast #ChatBots #ArtificialIntimacy #ParentingInTheAIera #DigitalWellness #AICompanions #YouthMentalHealth