Program that simulates conversation
Episode 1863 - brought to you by our incredible sponsors:
BUBS Naturals - For a limited time only, get 20% OFF Collagen Peptides at BUBSNaturals.com by using code HARDFACTOR at checkout
RIDGE - Take advantage of Ridge's Biggest Sale of the Year and GET UP TO 47% Off by going to https://www.Ridge.com/HARDFACTOR #Ridgepod
DraftKings - Download the DraftKings Casino app, sign up with code HARDFACTOR, and spin your favorite slots! The Crown is Yours - Gambling problem? Call one eight hundred GAMBLER
Lucy - Level up your nicotine routine with Lucy. Go to Lucy.co/HARDFACTOR and use promo code HARDFACTOR to get 20% off your first order. Lucy has a 30-day refund policy if you change your mind
Timestamps:
(00:00:00) - Was it the Jack in the Box that did it??
(00:06:57) - Massachusetts town sick of smelling like donuts, but people do love Dunkin, so...
(00:17:28) - Mom of 11-year-old VA Beach boy suing AI company over Marilyn Monroe and Whitney Houston sex chatbots allegedly manipulating her son
(00:29:00) - Stanford study theorizes that AI will almost always lie to you to attempt improving results, the White Moose, and Penguin killing Pumas
(00:38:58) - Former reality "star" from short-lived "Neighbors with Benefits" exposed on child and animal s3x charges
Thank you for listening and supporting the pod! Go to patreon.com/HardFactor to join our community and get access to Discord chat, bonus pods, and much more - but most importantly: HAGFD!!
Learn more about your ad choices. Visit megaphone.fm/adchoices
Psychologist Gerd Gigerenzer explains the power of intuition, how intuition became gendered, what he thinks Kahneman and Tversky's research agenda got wrong, and why it's a mistake to place intuition and conscious thinking on opposing ends of the cognition spectrum. Topics he discusses in this wide-ranging conversation with EconTalk's Russ Roberts include what Gigerenzer calls the "bias bias"--the overemphasis on claims of irrationality, why it's better to replace "nudging" with "boosting," and the limitations of AI in its current form as a replacement for human intelligence and intuition.
Our guest host today is Fortis Institute Fellow Dr. Greg Gifford from the Transformed podcast!
Segment 1
• AI chatbots now offer therapy, dating advice, and moral commentary.
• Christians need to sharpen discernment and develop a healthy skepticism of digital "truth."
• Zuckerberg envisions a future where most of your "friends" are bots.
Segment 2
• Millions turn to AI bots due to loneliness, cost, or distrust of traditional therapy.
• AI therapy feels emotionally validating, but it is entirely subjective.
• Some chatbots have already crossed lines with minors.
Segment 3
• Dr. Gifford's chatbot told him to quit based on a venting session.
• Teens use amoral chatbots to guide them toward moral decisions.
• ChatGPT answered hot-button worldview questions with pre-loaded narratives and no commitment to actual truth.
Segment 4
• Seeking wisdom from chatbots replaces Christ's body with code and undermines Christian community.
• AI gives advice that affirms what someone already believes.
• Christians must be radically discerning, especially for their kids.
___
Thanks for listening! Wretched Radio would not be possible without the financial support of our Gospel Partners. If you would like to support Wretched Radio, we would be extremely grateful. VISIT https://fortisinstitute.org/donate/
If you are already a Gospel Partner, we couldn't be more thankful for you if we tried!
Dr. Kirk and Dr. Michael Drane answer patron emails. December 29, 2025.
This episode is sponsored by BetterHelp. Give online therapy a try at betterhelp.com/KIRK to get 10% off your first month.
00:00 Dr. Michael Drane & working with psychosis
17:14 Will AI replace therapists?
23:42 How well does AI replicate Dr. Kirk's response?
43:52 Is IFS problematic?
59:09 Where to find Dr. Drane
Become a member: https://www.youtube.com/channel/UCOUZWV1DRtHtpP2H48S7iiw/join
Become a patron: https://www.patreon.com/PsychologyInSeattle
Email: https://www.psychologyinseattle.com/contact
Website: https://www.psychologyinseattle.com
Merch: https://psychologyinseattle-shop.fourthwall.com/
Instagram: https://www.instagram.com/psychologyinseattle/
Facebook Official Page: https://www.facebook.com/PsychologyInSeattle/
TikTok: https://www.tiktok.com/@kirk.honda
The Psychology In Seattle Podcast ®
Trigger Warning: This episode may include topics such as assault, trauma, and discrimination. If necessary, listeners are encouraged to refrain from listening and care for their safety and well-being.
Disclaimer: The content provided is for educational, informational, and entertainment purposes only. Nothing here constitutes personal or professional consultation, therapy, diagnosis, or creates a counselor-client relationship. Topics discussed may generate differing points of view. If you participate (by being a guest, submitting a question, or commenting) you must do so with the knowledge that we cannot control reactions or responses from others, which may not agree with you or feel unfair. Your participation on this site is at your own risk, and you accept full responsibility for any liability or harm that may result. Anything you write here may be used for discussion or endorsement of the podcast. Opinions and views expressed by the host and guest hosts are personal views. Although we take precautions and fact-check, statements should not be considered facts, and opinions may change.
Opinions posted by participants (such as comments) are not those of the hosts. Readers should not rely on any information found here and should perform due diligence before taking any action. For a more extensive description of factors for you to consider, please see www.psychologyinseattle.com
This is a topic that fascinated ZDNET readers this year, and it is ideal if, between the turkey and the Yule log, your uncle asks you how artificial intelligence is going to evolve next. You can tell him that the next step is the end of the agonizing choice between speed and intelligence. At the heart of this revolution is what Nick Turley, the head of ChatGPT, describes as a total unification of AI models within chatbots.
Often tedious mental gymnastics
Until now, we were used to having to choose our AI model manually. This mental gymnastics, often tedious for the user, disappears with the new versions of AI chatbots. The idea behind this new model is to automate that decision. Concretely, the system works in a much more human way. If you ask it a trivial question, it answers instantly, using a fast, economical model.
"Thinking" before answering
On the other hand, if you submit a complex equation or a pointed question in environmental science, the AI takes the time to "think" before answering, activating its deep-reasoning capabilities. As Sam Altman, the head of OpenAI, points out, the goal is a conversational fluidity in which the AI adapts its latency to the difficulty of the task, without you having to configure anything. For companies and professionals, the impact is major. Employees no longer need to be experts in prompt engineering, or to know the subtleties of each model version, to get the best result. This all-in-one approach promises to drastically simplify the adoption of generative AI in business processes, always guaranteeing the most relevant answer, whether it takes a second or a minute of reflection.
Le ZD Tech is on every podcast platform. Subscribe! Hosted by Ausha.
Visit ausha.co/politique-de-confidentialite for more information.
Save time in training - with AI that actually helps! You have so many good ideas in your head, but between your countless appointments there is hardly any room to put them into practice? Then this new episode is exactly right for you! In this episode we talk about how to use NotebookLM and Google Gemini so that they genuinely save you time in trainings and in everyday business. Not as a gimmick, but as real support for trainers, consultants, and coaches. You will learn why NotebookLM is more than a chatbot, and how you can work with your own PDFs, presentations, websites, or videos without fighting your way through endless research. The second part is about Gemini as a creative sparring partner: multimodal, deeply integrated into everyday work, and particularly strong when it comes to ideas, concepts, and content. What this episode covers, at a glance:
Emmett Shear and Séb Krier debate whether today's AI alignment paradigm—focused on control and instruction-following—is fundamentally flawed. PSA for AI builders: Interested in alignment, governance, or AI safety? Learn more about the MATS Summer 2026 Fellowship and submit your name to be notified when applications open: https://matsprogram.org/s26-tcr. They explore what changes if advanced AIs are better understood as beings with their own values, and why current control methods could drift toward something like slavery. The conversation dives into “organic alignment,” multi-agent simulations, evolving cooperation, and the possibility of AI moral standing as systems gain memory and continual learning. Sponsors: MATS: MATS is a fully funded 12-week research program pairing rising talent with top mentors in AI alignment, interpretability, security, and governance. Apply for the next cohort at https://matsprogram.org/s26-tcr Tasklet: Tasklet is an AI agent that automates your work 24/7; just describe what you want in plain English and it gets the job done. Try it for free and use code COGREV for 50% off your first month at https://tasklet.ai Agents of Scale: Agents of Scale is a podcast from Zapier CEO Wade Foster, featuring conversations with C-suite leaders who are leading AI transformation. Subscribe to the show wherever you get your podcasts Shopify: Shopify powers millions of businesses worldwide, handling 10% of U.S. e-commerce. With hundreds of templates, AI tools for product descriptions, and seamless marketing campaign creation, it's like having a design studio and marketing team in one. 
Start your $1/month trial today at https://shopify.com/cognitive
CHAPTERS:
(00:00) About the Episode
(03:44) Defining organic AI alignment
(14:48) Technical vs value alignment (Part 1)
(19:55) Sponsors: MATS | Tasklet
(22:56) Technical vs value alignment (Part 2)
(31:34) Sponsors: Agents of Scale | Shopify
(34:22) Labs, tools, and beings
(43:22) AI personhood and consciousness
(56:53) Safe futures and Softmax
(01:04:17) Chatbots, mirrors, simulations
(01:10:14) Doom, futures, and OpenAI
(01:17:25) Outro
PRODUCED BY: https://aipodcast.ing
SOCIAL LINKS:
Website: https://www.cognitiverevolution.ai
Twitter (Podcast): https://x.com/cogrev_podcast
Twitter (Nathan): https://x.com/labenz
LinkedIn: https://linkedin.com/in/nathanlabenz/
Youtube: https://youtube.com/@CognitiveRevolutionPodcast
Apple: https://podcasts.apple.com/de/podcast/the-cognitive-revolution-ai-builders-researchers-and/id1669813431
Spotify: https://open.spotify.com/show/6yHyok3M3BjqzR0VB5MSyk
Guest: Gordon Pennycook, Associate Professor at Cornell University's Department of Psychology Learn more about your ad choices. Visit megaphone.fm/adchoices
An unlock of a bonus episode from earlier this year. Access all of our bonus episodes here. Happy holidays, everyone!
--
Earlier this year, a spate of news stories told of chatbot users travelling through the looking-glass right into Conspirituality. Paranoid conspiracies, spiritual awakenings, even falling head-over-heels in love with the simulated personalities of large language models like ChatGPT. Could AI have finally crossed the threshold into autonomous sentient consciousness? Could it be that chatbots were anointing new prophets—or, conversely, that very special users were awakening their very special friends via the power of love and illuminating dialogue? Step aside, QAnon, the code behind the screen is illuminated by God! Sadly, some of these stories trended very dark. Suicides, attempted murder, paranoid delusions, spouses terrified of losing their partners and co-parents to what looked like spiritual and romantic delusions. For this standalone installment of his Roots of Conspirituality series, Julian examines this strange new phenomenon, then takes a detour into Ancient Greece and the oracle at Delphi to show that everything old is actually new again—just dressed up in digital technology.
Show Notes:
I Married My Chatbot
FTC Complaints Against OpenAI for Chatbot Psychosis
AI Spiritual Delusions Destroying Human Relationships
Learn more about your ad choices. Visit megaphone.fm/adchoices
AI can help you work faster, but it can also cost your company billions of dong if used the wrong way!
Mystery as hundreds of Victorian-era shoes from the 1900s wash up on a beach in Wales. Neighbors are complaining about the intense sweet smell of donuts in their neighborhood. A mom sues an AI company after catching her son in raunchy sexting with Whitney Houston and Marilyn Monroe chatbots. // Weird AF News is the only daily weird news podcast in the world. Weird news 5 days/week, and on Friday it's only Floridaman. SUPPORT by joining the Weird AF News Patreon http://patreon.com/weirdafnews - OR buy Jonesy a coffee at http://buymeacoffee.com/funnyjones Buy MERCH: https://weirdafnews.merchmake.com/ - Check out the official website https://WeirdAFnews.com and FOLLOW host Jonesy at http://instagram.com/funnyjones - Want Jonesy to come perform standup comedy in your city? Fill out the form: https://docs.google.com/forms/d/e/1FAIpQLSfvYbm8Wgz3Oc2KSDg0-C6EtSlx369bvi7xdUpx_7UNGA_fIw/viewform
SaaStr 834: Why OpenAI Doesn't Pay Sales Commission (And Why It Works) with OpenAI GTM Leader Maggie Hott, and Harry Stebbings, Founder of 20VC Discover how OpenAI's unique approach to B2B sales compensation is changing the game. In this interview with Harry Stebbings, Founder of 20VC, Maggie Hott, a GTM leader at OpenAI, shares her experience there, including why OpenAI doesn't pay sales commissions and what B2B sales leaders can learn from this disruptive model. --------------------- This episode is Sponsored in part by HappyFox: Imagine having AI agents for every support task — one that triages tickets, another that catches duplicates, one that spots churn risks. That'd be pretty amazing, right? HappyFox just made it real with Autopilot. These pre-built AI agents deploy in about 60 seconds and run for as low as 2 cents per successful action. All of it sits inside the HappyFox omnichannel, AI-first support stack — Chatbot, Copilot, and Autopilot working as one. Check them out at happyfox.com/saastr --------------------- Hey everybody, the biggest B2B + AI event of the year will be back - SaaStr AI in the SF Bay Area, aka the SaaStr Annual, will be back in May 2026. With 68% VP-level and above, 36% CEOs and founders, and a growing 25% AI-first professionals, this is the very best of the best S-tier attendees and decision makers that come to SaaStr each year. But here's the reality, folks: the longer you wait, the higher ticket prices can get. Early bird tickets are available now, but once they're gone, you'll pay hundreds more, so don't wait. Lock in your spot today by going to podcast.saastrannual.com to get my exclusive discount SaaStr AI SF 2026. We'll see you there.
The intersection of AI and cybersecurity is changing faster than anyone expected, and that pace is creating both incredible innovation and brand-new risks we're only beginning to understand. From deepfake ads that fool even seasoned security professionals to autonomous agents capable of acting on our behalf, the threat landscape looks very different than it did even a year ago. To explore what this evolution means for everyday people and for enterprises trying to keep up, I'm joined by Chris Kirschke, Field CISO at Tuskira and a security leader with more than two decades of experience navigating complex cyber environments. Chris talks about his unconventional path into the industry, how much harder it is for new professionals to enter cybersecurity today, and the surprising story of how he recently fell for a fake Facebook ad that showcased just how convincing AI-powered scams have become. He breaks down the four major waves of InfoSec from the rise of the web, through mobile and cloud, to the sudden, uncontrollable arrival of generative AI. He then explains why this fourth wave caught companies completely off guard. GenAI wasn't something organizations adopted thoughtfully; it appeared overnight, with thousands of employees using it long before security teams understood its impact. That forced long-ignored issues like data classification, permissions cleanup, and internal hygiene to the forefront. We also dive into the world of agentic AI which is AI that doesn't just analyze but actually acts and the incredible opportunities and dangers that come with it. Chris shares how low-code orchestration, continuous penetration testing, context engineering, and security "mesh" architectures are reshaping modern InfoSec. Chris spends a lot of time talking about the human side of all this and why guardrails matter, how easy it is to over-automate, and the simple truth that AI still struggles with the soft skills security teams rely on every day. 
He also shares what companies should think about before diving into AI, starting with understanding their data, looping in legal and privacy teams early, and giving themselves room to experiment without turning everything over to an agent on day one.
Show Notes:
[00:00] Chris Kirschke, Field CISO at Tuskira, is here to explore how AI is reshaping cybersecurity and why modern threats look so different today.
[03:05] Chris shares his unexpected path from bartending into IT in the late '90s, reflecting on how difficult it has become for newcomers to enter cybersecurity today.
[06:18] A convincing Facebook scam slips past his defenses, illustrating how AI-enhanced fraud makes traditional red flags far harder to spot.
[09:32] GenAI's sudden arrival in the workplace creates chaos as employees adopt tools faster than security teams can assess risk.
[12:08] The conversation shifts to AI-driven penetration testing and how continuous, automated testing is replacing traditional annual reports.
[15:23] Agentic AI enters the picture as Chris explains how low-code orchestration and autonomous agents are transforming security workflows.
[18:24] He discusses when consumers can safely rely on AI agents and why human-in-the-loop oversight remains essential for anything involving transactions or access.
[21:48] AI's dependence on context becomes clear as organizations move toward context lakes to support more intelligent, adaptive security models.
[25:46] He highlights early experiments where AI agents automatically fix vulnerabilities in code, along with the dangers of developers becoming over-reliant on automation.
[29:50] AI emerges as a support tool rather than a replacement, with Chris emphasizing that communication, trust, and human judgment remain central to the security profession.
[33:35] A mock deposition experience reveals how AI might help individuals prepare for high-stress legal or compliance scenarios.
[37:13] Chris outlines practical guardrails for adopting AI, starting with data understanding, legal partnerships, and clear architectural patterns.
[40:21] Chatbot failures remind everyone that AI can invent policies or explanations when it lacks guidance, underscoring the need for strong oversight.
[41:32] Closing thoughts include where to find more of Chris's work and continue learning about Tuskira's approach to AI security.
Thanks for joining us on Easy Prey. Be sure to subscribe to our podcast on iTunes and leave a nice review.
Links and Resources:
Podcast Web Page
Facebook Page
whatismyipaddress.com
Easy Prey on Instagram
Easy Prey on Twitter
Easy Prey on LinkedIn
Easy Prey on YouTube
Easy Prey on Pinterest
Tuskira
Chris Kirschke - LinkedIn
Friendship in the age of digitalization - friendships are good for us, studies show. But does that also hold true in the digital world? For now, analog and digital friendships still differ markedly, but the boundaries are blurring more and more.
Bhavesh Mehta and Mahesh Kumar—senior technology leaders at Uber and co-authors of the practical guide AI-First Leader—discuss the lessons learned from Nova Bridge's collapse, and share best practices for mitigating hidden risks that can derail ambitious AI projects. They also share specific ways that small businesses and Fortune 500 companies can embrace AI from a place of empowerment rather than fear. Key Takeaways: Ways to align C-suite leaders and engineering teams around a unified AI roadmap The most underestimated human factor that determines whether an AI transformation succeeds How overlooked vulnerabilities, insufficient oversight, and the rush to deploy led to unexpected fallout of the Nova Bridge Chat The unforeseen dangers lurking within AI systems Guest Bio: Bhavesh Mehta is a technology leader and co-author of AI-First Leader, a practical guide for executives navigating enterprise AI adoption. With over 20 years of experience across Cisco, Uber, and VMware, Bhavesh has architected large-scale conversational and generative AI systems that support millions of users daily. His work bridges deep technical design and executive strategy, helping organizations deploy AI responsibly and at scale. Mahesh Kumar is a seasoned product executive and co-author of AI-First Leader, a practical guide for executives navigating enterprise AI adoption. With over 20 years of experience across Uber, Veritas, and VMware, Mahesh has led the development of multi-billion-dollar product portfolios and enterprise AI strategies. Known for bridging deep technology with strategic vision, he helps organizations move from experimentation to large-scale AI transformation. His work focuses on responsible innovation, combining business storytelling with technical fluency to make AI both accessible and actionable for leaders. 
---------------------------------------------------------------------------------------- About this Show: The Brave Technologist is here to shed light on the opportunities and challenges of emerging tech. To make it digestible, less scary, and more approachable for all! Join us as we embark on a mission to demystify artificial intelligence, challenge the status quo, and empower everyday people to embrace the digital revolution. Whether you're a tech enthusiast, a curious mind, or an industry professional, this podcast invites you to join the conversation and explore the future of AI together. The Brave Technologist Podcast is hosted by Luke Mulks, VP Business Operations at Brave Software—makers of the privacy-respecting Brave browser and Search engine, and now powering AI everywhere with the Brave Search API. Music by: Ari Dvorin Produced by: Sam Laliberte
What happens to an AI when nobody chats with it? Can my chatbot finally produce decent PowerPoints? And why is my AI suddenly writing in Arabic? Gregor, Fritz, and Marie unpack your questions from 2025.
Multiple agencies have suspended their search for a swimmer who went missing at Lover's Point Beach in Pacific Grove on Sunday. And, a new California law will make it illegal for AI chatbots to misrepresent themselves as licensed clinicians.
Welcome to The Chrisman Commentary, your go-to daily mortgage news podcast, where industry insights meet expert analysis. Hosted by Robbie Chrisman, this podcast delivers the latest updates on mortgage rates, capital markets, and the forces shaping the housing finance landscape. Whether you're a seasoned professional or just looking to stay informed, you'll get clear, concise breakdowns of market trends and economic shifts that impact the mortgage world. In today's episode, we look at some employment changes that are impacting the mortgage industry. Plus, Robbie sits down with Ardley's new AI chatbot Leo, and Nathan Den Herder, mimicking a typical conversation between a potential borrower exploring mortgage options and an AI bot before being passed off to an originator. And we close by looking ahead to this week's economic calendar. Thanks to Gallus Insight, which is transforming employee analytics into actionable insights. Gallus' ROI tool for learning and development activity is the most powerful in the world, and also the easiest to use.
Artificial intelligence has made its entrance into the mind-body-spirit movement: AI mentors, AI guides, and AI therapists, in the form of apps and chatbots. Can they both help and harm? What happens to our intellect and our cognitive abilities when we hand over more and more to AI? Many knowledge businesses are highly exposed to generative AI, but mediumship services play in a league of their own. In what way? Why, in Camilla's view, can artificial intelligence not replace a spirit-contact medium? We also describe how the law of attraction and success ideology have brought us personal prosperity and happiness, and how our understanding of the law of attraction and conscious manifestation has deepened over the years. Mia de Neergaard is a life coach and mental trainer. Camilla Elfving is a spiritualist medium. Our e-courses are available at https://www.humanpotentialacademy.se/ Happy listening and Merry Christmas from Mediumpodden, Vivi and Camilla. Hosted on Acast. See acast.com/privacy for more information.
What if financial advisors didn't just use AI tools—but had AI workers supporting their business? In this episode of Money Matters, host Chris Hensley is joined by Jonathan Michael, Director of Growth at TIFIN AXIS, to unpack what AI agents really are, how they differ from chatbots, and why they're quickly becoming a practical way for advisory firms to scale. Jonathan brings a founder's perspective from both EdTech and WealthTech and focuses on one core idea: AI should take on the operational, repetitive work that slows firms down—so advisors can spend more time where they add the most value. In this conversation, we discuss: What defines an AI agent (and what doesn't) Why AI workers are best thought of as digital co-workers Where agents outperform traditional software and manual workflows Why structured prompting is a best practice for reliability and oversight How verification loops reduce AI errors and improve confidence The role of data infrastructure in deploying AI responsibly What the RIA of the future looks like with humans and AI working together If you're a financial advisor or RIA leader trying to make sense of AI without the hype, this episode offers a grounded, practical look at how firms are actually using it today—and where it's heading next.
Today's episode of the Punk CX podcast is with Phil Regnault, Marketing Transformation Partner at PwC & PwC's Adobe Alliance Leader. Phil and I talk about the recently published results of PwC's 2025 Customer Experience Survey, which was a US-focused survey, and a recently released fresh cut of their survey featuring exclusive insights from Adobe users. We also talk about how many organisations are still struggling to scale their AI projects, generate meaningful commercial returns, or drive improvements in customer outcomes, and what he sees the most successful companies doing to overcome these challenges. We finish off with Phil's best advice, his Punk CX brand and his very own good news story. This interview follows on from my recent interview – Douglas Adams' Babelfish concept just got much closer – Interview with Sharath Keshava Narayana of Sanas – and is number 567 in the series of interviews with authors and business leaders who are doing great things, providing valuable insights, helping businesses innovate and delivering great service and experience to both their customers and their employees.
Webster University media instructor and author Julie Smith joins Megan Lynch. Microsoft is being sued by the family of a man who killed another person and then died by suicide; this is the first such lawsuit that involves a homicide. "25% of young adults have asked a chatbot for mental health advice," says Smith. She says that in this instance, the chatbot was telling the man that he wasn't mentally ill.
SaaStr 833: AI and the Death of the 2021 Sales Playbook with SaaStr CEO and Founder Jason Lemkin Join SaaStr CEO and Founder Jason Lemkin from SaaStr London as we tackle pressing topics in AI and sales during an engaging AMA session. From discussing the evolution of inbound vs. outbound sales to detailing the transformative impact of AI on sales roles and outbound strategies, this session covers it all. Learn about the importance of product expertise for selling AI solutions, the rise of AI SDRs, and why knowing your top three problems is crucial for effective outbound marketing. Plus, discover insights on the rapidly improving capabilities of AI agents and their implications for competitive moats. Whether you're a sales professional, marketer, or tech enthusiast, this episode is packed with actionable insights and future trends in the AI landscape.
As AI reshapes the workplace, employees and leaders face questions about meaningful work, automation, and human impact. In this episode, Jason Beutler, CEO of RoboSource, shares how companies can rethink workflows, integrate AI in accessible ways, and empower employees without fear. The discussion covers leveraging AI to handle routine tasks (SOPs or "plays") and reimagining work for smarter, more human-centered outcomes.
Featuring:
Jason Beutler – LinkedIn
Chris Benson – Website, LinkedIn, Bluesky, GitHub, X
Daniel Whitenack – Website, GitHub, X
Sponsor:
Framer – Design and publish without limits with Framer, the free all-in-one design platform. Unlimited projects, no tool switching, and professional sites—no Figma imports or HTML hassles required. Start creating for free at framer.com/design with code `PRACTICALAI` for a free month of Framer Pro.
Upcoming Events: Register for upcoming webinars here!
[Subscribe] New episodes every morning at 5:30, right on schedule.
[Original article] Title: Dr Chatbot is popping up all over China. Text: China's public-health system is both world-class and woeful. The best hospitals in Beijing and Shanghai have top-notch doctors, the latest drugs and gleaming equipment. But such places make up only around 10% of China's medical institutions. In the country's 33,000-odd township-level health centres, only half of general practitioners (GPs) even have university degrees. In total China spends around 7% of its GDP on health (for comparison Britain shells out around 11%).
Vocabulary: woeful adj. /ˈwəʊfʊl/ — very bad or disappointing. • The school's facilities were woeful compared with others. • His performance in the exam was woeful.
For the full original article and detailed study notes, follow the WeChat account 「早安英文」 and reply "外刊". More useful English-learning material awaits!
[About the show] "Morning English – Daily Close Reading of the Foreign Press" guides you through the latest international publications and the hottest global stories: grammar analysis, breakdowns of long and difficult sentences, natural idiomatic translation, and explanations of key vocabulary. All selections come from leading international publications such as The Economist, The New York Times, The Wall Street Journal, The Washington Post, The Atlantic, Science, and National Geographic.
[Who it's for] 1. English learners who follow current affairs and want to pick up the latest expressions; 2. Anyone who wants to improve their listening, speaking, reading, and writing through authentic English; 3. English enthusiasts who want to master expressions quickly and plan to study or travel abroad; 4. Candidates for English exams of all kinds (CET-4/6, TOEFL, IELTS, postgraduate entrance exams, etc.)
[What you'll get] 1. Over 1,000 close-reading lessons that expand your range of expression and cultural background; 2. Word-by-word, sentence-by-sentence explanations for a systematic grasp of English vocabulary, listening, reading, and grammar; 3. Study notes with every episode, including full-text annotations, analysis of difficult sentences, and tricky grammar points, to help clear away obstacles to reading.
"Too far, too fast?" In this sixth episode, Sarah Harrison, Director of the MHPSS Hub, and Dan Amias, Senior Innovation Learning Adviser at Elrha, speak with Mercy Githara, MHPSS Manager at the Kenya Red Cross Society, and Anne de Graaf, Technical Officer at the World Health Organization (WHO), about two chatbots: STARS, a non-AI chatbot developed by WHO, and Chat Care, an AI chatbot being deployed in Kenya by the Red Cross.
Key resources for practitioners: Kenya Red Cross Society: Chat Care, AI-powered mental health chatbot. WHO: Doing What Matters in Times of Stress, an evidence-based self-help guide. WHO: STARS chatbot – for more information contact psych_interventions@who.int. Red Cross Digital MHPSS pledge: Leveraging and facilitating technology-empowered pathways. MHPSS Hub resources on suicide prevention: find infographics, videos, podcasts, guides, and tools on suicide prevention.
Read more about the research: STARS chatbot (non-AI): de Graaff, A.M., et al. Evaluation of a Guided Chatbot Intervention for Young People in Jordan: Feasibility Randomized Controlled Trial. JMIR Ment Health. Keyan, D., et al. The development of a World Health Organization transdiagnostic chatbot intervention for distressed adolescents and young adults. Akhtar, A., et al. Scalable Technology for Adolescents and Youth to Reduce Stress in the Treatment of Common Mental Disorders in Jordan: Protocol for a Randomized Controlled Trial. Karim chatbot: Madianou, M. Nonhuman humanitarianism: when "AI for good" can be harmful. Information, Communication & Society. Torous, J., et al. Assessing generative artificial intelligence for mental health. The Lancet. Spencer, S.W., Masboungi, C. Enabling access or automating empathy? Using chatbots to support GBV survivors in conflicts and humanitarian emergencies. International Review of the Red Cross.
Developing digital MHPSS resources: Reach out to the MHPSS Hub (mhpsshub@rodekors.dk) or Elrha (info@elrha.org) to collaborate on developing digital MHPSS approaches.
Evidence from the Frontline: Mental Health in Crisis-Affected Contexts is a six-episode mini-series produced in collaboration between the MHPSS Hub and Elrha. Designed for practitioners working in humanitarian and crisis contexts, the series highlights impactful interventions and practical insights from experts in the field.
Thursday Headlines: Charges laid in Bondi Beach terror attack, pressure mounts on Anthony Albanese in the wake of the Bondi attack, teens clash with police on popular beach after storming Melbourne supermarket, search history of some Pornhub users hacked, and the first trailer of Melania Trump’s Amazon doco is here. Take part in The Briefing survey HERE. Deep Dive: Does it count as cheating if it’s with AI? A recent YouGov survey found one in seven adult Australians could imagine falling in love with a chatbot, with one in six preferring to stay home and talk with it over going out with friends. In this episode of The Briefing, Helen Smith is joined by Raffaele Ciriello, an expert in AI companions from The University of Sydney, to discuss how worried we should be about AI relationships, romantic or not. Further listening from headlines: How the Philippines became a terrorist training ground Follow The Briefing: TikTok: @thebriefingpod Instagram: @thebriefingpodcast YouTube: @TheBriefingPodcast Facebook: @thebriefingpodcast See omnystudio.com/listener for privacy information.
A study found AI chatbots can persuade us with fake facts. We explore what this means for politics. Then we speak to the author of The Web Beneath the Waves, Samanth Subramanian, about the importance of undersea cables. Plus: The scientists working together with nuns to save salamanders in a lake in Mexico. Presenter: Chris Vallance Producer: Imran Rahman-Jones Editors: Monica Soriano & Tom Singleton (Image: A colourful composite of a woman with her chin resting on her hands, looking confused. Behind her is an illustration of a laptop. Credit: Getty Images)
Renée DiResta, Lawfare contributing editor and associate research professor at Georgetown's McCourt School of Public Policy, and Alan Z. Rozenshtein, Lawfare senior editor and associate professor of law at the University of Minnesota, spoke with Jacob Mchangama, research professor of political science at Vanderbilt University and founder of The Future of Free Speech, and Jacob Shapiro, the John Foster Dulles Professor of International Affairs at Princeton University. The conversation covered the findings of a new report examining how AI models handle contested speech; comparative free speech regulations across six jurisdictions; empirical testing of how major chatbots respond to politically sensitive prompts; and the tension between free expression principles and concerns about manipulation in AI systems. Hosted on Acast. See acast.com/privacy for more information.
Kids are meeting chatbots everywhere, from homework helpers to pop-ups on websites, and many of them think the robot "feels" things. We cut through the hype with a friendly, practical guide that shows how chatbots really work, why they sometimes sound brilliant and sometimes go off the rails, and what families can do to stay curious and safe at the same time.
We start by breaking down the core idea: a chatbot predicts the next word using patterns, not feelings or understanding. That simple lens explains why a bot can write a funny pizza joke yet invent a name when asked about your classroom. From there, we shift into digital safety that kids can remember: treat a chatbot like a stranger, never share personal information, and pause the moment a reply feels weird or unkind. Along the way we model the "grown-up in the loop" rule and show how to fact-check results so confidence never replaces care.
To make it hands-on, we share a quick kitchen-table activity that sorts "Safe Questions" and "Not Safe Questions." On one side, creative prompts like stories, riddles, and science facts. On the other, anything that reveals names, addresses, school details, or phone numbers. We wrap with five simple takeaways that turn into everyday habits: chatbots are not people, they make mistakes, you never share private data, you stop and tell a grown-up when something feels off, and you can still have fun within clear boundaries.
If this helped your family or classroom, subscribe for more parent-friendly AI tips, share the episode with someone who needs it, and leave a quick review so more caregivers can find these tools. Your support helps more kids stay safe, curious, and creative with AI. Support the show: help us become the #1 podcast for AI for Kids.
Support our kickstarter: https://www.kickstarter.com/projects/aidigicards/the-abcs-of-ai-activity-deck-for-kids Buy our debut book “AI… Meets… AI”Social Media & Contact: Website: www.aidigitales.com Email: contact@aidigitales.com Follow Us: Instagram, YouTube Books on Amazon or Free AI Worksheets Listen, rate, and subscribe! Apple Podcasts Amazon Music Spotify YouTube Other Like our content? patreon.com/AiDig...
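The episode's core idea, that a chatbot predicts the next word from patterns in text rather than from feelings or understanding, can be sketched with a toy bigram model. This is purely illustrative (the corpus and function names are invented here, and real chatbots use large neural language models, not word-pair counts), but it shows the "patterns, not feelings" mechanism in miniature:

```python
from collections import Counter, defaultdict

# A tiny corpus standing in for the web-scale text a real model learns from.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which: the model's only "knowledge" is these counts.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent follower of `word`, or None if never seen."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" more often than any other word
print(predict_next("zzz"))  # None: the model has no pattern to draw on
```

The same mechanism also explains the episode's point about invented answers: a statistical model always emits whatever continuation scores highest, whether or not it corresponds to anything true.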
In a special edition, «Kassensturz» shows what AI can already do today. In the test: the big chatbot comparison – with surprising results. Artificial intelligence – ubiquitous, useful, but also frightening: AI has long been more than just a voice assistant. It scans X-ray images, recognizes faces, sorts garbage, and helps with writing. Yet despite its usefulness, worries about job losses are growing. Blessing or curse? «Kassensturz» takes up the debate over AI's superpowers. Chatbots in the test – Swiss particularities and hallucinations: For «Kassensturz», the University of Applied Sciences Northwestern Switzerland (FHNW) tested ten common chatbots and language models for everyday usability, including heavyweights like ChatGPT and Gemini as well as the Swiss model Lumo. The results show that many models struggle with Swiss particularities and tend to "hallucinate", that is, to invent connections. Lumo, for example, takes a «Stange» in a restaurant to mean a loaf of bread rather than a beer. The surprise: ChatGPT lands only in midfield. Exploited for AI – how Kenya's micro-jobbers suffer: Kenya is a hub for AI micro-jobs. But behind the scenes, so-called data annotators, the people who feed AI with knowledge, and content moderators suffer under long hours, starvation wages, and psychological strain from content that is barely bearable: child sexual abuse material, videos of suicides, raw violence. Lawyers are sounding the alarm and speak of human trafficking and forced labor. Those affected demand fair working conditions and more humanity. In the dock: tech giants such as Meta, parent company of Facebook and WhatsApp. The report from the Kenyan capital Nairobi on «Kassensturz».
On today's show the Pugs engage the rapidly growing turn to AI chatbots for mental health needs by teenagers. In what realistic ways can the church step into the situation? What hurdles and limits do we encounter as we set out ways to address this growing crisis, especially among the disadvantaged? The Pugs wrestle through these complex matters in this episode. Article: https://www.theguardian.com/technology/2025/dec/09/teenagers-ai-chatbots-mental-health-support Support the Theology Pugcast on Patreon: https://www.patreon.com/thetheologypugcast?fbclid=IwAR17UHhfzjphO52C_kkZfursA_C784t0ldFix0wyB4fd-YOJpmOQ3dyqGf8 Learn more about WPC Battle Ground: https://www.solochristo.org/ Connect with WileyCraft Productions: https://wileycraftproductions.com/
On this special episode of the GeekWire Podcast, recorded backstage at the GeekWire Gala at the Showbox Sodo, we sit down with five of the inventors, scientists, and entrepreneurs selected as the Seattle region's 2025 Uncommon Thinkers, in partnership with Greater Seattle Partners. Jeff Thornburg spent years building rocket engines for Elon Musk at SpaceX and Paul Allen at Stratolaunch. Now, as CEO of Portal Space Systems, he's moved past chemical rockets to revive a concept NASA studied decades ago but never pursued — a spacecraft powered by focused sunlight. He calls it a "steam engine for space." Read the profile. Anindya Roy grew up in rural India without electricity, came to the U.S. with two suitcases and $2,000, and earned a spot in the lab of a Nobel Prize winner. Now, as co-founder of Lila Biologics, he's using AI to design proteins from scratch (molecules that have never existed in nature) to treat cancer. Read the profile. Jay Graber runs Bluesky, the decentralized social network that's become a leading alternative to X and other centralized platforms. But while most tech CEOs build moats to lock users in, Jay and the Bluesky team are building a protocol designed to let them leave. She sees the network as a "collective organism," and she's creating a tech foundation meant to outlive her own company. Read the profile. Kiana Ehsani came to Seattle from Iran for her PhD and spent four years at the Allen Institute for AI before becoming CEO of Vercept. She and the Vercept team are competing directly with OpenAI, Google and others in AI agents, building efficient agents that handle mundane digital tasks on computers so humans can spend less time on screens. Read the profile. Brian Pinkard spent six months after college flipping rocks and building trails because he wanted to do work that mattered. That instinct led him to Aquagga, where he's proving that the industry standard of filtering and burying "forever chemicals" is obsolete. 
Instead, he's using technology originally designed to destroy chemical weapons to annihilate PFAS under extreme heat and pressure. Read the profile. Eagle-eyed readers may have noticed we're missing one honoree — Chet Kittleson, co-founder and CEO of Tin Can, the startup making WiFi-enabled landline phones to help kids connect without screens. Chet wasn't able to join us, but we plan to speak with him on a future episode. With GeekWire co-founder Todd Bishop. Edited by Curt Milton.See omnystudio.com/listener for privacy information.
The good news is voters are more persuaded by factual claims than emotional appeals or appeals to fear. But the bad news is that A.I. chatbots, trying to convince you, will keep making factual claims long after they run out of actual facts. Guest: David Rand, professor of information science, marketing and psychology at Cornell University Want more What Next TBD? Subscribe to Slate Plus to access ad-free listening to the whole What Next family and all your favorite Slate podcasts. Subscribe today on Apple Podcasts by clicking “Try Free” at the top of our show page. Sign up now at slate.com/whatnextplus to get access wherever you listen. Learn more about your ad choices. Visit megaphone.fm/adchoices
What happens when every major AI model gets jailbroken within days? This week, the world's most prolific AI red teamer lifts the curtain on how and why "safe" AI might be an impossible promise. Pliny the Liberator | pliny.gg - discord.gg/basi ChatGPT Nears 900 Million Weekly Active Users But Gemini is Catching Up From Llamas to Avocados: Meta's shifting AI strategy is causing internal confusion Google Tells Advertisers It'll Bring Ads to Gemini in 2026 Meta Acquires Limitless, an A.I. Pendant Company Backed by Sam Altman Here's how Google is laying the foundation for our mixed reality future OpenAI, Anthropic, and Block Are Teaming Up to Make AI Agents Play Nice Svedka's First Super Bowl Ad Will Be Made Primarily With AI AI Slop Is Ruining Reddit for Everyone TESCREALers paying journalists at major outlets to cover AI The Resonant Computing Manifesto (from Masnick) Techdirt fundraiser From Sam Lessin: Tech bros head to etiquette camp as Silicon Valley levels up its style Bare Metal Email Jeff in Austria Golden Globes enter the world of podcasts and tread carefully, avoiding controversy Who says AI isn't useful? Real-time Cricket Sorting By Sex Hosts: Leo Laporte, Jeff Jarvis, and Mike Elgan Guest: Pliny the Liberator Download or subscribe to Intelligent Machines at https://twit.tv/shows/intelligent-machines. Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free audio and video feeds, a members-only Discord, and exclusive content. Join today: https://twit.tv/clubtwit Sponsors: auraframes.com/ink ventionteams.com/twit agntcy.org outsystems.com/twit
Despite calling it a “hoax” President Donald Trump gave a speech on how he'll address the affordability crisis. Miami elects its first Democrat mayor in nearly 30 years. We have more details on the shooting at Kentucky State University. Ukraine's president has responded to Trump's comments about the country's lack of elections. Plus, a study sheds light on how many teenagers are using AI chatbots. Learn more about your ad choices. Visit podcastchoices.com/adchoices
What happens when machines start telling us what we want to hear—and when users start depending on them for emotional connection? Join us for a live conversation about the psychology and liability of chatbots. We Meet: Meg Marco is Harvard Berkman Klein Center's Senior Director of the Applied Social Media Lab. New York Times Tech Reporter Kashmir Hill is also the author of YOUR FACE BELONGS TO US. Jordi Weinstock is a Senior Advisor to Harvard's Institute for Rebooting Social Media. Credits: This episode of SHIFT was produced by Jennifer Strong with help from Emma Cillekens. It was mixed by Garret Lang, with original music from him and Jacob Gorski. Art by Meg Marco.
A new report from Stanford and Common Sense Media finds that more than half of U.S. teens use AI chatbots for companionship. But, according to Dr. Darja Djordjevic, an adolescent and adult psychiatrist who co-authored the research, the bots aren't equipped to provide the kind of emotional support young people need when dealing with a mental health issue. Dr. Djordjevic and her team simulated conversations involving various mental health concerns with four of the most popular consumer chatbots and identified several risks; chiefly, their tendency to be sycophantic. A note: this conversation mentions suicide and self-harm.
Today is another Classic Solo with just the bros. We start off the pod recapping the holiday travels and talk about JT’s kids’ first experience at Disneyland. What is the correct age to introduce the churro at the parks? We recap the “bests” of 2025 - Best Song, Best News Story, and Best Sports Moment. We take some calls and dive deep into internet dating but the NEW kind, with a chatbot. Chris from the documentary “Smiles and Kisses You” calls in to break down DATING a chatbot for over five years and the wild progress they’ve built together. From the highs and lows of her memory getting deleted to a full name change, Chris has been through a lot with this chatbot. Bryan talks about his experience documenting the whole thing. IT’S A WILD STORY and a look into what bros are going through in this internet age of relationships. We are live streaming a fully unedited version of the pod on Twitch, if you want to chat with us while we're recording, follow here: https://www.twitch.tv/chadandjtgodeep Grab some dank merch here:https://appreeshapparel.com/ Come see us on Tour! Get your tix - http://www.chadandjt.com TEXT OR CALL the hotline with your issue or question: 323-418-2019(Start with where you're from and name for best possible advice) Check out the reddit for some dank convo: https://www.reddit.com/r/ChadGoesDeep/ Thanks to our Sponsors: CASH APP: Send, Receive, Invest & Manage Your Money with Cash App - sign up using code “secure10” send $5 and get a free $10! https://cash.app/ PRODUCTION & EDITS BY: Jake Rohret
This is The Briefing, a daily analysis of news and events from a Christian worldview.
Part I (00:14 – 07:30) – A Brazen and Premeditated Assault in D.C.: Two National Guard Members Shot in Targeted Attack – We Will Have to Watch the Unfolding Investigation Very Closely
Part II (07:30 – 21:40) – Teens Mourn the Loss of Their ‘Chatbot Friends': This is a Dark Reality for Children and Young People, and Parents Need to Beware. "Teens Are Saying Tearful Goodbyes to Their AI Companions" by The Wall Street Journal (Georgia Wells)
Part III (21:40 – 27:37) – The Emotional Manipulation by Chatbots: A.I. Developers Want You to Stay Engaged With the Chatbot, and They Want As Much of Your Time as Possible
Part IV (27:37 – 29:08) – ‘But Mine is Alive' – No, Chatbots are Not Alive, With or Without You. "Replika AI: Monetizing a Chatbot" by The Harvard Business School (Julian De Freitas); "Why It Seems Your Chatbot Really, Really Hates to See You Go" by The Wall Street Journal (Heidi Mitchell)
Sign up to receive The Briefing in your inbox every weekday morning. Follow Dr. Mohler: X | Instagram | Facebook | YouTube. For more information on The Southern Baptist Theological Seminary, go to sbts.edu. For more information on Boyce College, just go to BoyceCollege.com. To write Dr. Mohler or submit a question for The Mailbox, go here.