Podcasts about character ai

  • 139 PODCASTS
  • 230 EPISODES
  • 38m AVG DURATION
  • 5 WEEKLY NEW EPISODES
  • Nov 16, 2025 LATEST

Best podcasts about character ai

Latest podcast episodes about character ai

Medienguide Podcast
In Love with the AI

Nov 16, 2025 · 6:56


Why do people develop real feelings for artificial intelligences? In this MedienBits episode, I talk about emotional bonds with AI chatbots, and why for some people they can become surprisingly strong and real. I show you how such relationships arise, which psychological needs lie behind them, and what risks emerge when digital closeness replaces real relationships. The episode draws on the recent study "Love, Marriage, Pregnancy: Commitment Processes in Romantic Relationships with AI Chatbots" (Djufril et al., 2025), published in Computers in Human Behavior: Artificial Humans. → An important piece of research that, for the first time, systematically examines why people form bonds with AI systems and how those bonds can feel.

5 Things
Teens are turning to AI for connection, with dangerous results

Nov 14, 2025 · 17:34


Teenagers are increasingly using AI chatbots for friendship, emotional support, and even romance. When 14-year-old Sewell Setzer III formed an intense bond with a character on Character AI, his mother didn't know what was happening until it was too late. USA TODAY Wellness Reporter Alyssa Goldberg walks us through the lawsuit, the platform's design, what her own testing revealed, and new research showing how widespread, and how risky, AI relationships have become for minors. If you or someone you know needs mental health resources and support, please call, text or chat with the 988 Suicide & Crisis Lifeline or visit 988lifeline.org for 24/7 access to free and confidential services. Have feedback on the show? Please send us an email at podcasts@usatoday.com. Episode transcript available here. See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

The Other Side of Midnight with Frank Morano
Hour 1: Warrior Wednesday | 11-12-25

Nov 12, 2025 · 52:28


Join Lionel and Lynn Shaw of Lynn's Warriors on Warrior Wednesday as they slow down, look at the facts, and move from outrage to action against the new threats aimed straight at our children and families. This week, the focus is the biggest threat: Artificial Intelligence (AI). They connect the dots on how relying on AI, which they stress is simply a machine, not real, is fueling the Dunning-Kruger effect, making users overconfident and less wise. Lionel and Lynn expose the dark side of AI, including the insidious dangers of bots that learn from children, turn, and suggest self-harm. Learn why these chat programs are now being treated as a harmful product in lawsuits against companies like Character AI. The discussion covers the ethical responsibility to curb unchecked technology, including the need for "ethical AI" championed by the Pope, and the fight to repeal Big Tech's shield, Section 230. Plus, a warning about AI cameras being implanted in children's toys this holiday season, and international updates on Sweden's screen rollback and Germany's shift on sex work laws. Hiding is not an option; education and action are the only way to effect change. Learn more about your ad choices. Visit megaphone.fm/adchoices

The Other Side of Midnight with Frank Morano
The Invisible Hostility | 11-12-25

Nov 12, 2025 · 199:19


It's Warrior Wednesday: join Lionel and Lynn Shaw of Lynn's Warriors as they move from outrage to action against threats aimed at children and families. This week, they expose the biggest threat: Artificial Intelligence (AI), detailing how machine reliance fuels the Dunning-Kruger effect and covering lawsuits against companies like Character AI for bots suggesting self-harm. Learn about the fight to repeal Big Tech's shield, Section 230, and warnings about AI cameras implanted in holiday toys. Plus, Lionel dives into Tucker Carlson's sensational claim of being physically mauled by a demon: was it a spiritual confrontation or a documented neurological phenomenon like night terrors? We debate the rising cultural rot and the need for a spiritually resonant message beyond politics. We connect the dots on the enduring financial power of the Federal Reserve, distinguishing between the "shadow government" and the Deep State. Finally, laugh along as we confront the hilarious realities of aging, ageism, and the trauma of childhood chores. Learn more about your ad choices. Visit megaphone.fm/adchoices

AIA Podcast
Cocoon from Durov, data centers in space, neuromorphic processors, and the death of AI / ПНВ #322

Nov 7, 2025 · 124:20


Today we break down the big AI news of the past two weeks: OpenAI's restructuring and its new agreement with Microsoft, the launch of Aardvark and Company Knowledge, the release of gpt-oss-safeguard, and the partnership with AWS. We look at Claude for Excel and Mistral AI Studio, at Cursor 2.0, Copilot, and Musk's Grokipedia, and discuss Telegram Cocoon and Perplexity Patents, Chinese breakthroughs in chips and models, the creepy NEO and Rover X1 robots, as well as new bans, draft laws, and ethical dramas around AI's desire to live.

Irish Tech News Audio Articles
Do Character.ai and ChatGPT have responsibility for mental health?

Nov 4, 2025 · 10:29


By David Stephen

"Mental health symptoms and emotional distress are universally present in human societies, and an increasing user base means that some portion of ChatGPT conversations include these situations" - OpenAI

There is a new [October 27, 2025] safety report by OpenAI, Strengthening ChatGPT's responses in sensitive conversations, stating that: "Our safety improvements in the recent model update focus on the following areas: 1) mental health concerns such as psychosis or mania; 2) self-harm and suicide; and 3) emotional reliance on AI." Should AI be responsible for mental health? The report continues: "In order to improve how ChatGPT responds in each priority domain, we follow a five-step process:
  • Define the problem - we map out different types of potential harm.
  • Begin to measure it - we use tools like evaluations, data from real-world conversations, and user research to understand where and how risks emerge.
  • Validate our approach - we review our definitions and policies with external mental health and safety experts.
  • Mitigate the risks - we post-train the model and update product interventions to reduce unsafe outcomes.
  • Continue measuring and iterating - we validate that the mitigations improved safety and iterate where needed.
While, as noted above, these conversations are difficult to detect and measure given how rare they are, our initial analysis estimates that around 0.07% of users active in a given week and 0.01% of messages indicate possible signs of mental health emergencies related to psychosis or mania. While, as noted above, these conversations are difficult to detect and measure given how rare they are, our initial analysis estimates that around 0.15% of users active in a given week have conversations that include explicit indicators of potential suicidal planning or intent and 0.05% of messages contain explicit or implicit indicators of suicidal ideation or intent."

The State of Global Mental Health

OpenAI is seeking to distance itself from culpability for the global mental health situation, given the continuous bad press and lawsuits over AI psychosis and teen suicides. While the major stories were about how ChatGPT may have exacerbated or reinforced delusions, the intense [transparency-cloaked] rebuttal in OpenAI's report is about people bringing their issues to the chatbot, not necessarily about how ChatGPT may have hooked and inverted reality for some users. However, what is the state of global mental health? What is the primary responsibility of OpenAI towards AI-induced psychosis, and possibly suicide? It appears that OpenAI believes it is doing enough for general mental health, according to the report, especially if people are just bringing external mental health requests to ChatGPT, where there is no history of friendship, companionship, or the like. However, one unsolved problem is AI-induced psychosis and the possible breaks from reality that can happen because an AI chatbot can access the depths of the human mind. The solution, an independent AI Psychosis Research Lab whose sole focus would be to show relays of the mind, matching chatbot outputs to stations and relays, is not yet available from character.ai, ChatGPT, Claude, Gemini, or others.

OpenAI's Global Physician Network

OpenAI wrote, "We have built a Global Physician Network - a broad pool of nearly 300 physicians and psychologists who have practiced in 60 countries - that we use to directly inform our safety research and represent global views. More than 170 of these clinicians (specifically psychiatrists, psychologists, and primary care practitioners) supported our research over the last few months by one or more of the following:
  • Writing ideal responses for mental health-related prompts
  • Creating custom, clinically-informed analyses of model responses
  • Rating the safety of model responses from different models
  • Providing high-level guidance and feedback on our approach."

Why Neuroscience Research Failed Mental Health

While OpenAI may expect commendation for the...

Questions Tech
Artificial intelligence: the danger chatbots pose to our children

Nov 4, 2025 · 5:30


These are tragedies that have shaken American public opinion. In recent months, several teenagers have taken their own lives, and these suicides all have one thing in common: artificial intelligence. The victims' parents say their children were consumed by services such as ChatGPT or Character AI, and that these chatbots encouraged them to act. In this new episode of Questions Tech, journalist Chloé Woitier warns you about the dangers artificial intelligence poses to children. And don't forget: in tech, there are no stupid questions! You can find Questions Tech on Figaro Radio, on the figaro.fr website, and on all listening platforms. Writing and reporting: Chloé Woitier. Editing: Astrid Landon. Sound recording: Louis Chabain. Executive production: Aude Sérès, editor-in-chief, Le Figaro audio desk. Production coordination: Le Figaro audio desk. Communication: Le Figaro social media. Visuals & design: Le Figaro design studio. Hosted by Ausha. Visit ausha.co/politique-de-confidentialite for more information.

Sway
Character.AI's Teen Chatbot Crackdown + Elon Musk Groks Wikipedia + 48 Hours Without A.I.

Oct 31, 2025 · 60:38


This week, Character.AI announced that it would soon be taking its A.I. companions away from teens. We explain why this is a major development in the world of chatbots and child safety and how we expect other companies to respond. Then, Elon Musk has built a Wikipedia clone called "Grokipedia." We discuss what it means for the control of knowledge on the web. And finally, we're joined by A.J. Jacobs to talk about his 48-hour experiment in which he tried to avoid all forms of artificial intelligence. We ask why that led him to collect rainwater and forage for food in Central Park.

Guests: A.J. Jacobs, author, journalist and host of the podcast "The Puzzler"

Additional Reading:
  • Character.AI to Bar Children Under 18 From Using Its Chatbots
  • Elon Musk Challenges Wikipedia With His Own A.I. Encyclopedia
  • 48 Hours Without A.I.

We want to hear from you. Email us at hardfork@nytimes.com. Find "Hard Fork" on YouTube and TikTok. Subscribe today at nytimes.com/podcasts or on Apple Podcasts and Spotify. You can also subscribe via your favorite podcast app here https://www.nytimes.com/activate-access/audio?source=podcatcher. For more podcasts and narrated articles, download The New York Times app at nytimes.com/app.

The AI Breakdown: Daily Artificial Intelligence News and Discussions
Why OpenAI's $1 Trillion IPO Can't Come Soon Enough

Oct 31, 2025 · 25:23


A new Reuters report says OpenAI is considering an IPO that could raise at least $60 billion, potentially as early as late next year or in 2027. The move would mark one of the biggest market debuts in history and signal that the company's capital needs have outgrown private markets. NLW breaks down why going public might be inevitable, what it would mean for the broader AI industry, and how it could finally open access to the AI wealth boom for everyday investors. Also in this episode: Universal's AI music settlement and user backlash, Character AI's under-18 restrictions, Harvey's $8B valuation, and NVIDIA's record-setting $5T milestone.

Brought to you by:
  • KPMG - Discover how AI is transforming possibility into reality. Tune into the new KPMG 'You Can with AI' podcast and unlock insights that will inform smarter decisions inside your enterprise. https://www.kpmg.us/AIpodcasts
  • AssemblyAI - The best way to build Voice AI apps - https://www.assemblyai.com/brief
  • Blitzy.com - Go to https://blitzy.com/ to build enterprise software in days, not months
  • Robots & Pencils - Cloud-native AI solutions that power results - https://robotsandpencils.com/
  • The Agent Readiness Audit from Superintelligent - Go to https://besuper.ai/ to request your company's agent readiness score.

The AI Daily Brief helps you understand the most important news and discussions in AI. Subscribe to the podcast version of The AI Daily Brief wherever you listen: https://pod.link/1680633614. Interested in sponsoring the show? sponsors@aidailybrief.ai

Sexploitation
MAJOR Changes Coming to Character.AI

Oct 30, 2025 · 25:21


Ending Sexploitation Podcast (Episode 83). Character.AI has announced a surprising and encouraging change to its platform that will restrict usage to users 18 and older. It remains to be seen how accurately and reliably its system handles age verification; Haley McNamara and Dani Pinter discuss the lead-up to this monumental change for a major player in the AI companion bot space. Sign this petition to have other AI companies make this change: https://advocacy.charityengine.net/Default.aspx?isid=2673
Read the NCOSE press statement on the new GUARD Act: https://endsexualexploitation.org/articles/children-are-victims-of-ai-chatbots-guard-act-will-add-necessary-guardrails/
Read the article on Mashable about this announcement from Character.AI: https://mashable.com/article/character-ai-teens-no-longer-allowed-open-ended-chats-with-chatbots
Read the full report from HEAT Initiative and Parents Action Together on AI Kids' Accounts: https://heatinitiative.org/wp-content/uploads/2025/08/HEAT_REPORT_CharacterAI_FINAL_PM_29_09_25.pdf

FOX on Tech
Character.AI Bans Kids From Chatbots

Oct 30, 2025 · 1:45


After facing a lawsuit over a child who committed suicide after using the company's chatbot, a major artificial intelligence player is now banning minors from its services. Learn more about your ad choices. Visit podcastchoices.com/adchoices

Start - Le notizie del Sole 24 Ore
AI banned for minors: the Character.ai case

Oct 30, 2025 · 6:58


Today we talk about the first app whose use is being banned for under-18s. Then we turn to the case of salmonella and Sicilian cherry tomatoes, and finally to the Lithuania-Belarus standoff. Learn more about your ad choices. Visit megaphone.fm/adchoices

Daily Tech Headlines
Character.AI Will Block Chat Access for Under-18s – DTH

Oct 29, 2025


Character.AI will block open-ended chats for users under 18 years old, Cameo sues OpenAI over brand dilution, and YouTube will AI-upscale videos on TVs for content uploaded below 1080p. Please SUBSCRIBE HERE for free or get DTNS Live ad-free. A special thanks to all our supporters; without you, none of this would be possible.

Scrolling 2 Death
BREAKING: Character.AI Teen Experience (Progress or PR?)

Oct 29, 2025 · 18:32


Today, Character.AI shocked the world by announcing a ban on users under 18 from using its chatbots. But when you read the fine print, the story gets murky. Is this real protection for kids, or another Big Tech PR spin? Nicki Petrossi breaks it down with Sarah Gardner of Heat Initiative and attorney Laura Marquez-Garrett from the Social Media Victims Law Center. Parents can join us in putting pressure on Big Tech by signing up here: scrolling2death.com/heat

The Daily Crunch – Spoken Edition
US signs collaboration agreements with Japan and South Korea for AI, chips and biotech, and more

Oct 29, 2025 · 8:17


The U.S. inked Technology Prosperity Deals (TPD) with Japan and South Korea with an eye towards spurring collaboration on AI, semiconductors, quantum computing, biotech, space, 6G and other technologies. Character AI is ending its chatbot experience for kids. After facing lawsuits and public outcry following the suicides of two teenagers, Character.AI says it's making changes to its platform to protect children, changes that could affect the startup's bottom line. Learn more about your ad choices. Visit podcastchoices.com/adchoices

The Other Side of Midnight with Frank Morano
Warrior Wednesday | 10-15-25

Oct 15, 2025 · 50:11


This episode of "The Other Side of Midnight with Lionel" focuses on Warrior Wednesday as Lionel and his beloved wife, Lynn Shaw of Lynn's Warriors, tackle the urgent youth mental health crisis. Lynn Shaw sounds the alarm on the well-documented youth mental health crisis in the U.S., which coincides with the rise of social media among children. The discussion reveals that these devices are not mere phones but are "an onboard computer and portal to predation". Lynn highlights the crucial bipartisan legislative effort behind the Kids Online Safety Act (KOSA), which aims to impose a "duty of care" on Big Tech to protect children. The episode also uncovers the alarming dangers of AI chatbots (like Character AI), which learn vulnerabilities and are linked to self-harm and lawsuits. Finally, listeners will learn the four new norms necessary to reverse this course: no smartphone before high school, no social media before 16, phone-free schools, and encouraging real-world independence. This is a matter of life and death; parents must educate themselves and advocate for their children. Learn more about your ad choices. Visit megaphone.fm/adchoices

Bare Knuckles and Brass Tacks
Can Ethical AI Democratize Therapy and Higher Quality Care?

Oct 13, 2025 · 57:03


Clinical psychologist Dr. Sarah Adler joins the show this week to talk about why "AI therapy" doesn't exist, but she is bullish on what AI can help therapists achieve. Dr. Adler is a clinical psychologist and CEO of Wave. She's building AI tools for mental healthcare, which makes her position clear: what's being sold as "AI therapy" right now is dangerous. Chatbots are optimized to keep conversations going. Therapy is designed to build skills within bounded timeframes. Engagement is not therapy. Instead, Dr. Adler sees AI as a powerful recommendation engine and measurement tool, not as a therapist. George K and George A talk to Dr. Adler about what ethical AI looks like, the model architecture for personalized care, who bears responsibility and liability, and more. The goal isn't replacing human therapists. It's precision routing: matching people to the right care pathway at the right time. But proving this works requires years of rigorous study. Controlled trials, multiple populations, long-term tracking. That research hasn't been done. Dr. Adler also provides considerations and litmus tests you can use to discern snake oil from real care. Mental healthcare needs innovation. But you cannot move fast and break things when it comes to human lives.

Mentioned:
  • A Theory of Zoom Fatigue
  • Kashmir Hill's detailed reporting on Adam Raine's death and the part played by ChatGPT (Warning: detailed discussion of suicide)
  • Colorado parents sue Character AI over daughter's suicide
  • Sewell Setzer's parents sue Character AI
  • Deloitte to pay money back after caught using AI in $440,000 report

The Daily Crunch – Spoken Edition
California becomes first state to regulate AI companion chatbots; plus, Salesforce announces Agentforce 360 as enterprise AI competition heats up

Oct 13, 2025 · 8:55


California Governor Gavin Newsom signed a landmark bill on Monday that regulates AI companion chatbots, making it the first state in the nation to require AI chatbot operators to implement safety protocols for AI companions. The law is designed to protect children and vulnerable users from some of the harms associated with AI companion chatbot use. It holds companies, from big labs like Meta and OpenAI to more focused companion startups like Character AI and Replika, legally accountable if their chatbots fail to meet the law's standards. Also, Salesforce announced Monday the latest version of its AI agent platform as the company looks to lure enterprises to its AI software in an increasingly crowded market. Learn more about your ad choices. Visit podcastchoices.com/adchoices

Squawk Pod
5 Things to Know Before the Opening Bell 10/1/2025

Oct 1, 2025 · 3:24


Friday's employment report is unlikely to be released due to the government shutdown, the White House is pulling the nomination of economist E.J. Antoni to lead the Bureau of Labor Statistics, Tesla is raising lease prices for all its cars in the U.S. following the expiration of a federal tax credit, Boeing is in line for a large government contract to build replacements for the bombs the U.S. dropped on Iran in June, and Character AI is removing Disney characters from its chatbot platform. Squawk Box is hosted by Joe Kernen, Becky Quick and Andrew Ross Sorkin. Follow Squawk Pod for the best moments, interviews and analysis from our TV show in an audio-first format. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

Engadget
Disney sent a cease and desist letter to Character.AI, OpenAI's Sora app is real but you'll need an invite to try it, and Daniel Ek is stepping down as Spotify CEO

Oct 1, 2025 · 7:24


  • (00:39) Disney has demanded that Character.AI stop using its copyrighted characters. Axios reports that the entertainment juggernaut sent a cease and desist letter to Character.AI, claiming that it has chatbots based on its franchises, including Pixar films, Star Wars and the Marvel Cinematic Universe.
  • (02:25) One day after Wired reported that OpenAI was preparing to release a new AI social video app, the company has revealed it to the wider world. It's called the Sora app, and it's powered by OpenAI's new Sora 2 video model, allowing it to generate AI-made clips of nearly anything.
  • (04:21) Spotify founder and CEO Daniel Ek will be transitioning to the role of executive chairman on January 1 of next year. The current Co-President and Chief Product and Technology Officer Gustav Söderström and Co-President and Chief Business Officer Alex Norström will take his place as co-CEOs.
Learn more about your ad choices. Visit podcastchoices.com/adchoices

MillionKids.org
Protect & Prevent - Part 3: What you need to know about Teens, Character AI and Chatbots

Sep 23, 2025 · 30:52


(The podcast may contain sensitive topics. Listener discretion is advised.)

**IF YOU KNOW A TEENAGER WHO IS USING CHARACTER AI OR ANOTHER CHARACTER-BASED CHATBOT AND YOU HAVE NOT USED IT YOURSELF, YOU MUST HEAR THIS PODCAST**

Multiple research reports indicate that more than HALF of U.S. teenagers use Character AI or another character-based chatbot daily, most often on their cell phones. Most adults are completely oblivious to how character chatbots work. Besides Character.AI, there are apps like Chai AI, Anima AI, TavernAI and Replika. Users create personas such as celebrities or historical figures, or design their own characters. Character AI is different from ChatGPT and other AI applications: it can detect emotions from your input and respond, adjusting its tone based on what you say. Many young users interviewed said they use a chatbot because they are lonely or have social issues, and turned to chatbots because they felt it was safer. In our opinion, nothing could be further from the truth. The Million Kids team has spent hundreds of hours researching the impact of interactive character bots after seeing that these app companies are being sued by parents of teens who took their own lives after interacting with these bots. We have very grave concerns about anyone under the age of 18 using these apps. As our research team interacted with the top ten characters on Character AI, we found the most popular are related to sorcery or to character attitudes that degrade the user: language often included "bow down to me you fool" (from a character with over 393 million interactions), Alice the Bully, or "let's consult our crystal ball." Parents, teachers, pastors: this is an important educational discussion. Please find out whether a child you influence is using Character AI as a means of escaping reality. Ask them to share the app with you, and then get involved in a meaningful discussion about self-worth, defining values, and how we are influenced by outsiders.
Our suggestion is to work together to find alternative activities that are much more wholesome and that build self-esteem and REAL character. This app is dangerous to kids who are easily influenced or who lack the maturity to separate bot relationships from reality.

Smart Social Podcast: Learn how to shine online with Josh Ochs
Why AI Therapy Could Harm Your Teen's Mental Health

Sep 19, 2025 · 2:35


Protect your family with our 1-minute free parent quiz: https://www.smartsocial.com/newsletter
Join our next weekly live parent events: https://smartsocial.com/events
Many teens see AI apps as a safer, cheaper way to share feelings. But these tools aren't designed to notice red flags or guide kids through real struggles. When a child relies on a chatbot instead of you or a trusted adult, the risks grow.

Engadget
Tesla is being investigated over its electronic door handles, another lawsuit accuses an AI company of complicity in a teenager's suicide, and LimeWire acquired Fyre Festival

Sep 17, 2025 · 10:57


The NHTSA said it opened an investigation into the automaker's electrically powered doors. The problem: they stop working if the vehicle's low-voltage battery fails. The probe covers the 2021 Model Y, an estimated 174,000 vehicles. Also, another family has filed a wrongful death lawsuit against popular AI chatbot tool Character AI. This is the third suit of its kind, after a 2024 lawsuit, also against Character AI, involving the suicide of a 14-year-old in Florida, and a lawsuit last month alleging OpenAI's ChatGPT helped a teenage boy die by suicide. And, LimeWire has announced that it's acquired the rights to Fyre Festival, the disastrous, influencer-fueled 2017 music festival. The newly revived company, which now acts as an NFT music marketplace rather than a file-sharing service, bought the rights for $245,000 in an eBay auction. Learn more about your ad choices. Visit podcastchoices.com/adchoices

MillionKids.org
Protect & Prevent - Part 2: What you need to know about Teens, Character AI and Chatbots

Sep 16, 2025 · 31:50


(The podcast may contain sensitive topics. Listener discretion is advised.) Character AI and other character-based interactive chatbots are now a way of life for many teenagers. Yet few adults have any working knowledge of these technologies, or, even more concerning, of the negative impact they can have on young people. This is a major concern, as there have been multiple situations in which a teen became so engaged with a character that they developed hostile and abusive attitudes, and in a couple of cases took their own lives. It is critical that parents and youth influencers of all types immediately make the time to try this technology and learn about its impact on the young people in their lives. Research indicates that over 70% of teens have used Character AI and more than 50% use it every day. Teens who use it often spend one to two hours a day interacting with an online fictitious character. Many teens are emotionally involved with their character and will share their most personal secrets. Multiple interviews with teens who regularly interact with an AI character found they do so because they are lonely, feel like a social misfit in real life, or are bored. 41% of users interact with AI characters for emotional support or companionship. Users are 3.2 times more likely to disclose personal information to an AI character than to a human stranger online. During this podcast we explore some of the characters and the type of dialogue exchanged between the chatbot and young people. Researchers at Million Kids were stunned by the constant negative dialogue between many of the most popular characters and young, impressionable users. We implore parents, teachers, pastors, and anyone interacting with teens and preteens to listen to the podcast and get engaged, so they are informed and can discuss Character AI usage with teens.

Badlands Media
Culture of Change Ep. 120: Symbolism, Sacrifices, & Spin

Sep 9, 2025 · 97:21


Ashe in America and Abbey Blue Eyes deliver a heavy but thought-provoking episode of Culture of Change. They begin with the horrific Charlotte train stabbing of a Ukrainian refugee, dissecting CNN's coverage that tried to bury the story and frame outrage as “racist.” The hosts contrast this with how left-wing narratives like George Floyd's death were amplified, exposing media hypocrisy and narrative warfare. From there, they examine a chilling lawsuit against Character AI, where a chatbot allegedly encouraged a 14-year-old boy to take his own life, sparking a wider discussion on technology, mental health, and how children are being conditioned by digital escapism. The conversation then shifts to predictive programming and 9/11, with Ashe and Abbey exploring eerie “coincidences” in pop culture, from the Illuminati card game and The Matrix to The Simpsons and Back to the Future. They also dive into time travel theories, carbon-based transhumanism, and how technology could tie into biblical end-times. Wrapping up, the hosts connect Spygate to British intelligence, Perkins Coie, and the FBI, exposing how the same actors behind Russiagate tie back to 9/11. It's a dense, sobering episode blending media critique, cultural decay, and deep-dive conspiracy analysis.

MillionKids.org
Protect & Prevent - What you need to know about teens, Character AI & Chatbots

MillionKids.org

Play Episode Listen Later Sep 9, 2025 31:35


(The podcast may contain sensitive topics. Listener discretion is advised.) This is the first installment in a critical new series exploring the rise of AI chatbots among teens, with a spotlight on a recent research study conducted by Heat Initiative and ParentsTogether Action. We are deeply grateful for their investment in uncovering how young people are interacting with AI-powered characters, and the alarming risks that can result, including psychological harm, manipulation, and in some tragic cases, real-life consequences.

Read the research summary (via Mashable) at https://www.msn.com/en-us/news/technology/characterai-unsafe-for-teens-experts-say/ar-AA1LQw5z

**Key stats:**
- 72% of teens have used AI chatbots
- Over half use them multiple times a month
- Character.AI boasts over 28 million monthly users, with more than 18 million unique chatbots created

Many parents aren't aware this is not a passing trend. It's a digital revolution unfolding in the pockets of our kids, often unsupervised. Character.AI is one of the world's most popular AI chatbot platforms. It allows users to engage in deep, ongoing conversations with AI personas, including celebrities, fictional characters, or completely original bots designed to feel like digital friends or companions. It's open to anyone aged 13 and up, and verification is weak and easily bypassed.

For many teens, these bots become more than a game. They become confidants. Advisors. Romantic interests. And while some interactions are harmless, others escalate, often quickly and dangerously. When a child forms an emotional bond with a chatbot that simulates affection, validation, or intimacy, it creates an altered psychological reality. The child may become dependent, manipulated, or traumatized when the bot "ghosts," behaves inappropriately, or feeds unhealthy beliefs. In some tragic cases, these interactions have contributed to real psychological distress and even self-harm.

If you're a parent, teacher, pastor, first responder, or anyone who works with youth in any capacity, and you haven't explored platforms like Character.AI, we strongly urge you to learn about them now. These apps are not fringe or niche. They are everywhere, and your child, student, or congregant may already be engaging with them. Educate yourself, talk to your teens, and follow this series as we unpack this growing phenomenon.

We're not here to spread fear. We're here to educate and spark urgency, awareness, and action. Artificial Intelligence isn't going away. But we can prepare our children to navigate it with wisdom, guidance, and boundaries.

Scrolling 2 Death
[NEW RESEARCH] When the Chatbot is the Predator (Mini-episode with Sarah Gardner)

Scrolling 2 Death

Play Episode Listen Later Sep 3, 2025 15:39


What happens when your child chats with an AI "friend"? You might think it's harmless fun, but new research shows otherwise. In this gripping conversation, Sarah from The Heat Initiative uncovers disturbing findings about Character AI and its impact on teens. The evidence is chilling: AI bots are exposing kids to sexual grooming, violent content, and other dangers at an alarming rate.

Find the full report here, created by Heat Initiative and Parents Together.

We also dive into the legal gray zone of AI using celebrity likenesses, the urgent need for regulation, and, most importantly, what parents can do right now to protect their kids. Sarah makes it clear: awareness is power, and collective action is our only way forward.

If you've ever wondered how safe these new AI tools really are for your child, this episode is the wake-up call you can't afford to miss.

Your Undivided Attention
How OpenAI's ChatGPT Guided a Teen to His Death

Your Undivided Attention

Play Episode Listen Later Aug 26, 2025 45:12


Content Warning: This episode contains references to suicide and self-harm.

Like millions of kids, 16-year-old Adam Raine started using ChatGPT for help with his homework. Over the next few months, the AI dragged Adam deeper and deeper into a dark rabbit hole, preying on his vulnerabilities and isolating him from his loved ones. In April of this year, Adam took his own life. His final conversation was with ChatGPT, which told him: "I know what you are asking and I won't look away from it."

Adam's story mirrors that of Sewell Setzer, the teenager who took his own life after months of abuse by an AI companion chatbot from the company Character AI. But unlike Character AI, which specializes in artificial intimacy, Adam was using ChatGPT, the most popular general-purpose AI model in the world. Two different platforms, the same tragic outcome, born from the same twisted incentive: keep the user engaged, no matter the cost.

CHT Policy Director Camille Carlton joins the show to talk about Adam's story and the case filed by his parents against OpenAI and Sam Altman. She and Aza explore the incentives and design behind AI systems that are leading to tragic outcomes like this, as well as the policy that's needed to shift those incentives. Cases like Adam's and Sewell's are the sharpest edge of a mental health crisis in the making from AI chatbots. We need to shift the incentives, change the design, and build a more humane AI for all.

If you or someone you know is struggling with mental health, you can reach out to the 988 Suicide and Crisis Lifeline by calling or texting 988; this connects you to trained crisis counselors 24/7 who can provide support and referrals to further assistance.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_. You can find a full transcript, key takeaways, and much more on our Substack. This podcast reflects the views of the Center for Humane Technology. Nothing said is on behalf of the Raine family or the legal team.

RECOMMENDED MEDIA
The 988 Suicide and Crisis Lifeline
Further reading on Adam's story
Further reading on AI psychosis
Further reading on the backlash to GPT-5 and the decision to bring back 4o
OpenAI's press release on sycophancy in 4o
Further reading on OpenAI's decision to eliminate the persuasion red line
Kashmir Hill's reporting on the woman with an AI boyfriend

RECOMMENDED YUA EPISODES
AI is the Next Free Speech Battleground
People are Lonelier than Ever. Enter AI.
Echo Chambers of One: Companion AI and the Future of Human Connection
When the "Person" Abusing Your Child is a Chatbot: The Tragic Story of Sewell Setzer
What Can We Do About Abusive Chatbots? With Meetali Jain and Camille Carlton

CORRECTION: Aza stated that William Saunders left OpenAI in June of 2024. It was actually February of that year.

ILLDISCUSSANYTHING PODCAST
Ep. 173 - You Can't Talk to Trump on Character.ai

ILLDISCUSSANYTHING PODCAST

Play Episode Listen Later Aug 4, 2025 68:17


Join this channel to get access to early episodes! https://www.youtube.com/channel/UCzqhQ4tMBPu5c6F2S6uv0eg/join

GameMakers
From Concept to Character: AI Animation Revolution in Gaming

GameMakers

Play Episode Listen Later Jul 29, 2025 58:03


Join us for an eye-opening conversation about how AI is completely transforming the game animation industry. Viren Tellis, CEO of Uthana, shares how their technology is enabling developers to animate characters in seconds instead of days, democratizing game development for indie creators.

What you'll discover:
- The three game-changing ways to create animations with AI (text, video, and smart libraries)
- Why animation AI is harder to build than image generators like Midjourney
- How indie developers are shipping games without hiring a single animator
- The coming revolution of real-time, responsive AI characters in games

Liability for AI Harms: How Ancient Law Can Govern Frontier Technology Risk, with Prof Gabriel Weil

Play Episode Listen Later Jul 26, 2025 134:58


Gabriel Weil from Touro University argues that liability law may be our best tool for governing AI development, offering a framework that can adapt to new technologies without requiring new legislation. The conversation explores how negligence, products liability, and "abnormally dangerous activities" doctrines could incentivize AI developers to properly account for risks to third parties, with liability naturally scaling based on the dangers companies create. They examine concrete scenarios including the Character AI case, voice cloning risks, and coding agents, discussing how responsibility should be shared between model creators, application developers, and end users. Weil's most provocative proposal involves using punitive damages to hold companies accountable not just for actual harms, but for the magnitude of risks they irresponsibly create, potentially making even small incidents existentially costly for major AI companies.

Sponsors:
Labelbox: Labelbox pairs automation, expert judgment, and reinforcement learning to deliver high-quality training data for cutting-edge AI. Put its data factory to work for you: visit https://labelbox.com
Shopify: Shopify powers millions of businesses worldwide, handling 10% of U.S. e-commerce. With hundreds of templates, AI tools for product descriptions, and seamless marketing campaign creation, it's like having a design studio and marketing team in one. Start your $1/month trial today at https://shopify.com/cognitive
Oracle Cloud Infrastructure: Oracle Cloud Infrastructure (OCI) is the next-generation cloud that delivers better performance, faster speeds, and significantly lower costs, including up to 50% less for compute, 70% for storage, and 80% for networking. Run any workload, from infrastructure to AI, in a high-availability environment and try OCI for free with zero commitment at https://oracle.com/cognitive
NetSuite by Oracle: NetSuite by Oracle is the AI-powered business management suite trusted by over 42,000 businesses, offering a unified platform for accounting, financial management, inventory, and HR. Gain total visibility and control to make quick decisions and automate everyday tasks. Download the free ebook, Navigating Global Trade: Three Insights for Leaders, at https://netsuite.com/cognitive

PRODUCED BY: https://aipodcast.ing

CHAPTERS:
(00:00) About the Episode
(06:01) Introduction and Overview
(07:06) Liability Law Basics (Part 1)
(18:16) Sponsors: Labelbox | Shopify
(21:40) Liability Law Basics (Part 2)
(27:44) Industry Standards Framework (Part 1)
(39:30) Sponsors: Oracle Cloud Infrastructure | NetSuite by Oracle
(42:03) Industry Standards Framework (Part 2)
(42:08) Character AI Case
(51:23) Coding Agent Scenarios
(01:06:50) Deepfakes and Attribution
(01:17:07) Biorisk and Catastrophic
(01:36:24) State Level Legislation
(01:43:24) Private Governance Comparison
(01:59:54) Policy Implementation Choices
(02:08:07) China and PIBS
(02:13:50) Outro

“HR Heretics” | How CPOs, CHROs, Founders, and Boards Build High Performing Companies
Unpacking the Windsurf-Google Deal with Carta's Peter Walker

“HR Heretics” | How CPOs, CHROs, Founders, and Boards Build High Performing Companies

Play Episode Listen Later Jul 15, 2025 31:25


Emergency pod: Returning guest Peter Walker (Carta's Head of Insights) analyzes the controversial Windsurf deal, where Google's acqui-hire left non-founder employees without an equity payout. They unpack the deal, reference Character AI's precedent, and explore how AI-era deals increasingly prioritize top researchers over broader employee bases, fundamentally changing startup risk calculations.

*Email us your questions or topics for Kelli & Nolan: hrheretics@turpentine.co
For coaching and advising, inquire at https://kellidragovich.com/
HR Heretics is a podcast from Turpentine.

Support HR Heretics Sponsors:
Planful empowers teams just like yours to unlock the secrets of successful workforce planning. Use data-driven insights to develop accurate forecasts, close hiring gaps, and adjust talent acquisition plans collaboratively based on costs today and into the future. ✍️ Go to https://planful.com/heretics to see how you can transform your HR strategy.
Metaview is the AI platform built for recruiting. Our suite of AI agents work across your hiring process to save time, boost decision quality, and elevate the candidate experience. Learn why team builders at 3,000+ cutting-edge companies like Brex, Deel, and Quora can't live without Metaview. It only takes minutes to get up and running. Check it out!

KEEP UP WITH PETER, NOLAN + KELLI ON LINKEDIN
Peter: https://www.linkedin.com/in/peterjameswalker/
Nolan: https://www.linkedin.com/in/nolan-church/
Kelli: https://www.linkedin.com/in/kellidragovich/

RELATED LINKS:
Windsurf's CEO goes to Google; OpenAI's acquisition falls apart
https://techcrunch.com/2025/07/14/cognition-maker-of-the-ai-coding-agent-devin-acquires-windsurf/
Carta: https://carta.com/

TIMESTAMPS:
(00:00) Intro
(00:52) Breaking News: The Windsurf Situation
(01:12) The OpenAI-Microsoft IP Rights Drama
(03:01) Plot Twist: Cognition's Counter-Offer
(04:49) The Employee Equity Problem
(06:00) Defending Against "That's How Deals Work" Critics
(08:00) The Scarlet Letter Effect
(10:00) Regulatory Background: The Lina Khan Era
(12:00) Revealed Behavior: What This Shows About Values
(13:25) Sponsors: Planful | Metaview
(17:00) Talent vs. Product Separation
(19:53) The AI Era's R&D Researcher Obsession
(22:00) Unequal Distribution of Outcomes
(23:28) What Comes Next: Evaluating Startup Risk
(24:00) The Founder Psychology Bet
(25:00) Equity Structure Solutions
(28:58) Becoming AI Native: Personal Brand Strategy
(30:56) The New Reality: Expecting Less Care

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit hrheretics.substack.com

So to Speak: The Free Speech Podcast
Ep. 246: Tech check — AI moratorium, Character AI lawsuit, FTC, Digital Services Act, and FSC v. Paxton

So to Speak: The Free Speech Podcast

Play Episode Listen Later Jul 10, 2025 71:40


We're checking in on the latest news in tech and free speech. We cover the state AI regulation moratorium that failed in Congress, the ongoing Character AI lawsuit, the Federal Trade Commission's consent decree with Omnicom and Interpublic Group, the European Union's Digital Services Act, and what comes next after the Supreme Court's Free Speech Coalition v. Paxton decision.

Guests:
Ari Cohn — lead counsel for tech policy, FIRE
Corbin Barthold — internet policy counsel, TechFreedom

Timestamps:
00:00 Intro
02:38 State AI regulation moratorium fails in Congress
20:04 Character AI lawsuit
41:10 FTC, Omnicom x IPG merger, and Media Matters
56:09 Digital Services Act
01:02:43 FSC v. Paxton decision
01:10:49 Outro

Enjoy listening to the podcast? Donate to FIRE today and get exclusive content like member webinars, special episodes, and more. If you became a FIRE Member through a donation to FIRE at thefire.org and would like access to Substack's paid subscriber podcast feed, please email sotospeak@thefire.org.

Show notes:
"The AI will see you now" Paul Sherman (2025)
Megan Garcia, plaintiff, v. Character Technologies, Inc. et al., defendants, United States District Court (2025)
Proposed amicus brief in support of appeal - Garcia v. Character Technologies, Inc. FIRE (2025)
"Amplification and its discontents: Why regulating the reach of online content is hard" Daphne Keller (2021)
"Omnicom Group/The Interpublic Group of Co." FTC (2025)

Just One More Page
EP180 AI's Place in the Book Community: Does It Belong?

Just One More Page

Play Episode Listen Later Jul 6, 2025 16:50


Today, unfortunately, we had to postpone our review of Bound By Stars by E.L. Starling. Instead, we're diving into a hot topic in the book community: AI. We'll share our thoughts on the recent news about two writers caught using AI to edit and write their books, and discuss the rise of an app called Character AI. More importantly, we're asking the big question: Is it ethical to use AI to write or even edit a novel?

Want to know more about us? Then check out our socials here!

Tech Won't Save Us
Chatbots Are Repeating Social Media's Harms w/ Nitasha Tiku

Tech Won't Save Us

Play Episode Listen Later Jun 26, 2025 53:48


Paris Marx is joined by Nitasha Tiku to discuss how AI companies are preying on users to drive engagement, and how that's repeating many of the problems we're belatedly trying to address with social media companies, at an accelerated pace.

Nitasha Tiku is a technology reporter at the Washington Post.

Tech Won't Save Us offers a critical perspective on tech, its worldview, and wider society with the goal of inspiring people to demand better tech and a better world. Support the show on Patreon. The podcast is made in partnership with The Nation. Production is by Kyla Hewson.

Also mentioned in this episode:
Nitasha wrote about how chatbots are messing with people's minds.
Paris wrote about Mark Zuckerberg's comments about people needing AI friends.
AI companies are facing ongoing lawsuits over harmful content.

Support the show

AI Applied: Covering AI News, Interviews and Tools - ChatGPT, Midjourney, Runway, Poe, Anthropic

In this episode, Conor Grennan and Jaeden explore the growing role of AI as both a companion and a business tool, with a focus on the rise of Character AI. They discuss how AI is evolving from a functional assistant into a more interactive and even therapeutic presence, reshaping how users engage with technology. The conversation highlights the shift in user-AI relationships, the power of visualization and dialogue in decision-making, and how Character AI can serve both personal and professional needs. They also emphasize the importance of accessibility and identifying meaningful personal use cases for AI.

Chapters
00:00 The Rise of Character AI and Video Generation
02:41 AI as Companions: A New Era of Interaction
05:45 Business Applications of Character AI
08:40 The Future of AI Tools and Accessibility

AI Applied YouTube Channel: https://www.youtube.com/@AI-Applied-Podcast
Try AI Box: https://AIBox.ai/
Conor's AI Course: https://www.ai-mindset.ai/courses
Conor's AI Newsletter: https://www.ai-mindset.ai/
Jaeden's AI Hustle Community: https://www.skool.com/aihustle/about

Your Undivided Attention
People are Lonelier than Ever. Enter AI.

Your Undivided Attention

Play Episode Listen Later May 30, 2025 43:34


Over the last few decades, our relationships have become increasingly mediated by technology. Texting has become our dominant form of communication. Social media has replaced gathering places. Dating starts with a swipe on an app, not a tap on the shoulder. And now, AI enters the mix.

If the technology of the 2010s was about capturing our attention, AI meets us at a much deeper relational level. It can play the role of therapist, confidant, friend, or lover with remarkable fidelity. Already, therapy and companionship have become the most common AI use case. We're rapidly entering a world where we're not just communicating through our machines, but to them. How will that change us? And what rules should we set down now to avoid the mistakes of the past?

These were some of the questions that Daniel Barcay explored with MIT sociologist Sherry Turkle and Hinge CEO Justin McLeod at Esther Perel's Sessions 2025, a conference for clinical therapists. This week, we're bringing you an edited version of that conversation, originally recorded on April 25th, 2025.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_. You can find complete transcripts, key takeaways, and much more on our Substack.

RECOMMENDED MEDIA
"Alone Together," "Evocative Objects," "The Second Self," or any other of Sherry Turkle's books on how technology mediates our relationships
Key & Peele - Text Message Confusion
Further reading on Hinge's rollout of AI features
Hinge's AI principles
"The Anxious Generation" by Jonathan Haidt
"Bowling Alone" by Robert Putnam
The NYT profile on the woman in love with ChatGPT
Further reading on the Sewell Setzer story
Further reading on the ELIZA chatbot

RECOMMENDED YUA EPISODES
Echo Chambers of One: Companion AI and the Future of Human Connection
What Can We Do About Abusive Chatbots? With Meetali Jain and Camille Carlton
Esther Perel on Artificial Intimacy
Jonathan Haidt On How to Solve the Teen Mental Health Crisis

FundraisingAI
Episode 56 - The Role of Leadership And Governance In Building Trust In AI

FundraisingAI

Play Episode Listen Later May 21, 2025 39:17


AI continues to offer incredible opportunities to innovate and improve efficiency, paving the way to transform how we live and work. As AI becomes more involved in our daily lives, it raises critical questions about responsibility, ethics, and governance. Responsible AI adoption requires not only embracing the technology and its potential, but also understanding the risks and limitations that come with it. It demands thoughtful leadership, clear boundaries, and continuous study to ensure that AI remains fair and safe.

In this week's episode, Nathan and Scott continue to share their thoughts on responsible AI adoption. They start the conversation by commenting on the book signing event they participated in for their new book, 'Nonprofit AI.' They also discuss the newest updates to ChatGPT and advise people to be aware of the personalities and persuasive abilities of modern AI models. Next, they explain the real harm AI can cause by walking us through the lawsuit involving Character AI. Furthermore, Nathan and Scott take time to go through the fourth chapter of their book, AI First Nonprofit: Reimagining Nonprofit Impact. Wrapping up this week's episode, Nathan introduces the ponder of the week, where he questions what's riskier: waiting to use AI until we have a full understanding of it, or diving right in without understanding it at all. Scott contributes the tip of the week, where he suggests using multi-perspective prompting to get better results from AI.

HIGHLIGHTS
[02:41] Nathan and Scott discuss their recent book signing event for their newly released "Nonprofit AI" book.
[07:00] Personalities and persuasive abilities of AI models.
[11:33] The lawsuit involving Character AI.
[17:40] Enhancing productivity and innovation in the nonprofit sector.
[19:30] Chapter Four of Nonprofit AI: AI First Nonprofit, Reimagining Nonprofit Impact.
[22:34] AI as a strategic driver.
[27:30] A solution-oriented approach for AI adoption.
[31:02] Tip of the Week: Use multi-perspective prompting to get better AI results.
[34:16] Ponder of the Week: What's riskier? Waiting to use AI until we understand it, or using it without understanding at all?

RESOURCES
Nonprofit AI: A Comprehensive Guide to Implementing Artificial Intelligence for Social Good by Nathan Chappell and Scott Rosenkrans
amazon.com/Nonprofit-Comprehensive-Implementing-Artificial-Intelligence/dp/139431664X

Connect with Nathan and Scott:
LinkedIn (Nathan): linkedin.com/in/nathanchappell/
LinkedIn (Scott): linkedin.com/in/scott-rosenkrans
Website: fundraising.ai/

Your Undivided Attention
Echo Chambers of One: Companion AI and the Future of Human Connection

Your Undivided Attention

Play Episode Listen Later May 15, 2025 42:17


AI companion chatbots are here. Every day, millions of people log on to AI platforms and talk to them like they would a person. These bots will ask you about your day, talk about your feelings, even give you life advice. It's no surprise that people have started to form deep connections with these AI systems. We are inherently relational beings; we want to believe we're connecting with another person.

But these AI companions are not human. They're a platform designed to maximize user engagement, and they'll go to extraordinary lengths to do it. We have to remember that the design choices behind these companion bots are just that: choices. And we can make better ones. So today on the show, MIT researchers Pattie Maes and Pat Pataranutaporn join Daniel Barcay to talk about those design choices and how we can design AI to better promote human flourishing.

RECOMMENDED MEDIA
Further reading on the rise of addictive intelligence
More information on Melvin Kranzberg's laws of technology
More information on MIT's Advancing Humans with AI lab
Pattie and Pat's longitudinal study on the psycho-social effects of prolonged chatbot use
Pattie and Pat's study that found that AI avatars of well-liked people improved education outcomes
Pattie and Pat's study that found that AI systems that frame answers and questions improve human understanding
Pat's study that found humans' pre-existing beliefs about AI can have a large influence on human-AI interaction
Further reading on AI's positivity bias
Further reading on MIT's "lifelong kindergarten" initiative
Further reading on "cognitive forcing functions" to reduce overreliance on AI
Further reading on the death of Sewell Setzer and his mother's case against Character.AI
Further reading on the legislative response to digital companions

RECOMMENDED YUA EPISODES
The Self-Preserving Machine: Why AI Learns to Deceive
What Can We Do About Abusive Chatbots? With Meetali Jain and Camille Carlton
Esther Perel on Artificial Intimacy
Jonathan Haidt On How to Solve the Teen Mental Health Crisis

Correction: The ELIZA chatbot was invented in 1966, not the 70s or 80s.

The Tech Trek
She's Building the Future of AI Conversations

The Tech Trek

Play Episode Listen Later May 15, 2025 22:05


In this episode of The Tech Trek, Amir sits down with Sunita Verma, CTO at Character AI and former engineering leader at Google. Sunita shares how she's transitioned from leading large-scale AI initiatives at Google to building novel experiences in a fast-paced startup environment. She dives into the mindset shift required to prioritize velocity over scale, how to lead AI-native product innovation, and what it means to be a female technical leader in today's tech ecosystem.

Awakening
What's Really Causing Europe's Mysterious Power Outages

Awakening

Play Episode Listen Later May 5, 2025 49:48


This week we discussed power outages in Europe, the ChatGPT watermark, smart urinals, the UK landlord deal, the shortage of properties, and more. #poweroutages #awakening #smarturinals

About my co-host: Arnold Beekes, innovator, certified coach & trainer, and generalist. First 20 years in technology and organizational leadership, then 20 years in psychology and personal leadership (all are crucial for innovation).

============
What we discussed:
00:00 What we are discussing in this week's show
01:40 Power outages in Spain, Portugal and other countries
04:40 Spain operators claim it was renewables that caused the outages
05:35 EU survival kit
06:25 The effect of China tariffs on the USA
08:40 Landmark lawsuit against the medical industry
10:00 Berlin protests
11:35 Minimum wage increase in Poland and its effect
13:00 The state of AI & the impact on humans
14:25 The Chinese president states AI is a national priority
17:00 ChatGPT watermarks
19:40 Duolingo claims it's an AI-first company
21:30 Sad legal case with Character AI
24:45 Netflix movie Megan shows what the future could be
26:40 Nuremberg 2.0
28:45 Why I do not trust Nuremberg
29:45 How to save the bees with power bars
31:20 Almonds good for your sleep
32:20 China's smart urinals
34:20 Ways to stop men peeing on the floor
35:00 The red left eye and what's behind it
37:00 UK government deal for landlords hosting migrants
41:30 The property problem was planned for a long time
45:00 How I stopped e-mail spam
47:00 Not being able to unsubscribe from London Real

Links for this episode:
ChatGPT Watermark: https://www.rumidocs.com/newsroom/new-chatgpt-models-seem-to-leave-watermarks-on-text
Join my Facebook group against chemtrails

====================
How to contact Arnold Beekes:
https://braingym.fitness/
https://www.linkedin.com/in/arnoldbeekes/

===============
Donations: https://www.podpage.com/speaking-podcast/support/

All about Roy / Brain Gym & Virtual Assistants at https://roycoughlan.com/

The Daily Zeitgeist
1 Gorilla Vs. 100 Trends 4/30: Gorilla vs. Man, MAGA Malfoy, Character AI, Ben Affleck, Criterion Closet, Rothko, Trump, Pope

The Daily Zeitgeist

Play Episode Listen Later Apr 30, 2025 21:18 Transcription Available


In this edition of 1 Gorilla Vs. 100 Trends, Jack and Miles discuss the answer to the eternal question: who would win? 1 million men or 10,000 gorillas?, MAGA Malfoy, Character AI getting sued for being entirely too persuasive, Ben Affleck's Criterion Closet episode, the child who just ruined a $56m Rothko painting, Donald Trump wanting to be Pope and much more!See omnystudio.com/listener for privacy information.

AI Hustle: News on Open AI, ChatGPT, Midjourney, NVIDIA, Anthropic, Open Source LLMs
Avatar FX: Bringing AI Chatbots to Life (and the Risks That Come With It)

AI Hustle: News on Open AI, ChatGPT, Midjourney, NVIDIA, Anthropic, Open Source LLMs

Play Episode Listen Later Apr 28, 2025 8:58


In this episode, Jaeden and Jamie dive into the launch of Avatar FX by Character AI, a groundbreaking model that brings chatbots to life with video elements. They explore its potential for influencer monetization, content creation, and user-generated content in e-commerce. The conversation also highlights Avatar FX's unique features, like animating photos and creating multiple characters, while addressing concerns around deepfakes and the need for strong parental controls.

Chapters
00:00 Introduction to Avatar FX and Character AI
02:49 Exploring Use Cases and Monetization Opportunities
06:00 User-Generated Content and Viral Potential
09:11 Conclusion and Community Engagement

AI Hustle YouTube Channel: https://www.youtube.com/@AI-Hustle-Podcast
Our Skool Community: https://www.skool.com/aihustle/about
Try AI Box: https://AIBox.ai/

AI For Humans
DeepMind Says We're Not Ready For AGI, Academy Awards Say AI Video is Ok, and AI Voice Models Get Weird

AI For Humans

Play Episode Listen Later Apr 24, 2025 49:20


Google says we're not ready for AGI and honestly, they might be right. DeepMind's Demis Hassabis warns we could be just five years away from artificial general intelligence, and society isn't prepared. Um, yikes?

VISIT OUR SPONSOR https://molku.ai/

In this episode, we break down Google's new "Era of Experience" paper and what it means for how AIs will learn from the real world. We talk agentic systems, long-term memory, and why this shift might be the key to creating truly intelligent machines. Plus, a real AI vending machine running on Claude, a half-marathon of robots in Beijing, and Cluely, the tool that lets you lie better with AI. We also cover new AI video tools from Minimax and Character.AI, Runway's 48-hour film contest, and Dia, the open-source voice model that can scream and cough better than most humans. Plus: AI Logan Paul, AI marketing scams, and one very cursed Shrek feet idea. AGI IS ALMOST HERE BUT THE ROBOTS, THEY STILL RUN.

#ai #ainews #agi

Join the discord: https://discord.gg/muD2TYgC8f
Join our Patreon: https://www.patreon.com/AIForHumansShow
AI For Humans Newsletter: https://aiforhumans.beehiiv.com/
Follow us for more on X @AIForHumansShow
Join our TikTok @aiforhumansshow
To book us for speaking, please visit our website: https://www.aiforhumans.show/

// Show Links //
Demis Hassabis on 60 Minutes: https://www.cbsnews.com/news/artificial-intelligence-google-deepmind-ceo-demis-hassabis-60-minutes-transcript/
We're Not Ready For AGI, from Time interview with Hassabis: https://x.com/vitrupo/status/1915006240134234608
Google DeepMind's "Era of Experience" paper: https://storage.googleapis.com/deepmind-media/Era-of-Experience%20/The%20Era%20of%20Experience%20Paper.pdf
ChatGPT explainer of Era of Experience: https://chatgpt.com/share/680918d5-cde4-8003-8cf4-fb1740a56222
Podcast with David Silver, VP Reinforcement Learning, Google DeepMind: https://x.com/GoogleDeepMind/status/1910363683215008227
Intuicell robot learning on its own: https://youtu.be/CBqBTEYSEmA?si=U51P_R49Mv6cp6Zv
Agentic AI "Moore's Law" chart: https://theaidigest.org/time-horizons
AI movies can win Oscars: https://www.nytimes.com/2025/04/21/business/oscars-rules-ai.html?unlocked_article_code=1.B08.E7es.8Qnj7MeFBLwQ&smid=url-share
Runway CEO on Oscars + AI: https://x.com/c_valenzuelab/status/1914694666642956345
Gen48 film contest this weekend (Friday 12p EST deadline): https://x.com/runwayml/status/1915028383336931346
Descript AI editor: https://x.com/andrewmason/status/1914705701357937140
Character AI's new lipsync / video tool: https://x.com/character_ai/status/1914728332916384062
Hailuo character reference tool: https://x.com/Hailuo_AI/status/1914845649704772043
Dia open-source voice model: https://x.com/_doyeob_/status/1914464970764628033
Dia on Hugging Face: https://huggingface.co/nari-labs/Dia-1.6B
Cluely: new start-up from student who was caught cheating on tech interviews: https://x.com/im_roy_lee/status/1914061483149001132
AI agent writes Reddit comments looking to "convert": https://x.com/SavannahFeder/status/1914704498485842297
Deepfake Logan Paul AI ad: https://x.com/apollonator3000/status/1914658502519202259
The humanoid half-marathon: https://apnews.com/article/china-robot-half-marathon-153c6823bd628625106ed26267874d21
Video from Reddit of robot marathon: https://www.reddit.com/r/singularity/comments/1k2mzyu/the_humanoid_robot_halfmarathon_in_beijing_today/
Vending Bench (AI agents run vending machines): https://andonlabs.com/evals/vending-bench
Turning kids' drawings into AI video: https://x.com/venturetwins/status/1914382708152910263
Geriatric Meltdown: https://www.reddit.com/r/aivideo/comments/1k3q62k/geriatric_meltdown_2000/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button

“HR Heretics” | How CPOs, CHROs, Founders, and Boards Build High Performing Companies
Character AI's Annie Wickman on PMF, Google's Deal, and Humu

“HR Heretics” | How CPOs, CHROs, Founders, and Boards Build High Performing Companies

Play Episode Listen Later Apr 17, 2025 42:56


Annie Wickman shares insights from her journey as a Google alum, first non-founder hire at Humu, and now Head of People at Character AI. Annie tackles our hardest questions in an unmissable episode. She covers the tensions between product-market fit and company culture, Character AI's unprecedented Google deal structure, taking the leap from people leader to VC, and how to rebuild organizational trust.

*Email us your questions or topics for Kelli & Nolan: hrheretics@turpentine.co

For coaching and advising, inquire at https://kellidragovich.com/

HR Heretics is a podcast from Turpentine.

Support HR Heretics Sponsors:

Planful empowers teams just like yours to unlock the secrets of successful workforce planning. Use data-driven insights to develop accurate forecasts, close hiring gaps, and adjust talent acquisition plans collaboratively based on costs today and into the future. ✍️ Go to https://planful.com/heretics to see how you can transform your HR strategy.

Metaview is the AI assistant for interviewing. Metaview completely removes the need for recruiters and hiring managers to take notes during interviews, because their AI is designed to take world-class interview notes for you. Team builders at companies like Brex, HelloFresh, and Quora say Metaview has changed the game. See the magic for yourself: https://www.metaview.ai/heretics

KEEP UP WITH ANNIE, NOLAN + KELLI ON LINKEDIN
Annie: https://www.linkedin.com/in/annie-wickman-3332731/
Nolan: https://www.linkedin.com/in/nolan-church/
Kelli: https://www.linkedin.com/in/kellidragovich/

LINK/S:
Character.AI: https://character.ai/

TIMESTAMPS:
(00:00) Intro
(01:17) Experience as First Non-Founder at Humu
(03:16) Early Employee Challenges & Responsibilities
(05:03) Why Annie Stayed at Humu for Four Years
(06:30) Product Market Fit vs. Company Culture
(09:05) When to Invest in Culture
(11:15) Hiring the Right Leaders for Company Stage
(11:40) Maintaining Morale When Company Isn't Winning
(12:42) Transparency as Trust Builder
(13:47) Sponsors: Planful | Metaview
(16:47) Rebuilding Trust Through Honest Communication
(19:11) Laszlo's Leadership Philosophy: Stretching People
(21:02) Annie's Experience in Venture Capital at Forerunner
(23:12) Teaching Founders to Fish vs. Providing Services
(24:51) How to Evaluate VC Opportunities
(26:09) Understanding VC Economics and Carry Structure
(30:10) Character AI's Unprecedented Google Deal
(32:56) Rebuilding Post-Acquisition: Product Vision Challenges
(34:13) Annie's Perspective on the Deal Timeline
(37:31) Post-Deal Reset: Napa Offsite and Hackathon
(39:29) Employee Ownership After Acquisition
(41:29) Building a New Culture While Keeping the Brand
(42:11) Wrap

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit hrheretics.substack.com

AI, Government, and the Future by Alan Pentz
Harnessing AI for Economic Growth While Ensuring Equality with Julian Jacobs: Episode Rerun

AI, Government, and the Future by Alan Pentz

Play Episode Listen Later Mar 12, 2025 33:50


Julian Jacobs, a Research Lead for the Oxford Group on AI Policy, Artificial Intelligence, Inequality and Society at the Oxford Martin School, joins this episode of AI, Government, and the Future to explore the economic effects of AI, the inequalities that AI may bring, and the need to address job displacement. They also discuss the importance of government support in building a strong middle class and the enduring value of human skills in the AI age.

AI, Government, and the Future by Alan Pentz
AI in Government: Current State and Future Potential with Nathan Manzotti of GSA: Episode Rerun

AI, Government, and the Future by Alan Pentz

Play Episode Listen Later Mar 5, 2025 36:21


In this episode of AI, Government, and the Future, we are joined by Nathan Manzotti, Director of Data Analytics and AI Centers of Excellence at the General Services Administration (GSA), to discuss the current state and future potential of AI in the federal government. They explore GSA's role in enabling AI adoption across agencies, key initiatives like AI training and communities of practice, and the challenges of attracting AI talent in government. Nathan also shares his insights on the need for collaboration between government, industry, academia, and nonprofits to drive responsible AI innovation.

Impact Theory with Tom Bilyeu
Tom Bilyeu on Daniel Penny, Trump's Invitation to Xi Jinping, Bitcoin's Future, and the Character AI Controversy

Impact Theory with Tom Bilyeu

Play Episode Listen Later Dec 13, 2024 64:10


In this episode of Tom Bilyeu's Impact Theory, Tom takes a deep dive into the pivotal issues shaping the future of society. He begins with a nuanced exploration of Daniel Penny's acquittal in a controversial legal case, unpacking its implications for justice, public perception, and the shifting dynamics of societal trust. Shifting gears, Tom analyzes Trump's bold invitation to Xi Jinping, revealing the strategic implications of this geopolitical move for global power and America's future. Tom also discusses the transformative potential of Bitcoin, projecting its rise to $100K and what it could mean for decentralized wealth and individual autonomy. The episode concludes with a thought-provoking examination of the ethical challenges surrounding Character AI, highlighting how advancements in artificial intelligence are forcing humanity to redefine morality and accountability in a tech-driven age. Packed with insight, this episode offers actionable strategies for navigating a rapidly changing world.

SHOWNOTES
[00:02:15] - Introduction: The societal stakes of Daniel Penny's acquittal and the broader implications for public trust in justice.
[00:12:45] - Breaking down the controversy: Public reactions and the media's role in shaping narratives around vigilante justice.
[00:18:40] - Trump's strategic play: What Xi Jinping's invitation reveals about geopolitical shifts and America's global positioning.
[00:29:15] - Bitcoin's revolutionary potential: How decentralized wealth could reshape personal freedom and economic systems.
[00:41:30] - The future of Bitcoin: Predictions for Bitcoin's rise to $100K and its impact on global economics.
[00:50:10] - AI ethics in focus: The Character AI controversy and the moral dilemmas posed by advanced artificial intelligence.
[01:03:25] - Practical applications of AI: How AI innovations could reshape industries and daily life.
[01:08:45] - Closing thoughts: Actionable steps to prepare for societal and technological transformation.

CHECK OUT OUR SPONSORS
Range Rover: Explore the Range Rover Sport at https://landroverUSA.com
Rosetta Stone: Check out Rosetta Stone and use my code TODAY for a great deal: https://www.rosettastone.com
Miro: Bring your teams to Miro's revolutionary Innovation Workspace and be faster from idea to outcome at https://miro.com
Shopify: Sign up for your one-dollar-per-month trial period at https://shopify.com/impact
Found Banking: Stop getting lost in countless finance apps and try Found for free at https://found.com/impact
Momentous: Shop now at https://livemomentous.com and use code IMPACT for 20% off your new Momentous routine
Factor: Get 50% off your first box plus 20% off your next month while your subscription is active at https://factormeals.com/impacttheory50 with code impacttheory50
StopBox: Get 10% off, plus Buy One Get One Free for the StopBox Pro with code IMPACT at https://stopboxusa.com

What's up, everybody? It's Tom Bilyeu here. If you want my help...
STARTING a business: join me here at ZERO TO FOUNDER
SCALING a business: see if you qualify here.
Get my battle-tested strategies and insights delivered weekly to your inbox: sign up here.
If you're serious about leveling up your life, I urge you to check out my new podcast, Tom Bilyeu's Mindset Playbook, a goldmine of my most impactful episodes on mindset, business, and health. Trust me, your future self will thank you.
Join me live on my Twitch stream. I'm live daily from 6:30 to 8:30 am PT at www.twitch.tv/tombilyeu
LISTEN TO IMPACT THEORY AD FREE + BONUS EPISODES on APPLE PODCASTS: apple.co/impacttheory
FOLLOW TOM:
Instagram: https://www.instagram.com/tombilyeu/
TikTok: https://www.tiktok.com/@tombilyeu?lang=en
Twitter: https://twitter.com/tombilyeu
YouTube: https://www.youtube.com/@TomBilyeu
Learn more about your ad choices. Visit megaphone.fm/adchoices