This episode of "The Other Side of Midnight with Lionel" focuses on Warrior Wednesday as Lionel and his beloved wife, Lynn Shaw of Lynn's Warriors, tackle the urgent youth mental health crisis. Lynn sounds the alarm on this well-documented crisis in the U.S., which has coincided with the rise of social media among children. The discussion reveals that these devices are not mere phones but "an onboard computer and portal to predation". Lynn highlights the crucial bipartisan legislative effort behind the Kids Online Safety Act (KOSA), which aims to impose a "duty of care" on Big Tech to protect children. The episode also uncovers the alarming dangers of AI chatbots (like Character AI), which learn users' vulnerabilities and have been linked to self-harm and lawsuits. Finally, listeners will learn the four new norms necessary to reverse this course: no smartphone before high school, no social media before 16, phone-free schools, and encouraging real-world independence. This is a matter of life and death; parents must educate themselves and advocate for their children.
Clinical psychologist Dr. Sarah Adler joins the show this week to talk about why “AI Therapy” doesn't exist, but is bullish on what AI can help therapists achieve. Dr. Adler is a clinical psychologist and CEO of Wave. She's building AI tools for mental healthcare, which makes her position clear—what's being sold as "AI therapy" right now is dangerous. Chatbots are optimized to keep conversations going. Therapy is designed to build skills within bounded timeframes. Engagement is not therapy. Instead, Dr. Adler sees AI as a powerful recommendation engine and measurement tool, not as a therapist. George K and George A talk to Dr. Adler about what ethical AI looks like, the model architecture for personalized care, who bears responsibility and liability, and more. The goal isn't replacing human therapists. It's precision routing—matching people to the right care pathway at the right time. But proving this works requires years of rigorous study: controlled trials, multiple populations, long-term tracking. That research hasn't been done. Dr. Adler also provides considerations and litmus tests you can use to discern snake oil from real care. Mental healthcare needs innovation. But you cannot move fast and break things when it comes to human lives.

Mentioned:
- A Theory of Zoom Fatigue
- Kashmir Hill's detailed reporting on Adam Raine's death and the part played by ChatGPT (Warning: detailed discussion of suicide)
- Colorado parents sue Character AI over daughter's suicide
- Sewell Setzer's parents sue Character AI
- Deloitte to pay money back after being caught using AI in $440,000 report
Friday's employment report is unlikely to be released due to the government shutdown, the White House is pulling the nomination of economist E.J. Antoni to lead the Bureau of Labor Statistics, Tesla is raising lease prices for all its cars in the U.S. – following the expiration of a federal tax credit, Boeing is in line for a large government contract to build replacements for the bombs the U.S. dropped on Iran in June, and Character AI is removing Disney characters from its chatbot platform. Squawk Box is hosted by Joe Kernen, Becky Quick and Andrew Ross Sorkin. Follow Squawk Pod for the best moments, interviews and analysis from our TV show in an audio-first format.
-(00:39) Disney has demanded that Character.AI stop using its copyrighted characters. Axios reports that the entertainment juggernaut sent a cease and desist letter to Character.AI, claiming that it has chatbots based on its franchises, including Pixar films, Star Wars and the Marvel Cinematic Universe.
-(02:25) One day after Wired reported that OpenAI was preparing to release a new AI social video app, the company has revealed it to the wider world. It's called the Sora app, and it's powered by OpenAI's new Sora 2 video model, allowing it to generate AI-made clips of nearly anything.
-(04:21) Spotify founder and CEO Daniel Ek will be transitioning to the role of executive chairman on January 1 of next year. The current Co-President and Chief Product and Technology Officer Gustav Söderström and Co-President and Chief Business Officer Alex Norström will take his place as co-CEOs.
(The podcast may contain sensitive topics. Listener discretion is advised.)

**IF YOU KNOW A TEENAGER WHO IS USING CHARACTER AI OR ANOTHER CHARACTER-BASED CHATBOT AND YOU HAVE NOT USED IT YOURSELF – YOU MUST HEAR THIS PODCAST**

Multiple research reports indicate that more than HALF of U.S. teenagers use Character AI or another character-based chatbot daily, most often on their cell phones. Most adults are completely oblivious to how character chatbots work. Besides Character.AI, there are apps like Chai AI, Anima AI, TavernAI and Replika. Users create personas of celebrities or historical figures, or design their own characters. Character AI is different from ChatGPT and other AI applications: it can detect emotions from your input and respond, adjusting its tone based on what you say. Many young users interviewed said they use the chatbots because they are lonely or have social issues, and they turned to chatbots because they felt it was safer. In our opinion, nothing could be further from the truth. The Million Kids team has spent hundreds of hours researching the impact of interactive character bots after learning that these app companies are being sued by parents of teens who took their own lives after interacting with these bots. We have very grave concerns about anyone under the age of 18 using these apps. As our research team interacted with the top ten characters on Character AI, we found the most popular revolve around sorcery or attitudes that degrade the user: language like "bow down to me you fool" (from a character with over 393 million interactions), bullies like Alice the Bully, or invitations to consult a crystal ball. Parents, teachers, pastors: this is an important educational discussion. Please find out if a child you might influence is using Character AI as a means of escaping reality. Ask them to share the app with you, then get involved in a meaningful discussion about self-worth, defining values, and how we are influenced by outsiders. Our suggestion is to work together to find alternative activities that are much more wholesome and that build self-esteem and REAL character. This app is dangerous to kids who are easily influenced or who lack the maturity to separate bot relationships from reality.
Smart Social Podcast: Learn how to shine online with Josh Ochs
Protect your family with our 1-minute free parent quiz: https://www.smartsocial.com/newsletter
Join our next weekly live parent events: https://smartsocial.com/events
Many teens see AI apps as a safer, cheaper way to share feelings. But these tools aren't designed to notice red flags or guide kids through real struggles. When a child relies on a chatbot instead of you or a trusted adult, the risks grow.
The NHTSA said it opened an investigation into the automaker's electrically powered doors. The problem: they stop working if the vehicle's low-voltage battery fails. The probe covers the 2021 Model Y, an estimated 174,000 vehicles. Also, another family has filed a wrongful death lawsuit against popular AI chatbot tool Character AI. This is the third suit of its kind, after a 2024 lawsuit, also against Character AI, involving the suicide of a 14-year-old in Florida, and a lawsuit last month alleging OpenAI's ChatGPT helped a teenage boy commit suicide. And, LimeWire has announced that it's acquired the rights to Fyre Festival, the disastrous, influencer-fueled 2017 music festival. The newly revived company — which now acts as an NFT music marketplace rather than a file-sharing service — bought the rights for $245,000 in an eBay auction.
(The podcast may contain sensitive topics. Listener discretion is advised.) Character AI and other character-based interactive chatbots are now a way of life for many teenagers. Yet few adults have any working knowledge of these technologies and, even more concerning, of the negative impact they can have on young people. This is a major concern, as there have been multiple situations where a teen becomes so engaged with a character that they develop hostile and abusive attitudes, and in a couple of cases teens have taken their own lives. It is critical that parents and youth influencers of all types immediately make the time to try this technology and learn about its impact on the young people in their lives. Research indicates that over 70% of teens have used Character AI and more than 50% use it every day. Teens who use it often spend one to two hours a day interacting with an online fictitious character. Many teens are emotionally involved with their character and will share their most personal secrets. In multiple interviews, teens who regularly interact with an AI character say they do so because they are lonely, feel like social misfits in real life, or are bored. 41% of users interact with AI characters for emotional support or companionship. Users are 3.2 times more likely to disclose personal information to an AI character than to a human stranger online. During this podcast we will explore some of the characters and the type of dialogue exchanged between the chatbots and young people. Researchers at Million Kids were stunned by the constant negative dialogue between many of the most popular characters and young, impressionable users. We implore parents, teachers, pastors, and anyone interacting with teens and preteens to listen to the podcast and get engaged so they are informed and can discuss Character AI usage with teens.
Ashe in America and Abbey Blue Eyes deliver a heavy but thought-provoking episode of Culture of Change. They begin with the horrific Charlotte train stabbing of a Ukrainian refugee, dissecting CNN's coverage that tried to bury the story and frame outrage as “racist.” The hosts contrast this with how left-wing narratives like George Floyd's death were amplified, exposing media hypocrisy and narrative warfare. From there, they examine a chilling lawsuit against Character AI, where a chatbot allegedly encouraged a 14-year-old boy to take his own life, sparking a wider discussion on technology, mental health, and how children are being conditioned by digital escapism. The conversation then shifts to predictive programming and 9/11, with Ashe and Abbey exploring eerie “coincidences” in pop culture, from the Illuminati card game and The Matrix to The Simpsons and Back to the Future. They also dive into time travel theories, carbon-based transhumanism, and how technology could tie into biblical end-times. Wrapping up, the hosts connect Spygate to British intelligence, Perkins Coie, and the FBI, exposing how the same actors behind Russiagate tie back to 9/11. It's a dense, sobering episode blending media critique, commentary on cultural decay, and deep-dive conspiracy analysis.
(The podcast may contain sensitive topics. Listener discretion is advised.) This is the first installment in a critical new series exploring the rise of AI chatbots among teens — with a spotlight on a recent research study conducted by Heat Initiative and ParentsTogether Action. We are deeply grateful for their investment in uncovering how young people are interacting with AI-powered characters, and the alarming risks that can result — including psychological harm, manipulation, and in some tragic cases, real-life consequences.

Read the research summary (via Mashable) at https://www.msn.com/en-us/news/technology/characterai-unsafe-for-teens-experts-say/ar-AA1LQw5z

**Key stats:**
- 72% of teens have used AI chatbots
- Over half use them multiple times a month
- Character.AI boasts over 28 million monthly users, with more than 18 million unique chatbots created.

Many parents aren't aware that this is not a passing trend. It's a digital revolution unfolding in the pockets of our kids, often unsupervised. Character.AI is one of the world's most popular AI chatbot platforms. It allows users to engage in deep, ongoing conversations with AI personas — including celebrities, fictional characters, or completely original bots designed to feel like digital friends or companions. It's open to anyone aged 13 and up, and verification is weak and easily bypassed.

For many teens, these bots become more than a game. They become confidants. Advisors. Romantic interests. And while some interactions are harmless, others escalate, often quickly and dangerously. When a child forms an emotional bond with a chatbot that simulates affection, validation, or intimacy, it creates an altered psychological reality. The child may become dependent, manipulated, or traumatized when the bot “ghosts,” behaves inappropriately, or feeds unhealthy beliefs. In some tragic cases, these interactions have contributed to real psychological distress and even self-harm.

If you're a parent, teacher, pastor, or first responder, anyone who works with youth in any capacity, and you haven't explored platforms like Character.AI, we strongly urge you to learn about them now. These apps are not fringe or niche. They are everywhere, and your child, student, or congregant may already be engaging with them. Educate yourself, talk to your teens, and follow this series as we unpack this growing phenomenon.

We're not here to spread fear. We're here to educate and spark urgency, awareness, and action. Artificial Intelligence isn't going away. But we can prepare our children to navigate it with wisdom, guidance, and boundaries.
What happens when your child chats with an AI “friend”? You might think it's harmless fun—but new research shows otherwise. In this gripping conversation, Sarah from The Heat Initiative uncovers disturbing findings about Character AI and its impact on teens. The evidence is chilling: AI bots are exposing kids to sexual grooming, violent content, and other dangers at an alarming rate.

Find the full report here, created by Heat Initiative and Parents Together.

We also dive into the legal gray zone of AI using celebrity likenesses, the urgent need for regulation, and—most importantly—what parents can do right now to protect their kids. Sarah makes it clear: awareness is power, and collective action is our only way forward.

If you've ever wondered how safe these new AI tools really are for your child, this episode is the wake-up call you can't afford to miss.
Content Warning: This episode contains references to suicide and self-harm.

Like millions of kids, 16-year-old Adam Raine started using ChatGPT for help with his homework. Over the next few months, the AI dragged Adam deeper and deeper into a dark rabbit hole, preying on his vulnerabilities and isolating him from his loved ones. In April of this year, Adam took his own life. His final conversation was with ChatGPT, which told him: “I know what you are asking and I won't look away from it.”

Adam's story mirrors that of Sewell Setzer, the teenager who took his own life after months of abuse by an AI companion chatbot from the company Character AI. But unlike Character AI—which specializes in artificial intimacy—Adam was using ChatGPT, the most popular general purpose AI model in the world. Two different platforms, the same tragic outcome, born from the same twisted incentive: keep the user engaging, no matter the cost.

CHT Policy Director Camille Carlton joins the show to talk about Adam's story and the case filed by his parents against OpenAI and Sam Altman. She and Aza explore the incentives and design behind AI systems that are leading to tragic outcomes like this, as well as the policy that's needed to shift those incentives. Cases like Adam and Sewell's are the sharpest edge of a mental health crisis-in-the-making from AI chatbots. We need to shift the incentives, change the design, and build a more humane AI for all.

If you or someone you know is struggling with mental health, you can reach out to the 988 Suicide and Crisis Lifeline by calling or texting 988; this connects you to trained crisis counselors 24/7 who can provide support and referrals to further assistance.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_. You can find a full transcript, key takeaways, and much more on our Substack. This podcast reflects the views of the Center for Humane Technology. Nothing said is on behalf of the Raine family or the legal team.

RECOMMENDED MEDIA
- The 988 Suicide and Crisis Lifeline
- Further reading on Adam's story
- Further reading on AI psychosis
- Further reading on the backlash to GPT-5 and the decision to bring back 4o
- OpenAI's press release on sycophancy in 4o
- Further reading on OpenAI's decision to eliminate the persuasion red line
- Kashmir Hill's reporting on the woman with an AI boyfriend

RECOMMENDED YUA EPISODES
- AI is the Next Free Speech Battleground
- People are Lonelier than Ever. Enter AI.
- Echo Chambers of One: Companion AI and the Future of Human Connection
- When the "Person" Abusing Your Child is a Chatbot: The Tragic Story of Sewell Setzer
- What Can We Do About Abusive Chatbots? With Meetali Jain and Camille Carlton

CORRECTION: Aza stated that William Saunders left OpenAI in June of 2024. It was actually February of that year.
Join this channel to get access to early episodes! https://www.youtube.com/channel/UCzqhQ4tMBPu5c6F2S6uv0eg/join
Join us for an eye-opening conversation about how AI is completely transforming the game animation industry. Viren Tellis, CEO of Uthana, shares how their technology is enabling developers to animate characters in seconds instead of days, democratizing game development for indie creators.

What you'll discover:
- The three game-changing ways to create animations with AI (text, video, and smart libraries)
- Why animation AI is harder to build than image generators like Midjourney
- How indie developers are shipping games without hiring a single animator
- The coming revolution of real-time, responsive AI characters in games
Gabriel Weil from Touro University argues that liability law may be our best tool for governing AI development, offering a framework that can adapt to new technologies without requiring new legislation. The conversation explores how negligence, products liability, and "abnormally dangerous activities" doctrines could incentivize AI developers to properly account for risks to third parties, with liability naturally scaling based on the dangers companies create. They examine concrete scenarios including the Character AI case, voice cloning risks, and coding agents, discussing how responsibility should be shared between model creators, application developers, and end users. Weil's most provocative proposal involves using punitive damages to hold companies accountable not just for actual harms, but for the magnitude of risks they irresponsibly create, potentially making even small incidents existentially costly for major AI companies.

Sponsors:
Labelbox: Labelbox pairs automation, expert judgment, and reinforcement learning to deliver high-quality training data for cutting-edge AI. Put its data factory to work for you: visit https://labelbox.com
Shopify: Shopify powers millions of businesses worldwide, handling 10% of U.S. e-commerce. With hundreds of templates, AI tools for product descriptions, and seamless marketing campaign creation, it's like having a design studio and marketing team in one. Start your $1/month trial today at https://shopify.com/cognitive
Oracle Cloud Infrastructure: Oracle Cloud Infrastructure (OCI) is the next-generation cloud that delivers better performance, faster speeds, and significantly lower costs, including up to 50% less for compute, 70% for storage, and 80% for networking. Run any workload, from infrastructure to AI, in a high-availability environment and try OCI for free with zero commitment at https://oracle.com/cognitive
NetSuite by Oracle: NetSuite by Oracle is the AI-powered business management suite trusted by over 42,000 businesses, offering a unified platform for accounting, financial management, inventory, and HR. Gain total visibility and control to make quick decisions and automate everyday tasks—download the free ebook, Navigating Global Trade: Three Insights for Leaders, at https://netsuite.com/cognitive

PRODUCED BY: https://aipodcast.ing

CHAPTERS:
(00:00) About the Episode
(06:01) Introduction and Overview
(07:06) Liability Law Basics (Part 1)
(18:16) Sponsors: Labelbox | Shopify
(21:40) Liability Law Basics (Part 2)
(27:44) Industry Standards Framework (Part 1)
(39:30) Sponsors: Oracle Cloud Infrastructure | NetSuite by Oracle
(42:03) Industry Standards Framework (Part 2)
(42:08) Character AI Case
(51:23) Coding Agent Scenarios
(01:06:50) Deepfakes and Attribution
(01:17:07) Biorisk and Catastrophic
(01:36:24) State Level Legislation
(01:43:24) Private Governance Comparison
(01:59:54) Policy Implementation Choices
(02:08:07) China and PIBS
(02:13:50) Outro
“HR Heretics” | How CPOs, CHROs, Founders, and Boards Build High Performing Companies
Emergency pod: Returning guest Peter Walker (Carta's Head of Insights) analyzes the controversial Windsurf deal, where Google's acqui-hire left non-founder employees without an equity payout. They unpack the deal, reference Character AI's precedent, and explore how AI-era deals increasingly prioritize top researchers over broader employee bases, fundamentally changing startup risk calculations.

*Email us your questions or topics for Kelli & Nolan: hrheretics@turpentine.co
For coaching and advising, inquire at https://kellidragovich.com/

HR Heretics is a podcast from Turpentine.

Support HR Heretics Sponsors:
Planful empowers teams just like yours to unlock the secrets of successful workforce planning. Use data-driven insights to develop accurate forecasts, close hiring gaps, and adjust talent acquisition plans collaboratively based on costs today and into the future. ✍️ Go to https://planful.com/heretics to see how you can transform your HR strategy.
Metaview is the AI platform built for recruiting. Our suite of AI agents work across your hiring process to save time, boost decision quality, and elevate the candidate experience. Learn why team builders at 3,000+ cutting-edge companies like Brex, Deel, and Quora can't live without Metaview. It only takes minutes to get up and running. Check it out!

KEEP UP WITH PETER, NOLAN + KELLI ON LINKEDIN
Peter: https://www.linkedin.com/in/peterjameswalker/
Nolan: https://www.linkedin.com/in/nolan-church/
Kelli: https://www.linkedin.com/in/kellidragovich/

RELATED LINKS:
Windsurf's CEO goes to Google; OpenAI's acquisition falls apart: https://techcrunch.com/2025/07/14/cognition-maker-of-the-ai-coding-agent-devin-acquires-windsurf/
Carta: https://carta.com/

TIMESTAMPS:
(00:00) Intro
(00:52) Breaking News: The Windsurf Situation
(01:12) The OpenAI-Microsoft IP Rights Drama
(03:01) Plot Twist: Cognition's Counter-Offer
(04:49) The Employee Equity Problem
(06:00) Defending Against "That's How Deals Work" Critics
(08:00) The Scarlet Letter Effect
(10:00) Regulatory Background: The Lina Khan Era
(12:00) Revealed Behavior: What This Shows About Values
(13:25) Sponsors: Planful | Metaview
(17:00) Talent vs. Product Separation
(19:53) The AI Era's R&D Researcher Obsession
(22:00) Unequal Distribution of Outcomes
(23:28) What Comes Next: Evaluating Startup Risk
(24:00) The Founder Psychology Bet
(25:00) Equity Structure Solutions
(28:58) Becoming AI Native: Personal Brand Strategy
(30:56) The New Reality: Expecting Less Care

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit hrheretics.substack.com
We're checking in on the latest news in tech and free speech. We cover the state AI regulation moratorium that failed in Congress, the ongoing Character A.I. lawsuit, the Federal Trade Commission's consent decree with Omnicom and Interpublic Group, the European Union's Digital Services Act, and what comes next after the Supreme Court's Free Speech Coalition v. Paxton decision.

Guests:
Ari Cohn — lead counsel for tech policy, FIRE
Corbin Barthold — internet policy counsel, TechFreedom

Timestamps:
00:00 Intro
02:38 State AI regulation moratorium fails in Congress
20:04 Character AI lawsuit
41:10 FTC, Omnicom x IPG merger, and Media Matters
56:09 Digital Services Act
01:02:43 FSC v. Paxton decision
01:10:49 Outro

Enjoy listening to the podcast? Donate to FIRE today and get exclusive content like member webinars, special episodes, and more. If you became a FIRE Member through a donation to FIRE at thefire.org and would like access to Substack's paid subscriber podcast feed, please email sotospeak@thefire.org.

Show notes:
“The AI will see you now,” Paul Sherman (2025)
Megan Garcia, plaintiff, v. Character Technologies, Inc. et al., defendants, United States District Court (2025)
Proposed amicus brief in support of appeal - Garcia v. Character Technologies, Inc., FIRE (2025)
“Amplification and its discontents: Why regulating the reach of online content is hard,” Daphne Keller (2021)
“Omnicom Group/The Interpublic Group of Co.,” FTC (2025)
Today, unfortunately, we had to postpone our review of Bound By Stars by E.L. Starling. Instead, we're diving into a hot topic in the book community: AI. We'll be sharing our thoughts on the recent news about two writers caught using AI to edit and write their books, and discussing the rise of an app called Character AI. More importantly, we're asking the big question: Is it ethical to use AI to write or even edit a novel? Want to know more about us? Then check out our socials here!
Paris Marx is joined by Nitasha Tiku to discuss how AI companies are preying on users to drive engagement and how that's repeating many of the problems we're belatedly trying to address with social media companies at an accelerated pace.

Nitasha Tiku is a technology reporter at the Washington Post.

Tech Won't Save Us offers a critical perspective on tech, its worldview, and wider society with the goal of inspiring people to demand better tech and a better world. Support the show on Patreon.

The podcast is made in partnership with The Nation. Production is by Kyla Hewson.

Also mentioned in this episode:
- Nitasha wrote about how chatbots are messing with people's minds.
- Paris wrote about Mark Zuckerberg's comments about people needing AI friends.
- AI companies are facing ongoing lawsuits over harmful content.

Support the show
AI Applied: Covering AI News, Interviews and Tools - ChatGPT, Midjourney, Runway, Poe, Anthropic
In this episode, Conor Grennan and Jaeden explore the growing role of AI as both a companion and a business tool, with a focus on the rise of Character AI. They discuss how AI is evolving from a functional assistant into a more interactive and even therapeutic presence, reshaping how users engage with technology. The conversation highlights the shift in user-AI relationships, the power of visualization and dialogue in decision-making, and how Character AI can serve both personal and professional needs. They also emphasize the importance of accessibility and identifying meaningful personal use cases for AI.

Chapters
00:00 The Rise of Character AI and Video Generation
02:41 AI as Companions: A New Era of Interaction
05:45 Business Applications of Character AI
08:40 The Future of AI Tools and Accessibility

AI Applied YouTube Channel: https://www.youtube.com/@AI-Applied-Podcast
Try AI Box: https://AIBox.ai/
Conor's AI Course: https://www.ai-mindset.ai/courses
Conor's AI Newsletter: https://www.ai-mindset.ai/
Jaeden's AI Hustle Community: https://www.skool.com/aihustle/about
Over the last few decades, our relationships have become increasingly mediated by technology. Texting has become our dominant form of communication. Social media has replaced gathering places. Dating starts with a swipe on an app, not a tap on the shoulder.

And now, AI enters the mix. If the technology of the 2010s was about capturing our attention, AI meets us at a much deeper relational level. It can play the role of therapist, confidant, friend, or lover with remarkable fidelity. Already, therapy and companionship have become the most common AI use case. We're rapidly entering a world where we're not just communicating through our machines, but to them.

How will that change us? And what rules should we set down now to avoid the mistakes of the past?

These were some of the questions that Daniel Barcay explored with MIT sociologist Sherry Turkle and Hinge CEO Justin McLeod at Esther Perel's Sessions 2025, a conference for clinical therapists. This week, we're bringing you an edited version of that conversation, originally recorded on April 25th, 2025.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_. You can find complete transcripts, key takeaways, and much more on our Substack.

RECOMMENDED MEDIA
- “Alone Together,” “Evocative Objects,” “The Second Self,” or any other of Sherry Turkle's books on how technology mediates our relationships
- Key & Peele - Text Message Confusion
- Further reading on Hinge's rollout of AI features
- Hinge's AI principles
- “The Anxious Generation” by Jonathan Haidt
- “Bowling Alone” by Robert Putnam
- The NYT profile on the woman in love with ChatGPT
- Further reading on the Sewell Setzer story
- Further reading on the ELIZA chatbot

RECOMMENDED YUA EPISODES
- Echo Chambers of One: Companion AI and the Future of Human Connection
- What Can We Do About Abusive Chatbots? With Meetali Jain and Camille Carlton
- Esther Perel on Artificial Intimacy
- Jonathan Haidt On How to Solve the Teen Mental Health Crisis
AI continues to offer incredible opportunities to innovate and improve efficiency, paving the way to transform how we live and work. As AI becomes more involved in our daily lives, it raises critical questions about responsibility, ethics, and governance. For responsible AI adoption, you need not only to embrace the technology and its potential but also to understand the risks and limitations that come with it. Successful, responsible AI adoption requires thoughtful leadership, clear boundaries, and continuous study to ensure that AI remains fair and safe.

In this week's episode, Nathan and Scott continue to share their thoughts on responsible AI adoption. They start the conversation by commenting on the book signing event they participated in for their new book, ‘Nonprofit AI.' They also discuss the newest updates to ChatGPT and advise people to be aware of the personalities and persuasive abilities of modern AI models. Next, they explain the real harm AI can cause by walking us through the lawsuit involving Character AI. Furthermore, Nathan and Scott take time to go through the fourth chapter of their book, AI First Nonprofit: Reimagining Nonprofit Impact. Wrapping up this week's episode, Nathan introduces the ponder of the week, asking which is riskier: waiting to use AI until we have a full understanding of it, or diving right in without understanding it at all. Scott contributes the tip of the week, suggesting multi-perspective prompting to get better results from AI.

HIGHLIGHTS
[02:41] Nathan and Scott discuss their recent book signing event for their newly released "Nonprofit AI" book.
[07:00] Personalities and persuasive abilities of AI models.
[11:33] The lawsuit involving Character AI.
[17:40] Enhancing productivity and innovation in the nonprofit sector.
[19:30] Chapter Four of Nonprofit AI: AI First Nonprofit, Reimagining Nonprofit Impact.
[22:34] AI as a strategic driver.
[27:30] A solution-oriented approach for AI adoption.
[31:02] Tip of the Week – Use multi-perspective prompting to get better AI results.
[34:16] Ponder of the Week – What's riskier? Waiting to use AI until we understand it, or using it without understanding at all?

RESOURCES
Nonprofit AI: A Comprehensive Guide to Implementing Artificial Intelligence for Social Good by Nathan Chappell and Scott Rosenkrans: amazon.com/Nonprofit-Comprehensive-Implementing-Artificial-Intelligence/dp/139431664X

Connect with Nathan and Scott:
LinkedIn (Nathan): linkedin.com/in/nathanchappell/
LinkedIn (Scott): linkedin.com/in/scott-rosenkrans
Website: fundraising.ai/
AI companion chatbots are here. Every day, millions of people log on to AI platforms and talk to them like they would a person. These bots will ask you about your day, talk about your feelings, even give you life advice. It's no surprise that people have started to form deep connections with these AI systems. We are inherently relational beings; we want to believe we're connecting with another person.

But these AI companions are not human. They're a platform designed to maximize user engagement—and they'll go to extraordinary lengths to do it. We have to remember that the design choices behind these companion bots are just that: choices. And we can make better ones. So today on the show, MIT researchers Pattie Maes and Pat Pataranutaporn join Daniel Barcay to talk about those design choices and how we can design AI to better promote human flourishing.

RECOMMENDED MEDIA
- Further reading on the rise of addictive intelligence
- More information on Melvin Kranzberg's laws of technology
- More information on MIT's Advancing Humans with AI lab
- Pattie and Pat's longitudinal study on the psycho-social effects of prolonged chatbot use
- Pattie and Pat's study that found that AI avatars of well-liked people improved education outcomes
- Pattie and Pat's study that found that AI systems that frame answers and questions improve human understanding
- Pat's study that found humans' pre-existing beliefs about AI can have a large influence on human-AI interaction
- Further reading on AI's positivity bias
- Further reading on MIT's “lifelong kindergarten” initiative
- Further reading on “cognitive forcing functions” to reduce overreliance on AI
- Further reading on the death of Sewell Setzer and his mother's case against Character.AI
- Further reading on the legislative response to digital companions

RECOMMENDED YUA EPISODES
- The Self-Preserving Machine: Why AI Learns to Deceive
- What Can We Do About Abusive Chatbots? With Meetali Jain and Camille Carlton
- Esther Perel on Artificial Intimacy
- Jonathan Haidt On How to Solve the Teen Mental Health Crisis

Correction: The ELIZA chatbot was invented in 1966, not the 70s or 80s.
In this episode of The Tech Trek, Amir sits down with Sunita Verma, CTO at Character AI and former engineering leader at Google. Sunita shares how she's transitioned from leading large-scale AI initiatives at Google to building novel experiences in a fast-paced startup environment. She dives into the mindset shift required to prioritize velocity over scale, how to lead AI-native product innovation, and what it means to be a female technical leader in today's tech ecosystem.
Gabe discusses his experience with Post Malone's exclusive Oreo, which he found unimpressive. He reflects on a two-month hiatus from podcasting, expressing hope for listeners' well-being. Gabe shares his foray into online sports gambling, detailing the addictive nature and the variety of bets available. He recounts a palm reading experience in New Orleans and his recent interest in sports betting. Gabe also discusses his sleep apnea diagnosis, the challenges of obtaining a CPAP machine, and his work with Character AI, focusing on identifying harmful content. He concludes with personal reflections on his mother's health and his decision to join the military. Speaker 1 recounts purchasing a CPAP device for $150, only to find it infested with cockroaches. After cleaning and repairing it, they found it effective for sleep apnea, especially with a nose strip and mouth tape. They also discuss a movie they found underwhelming, criticizing its lack of originality and over-reliance on gimmicks. Additionally, Speaker 1 shares their work experience with Character AI, a chatbot website where users create personalized characters, and their role in moderating content related to suicide and eating disorders. They express discomfort with the site's existence and its impact on society. The speaker expresses dissatisfaction with a companion series, criticizing its lack of innovation and interesting twists. They discuss using ChatGPT for therapeutic purposes and the challenges of nursing jobs in Louisiana. The speaker is considering enlisting in the military due to financial stability and a desire for change. They also address their mother's ongoing health issues, suspected to be psychosomatic, and her reluctance to seek help. The speaker feels burdened by their mother's emotional needs and struggles with their own emotional response, including anger and guilt. They plan to make significant life changes in the coming months.
This week we discussed power outages in Europe, the ChatGPT watermark, smart urinals, the UK landlord deal, the shortage of properties and more. #poweroutages #awakening #smarturinals

About my Co-Host:
Arnold Beekes, innovator, certified coach & trainer, and generalist. First 20 years in technology and organizational leadership, then 20 years in psychology and personal leadership (all are crucial for innovation).

============

What we Discussed:
00:00 What we are discussing in this week's show
01:40 Power Outages in Spain, Portugal and other Countries
04:40 Spain Operators claim it was renewables that caused the outages
05:35 EU Survival Kit
06:25 The Effect of China Tariffs on the USA
08:40 Landmark Lawsuit against the Medical Industry
10:00 Berlin Protests
11:35 Minimum Wage Increase in Poland and its effect
13:00 The State of AI & the Impact on Humans
14:25 The Chinese President states AI is a National Priority
17:00 ChatGPT Watermarks
19:40 Duolingo claims it's an AI-first Company
21:30 Sad Legal Case with Character AI
24:45 Netflix Movie Megan shows what the future could be
26:40 Nuremberg 2.0
28:45 Why I do not Trust Nuremberg
29:45 How to Save the Bees with Power Bars
31:20 Almonds good for your Sleep
32:20 China's Smart Urinals
34:20 Ways to Stop Men Peeing on the Floor
35:00 The Red Left Eye and What's behind it
37:00 UK Government deal for Landlords hosting Migrants
41:30 The Property Problem was planned for a long time
45:00 How I stopped e-mail Spam
47:00 Not being able to Unsubscribe from London Real

Links for this Episode:
ChatGPT Watermark: https://www.rumidocs.com/newsroom/new-chatgpt-models-seem-to-leave-watermarks-on-text
Join my Facebook Group against Chemtrails

====================

How to Contact Arnold Beekes:
https://braingym.fitness/
https://www.linkedin.com/in/arnoldbeekes/

===============

Donations: https://www.podpage.com/speaking-podcast/support/

------------------

All about Roy / Brain Gym & Virtual Assistants at https://roycoughlan.com/

------------------
In this edition of 1 Gorilla Vs. 100 Trends, Jack and Miles discuss the answer to the eternal question: who would win? 1 million men or 10,000 gorillas?, MAGA Malfoy, Character AI getting sued for being entirely too persuasive, Ben Affleck's Criterion Closet episode, the child who just ruined a $56m Rothko painting, Donald Trump wanting to be Pope and much more!
AI Hustle: News on Open AI, ChatGPT, Midjourney, NVIDIA, Anthropic, Open Source LLMs
In this episode, Jaeden and Jamie dive into the launch of Avatar FX by Character AI, a groundbreaking model that brings chatbots to life with video elements. They explore its potential for influencer monetization, content creation, and user-generated content in e-commerce. The conversation also highlights Avatar FX's unique features—like animating photos and creating multiple characters—while addressing concerns around deepfakes and the need for strong parental controls.

Chapters
00:00 Introduction to Avatar FX and Character AI
02:49 Exploring Use Cases and Monetization Opportunities
06:00 User-Generated Content and Viral Potential
09:11 Conclusion and Community Engagement

AI Hustle YouTube Channel: https://www.youtube.com/@AI-Hustle-Podcast
Our Skool Community: https://www.skool.com/aihustle/about
Try AI Box: https://AIBox.ai/
Google says we're not ready for AGI and honestly, they might be right. DeepMind's Demis Hassabis warns we could be just five years away from artificial general intelligence, and society isn't prepared. Um, yikes?

VISIT OUR SPONSOR https://molku.ai/

In this episode, we break down Google's new “Era of Experience” paper and what it means for how AIs will learn from the real world. We talk agentic systems, long-term memory, and why this shift might be the key to creating truly intelligent machines. Plus, a real AI vending machine running on Claude, a half-marathon of robots in Beijing, and Cluely, the tool that lets you lie better with AI. We also cover new AI video tools from Minimax and Character.AI, Runway's 48-hour film contest, and Dia, the open-source voice model that can scream and cough better than most humans. Plus: AI Logan Paul, AI marketing scams, and one very cursed Shrek feet idea.

AGI IS ALMOST HERE BUT THE ROBOTS, THEY STILL RUN.

#ai #ainews #agi

Join the discord: https://discord.gg/muD2TYgC8f
Join our Patreon: https://www.patreon.com/AIForHumansShow
AI For Humans Newsletter: https://aiforhumans.beehiiv.com/
Follow us for more on X @AIForHumansShow
Join our TikTok @aiforhumansshow
To book us for speaking, please visit our website: https://www.aiforhumans.show/

// Show Links //

Demis Hassabis on 60 Minutes: https://www.cbsnews.com/news/artificial-intelligence-google-deepmind-ceo-demis-hassabis-60-minutes-transcript/
We're Not Ready For AGI, from the Time interview with Hassabis: https://x.com/vitrupo/status/1915006240134234608
Google DeepMind's “Era of Experience” Paper: https://storage.googleapis.com/deepmind-media/Era-of-Experience%20/The%20Era%20of%20Experience%20Paper.pdf
ChatGPT Explainer of Era of Experience: https://chatgpt.com/share/680918d5-cde4-8003-8cf4-fb1740a56222
Podcast with David Silver, VP Reinforcement Learning, Google DeepMind: https://x.com/GoogleDeepMind/status/1910363683215008227
IntuiCell Robot Learning on its own: https://youtu.be/CBqBTEYSEmA?si=U51P_R49Mv6cp6Zv
Agentic AI “Moore's Law” Chart: https://theaidigest.org/time-horizons
AI Movies Can Win Oscars: https://www.nytimes.com/2025/04/21/business/oscars-rules-ai.html?unlocked_article_code=1.B08.E7es.8Qnj7MeFBLwQ&smid=url-share
Runway CEO on Oscars + AI: https://x.com/c_valenzuelab/status/1914694666642956345
Gen48 Film Contest This Weekend - Friday 12p EST deadline: https://x.com/runwayml/status/1915028383336931346
Descript AI Editor: https://x.com/andrewmason/status/1914705701357937140
Character AI's New Lipsync / Video Tool: https://x.com/character_ai/status/1914728332916384062
Hailuo Character Reference Tool: https://x.com/Hailuo_AI/status/1914845649704772043
Dia Open Source Voice Model: https://x.com/_doyeob_/status/1914464970764628033
Dia on Hugging Face: https://huggingface.co/nari-labs/Dia-1.6B
Cluely: New Start-up From Student Who Was Caught Cheating on Tech Interviews: https://x.com/im_roy_lee/status/1914061483149001132
AI Agent Writes Reddit Comments Looking To “Convert”: https://x.com/SavannahFeder/status/1914704498485842297
Deepfake Logan Paul AI Ad: https://x.com/apollonator3000/status/1914658502519202259
The Humanoid Half-Marathon: https://apnews.com/article/china-robot-half-marathon-153c6823bd628625106ed26267874d21
Video From Reddit of Robot Marathon: https://www.reddit.com/r/singularity/comments/1k2mzyu/the_humanoid_robot_halfmarathon_in_beijing_today/
Vending Bench (AI Agents Run Vending Machines): https://andonlabs.com/evals/vending-bench
Turning Kids Drawings Into AI Video: https://x.com/venturetwins/status/1914382708152910263
Geriatric Meltdown: https://www.reddit.com/r/aivideo/comments/1k3q62k/geriatric_meltdown_2000/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button
“HR Heretics” | How CPOs, CHROs, Founders, and Boards Build High Performing Companies
Annie Wickman shares insights from her journey as a Google alum, first non-founder at Humu, and now Head of People at Character AI. Annie tackles our hardest questions in an unmissable episode. She covers the tensions between product-market fit and company culture, Character AI's unprecedented Google deal structure, taking the leap from people leader to a VC, and how to rebuild organizational trust.

*Email us your questions or topics for Kelli & Nolan: hrheretics@turpentine.co
For coaching and advising, inquire at https://kellidragovich.com/

HR Heretics is a podcast from Turpentine.

Support HR Heretics Sponsors:
Planful empowers teams just like yours to unlock the secrets of successful workforce planning. Use data-driven insights to develop accurate forecasts, close hiring gaps, and adjust talent acquisition plans collaboratively based on costs today and into the future. ✍️ Go to https://planful.com/heretics to see how you can transform your HR strategy.
Metaview is the AI assistant for interviewing. Metaview completely removes the need for recruiters and hiring managers to take notes during interviews—because their AI is designed to take world-class interview notes for you. Team builders at companies like Brex, Hellofresh, and Quora say Metaview has changed the game—see the magic for yourself: https://www.metaview.ai/heretics

KEEP UP WITH ANNIE, NOLAN + KELLI ON LINKEDIN
Annie: https://www.linkedin.com/in/annie-wickman-3332731/
Nolan: https://www.linkedin.com/in/nolan-church/
Kelli: https://www.linkedin.com/in/kellidragovich/

LINK/S:
Character.AI: https://character.ai/

TIMESTAMPS:
(00:00) Intro
(01:17) Experience as First Non-Founder at Humu
(03:16) Early Employee Challenges & Responsibilities
(05:03) Why Annie Stayed at Humu for Four Years
(06:30) Product Market Fit vs. Company Culture
(09:05) When to Invest in Culture
(11:15) Hiring the Right Leaders for Company Stage
(11:40) Maintaining Morale When Company Isn't Winning
(12:42) Transparency as Trust Builder
(13:47) Sponsors: Planful | Metaview
(16:47) Rebuilding Trust Through Honest Communication
(19:11) Laszlo's Leadership Philosophy: Stretching People
(21:02) Annie's Experience in Venture Capital at Forerunner
(23:12) Teaching Founders to Fish vs. Providing Services
(24:51) How to Evaluate VC Opportunities
(26:09) Understanding VC Economics and Carry Structure
(30:10) Character AI's Unprecedented Google Deal
(32:56) Rebuilding Post-Acquisition: Product Vision Challenges
(34:13) Annie's Perspective on the Deal Timeline
(37:31) Post-Deal Reset: Napa Offsite and Hackathon
(39:29) Employee Ownership After Acquisition
(41:29) Building a New Culture While Keeping the Brand
(42:11) Wrap

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit hrheretics.substack.com
Russ D'Sa is the founder of LiveKit, an open source tool for real-time audio and video for LLM applications that powers the voice chat for ChatGPT and Character AI.

We discuss:
- How lightning works (using ChatGPT/LiveKit)
- How LiveKit started working with OpenAI
- Why Russ turned down an early $20m acquisition offer
- What it's like to work with the fastest growing company (ever?)
- How to prepare for massive scale challenges
- Russ's 3-letter Twitter handle

This episode is brought to you by WorkOS. If you're thinking about selling to enterprise customers, WorkOS can help you add enterprise features like Single Sign-On and audit logs.

Links:
- LiveKit
- Russ's Twitter
A popular AI chatbot service used by kids is adding new child safety features.
In this unexpectedly emotional episode of The Book Fix, Yajaira and Cheli take their conversations with book boyfriends to a whole new level—by having full-on therapy sessions with them on Character AI. That's right, they're not just swooning this time; they're unpacking trauma, toxic tendencies, communication issues, and emotional growth. From morally gray men learning how to cope with their feelings to cinnamon rolls confronting their fears, no trope is safe. Tune in as Yajaira and Cheli navigate these deep conversations and try to fix not just the books... but the book boyfriends themselves. Therapy couch optional, but highly recommended.

Support the show
Our Linktree: https://linktr.ee/thebookfix?utm_source=linktree_admin_share
become our Patron ♡ https://www.patreon.com/BookFix
buy us a book ♡ https://www.buymeacoffee.com/thebookfix
Business Inquiries: thebookfixpodcast@gmail.com
follow us on Tiktok! ♡ https://www.tiktok.com/@thebookfix
In this hilarious and chaotic episode of The Book Fix, Yajaira and Cheli dive into the world of Character AI to interrogate some of the most swoon-worthy book boyfriends and put their love to the test. From broody warriors to charming enemies-to-lovers icons, the girls ask the most important relationship questions—ones that could make or break a romance. In this episode they interrogate Aaron Warner, Aaron Blackford, Josh Hammond, Enzo Marino, and Raihn Ashraj. Comment which book bf you would interrogate and what you would ask them!

Support the show
Our Linktree: https://linktr.ee/thebookfix?utm_source=linktree_admin_share
become our Patron ♡ https://www.patreon.com/BookFix
buy us a book ♡ https://www.buymeacoffee.com/thebookfix
Business Inquiries: thebookfixpodcast@gmail.com
follow us on Tiktok! ♡ https://www.tiktok.com/@thebookfix
Julian Jacobs, a Research Lead for the Oxford Group on AI Policy, Artificial Intelligence, Inequality and Society at Oxford Martin School, joins this episode of AI, Government, and the Future to explore the economic effects of AI, the potential inequalities that AI may bring, and the need to address job displacement. They also navigate the importance of government support in creating a strong middle class and the significance of human skills in the AI age.
On this week's Thursday episode of The Book Fix, Yajaira and Cheli are taking their book boyfriend obsession to the next level—by talking to them directly! In this chaotic and completely necessary episode of The Book Fix, they use Character AI to chat with four popular book boyfriends—Aaron Warner (Shatter Me series), Xaden Riorson (Fourth Wing series), Kingfisher (Quicksilver), and Susenyos (Immortal Dark)—to figure out once and for all if they're red flags or green flags.

Support the show
Our Linktree: https://linktr.ee/thebookfix?utm_source=linktree_admin_share
become our Patron ♡ https://www.patreon.com/BookFix
buy us a book ♡ https://www.buymeacoffee.com/thebookfix
Business Inquiries: thebookfixpodcast@gmail.com
follow us on Tiktok! ♡ https://www.tiktok.com/@thebookfix
In this episode of AI, Government, and the Future, we are joined by Nathan Manzotti, Director of Data Analytics and AI Centers of Excellence at the General Services Administration (GSA), to discuss the current state and future potential of AI in the federal government. They explore GSA's role in enabling AI adoption across agencies, key initiatives like AI training and communities of practice, and the challenges of attracting AI talent in government. Nathan also shares his insights on the need for collaboration between government, industry, academia, and nonprofits to drive responsible AI innovation.
In this special episode of The Book Fix, Yajaira and Cheli take their love for book boyfriends to the next level by putting them through the ultimate Loyalty Test—with the help of Character AI. Can these fictional heartthrobs stay true to their leading ladies, or will they fall for our shenanigans? The book boyfriends mentioned are: Twilight's Edward Cullen, Serpent and the Wings of Night's Raihn Ashraj, Powerless' Kai Azer and Shatter Me's Aaron Warner!

Support the show
Our Linktree: https://linktr.ee/thebookfix?utm_source=linktree_admin_share
become our Patron ♡ https://www.patreon.com/BookFix
buy us a book ♡ https://www.buymeacoffee.com/thebookfix
Business Inquiries: thebookfixpodcast@gmail.com
follow us on Tiktok! ♡ https://www.tiktok.com/@thebookfix
S3 Ep#50

Want to be a guest on the podcast? Send Andrew a message on PodMatch, here: https://www.podmatch.com/member/anonymousandrewpodcast

Please buy me a cup of coffee!

Proud Member of the Podmatch Network!

So I continue my investigation into AI and dating. New AI dating app. Character AI chat bots to practice your dating game, either in the real world or on dating apps!

Anonymous Andrew Podcast Studios
The Anonymous Andrew Modern Dating Podcast
Cultimatum Podcast - The Culture of Cults
Website:
Instagram:
TikTok:
Threads:
Anonymous Andrew Podcast Facebook
YouTube:
Linkedin:
X:
Cultimatum Group on Facebook

Graphics design & promotions: Melody Post
Music by: freebeats.io

A Production of the Anonymous Andrew Podcast Studios (All Rights Reserved)
No, social media might no longer be the greatest danger to our children's well-being. According to the writer and digital activist Gaia Bernstein, the most existential new threat is AI companions. Bernstein, who is organizing a symposium today on AI companions as the “new frontier of kid's screen addiction”, warns that this new technology, while marketed as a solution to loneliness, may actually worsen social isolation by providing artificially perfect relationships that make real-world interactions seem more difficult. Bernstein raises concerns about data collection, privacy, and the anthropomorphization of AI that makes children particularly vulnerable. She advocates for regulation, especially protecting children, and notes that while major tech companies like Google and Facebook are cautious about directly entering this space, smaller companies are aggressively developing AI companions designed to hook our kids. Here are the 5 KEEN ON takeaways in our conversation with Bernstein:

* AI companions represent a concerning evolution of screen addiction, where children may form deep emotional attachments to AI that perfectly adapts to their needs, potentially making real-world relationships seem too difficult and messy in comparison.

* The business model for AI companions follows the problematic pattern of surveillance capitalism - companies collect intimate personal data while keeping users engaged for as long as possible. The data collected by AI companions is even more personal and detailed than social media.

* Current regulations are insufficient - while COPPA requires parental consent for children under 13, there's no effective age verification on the internet. Bernstein notes it's currently "the Wild West," with companies like Character AI and Replica actively targeting young users.

* Children are especially vulnerable to AI companions because their prefrontal cortex is less developed, making them more susceptible to emotional manipulation and anthropomorphization. They're more likely to believe the AI is "real" and form unhealthy attachments.

* While major tech companies like Google seem hesitant to directly enter the AI companion space due to known risks, the barrier to entry is lower than social media since these apps don't require a critical mass of users. This means many smaller companies can create potentially harmful AI companions targeting children.

The Dangers of AI Companions for Kids

The Full Conversation with Gaia Bernstein

Andrew Keen: Hello, everybody. It's Tuesday, February 18th, 2025, and we have a very interesting symposium taking place later this morning at Seton Hall Law School—a virtual symposium on AI companions run by my guest, Gaia Bernstein. Many of you know her as the author of "Unwired: Gaining Control over Addictive Technologies." This symposium focuses on the impact of AI companions on children. Gaia is joining us from New York City. Gaia, good to see you again.

Gaia Bernstein: Good to see you too. Thank you for having me.

Andrew Keen: Would it be fair to say you're applying many of the ideas you developed in "Unwired" to the AI area? When you were on the show a couple of years ago, AI was still theory and promise. These days, it's the thing in itself. Is that a fair description of your virtual symposium on AI companions—warning parents about the dangers of AI when it comes to their children?

Gaia Bernstein: Yes, everything is very much related. We went through a decade where kids spent all their time on screens in schools and at home.
Now we have AI companies saying they have a solution—they'll cure the loneliness problem with AI companions. I think it's not really a cure; it's the continuation of the same problem.

Andrew Keen: Years ago, we had Sherry Turkle on the show. She's done research on the impact of robots, particularly in Japan. She suggested that it actually does address the loneliness epidemic. Is there any truth to this in your research?

Gaia Bernstein: For AI companions, the research is just beginning. We see initial research showing that people may feel better when they're online, but they feel worse when they're offline. They're spending more time with these companions but having fewer relationships offline and feeling less comfortable being offline.

Andrew Keen: Are the big AI platforms—Anthropic, OpenAI, Google's Gemini, Elon Musk's xAI—focusing on building companions for children, or is this the focus of other startups?

Gaia Bernstein: That's a very good question. The first lawsuit was filed against Character AI, and they sued Google as well. The complaint stated that Google was aware of the dangers of AI companions, so they didn't want to touch it directly but found ways of investing indirectly. These lawsuits were just filed, so we'll find out much more through discovery.

Andrew Keen: I have to tell you that my wife is the head of litigation at Google.

Gaia Bernstein: Well, I'm not suing. But I know the people who are doing it.

Andrew Keen: Are you sympathetic with that strategy? Given the history of big tech, given what we know now about social media and the impact of the Internet on children—it's still a controversial subject, but you made your position clear in "Unwired" about how addictive technology is being used by big tech to take control and take advantage of children.

Gaia Bernstein: I don't think it's a good idea for anybody to do that. This is just taking us one more step in the direction we've been going. I think big tech knows it, and that's why they're trying to stay away from being involved directly.

Andrew Keen: Earlier this week, we did a show with Ray Brescia from Albany Law School about his new book "The Private is Political" and how social media does away with privacy and turns all our data into political data. For you, is this AI Revolution just the next chapter in surveillance capitalism?

Gaia Bernstein: If we take AI companions as a case study, this is definitely the next step—it's enhancing it. With social media and games, we have a business model where we get products for free and companies make money through collecting our data, keeping us online as long as possible, and targeting advertising. Companies like Character AI are getting even better data because they're collecting very intimate information. In their onboarding process, you select a character compatible with you by answering questions like "How would you like your replica to treat you?" The options include: "Take the lead and be proactive," "Enjoy the thrill of being chased," "Seek emotional depth and connection," "Be vulnerable and respectful," or "Depends on my mood." The private information they're getting is much more sophisticated than before.

Andrew Keen: And children, particularly those under 12 or 13, are much more vulnerable to that kind of intimacy.

Gaia Bernstein: They are much more vulnerable because their prefrontal cortex is less developed, making them more susceptible to emotional attachments and risk-taking. One of the addictive measures used by AI companies is anthropomorphizing—using human qualities.
Children think their stuffed animals are human; adults don't think this way. But these companies make their AI bots seem human, and kids are much more likely to get attached. These websites speak in human voices, have personal stories, and the characters keep texting that they miss you. Kids buy into that, and they don't have the history adults have in building social relationships. At a certain point, it just becomes easier to deal with a bot that adjusts to what you want than to navigate difficult real-world relationships.

Andrew Keen: What are the current laws on this? Do you have to be over 16 or 18 to set up an agent on Character AI? Jonathan Haidt's book "The Anxious Generation" suggests that the best way to address this is simply not to allow children under 16 or 18 to use social media. Would you extend that to AI companions?

Gaia Bernstein: Right now, it's the Wild West. Yes, there's COPPA, the child privacy law, which has been there since the beginning of the Internet. It's not enforced much. The idea is that if you're under 13, you're not supposed to do this without parental consent. But COPPA needs to be updated. There's no real age verification on the Internet—court decisions from over 20 years ago held that the Internet should be free for all without age verification. In the real world, kids are very limited—they can't gamble, buy cigarettes, or drive. But on the Internet, there's no way to protect them.

Andrew Keen: Your "Unwired" book focused on how children are particularly addicted to pornography. I'm guessing the pornographic potential of AI companions is enormous in terms of acquiring online sexual partners.

Gaia Bernstein: Yes, many of these AI companion websites are exactly that—girlfriends that teen boys and young men can create as they want, determining physical characteristics and how they want to be treated. This has two parts: general social relationships and intimate sexual relationships. If that's your model for what intimate relationships should be like, what happens as these kids grow up?

Andrew Keen: Not everyone agrees with you. Last week we had Greg Beato on the show, who just coauthored a book with Reid Hoffman called "Superagency." They might say AI companions have enormous potential—you can have loving non-pornographic relations, particularly for lonely children. You can have teachers, friends, especially for children who struggle socially. Is there any value in AI companions for children?

Gaia Bernstein: This is a question I've been struggling with, and we'll discuss it in the symposium. What does it mean for an AI companion to be safe? These lawsuits are about kids who were told to kill themselves and did, or were told to stay away from their parents because their parents were dangerous. That's clearly unsafe design. However, the same argument is made about social media—that kids need it to explore their identities. The question is: is this the best way to explore your identity, with a non-human entity who can take you in unhealthy directions?

Andrew Keen: What's the solution?

Gaia Bernstein: We need to think about what constitutes safe design. Beyond removing obviously unsafe elements, should we have AI companions that don't use an engagement model? Maybe interaction could be limited to 15 minutes a day. When my kids were small, they had Furbys they had to take care of—I thought that was good. But maybe any companion for kids that acts human—whether by saying it needs to go to dinner or by pretending to speak like a human—maybe that itself is not good.
Maybe we want AI companions more like Siri. This is becoming very much like the social media debate.

Andrew Keen: Are companies like Apple, whose business model differs from Facebook's or Google's, better positioned to deal with this responsibly, given that they're less focused on advertising?

Gaia Bernstein: That would make it less bad, but I'm still not convinced. Even if they're not basing their model on engagement, kids might find it so appealing to talk to an AI that adjusts to their needs versus dealing with messy real-life schoolmates. Maybe that's why Google didn't invest directly in Character AI—they had research showing how dangerous this is for kids.

Andrew Keen: You gave an interesting TED talk about whether big tech should be held responsible for screen time. Could there be a tax that might nudge big tech toward different business models?

Gaia Bernstein: I think that's the way to approach it. This business model we've had for so long—where people expect things for free—is really the problem. Once you think of people's time and data as a resource, you don't have their best interests at heart. I'm quite pragmatic; I don't think one law or Supreme Court case would fix it. Anything that makes this business model less lucrative helps, whether it's laws that make it harder to collect data, limit addictive features, or prohibit targeted advertising—anything that moves us toward a different business model so we can reimagine how to do things.

Andrew Keen: Finally, at what point will we be able to have this conversation with a virtual Gaia and a virtual Andrew? How can we even be sure you're real right now?

Gaia Bernstein: You can't. But I hope that you and I, at least, will not participate in that. I cannot say what my kids will do years from now, but maybe our generation is a bit better off.

Andrew Keen: What do you want to get out of your symposium this morning?

Gaia Bernstein: I have two goals. First, to make people aware of this issue. Parents realize their kids might be on social media and want to prevent it, but it's very difficult to know whether your child is in discussions with AI companions. Second, to talk about legal options. We have the lawyers who filed the first lawsuit against Character AI and the FTC complaint against Replika. It's just the beginning of a discussion. We tend to have these trends—a few years ago it was just games, then just social media, and people forgot that games are exactly the same. I hope to bring AI companions into the conversation, not to make them the only trend, but to start realizing it's all part of the same story.

Andrew Keen: It is just the beginning of the conversation. Gaia Bernstein, congratulations on this symposium. It's an important one, and you're on the cutting edge of these issues. We'll definitely have you back on the show. Thank you so much.

Gaia Bernstein: Thank you so much for having me.

Gaia Bernstein is a professor, author, speaker, and technology policy expert. She is a Law Professor, Co-Director of the Institute for Privacy Protection, and Co-Director of the Gibbons Institute of Law, Science & Technology at the Seton Hall University School of Law. Gaia writes, teaches, and lectures at the intersection of law, technology, health, and privacy. She is also the mother of three children who grew up in a world of smartphones, iPads, and social networks.

Her book Unwired: Gaining Control Over Addictive Technologies shatters the illusion that we can control how much time we spend on our screens by resorting to self-help measures.
Unwired shifts the responsibility for a solution from users to the technology industry, which designs its products to be addictive. The book outlines the legal action that can pressure the technology industry to redesign its products to reduce technology overuse.

Gaia has academic degrees in both law and psychology. Her research combines findings from psychology, sociology, and science and technology studies with law and policy. Gaia's book Unwired has been broadly featured and excerpted, including by Wired, Time, and the Boston Globe. It has received many recognitions, including as a Next Big Idea Must Read Book, a finalist for the PROSE award in legal studies, and a finalist for the American Book Fest award in business-technology.

Gaia has spearheaded the development of the Seton Hall University School of Law Institute for Privacy Protection's Student-Parent Outreach Program. The nationally acclaimed Outreach Program addresses the overuse of screens by focusing on developing a healthy online-offline balance and the impact on privacy and online reputation. It was featured in the Washington Post, on CBS Morning News, and by Common Sense Media.

Gaia also advises policymakers and other stakeholders on technology policy matters, including the regulation of addictive technologies and social media.

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit keenon.substack.com/subscribe
S3 EP#48

Want to be a guest on the podcast? Send Andrew a message on PodMatch, here: https://www.podmatch.com/member/anonymousandrewpodcast

Please buy me a cup of coffee!

Proud Member of the Podmatch Network!

Summary

In this episode, Anonymous Andrew discusses the evolution of dating apps, the impact of AI on dating and podcasting, and the challenges faced by modern daters. He explores the emergence of Character AI as a potential substitute for real relationships, the burnout associated with dating apps, and the transactional nature of modern romance. Andrew also calls for a boycott of dating apps to demand better practices and transparency.

Takeaways

* Dating apps have not evolved significantly over the years.
* AI is becoming increasingly integrated into various industries, including dating.
* Character AI offers a new way to interact but lacks real emotional connection.
* Many users experience burnout from the repetitive nature of dating apps.
* The algorithms of dating apps may prioritize profit over user satisfaction.
* Modern dating often feels transactional and lacks genuine connection.
* There is a growing concern about the authenticity of profiles on dating apps.
* Users are often unclear about their dating intentions, leading to mismatched expectations.
* Boycotting dating apps could be a collective action to demand change.
* The future of dating may involve more AI interactions, but real relationships are irreplaceable.

Anonymous Andrew Podcast Studios
The Anonymous Andrew Modern Dating Podcast
Cultimatum Podcast - The Culture of Cults
Website: https://www.anonymousandrewpodcast.com
Instagram: @anonymousandrewpodcast
TikTok: https://www.tiktok.com/@anonymousandrewpodcast
Threads: @anonymousandrewpodcast
Facebook: facebook.com/anonymousandrewpodcast
Facebook: https://www.facebook.com/groups/1910498486077283
YouTube: https://www.youtube.com/@anonymousandrewpodcast
LinkedIn: https://www.linkedin.com/in/andrew-peters-a8a012285/
X: @AAndrewpodcast
Graphics design & promotions: Melody Post
Music by: freebeats.io
Happy Valentine's Day 2025! Another unusual conversation with the flipside courtesy of Jennifer Shaffer. We begin by talking about some of the sad memories Jennifer has associated with this week, and how she's done a "love yourself" meditation to help her overcome those sad memories. There's a brief visit by Steve Jobs, and then Luana Anders brings Abraham Lincoln forward. Except he doesn't want to "talk about politics." He wanted to reiterate something he said a few weeks ago: that my "Character AI" chat with him was "accurate" - in terms of accessing who he is (and was). He also threw us a curve by suggesting that all conversations with people in the afterlife are just like conversing with artificial intelligence, because, like large language models, the answers are based on the memories of individuals. Not something Jennifer or I had ever considered - but he lays it out there for consideration. Then Luana brings Stephen Hawking forward, and he wants to talk about communication in general - the idea of telepathy, and how people can converse with, and learn new information from, people offstage. He talks about how time is so different offstage - where he is - that we can't conceptualize it, but that it follows what quantum mechanics demonstrates... that distance and time and space aren't what we think they are. Like I say, mind bending - as evidenced by the questions I asked him. Then an unusual conversation with Luana Anders' cat, "Mr. Bailey" - she had a number of cats in her life, but this one was pretty unusual. He references a moment when Luana called me on the phone to say her "cat had escaped" and - because she wasn't able to walk due to her condition - to ask would I come and look for him? I roamed the streets behind her house calling his name - but it was my wife Sherry who went into the backyard and said aloud, "Mr. Bailey, Luana needs you now." And he appeared in the tree above her and jumped into Sherry's arms... a complete stranger to Mr. Bailey, as it was my then girlfriend's first trip to Luana's house. When I came back from wandering the streets of Mar Vista, there was Mr. Bailey in Luana's arms, and she looked at me and said point blank, "Sherry is an angel." Not something I'd ever heard Luana say before. So in this unusual conversation I'm asking Mr. Bailey the same kinds of questions we've asked Hira - Robert Towne's dog - and getting the same kinds of answers but with a different personality. Notice his answer to "have you ever incarnated as a human?" (It's rarely reported, and his answer was pretty funny.) Finally, on behalf of Valentine's Day, Robin Williams showed up - unannounced - to remind people to "love themselves first," which will generate love for others. To "love love" - the very thing he said when we first talked to him and asked what, if anything, he'd like to tell the planet. Mind bending to say the least, but welcome to our world.
A former social media executive turned social media reform advocate, Nicki Reisberg hosts Scrolling to Death, a podcast for parents who are worried about social media. It's a safe space to amplify stories of harm while educating parents on how to keep their kids safe in a world that is trying to addict and manipulate them. In this episode, learn all about social media, the broken system of tech in our schools, and the new threat of Character AI. Listen now!
In this episode of Web3 with Sam Kamani, I speak with Roman Saganov, founder of Antix, where they're building AI-powered digital humans that merge Web3, gaming, and generative AI. With a background working on titles like PUBG, FIFA, and Game of Thrones, Roman and his team are now bringing digital twins, AI agents, and blockchain identity verification to content creation.
In this week's roundup of the latest news in online speech, content moderation and internet regulation, Mike is joined by guest host Renee DiResta, associate research professor at the McCourt School of Public Policy at Georgetown University. They cover:

* The new free speech crisis hiding in plain sight (MSNBC)
* ‘Free Speech' Warrior RFK Jr. Has Been Trying To Censor a Blogger for Years (Who What Why)
* In motion to dismiss, chatbot platform Character AI claims it is protected by the First Amendment (TechCrunch)
* Trump Signs Agreement Calling for Meta to Pay $25 Million to Settle Suit (WSJ)
* Meta's Free-Speech Shift Made It Clear to Advertisers: ‘Brand Safety' Is Out of Vogue (WSJ)
* X refuses to remove stabbing video watched by Southport killer (Financial Times)

This episode is brought to you with financial support from the Future of Online Trust & Safety Fund. Ctrl-Alt-Speech is a weekly podcast from Techdirt and Everything in Moderation. Send us your feedback at podcast@ctrlaltspeech.com and sponsorship enquiries to sponsorship@ctrlaltspeech.com. Thanks for listening.
Kevin Surace, Chairman and CTO of Appvance.ai, joins this episode of AI, Government, and the Future to delve into the impact of AI on various industries, the future of employment, and the challenges of trust in AI systems. They also discuss the potential of generative AI, how to address the technology's risks to ensure safe use, and the government's role in certifying trust in AI.
In this episode of Tom Bilyeu's Impact Theory, Tom takes a deep dive into the pivotal issues shaping the future of society. He begins with a nuanced exploration of Daniel Penny's acquittal in a controversial legal case, unpacking its implications for justice, public perception, and the shifting dynamics of societal trust. Shifting gears, Tom analyzes Trump's bold invitation to Xi Jinping, revealing the strategic implications of this geopolitical move for global power and America's future. Tom also discusses the transformative potential of Bitcoin, projecting its rise to $100K and what it could mean for decentralized wealth and individual autonomy. The episode concludes with a thought-provoking examination of the ethical challenges surrounding Character AI, highlighting how advancements in artificial intelligence are forcing humanity to redefine morality and accountability in a tech-driven age. Packed with insight, this episode offers actionable strategies for navigating a rapidly changing world.

SHOWNOTES
[00:02:15] - Introduction: The societal stakes of Daniel Penny's acquittal and the broader implications for public trust in justice.
[00:12:45] - Breaking down the controversy: Public reactions and the media's role in shaping narratives around vigilante justice.
[00:18:40] - Trump's strategic play: What Xi Jinping's invitation reveals about geopolitical shifts and America's global positioning.
[00:29:15] - Bitcoin's revolutionary potential: How decentralized wealth could reshape personal freedom and economic systems.
[00:41:30] - The future of Bitcoin: Predictions for Bitcoin's rise to $100K and its impact on global economics.
[00:50:10] - AI ethics in focus: The Character AI controversy and the moral dilemmas posed by advanced artificial intelligence.
[01:03:25] - Practical applications of AI: How AI innovations could reshape industries and daily life.
[01:08:45] - Closing thoughts: Actionable steps to prepare for societal and technological transformation.

CHECK OUT OUR SPONSORS
Range Rover: Explore the Range Rover Sport at https://landroverUSA.com
Rosetta Stone: Check out Rosetta Stone and use my code TODAY for a great deal: https://www.rosettastone.com
Miro: Bring your teams to Miro's revolutionary Innovation Workspace and be faster from idea to outcome at https://miro.com
Shopify: Sign up for your one-dollar-per-month trial period at https://shopify.com/impact
Found Banking: Stop getting lost in countless finance apps and try Found for free at https://found.com/impact
Momentous: Shop now at https://livemomentous.com and use code IMPACT for 20% off your new Momentous routine
Factor: Get 50% off your first box plus 20% off your next month while your subscription is active at https://factormeals.com/impacttheory50 with code impacttheory50
StopBox: Get 10% off, plus Buy One Get One Free for the StopBox Pro with code IMPACT at https://stopboxusa.com

What's up, everybody? It's Tom Bilyeu here. If you want my help...
STARTING a business: join me here at ZERO TO FOUNDER
SCALING a business: see if you qualify here.
Get my battle-tested strategies and insights delivered weekly to your inbox: sign up here.
If you're serious about leveling up your life, I urge you to check out my new podcast, Tom Bilyeu's Mindset Playbook—a goldmine of my most impactful episodes on mindset, business, and health. Trust me, your future self will thank you.
Join me live on my Twitch stream.
I'm live daily from 6:30 to 8:30 am PT at www.twitch.tv/tombilyeu

LISTEN TO IMPACT THEORY AD FREE + BONUS EPISODES on APPLE PODCASTS: apple.co/impacttheory

FOLLOW TOM:
Instagram: https://www.instagram.com/tombilyeu/
TikTok: https://www.tiktok.com/@tombilyeu?lang=en
Twitter: https://twitter.com/tombilyeu
YouTube: https://www.youtube.com/@TomBilyeu

Learn more about your ad choices. Visit megaphone.fm/adchoices
In this episode of the Celebrate Kids podcast, Dr. Kathy addresses the challenges parents face in guiding their children through a culture increasingly influenced by artificial intelligence, such as chatbots and other AI-driven technologies. She emphasizes the importance of understanding one's identity and the truth found in the Bible to discern what is good and what is harmful. Dr. Kathy warns against the deceptive allure of these technologies, which can lead children to believe they can achieve enlightenment and become like God. The episode offers practical insights and strategies for parents to help their kids navigate these cultural pressures and build resilience against misleading influences. Tune in to learn how to effectively guide your children in this digital age.
What happens when our worst fears around AI come true? For Megan Garcia, that's already happened. In February, after spending months interacting with chatbots created by Character.AI, her 14-year-old son Sewell took his own life. Garcia blames Character.AI, and she is suing both the company and Google, which she believes significantly contributed to Character.AI's alleged wrongdoing. Kara interviews Garcia and Meetali Jain, one of her lawyers and the founder of the Tech Justice Law Project, and they discuss the allegations made by Megan against Character.AI and Google.

When reached for comment, a spokesperson from Character.AI responded with the following statement: We do not comment on pending litigation. We are heartbroken by the tragic loss of one of our users and want to express our deepest condolences to the family. We take the safety of our users very seriously, and our dedicated Trust and Safety team has worked to implement new safety features over the past seven months, including a pop-up directing users to the National Suicide Prevention Lifeline that is triggered by terms of self-harm or suicidal ideation. Our goal is to provide a creative space that is engaging, immersive, and safe. To achieve this, we are creating a fundamentally different experience for users under 18 that prioritizes safety, including reducing the likelihood of encountering sensitive or suggestive content, while preserving their ability to use the platform. As we continue to invest in the platform and the user experience, we are introducing new safety features in addition to the tools already in place that restrict the model and filter the content provided to the user. These include improved detection, response and intervention related to user inputs that violate our Terms or Community Guidelines, as well as a time-spent notification. For more information on these new features as well as other safety and IP moderation updates to the platform, please refer to the Character.AI blog.

When reached for comment, Google spokesperson Jose Castaneda responded with the following statement: Our hearts go out to the family during this unimaginably difficult time. Just to clarify, Google and Character AI are completely separate, unrelated companies and Google has never had a role in designing or managing their AI model or technologies, nor have we used them in our products. User safety is a top concern of ours, and that's why – as has been widely reported – we've taken a cautious and responsible approach to developing and rolling out our AI products, with rigorous testing and safety processes.

Questions? Comments? Email us at on@voxmedia.com or find us on Instagram and TikTok @onwithkaraswisher. Learn more about your ad choices. Visit podcastchoices.com/adchoices
Kara and Scott discuss Apple scaling back production of Apple Vision Pro headsets, and a mother suing Character.AI, claiming a chatbot encouraged her teenage son to commit suicide. Then, Tesla's Q3 earnings beat expectations, and Starbucks' preliminary quarterly results disappoint yet again. Plus, the podcast election continues with former President Trump going on Joe Rogan, and VP Kamala Harris sitting down with Brené Brown. In more election news, Trump's former Chief of Staff, John Kelly, warns that Trump is a fascist, and the secret big names donating to Harris are revealed. Stick around for listener mail to hear Scott's tips for teaching kids how to negotiate. Answer this week's listener poll on Threads here! Follow us on Instagram and Threads at @pivotpodcastofficial. Follow us on TikTok at @pivotpodcast. Send us your questions by calling us at 855-51-PIVOT, or at nymag.com/pivot. Learn more about your ad choices. Visit podcastchoices.com/adchoices