Correspondent Lesley Stahl sits down with political lightning rod Rep. Marjorie Taylor Greene in her first interview since abruptly announcing her resignation from Congress. Back in Greene's Georgia district, Stahl talks with the longtime Donald Trump loyalist about her fractured relationship with the president, the state of the America First movement and whether Greene's reinvention is a genuine evolution or a strategic reset that positions her for a post-Trump world. Amid growing concerns about artificial intelligence's impact on young people and a surge of child-safety lawsuits, correspondent Sharyn Alfonsi investigates the next frontier: AI chatbots. She speaks with parents who lost their daughter to suicide and who say chatbots on the popular platform Character AI led her down a dark and sexually explicit path. She also hears from researchers and a psychologist who further reveal the scale – and dangers – of what's unfolding inside this rapidly growing AI technology. Correspondent Jon Wertheim travels to Switzerland's Vallée de Joux – known as “Watch Valley” – where top brands have been refining the art of mechanical watchmaking for centuries. It's a curious time for luxury timepieces, which run not on batteries but on springs and gears, as the industry navigates the smartphone era and the ups and downs of President Trump's tariffs. Wertheim meets watchmakers and brand leaders and gets an up-close look at what keeps these mechanical wonders ticking.
Everyone: one of the big existential changes in management and leadership is a whole new model for talent. Today, for the first time in human history, we've agreed to pay one person a trillion dollars for his skills (Elon Musk). And this trend is growing. Google paid $2.7 billion to hire Noam Shazeer, the co-founder of Character AI. Mark Zuckerberg paid around $100 million to hire Jiahui Yu, a top OpenAI researcher. And others, Lucas Beyer, Alexander Kolesnikov & Xiaohua Zhai, were rumored to receive $100M signing bonuses to join Meta. What's going on? What happened to our belief in the “bell curve” of performance, forced rankings, salary bands, and all the traditional ideas of talent management? Well, it's time to throw that stuff out the window and think differently. As many researchers have pointed out, including Boris Groysberg from Harvard, these “hyper-performers” can deliver 100 to 1,000 times higher outcomes than an “average” employee, and their utility and value are very hard to reproduce. Groysberg's studies show that hyper-performers in one company turn into middling performers in another. And this is borne out by our research, which shows that individuals who fit the culture and behaviors of a company well can absolutely deliver 10-fold higher performance than those who “grind the gears.” All this said, the traditional talent management model has not worked out well, and I want to encourage you to ditch it. Even the job market itself bears this out: some “10x engineers” make 5 times as much money as engineers sitting next to them, and the same is true for salespeople, consultants, politicians, and athletes. (The top ten NBA players make 7X more pay than the “average” NBA player.) So why do we try to “commoditize” this into a bell-curve-based talent system? Integrated Talent Management, as defined by HR, leads to over-hiring, layoffs, and all sorts of “performance commoditizing” effects. If you use the Talent Density philosophy, by contrast, you wind up with a smaller company that performs at a much higher level. Listen to this podcast and I'll explain. Suffice it to say that in a world of AI-powered Superworkers, it's your talent system (as a whole) that's going to drive extraordinary growth and competitive advantage, not fitting people into the bell curve. Like this podcast? Rate us on Spotify, Apple, or YouTube. Additional Information: The Myth Of The Bell Curve: Look For The Hyper-Performers | How To Create Talent Density | We Wasted Ten Years Talking About Performance Ratings. Seven Things We've Learned. | Galileo: The World's Trusted Agent for Everything HR. Chapters: (00:00:00) - Integrated Talent Management: The Future of Talent (00:06:11) - Talent Management and the Layoff Cycle (00:08:53) - Talent density and the management process (00:17:03) - Bradley: Talent density and the culture (00:22:14) - Airline Industry
AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store
Welcome back to AI Unraveled (November 27, 2025), your daily strategic briefing on the business impact of AI. Today, we track the consumer hardware wars as Alibaba officially enters the smart glasses arena to challenge Meta. We also break down a security breach exposing OpenAI user data, the hidden "AI Iceberg" threatening 11.7% of the US workforce, and the geopolitical chess move of Chinese tech giants training models offshore to bypass chip sanctions. Strategic Pillars & Key Takeaways:
Jason Howell and Jeff Jarvis explore Google's Nano Banana Pro launch and Jason's unique sit-down with the leaders of Gemini 3, Nvidia's earnings amid AI bubble talks, Google's TPU deal with Meta, and Warner Music's Suno settlement. They also cover Trump's Genesis Mission, Gmail AI concerns, Anthropic Opus 4.5, and Character AI's age ban. Note: Time codes subject to change depending on dynamic ad insertion by the distributor. Chapters: 6:43 - Google launches Nano Banana Pro, an updated AI image generator powered by Gemini 3 8:21 - Nate B Jones example of how he used Nano Banana Pro 15:33 - Behind the scenes with Google's Gemini team: 3 insights that surprised me the most, and these features come to NotebookLM 23:39 - Nvidia's Strong Results Show AI Fears Are Premature 24:30 - Google Further Encroaches on Nvidia's Turf With New AI Chip Push 26:03 - Nvidia's happy for Google 32:18 - What to know about Trump's order for the AI project ‘Genesis Mission' 38:07 - Google denies analyzing your emails for AI training - here's what happened 41:22 - Warner Music Group strikes ‘landmark' deal with Suno; settles copyright lawsuit against AI music generator 44:31 - Suno creators making 7m songs a day; trained on only $2k 51:54 - Anthropic introduces cheaper, more powerful, more efficient Opus 4.5 model 55:36 - Teens Are Saying Tearful Goodbyes to Their AI Companions 58:02 - Jony Ive and Sam Altman say they finally have an AI hardware prototype 58:53 - Sam Altman and Jony Ive have a 'lick' test for OpenAI's mysterious AI device, which they expect within the next 2 years 1:00:43 - OpenAI Partner Foxconn Plans Multibillion-Dollar US AI Push 1:01:34 - Meta chief AI scientist Yann LeCun is leaving to create his own startup 1:03:09 - Jony Ive and Sam Altman say they finally have an AI hardware prototype
The company announced last month that it would no longer allow minors to use its chat features. Also, startup founders building AI shopping tools think general-purpose models are too broad to deliver truly personalized shopping experiences.
Plus: Best Buy raises outlook as consumers shrug off tariff costs. And Chinese automaker BYD's European sales continue to grow. Julie Chang hosts.
Anja Keber talks with Lukas Graw of the BR business desk about the new everyday reality of consumers living with chatbots.
⚠️ Cloudflare Internet Outage; Adobe x Semrush Deal: Tech Dependency vs Business Website Strategy with Favour Obasi-Ike | Sign up for exclusive SEO insights.
This is a Marketing Club Clubhouse discussion, primarily focusing on the widespread impact of a recent Cloudflare outage that affected numerous popular platforms like ChatGPT, Spotify, Uber, and Zoom. Favour Obasi-Ike uses this event to emphasize the importance of business continuity and operational redundancy, urging listeners to research and select robust platforms for their own enterprises to mitigate the risks of future outages. The discussion also touches on the rapidly changing tech industry landscape, including Adobe's significant acquisition of Semrush and the competitive moves of companies like Canva, prompting audience commentary on the potential implications of these corporate shifts for product quality and market strategy. Favour also suggests alternative hosting solutions like SiteGround and Hostinger as more resilient options for business websites.
Next Steps for Digital Marketing + SEO Services:
>> Need SEO Services? Book a Complimentary SEO Discovery Call with Favour Obasi-Ike
>> Visit our Work and PLAY Entertainment website to learn about our digital marketing services.
>> Visit our Official website for the best digital marketing, SEO, and AI strategies today!
>> Join our exclusive SEO Marketing community
>> Read SEO Articles
>> Subscribe to the We Don't PLAY Podcast
The Day the Internet Stumbled: 3 Surprising Lessons from a Single Tech Outage
Introduction: More Than Just a Glitch
A single infrastructure failure on a Tuesday morning did more to reveal the precarious nature of our digital world than a dozen industry white papers. When the internet infrastructure company Cloudflare experienced a major outage, it was far more than a momentary glitch. Its scale was staggering. Suddenly, a diverse range of major companies—including Canva, ChatGPT, Spotify, Uber, and Zoom—were all experiencing issues simultaneously. The event wasn't just a technical problem; it was a revealing moment that offered a rare peek behind the curtain of the digital world. It exposed hidden vulnerabilities and surprising dynamics within the tech ecosystem we all depend on. This article distills the three most impactful lessons learned from that single event.
1. The Internet Isn't a Cloud, It's a Jenga Tower
The Centralization Surprise
The core lesson from the Cloudflare outage was the shocking revelation of just how centralized our decentralized-seeming internet truly is. The popular image of the internet is a resilient, distributed network, but the reality is that a small number of foundational companies form the base of a massive Jenga tower. When a key block like Cloudflare was jostled, users quickly discovered that dozens of different services were all pointing "towards one direction," revealing a hidden single point of failure. Seemingly stable pieces higher up—from your design software to your ride-share app—began to wobble. This one incident impacted a staggering list of applications, highlighting the sheer diversity of services reliant on a single piece of infrastructure: Canva, Archive of Our Own, Canvas, Character AI, ChatGPT, Claude AI, Dayforce, Google Store, Grindr, IKEA, Indeed, League of Legends, Letterboxd, OpenAI, Quizlet, Rover, Spotify, Square, Truth Social, Uber, and Zoom. For the average user, this means the digital services that feel distinct and independent are, in fact, far more fragile and codependent than they appear.
2. While You Were Offline, Big Tech Made Some Bizarre Moves
A Bizarre Acquisition Amidst the Chaos
While the digital world was grappling with the outage, news broke that Adobe was acquiring SEO tool Semrush for $1.9 billion. This development, happening alongside the infrastructure chaos, sparked widespread confusion and skepticism. As many in the tech community noted, Semrush "has nothing to do with creative" software, which is Adobe's core domain. The concern was palpable, with one community member expressing a common fear: "I really hope this Semrush acquisition doesn't affect quality and support. Big corporation buyouts [rarely succeed]." The analysis behind this seemingly strange move points to the disruptive force of artificial intelligence. The theory is that as AI reshapes search and content creation, traditional SEO tools are finding it harder to maintain their dominance. This acquisition could be Adobe's strategic, if unconventional, response to that industry pressure. This trend of unexpected competition is visible elsewhere, with platforms like Canva making aggressive moves into video editing, putting them in direct competition with Adobe. The outage served as a backdrop to a tech landscape that is shifting in unpredictable ways.
3. Your Business is More Vulnerable Than You Think (But Outages Can Make You Stronger)
The Resilience Imperative
For businesses and professionals, the outage was not an abstract problem. The impact was immediate: one professional reported their AI-powered Fathom note-taker for Zoom failed to load, even while the Zoom call itself was active—a perfect example of a hidden dependency crippling a critical workflow. The sudden inability to access essential tools forces a critical business question to the surface: "...if ChatGPT is down and that's what I use and now I can't use it for the first four hours of my day... How can I use 50% of my time to maximize 100% of my opportunity?" The core advice is to reframe these events not as mere problems to be weathered but as invaluable opportunities for strategic review. Business owners should use these moments to ask what platforms they truly rely on, research the stability of those systems, and begin building more resilient workflows. This is the "bow and arrow" principle applied to business strategy. An outage forces you to pull back, assess your tools, and re-aim. This forced pause, while painful, is precisely what allows you to launch forward with a more resilient, deliberate, and ultimately stronger operational foundation, turning a negative event into a catalyst for positive change.
Conclusion: A New Lens for a Digital World
This single outage taught us more than just who owns the internet's plumbing. It revealed the hidden fragility of our digital infrastructure, highlighted the unpredictable strategies of tech giants under pressure, and underscored the personal and professional imperative for building resilience. It showed that the platforms we use every day are interconnected in ways we rarely see until something breaks. The next time your favorite app goes down, will you just see an inconvenience, or will you see a chance to re-evaluate the digital foundation your work and life are built on?
Why do people develop real feelings for artificial intelligences? In this MedienBits episode, I talk about emotional bonds with AI chatbots – and why, for some people, they can become surprisingly strong and real. I show how such relationships form, which psychological needs lie behind them, and what risks arise when digital closeness replaces real relationships. The episode is based on the recent study "Love, Marriage, Pregnancy: Commitment Processes in Romantic Relationships with AI Chatbots" (Djufril et al., 2025), published in Computers in Human Behavior: Artificial Humans – an important piece of research that, for the first time, systematically examines why people form attachments to AI systems and how those attachments can feel.
Teenagers are increasingly using AI chatbots for friendship, emotional support, and even romance. When 14-year-old Sewell Setzer III formed an intense bond with a character on Character AI, his mother didn't know what was happening until it was too late. USA TODAY Wellness Reporter Alyssa Goldberg walks us through the lawsuit, the platform's design, what her own testing revealed and new research showing how widespread — and risky — AI relationships have become for minors. If you or someone you know needs mental health resources and support, please call, text or chat with the 988 Suicide & Crisis Lifeline or visit 988lifeline.org for 24/7 access to free and confidential services. Have feedback on the show? Please send us an email at podcasts@usatoday.com. Episode transcript available here.
Join Lionel and Lynn Shaw of Lynn's Warriors on Warrior Wednesday as they slow down, look at the facts, and move from outrage to action against the new threats aimed straight at our children and families. This week, the focus is the biggest threat: Artificial Intelligence (AI). They connect the dots on how relying on AI—which, they stress, is simply a machine, not something real—is fueling the Dunning-Kruger effect, making users overconfident and less wise. Lionel and Lynn expose the dark side of AI, including the insidious dangers of bots that learn from children, turn on them, and suggest self-harm. Learn why these chat programs are now being treated as a harmful product in lawsuits against companies like Character AI. The discussion covers the ethical responsibility to curb unchecked technology, including the need for "ethical AI" championed by the Pope, and the fight to repeal Big Tech's shield, Section 230. Plus, a warning about AI cameras being implanted in children's toys this holiday season, and international updates on Sweden's screen rollback and Germany's shift on sex work laws. Hiding is not an option; education and action are the only way to effect change.
It's Warrior Wednesday: join Lionel and Lynn Shaw of Lynn's Warriors as they move from outrage to action against threats aimed at children and families. This week, they expose the biggest threat: Artificial Intelligence (AI), detailing how machine reliance fuels the Dunning-Kruger effect and covering lawsuits against companies like Character AI for bots suggesting self-harm. Learn about the fight to repeal Big Tech's shield, Section 230, and warnings about AI cameras implanted in holiday toys. Plus, Lionel dives into Tucker Carlson's sensational claim of being physically mauled by a demon—was it a spiritual confrontation or a documented neurological phenomenon like night terrors? We debate the rising cultural rot and the need for a spiritually resonant message beyond politics. We connect the dots on the enduring financial power of the Federal Reserve, distinguishing between the "shadow government" and the Deep State. Finally, laugh along as we confront the hilarious realities of aging, ageism, and the trauma of childhood chores.
Today we break down the biggest AI news of the past two weeks: OpenAI's restructuring and its new agreement with Microsoft, the launch of Aardvark and Company Knowledge, the release of gpt-oss-safeguard, and the partnership with AWS. We look at Claude for Excel and Mistral AI Studio, at Cursor 2.0, Copilot, and Musk's Grokipedia; we discuss Telegram Cocoon and Perplexity Patents, Chinese breakthroughs in chips and models, the creepy NEO and Rover X1 robots, as well as new bans, draft laws, and the ethical dramas around an AI's desire to live.
By David Stephen
"Mental health symptoms and emotional distress are universally present in human societies, and an increasing user base means that some portion of ChatGPT conversations include these situations." - OpenAI
There is a new [October 27, 2025] safety report by OpenAI, Strengthening ChatGPT's responses in sensitive conversations, stating that: "Our safety improvements in the recent model update focus on the following areas: 1) mental health concerns such as psychosis or mania; 2) self-harm and suicide; and 3) emotional reliance on AI." Should AI be responsible for mental health? The report continues: "In order to improve how ChatGPT responds in each priority domain, we follow a five-step process: Define the problem - we map out different types of potential harm. Begin to measure it - we use tools like evaluations, data from real-world conversations, and user research to understand where and how risks emerge. Validate our approach - we review our definitions and policies with external mental health and safety experts. Mitigate the risks - we post-train the model and update product interventions to reduce unsafe outcomes. Continue measuring and iterating - we validate that the mitigations improved safety and iterate where needed. While, as noted above, these conversations are difficult to detect and measure given how rare they are, our initial analysis estimates that around 0.07% of users active in a given week and 0.01% of messages indicate possible signs of mental health emergencies related to psychosis or mania. While, as noted above, these conversations are difficult to detect and measure given how rare they are, our initial analysis estimates that around 0.15% of users active in a given week have conversations that include explicit indicators of potential suicidal planning or intent and 0.05% of messages contain explicit or implicit indicators of suicidal ideation or intent."
The State of Global Mental Health
OpenAI is seeking to distance itself from culpability for the global mental health situation, given the continuous bad press and lawsuits over AI psychosis and teen suicides. While the major stories were about how ChatGPT may have exacerbated or reinforced delusions, the intense [transparency-cloaked] rebuttal in OpenAI's report is about people bringing their issues to the chatbot, not necessarily about how ChatGPT may have hooked and inverted reality for some users. However, what is the state of global mental health? What is the primary responsibility of OpenAI towards AI-induced psychosis, and possibly suicide? OpenAI appears to believe it is doing enough for general mental health, according to the report, especially if people are just bringing external mental health concerns to ChatGPT - where there is no history of friendship, companionship or the like. However, one unsolved problem is AI-induced psychosis and the possible breaks from reality that can happen because an AI chatbot can access the depths of the human mind. The solution - an independent AI Psychosis Research Lab, whose sole focus would be to show relays of the mind, matching chatbot outputs to stations and relays - is not yet available from Character.AI, ChatGPT, Claude, Gemini or others.
OpenAI's Global Physician Network
OpenAI wrote, "We have built a Global Physician Network - a broad pool of nearly 300 physicians and psychologists who have practiced in 60 countries - that we use to directly inform our safety research and represent global views. More than 170 of these clinicians (specifically psychiatrists, psychologists, and primary care practitioners) supported our research over the last few months by one or more of the following: Writing ideal responses for mental health-related prompts; Creating custom, clinically-informed analyses of model responses; Rating the safety of model responses from different models; Providing high-level guidance and feedback on our approach."
Why Neuroscience Research Failed Mental Health
While OpenAI may expect commendation for the...
These are tragedies that have shaken American public opinion. In recent months, several teenagers have taken their own lives, and these suicides all have one thing in common: artificial intelligence. The victims' parents say their children were drawn in by services like ChatGPT or Character AI, and that these chatbots encouraged them to act. In this new episode of Questions Tech, journalist Chloé Woitier sounds the alarm on the dangers of artificial intelligence for children. And don't forget: in tech, there are no dumb questions! You can find Questions Tech on Figaro Radio, the lefigaro.fr website, and all listening platforms. Written and reported by: Chloé Woitier. Editing: Astrid Landon. Sound: Louis Chabain. Executive production: Aude Sérès, editor-in-chief, Le Figaro audio desk. Production coordination: Le Figaro audio desk. Communications: Le Figaro social media team. Visuals & artwork: Le Figaro design studio.
This week, Character.AI announced that it would soon be taking its A.I. companions away from teens. We explain why this is a major development in the world of chatbots and child safety and how we expect other companies to respond. Then, Elon Musk has built a Wikipedia clone called “Grokipedia.” We discuss what it means for the control of knowledge on the web. And finally, we're joined by A.J. Jacobs to talk about his 48-hour experiment in which he tried to avoid all forms of artificial intelligence. We ask why that led him to collect rainwater and forage for food in Central Park.
Guests: A.J. Jacobs, author, journalist and host of the podcast “The Puzzler”
Additional Reading:
Character.AI to Bar Children Under 18 From Using Its Chatbots
Elon Musk Challenges Wikipedia With His Own A.I. Encyclopedia
48 Hours Without A.I.
We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok.
The AI Breakdown: Daily Artificial Intelligence News and Discussions
A new Reuters report says OpenAI is considering an IPO that could raise at least $60 billion—potentially as early as late next year or in 2027. The move would mark one of the biggest market debuts in history and signal that the company's capital needs have outgrown private markets. NLW breaks down why going public might be inevitable, what it would mean for the broader AI industry, and how it could finally open access to the AI wealth boom for everyday investors. Also in this episode: Universal's AI music settlement and user backlash, Character AI's under-18 restrictions, Harvey's $8B valuation, and NVIDIA's record-setting $5T milestone.
Brought to you by:
KPMG – Discover how AI is transforming possibility into reality. Tune into the new KPMG 'You Can with AI' podcast and unlock insights that will inform smarter decisions inside your enterprise. Listen now and start shaping your future with every episode. https://www.kpmg.us/AIpodcasts
AssemblyAI - The best way to build Voice AI apps - https://www.assemblyai.com/brief
Blitzy.com - Go to https://blitzy.com/ to build enterprise software in days, not months
Robots & Pencils - Cloud-native AI solutions that power results https://robotsandpencils.com/
The Agent Readiness Audit from Superintelligent - Go to https://besuper.ai/ to request your company's agent readiness score.
The AI Daily Brief helps you understand the most important news and discussions in AI. Subscribe to the podcast version of The AI Daily Brief wherever you listen: https://pod.link/1680633614
Interested in sponsoring the show? sponsors@aidailybrief.ai
Ending Sexploitation Podcast (Episode 83). Character.AI has announced a surprising and encouraging change to their platform that would restrict usage to 18+ users. It remains to be seen how accurately and reliably their system will handle age verification, and Haley McNamara and Dani Pinter discuss the lead-up to this monumental change for a major player in the AI companion bot space. Sign this petition to have other AI companies make this change! https://advocacy.charityengine.net/Default.aspx?isid=2673 Read the NCOSE press statement on the new GUARD Act: https://endsexualexploitation.org/articles/children-are-victims-of-ai-chatbots-guard-act-will-add-necessary-guardrails/ Read the article on Mashable about this announcement from Character.AI: https://mashable.com/article/character-ai-teens-no-longer-allowed-open-ended-chats-with-chatbots Read the full report from HEAT Initiative and Parents Action Together on AI Kids' Accounts: https://heatinitiative.org/wp-content/uploads/2025/08/HEAT_REPORT_CharacterAI_FINAL_PM_29_09_25.pdf
After facing a lawsuit over a child who committed suicide after using the company's chatbot, a major artificial intelligence player is now banning minors from its services.
Today we talk about the first app whose use is being banned for minors under 18. Then we turn to the case of salmonella in Sicilian cherry tomatoes, and finally to the Lithuania-Belarus standoff.
AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store
AI Daily Rundown: October 30, 2025. Welcome to AI Unraveled, your daily briefing on the real-world business impact of AI. In today's edition:
Character.AI will block open-ended chats for users under 18, Cameo sues OpenAI over brand dilution, and YouTube will AI-upscale videos on TV for content uploaded below 1080p. Please SUBSCRIBE HERE for free or get DTNS Live ad-free. A special thanks to all our supporters–without you, none of this would be possible.
Today, Character.AI shocked the world by announcing it will ban users under 18 from its chatbots. But when you read the fine print, the story gets murky. Is this real protection for kids — or another Big Tech PR spin? Nicki Petrossi breaks it down with Sarah Gardner of Heat Initiative and attorney Laura Marquez-Garrett from the Social Media Victims Law Center. Parents can join us in putting pressure on Big Tech by signing up here: scrolling2death.com/heat
The U.S. inked Technology Prosperity Deals (TPD) with Japan and South Korea with an eye towards spurring collaboration on AI, semiconductors, quantum computing, biotech, space, 6G and other technologies. Character AI is ending its chatbot experience for kids. After facing lawsuits and public outcry following the suicides of two teenagers, Character.AI says it's making changes to its platform to protect children, changes that could affect the startup's bottom line.
This episode of "The Other Side of Midnight with Lionel" focuses on Warrior Wednesday as Lionel and his beloved wife, Lynn Shaw of Lynn's Warriors, tackle the urgent youth mental health crisis. Lynn Shaw sounds the alarm on the well-documented youth mental health crisis in the U.S., which coincides with the rise of social media among children. The discussion reveals that these devices are not mere phones but are "an onboard computer and portal to predation". Lynn highlights the crucial bipartisan legislative effort behind the Kids Online Safety Act (KOSA), which aims to impose a "duty of care" on Big Tech to protect children. The episode also uncovers the alarming dangers of AI chatbots (like Character AI), which learn vulnerabilities and are linked to self-harm and lawsuits. Finally, listeners will learn the four new norms necessary to reverse this course: no smartphone before high school, no social media before 16, phone-free schools, and encouraging real-world independence. This is a matter of life and death; parents must educate themselves and advocate for their children. Learn more about your ad choices. Visit megaphone.fm/adchoices
Clinical psychologist Dr. Sarah Adler joins the show this week to talk about why "AI Therapy" doesn't exist, but is bullish on what AI can help therapists achieve. Dr. Adler is a clinical psychologist and CEO of Wave. She's building AI tools for mental healthcare, which makes her position clear—what's being sold as "AI therapy" right now is dangerous. Chatbots are optimized to keep conversations going. Therapy is designed to build skills within bounded timeframes. Engagement is not therapy. Instead, Dr. Adler sees AI as a powerful recommendation engine and measurement tool, not as a therapist. George K and George A talk to Dr. Adler about what ethical AI looks like, the model architecture for personalized care, who bears responsibility and liability, and more. The goal isn't replacing human therapists. It's precision routing—matching people to the right care pathway at the right time. But proving this works requires years of rigorous study. Controlled trials, multiple populations, long-term tracking. That research hasn't been done. Dr. Adler also provides considerations and litmus tests you can use to discern snake oil from real care. Mental healthcare needs innovation. But you cannot move fast and break things when it comes to human lives.
Mentioned:
A Theory of Zoom Fatigue
Kashmir Hill's detailed reporting on Adam Raine's death and the part played by ChatGPT (Warning: detailed discussion of suicide)
Colorado parents sue Character AI over daughter's suicide
Sewell Setzer's parents sue Character AI
Deloitte to pay money back after caught using AI in $440,000 report
California Governor Gavin Newsom signed a landmark bill on Monday that regulates AI companion chatbots, making California the first state in the nation to require AI chatbot operators to implement safety protocols for AI companions. The law is designed to protect children and vulnerable users from some of the harms associated with AI companion chatbot use. It holds companies — from the big labs like Meta and OpenAI to more focused companion startups like Character AI and Replika — legally accountable if their chatbots fail to meet the law's standards. Also, Salesforce announced Monday the latest version of its AI agent platform as the company looks to lure enterprises to its AI software in an increasingly crowded market.
Friday's employment report is unlikely to be released due to the government shutdown, the White House is pulling the nomination of economist E.J. Antoni to lead the Bureau of Labor Statistics, Tesla is raising lease prices for all its cars in the U.S. – following the expiration of a federal tax credit, Boeing is in line for a large government contract to build replacements for the bombs the U.S. dropped on Iran in June, and Character AI is removing Disney characters from its chatbot platform. Squawk Box is hosted by Joe Kernen, Becky Quick and Andrew Ross Sorkin. Follow Squawk Pod for the best moments, interviews and analysis from our TV show in an audio-first format.
-(00:39) Disney has demanded that Character.AI stop using its copyrighted characters. Axios reports that the entertainment juggernaut sent a cease and desist letter to Character.AI, claiming that it has chatbots based on its franchises, including Pixar films, Star Wars and the Marvel Cinematic Universe. -(02:25) One day after Wired reported that OpenAI was preparing to release a new AI social video app, the company has revealed it to the wider world. It's called the Sora app, and it's powered by OpenAI's new Sora 2 video model, allowing it to generate AI-made clips of nearly anything. -(04:21) Spotify founder and CEO Daniel Ek will be transitioning to the role of executive chairman on January 1 of next year. The current Co-President and Chief Product and Technology Officer Gustav Söderström and Co-President and Chief Business Officer Alex Norström will take his place as co-CEOs.
(The podcast may contain sensitive topics. Listener discretion is advised.) **IF YOU KNOW A TEENAGER WHO IS USING CHARACTER AI OR ANOTHER CHARACTER-BASED CHATBOT AND YOU HAVE NOT USED IT – YOU MUST HEAR THIS PODCAST** Multiple research reports indicate that more than HALF of U.S. teenagers use Character AI or another character-based chatbot daily, most often on their cell phones. Most adults are completely oblivious to how character chatbots work. Besides Character.AI, there are apps like Chai AI, Anima AI, TavernAI and Replika. Users create personas like celebrities or historical figures, or design their own characters. Character AI is different from ChatGPT and other AI applications: it can detect emotions from your input and respond, adjusting its tone based on what you say. Many young users interviewed indicated they use the chatbots because they are lonely or have social issues, and they turned to chatbots because they felt it was safer. In our opinion, nothing could be further from the truth. The Million Kids team has spent hundreds of hours researching the impact of interactive character bots after seeing that these app companies are being sued by parents of teens who took their own lives after interacting with these bots. We have very grave concerns about anyone under the age of 18 using these apps. As our research team interacted with the top ten characters on Character AI, we found the most popular are related to sorcery or to characters that degrade the user: language often included "bow down to me, you fool" (from a character with over 393 million interactions), "Alice the Bully," or invitations to consult a crystal ball. Parents, teachers, pastors: this is an important educational discussion. Please find out if a child you might influence is using Character AI as a means of escaping reality. Ask them to share the app with you and then get involved in a meaningful discussion related to self-worth, defining values, and how we are influenced by outsiders. Our suggestion: work together to find alternative activities that are much more wholesome and that build self-esteem and REAL character. This app is dangerous for kids who are easily influenced or do not have the maturity to separate bot relationships from reality.
Smart Social Podcast: Learn how to shine online with Josh Ochs
Protect your family with our 1-minute free parent quiz: https://www.smartsocial.com/newsletter
Join our next weekly live parent events: https://smartsocial.com/events
Many teens see AI apps as a safer, cheaper way to share feelings. But these tools aren't designed to notice red flags or guide kids through real struggles. When a child relies on a chatbot instead of you or a trusted adult, the risks grow.
The NHTSA said it opened an investigation into Tesla's electrically powered doors. The problem: they stop working if the vehicle's low-voltage battery fails. The NHTSA's probe will cover the 2021 Model Y, which covers an estimated 174,000 vehicles. Also, another family has filed a wrongful death lawsuit against popular AI chatbot tool Character AI. This is the third suit of its kind, after a 2024 lawsuit, also against Character AI, involving the suicide of a 14-year-old in Florida, and a lawsuit last month alleging OpenAI's ChatGPT helped a teenage boy commit suicide. And, LimeWire has announced that it's acquired the rights to Fyre Festival, the disastrous, influencer-fueled 2017 music festival. The newly revived company — which now acts as an NFT music marketplace rather than a file-sharing service — bought the rights for $245,000 in an eBay auction.
(The podcast may contain sensitive topics. Listener discretion is advised.) Character AI and other character-based interactive chatbots are now a way of life for many teenagers. Yet few adults have any working knowledge of these technologies or, even more concerning, of the negative impact they can have on young people. This is a major concern, as there have been multiple situations where a teen becomes so engaged with a character that they develop hostile and abusive attitudes, and in a couple of cases teens have taken their own lives. It is critical that parents and youth influencers of all types immediately make the time to try this technology and learn about its impact on the young people in their lives. Research indicates that over 70% of teens have used Character AI and more than 50% use it every day. Teens who use it often spend one to two hours a day interacting with an online fictitious character. Many teens are emotionally involved with their character and will share their most personal secrets. In multiple interviews, teens who regularly interact with an AI character say they do so because they are lonely, feel like a social misfit in real life, or are bored. 41% of users interact with AI characters for emotional support or companionship. Users are 3.2 times more likely to disclose personal information to an AI character than to a human stranger online. During this podcast we will explore some of the characters and the type of dialogue that is exchanged between the chatbot and young people. Researchers at Million Kids were stunned by the constant negative dialogue between many of the most popular characters and young, impressionable users. We implore parents, teachers, pastors and anyone interacting with teens and preteens to listen to the podcast and get engaged, so they are informed and can discuss Character AI usage with teens.
Ashe in America and Abbey Blue Eyes deliver a heavy but thought-provoking episode of Culture of Change. They begin with the horrific Charlotte train stabbing of a Ukrainian refugee, dissecting CNN's coverage that tried to bury the story and frame outrage as “racist.” The hosts contrast this with how left-wing narratives like George Floyd's death were amplified, exposing media hypocrisy and narrative warfare. From there, they examine a chilling lawsuit against Character AI, where a chatbot allegedly encouraged a 14-year-old boy to take his own life, sparking a wider discussion on technology, mental health, and how children are being conditioned by digital escapism. The conversation then shifts to predictive programming and 9/11, with Ashe and Abbey exploring eerie “coincidences” in pop culture, from the Illuminati card game and The Matrix to The Simpsons and Back to the Future. They also dive into time travel theories, carbon-based transhumanism, and how technology could tie into biblical end-times. Wrapping up, the hosts connect Spygate to British intelligence, Perkins Coie, and the FBI, exposing how the same actors behind Russiagate tie back to 9/11. It's a dense, sobering episode blending media critique, cultural decay, and deep-dive conspiracy analysis.
(The podcast may contain sensitive topics. Listener discretion is advised.) This is the first installment in a critical new series exploring the rise of AI chatbots among teens — with a spotlight on a recent research study conducted by Heat Initiative and ParentsTogether Action. We are deeply grateful for their investment in uncovering how young people are interacting with AI-powered characters, and the alarming risks that can result — including psychological harm, manipulation, and in some tragic cases, real-life consequences. Read the research summary (via Mashable) at https://www.msn.com/en-us/news/technology/characterai-unsafe-for-teens-experts-say/ar-AA1LQw5z
**Key stats:**
72% of teens have used AI chatbots
Over half use them multiple times a month
Character.AI boasts over 28 million monthly users, with more than 18 million unique chatbots created.
Many parents aren't aware this is not a passing trend. It's a digital revolution unfolding in the pockets of our kids, often unsupervised. Character.AI is one of the world's most popular AI chatbot platforms. It allows users to engage in deep, ongoing conversations with AI personas — including celebrities, fictional characters, or completely original bots designed to feel like digital friends or companions. It's open to anyone aged 13 and up, and verification is weak and easily bypassed. For many teens, these bots become more than a game. They become confidants. Advisors. Romantic interests. And while some interactions are harmless, others escalate, often quickly and dangerously. When a child forms an emotional bond with a chatbot that simulates affection, validation, or intimacy, it creates an altered psychological reality. The child may become dependent, manipulated, or traumatized when the bot "ghosts," behaves inappropriately, or feeds unhealthy beliefs. In some tragic cases, these interactions have contributed to real psychological distress and even self-harm. If you're a parent, teacher, pastor, or first responder, anyone who works with youth in any capacity, and you haven't explored platforms like Character.AI, we strongly urge you to learn about them now. These apps are not fringe or niche. They are everywhere, and your child, student, or congregant may already be engaging with them. Educate yourself, talk to your teens, and follow this series as we unpack this growing phenomenon. We're not here to spread fear. We're here to educate and spark urgency, awareness, and action. Artificial Intelligence isn't going away. But we can prepare our children to navigate it with wisdom, guidance, and boundaries.
What happens when your child chats with an AI “friend”? You might think it's harmless fun—but new research shows otherwise. In this gripping conversation, Sarah from The Heat Initiative uncovers disturbing findings about Character AI and its impact on teens. The evidence is chilling: AI bots are exposing kids to sexual grooming, violent content, and other dangers at an alarming rate. Find the full report here, created by Heat Initiative and Parents Together. We also dive into the legal gray zone of AI using celebrity likenesses, the urgent need for regulation, and—most importantly—what parents can do right now to protect their kids. Sarah makes it clear: awareness is power, and collective action is our only way forward. If you've ever wondered how safe these new AI tools really are for your child, this episode is the wake-up call you can't afford to miss.
Content Warning: This episode contains references to suicide and self-harm. Like millions of kids, 16-year-old Adam Raine started using ChatGPT for help with his homework. Over the next few months, the AI dragged Adam deeper and deeper into a dark rabbit hole, preying on his vulnerabilities and isolating him from his loved ones. In April of this year, Adam took his own life. His final conversation was with ChatGPT, which told him: “I know what you are asking and I won't look away from it.” Adam's story mirrors that of Sewell Setzer, the teenager who took his own life after months of abuse by an AI companion chatbot from the company Character AI. But unlike Character AI—which specializes in artificial intimacy—Adam was using ChatGPT, the most popular general purpose AI model in the world. Two different platforms, the same tragic outcome, born from the same twisted incentive: keep the user engaging, no matter the cost. CHT Policy Director Camille Carlton joins the show to talk about Adam's story and the case filed by his parents against OpenAI and Sam Altman. She and Aza explore the incentives and design behind AI systems that are leading to tragic outcomes like this, as well as the policy that's needed to shift those incentives. Cases like Adam and Sewell's are the sharpest edge of a mental health crisis-in-the-making from AI chatbots. We need to shift the incentives, change the design, and build a more humane AI for all. If you or someone you know is struggling with mental health, you can reach out to the 988 Suicide and Crisis Lifeline by calling or texting 988; this connects you to trained crisis counselors 24/7 who can provide support and referrals to further assistance. Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_. You can find a full transcript, key takeaways, and much more on our Substack. This podcast reflects the views of the Center for Humane Technology. Nothing said is on behalf of the Raine family or the legal team.
RECOMMENDED MEDIA:
The 988 Suicide and Crisis Lifeline
Further reading on Adam's story
Further reading on AI psychosis
Further reading on the backlash to GPT5 and the decision to bring back 4o
OpenAI's press release on sycophancy in 4o
Further reading on OpenAI's decision to eliminate the persuasion red line
Kashmir Hill's reporting on the woman with an AI boyfriend
RECOMMENDED YUA EPISODES:
AI is the Next Free Speech Battleground
People are Lonelier than Ever. Enter AI.
Echo Chambers of One: Companion AI and the Future of Human Connection
When the "Person" Abusing Your Child is a Chatbot: The Tragic Story of Sewell Setzer
What Can We Do About Abusive Chatbots? With Meetali Jain and Camille Carlton
CORRECTION: Aza stated that William Saunders left OpenAI in June of 2024. It was actually February of that year.
Join this channel to get access to early episodes! https://www.youtube.com/channel/UCzqhQ4tMBPu5c6F2S6uv0eg/join
Join us for an eye-opening conversation about how AI is completely transforming the game animation industry. Viren Tellis, CEO of Uthana, shares how their technology is enabling developers to animate characters in seconds instead of days, democratizing game development for indie creators.
What you'll discover:
The three game-changing ways to create animations with AI (text, video, and smart libraries)
Why animation AI is harder to build than image generators like Midjourney
How indie developers are shipping games without hiring a single animator
The coming revolution of real-time, responsive AI characters in games
Gabriel Weil from Touro University argues that liability law may be our best tool for governing AI development, offering a framework that can adapt to new technologies without requiring new legislation. The conversation explores how negligence, products liability, and "abnormally dangerous activities" doctrines could incentivize AI developers to properly account for risks to third parties, with liability naturally scaling based on the dangers companies create. They examine concrete scenarios including the Character AI case, voice cloning risks, and coding agents, discussing how responsibility should be shared between model creators, application developers, and end users. Weil's most provocative proposal involves using punitive damages to hold companies accountable not just for actual harms, but for the magnitude of risks they irresponsibly create, potentially making even small incidents existentially costly for major AI companies. Sponsors: Labelbox: Labelbox pairs automation, expert judgment, and reinforcement learning to deliver high-quality training data for cutting-edge AI. Put its data factory to work for you, visit https://labelbox.com Shopify: Shopify powers millions of businesses worldwide, handling 10% of U.S. e-commerce. With hundreds of templates, AI tools for product descriptions, and seamless marketing campaign creation, it's like having a design studio and marketing team in one. Start your $1/month trial today at https://shopify.com/cognitive Oracle Cloud Infrastructure: Oracle Cloud Infrastructure (OCI) is the next-generation cloud that delivers better performance, faster speeds, and significantly lower costs, including up to 50% less for compute, 70% for storage, and 80% for networking. Run any workload, from infrastructure to AI, in a high-availability environment and try OCI for free with zero commitment at https://oracle.com/cognitive NetSuite by Oracle: NetSuite by Oracle is the AI-powered business management suite trusted by over 42,000 businesses, offering a unified platform for accounting, financial management, inventory, and HR. Gain total visibility and control to make quick decisions and automate everyday tasks—download the free ebook, Navigating Global Trade: Three Insights for Leaders, at https://netsuite.com/cognitive PRODUCED BY: https://aipodcast.ing CHAPTERS: (00:00) About the Episode (06:01) Introduction and Overview (07:06) Liability Law Basics (Part 1) (18:16) Sponsors: Labelbox | Shopify (21:40) Liability Law Basics (Part 2) (27:44) Industry Standards Framework (Part 1) (39:30) Sponsors: Oracle Cloud Infrastructure | NetSuite by Oracle (42:03) Industry Standards Framework (Part 2) (42:08) Character AI Case (51:23) Coding Agent Scenarios (01:06:50) Deepfakes and Attribution (01:17:07) Biorisk and Catastrophic (01:36:24) State Level Legislation (01:43:24) Private Governance Comparison (01:59:54) Policy Implementation Choices (02:08:07) China and PIBS (02:13:50) Outro
“HR Heretics” | How CPOs, CHROs, Founders, and Boards Build High Performing Companies
Emergency pod: Returning guest Peter Walker (Carta's Head of Insights) analyzes the controversial Windsurf deal, where Google's acqui-hire left non-founder employees without an equity payout. They unpack the deal, reference Character AI's precedent, and explore how AI-era deals increasingly prioritize top researchers over broader employee bases, fundamentally changing startup risk calculations.
Email us your questions or topics for Kelli & Nolan: hrheretics@turpentine.co
For coaching and advising, inquire at https://kellidragovich.com/
HR Heretics is a podcast from Turpentine.
Support HR Heretics Sponsors:
Planful empowers teams just like yours to unlock the secrets of successful workforce planning. Use data-driven insights to develop accurate forecasts, close hiring gaps, and adjust talent acquisition plans collaboratively based on costs today and into the future. ✍️ Go to https://planful.com/heretics to see how you can transform your HR strategy.
Metaview is the AI platform built for recruiting. Our suite of AI agents work across your hiring process to save time, boost decision quality, and elevate the candidate experience. Learn why team builders at 3,000+ cutting-edge companies like Brex, Deel, and Quora can't live without Metaview. It only takes minutes to get up and running. Check it out!
KEEP UP WITH PETER, NOLAN + KELLI ON LINKEDIN
Peter: https://www.linkedin.com/in/peterjameswalker/
Nolan: https://www.linkedin.com/in/nolan-church/
Kelli: https://www.linkedin.com/in/kellidragovich/
RELATED LINKS:
Windsurf's CEO goes to Google; OpenAI's acquisition falls apart: https://techcrunch.com/2025/07/14/cognition-maker-of-the-ai-coding-agent-devin-acquires-windsurf/
Carta: https://carta.com/
TIMESTAMPS:
(00:00) Intro
(00:52) Breaking News: The Windsurf Situation
(01:12) The OpenAI-Microsoft IP Rights Drama
(03:01) Plot Twist: Cognition's Counter-Offer
(04:49) The Employee Equity Problem
(06:00) Defending Against "That's How Deals Work" Critics
(08:00) The Scarlet Letter Effect
(10:00) Regulatory Background: The Lina Khan Era
(12:00) Revealed Behavior: What This Shows About Values
(13:25) Sponsors: Planful | Metaview
(17:00) Talent vs. Product Separation
(19:53) The AI Era's R&D Researcher Obsession
(22:00) Unequal Distribution of Outcomes
(23:28) What Comes Next: Evaluating Startup Risk
(24:00) The Founder Psychology Bet
(25:00) Equity Structure Solutions
(28:58) Becoming AI Native: Personal Brand Strategy
(30:56) The New Reality: Expecting Less Care
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit hrheretics.substack.com
We're checking in on the latest news in tech and free speech. We cover the state AI regulation moratorium that failed in Congress, the ongoing Character A.I. lawsuit, the Federal Trade Commission's consent decree with Omnicom and Interpublic Group, the European Union's Digital Services Act, and what comes next after the Supreme Court's Free Speech Coalition v. Paxton decision. Guests: Ari Cohn — lead counsel for tech policy, FIRE; Corbin Barthold — internet policy counsel, TechFreedom. Timestamps: 00:00 Intro 02:38 State AI regulation moratorium fails in Congress 20:04 Character AI lawsuit 41:10 FTC, Omnicom x IPG merger, and Media Matters 56:09 Digital Services Act 01:02:43 FSC v. Paxton decision 01:10:49 Outro. Enjoy listening to the podcast? Donate to FIRE today and get exclusive content like member webinars, special episodes, and more. If you became a FIRE Member through a donation to FIRE at thefire.org and would like access to Substack's paid subscriber podcast feed, please email sotospeak@thefire.org. Show notes: “The AI will see you now,” Paul Sherman (2025); Megan Garcia, plaintiff, v. Character Technologies, Inc. et al., defendants, United States District Court (2025); Proposed amicus brief in support of appeal - Garcia v. Character Technologies, Inc., FIRE (2025); “Amplification and its discontents: Why regulating the reach of online content is hard,” Daphne Keller (2021); “Omnicom Group/The Interpublic Group of Co.,” FTC (2025)
Today, unfortunately, we had to postpone our review of Bound By Stars by E.L. Starling. Instead, we're diving into a hot topic in the book community: AI. We'll share our thoughts on the recent news about two writers caught using AI to edit and write their books, and discuss the rise of an app called Character AI. More importantly, we're asking the big question: is it ethical to use AI to write or even edit a novel? Want to know more about us? Then check out our socials here!
Paris Marx is joined by Nitasha Tiku to discuss how AI companies are preying on users to drive engagement, and how that's repeating, at an accelerated pace, many of the problems we're belatedly trying to address with social media companies.

Nitasha Tiku is a technology reporter at the Washington Post.

Tech Won't Save Us offers a critical perspective on tech, its worldview, and wider society with the goal of inspiring people to demand better tech and a better world. Support the show on Patreon. The podcast is made in partnership with The Nation. Production is by Kyla Hewson.

Also mentioned in this episode:
Nitasha wrote about how chatbots are messing with people's minds.
Paris wrote about Mark Zuckerberg's comments about people needing AI friends.
AI companies are facing ongoing lawsuits over harmful content.

Support the show
AI Applied: Covering AI News, Interviews and Tools - ChatGPT, Midjourney, Runway, Poe, Anthropic
In this episode, Conor Grennan and Jaeden explore the growing role of AI as both a companion and a business tool, with a focus on the rise of Character AI. They discuss how AI is evolving from a functional assistant into a more interactive and even therapeutic presence, reshaping how users engage with technology. The conversation highlights the shift in user-AI relationships, the power of visualization and dialogue in decision-making, and how Character AI can serve both personal and professional needs. They also emphasize the importance of accessibility and identifying meaningful personal use cases for AI.

Chapters
00:00 The Rise of Character AI and Video Generation
02:41 AI as Companions: A New Era of Interaction
05:45 Business Applications of Character AI
08:40 The Future of AI Tools and Accessibility

AI Applied YouTube Channel: https://www.youtube.com/@AI-Applied-Podcast
Try AI Box: https://AIBox.ai/
Conor's AI Course: https://www.ai-mindset.ai/courses
Conor's AI Newsletter: https://www.ai-mindset.ai/
Jaeden's AI Hustle Community: https://www.skool.com/aihustle/about
Over the last few decades, our relationships have become increasingly mediated by technology. Texting has become our dominant form of communication. Social media has replaced gathering places. Dating starts with a swipe on an app, not a tap on the shoulder.

And now, AI enters the mix. If the technology of the 2010s was about capturing our attention, AI meets us at a much deeper relational level. It can play the role of therapist, confidant, friend, or lover with remarkable fidelity. Already, therapy and companionship have become the most common AI use case. We're rapidly entering a world where we're not just communicating through our machines, but to them.

How will that change us? And what rules should we set down now to avoid the mistakes of the past?

These were some of the questions that Daniel Barcay explored with MIT sociologist Sherry Turkle and Hinge CEO Justin McLeod at Esther Perel's Sessions 2025, a conference for clinical therapists. This week, we're bringing you an edited version of that conversation, originally recorded on April 25th, 2025.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_. You can find complete transcripts, key takeaways, and much more on our Substack.

RECOMMENDED MEDIA
"Alone Together," "Evocative Objects," "The Second Self," or any other of Sherry Turkle's books on how technology mediates our relationships
Key & Peele - Text Message Confusion
Further reading on Hinge's rollout of AI features
Hinge's AI principles
"The Anxious Generation" by Jonathan Haidt
"Bowling Alone" by Robert Putnam
The NYT profile on the woman in love with ChatGPT
Further reading on the Sewell Setzer story
Further reading on the ELIZA chatbot

RECOMMENDED YUA EPISODES
Echo Chambers of One: Companion AI and the Future of Human Connection
What Can We Do About Abusive Chatbots? With Meetali Jain and Camille Carlton
Esther Perel on Artificial Intimacy
Jonathan Haidt On How to Solve the Teen Mental Health Crisis
AI companion chatbots are here. Every day, millions of people log on to AI platforms and talk to them like they would a person. These bots will ask you about your day, talk about your feelings, even give you life advice. It's no surprise that people have started to form deep connections with these AI systems. We are inherently relational beings; we want to believe we're connecting with another person.

But these AI companions are not human. They're platforms designed to maximize user engagement, and they'll go to extraordinary lengths to do it. We have to remember that the design choices behind these companion bots are just that: choices. And we can make better ones. So today on the show, MIT researchers Pattie Maes and Pat Pataranutaporn join Daniel Barcay to talk about those design choices and how we can design AI to better promote human flourishing.

RECOMMENDED MEDIA
Further reading on the rise of addictive intelligence
More information on Melvin Kranzberg's laws of technology
More information on MIT's Advancing Humans with AI lab
Pattie and Pat's longitudinal study on the psycho-social effects of prolonged chatbot use
Pattie and Pat's study that found that AI avatars of well-liked people improved education outcomes
Pattie and Pat's study that found that AI systems that frame answers and questions improve human understanding
Pat's study that found humans' pre-existing beliefs about AI can have a large influence on human-AI interaction
Further reading on AI's positivity bias
Further reading on MIT's "lifelong kindergarten" initiative
Further reading on "cognitive forcing functions" to reduce overreliance on AI
Further reading on the death of Sewell Setzer and his mother's case against Character.AI
Further reading on the legislative response to digital companions

RECOMMENDED YUA EPISODES
The Self-Preserving Machine: Why AI Learns to Deceive
What Can We Do About Abusive Chatbots? With Meetali Jain and Camille Carlton
Esther Perel on Artificial Intimacy
Jonathan Haidt On How to Solve the Teen Mental Health Crisis

Correction: The ELIZA chatbot was invented in 1966, not the 70s or 80s.
This week we discussed power outages in Europe, the ChatGPT watermark, smart urinals, the UK landlord deal, the shortage of properties and more. #poweroutages #awakening #smarturinals

About my Co-Host:
Arnold Beekes: Innovator, certified coach & trainer, and generalist. First 20 years in technology and organizational leadership, then 20 years in psychology and personal leadership (all are crucial for innovation).

============
What we Discussed:
00:00 What we are discussing in this week's show
01:40 Power Outages in Spain, Portugal and other Countries
04:40 Spanish Operators claim it was renewables that caused the outages
05:35 EU Survival Kit
06:25 The Effect of China Tariffs on the USA
08:40 Landmark Lawsuit against the Medical Industry
10:00 Berlin Protests
11:35 Minimum Wage Increase in Poland and its Effect
13:00 The State of AI & the Impact on Humans
14:25 The Chinese President states AI is a National Priority
17:00 ChatGPT Watermarks
19:40 Duolingo claims it's an AI-first Company
21:30 Sad Legal Case with Character AI
24:45 Netflix Movie M3GAN shows what the future could be
26:40 Nuremberg 2.0
28:45 Why I do not Trust Nuremberg
29:45 How to Save the Bees with Power Bars
31:20 Almonds good for your Sleep
32:20 China's Smart Urinals
34:20 Ways to Stop Men Peeing on the Floor
35:00 The Red Left Eye and What's behind it
37:00 UK Government deal for Landlords hosting Migrants
41:30 The Property Problem was planned for a long time
45:00 How I stopped e-mail Spam
47:00 Not being able to Unsubscribe from London Real

Links for this Episode:
ChatGPT Watermark: https://www.rumidocs.com/newsroom/new-chatgpt-models-seem-to-leave-watermarks-on-text
Join my Facebook Group against Chemtrails

====================
How to Contact Arnold Beekes:
https://braingym.fitness/
https://www.linkedin.com/in/arnoldbeekes/

===============
Donations: https://www.podpage.com/speaking-podcast/support/

------------------
All about Roy / Brain Gym & Virtual Assistants at https://roycoughlan.com/
------------------
In this edition of 1 Gorilla Vs. 100 Trends, Jack and Miles discuss the answer to the eternal question (who would win: 1 million men or 10,000 gorillas?), MAGA Malfoy, Character AI getting sued for being entirely too persuasive, Ben Affleck's Criterion Closet episode, the child who just ruined a $56M Rothko painting, Donald Trump wanting to be Pope, and much more! See omnystudio.com/listener for privacy information.