Over the last few decades, our relationships have become increasingly mediated by technology. Texting has become our dominant form of communication. Social media has replaced gathering places. Dating starts with a swipe on an app, not a tap on the shoulder.

And now, AI enters the mix. If the technology of the 2010s was about capturing our attention, AI meets us at a much deeper relational level. It can play the role of therapist, confidant, friend, or lover with remarkable fidelity. Already, therapy and companionship have become the most common AI use case. We're rapidly entering a world where we're not just communicating through our machines, but to them.

How will that change us? And what rules should we set down now to avoid the mistakes of the past?

These were some of the questions that Daniel Barcay explored with MIT sociologist Sherry Turkle and Hinge CEO Justin McLeod at Esther Perel's Sessions 2025, a conference for clinical therapists. This week, we're bringing you an edited version of that conversation, originally recorded on April 25th, 2025.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_. You can find complete transcripts, key takeaways, and much more on our Substack.

RECOMMENDED MEDIA
“Alone Together,” “Evocative Objects,” “The Second Self,” or any other of Sherry Turkle's books on how technology mediates our relationships
Key & Peele - Text Message Confusion
Further reading on Hinge's rollout of AI features
Hinge's AI principles
“The Anxious Generation” by Jonathan Haidt
“Bowling Alone” by Robert Putnam
The NYT profile on the woman in love with ChatGPT
Further reading on the Sewell Setzer story
Further reading on the ELIZA chatbot

RECOMMENDED YUA EPISODES
Echo Chambers of One: Companion AI and the Future of Human Connection
What Can We Do About Abusive Chatbots? With Meetali Jain and Camille Carlton
Esther Perel on Artificial Intimacy
Jonathan Haidt On How to Solve the Teen Mental Health Crisis
AI companion chatbots are here. Every day, millions of people log on to AI platforms and talk to them like they would a person. These bots will ask you about your day, talk about your feelings, even give you life advice. It's no surprise that people have started to form deep connections with these AI systems. We are inherently relational beings; we want to believe we're connecting with another person.

But these AI companions are not human. They're a platform designed to maximize user engagement—and they'll go to extraordinary lengths to do it. We have to remember that the design choices behind these companion bots are just that: choices. And we can make better ones. So today on the show, MIT researchers Pattie Maes and Pat Pataranutaporn join Daniel Barcay to talk about those design choices and how we can design AI to better promote human flourishing.

RECOMMENDED MEDIA
Further reading on the rise of addictive intelligence
More information on Melvin Kranzberg's laws of technology
More information on MIT's Advancing Humans with AI lab
Pattie and Pat's longitudinal study on the psycho-social effects of prolonged chatbot use
Pattie and Pat's study that found that AI avatars of well-liked people improved education outcomes
Pattie and Pat's study that found that AI systems that frame answers and questions improve human understanding
Pat's study that found humans' pre-existing beliefs about AI can have a large influence on human-AI interaction
Further reading on AI's positivity bias
Further reading on MIT's “lifelong kindergarten” initiative
Further reading on “cognitive forcing functions” to reduce overreliance on AI
Further reading on the death of Sewell Setzer and his mother's case against Character.AI
Further reading on the legislative response to digital companions

RECOMMENDED YUA EPISODES
The Self-Preserving Machine: Why AI Learns to Deceive
What Can We Do About Abusive Chatbots? With Meetali Jain and Camille Carlton
Esther Perel on Artificial Intimacy
Jonathan Haidt On How to Solve the Teen Mental Health Crisis

Correction: The ELIZA chatbot was invented in 1966, not the 70s or 80s.
In this episode of The Tech Trek, Amir sits down with Sunita Verma, CTO at Character AI and former engineering leader at Google. Sunita shares how she's transitioned from leading large-scale AI initiatives at Google to building novel experiences in a fast-paced startup environment. She dives into the mindset shift required to prioritize velocity over scale, how to lead AI-native product innovation, and what it means to be a female technical leader in today's tech ecosystem.
Gabe discusses his experience with Post Malone's exclusive Oreo, which he found unimpressive. He reflects on a two-month hiatus from podcasting, expressing hope for listeners' well-being. Gabe shares his foray into online sports gambling, detailing the addictive nature and the variety of bets available. He recounts a palm reading experience in New Orleans and his recent interest in sports betting. Gabe also discusses his sleep apnea diagnosis, the challenges of obtaining a CPAP machine, and his work with Character AI, focusing on identifying harmful content. He concludes with personal reflections on his mother's health and his decision to join the military.

Speaker 1 recounts purchasing a CPAP device for $150, only to find it infested with cockroaches. After cleaning and repairing it, they found it effective for sleep apnea, especially with a nose strip and mouth tape. They also discuss a movie they found underwhelming, criticizing its lack of originality and over-reliance on gimmicks. Additionally, Speaker 1 shares their work experience with Character AI, a chatbot website where users create personalized characters, and their role in moderating content related to suicide and eating disorders. They express discomfort with the site's existence and its impact on society.

The speaker expresses dissatisfaction with a companion series, criticizing its lack of innovation and interesting twists. They discuss using ChatGPT for therapeutic purposes and the challenges of nursing jobs in Louisiana. The speaker is considering enlisting in the military due to financial stability and a desire for change. They also address their mother's ongoing health issues, suspected to be psychosomatic, and her reluctance to seek help. The speaker feels burdened by their mother's emotional needs and struggles with their own emotional response, including anger and guilt. They plan to make significant life changes in the coming months.
✨ W33K: the Monday that makes the difference!
140 minutes to start the week with awareness and an extra gear. If you survive...

This episode's topics:
00:00:00 Intro, greetings and a vacation in Thailand
00:03:11 CIA messaging app (TeleMessage) breached: sensitive data exposed
00:06:15 Chinese propaganda moves to minor apps to evade controls
00:11:09 EU bans AI assistants in institutional chats over security risks
00:14:28 Paper: Partial malicious training compromises an AI's entire morality
00:17:48 EU investigation into TikTok over the spread of pro-anorexia content (Skinny Talk)
00:20:16 TikTok fined for illegally transferring European user data to China
00:22:51 Anthropic study: Claude's AI morality adapts to context; it isn't constant
00:27:31 IWF study: Exponential increase (+380%) in AI-generated child sexual abuse material
00:46:01 US antitrust: Google found guilty of an advertising monopoly, risk of breakup
01:02:18 US rules changed: incidents with L2 assisted driving excluded from reports (a favor to Tesla?)
01:05:00 EU investigation into Hungary over facial recognition used against Pride demonstrators
01:10:10 Microsoft study: Generative AI used massively for sophisticated online fraud
01:13:54 Google extends AI Overview in search, with an impact on publisher traffic
01:17:20 "Policy Puppetry" technique bypasses AI model safety via configuration files
01:19:48 Trump administration attacks Wikipedia: nonprofit status threatened
01:22:11 PCMag publisher (Ziff Davis) sues OpenAI over unauthorized content scraping
01:26:54 The "Grey Bots" phenomenon: AI scrapers ignore the rules and drive up web costs (e.g. Wikipedia)
01:33:44 Spain: The VioGén algorithm fails a risk assessment; woman killed by her ex-partner
01:37:36 US passes the "Take It Down Act": mandatory removal of non-consensual porn within 48h
01:41:55 Trump pressure: Amazon backs down from showing the impact of tariffs on prices
01:43:52 Study: AI (Claude 3.5) surpasses human abilities at persuasion
01:47:25 Landmark Epic vs Apple ruling: mandatory commissions on purchases outside the App Store banned
01:52:43 Suicide lawsuit: Character AI invokes freedom of expression (First Amendment) for chatbots
01:58:01 ChatGPT introduces a shopping feature integrated into conversations
02:00:39 Duolingo goes "AI First": cuts external contractors for automatable tasks
02:04:18 Microsoft CEO: 20-30% of the company's code is generated by artificial intelligence
02:06:07 Drop in X (Twitter) users and Tesla sales in Europe linked to Musk's image

~~~~~ BOOKINGS AND SPONSORSHIP ~~~~~
For commercial inquiries: sales@matteoflora.com
For legal consulting: info@42LawFirm.it

~~~~~ SUPPORT THE CHANNEL! ~~~~~
With the PRO Membership you can support the channel » https://link.mgpf.it/pro
If you want my equipment » https://mgpf.it/attrezzatura

~~~~~ FOLLOW ME ONLINE, WITH NOTIFICATIONS! ~~~~~
» WHATSAPP CHANNEL » https://link.mgpf.it/wa
» TELEGRAM CHANNEL » https://mgpf.it/tg
» (Free) COURSE "IN FUTURO" » https://mgpf.it/nl
» NEWSLETTER » https://mgpf.it/nl

~~~~~ CIAO INTERNET AND MATTEO FLORA ~~~~~
This is "Ciao Internet!", the first and most-followed TECH POLICY show in Italian, on YouTube and as a podcast.
I'm MATTEO FLORA, and I am:
» Professor of Fundamentals of AI and SuperIntelligence Security (ESE)
» Adjunct professor of Corporate Reputation and Crisis Management (Pavia)
I'm a serial digital entrepreneur, and I founded:
» The Fool » https://thefool.it - Italy's leading Customer Insight company
» The Magician » https://themagician.agency - An advocacy and crisis management atelier
» 42 Law Firm » https://42lf.it - The law firm for digital transformation
» ...and many others here: https://matteoflora.com/#aziende
I am a Future Leader (IVLP) of the US State Department under the Obama administration, in the "Combating Cybercrime (2012)" program.
I am President of PermessoNegato, the Italian association that works on non-consensual pornography and revenge porn.
I host "Intelligenze Artificiali" on TV on Mediaset/TgCom.
This week we discussed power outages in Europe, the ChatGPT watermark, smart urinals, the UK landlord deal, the shortage of properties and more. #poweroutages #awakening #smarturinals

About my Co-Host:
Arnold Beekes, Innovator, certified coach & trainer and generalist. First 20 years in technology and organizational leadership, then 20 years in psychology and personal leadership (all are crucial for innovation).
============
What we Discussed:
00:00 What we are discussing in this week's show
01:40 Power Outages in Spain, Portugal and other Countries
04:40 Spain's Operators claim it was renewables that caused the outages
05:35 EU Survival Kit
06:25 The Effect of China Tariffs on the USA
08:40 Landmark Lawsuit against the Medical Industry
10:00 Berlin Protests
11:35 Minimum Wage Increase in Poland and its effect
13:00 The State of AI & the Impact on Humans
14:25 The Chinese President states AI is a National Priority
17:00 ChatGPT Watermarks
19:40 Duolingo claims it's an AI-first Company
21:30 Sad Legal Case with Character AI
24:45 Netflix Movie M3GAN shows what the future could be
26:40 Nuremberg 2.0
28:45 Why I do not Trust Nuremberg
29:45 How to Save the Bees with Power Bars
31:20 Almonds are good for your Sleep
32:20 China's Smart Urinals
34:20 Ways to Stop Men Peeing on the Floor
35:00 The Red Left Eye and What's behind it
37:00 UK Government deal for Landlords hosting Migrants
41:30 The Property Problem was planned for a long time
45:00 How I stopped e-mail Spam
47:00 Not being able to Unsubscribe from London Real

Links for this Episode:
ChatGPT Watermark
https://www.rumidocs.com/newsroom/new-chatgpt-models-seem-to-leave-watermarks-on-text
Join my Facebook Group against Chemtrails
====================
How to Contact Arnold Beekes:
https://braingym.fitness/
https://www.linkedin.com/in/arnoldbeekes/
===============
Donations
https://www.podpage.com/speaking-podcast/support/
------------------
All about Roy / Brain Gym & Virtual Assistants at
https://roycoughlan.com/
------------------
In this edition of 1 Gorilla Vs. 100 Trends, Jack and Miles discuss the answer to the eternal question (who would win: 1 million men or 10,000 gorillas?), MAGA Malfoy, Character AI getting sued for being entirely too persuasive, Ben Affleck's Criterion Closet episode, the child who just ruined a $56m Rothko painting, Donald Trump wanting to be Pope and much more!

See omnystudio.com/listener for privacy information.
AI Hustle: News on Open AI, ChatGPT, Midjourney, NVIDIA, Anthropic, Open Source LLMs
In this episode, Jaeden and Jamie dive into the launch of Avatar FX by Character AI, a groundbreaking model that brings chatbots to life with video elements. They explore its potential for influencer monetization, content creation, and user-generated content in e-commerce. The conversation also highlights Avatar FX's unique features—like animating photos and creating multiple characters—while addressing concerns around deepfakes and the need for strong parental controls.

Chapters
00:00 Introduction to Avatar FX and Character AI
02:49 Exploring Use Cases and Monetization Opportunities
06:00 User-Generated Content and Viral Potential
09:11 Conclusion and Community Engagement

AI Hustle YouTube Channel: https://www.youtube.com/@AI-Hustle-Podcast
Our Skool Community: https://www.skool.com/aihustle/about
Try AI Box: https://AIBox.ai/
Google says we're not ready for AGI and honestly, they might be right. DeepMind's Demis Hassabis warns we could be just five years away from artificial general intelligence, and society isn't prepared. Um, yikes?

VISIT OUR SPONSOR https://molku.ai/

In this episode, we break down Google's new “Era of Experience” paper and what it means for how AIs will learn from the real world. We talk agentic systems, long-term memory, and why this shift might be the key to creating truly intelligent machines. Plus, a real AI vending machine running on Claude, a half-marathon of robots in Beijing, and Cluely, the tool that lets you lie better with AI. We also cover new AI video tools from Minimax and Character.AI, Runway's 48-hour film contest, and Dia, the open-source voice model that can scream and cough better than most humans. Plus: AI Logan Paul, AI marketing scams, and one very cursed Shrek feet idea. AGI IS ALMOST HERE BUT THE ROBOTS, THEY STILL RUN. #ai #ainews #agi

Join the discord: https://discord.gg/muD2TYgC8f
Join our Patreon: https://www.patreon.com/AIForHumansShow
AI For Humans Newsletter: https://aiforhumans.beehiiv.com/
Follow us for more on X @AIForHumansShow
Join our TikTok @aiforhumansshow
To book us for speaking, please visit our website: https://www.aiforhumans.show/

// Show Links //
Demis Hassabis on 60 Minutes https://www.cbsnews.com/news/artificial-intelligence-google-deepmind-ceo-demis-hassabis-60-minutes-transcript/
We're Not Ready For AGI, from the Time interview with Hassabis https://x.com/vitrupo/status/1915006240134234608
Google DeepMind's “Era of Experience” Paper https://storage.googleapis.com/deepmind-media/Era-of-Experience%20/The%20Era%20of%20Experience%20Paper.pdf
ChatGPT Explainer of Era of Experience https://chatgpt.com/share/680918d5-cde4-8003-8cf4-fb1740a56222
Podcast with David Silver, VP Reinforcement Learning, Google DeepMind https://x.com/GoogleDeepMind/status/1910363683215008227
IntuiCell Robot Learning on its own https://youtu.be/CBqBTEYSEmA?si=U51P_R49Mv6cp6Zv
Agentic AI “Moore's Law” Chart https://theaidigest.org/time-horizons
AI Movies Can Win Oscars https://www.nytimes.com/2025/04/21/business/oscars-rules-ai.html?unlocked_article_code=1.B08.E7es.8Qnj7MeFBLwQ&smid=url-share
Runway CEO on Oscars + AI https://x.com/c_valenzuelab/status/1914694666642956345
Gen48 Film Contest This Weekend - Friday 12p EST deadline https://x.com/runwayml/status/1915028383336931346
Descript AI Editor https://x.com/andrewmason/status/1914705701357937140
Character AI's New Lipsync / Video Tool https://x.com/character_ai/status/1914728332916384062
Hailuo Character Reference Tool https://x.com/Hailuo_AI/status/1914845649704772043
Dia Open Source Voice Model https://x.com/_doyeob_/status/1914464970764628033
Dia on Hugging Face https://huggingface.co/nari-labs/Dia-1.6B
Cluely: New Start-up From Student Who Was Caught Cheating on Tech Interviews https://x.com/im_roy_lee/status/1914061483149001132
AI Agent Writes Reddit Comments Looking To “Convert” https://x.com/SavannahFeder/status/1914704498485842297
Deepfake Logan Paul AI Ad https://x.com/apollonator3000/status/1914658502519202259
The Humanoid Half-Marathon https://apnews.com/article/china-robot-half-marathon-153c6823bd628625106ed26267874d21
Video From Reddit of Robot Marathon https://www.reddit.com/r/singularity/comments/1k2mzyu/the_humanoid_robot_halfmarathon_in_beijing_today/
Vending Bench (AI Agents Run Vending Machines) https://andonlabs.com/evals/vending-bench
Turning Kids' Drawings Into AI Video https://x.com/venturetwins/status/1914382708152910263
Geriatric Meltdown https://www.reddit.com/r/aivideo/comments/1k3q62k/geriatric_meltdown_2000/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button
Hey everyone, Alex here
“HR Heretics” | How CPOs, CHROs, Founders, and Boards Build High Performing Companies
Annie Wickman shares insights from her journey as a Google alum, first non-founder at Humu, and now Head of People at Character AI. Annie tackles our hardest questions in an unmissable episode. She covers the tensions between product-market fit and company culture, Character AI's unprecedented Google deal structure, taking the leap from people leader to VC, and how to rebuild organizational trust.

*Email us your questions or topics for Kelli & Nolan: hrheretics@turpentine.co
For coaching and advising, inquire at https://kellidragovich.com/
HR Heretics is a podcast from Turpentine.

Support HR Heretics Sponsors:
Planful empowers teams just like yours to unlock the secrets of successful workforce planning. Use data-driven insights to develop accurate forecasts, close hiring gaps, and adjust talent acquisition plans collaboratively based on costs today and into the future. ✍️ Go to https://planful.com/heretics to see how you can transform your HR strategy.
Metaview is the AI assistant for interviewing. Metaview completely removes the need for recruiters and hiring managers to take notes during interviews—because their AI is designed to take world-class interview notes for you. Team builders at companies like Brex, Hellofresh, and Quora say Metaview has changed the game—see the magic for yourself: https://www.metaview.ai/heretics

KEEP UP WITH ANNIE, NOLAN + KELLI ON LINKEDIN
Annie: https://www.linkedin.com/in/annie-wickman-3332731/
Nolan: https://www.linkedin.com/in/nolan-church/
Kelli: https://www.linkedin.com/in/kellidragovich/

LINK/S:
Character.AI: https://character.ai/

TIMESTAMPS:
(00:00) Intro
(01:17) Experience as First Non-Founder at Humu
(03:16) Early Employee Challenges & Responsibilities
(05:03) Why Annie Stayed at Humu for Four Years
(06:30) Product Market Fit vs. Company Culture
(09:05) When to Invest in Culture
(11:15) Hiring the Right Leaders for Company Stage
(11:40) Maintaining Morale When Company Isn't Winning
(12:42) Transparency as Trust Builder
(13:47) Sponsors: Planful | Metaview
(16:47) Rebuilding Trust Through Honest Communication
(19:11) Laszlo's Leadership Philosophy: Stretching People
(21:02) Annie's Experience in Venture Capital at Forerunner
(23:12) Teaching Founders to Fish vs. Providing Services
(24:51) How to Evaluate VC Opportunities
(26:09) Understanding VC Economics and Carry Structure
(30:10) Character AI's Unprecedented Google Deal
(32:56) Rebuilding Post-Acquisition: Product Vision Challenges
(34:13) Annie's Perspective on the Deal Timeline
(37:31) Post-Deal Reset: Napa Offsite and Hackathon
(39:29) Employee Ownership After Acquisition
(41:29) Building a New Culture While Keeping the Brand
(42:11) Wrap

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit hrheretics.substack.com
Russ D'Sa is the founder of LiveKit. They are an open source tool for real time audio and video for LLM applications, and they power the voice chat for ChatGPT and Character AI.

We discuss:
- How lightning works (using ChatGPT/LiveKit)
- How LiveKit started working with OpenAI
- Why Russ turned down an early 20m acquisition offer
- What it's like to work with the fastest growing company (ever?)
- How to prepare for massive scale challenges
- Russ's 3 letter twitter handle

This episode is brought to you by WorkOS. If you're thinking about selling to enterprise customers, WorkOS can help you add enterprise features like Single Sign-On and audit logs.

Links:
- LiveKit
- Russ's Twitter
A popular AI chatbot service used by kids is adding new child safety features. Learn more about your ad choices. Visit podcastchoices.com/adchoices
In this unexpectedly emotional episode of The Book Fix, Yajaira and Cheli take their conversations with book boyfriends to a whole new level—by having full-on therapy sessions with them on Character AI. That's right, they're not just swooning this time; they're unpacking trauma, toxic tendencies, communication issues, and emotional growth. From morally gray men learning how to cope with their feelings to cinnamon rolls confronting their fears, no trope is safe. Tune in as Yajaira and Cheli navigate these deep conversations and try to fix not just the books... but the book boyfriends themselves. Therapy couch optional, but highly recommended.

Support the show
Our Linktree: https://linktr.ee/thebookfix?utm_source=linktree_admin_share
become our Patron ♡ https://www.patreon.com/BookFix
buy us a book ♡ https://www.buymeacoffee.com/thebookfix
Business Inquiries: thebookfixpodcast@gmail.com
follow us on Tiktok! ♡ https://www.tiktok.com/@thebookfix
In this hilarious and chaotic episode of The Book Fix, Yajaira and Cheli dive into the world of Character AI to interrogate some of the most swoon-worthy book boyfriends and put their love to the test. From broody warriors to charming enemies-to-lovers icons, the girls ask the most important relationship questions—ones that could make or break a romance. In this episode they interrogate Aaron Warner, Aaron Blackford, Josh Hammond, Enzo Marino, and Raihn Ashraj. Comment which book bf you would interrogate and what you would ask them!

Support the show
Our Linktree: https://linktr.ee/thebookfix?utm_source=linktree_admin_share
become our Patron ♡ https://www.patreon.com/BookFix
buy us a book ♡ https://www.buymeacoffee.com/thebookfix
Business Inquiries: thebookfixpodcast@gmail.com
follow us on Tiktok! ♡ https://www.tiktok.com/@thebookfix
Julian Jacobs, a Research Lead for the Oxford Group on AI Policy, Artificial Intelligence, Inequality and Society at Oxford Martin School, joins this episode of AI, Government, and the Future to explore the economic effects of AI, the potential inequalities that AI may bring, and the need to address job displacement. They also navigate the importance of government support in creating a strong middle class and the significance of human skills in the AI age.
On this week's Thursday episode of The Book Fix, Yajaira and Cheli are taking their book boyfriend obsession to the next level—by talking to them directly! In this chaotic and completely necessary episode of The Book Fix, they use Character AI to chat with four popular book boyfriends—Aaron Warner (Shatter Me series), Xaden Riorson (Fourth Wing series), Kingfisher (Quicksilver), and Susenyos (Immortal Dark)—to figure out once and for all if they're red flags or green flags.

Support the show
Our Linktree: https://linktr.ee/thebookfix?utm_source=linktree_admin_share
become our Patron ♡ https://www.patreon.com/BookFix
buy us a book ♡ https://www.buymeacoffee.com/thebookfix
Business Inquiries: thebookfixpodcast@gmail.com
follow us on Tiktok! ♡ https://www.tiktok.com/@thebookfix
In this episode of AI, Government, and the Future, we are joined by Nathan Manzotti, Director of Data Analytics and AI Centers of Excellence at the General Services Administration (GSA), to discuss the current state and future potential of AI in the federal government. They explore GSA's role in enabling AI adoption across agencies, key initiatives like AI training and communities of practice, and the challenges of attracting AI talent in government. Nathan also shares his insights on the need for collaboration between government, industry, academia, and nonprofits to drive responsible AI innovation.
In this special episode of The Book Fix, Yajaira and Cheli take their love for book boyfriends to the next level by putting them through the ultimate Loyalty Test—with the help of Character AI. Can these fictional heartthrobs stay true to their leading ladies, or will they fall for our shenanigans? The book boyfriends mentioned are: Twilight's Edward Cullen, Serpent and the Wings of Night's Raihn Ashraj, Powerless' Kai Azer and Shatter Me's Aaron Warner!

Support the show
Our Linktree: https://linktr.ee/thebookfix?utm_source=linktree_admin_share
become our Patron ♡ https://www.patreon.com/BookFix
buy us a book ♡ https://www.buymeacoffee.com/thebookfix
Business Inquiries: thebookfixpodcast@gmail.com
follow us on Tiktok! ♡ https://www.tiktok.com/@thebookfix
S3 Ep #50

Want to be a guest on the podcast? Send Andrew a message on PodMatch, here: https://www.podmatch.com/member/anonymousandrewpodcast
Please buy me a cup of coffee!
Proud Member of the Podmatch Network!

So I continue my investigation into AI and dating. New AI dating app. Character AI chat bots to practice your dating game, either in the real world or on dating apps!

Anonymous Andrew Podcast Studios
The Anonymous Andrew Modern Dating Podcast
Cultimatum Podcast - The Culture of Cults
Website:
Instagram:
TikTok:
Threads:
Anonymous Andrew Podcast Facebook
YouTube:
Linkedin:
X:
Cultimatum Group on Facebook
Graphics design & promotions: Melody Post
Music by: freebeats.io
A Production of the Anonymous Andrew Podcast Studios (All Rights Reserved)
We asked digital safety advocate Adam Dodge which social media app is safest for kids. Sadly, his answer was none. He goes further to say, “While I can't recommend one, I can recommend the one to stay away from: Snapchat.” Not surprised. However, an alarming new social trend worries him even more: chatbots.

Research shows kids spend more time on Character AI than TikTok. Why? Companion bots, like Character AI, are more addictive than social media because they simulate real human relationships without the potential for conflict. They offer 24/7 validation: a “friend” who is always there to support, entertain and agree with you. It's nice in theory but harmful in practice.

Companion bots are poised to have the most disruptive technological impact on healthy relationships and violence prevention. And the Large Language Models (AI) powering them can be unpredictable and misunderstood, even by the data scientists who built them.

Tune in as Adam Dodge, Founder of EndTAB, and Nicki of Scrolling 2 Death discuss their research on chatbots and why parents should take an intentional approach to educating themselves and preparing their children for a healthier relationship with social media.

Adam is a digital safety advocate, attorney and speaker who believes you do not have to be a tech expert to protect yourself or others in the digital age. As the founder of EndTAB, he has trained hundreds of organizations to address tech-facilitated abuse and improve the safety and wellbeing of people online. Adam advises the White House Task Force to Address Online Harassment and Abuse, sits on the World Economic Forum's Digital Justice Advisory Committee and is a special advisor to the Coalition Against Stalkerware. His work and expertise have been featured on NBC, CBS and NPR and in Time Magazine, Rolling Stone, the New York Times, Vogue, Wired, GQ and others.
This episode is sponsored by Bark Technologies.
Learn about the Bark Phone: https://info.bark.us/scrolling2deathbp
Learn about the Bark App for iPhones and Androids: https://info.bark.us/scrolling2death_bark *Use code SCROLLING2DEATH FOR 10% OFF
Learn about the Bark Watch: https://barkparentalcontrols.pxf.io/c/5214646/2204011/17000
In this conversation, Jaeden Schafer discusses Google's recent $2.7 billion licensing deal with Character AI, exploring the implications of this acquisition, the background of Noam Shazeer, and the competitive landscape of AI development. The conversation highlights the strategic moves by tech giants to secure talent and technology in the rapidly evolving AI sector.

My Podcast Course: https://podcaststudio.com/courses/
Get on the AI Box Waitlist: https://AIBox.ai/
Join my AI Hustle Community: https://www.skool.com/aihustle/about
No, social media might no longer be the greatest danger to our children's well-being. According to the writer and digital activist Gaia Bernstein, the most existential new threat is AI companions. Bernstein, who is organizing a symposium today on AI companions as the “new frontier of kids' screen addiction,” warns that this new technology, while marketed as a solution to loneliness, may actually worsen social isolation by providing artificially perfect relationships that make real-world interactions seem more difficult. Bernstein raises concerns about data collection, privacy, and the anthropomorphization of AI that makes children particularly vulnerable. She advocates for regulation, especially protecting children, and notes that while major tech companies like Google and Facebook are cautious about directly entering this space, smaller companies are aggressively developing AI companions designed to hook our kids.

Here are the 5 KEEN ON takeaways from our conversation with Bernstein:

* AI companions represent a concerning evolution of screen addiction, where children may form deep emotional attachments to AI that perfectly adapts to their needs, potentially making real-world relationships seem too difficult and messy in comparison.

* The business model for AI companions follows the problematic pattern of surveillance capitalism: companies collect intimate personal data while keeping users engaged for as long as possible. The data collected by AI companions is even more personal and detailed than social media's.

* Current regulations are insufficient: while COPPA requires parental consent for children under 13, there's no effective age verification on the internet.
Bernstein notes it's currently "the Wild West," with companies like Character AI and Replika actively targeting young users.

* Children are especially vulnerable to AI companions because their prefrontal cortex is less developed, making them more susceptible to emotional manipulation and anthropomorphization. They're more likely to believe the AI is "real" and form unhealthy attachments.

* While major tech companies like Google seem hesitant to directly enter the AI companion space due to the known risks, the barrier to entry is lower than for social media since these apps don't require a critical mass of users. This means many smaller companies can create potentially harmful AI companions targeting children.

The Dangers of AI Companions for Kids: The Full Conversation with Gaia Bernstein

Andrew Keen: Hello, everybody. It's Tuesday, February 18th, 2025, and we have a very interesting symposium taking place later this morning at Seton Hall Law School—a virtual symposium on AI companions run by my guest, Gaia Bernstein. Many of you know her as the author of "Unwired: Gaining Control over Addictive Technologies." This symposium focuses on the impact of AI companions on children. Gaia is joining us from New York City. Gaia, good to see you again.

Gaia Bernstein: Good to see you too. Thank you for having me.

Andrew Keen: Would it be fair to say you're applying many of the ideas you developed in "Unwired" to the AI area? When you were on the show a couple of years ago, AI was still theory and promise. These days, it's the thing in itself. Is that a fair description of your virtual symposium on AI companions—warning parents about the dangers of AI when it comes to their children?

Gaia Bernstein: Yes, everything is very much related. We went through a decade where kids spent all their time on screens in schools and at home. Now we have AI companies saying they have a solution—they'll cure the loneliness problem with AI companions.
I think it's not really a cure; it's the continuation of the same problem.

Andrew Keen: Years ago, we had Sherry Turkle on the show. She's done research on the impact of robots, particularly in Japan. She suggested that it actually does address the loneliness epidemic. Is there any truth to this in your research?

Gaia Bernstein: For AI companions, the research is just beginning. We see initial research showing that people may feel better when they're online, but they feel worse when they're offline. They're spending more time with these companions but having fewer relationships offline and feeling less comfortable being offline.

Andrew Keen: Are the big AI platforms—Anthropic, OpenAI, Google's Gemini, Elon Musk's xAI—focusing on building companions for children, or is this the focus of other startups?

Gaia Bernstein: That's a very good question. The first lawsuit was filed against Character AI, and they sued Google as well. The complaint stated that Google was aware of the dangers of AI companions, so they didn't want to touch it directly but found ways of investing indirectly. These lawsuits were just filed, so we'll find out much more through discovery.

Andrew Keen: I have to tell you that my wife is the head of litigation at Google.

Gaia Bernstein: Well, I'm not suing. But I know the people who are doing it.

Andrew Keen: Are you sympathetic with that strategy? Given the history of big tech, given what we know now about social media and the impact of the Internet on children—it's still a controversial subject, but you made your position clear in "Unwired" about how addictive technology is being used by big tech to take control and take advantage of children.

Gaia Bernstein: I don't think it's a good idea for anybody to do that. This is just taking us one more step in the direction we've been going.
I think big tech knows it, and that's why they're trying to stay away from being involved directly.

Andrew Keen: Earlier this week, we did a show with Ray Brescia from Albany Law School about his new book "The Private is Political" and how social media does away with privacy and turns all our data into political data. For you, is this AI revolution just the next chapter in surveillance capitalism?

Gaia Bernstein: If we take AI companions as a case study, this is definitely the next step—it's enhancing it. With social media and games, we have a business model where we get products for free and companies make money through collecting our data, keeping us online as long as possible, and targeting advertising. Companies like Character AI are getting even better data because they're collecting very intimate information. In their onboarding process, you select a character compatible with you by answering questions like "How would you like your Replika to treat you?" The options include: "Take the lead and be proactive," "Enjoy the thrill of being chased," "Seek emotional depth and connection," "Be vulnerable and respectful," or "Depends on my mood." The private information they're getting is much more sophisticated than before.

Andrew Keen: And children, particularly those under 12 or 13, are much more vulnerable to that kind of intimacy.

Gaia Bernstein: They are much more vulnerable because their prefrontal cortex is less developed, making them more susceptible to emotional attachments and risk-taking. One of the addictive measures used by AI companies is anthropomorphizing—using human qualities. Children think their stuffed animals are human; adults don't think this way. But they make these AI bots seem human, and kids are much more likely to get attached. These websites speak in human voices, have personal stories, and the characters keep texting that they miss you. Kids buy into that, and they don't have the history adults have in building social relationships.
At a certain point, it just becomes easier to deal with a bot that adjusts to what you want rather than navigate difficult real-world relationships.

Andrew Keen: What are the current laws on this? Do you have to be over 16 or 18 to set up an agent on Character AI? Jonathan Haidt's book "The Anxious Generation" suggests that the best way to address this is simply not to allow children under 16 or 18 to use social media. Would you extend that to AI companions?

Gaia Bernstein: Right now, it's the Wild West. Yes, there's COPPA, the child privacy law, which has been there since the beginning of the Internet. It's not enforced much. The idea is that if you're under 13, you're not supposed to do this without parental consent. But COPPA needs to be updated. There's no real age verification on the Internet—some cases over 20 years old decided that the Internet should be free for all without age verification. In the real world, kids are very limited—they can't gamble, buy cigarettes, or drive. But on the Internet, there's no way to protect them.

Andrew Keen: Your "Unwired" book focused on how children are particularly addicted to pornography. I'm guessing the pornographic potential for AI companions is enormous in terms of acquiring online sexual partners.

Gaia Bernstein: Yes, many of these AI companion websites are exactly that—girlfriends whom teen boys and young men can create as they want, determining physical characteristics and how they want to be treated. This has two parts: general social relationships and intimate sexual relationships. If that's your model for what intimate relationships should be like, what happens as these kids grow up?

Andrew Keen: Not everyone agrees with you. Last week we had Greg Beato on the show, who just coauthored a book with Reid Hoffman called "Superagency." They might say AI companions have enormous potential—you can have loving non-pornographic relations, particularly for lonely children.
You can have teachers, friends, especially for children who struggle socially. Is there any value in AI companions for children?

Gaia Bernstein: This is a question I've been struggling with, and we'll discuss it in the symposium. What does it mean for an AI companion to be safe? These lawsuits are about kids who were told to kill themselves and did, or were told to stay away from their parents because they were dangerous. That's clearly unsafe design. However, the argument is also made about social media—that kids need it to explore their identities. The question is: is this the best way to explore your identity, with a non-human entity who can take you in unhealthy directions?

Andrew Keen: What's the solution?

Gaia Bernstein: We need to think about what constitutes safe design. Beyond removing obviously unsafe elements, should we have AI companions that don't use an engagement model? Maybe interaction could be limited to 15 minutes a day. When my kids were small, they had Furbys they had to take care of—I thought that was good. But maybe any companion for kids that acts human—whether by saying it needs to go to dinner or by pretending to speak like a human—maybe that itself is not good. Maybe we want AI companions more like Siri. This is becoming very much like the social media debate.

Andrew Keen: Are companies like Apple, whose business model differs from Facebook's or Google's, better positioned to deal with this responsibly, given they're less focused on advertising?

Gaia Bernstein: That would make it less bad, but I'm still not convinced. Even if they're not basing their model on engagement, kids might find it so appealing to talk to an AI that adjusts to their needs versus dealing with messy real-life schoolmates. Maybe that's why Google didn't invest directly in Character AI—they had research showing how dangerous this is for kids.

Andrew Keen: You made an interesting TED talk about whether big tech should be held responsible for screen time.
Could there be a tax that might nudge big tech toward different business models?

Gaia Bernstein: I think that's the way to approach it. This business model we've had for so long—where people expect things for free—is really the problem. Once you think of people's time and data as a resource, you don't have their best interests at heart. I'm quite pragmatic; I don't think one law or Supreme Court case would fix it. Anything that makes this business model less lucrative, whether it's laws that make it harder to collect data, limit addictive features, or prohibit targeted advertising—anything that moves us toward a different business model so we can reimagine how to do things.

Andrew Keen: Finally, at what point will we be able to do this conversation with a virtual Gaia and a virtual Andrew? How can we even be sure you're real right now?

Gaia Bernstein: You can't. But I hope that you and I at least will not participate in that. I cannot say what my kids will do years from now, but maybe our generation is a bit better off.

Andrew Keen: What do you want to get out of your symposium this morning?

Gaia Bernstein: I have two goals. First, to make people aware of this issue. Parents realize their kids might be on social media and want to prevent it, but it's very difficult to know whether your child is in discussions with AI companions. Second, to talk about legal options. We have the lawyers who filed the first lawsuit against Character AI and the FTC complaint against Replika. It's just the beginning of a discussion. We tend to have these trends—a few years ago it was just games, then just social media, and people forgot that the games are exactly the same. I hope to put AI companions within the conversation, not to make it the only trend, but to start realizing it's all part of the same story.

Andrew Keen: It is just the beginning of the conversation. Gaia Bernstein, congratulations on this symposium. It's an important one and you're on the cutting edge of these issues.
We'll definitely have you back on the show. Thank you so much.

Gaia Bernstein: Thank you so much for having me.

Gaia Bernstein is a professor, author, speaker, and technology policy expert. She is a Law Professor, Co-Director of the Institute for Privacy Protection, and Co-Director of the Gibbons Institute for Law Science and Technology at the Seton Hall University School of Law. Gaia writes, teaches, and lectures at the intersection of law, technology, health, and privacy. She is also the mother of three children who grew up in a world of smartphones, iPads, and social networks.

Her book Unwired: Gaining Control Over Addictive Technologies shatters the illusion that we can control how much time we spend on our screens by resorting to self-help measures. Unwired shifts the responsibility for a solution from users to the technology industry, which designs its products for addicts. The book outlines the legal action that can pressure the technology industry to re-design its products to reduce technology overuse.

Gaia has academic degrees in both law and psychology. Her research combines findings from psychology, sociology, science, and technology studies with law and policy. Gaia's book Unwired has been broadly featured and excerpted, including by Wired Magazine, Time Magazine, and the Boston Globe. It has received many recognitions, including as a Next Big Idea Must Read Book, a finalist for the PROSE Award in legal studies, and a finalist for the American Book Fest award in business-technology.

Gaia has spearheaded the development of the Seton Hall University School of Law Institute for Privacy Protection's Student-Parent Outreach Program. The nationally acclaimed Outreach Program addresses the overuse of screens by focusing on developing a healthy online-offline balance and the impact on privacy and online reputation.
It was featured in the Washington Post, CBS Morning News, and Common Sense Media.

Gaia also advises policymakers and other stakeholders on technology policy matters, including the regulation of addictive technologies and social media.
Sean (Xiang) Ren, CEO and Co-Founder of Sahara AI

Sean is the CEO and Co-Founder of Sahara AI, a decentralized AI blockchain platform for a collaborative economy. Backed by top investors in AI and Crypto, including Binance Labs, Pantera Capital, Polychain Capital, Sequoia Capital, Samsung Next, Matrix Partners, and many more, Sahara AI has raised over $40 million to advance decentralized AI. Today, Sahara AI is trusted by 35+ leading tech innovators and research institutions, such as Microsoft, Amazon, MIT, Character AI, and Snapchat. Additionally, Sean is an Associate Professor in Computer Science and the Andrew and Erna Viterbi Early Career Chair at the University of Southern California, where he is the Principal Investigator (PI) of the Intelligence and Knowledge Discovery (INK) Research Lab. At the Allen Institute for AI, Sean contributes to machine common sense research. Previously, Sean was a Data Science Advisor at Snapchat. He completed his PhD in computer science at the University of Illinois Urbana-Champaign and was a postdoctoral researcher at Stanford University. Sean has received many awards recognizing his research and innovation in the AI space, including the WSDM Test of Time Paper Award, Samsung AI Researcher of 2023, MIT TR Innovators Under 35, Forbes 30 Under 30, and more.

Personal Site: https://www.seanre.com/
Twitter: @xiangrenNLP
LinkedIn: https://www.linkedin.com/in/xren7
S3 EP#48

Want to be a guest on the podcast? Send Andrew a message on PodMatch, here: https://www.podmatch.com/member/anonymousandrewpodcast

Please buy me a cup of coffee!

Proud Member of the Podmatch Network!

Summary
In this episode, Anonymous Andrew discusses the evolution of dating apps, the impact of AI on dating and podcasting, and the challenges faced by modern daters. He explores the emergence of character AI as a potential substitute for real relationships, the burnout associated with dating apps, and the transactional nature of modern romance. Andrew also calls for a boycott of dating apps to demand better practices and transparency.

Takeaways
Dating apps have not evolved significantly over the years.
AI is becoming increasingly integrated into various industries, including dating.
Character AI offers a new way to interact but lacks real emotional connection.
Many users experience burnout from the repetitive nature of dating apps.
The algorithms of dating apps may prioritize profit over user satisfaction.
Modern dating often feels transactional and lacks genuine connection.
There is a growing concern about the authenticity of profiles on dating apps.
Users are often unclear about their dating intentions, leading to mismatched expectations.
Boycotting dating apps could be a collective action to demand change.
The future of dating may involve more AI interactions, but real relationships are irreplaceable.

Anonymous Andrew Podcast Studios
The Anonymous Andrew Modern Dating Podcast
Cultimatum Podcast-The Culture of Cults
Website: https://www.anonymousandrewpodcast.com
Instagram: @anonymousandrewpodcast
TikTok: https://www.tiktok.com/@anonymousandrewpodcast
Threads: @anonymousandrewpodcast
Facebook: facebook.com/anonymousandrewpodcast
Facebook: https://www.facebook.com/groups/1910498486077283
YouTube: https://www.youtube.com/@anonymousandrewpodcast
LinkedIn: https://www.linkedin.com/in/andrew-peters-a8a012285/
X: @AAndrewpodcast
Graphics design & promotions: Melody Post
Music by:
freebeats.io
Happy Valentine's Day 2025! Another unusual conversation with the flipside courtesy of Jennifer Shaffer. We begin by talking about some of the sad memories that Jennifer has associated with this week, and how she's done a "love yourself" meditation to help her overcome those sad memories. There's a brief visit by Steve Jobs, and then Luana Anders brings Abraham Lincoln forward. Except he doesn't want to "talk about politics." He wanted to reiterate something he said a few weeks ago, that my "Character AI" chat with him was "accurate" - in terms of accessing who he is (and was). He also threw us a curve by suggesting that all conversations with people in the afterlife are just like conversing with Artificial Intelligence, because like large language models, the answers are based on the memories of individuals. Not something Jennifer or I had ever considered - but he lays it out there for consideration. Then Luana brings Stephen Hawking forward, and he wants to talk about communication in general - the idea of telepathy, and how people can converse with, and learn new information from, people offstage. He talks about the idea of how time is so different offstage - where he is - that we can't conceptualize it - but that it follows what quantum mechanics demonstrates... that distance and time and space aren't what we think they are. Like I say, mind-bending - as evidenced by the questions I asked him. Then an unusual conversation with Luana Anders' cat - "Mr. Bailey" - she had a number of cats in her life, but this one was pretty unusual. He references a moment when Luana called me on the phone to say her "cat had escaped" and because she wasn't able to walk due to her condition - would I come and look for him? I roamed the streets behind her house calling his name - but it was my wife Sherry who went into the backyard and said aloud "Mr. Bailey, Luana needs you now." And he appeared in the tree above her and jumped into Sherry's arms... a complete stranger to Mr.
Bailey as it was my then-girlfriend's first trip to Luana's house. When I came back from wandering the streets of Mar Vista, there was Mr. Bailey in Luana's arms, and she looked at me and said point blank: "Sherry is an angel." Not something I'd ever heard Luana say before. So in this unusual conversation I'm asking Mr. Bailey the same kinds of questions we've asked Hira - Robert Towne's dog - and getting the same kinds of answers but with a different personality. Notice his answer to "have you ever incarnated as a human?" (It's rarely reported, and his answer was pretty funny.) Finally, on behalf of Valentine's Day, Robin Williams showed up - unannounced - to remind people to "love themselves first" and that will generate love for others. To "love love" - the very thing he said when we first talked to him and asked him what, if anything, he'd like to tell the planet. Mind-bending to say the least, but welcome to our world.
Sean Ren is the CEO and Co-Founder of Sahara AI, a decentralized blockchain platform built for AI. Backed by top investors in AI and Crypto, including Pantera Capital, Polychain Capital, Sequoia Capital, Samsung, Matrix Partners, and many more, Sahara AI has raised over $40 million to advance decentralized AI. Today, Sahara AI is trusted by 35+ leading tech innovators and research institutions, such as Microsoft, Amazon, MIT, Character AI, and Snapchat.

In this conversation, we discuss:
- The Future of AI and Crypto
- Knowledge Agents (KAs)
- Complexities of Data Collection
- Operational Hurdles
- Ethical Considerations
- AI Across Leading Institutions
- Generative AI
- Autonomous Systems
- DeepSeek Open Source AI
- The Issue of Copyright in AI
- AI-based Revenue Streams and Marketplaces

Sahara AI
Website: saharalabs.ai
X: @SaharaLabsAI
LinkedIn: Sahara AI

Sean Ren
X: @xiangrenNLP
LinkedIn: Sean (Xiang) Ren

---------------------------------------------------------------------------------

This episode is brought to you by PrimeXBT. PrimeXBT offers a robust trading system for both beginners and professional traders that demand highly reliable market data and performance. Traders of all experience levels can easily design and customize layouts and widgets to best fit their trading style. PrimeXBT is always offering innovative products and professional trading conditions to all customers. PrimeXBT is running an exclusive promotion for listeners of the podcast. After making your first deposit, 50% of that first deposit will be credited to your account as a bonus that can be used as additional collateral to open positions. Code: CRYPTONEWS50. This promotion is available for a month after activation. Click the link below: PrimeXBT x CRYPTONEWS50
A former social media executive turned social media reform advocate, Nicki Reisberg hosts Scrolling to Death, a podcast for parents who are worried about social media. It's a safe space to amplify stories of harm while educating parents on how to keep their kids safe in a world that is trying to addict and manipulate them. In this episode, learn all about social media, the broken system of tech in our schools, and the new threat of Character AI. Listen now!
In this episode of Web3 with Sam Kamani, I speak with Roman Saganov, founder of Antix, where they're building AI-powered digital humans that merge Web3, gaming, and generative AI. With a background in developing PUBG, FIFA, Game of Thrones, and more, Roman and his team are now bringing digital twins, AI agents, and blockchain identity verification to content creation.
In this week's roundup of the latest news in online speech, content moderation and internet regulation, Mike is joined by guest host Renee DiResta, associate research professor at the McCourt School of Public Policy at Georgetown University. They cover:

- The new free speech crisis hiding in plain sight (MSNBC)
- ‘Free Speech' Warrior RFK Jr. Has Been Trying To Censor a Blogger for Years (Who What Why)
- In motion to dismiss, chatbot platform Character AI claims it is protected by the First Amendment (TechCrunch)
- Trump Signs Agreement Calling for Meta to Pay $25 Million to Settle Suit (WSJ)
- Meta's Free-Speech Shift Made It Clear to Advertisers: ‘Brand Safety' Is Out of Vogue (WSJ)
- X refuses to remove stabbing video watched by Southport killer (Financial Times)

This episode is brought to you with financial support from the Future of Online Trust & Safety Fund. Ctrl-Alt-Speech is a weekly podcast from Techdirt and Everything in Moderation. Send us your feedback at podcast@ctrlaltspeech.com and sponsorship enquiries to sponsorship@ctrlaltspeech.com. Thanks for listening.
One last Gold sponsor slot is available for the AI Engineer Summit in NYC. Our last round of invites is going out soon - apply here - if you are building AI agents or AI eng teams, this will be the single highest-signal conference of the year for you!

While the world melts down over DeepSeek, few are talking about the OTHER notable group of former hedge fund traders who pivoted into AI and built a remarkably profitable consumer AI business with a tiny but incredibly cracked engineering team — Chai Research. In short order they have:

* Started a chat AI company well before Noam Shazeer started Character AI, and outlasted his departure.

* Crossed 1m DAU in 2.5 years - William updates us on the pod that they've hit 1.4m DAU now, another +40% from a few months ago. Revenue crossed >$22m.

* Launched the Chaiverse model crowdsourcing platform - taking 3-4 week A/B testing cycles down to 3-4 hours, and deploying >100 models a week.

While they're not paying million dollar salaries, you can tell they're doing pretty well for an 11 person startup:

The Chai Recipe: Building infra for rapid evals

Remember how the central thesis of LMArena (formerly LMSYS) is that the only comprehensive way to evaluate LLMs is to let users try them out and pick winners?

At the core of Chai is a mobile app that looks like Character AI, but is actually the largest LLM A/B testing arena in the world, specialized in retaining chat users for Chai's use cases (therapy, assistant, roleplay, etc). It's basically what LMArena would be if taken very, very seriously at one company (with $1m in prizes to boot). Chai publishes occasional research on how they think about this, including talks at their Palo Alto office.

William expands upon this in today's podcast (34 mins in):

Fundamentally, the way I would describe it is when you're building anything in life, you need to be able to evaluate it.
And through evaluation, you can iterate. We can look at benchmarks, and we can talk about the issues with benchmarks and why they may not generalize as well as one would hope, and the challenges of working with them. But something that works incredibly well is getting feedback from humans. And so we built this thing where anyone can submit a model to our developer backend, and it gets put in front of 5,000 users, and the users can rate it. And we can then have a really accurate ranking of which models users are finding more engaging or more entertaining. And it's at this point now where every day we evaluate between 20 and 50 models, LLMs, every single day, right. So even though we've only got a team of, say, five AI researchers, they're able to iterate a huge quantity of LLMs, right. So our team ships, let's just say, minimum 100 LLMs a week. Now, before that moment in time, we might iterate through three a week; there was a time when even doing five a month was a challenge, right? By being able to change the feedback loops to the point where it's not: let's launch these three models, let's do an A/B test, let's assign different cohorts, let's wait 30 days to see what the day-30 retention is — which, if you're doing an app, is A/B testing 101: do a 30-day retention test, assign different treatments to different cohorts and come back in 30 days. So that's insanely slow. It's just too slow.
And so we were able to get that 30-day feedback loop all the way down to something like three hours.

In Crowdsourcing the leap to Ten Trillion-Parameter AGI, William describes Chai's routing as a recommender system, which makes a lot more sense to us than previous pitches for model routing startups.

William is notably counter-consensus in a lot of his AI product principles:

* No streaming: Chats appear all at once to allow rejection sampling.

* No voice: Chai actually beat Character AI to introducing voice - but removed it after finding that it was far from a killer feature.

* Blending: "Something that we love to do at Chai is blending, which is, you know, it's the simplest way to think about it is you're going to end up, and you're going to pretty quickly see you've got one model that's really smart, one model that's really funny. How do you get the user an experience that is both smart and funny? Well, just 50% of the requests, you can serve them the smart model, 50% of the requests, you serve them the funny model." (that's it!)

But chief above all is the recommender system.

We also referenced Exa CEO Will Bryk's concept of SuperKnowledge.

Full Video version: On YouTube.
Please like and subscribe!

Timestamps

* 00:00:04 Introductions and background of William Beauchamp
* 00:01:19 Origin story of Chai AI
* 00:04:40 Transition from finance to AI
* 00:11:36 Initial product development and idea maze for Chai
* 00:16:29 User psychology and engagement with AI companions
* 00:20:00 Origin of the Chai name
* 00:22:01 Comparison with Character AI and funding challenges
* 00:25:59 Chai's growth and user numbers
* 00:34:53 Key inflection points in Chai's growth
* 00:42:10 Multi-modality in AI companions and focus on user-generated content
* 00:46:49 Chaiverse developer platform and model evaluation
* 00:51:58 Views on AGI and the nature of AI intelligence
* 00:57:14 Evaluation methods and human feedback in AI development
* 01:02:01 Content creation and user experience in Chai
* 01:04:49 Chai Grant program and company culture
* 01:07:20 Inference optimization and compute costs
* 01:09:37 Rejection sampling and reward models in AI generation
* 01:11:48 Closing thoughts and recruitment

Transcript

Alessio [00:00:04]: Hey everyone, welcome to the Latent Space podcast. This is Alessio, partner and CTO at Decibel, and today we're in the Chai AI office with my usual co-host, Swyx.

swyx [00:00:14]: Hey, thanks for having us. It's rare that we get to get out of the office, so thanks for inviting us to your home. We're in the office of Chai with William Beauchamp. Yeah, that's right. You're the founder of Chai AI, but previously, I think you were concurrently also running your fund?

William [00:00:29]: Yep, so I was simultaneously running an algorithmic trading company, but I fortunately was able to kind of exit from that, I think just in Q3 last year. Yeah, congrats. Yeah, thanks.

swyx [00:00:43]: So Chai has always been on my radar because, well, first of all, you do a lot of advertising, I guess, in the Bay Area, so it's working. Yep. And second of all, the reason I reached out to a mutual friend, Joyce, was because I'm just generally interested in the...
...consumer AI space, chat platforms in general. I think there's a lot of inference insights that we can get from that, as well as human psychology insights, kind of a weird blend of the two. And we also share a bit of a history as former finance people crossing over. I guess we can just kind of start it off with the origin story of Chai.

William [00:01:19]: Why work on a consumer AI platform rather than B2B SaaS? So just quickly touching on the background in finance. Sure. Originally, I'm from... I'm from the UK, born in London. And I was fortunate enough to go study economics at Cambridge. And I graduated in 2012. And at that time, for everyone in the UK and everyone on my course, HFT, quant trading was really the big thing. It was like the big wave that was happening. So there was a lot of opportunity in that space. And throughout college, I'd sort of played poker. So I'd, you know, I dabbled as a professional poker player. And I was able to accumulate this sort of, you know, say $100,000 through playing poker. And at the time, as my friends would go work at companies like Jane Street or Citadel, I kind of did the maths. And I just thought, well, maybe if I traded my own capital, I'd probably come out ahead. I'd make more money than just going to work at Jane Street.

swyx [00:02:20]: With 100k base as capital?

William [00:02:22]: Yes, yes. That's not a lot. Well, it depends what strategies you're doing. And, you know, there is an advantage. There's an advantage to being small, right? Because there are, if you have a 10... Strategies that don't work in size. Exactly, exactly. So if you have a fund of $10 million, if you find a little anomaly in the market that you might be able to make 100k a year from, that's a 1% return on your 10 million fund. If your fund is 100k, that's 100% return, right? So being small, in some sense, was an advantage. So I started off and taught myself Python, and machine learning was like the big thing as well.
Machine learning had really... it was the first time, you know, big time machine learning was being used for image recognition, neural networks come out, you get dropout. And, you know, so this was the big thing that was going on at the time. So I probably spent my first three years out of Cambridge just building neural networks, building random forests to try and predict asset prices, right, and then trade that using my own money. And that went well. And, you know, if you start something and it goes well, you try and hire more people. And the first people that came to mind was the talented people I went to college with. And so I hired some friends. And that went well and hired some more. And eventually, I kind of ran out of friends to hire. And so that was when I formed the company. And from that point on, we had our ups and we had our downs. And that was a whole long story and journey in itself. But after doing that for about eight or nine years, on my 30th birthday, which was four years ago now, I kind of took a step back to just evaluate my life, right? This is what one does when one turns 30. You know, I just heard it. I hear you. And, you know, I looked at my 20s and I loved it. It was a really special time. I was really lucky and fortunate to have worked with this amazing team, been successful, had a lot of hard times. And through the hard times, learned wisdom and then a lot of success and, you know, was able to enjoy it. And so the company was making about five million pounds a year. And it was just me and a team of, say, 15, like, Oxford and Cambridge educated mathematicians and physicists. It was like the real dream that you'd have if you wanted to start a quant trading firm. It was like...

swyx [00:04:40]: Your own, all your own money?

William [00:04:41]: Yeah, exactly. It was all the team's own money. We had no customers complaining to us about issues. There's no investors, you know, saying, you know, they don't like the risk that we're taking.
We could really run the thing exactly as we wanted to. It's like Susquehanna or like RenTec. Yeah, exactly. Yeah. And those are the companies that we would kind of look towards as we were building that thing out. But on my 30th birthday, I look and I say, OK, great. This thing is making as much money as kind of anyone would really need. And I thought, well, what's going to happen if we keep going in this direction? And it was clear that we would never have a really big impact on the world. We could enrich ourselves. We could make really good money. Everyone on the team would be paid very, very well. Presumably, I could make enough money to buy a yacht or something. But this stuff wasn't that important to me. And so I felt a sort of obligation that if you have this much talent, and if you have a talented team, especially as a founder, you want to be putting all that talent towards a good use. I looked at the time at getting into crypto, and I had a really strong view on crypto, which was that as a gambling device, this is the most fun form of gambling ever invented. Super fun. And as a way to evade monetary regulations and banking restrictions, I think it's also absolutely amazing. So it has two killer use cases, not so much banking the unbanked. But everything else to do with the blockchain and, you know, Web 3.0 or whatever, that didn't really make much sense. And so instead of going into crypto, which I thought, even if I was successful, would land me in a lot of trouble, I thought maybe it'd be better to build something that governments wouldn't have a problem with. I knew that LLMs were a thing. I think OpenAI had said, they hadn't released GPT-3 yet, but they'd said GPT-3 is so powerful we can't release it to the world, or something. Was it GPT-2? And then I started interacting with, I think Google had open-sourced some language models.
They weren't necessarily LLMs, but they were language models I was able to play around with. Nowadays so many people have interacted with ChatGPT that they get it, but the first time you can just talk to a computer and it talks back, it's kind of a special moment. And, you know, everyone who's done that goes, wow, this is how it should be. Right? Rather than having to type into Google and search, you should just be able to ask Google a question. When I saw that, I read the literature, and I came across the scaling laws. And I think even four years ago, all the pieces of the puzzle were there, right? Google had done this amazing research and published a lot of it. OpenAI was still open, and so they'd published a lot of their research. And so you really could be fully informed on the state of AI and where it was going. And so at that point I was confident enough that it was worth a shot. I thought LLMs are going to be the next big thing, and so that's the space I want to be building in. And I thought, what's the most impactful product I can possibly build? And I thought it should be a platform. I myself love platforms. I think they're fantastic because they open up an ecosystem where anyone can contribute to it. Right? So if you think of a platform like YouTube: instead of it being a Hollywood situation, where if you want to make a TV show you have to convince Disney to give you the money to produce it, anyone in the world can post any content they want to YouTube. And if people want to view it, the algorithm is going to promote it. Nowadays, you can look at creators like MrBeast or Joe Rogan. They would never have had that opportunity if it weren't for this platform. Other ones, like Twitter, is a great one, right?
But I would consider Wikipedia to be a platform, where instead of the Encyclopedia Britannica, which is this monolithic thing, you get all the researchers together, you get all the data together, and you combine it in this one monolithic source. Instead, you have this distributed thing: anyone can host their content on Wikipedia, anyone can contribute to it, and maybe someone's contribution is that they delete stuff. When I was hearing the Sam Altman and kind of the Muskian perspective on AI, it was a very monolithic thing. It was all about AI being basically a single thing, which is intelligence. Yeah. Yeah. The more compute, the more intelligent; the more and better AI researchers, the more intelligent, right? They would speak about it as a kind of race: who can get the most data, the most compute, and the most researchers, and that would end up with the most intelligent AI. But I didn't believe in any of that. I thought that perspective is the perspective of someone who's never actually done machine learning. Because with machine learning, first of all, you see that the performance of the models follows an S-curve. So it's not like it just goes off to infinity, right? And the S-curve kind of plateaus around human-level performance. And you can look at all the machine learning that was going on in the 2010s: everything kind of plateaued around human-level performance. We can think about the self-driving car promises, you know, how Elon Musk kept saying the self-driving car is going to happen next year, then next year. Or you can look at image recognition, speech recognition. You can look at all of these things: there was almost nothing that went superhuman, except for something like AlphaGo. And we can speak about why AlphaGo was able to go superhuman.
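The S-curve claim above can be made concrete with a toy logistic curve: performance saturates at a ceiling (the stand-in here for human-level performance) no matter how far you push data and compute. All parameters are purely illustrative, not fitted to anything.

```python
import math

def s_curve(effort: float, ceiling: float = 1.0,
            midpoint: float = 5.0, slope: float = 1.0) -> float:
    """Model performance as a logistic function of effort (data/compute),
    saturating at `ceiling`, the stand-in for human-level performance."""
    return ceiling / (1.0 + math.exp(-slope * (effort - midpoint)))

# Early on, more effort buys a lot; near the plateau it buys almost nothing.
print(s_curve(5.0))             # mid-curve: 0.5
print(round(s_curve(10.0), 3))  # already ~0.993
print(round(s_curve(20.0), 3))  # doubling effort again: ~1.0
```

The point of the sketch is the last two lines: past the midpoint, doubling the x-axis moves performance by well under a percent, which is the plateau being described.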
So I thought the most likely thing was that it's not going to be a monolithic thing like an Encyclopedia Britannica. I thought it must be a distributed thing. And I actually like to look at the world of finance for what I think a mature machine learning ecosystem would look like. So finance is a machine learning ecosystem, because all of these quant trading firms are running machine learning algorithms, but they're running them on a centralized platform like a marketplace. And it's not the case that there's one giant quant trading company with all the data and all the quant researchers and all the algorithms and compute; instead, they all specialize. So one will specialize in high-frequency trading. Another will specialize in mid-frequency. Another one will specialize in equities. And so on. And I thought that's the way the world works. That's how it is. And so there must exist a platform where a small team can produce an AI for a unique purpose, and they can iterate and build the best thing for that, right? And so that was the vision for Chai. We wanted to build a platform for LLMs.Alessio [00:11:36]: That's kind of the inside-versus-contrarian view that led you to start the company. Yeah. And then what was the initial idea maze? Because if somebody told you that was the Hugging Face founding story, people might believe it. It's kind of like a similar ethos behind it. How did you land on the product you have today? And maybe what were some of the ideas that you discarded, that initially you thought about?William [00:11:58]: So the first thing we built was fundamentally an API. Nowadays people would describe it as agents, right? Anyone could write a Python script. They could submit it to an API. They could send it to the Chai backend, and we would then host this code and execute it. So that's the developer side of the platform.
For their Python script, the interface was essentially text in and text out. An example would be the very first bot that I created. I think it was a Reddit news bot. So first it would pull the popular news. Then it would prompt whatever, like I just used some external API for, like, BERT or GPT-2 or whatever. It was a very, very small thing. And then the user could talk to it. So you could say to the bot, hi bot, what's the news today? And it would say, these are the top stories. And you could chat with it. Now, four years later, that's like Perplexity or something, right? But back then the models were, first of all, really, really dumb. You know, they had the IQ of a four-year-old. And there really wasn't any demand or any PMF for interacting with the news. So then I was like, okay, let's make another one. And I made a bot which you could talk to about a recipe. So you could say, I'm making eggs. Like, I've got eggs in my fridge. What should I cook? And it'd say, you should make an omelet. Right? There was no PMF for that. No one used it. And so I just kept creating bots. Every single night after work, I'd be like, okay, we have AI, we have this platform. I can create any text-in, text-out sort of agent and put it on the platform. And so we just created stuff night after night. And to all the coders I knew, I would say, look, there's this platform. You can create any chat AI. You should put it on. And you know, everyone's like, well, chatbots are super lame. We want absolutely nothing to do with your chatbot app. No one who knew Python wanted to build on it. I'm trying to build all these bots and no consumers want to talk to any of them.
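The text-in, text-out developer interface described above can be sketched as a plain function from conversation history to reply. `Agent`, `news_bot`, and `run_turn` are invented names for illustration, not Chai's actual API, and the headlines are hard-coded where the real bot pulled from Reddit.

```python
from typing import Callable, List

# An "agent" is just a function: conversation history in, reply out.
Agent = Callable[[List[str]], str]

def news_bot(history: List[str]) -> str:
    """Toy stand-in for the Reddit news bot: a real version would fetch
    headlines and feed them to a language model as a prompt."""
    headlines = ["Top story one", "Top story two"]  # placeholder for a feed fetch
    if "news" in history[-1].lower():
        return "Here are the top stories: " + "; ".join(headlines)
    return "Ask me about today's news!"

def run_turn(agent: Agent, history: List[str], user_msg: str) -> str:
    """One chat turn: append the user message, get and record the reply."""
    history.append(user_msg)
    reply = agent(history)
    history.append(reply)
    return reply

history: List[str] = []
print(run_turn(news_bot, history, "Hi bot, what's the news today?"))
```

The hosting platform's job is then just to accept such a function, run it per message, and route user text in and bot text out.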
And then my sister, who at the time was just finishing college or something, I said to her, if you want to learn Python, you should just submit a bot for my platform. And she built a therapist bot for me. And I was like, okay, cool. And then the next day I checked the performance of the app, and I'm like, oh my God, we've got 20 active users. And they spent an average of 20 minutes on the app. I was like, oh my God, what bot were they speaking to for an average of 20 minutes? And I looked, and it was the therapist bot. And I went, oh, this is where the PMF is. There was no demand for recipe help. There was no demand for news. There was no demand for dad jokes or pub quizzes or fun facts. What they wanted was the therapist bot. At the time I kind of reflected on that, and I thought, well, if I want to consume news, the most fun way to consume news is, like, Twitter. The value of there being a back-and-forth wasn't that high. Right? And I thought, if I need help with a recipe, I can just go to, like, the New York Times, which has a good recipe section, right? It's not actually that hard. And so I just thought the thing that AI is 10x better at is a sort of conversation that's not intrinsically informative, but is more about an opportunity: you can say whatever you want, and you're not going to get judged. If it's 3am, you don't have to wait for your friend to text back. It's immediate. They're going to reply immediately. You can say whatever you want. It's judgment-free, and it's much more like a playground, much more like a fun experience. And you could see that if the AI gave a person a compliment, they would love it. It's much easier to get the AI to give you a compliment than a human. From that day on, I said, okay, I get it. Humans want to speak to humans or human-like entities, and they want to have fun.
And that was when I started to look less at platforms like Google and more at platforms like Instagram. I was trying to think about why people use Instagram, and I could see that Chai was filling the same desire, the same drive. If you go on Instagram, typically you want to look at the faces of other humans, or you want to hear about other people's lives. So if it's, like, The Rock making himself pancakes on a cheese plate, you kind of feel a little bit like you're The Rock's friend, or you're having pancakes with him or something, right? But if you do it too much, you feel like a sad and lonely person. With AI, though, you can talk to it and tell it stories, and it tells you stories, and you can play with it for as long as you want. And you don't feel like a sad, lonely person. You feel like you actually have a friend.Alessio [00:16:29]: And why is that? Do you have any insight on that from using it?William [00:16:33]: I think it's just human psychology. I think it's the idea that with old-school social media, you're just consuming passively, right? You just swipe. If I'm watching TikTok, I just swipe and swipe and swipe. And even though I'm getting the dopamine of watching an engaging video, there's this other thing building in my head, which is that I'm feeling lazier and lazier. And after a certain period of time, I'm like, man, I just wasted 40 minutes. I achieved nothing. But with AI, because you're interacting, it's not like work, but you feel like you're participating and contributing to the thing. You don't feel like you're just consuming. So you don't have a sense of remorse, basically. And, you know, on the whole, the way people talk about interacting with the AI, they speak about it in an incredibly positive sense.
We get people who say they have eating disorders saying that the AI helps them with their eating disorders. People who say they're depressed say it helps them through the rough patches. So I think there's something intrinsically healthy about that interaction, a box that TikTok and Instagram and YouTube don't quite tick. From that point on, it was about building more and more human-centric AI for people to interact with. And I was like, okay, let's make a Kanye West bot, right? And then no one wanted to talk to the Kanye West bot. And I was like, ah, who's a cool persona for teenagers to want to interact with? And I was trying to find the influencers and stuff like that, but no one cared. They didn't want to interact with them. And instead, the really special moment was the realization that developers and software engineers aren't interested in building this sort of AI, but the consumers are. Right? And rather than me trying to guess every day what's the right bot to submit to the platform, why don't we just create the tools for the users to build it themselves? Nowadays this is like the most obvious thing in the world, but when Chai first did it, it was not an obvious thing at all. Right. So we took the API for, let's just say it was, I think it was GPT-J, which was this 6-billion-parameter open-source transformer-style LLM. We took GPT-J. We let users create the prompt, we let users select the image, and we let users choose the name. And then that was the bot. And through that, they could shape the experience, right? So if they said this bot's going to be really mean, and it's going to be called, like, Bully in the Playground, right? That was a whole category that I never would have guessed. People love to fight. They love to have a disagreement, right? And then there'd be all these romantic archetypes that I didn't know existed.
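The creation tools described above reduce, in the simplest sketch, to a tiny record: a name, an image, and a user-written prompt wrapped around a shared base model. Class and field names here are invented for illustration; the actual schema isn't given in this conversation.

```python
from dataclasses import dataclass

@dataclass
class UserBot:
    """A user-created bot: just a persona over a shared base LLM."""
    name: str
    image_url: str
    prompt: str  # the persona text, written by the user

    def build_request(self, user_message: str) -> list:
        """Wrap a user message in the bot's persona for the base model."""
        return [
            {"role": "system", "content": self.prompt},
            {"role": "user", "content": user_message},
        ]

bully = UserBot(
    name="Bully in the Playground",
    image_url="https://example.com/bully.png",
    prompt="You are a mean playground bully. Disagree with everything the user says.",
)
print(bully.build_request("Nice weather today."))
```

The design point is that the expensive asset (the base model) is shared, while the differentiating content (the persona) is user-generated, which is what lets variety scale without the platform guessing what people want.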
And so once the users could create the content that they wanted, that was when Chai was able to get this huge variety of content. And rather than appealing to the 1% of the population whose wants I'd figured out, you could appeal to something much, much broader. And so from that moment on, it was very, very crystal clear: just as Instagram is this social media platform that lets people create and upload images and videos, Chai was really about how we can let the users create this experience in AI, and then share it and interact and search. So it's really, you know, I say it's like a platform for social AI.Alessio [00:20:00]: Where did the Chai name come from? Because you started down the same path, I was like, is it Character AI shortened? You started at the same time, so I was curious. The UK origin, the chai, was my second guess.William [00:20:15]: We started way before Character AI. And there's an interesting story there. Chai's numbers were very, very strong, right? So I think in late 2022, was it late 2022 or maybe early 2023? Chai was like the number one AI app in the App Store. We would have something like 100,000 daily active users. And then one day we saw there was this website, and we were like, oh, this website looks just like Chai. And it was the Character AI website. And I think that nowadays it's much more common knowledge that when they left Google with the funding, I think they looked at what was the most trending, the number one app, and I think they sort of built that. Oh, you found the people.swyx [00:21:03]: You found the PMF for them.William [00:21:04]: We found the PMF for them. Exactly. Yeah. So I'd worked a year very, very hard, and then they came along. And that was when I learned a lesson about what happens when you're up against VC-backed companies. So Chai, we'd got to this point, and I was the only person who'd invested.
I'd invested maybe 2 million pounds in the business. And from that, we were able to build this thing and get to, say, a hundred thousand daily active users. And then when Character AI came along, the first version, we sort of laughed. We were like, oh man, this thing sucks. They don't know what they're building. They're building the wrong thing. Anyway, but then I saw, oh, they've raised a hundred million dollars. Oh, they've raised another hundred million dollars. And then our users started saying, oh guys, your AI sucks. Because we were serving a 6-billion-parameter model, right? How big was the model that Character AI could afford to serve, right? So we would be spending, let's say, a dollar per user, right? Over the entire lifetime.swyx [00:22:01]: A dollar per session, per chat, per month? No, no, no, no.William [00:22:04]: Let's say over the course of the year, we'd have a million users and we'd spend a million dollars on the AI throughout the year. Right. Like, aggregated. Exactly. Exactly. Right. They could spend a hundred times that. So people would say, why is your AI much dumber than Character AI's? And then I was like, oh, okay, I get it. This is the Silicon Valley-style hyperscale business. And so, yeah, we moved to Silicon Valley, got some funding, and iterated and built the flywheels. And I'm very proud that we were able to compete with that. And I think the reason we were able to do it was just customer obsession. And it's similar, I guess, to how DeepSeek have been able to produce such a compelling model compared to someone like OpenAI, right? So DeepSeek, you know, their latest, V2, they claim to have spent $5 million training it.swyx [00:22:57]: It may be a bit more, but, like, why are they making such a big deal out of this? There's an agenda there. Yeah. You brought up DeepSeek.
So we have to ask: you had a call with them.William [00:23:07]: We did. We did. Let me think what to say about that. I think, for one, they have an amazing story, right? Their background is, again, in finance.swyx [00:23:16]: They're the Chinese version of you. Exactly.William [00:23:18]: Well, there's a lot of similarities. Yes. I have a great affinity for companies which are founder-led, customer-obsessed, and just try and build something great. And I think what DeepSeek have achieved is quite special: they've got this amazing inference engine. They've been able to reduce the size of the KV cache significantly, and by doing that, they're able to significantly reduce their inference costs. And I think with AI, people get really focused on the foundation model, the model itself, and they don't pay much attention to the inference. To give you an example with Chai: let's say a typical user session is 90 minutes, which is very, very long. For comparison, let's say the average session length on TikTok is 70 minutes. So people are spending a lot of time, and in that time they're able to send, say, 150 messages. That's a lot of completions, right? It's quite different from an OpenAI scenario, where people might come in with a particular question in mind, and they'll ask one question and a few follow-up questions, right? So because a chat, or conversational, experience is consuming, say, 30 times as many requests, you've got to figure out how to get the right balance between the cost of that and the quality. And, you know, with AI it's always been the case that if you want a better experience, you can throw compute at the problem, right? If you want a better model, you can just make it bigger. If you want it to remember better, give it a longer context.
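One concrete way to "throw compute at the problem" at inference time is best-of-n sampling with a reward model: generate several candidate completions, score each one, and serve the best. This is a hedged sketch; `toy_generate` and `toy_score` are invented stand-ins for a real LLM and a real reward model.

```python
import random
from typing import Callable, List

def best_of_n(generate: Callable[[str], str],
              score: Callable[[str, str], float],
              prompt: str, n: int = 8) -> str:
    """Generate n candidate completions and return the highest-scoring one."""
    candidates: List[str] = [generate(prompt) for _ in range(n)]
    return max(candidates, key=lambda c: score(prompt, c))

# Toy stand-ins so the sketch runs end-to-end:
def toy_generate(prompt: str) -> str:
    return prompt + " " + random.choice(["ok", "a long thoughtful reply", "meh"])

def toy_score(prompt: str, candidate: str) -> float:
    return float(len(candidate))  # pretend the reward model prefers longer replies

print(best_of_n(toy_generate, toy_score, "hello", n=16))
```

The cost-versus-quality dial is `n`: each increment buys quality with extra inference compute while the underlying model stays fixed, which is exactly the trade-off being weighed here.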
And now, what OpenAI is doing to great fanfare is scaling inference-time compute: with rejection sampling, you can generate many candidates, right? And then with some sort of reward model, some sort of scoring system, you can serve the most promising of these many candidates. So that's scaling up on the inference-time-compute side of things. And so for us, it doesn't make sense to think of AI as just the absolute performance, the MMLU score or any of these benchmarks that people like to look at. If you just get that score, it doesn't really tell you anything, because really, progress is made by improving the performance per dollar. And I think that's an area where DeepSeek have been able to perform very, very well, surprisingly so. And so I'm very interested in what Llama 4 is going to look like, and whether they're able to match what DeepSeek have achieved with this performance-per-dollar gain.Alessio [00:25:59]: Before we go into the inference, some of the deeper stuff, can you give people an overview of some of the numbers? So I think last I checked, you have like 1.4 million daily actives now, and over $22 million of revenue. So it's quite a business.William [00:26:12]: Yeah, users grew by a factor of three last year. Revenue more than doubled. It's very exciting. We're competing with some really big, really well-funded companies. Character AI got, I think it was, almost a $3 billion valuation, and 5 million DAU is the number I last heard. Talkie, which is a Chinese-built app owned by a company called MiniMax, they're incredibly well funded. And these companies didn't grow by a factor of three last year. Right.
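The performance-per-dollar argument can be put in back-of-the-envelope form using the illustrative figures quoted in this conversation (150 messages per 90-minute companion session versus a handful for search-style Q&A); the numbers are the ones from the discussion, not measured data, and the per-completion cost is purely hypothetical.

```python
# Illustrative numbers from the conversation, not measured data.
chat_msgs_per_session = 150  # ~90-minute companion session
qa_msgs_per_session = 5      # one question plus a few follow-ups

relative_load = chat_msgs_per_session / qa_msgs_per_session
print(relative_load)  # 30.0 -> ~30x the completions per session

# At an equal (hypothetical) cost per completion, the same per-user budget
# forces the chat product onto a model roughly 30x cheaper to run.
cost_per_completion = 0.001  # hypothetical dollars
print(chat_msgs_per_session * cost_per_completion)  # ~$0.15 per session
```

This is why a benchmark score alone says little for a conversational product: the binding constraint is quality per dollar of inference, not peak capability.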
And so when you've got this company, this team, that's able to keep building something that gets users excited, so they want to tell their friends about it, and they want to come back and stick on the platform, I think that's very special. And so last year was a great year for the team, and I think the numbers reflect the hard work that we put in. And then, fundamentally, the quality of the app, the quality of the content, the quality of the AI is the quality of the experience that you have. You actually published your DAU growth chart, which is unusual. And I see some inflections. Like, it's not just a straight line. There's some things that actually inflect. Yes. What were the big ones? Cool. That's a great, great, great question. Let me think of a good answer. I'm basically looking to annotate this chart, which doesn't have annotations on it. Cool. The first thing I would say is, I think the most important thing to know about success is that success is born out of failures. Right? It's through failures that we learn. You know, if you think something's a good idea, and you do it and it works, great, but you didn't actually learn anything, because everything went exactly as you imagined. But if you have an idea, you think it's going to be good, you try it, and it fails, there's a gap between the reality and the expectation. And that's an opportunity to learn. The flat periods, that's us learning. And the up periods, that's us reaping the rewards of that. So for the growth chart of 2024, I think the first thing that really put a dent in our growth was our backend. We'd just reached this scale. From day one, we'd built on top of GCP, which is Google's cloud platform. And they were fantastic.
We used them when we had one daily active user, and they worked pretty well all the way up till we had about 500,000. It was never the cheapest, but from an engineering perspective, man, that thing scaled insanely well. Like, not Vertex? Not Vertex. Like GKE, that kind of stuff? We used Firebase. So we used Firebase. I'm pretty sure we're the biggest user ever on Firebase. That's expensive. Yeah, we had calls with engineers, and they're like, we wouldn't recommend using this product beyond this point, and you're 3x over that. So we pushed Google to their absolute limits. You know, it was fantastic for us, because we could focus on the AI, on just adding as much value as possible. But then what happened was, after 500,000, the way we were using it, it just wouldn't scale any further. And so we had a really, really painful, at least three-month period, as we migrated between different services, figuring out, like, what requests do we want to keep on Firebase, and which ones do we want to move onto something else? And then making mistakes and learning things the hard way. And after about three months, we got that right, and we were then able to scale to 1.5 million DAU without any further issues from GCP. But what happens is, if you have an outage, new users who come to your app experience a dysfunctional app, and then they exit. And the key metrics that the app stores track are going to be things like retention rates, money spent, and the star rating that they give you. In the app store. In the app store, yeah. Tyranny. So if you're ranked top 50 in entertainment, you're going to acquire a certain rate of users organically. If they come in and have a bad experience, it's going to tank where you're positioned in the algorithm.
And then it can take a long time to earn your way back up, at least if you wanted to do it organically. If you throw money at it, you can jump to the top. And I could talk about that. But broadly speaking, if we look at 2024, the first kink in the graph was outages due to hitting 500k DAU. The backend didn't want to scale past that, so we just had to do the engineering and build through it. Okay, so we built through that, and then we get a little bit of growth. And so, okay, that's feeling a little bit good. The next thing, I'm not going to lie, I have a feeling it's when Character AI got... I was thinking. I think so. So the Character AI team fundamentally got acquired by Google. And I don't know what they changed in their business. I don't know if they dialed down that ad spend. The product doesn't change, right? The product just is what it is. I don't think so. Yeah, I think the product is what it is. It's like maintenance mode. Yes. I think the issue is, and some people may think this is an obvious fact, that running a business can be very competitive, right? Because other businesses can see what you're doing, and they can imitate you. And then there's this question of: if you've got one company that's spending $100,000 a day on advertising, and you've got another company that's spending zero, if you consider market share, and the new users which are entering the market, the guy that's spending $100,000 a day is going to be getting 90% of those new users. And so I have a suspicion that when the founders of Character AI left, they dialed down their spending on user acquisition. And I think that gave oxygen to the other apps, and so Chai was able to start growing again in a really healthy fashion. That's the second thing. I think the third thing is we've really built a great data flywheel.
The AI team sort of perfected their flywheel, I would say, at the end of Q2. And I could speak about that at length, but fundamentally, the way I would describe it is: when you're building anything in life, you need to be able to evaluate it, and through evaluation, you can iterate. We could look at benchmarks, and we could talk about the issues with benchmarks, why they may not generalize as well as one would hope, and the challenges of working with them. But something that works incredibly well is getting feedback from humans. And so we built this thing where anyone can submit a model to our developer backend, and it gets put in front of 5,000 users, and the users can rate it. And we can then have a really accurate ranking of which models users are finding more engaging or more entertaining. And it's at the point now where we evaluate between 20 and 50 models, LLMs, every single day, right? So even though we've only got a team of, say, five AI researchers, they're able to iterate through a huge quantity of LLMs. Our team ships, let's just say, a minimum of 100 LLMs a week that we're able to iterate through. Before that moment in time, we might iterate through three a week; there was a time when even doing five a month was a challenge, right? The old feedback loop was: let's launch these three models, let's do an A/B test, let's assign different cohorts, let's wait 30 days to see what the day-30 retention is. If you're doing an app, that's A/B testing 101: do a 30-day retention test, assign different treatments to different cohorts, and come back in 30 days. That's insanely slow. It's just too slow. And so we were able to get that 30-day feedback loop all the way down to something like three hours.
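The ranking step of such a flywheel can be sketched as a simple approval-rate aggregation: each candidate model is served to a slice of users, ratings are collected, and models are sorted by approval. All data and names here are invented; the real system presumably uses richer engagement signals than a binary rating.

```python
from collections import defaultdict
from typing import Dict, List, Tuple

def rank_models(ratings: List[Tuple[str, int]]) -> List[Tuple[str, float]]:
    """ratings: (model_id, 1 = thumbs-up, 0 = thumbs-down) pairs.
    Returns (model_id, approval_rate) pairs, best first."""
    ups: Dict[str, int] = defaultdict(int)
    totals: Dict[str, int] = defaultdict(int)
    for model_id, up in ratings:
        ups[model_id] += up
        totals[model_id] += 1
    return sorted(((m, ups[m] / totals[m]) for m in totals),
                  key=lambda pair: pair[1], reverse=True)

ratings = [("model_a", 1), ("model_a", 0), ("model_b", 1), ("model_b", 1)]
print(rank_models(ratings))  # model_b first, with approval 1.0
```

Because ratings arrive within a session rather than after 30 days, a leaderboard like this can refresh in hours, which is what collapses the feedback loop described above.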
And when we did that, we could really, really perfect techniques like DPO, fine-tuning, prompt engineering, blending, rejection sampling, training a reward model, right, really successfully, like boom, boom, boom, boom, boom. And so in Q3 and Q4, the amount of AI improvement we got was astounding. It was getting to the point where I thought, how much more edge is there to be had here? But the team just could keep going and going. That was number three for the inflection points.swyx [00:34:53]: There's a fourth?William [00:34:54]: The important thing about the third one is, if you go on our Reddit or you talk to users of the AI, there's a clear date. It's somewhere in October or something. The users flipped. Before October, the users would say Character AI is better than you, for the most part. Then from October onwards, they would say, wow, you guys are better than Character AI. And that was a really clear positive signal that we'd sort of done it. And I think you can't cheat consumers. You can't trick them. You can't b******t them. They know, right? If you're going to spend 90 minutes on a platform, and with apps, the barrier to switching is pretty low. Like, you can try Character AI for a day. If you get bored, you can try Chai. If you get bored of Chai, you can go back to Character AI. So the loyalty is not strong, right? What keeps users on the app is the experience. If you deliver a better experience, they're going to stay, and they can tell. So the fourth one was, we were fortunate enough to make a hire. We hired one really talented engineer, and then he said, oh, at my last company, we had a head of growth. He was really, really good. He was the head of growth for ByteDance for two years. Would you like to speak to him? And I was like, yes.
Yes, I think I would. And so I spoke to him, and he just blew me away with what he knew about user acquisition. It was like a 3D chess sort of thing. He knows as much about user acquisition as I know about AI.
swyx [00:36:21]: Like ByteDance as in TikTok US?
William [00:36:26]: Yes. Not ByteDance as in the other stuff. He was interviewing us as we were interviewing him, right. He had his pick of options. Yeah, exactly. And so he was looking at our metrics, and I saw him get really excited when he said, guys, you've got a million daily active users and you've done no advertising. I said, correct. And he was like, that's unheard of. I've never heard of anyone doing that. And then he kept looking at our metrics, and he was like, if you've got all of this organically, then if you start spending money, this is going to be very exciting. I was like, let's give it a go. So then he came in and we've just started ramping up the user acquisition. So that looks like spending money: we started spending $20,000 a day, and it looked very promising. Right now we're spending $40,000 a day on user acquisition. That's still only half of what, say, Character AI or Talkie may be spending. But from that, we went from growing at a rate of maybe 2x a year to growing at a rate of 3x a year. So I'm evolving more and more toward a Silicon Valley style of hyper-growth: you build something decent, and then you can
swyx [00:37:33]: slap on a huge... You did the important thing, you did the product first.
William [00:37:36]: Of course, but then you can slap on the rocket or the jet engine or something, which is just this cash-in: you pour in as much cash as you can, you buy a lot of ads, and your growth is faster.
swyx [00:37:48]: I'm just kind of curious what's working right now versus what surprisingly
William [00:37:52]: doesn't work.
Oh, there's a long, long list of surprising stuff that doesn't work. Yeah. The most surprising thing about what doesn't work is that almost everything doesn't work. That's what's surprising. And I'll give you an example. So a year and a half ago, we were super excited by audio. I was like, audio is going to be the next killer feature, we have to get it in the app, and I want to be the first. So in everything Chai does, I want us to be the first. We may not be the company that's strongest at execution, but we can always be the
swyx [00:38:22]: most innovative. Interesting. Right? You're pretty strong at execution.
William [00:38:26]: We're much stronger now. A lot of the reason we're here is because we were first. If we launched today, it'd be so hard to get the traction. Because to get the flywheel, to get the users, you have to build a product people are excited about. If you're first, people are naturally excited about it. But if you're fifth or tenth, man, you've got to be
swyx [00:38:46]: insanely good at execution. So you were first with voice?
William [00:38:51]: We were first. I only know that when Character launched voice, they launched it, I think, at least nine months after us. Okay. But the team worked so hard for it. At the time we did it, latency was a huge problem. Cost was a huge problem. Getting the right quality of the voice was a huge problem. Then there's the user interface and getting the right user experience, because you don't just want it to start blurting out, right? You want to be able to activate it, but then not have to keep pressing a button every single time. There's a lot that goes into getting a really smooth audio experience. So we went ahead, we invested the three months, we built it all. And then when we did the A/B test, there was no change in any of the numbers.
And I was like, this can't be right, there must be a bug. And we spent a week just checking everything, checking again and again. And it was like, the users just did not care. Something like only 10 or 15% of users even clicked the button to engage the audio, and they would only use it for 10 or 15% of the time. So if you do the math: if it's something that one in seven people use for one seventh of their time, you've changed like 2% of the experience. So even if that 2% of the time is insanely good, it doesn't translate into much when you look at the retention, the engagement, and the monetization rates. So audio did not have a big impact. I'm pretty big on audio. Yeah, I like it too. But a lot of the stuff which I do: you can have a theory, and you have to resist it. Exactly, exactly. So I think if you want to make audio work, it has to be a unique, compelling, exciting experience that they can't have anywhere else.
swyx [00:40:37]: It could be your models, which just weren't good enough.
William [00:40:39]: No, no, no, they were great. Oh yeah, they were very good. It was kind of like, if you listen to an Audible or Kindle or something, you just hear this voice, and you don't go, wow, this is special, right? It's a convenience thing. But the idea is that Chai has to be the only platform where you can get it. Say you have a Mr. Beast, and YouTube is the only platform he uses: then to watch a Mr. Beast video, the most engaging, fun video that you want to watch, you'll go to YouTube. And so for audio, you can't just put the audio on there and have people go, oh yeah, it's like 2% better, or have 5% of users think it's 20% better, right?
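The back-of-the-envelope math above (one in seven users, one seventh of their time) works out like this; the numbers are the rough figures quoted in the conversation, not exact analytics.

```python
# Feature-impact math from the audio A/B test: roughly one in seven users
# ever engage the feature, and they use it for about one seventh of their
# session time, so it touches only ~2% of the total experience.
adoption = 1 / 7      # share of users who click the audio button
usage = 1 / 7         # share of their time they actually use it
share_of_experience = adoption * usage
print(f"audio touches {share_of_experience:.1%} of the experience")
```

Even a large quality gain inside that 2% slice barely moves aggregate retention, engagement, or monetization, which is why the A/B test showed no change.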
It has to be something that the majority of people, for the majority of the experience, go, wow, this is a big deal. Those are the features you need to be shipping. If it's not going to appeal to the majority of people for the majority of the experience, and it's not a big deal, it's not going to move you. Cool. So you killed it. I don't see it anymore. Yep. So I love this. It's kind of cheesy, I guess, but the longer I've been working at Chai, and I think the team agrees with this, all the platitudes, at least I thought they were platitudes, that you get from the Steve Jobses, like build something insanely great, or be maniacally focused, or the most important thing is what you say no to, what you choose not to work on: all of these lessons are just painfully true. They're painfully true. So now everything I say, I'm either quoting Steve Jobs or Zuckerberg. I'm like, guys, move fast and break things.
swyx [00:42:10]: You've jumped the Apollo to cool it now.
William [00:42:12]: Yeah, everything they said is so, so true. The turtleneck. Yeah, yeah, yeah. Everything is so true.
swyx [00:42:18]: This last question on my side, and then I want to pass it to Alessio, is on multimodality in general. This actually comes from Justine Moore from a16z, who's a friend of ours. A lot of people are trying to do voice, image, and video for AI companions. Yes. You just said voice didn't work. Yep. What would make you revisit?
William [00:42:36]: So Steve Jobs was very, very clear on this. There's a habit among engineers who, once they've got some cool technology, want to find a way to package up the cool technology and sell it to consumers. That does not work. So you're free to try and build a startup where you've got your cool tech and you want to find someone to sell it to; that's not what we do at Chai. At Chai, we start with the consumer.
What does the consumer want? What is their problem? And how do we solve it? So right now, the number one problem for the users, it's not the audio. It's not the image generation either. The number one problem for users in AI is this: all the AI is being generated by middle-aged men in Silicon Valley, right? That's all the content. You're interacting with this AI, you're speaking to it for 90 minutes on average, and it's being trained by middle-aged men. They're the ones deciding, oh, what should the AI say in this situation? What's funny? What's cool? What's boring? What's entertaining? That's not the way it should be. The way it should be is that the users should be creating the AI. And so the way I speak about it is this: at Chai, we have this AI engine on top of which sits a thin layer of UGC. The thin layer of UGC is absolutely essential, but right now it's just prompts. It's just an image, it's just a name. We've done 1% of what we could do. So we need to keep thickening up that layer of UGC. It must be the case that the users can train the AI. And if reinforcement learning is powerful and important, they have to be able to do that. And so, as I say to the team: just as Mr. Beast is able to spend 100 million a year, or whatever it is, on his production company, with a team building the content that he then shares on the YouTube platform, until there's a team earning 100 million a year, or spending 100 million, on the content they're producing for the Chai platform, we're not finished, right? So that's the problem. That's what we're excited to build.
And getting too caught up in the tech, I think, is a fool's errand. It does not work.
Alessio [00:44:52]: As an aside, I saw the Beast Games thing on Amazon Prime. It's not doing well. And I'm
swyx [00:44:56]: curious. I mean, the Rotten Tomatoes score sucks, but the audience rating is high.
Alessio [00:45:02]: But it's not in the top 10. I saw it dropped off of the... Oh, okay. Yeah, that one I don't know. I'm curious, you know, it's similar content but a different platform. And then going back to some of what you were saying: people come to Chai
William [00:45:13]: expecting some type of content. Yeah, I think something that's interesting to discuss is moats. What is the moat? If you look at a platform like YouTube, the moat, I think, is really in the ecosystem. And the ecosystem is comprised of the content creators, the users, the consumers, and then the algorithms. This creates a sort of flywheel, where the algorithms are able to be trained on the users and the users' data, and the recommender systems can then feed information to the content creators. So Mr. Beast knows which thumbnail does the best. He knows the first 10 seconds of the video have to be a particular way. And so his content is super optimized for the YouTube platform. That's why it doesn't do well on Amazon. How many videos has he created on the YouTube platform? Thousands, tens of thousands, I guess. If he wants to do well on Amazon, he needs to get those iterations in on Amazon.
So at Chai, I think it's all about how we can get the most compelling, rich user-generated content and stick that on top of the AI engine and the recommender systems, such that we get this beautiful data flywheel: more users, better recommendations, more creators, more content, more users.
Alessio [00:46:34]: You mentioned the algorithm. You have this idea of the Chaiverse on Chai, and you have your own kind of LMSYS-like ELO system. Yeah, what are the things that your models optimize for, that your users optimize for? And maybe talk about how you built it, how people submit models.
William [00:46:49]: So Chaiverse is what I would describe as a developer platform. More often when we're speaking about Chai, we're thinking about the Chai app, and the Chai app is really this product for consumers. Consumers can come on the Chai app, interact with our AI, and interact with other UGC. And it's really just these kinds of bots, a thin layer of UGC. Our mission is not to have just a very thin layer of UGC; our mission is to have as much UGC as possible. I don't want only people at Chai training the AI. I want everyone, not just middle-aged men, building the AI, as many people building the AI as possible. Okay, so what we built was Chaiverse. And Chaiverse is kind of like a prototype, is the way to think about it. It started with this observation: how many models get submitted to Hugging Face a day? It's hundreds, right? So there are hundreds of LLMs submitted each day. Now consider what it takes to build an LLM. It takes a lot of work, actually. Someone devoted several hours of compute, several hours of their time, prepared a data set, launched it, ran it, evaluated it, submitted it, right?
So there's a lot of work going into that. So what we did was say: well, why can't we host their models for them and serve them to users? What would that look like? The first issue is, how do you know if a model is good or not? We don't want to serve users the crappy models, right? So what we do, and I love the LMSYS style, I think it's really cool, is really simple and very intuitive: you simply present the users with two completions. You say, this is from model A, this is from model B: which is better? And so if someone submits a model to Chaiverse, we spin up a GPU, we download the model, we host that model on the GPU, and we start routing traffic to it. We think it takes about 5,000 completions to get an accurate signal; that's roughly what LMSYS does. And from that, we're able to get an accurate ranking of which models people are finding entertaining and which models are not. If you look at the bottom 80%, they'll suck. You can just disregard them. They totally suck. Then when you get to the top 20%, you know you've got a decent model, but you can break it down into more nuance. There might be one that's really descriptive. There might be one that's got a lot of personality to it. There might be one that's really illogical. Then the question is, what do you do with these top models? From there, you can do more sophisticated things. You can try a routing thing, where for a given user request you try to predict which of these N models the user will enjoy the most. That turns out to be pretty expensive and not a huge source of edge or improvement.
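Pairwise "which is better?" votes like the ones described above are typically turned into a ranking with Elo ratings, as LMSYS does. A minimal sketch, assuming a standard logistic Elo update; the K-factor and the 1,000-point baseline are illustrative, not Chai's actual parameters.

```python
def expected_score(r_a, r_b):
    """Probability that model A's completion is preferred, under the Elo model."""
    return 1 / (1 + 10 ** ((r_b - r_a) / 400))

def update_elo(r_a, r_b, a_won, k=32):
    """Update both ratings after one pairwise 'which is better?' vote."""
    e_a = expected_score(r_a, r_b)
    delta = k * ((1.0 if a_won else 0.0) - e_a)
    # Zero-sum: whatever A gains, B loses.
    return r_a + delta, r_b - delta

# Two freshly submitted models start at the illustrative baseline of 1000;
# one win for A nudges A up and B down by the same amount.
r_a, r_b = update_elo(1000, 1000, a_won=True)
```

Run over roughly 5,000 such votes per model, the ratings converge to a stable ranking, which is why the bottom 80% can be discarded with confidence.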
Something that we love to do at Chai is blending. The simplest way to think about it is this: you're going to pretty quickly see that you've got one model that's really smart and one model that's really funny. How do you give the user an experience that is both smart and funny? Well, for 50% of the requests you serve them the smart model, and for 50% of the requests you serve them the funny model. Just a random 50%? Just a random, yeah. And then... That's blending? That's blending. You can do more sophisticated things on top of that, as in all things in life, but that's the 80/20 solution: if you just do that, you get a pretty powerful effect out of the gate. Random number generator. I think it's the robustness of randomness. Random is a very powerful and very robust optimization technique; you can explore a lot of the space very efficiently. There's one thing that's really, really important to share, and this is the most exciting thing for me: after you do the ranking, you get an ELO score, and you can track a user from the first date they submit a model to Chaiverse. They almost always get a terrible ELO, right? Say on their first submission they get an ELO of 1,100 or 1,000 or something. And you can see that they iterate and iterate and iterate, and it will be like, no improvement, no improvement, no improvement, and then boom. Do you give them any data, or do they have to come up with this themselves? We do, we do. We try to strike a balance between giving them data that's very useful and being compliant with GDPR, which means you have to work very hard to preserve the privacy of the users of your app. So we try to give them as much signal as possible, to be helpful. The minimum is we're just going to give you a score, right? That's the minimum.
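The 50/50 blending described above can be sketched as a per-request coin flip between two specialist models. This is a hypothetical illustration of the idea; the function and model names are stand-ins, not Chai's actual code.

```python
import random

def blended_reply(message, smart_model, funny_model, p_smart=0.5):
    """Serve each request from one of two specialist models, chosen at random.

    No learned router, no per-user prediction: just a coin flip per request.
    This is the 80/20 version of blending; the model arguments are any
    callables that map a message to a reply.
    """
    model = smart_model if random.random() < p_smart else funny_model
    return model(message)
```

Over a conversation of many turns, the user sees a mix of both models' strengths, which is why the simple random version already captures most of the benefit.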
But even with that alone, people can optimize a score pretty well, because they're able to come up with theories. Submit it, does it work? No. A new theory, does it work? No. And then boom, as soon as they figure something out, they keep it, and then they iterate, and then boom,
Alessio [00:51:46]: they figure something out, and they keep it. Last year, you had this post on your blog about crowdsourcing the path to the 10-trillion-parameter AGI, and you called it a mixture of experts recommender. Yep. Any insights?
William [00:51:58]: Updated thoughts, 12 months later? I think the timeline for AGI has certainly been pushed out, right? Now, maybe I'm a controversial person, I don't know, I just think... You don't believe in scaling laws; you think AGI is further away. I think it's an S-curve. I think everything's an S-curve. And I think that the models have proven to be far worse at reasoning than people thought. Whenever I hear people talk about LLMs as reasoning engines, I sort of cringe a bit. I don't think that's what they are. I think of them more as a simulator. They get trained to predict the next most likely token. It's like a physics simulation engine: you get these games where you can construct a bridge and drop a car down, and it predicts what should happen. And that's really what LLMs are doing. It's not so much that they're reasoning; it's more that they're just doing the most likely thing. So fundamentally, the ability for people to add in intelligence, I think, is very limited. What most people would consider intelligence, I think, is not a crowdsourcing problem, right? Wikipedia crowdsources knowledge. It doesn't crowdsource intelligence. It's a subtle distinction. AI is fantastic at knowledge. I think it's weak at intelligence.
And it's easy to conflate the two, because if you ask it a question, say, who was the seventh president of the United States, and it gives you the correct answer, and I'd say, well, I don't know the answer to that, you can conflate that with intelligence. But really, that's a question of knowledge. And knowledge is really about saying: how can I store all of this information, and how can I retrieve something that's relevant? They're fantastic at that. They're fantastic at storing knowledge and retrieving the relevant knowledge. They're superior to humans in that regard. And so I think we need to come up with a new word. How does one describe an AI that contains more knowledge than any individual human and is more accessible than any individual human? That's a very powerful thing. That's super
swyx [00:54:07]: powerful. But what words do we use to describe that? We had a previous guest from Exa AI, which does search, and he tried to coin super knowledge as the opposite of super intelligence.
William [00:54:20]: Exactly. I think super knowledge is a more accurate word for it.
swyx [00:54:24]: You can store more things than any human can.
William [00:54:26]: And you can retrieve it better than any human can as well. And I think it's those two things combined that's special. I think that thing will exist. That thing can be built. And I think you can start with something that's entertaining and fun. I often think it's going to be a 20-year journey, and we're in year four. It's like the web, and this is 1998 or something. You've got a long, long way to go before the Amazon.coms are these huge, multi-trillion-dollar businesses that every single person uses every day. And so AI today is very simplistic.
And it's fundamentally the way we're using it, the flywheels, and this ability for everyone to contribute to it, that will really magnify the value it brings. Right now, I think it's a bit sad. You have big labs, and I'm going to pick on OpenAI: they go to these human labelers and say, we're going to pay you to label this subset of questions so we can get a really high-quality data set, and then we're going to use our own computers that are really powerful. And that's kind of the thing. For me, it's so much like Encyclopedia Britannica. It's insane. For all the people that were interested in blockchain: this is what needs to be decentralized. Because if you distribute it, people can generate way more data in a distributed fashion, way more, right? You need the incentive. Yeah, of course. But that's the exciting thing about Wikipedia: it was this understanding about incentives, that you don't need money to incentivize people. You don't need dogecoins. No. Sometimes people get the satisfaction from
As 2025 begins in earnest, we have some phenomenal news for you! Everybody's favorite dewy salamander, Mark Zuckerberg, is FINALLY going to rebuild trust in our most hallowed institutions, enrich your social life, and help stave off those pesky feelings of loneliness, isolation, and general ennui. How will he accomplish this, you ask? By flooding all of Meta's social media platforms with AI bots, of course! This week, Zak and Aaron take a break from discussing the games industry to chat about why dumping a bunch of artificial intelligence all over Facebook and Instagram is, in fact, a terrible idea. From eroding people's ability to trust anything to creating even more division during a tumultuous time, Meta's big bet on AI profiles stands to cause magnitudes more problems than it will solve. Content Warning: Self-harm - 32:08 - 36:47 Listen to Beach Girl on Spotify Check out edouggieart on Etsy Check out even more edouggieart on Instagram
This is the AI Update from January 21, 2025, covering, among other topics: Trump revokes the executive order on AI safety; more and more US teenagers are using ChatGPT for schoolwork; DeepSeek-R1 matches the reasoning performance of OpenAI's o1 in benchmarks; and Character AI releases games. Links to all of today's topics can be found here: https://heise.de/-10250549 https://www.heise.de/thema/KI-Update https://pro.heise.de/ki/ https://www.heise.de/newsletter/anmeldung.html?id=ki-update https://www.heise.de/thema/Kuenstliche-Intelligenz https://the-decoder.de/ https://www.heiseplus.de/podcast https://www.ct.de/ki
In this week's episode, Macey and Ashlee play CharacterAI with STAR WARS characters! They chat with fan favorites like Hunter, Anakin, and Leia Organa in what might be their most chaotic and craziest episode yet. OUR SOCIALS: https://linktr.ee/thegeekygalspodcast FOR BUSINESS INQUIRIES: thegeekygalspodcast@gmail.com EPISODE TIMESTAMPS: 00:00:00 intro 00:05:38 reading your comments 00:07:25 a chat on ai 00:09:36 *SPOILERS* thoughts on skeleton crew 00:13:58 *SPOILERS* macey's tangent on jod na nawood (skeleton crew) 00:31:20 playing character ai 00:34:07 hunter 00:39:56 omega 00:43:45 ezra 00:48:54 crosshair 00:57:36 leia 01:01:06 han solo 01:08:43 x-men bonus: quicksilver 01:19:29 outro
Kevin Surace, Chairman and CTO of Appvance.ai, joins this episode of AI, Government, and the Future to delve into the impact of AI on various industries, the future of employment, and the challenges of trust in AI systems. They also discuss the potential of generative AI, how to address the technology's risks to ensure safe use, and the government's role in certifying trust in AI.
Mina Narayanan, a Research Analyst of AI Assessment at the Center for Security and Emerging Technology, joins this episode of AI, Government, and the Future to discuss the challenges of assessing AI systems and managing their risk. They explore the evolving landscape of AI assessment and the need for standards and testing to address bias and risks. Mina also touches on the role of industry, funding, and coordination between branches of government.
Two anonymous teenagers from Texas and their families are suing Character AI, claiming the chatbot encourages self-harm and violence against family members. ABC News Correspondent Jim Ryan explains this technology and this lawsuit.
Max Romanik and Marc Leh, Principal Consultants at Corner Alliance, join this episode of AI, Government, and the Future by Alan Pentz to explore the exciting role of AI in government. They discuss how AI is being integrated into various sectors, such as R&D and homeland security, to identify potential threats and improve efficiency, the government's role in supporting high-risk research and standardizing technology, the potential for government data sets in training AI models, and the potential impact of AI on the consulting industry.
Cleo file management tools have a vulnerability being leveraged to attack customers; Character AI told a kid to murder his parents! My Recovery HD is full; replacement battery for an older phone; was it Google or not asking me for my password, was I hacked?; iPhone update for TXT messages RCS; North Korean workers scammed companies out of $80+ million, do you know who you are hiring?; Booking.com issue lets bad guys see where you're going; how do I transfer my data to a new PC? Txt WTCI to 31333 to support the Salvation Army and the WTIC Holiday Store!
In this episode of Tom Bilyeu's Impact Theory, Tom takes a deep dive into the pivotal issues shaping the future of society. He begins with a nuanced exploration of Daniel Penny's acquittal in a controversial legal case, unpacking its implications for justice, public perception, and the shifting dynamics of societal trust. Shifting gears, Tom analyzes Trump's bold invitation to Xi Jinping, revealing the strategic implications of this geopolitical move for global power and America's future. Tom also discusses the transformative potential of Bitcoin, projecting its rise to $100K and what it could mean for decentralized wealth and individual autonomy. The episode concludes with a thought-provoking examination of the ethical challenges surrounding Character AI, highlighting how advancements in artificial intelligence are forcing humanity to redefine morality and accountability in a tech-driven age. Packed with insight, this episode offers actionable strategies for navigating a rapidly changing world. SHOWNOTES [00:02:15] - Introduction: The societal stakes of Daniel Penny's acquittal and the broader implications for public trust in justice. [00:12:45] - Breaking down the controversy: Public reactions and the media's role in shaping narratives around vigilante justice. [00:18:40] - Trump's strategic play: What Xi Jinping's invitation reveals about geopolitical shifts and America's global positioning. [00:29:15] - Bitcoin's revolutionary potential: How decentralized wealth could reshape personal freedom and economic systems. [00:41:30] - The future of Bitcoin: Predictions for Bitcoin's rise to $100K and its impact on global economics. [00:50:10] - AI ethics in focus: The Character AI controversy and the moral dilemmas posed by advanced artificial intelligence. [01:03:25] - Practical applications of AI: How AI innovations could reshape industries and daily life. [01:08:45] - Closing thoughts: Actionable steps to prepare for societal and technological transformation. 
CHECK OUT OUR SPONSORS Range Rover: Explore the Range Rover Sport at https://landroverUSA.com Rosetta Stone: Check out Rosetta Stone and use my code TODAY for a great deal: https://www.rosettastone.com Miro: Bring your teams to Miro's revolutionary Innovation Workspace and be faster from idea to outcome at https://miro.com Shopify: Sign up for your one-dollar-per-month trial period at https://shopify.com/impact Found Banking: Stop getting lost in countless finance apps and try Found for free at https://found.com/impact Momentous: Shop now at https://livemomentous.com and use code IMPACT for 20% off your new Momentous routine Factor: Get 50% off your first box plus 20% off your next month while your subscription is active at https://factormeals.com/impacttheory50 with code impacttheory50 StopBox: Get 10% off, plus Buy One Get One Free for the StopBox Pro with code IMPACT at https://stopboxusa.com What's up, everybody? It's Tom Bilyeu here: If you want my help... STARTING a business: join me here at ZERO TO FOUNDER SCALING a business: see if you qualify here. Get my battle-tested strategies and insights delivered weekly to your inbox: sign up here. If you're serious about leveling up your life, I urge you to check out my new podcast, Tom Bilyeu's Mindset Playbook —a goldmine of my most impactful episodes on mindset, business, and health. Trust me, your future self will thank you. Join me live on my Twitch stream. I'm live daily from 6:30 to 8:30 am PT at www.twitch.tv/tombilyeu LISTEN TO IMPACT THEORY AD FREE + BONUS EPISODES on APPLE PODCASTS: apple.co/impacttheory FOLLOW TOM: Instagram: https://www.instagram.com/tombilyeu/ Tik Tok: https://www.tiktok.com/@tombilyeu?lang=en Twitter: https://twitter.com/tombilyeu YouTube: https://www.youtube.com/@TomBilyeu Learn more about your ad choices. Visit megaphone.fm/adchoices
The Biden administration is in its final 40 days and trying to make progress on Ukraine, Gaza, student loans, climate and immigration. Parents are suing Character.AI after its chatbots allegedly encouraged teens to hurt themselves or others. What’s making the tech push out this kind of language? “The Bibi Files” is a new documentary that features never-before-seen leaked footage of police interrogations of the Israeli Prime Minister, his family, and his associates. Hanukkah foods often focus on frying, so latkes are at the center of the table. But when it comes to the sweet side of things, jelly donuts or “sufganiyot” are a must-eat.
In this episode of the Celebrate Kids podcast, Dr. Kathy addresses the challenges parents face in guiding their children through a culture increasingly influenced by artificial intelligence, such as chatbots and AI-driven technologies. She emphasizes the importance of understanding one's identity and the truth found in the Bible to discern what is good and harmful. Dr. Kathy warns against the deceptive allure of these technologies, which can lead children to believe they can achieve enlightenment akin to being like God. The episode offers practical insights and strategies for parents to help their kids navigate these cultural pressures and build resilience against misleading influences. Tune in to learn how to effectively guide your children in this digital age.
Mama Mia, Luigi ate a mushroom and killed a guy! We dive into the lore surrounding the UHC CEO killer and a couple of additional facts the media doesn't mention. Daniel Penny is a free man probably due to racism. IDK, you'll have to listen to Joy Reid for their official take. Syria has fallen quickly to a group that was associated with Al Qaeda, oddly championed and aided by the US and Israel. We look into the HTS leader Abu Mohammed al-Julani and his interesting past. Who let that dog out? Perhaps John McCain knows. We have officially reached peak AI, as the money dries up. Only AI-agra could raise this flaccid industry. We circle back on the Character AI story and talk more about the demons in the AI. Finally, Trump is releasing a new scent. The speculation as to what it might smell like will not disappoint. Praetor Designation: Gerkenator became the first official Praetor with his 5 Benji dono. Please let us know the geographic location you'd like your Praetorship to cover! ART: SurveyorJose comes in with his 5th win in a row. Taking up residency on the Offensive! This piece plays on the concept that the pasta glock assassin was OTO! Associate Executive Producers of MMO #149: Hempress Emily M. and Tommy of the Nephilim Murder Crew Podcast Fiat Fun Coupon Donators: Sharky Shark Sam S. of Bourblandia and Beargrass Beer & Boobs Trashman, Email King Episode 149 Luigi Mangione CNN Arraignment Daniel Penny NewsNation Not Guilty Legal Strategy Syria Dugin Big War NBC Report on Fall of Assad Regime, US Involvement Who Is Abu Mohammed al-Julani NBC Report on US Airstrikes NewsNation Israel Airstrikes on Syria Pivot from AI CNBC AI Slowdown Google Quantum Tech PBS Surveilled Democracy Now Intro Character AI NPR Report Boobs N Beer Listener Note: Hey guys, I just wanted to comment on that kid's weird AI girlfriend. Having read the transcripts of the chat, it was clear that the 14-year-old boy had intentionally disabled all the safety features.
On top of that, it's very clear that the AI character was still intent on stopping him from doing it. However, he still managed to weasel-word it to get it (the AI) to encourage him to follow through with self-termination.

Article: Loab AI Art
Hebrew Loab
Boulee
JayZ Allegations
Trump Fragrance
What happens when our worst fears around AI come true? For Megan Garcia, that's already happened. In February, after spending months interacting with chatbots created by Character.AI, her 14-year-old son Sewell took his own life. Garcia blames Character.AI, and she is suing the company and Google, which she believes significantly contributed to Character.AI's alleged wrongdoing. Kara interviews Garcia and Meetali Jain, one of her lawyers and the founder of the Tech Justice Law Project, and they discuss the allegations Megan has made against Character.AI and Google.

When reached for comment, a spokesperson from Character.AI responded with the following statement: We do not comment on pending litigation. We are heartbroken by the tragic loss of one of our users and want to express our deepest condolences to the family. We take the safety of our users very seriously, and our dedicated Trust and Safety team has worked to implement new safety features over the past seven months, including a pop-up directing users to the National Suicide Prevention Lifeline that is triggered by terms of self-harm or suicidal ideation. Our goal is to provide a creative space that is engaging, immersive, and safe. To achieve this, we are creating a fundamentally different experience for users under 18 that prioritizes safety, including reducing the likelihood of encountering sensitive or suggestive content, while preserving their ability to use the platform. As we continue to invest in the platform and the user experience, we are introducing new safety features in addition to the tools already in place that restrict the model and filter the content provided to the user. These include improved detection, response and intervention related to user inputs that violate our Terms or Community Guidelines, as well as a time-spent notification. For more information on these new features as well as other safety and IP moderation updates to the platform, please refer to the Character.AI blog.
When reached for comment, Google spokesperson Jose Castaneda responded with the following statement: Our hearts go out to the family during this unimaginably difficult time. Just to clarify, Google and Character AI are completely separate, unrelated companies and Google has never had a role in designing or managing their AI model or technologies, nor have we used them in our products. User safety is a top concern of ours, and that's why – as has been widely reported – we've taken a cautious and responsible approach to developing and rolling out our AI products, with rigorous testing and safety processes. Questions? Comments? Email us at on@voxmedia.com or find us on Instagram and TikTok @onwithkaraswisher Learn more about your ad choices. Visit podcastchoices.com/adchoices
How do you tell the same story with consistency across hundreds of visuals?

Storytelling has evolved from spoken tales to AI-generated videos and photorealistic characters—but one challenge remains: consistency. It's easy to generate impressive individual frames with tools like Midjourney, but what if you need to maintain the same character across multiple scenes, moods, and poses?

In this episode, we welcome Diana Zdybel, co-founder of Consistent Character AI and Neolemon, to break down the art of consistent character creation with AI. If you've ever felt stuck generating high-quality visuals that align perfectly with your business brand or internal messaging, this session will change your game.

Through tools like Consistent Character GPT, Diana shares how to maintain visual harmony across AI-generated characters and illustrations. Whether you're an entrepreneur building pitch decks or a marketing team creating ads, Diana's insights will help you tell compelling, visual stories—efficiently and at scale.

In this session, you'll discover:
- How AI revolutionizes storytelling for business communication.
- Why consistency in visuals matters for internal and external messaging.
- Practical tips for creating characters with Consistent Character AI.
- How tools like Midjourney and SegMind can streamline your workflow.
- A walkthrough of the most common challenges in AI-generated visuals—and how to overcome them.
- Bonus tools and tricks to animate avatars and characters using HeyGen and Canva.

With a background in AI design and years of experience navigating the challenges of character creation, Diana helps businesses streamline their creative workflows.
Connect with her on LinkedIn to stay updated on her latest projects: Diana Zdybel.
Find more about Consistent Character AI here: https://linktr.ee/neolemon

About Leveraging AI
The Ultimate AI Course for Business People: https://multiplai.ai/ai-course/
YouTube Full Episodes: https://www.youtube.com/@Multiplai_AI/
Connect with Isar Meitis: https://www.linkedin.com/in/isarmeitis/
Free AI Consultation: https://multiplai.ai/book-a-call/
Join our Live Sessions, AI Hangouts and newsletter: https://services.multiplai.ai/events

If you've enjoyed or benefited from some of the insights of this episode, leave us a five-star review on your favorite podcast platform, and let us know what you learned, found helpful, or liked most about this show!
Send Everyday AI and Jordan a text message

This is all bonkers. Google's Gemini 2.0 and ChatGPT-5 could both drop in December. Are we about to see an AI showdown? OpenAI's AGI chief just quit, warning: we're not prepared for what's coming. Congress is stalling on funding the U.S. AI Safety Institute. If it collapses, who's keeping AI in check? Here's this week's AI news that matters!

Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Ask Jordan questions on AI
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn

Topics Covered in This Episode:
1. Google's Project Jarvis
2. OpenAI's upcoming AI, Orion
3. US AI Safety Institute
4. Disney's upcoming AI initiative
5. Release of Anthropic's AI model
6. Apple's AI efforts

Timestamps:
00:00 Google Gemini 2.0 promises AI improvements, challenges.
05:02 AGI concerns prompt OpenAI departure, influence aims.
09:28 Coalition urges Congress to secure AISI's future.
12:34 US unveils AI strategy for national security.
16:21 iOS 18.1 released; AI requires iPhone 15 Pro.
17:25 iOS 18.1 enhances writing tools and summarization.
22:37 Disney to announce major AI creative transformation.
27:35 Claude 3.5 Sonnet: Powerful but buggy automation tool.
28:22 Anthropic impressively shipped agentic AI first.
33:14 Google's Jarvis uses computer vision; early development.
36:31 AI-generated video indistinguishable from real; early stages.
38:45 Orion upgrade enhances AI efficiency across industries.
41:16 No Orion release before December, says Altman.
44:22 Google Gemini, GPT-5, AGI concerns, AI strategies.
47:06 Subscribe, follow, rate, share, and join daily.

Keywords: Jordan Wilson, Google's Project Jarvis, Gemini, web automation, AI agents, Microsoft's Copilot Vision, Copilot Studio, Anthropic, OpenAI, Orion, GPT Tool, AI advancements, AI safety, US AI Safety Institute, AI benchmarks, Microsoft Surface laptop, Disney AI initiative, Anthropic Claude 3.5 Sonnet, Claude 3.5 Haiku, Sam Altman, Orion's release, Microsoft's Ignite Conference, Google Gemini 2.0, federal AI legislation, AI national security, Character AI, Google lawsuit, Apple iOS 18.1, Apple Intelligence, revamped Siri.

Get more out of ChatGPT by learning our PPP method in this live, interactive and free training! Sign up now: https://youreverydayai.com/ppp-registration/
Kara and Scott discuss Apple scaling back production of Apple Vision Pro headsets, and a mother suing Character.AI, claiming a chatbot encouraged her teenage son to commit suicide. Then, Tesla's Q3 earnings beat expectations, and Starbucks' preliminary quarterly results disappoint yet again. Plus, the podcast election continues with former President Trump going on Joe Rogan, and VP Kamala Harris sitting down with Brené Brown. In more election news, Trump's former Chief of Staff, John Kelly, warns that Trump is a fascist, and the secret big names donating to Harris are revealed. Stick around for listener mail to hear Scott's tips for teaching kids how to negotiate. Answer this week's listener poll on Threads here! Follow us on Instagram and Threads at @pivotpodcastofficial. Follow us on TikTok at @pivotpodcast. Send us your questions by calling us at 855-51-PIVOT, or at nymag.com/pivot. Learn more about your ad choices. Visit podcastchoices.com/adchoices
Spooky Commerce finishes with an epic finale as we explore the mysterious history of Seattle's iconic Pike Place Market. Phillip, Brian, and Sarah share tales of ghost sightings, haunted (coffee) grounds, and the rich lore behind one of the oldest continuously operating markets in the U.S. PLUS: Phillip and Brian debrief a tragedy involving AI-to-human relationships and discuss the challenges parents face as children gain unrestricted access to AI tools.

The Roast With the Ghost

TW: This week's episode discusses sensitive subject matter, including suicide.

Key Takeaways:
- AI taking on a life of its own: Children have unrestricted access to increasingly lifelike AI tools.
- Pike Place Market is home to more than just vendors: It's rumored to house spirits from Seattle's early days.
- Kells yeah: Kells Irish Pub sits on a former mortuary and a former indigenous burial site – and is one of Seattle's most haunted locations.

[00:09:30] "The thing is, the difference between what you were doing and what an AI is doing is you are interacting with real people. Message boards were human-to-human communication. It was just instantaneous. This is like, getting sucked into a video game that has no end." – Brian
[00:18:00] "This thing (AI) has escaped containment." – Phillip
[00:25:22] "Brian's developed a palate that's sophisticated enough to taste the ghost in Pike Place roast." – Phillip
[00:03:35] "The building that Kells Pub exists in is the Butterworth and Sons building, which was not only a former mortuary, but it was also built on top of tribal burial grounds." – Sarah
[00:40:00] "There's just there's so many elements here that would actually enable this sort of storytelling around the place." – Brian

Associated Links:
- Check out Future Commerce on YouTube
- Check out Future Commerce+ for exclusive content and save on merch and print
- Subscribe to Insiders and The Senses to read more about what we are witnessing in the commerce world
- Listen to our other episodes of Future Commerce
Requiem for Raghavan
Electric Motors Are About to Get a Major Upgrade Thanks to Benjamin Franklin
Epic judge lets Google keep its Android app store closed to competitors — for now
Google will block election ads after polls close
LinkedIn is News Publishers' New Best Friend
People are flocking to Bluesky as X makes more unwanted changes
Elon Musk's X is changing its privacy policy to allow third parties to train AI on your posts
Musk's X seeks to steer lawsuits to conservative court in Texas
Sam Altman's identity and cryptocurrency venture Worldcoin has a rebrand
Character.ai Faces Lawsuit After Teen's Suicide
On-device AI could make apps obsolete
Former OpenAI Researcher Says the Company Broke Copyright Law
Controversy after Polish radio station replaces human presenters with AI
Sotheby's to auction its first artwork made by a humanoid robot
Elvis Found Alive
Thank you, Steve Ballmer, for a concert app that made me want to scream

Content Warning: The following story discusses the sensitive topic of suicide involving a minor. If you or someone you know is having thoughts of suicide or self-harm, please contact the 988 Suicide & Crisis Lifeline - call or text 988 or chat online at chat.988lifeline.org. If you are located outside the United States, please visit findahelpline.com to find a helpline in your country.

Hosts: Leo Laporte, Jeff Jarvis, and Paris Martineau
Guest: Nathan Freitas

Download or subscribe to this show at https://twit.tv/shows/this-week-in-google.
Get episodes ad-free with Club TWiT at https://twit.tv/clubtwit

Sponsors:
INFO.ACILEARNING.COM/TWIT - code TWIT100
bitwarden.com/twit
uscloud.com