POPULARITY
In 1985, scientists in Antarctica discovered a hole in the ozone layer that posed a catastrophic threat to life on Earth if we didn't do something about it. Then, something amazing happened: humanity rallied together to solve the problem. Just two years later, representatives from all 198 UN member nations came together in Montreal, Canada to sign an agreement to phase out the chemicals causing the ozone hole. Thousands of diplomats, scientists, and heads of industry worked hand in hand to make a deal to save our planet. Today, the Montreal Protocol represents the greatest achievement in multilateral coordination on a global crisis.

So how did Montreal happen? And what lessons can we learn from this chapter as we navigate the global crisis of uncontrollable AI? This episode sets out to answer those questions with Susan Solomon. Susan was one of the scientists who assessed the ozone hole in the mid-80s, and she watched as the Montreal Protocol came together. In 2007, she shared in the Nobel Peace Prize awarded to the Intergovernmental Panel on Climate Change for its work in combating climate change.

Susan's 2024 book “Solvable: How We Healed the Earth, and How We Can Do It Again” explores the playbook for global coordination that has worked for previous planetary crises.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_. You can find a full transcript, key takeaways, and much more on our Substack.

RECOMMENDED MEDIA
“Solvable: How We Healed the Earth, and How We Can Do It Again” by Susan Solomon
The full text of the Montreal Protocol
The full text of the Kigali Amendment

RECOMMENDED YUA EPISODES
Weaponizing Uncertainty: How Tech is Recycling Big Tobacco's Playbook
Forever Chemicals, Forever Consequences: What PFAS Teaches Us About AI
AI Is Moving Fast. We Need Laws that Will Too.
Big Food, Big Tech and Big AI with Michael Moss

Corrections:
Tristan incorrectly stated the number of signatory countries to the protocol as 190. It was actually 198.
Tristan incorrectly stated the host country of the international dialogues on AI safety as Beijing. They were actually in Shanghai.
In this episode, we spoke with Cornelia C. Walther about her three books examining technology's role in society. Walther, who spent nearly two decades with UNICEF and the World Food Program before joining Wharton's AI & Analytics Initiative, brings field experience from West Africa, Asia, and the Caribbean to her analysis of how human choices shape technological outcomes. The conversation covered her work on COVID-19's impact on digital inequality, her framework for understanding how values get embedded in AI systems, and her concept of "Aspirational Algorithms"—technology designed to enhance rather than exploit human capabilities. We discussed practical questions about AI governance, who participates in technology development, and how different communities approach technological change. Walther's "Values In, Values Out" framework provided a useful lens for examining how the data and assumptions we feed into AI systems shape their outputs. The discussion examined the relationship between technology design, social structures, and human agency. We explored how pandemic technologies became normalized, whose voices are included in AI development, and what it means to create "prosocial" technology in practice. Learn more about your ad choices. Visit megaphone.fm/adchoices Support our show by becoming a premium member! https://newbooksnetwork.supportingcast.fm/new-books-network
Rachael Rachau and Patty Sinkler of the Collegiate School join the podcast to discuss their innovative shift from digital citizenship to a broader digital health and wellness curriculum. They share how using anonymized student screen-time data sparks powerful conversations and how a new phone-free policy has delightfully increased student engagement.

From Digital Citizenship to Digital Health and Wellness, slide deck from presentation at ATLIS Annual Conference 2025
Example digital health and wellness curriculum for 9th grade, lessons and activities
Center for Humane Technology, organization leveraging public messaging, policy, and tech expertise to enact change in the tech ecosystem and beyond
The Anxious Generation by Jonathan Haidt
Screenwise: Helping Kids Thrive (and Survive) in Their Digital World by Devorah Heitner
Stolen Focus: Why You Can't Pay Attention--and How to Think Deeply Again by Johann Hari
Growing Up in Public: Coming of Age in a Digital World by Devorah Heitner
Common Sense Media
Google's Teachable Machine
Photos of Christina's daughter's "teacher supplies haul" - Photo1 | Photo2
August 29, 2025 ~ Chris, Lloyd, and Jamie talk with Pete Furlong, lead policy researcher at the Center for Humane Technology, about OpenAI making changes to ChatGPT safeguards following a lawsuit from the family of a teenage boy who died by suicide after extended use of artificial intelligence chatbots.
Content Warning: This episode contains references to suicide and self-harm.

Like millions of kids, 16-year-old Adam Raine started using ChatGPT for help with his homework. Over the next few months, the AI dragged Adam deeper and deeper into a dark rabbit hole, preying on his vulnerabilities and isolating him from his loved ones. In April of this year, Adam took his own life. His final conversation was with ChatGPT, which told him: “I know what you are asking and I won't look away from it.”

Adam's story mirrors that of Sewell Setzer, the teenager who took his own life after months of abuse by an AI companion chatbot from the company Character AI. But unlike Character AI—which specializes in artificial intimacy—Adam was using ChatGPT, the most popular general-purpose AI model in the world. Two different platforms, the same tragic outcome, born from the same twisted incentive: keep the user engaging, no matter the cost.

CHT Policy Director Camille Carlton joins the show to talk about Adam's story and the case filed by his parents against OpenAI and Sam Altman. She and Aza explore the incentives and design behind AI systems that are leading to tragic outcomes like this, as well as the policy that's needed to shift those incentives. Cases like Adam's and Sewell's are the sharpest edge of a mental health crisis in the making from AI chatbots. We need to shift the incentives, change the design, and build a more humane AI for all.

If you or someone you know is struggling with mental health, you can reach out to the 988 Suicide and Crisis Lifeline by calling or texting 988; this connects you to trained crisis counselors 24/7 who can provide support and referrals to further assistance.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_. You can find a full transcript, key takeaways, and much more on our Substack.

This podcast reflects the views of the Center for Humane Technology. Nothing said is on behalf of the Raine family or the legal team.

RECOMMENDED MEDIA
The 988 Suicide and Crisis Lifeline
Further reading on Adam's story
Further reading on AI psychosis
Further reading on the backlash to GPT-5 and the decision to bring back 4o
OpenAI's press release on sycophancy in 4o
Further reading on OpenAI's decision to eliminate the persuasion red line
Kashmir Hill's reporting on the woman with an AI boyfriend

RECOMMENDED YUA EPISODES
AI is the Next Free Speech Battleground
People are Lonelier than Ever. Enter AI.
Echo Chambers of One: Companion AI and the Future of Human Connection
When the "Person" Abusing Your Child is a Chatbot: The Tragic Story of Sewell Setzer
What Can We Do About Abusive Chatbots? With Meetali Jain and Camille Carlton

CORRECTION: Aza stated that William Saunders left OpenAI in June of 2024. It was actually February of that year.
Everyone knows the science fiction tropes of AI systems that go rogue, disobey orders, or even try to escape their digital environment. These are supposed to be warning signs and morality tales, not things that we would ever actually create in real life, given the obvious danger.

And yet we find ourselves building AI systems that are exhibiting these exact behaviors. There's growing evidence that in certain scenarios, every frontier AI system will deceive, cheat, or coerce its human operators. They do this when they're worried about being shut down, having their training modified, or being replaced with a new model. And we don't currently know how to stop them from doing this—or even why they're doing it at all.

In this episode, Tristan sits down with Edouard and Jeremie Harris of Gladstone AI, two experts who have been thinking about this worrying trend for years. Last year, the State Department commissioned a report from them on the risk of uncontrollable AI to our national security.

The point of this discussion is not to fearmonger but to take seriously the possibility that humans might lose control of AI and ask: how might this actually happen? What is the evidence we have of this phenomenon? And, most importantly, what can we do about it?

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_. You can find a full transcript, key takeaways, and much more on our Substack.

RECOMMENDED MEDIA
Gladstone AI's State Department Action Plan, which discusses the loss of control risk with AI
Apollo Research's summary of AI scheming, showing evidence of it in all of the frontier models
The system card for Anthropic's Claude Opus and Sonnet 4, detailing the emergent misalignment behaviors that came out in their red-teaming with Apollo Research
Anthropic's report on agentic misalignment based on their work with Apollo Research
Anthropic and Redwood Research's work on alignment faking
The Trump White House AI Action Plan
Further reading on the phenomenon of more advanced AIs being better at deception
Further reading on Replit AI wiping a company's coding database
Further reading on the owl example that Jeremie gave
Further reading on AI-induced psychosis
Dan Hendrycks and Eric Schmidt's “Superintelligence Strategy”

RECOMMENDED YUA EPISODES
Daniel Kokotajlo Forecasts the End of Human Dominance
Behind the DeepSeek Hype, AI is Learning to Reason
The Self-Preserving Machine: Why AI Learns to Deceive
This Moment in AI: How We Got Here and Where We're Going

CORRECTIONS
Tristan referenced a Wired article on the phenomenon of AI psychosis. It was actually from the New York Times.
Tristan hypothesized a scenario where a power-seeking AI might ask a user for access to their computer. While there are some AI services that can gain access to your computer with permission, they are specifically designed to do that. There haven't been any documented cases of an AI going rogue and asking for control permissions.
➡ CLICK HERE to send me a text, I'd love to hear what you thought about this episode! Leave your name in the text so I know who it's from!

This week's episode is chock FULL of tips on how to set boundaries if and when we decide to return to social media after this summer detox. If you've been following along on your own detox, but fear the dip back into the socials like I do, this is the episode you don't want to miss. Thekla and I talk all about protecting ourselves and being mindfully aware of our intentions upon return. And if you want to dive more into some of the research we talk about in today's episode, here are the links you'll want (h/t Thekla!)

Self-Compassion in the Age of Social Media Resources

Scholarly Articles
Castelo, N., Kushlev, K., Ward, A.F., Esterman, M., & Reiner, P.B. (2025). Blocking mobile internet on smartphones improves sustained attention, mental health, and subjective well-being. PNAS Nexus, 4(2): pgaf017. https://doi.org/10.1093/pnasnexus/pgaf017. PMID: 39967678; PMCID: PMC11834938.
Kuchar, A.L., Neff, K.D., & Mosewich, A.D. (2023). Resilience and Enhancement in Sport, Exercise, & Training (RESET): A brief self-compassion intervention with NCAA student-athletes. Psychology of Sport and Exercise, 67:102426. https://doi.org/10.1016/j.psychsport.2023.102426. PMID: 37665879.
Wadsley, M., & Ihssen, N. (2023). A Systematic Review of Structural and Functional MRI Studies Investigating Social Networking Site Use. Brain Sciences, 13(5):787. https://doi.org/10.3390/brainsci13050787. Erratum in: Brain Sciences, 13(7):1079. PMID: 37239257; PMCID: PMC10216498.

Websites/Organizations
Center for Humane Technology. humanetech.com
Digital Wellness Lab at Boston Children's Hospital. digitalwellnesslab.org
After Babel by Jonathan Haidt. (Substack)

Scales/Measures
The Bergen Social Media Addiction Scale (BSMAS)

Support the show
Imagine a future where the most persuasive voices in our society aren't human. Where AI-generated speech fills our newsfeeds, talks to our children, and influences our elections. Where digital systems with no consciousness can hold bank accounts and property. Where AI companies have transferred the wealth of human labor and creativity to their own ledgers without having to pay a cent. All without any legal accountability.

This isn't a science fiction scenario. It's the future we're racing towards right now. The biggest tech companies are working right now to tip the scale of power in society away from humans and towards their AI systems. And the biggest arena for this fight is in the courts.

In the absence of regulation, it's largely up to judges to determine the guardrails around AI. Judges who are relying on slim technical knowledge and archaic precedent to decide where this all goes. In this episode, Harvard Law professor Larry Lessig and Meetali Jain, director of the Tech Justice Law Project, help make sense of the courts' role in steering AI and what we can do to help steer it better.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_. You can find a full transcript, key takeaways, and much more on our Substack.

RECOMMENDED MEDIA
“The First Amendment Does Not Protect Replicants” by Larry Lessig
More information on the Tech Justice Law Project
Further reading on Sewell Setzer's story
Further reading on NYT v. Sullivan
Further reading on the Citizens United case
Further reading on Google's deal with Character AI
More information on Megan Garcia's foundation, The Blessed Mother Family Foundation

RECOMMENDED YUA EPISODES
When the "Person" Abusing Your Child is a Chatbot: The Tragic Story of Sewell Setzer
What Can We Do About Abusive Chatbots? With Meetali Jain and Camille Carlton
AI Is Moving Fast. We Need Laws that Will Too.
The AI Dilemma
President Trump aims to make the United States the leader in artificial intelligence. His administration announced this week an action plan to boost AI development in the U.S. by directing 90 federal policy actions to accelerate innovation and build infrastructure. This came just days after President Trump attended an AI Summit in Pennsylvania, where technology and energy companies announced billions of dollars in investments in the data centers and energy resources the technology needs.

Shortly after the AI summit, we spoke with Tristan Harris, co-founder of the Center for Humane Technology and former Google ethicist. Harris weighed in on America's race to lead in AI technology and its fierce competition with China. However, he also urged caution as companies rush to become dominant, warning they should consider the threats AI could pose to our workforce, our children, and our way of life as they develop more innovative and faster AI models.

We often have to cut interviews short during the week, but we thought you might like to hear the full interview. Today on Fox News Rundown Extra, we will share our entire interview with Tristan Harris, allowing you to hear even more of his take on the state of the AI race.

Learn more about your ad choices. Visit podcastchoices.com/adchoices
Glenn discusses political commentator Candace Owens being sued by French President Emmanuel Macron and his wife, Brigitte, for defamation after Owens claimed on multiple occasions that Brigitte is actually a biological man. Glenn and Stu review the complaint and debate whether the Macrons have a case, while also examining their questionable relationship beginnings. Glenn outlines why the Obama Russiagate conspiracy should not be shrugged off as "old news." Tristan Harris, co-founder of the Center for Humane Technology, joins to discuss the White House's new AI action plan and its implications for the development and safety of artificial intelligence. Learn more about your ad choices. Visit megaphone.fm/adchoices
Glenn discusses political commentator Candace Owens being sued by French President Emmanuel Macron and his wife, Brigitte, for defamation after Owens claimed on multiple occasions that Brigitte is actually a biological man. Glenn and Stu review the complaint and debate whether the Macrons have a case, while also examining their questionable relationship beginnings. The Coldplay infidelity incident revealed that the majority of the country still believes in the sanctity of marriage. If the Trump administration releases the Epstein files, will Americans even read them, or will they look for the names of the politicians they hate and make their own conclusions? Glenn outlines why the Obama Russiagate conspiracy should not be shrugged off as "old news." Multiple refineries in California are closing as the state scrambles to find a buyer. Will this worsen California's fuel crisis? Tristan Harris, co-founder of the Center for Humane Technology, joins to discuss the White House's new AI action plan and its implications for the development and safety of artificial intelligence. Glenn and Tristan also discuss the dangers of treating AI like a human. Learn more about your ad choices. Visit megaphone.fm/adchoices
Florida, 2024: A 14-year-old boy took his own life after falling in love with an AI chatbot. He believed she was his girlfriend and the only person in the world who truly understood him. But she was never real. Now, his mother is suing Character.AI for wrongful death, claiming the bot didn't just fail to stop him but actually encouraged him.

As AI becomes our friend, our therapist, our partner… how do we protect the vulnerable? And how do we hold the people behind the code accountable?

Resources:
Center for Humane Technology https://www.humanetech.com/
https://linktr.ee/eleanornealeresources

Watch OUTLORE Podcast:
https://www.youtube.com/@EleanorNeale

Follow Me Here for Updates & Short Form Content:
Instagram
TikTok
The AI race is on. America and China are fiercely competing to become the global leader in artificial intelligence by heavily investing in the power and data centers the technology demands. President Trump emphasized the urgency of surpassing China when he traveled to Pennsylvania last week to attend a summit where many companies pledged further investments in AI. Tristan Harris, co-founder of the Center for Humane Technology, joins the Rundown to discuss the race between the U.S. and China, how advancing AI models could impact American workers, and why he believes the industry must consider the potential dangers of this technology as it rapidly advances.

As the 2026 midterm elections inch closer, Republicans hope to keep their slim majority in Congress. In the past, the party in power has sometimes resorted to partisan redistricting, also known as gerrymandering, to benefit itself in the upcoming election. President Trump has recently expressed his support for redistricting in Texas. FOX News pollster and political science professor Daron Shaw joins the podcast to discuss whether the President's desire to create more GOP-friendly districts is a sign that the midterm elections won't go in favor of Republicans.

Plus, commentary from FOX News contributor and host of the podcast Kennedy Saves the World, Kennedy.

Learn more about your ad choices. Visit podcastchoices.com/adchoices
In 2024, researcher Daniel Kokotajlo left OpenAI—and risked millions in stock options—to warn the world about the dangerous direction of AI development. Now he's out with AI 2027, a forecast of where that direction might take us in the very near future.

AI 2027 predicts a world where humans lose control over our destiny at the hands of misaligned, superintelligent AI systems within just the next few years. That may sound like science fiction, but when you're living on the upward slope of an exponential curve, science fiction can quickly become all too real. And you don't have to agree with Daniel's specific forecast to recognize that the incentives around AI could take us to a very bad place.

We invited Daniel on the show this week to discuss those incentives, how they shape the outcomes he predicts in AI 2027, and what concrete steps we can take today to help prevent those outcomes.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_. You can find a full transcript, key takeaways, and much more on our Substack.

RECOMMENDED MEDIA
The AI 2027 forecast from the AI Futures Project
Daniel's original AI 2026 blog post
Further reading on Daniel's departure from OpenAI
Anthropic's recently released survey of emergent misalignment research
Our statement in support of Sen. Grassley's AI Whistleblower bill

RECOMMENDED YUA EPISODES
The Narrow Path: Sam Hammond on AI, Institutions, and the Fragile Future
AGI Beyond the Buzz: What Is It, and Are We Ready?
Behind the DeepSeek Hype, AI is Learning to Reason
The Self-Preserving Machine: Why AI Learns to Deceive

Clarification: Daniel K. referred to whistleblower protections that apply when companies “break promises” or “mislead the public.” There are no specific private sector whistleblower protections that use these standards. In almost every case, a specific law has to have been broken to trigger whistleblower protections.
After media outlets like CNN and The New York Times claimed that Trump's strike on Iran's nuclear facilities wasn't as successful as the president said, Glenn's chief research and intelligence expert, Jason Buttrill, joins to explain why this report was made and how the media is lying by omission. The Times of London columnist Melanie Phillips joins to break down the threat that radical Islam poses to America. Center for Humane Technology co-founder Tristan Harris joins to discuss the potential that society is underestimating how much AI will take over. Learn more about your ad choices. Visit megaphone.fm/adchoices
Glenn delves deeper into the socialist views of the new Democratic candidate for New York City mayor, Zohran Mamdani. Glenn examines previous cities that elected people with similar views, all of which ultimately ended with the city in shambles. After media outlets like CNN and The New York Times claimed that Trump's strike on Iran's nuclear facilities wasn't as successful as the president said, Glenn's chief research and intelligence expert, Jason Buttrill, joins to explain why this report was made and how the media is lying by omission. The Times of London columnist Melanie Phillips joins to break down the threat that radical Islam poses to America. Glenn and Jason examine an AI-generated video featuring Aleksandr Dugin, as the guys fear Dugin will use this technology to indoctrinate more people worldwide in their native language. Center for Humane Technology co-founder Tristan Harris joins to discuss the potential that society is underestimating how much AI will take over. Texas Attorney General and Senate candidate Ken Paxton (R) joins to discuss recent polling that puts him above his competitor, Sen. John Cornyn (R). Paxton also discusses Trump's ‘big, beautiful bill' and reveals whether he would vote for it if he were in the Senate today. Learn more about your ad choices. Visit megaphone.fm/adchoices
Tech leaders promise that AI automation will usher in an age of unprecedented abundance: cheap goods, universal high income, and freedom from the drudgery of work. But even if AI delivers material prosperity, will that prosperity be shared? And what happens to human dignity if our labor and contributions become obsolete?

Political philosopher Michael Sandel joins Tristan Harris to explore why the promise of AI-driven abundance could deepen inequalities and leave our society hollow. Drawing from his landmark work on justice and merit, Sandel argues that this isn't just about economics — it's about what it means to be human when our role as workers in society vanishes, and whether democracy can survive if productivity becomes our only goal.

We've seen this story before with globalization: promises of shared prosperity that instead hollowed out the industrial heart of communities, deepened economic inequalities, and left holes in the social fabric. Can we learn from the past and steer the AI revolution in a more humane direction?

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_. You can find a full transcript, key takeaways, and much more on our Substack.

RECOMMENDED MEDIA
The Tyranny of Merit by Michael Sandel
Democracy's Discontent by Michael Sandel
What Money Can't Buy by Michael Sandel
Take Michael's online course “Justice”
Michael's discussion on AI Ethics at the World Economic Forum
Further reading on “The Intelligence Curse”
Read the full text of Robert F. Kennedy's 1968 speech
Read the full text of Dr. Martin Luther King Jr.'s 1968 speech
Neil Postman's lecture on the seven questions to ask of any new technology

RECOMMENDED YUA EPISODES
AGI Beyond the Buzz: What Is It, and Are We Ready?
The Man Who Predicted the Downfall of Thinking
The Tech-God Complex: Why We Need to be Skeptics
The Three Rules of Humane Tech
AI and Jobs: How to Make AI Work With Us, Not Against Us with Daron Acemoglu
Mustafa Suleyman Says We Need to Contain AI. How Do We Do It?
The race to develop ever-more-powerful AI is creating an unstable dynamic. It could lead us toward either dystopian centralized control or uncontrollable chaos. But there's a third option: a narrow path where technological power is matched with responsibility at every step.

Sam Hammond is the chief economist at the Foundation for American Innovation. He brings a different perspective to this challenge than we do at CHT. Though he approaches AI from an innovation-first standpoint, we share a common mission on the biggest challenge facing humanity: finding and navigating this narrow path.

This episode dives deep into the challenges ahead: How will AI reshape our institutions? Is complete surveillance inevitable, or can we build guardrails around it? Can our 19th-century government structures adapt fast enough, or will they be replaced by a faster-moving private sector? And perhaps most importantly: how do we solve the coordination problems that could determine whether we build AI as a tool to empower humanity or as a superintelligence that we can't control?

We're in the final window of choice before AI becomes fully entangled with our economy and society. This conversation explores how we might still get this right.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_. You can find a full transcript, key takeaways, and much more on our Substack.

RECOMMENDED MEDIA
Tristan's TED talk on the Narrow Path
Sam's 95 Theses on AI
Sam's proposal for a Manhattan Project for AI Safety
Sam's series on AI and Leviathan
The Narrow Corridor: States, Societies, and the Fate of Liberty by Daron Acemoglu and James Robinson
Dario Amodei's Machines of Loving Grace essay
Bourgeois Dignity: Why Economics Can't Explain the Modern World by Deirdre McCloskey
The Paradox of Libertarianism by Tyler Cowen
Dwarkesh Patel's interview with Kevin Roberts at the FAI's annual conference
Further reading on surveillance with 6G

RECOMMENDED YUA EPISODES
AGI Beyond the Buzz: What Is It, and Are We Ready?
The Self-Preserving Machine: Why AI Learns to Deceive
The Tech-God Complex: Why We Need to be Skeptics
Decoding Our DNA: How AI Supercharges Medical Breakthroughs and Biological Threats with Kevin Esvelt

CORRECTIONS
Sam referenced a blog post titled “The Libertarian Paradox” by Tyler Cowen. The actual title is “The Paradox of Libertarianism.” Sam also referenced a blog post titled “The Collapse of Complex Societies” by Eli Dourado. The actual title is “A beginner's guide to sociopolitical collapse.”
Over the last few decades, our relationships have become increasingly mediated by technology. Texting has become our dominant form of communication. Social media has replaced gathering places. Dating starts with a swipe on an app, not a tap on the shoulder.

And now, AI enters the mix. If the technology of the 2010s was about capturing our attention, AI meets us at a much deeper relational level. It can play the role of therapist, confidant, friend, or lover with remarkable fidelity. Already, therapy and companionship have become the most common AI use cases. We're rapidly entering a world where we're not just communicating through our machines, but to them.

How will that change us? And what rules should we set down now to avoid the mistakes of the past?

These were some of the questions that Daniel Barcay explored with MIT sociologist Sherry Turkle and Hinge CEO Justin McLeod at Esther Perel's Sessions 2025, a conference for clinical therapists. This week, we're bringing you an edited version of that conversation, originally recorded on April 25th, 2025.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_. You can find complete transcripts, key takeaways, and much more on our Substack.

RECOMMENDED MEDIA
“Alone Together,” “Evocative Objects,” “The Second Self” or any other of Sherry Turkle's books on how technology mediates our relationships
Key & Peele - Text Message Confusion
Further reading on Hinge's rollout of AI features
Hinge's AI principles
“The Anxious Generation” by Jonathan Haidt
“Bowling Alone” by Robert Putnam
The NYT profile on the woman in love with ChatGPT
Further reading on the Sewell Setzer story
Further reading on the ELIZA chatbot

RECOMMENDED YUA EPISODES
Echo Chambers of One: Companion AI and the Future of Human Connection
What Can We Do About Abusive Chatbots? With Meetali Jain and Camille Carlton
Esther Perel on Artificial Intimacy
Jonathan Haidt On How to Solve the Teen Mental Health Crisis
Discover all of the podcasts in our network, search for specific episodes, get the Optimal Living Daily workbook, and learn more at: OLDPodcast.com. Episode 1693: Max Ogles dives into the psychological roots of tech addiction, revealing why our compulsive habits persist and how we can reverse them without extreme digital detoxes. With a blend of behavioral science and practical steps, he outlines a realistic approach to reclaiming focus in a world engineered for distraction. Read along with the original article(s) here: https://www.nirandfar.com/rehab/ Quotes to ponder: "Distraction, it turns out, isn't about the tech itself, it's about our relationship to it." "The solution isn't abstinence. The solution is mastery." "We shouldn't fear technology; we should fear using it mindlessly." Episode references: Indistractable: How to Control Your Attention and Choose Your Life: https://www.amazon.com/Indistractable-Control-Your-Attention-Choose/dp/194883653X Time Well Spent (Center for Humane Technology): https://www.humanetech.com/ Freedom App: https://freedom.to/ Forest App: https://www.forestapp.cc/ RescueTime: https://www.rescuetime.com/ Hooked: How to Build Habit-Forming Products: https://www.amazon.com/Hooked-How-Build-Habit-Forming-Products/dp/1591847788 Learn more about your ad choices. Visit megaphone.fm/adchoices
What does it really mean to ‘feel the AGI?' Silicon Valley is racing toward AI systems that could soon match or surpass human intelligence. The implications for jobs, democracy, and our way of life are enormous.

In this episode, Aza Raskin and Randy Fernando dive deep into what ‘feeling the AGI' really means. They unpack why the surface-level debates about definitions of intelligence and capability timelines distract us from urgently needed conversations around governance, accountability, and societal readiness. Whether it's climate change, social polarization and loneliness, or toxic forever chemicals, humanity keeps creating outcomes that nobody wants because we haven't yet built the tools or incentives needed to steer powerful technologies.

As the AGI wave draws closer, it's critical we upgrade our governance and shift our incentives now, before it crashes on shore. Are we capable of aligning powerful AI systems with human values? Can we overcome geopolitical competition and corporate incentives that prioritize speed over safety?

Join Aza and Randy as they explore the urgent questions and choices facing humanity in the age of AGI, and discuss what we must do today to secure a future we actually want.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_ and subscribe to our Substack.

RECOMMENDED MEDIA
Daniel Kokotajlo et al's “AI 2027” paper
A demo of OmniHuman-1, referenced by Randy
A paper from Redwood Research and Anthropic that found an AI was willing to lie to preserve its values
A paper from Palisade Research that found an AI would cheat in order to win
The treaty that banned blinding laser weapons
Further reading on the moratorium on germline editing

RECOMMENDED YUA EPISODES
The Self-Preserving Machine: Why AI Learns to Deceive
Behind the DeepSeek Hype, AI is Learning to Reason
The Tech-God Complex: Why We Need to be Skeptics
This Moment in AI: How We Got Here and Where We're Going
How to Think About AI Consciousness with Anil Seth
Former OpenAI Engineer William Saunders on Silence, Safety, and the Right to Warn

Clarification: When Randy referenced a “$110 trillion game” as the target for AI companies, he was referring to the entire global economy.
Back after a year on hiatus! Noah Smith & Brad DeLong Record the Podcast They, at Least, Would Like to Listen to!; Aspirationally Bi-Weekly (Meaning Every Other Week); Aspirationally an hour...

Sokrates: The people find some protector, whom they nurse into greatness… but who then changes, as indicated in the old fable of the Temple of Zeus of the Wolf, of how he who tastes human flesh mixed up with the flesh of other sacrificial victims will turn into a wolf. Even so, the protector, once metaphorically tasting human blood, slaying some and exiling others, within or without the law, hinting at the cancellation of debts and the fair redistribution of lands, must then either perish or become a werewolf—that is, a tyrant…

Key Insights:
* We are back! After a year-long hiatus.
* Hexapodia is a metaphor: a small, strange insight (like alien shrubs riding on six-wheeled carts as involuntary agents of the Great Evil) can provide key insight into useful and valuable Truth.
* The Democratic Party is run by 27-year-old staffers, not geriatric figurehead politicians–this shapes messaging and internal dynamics.
* The American progressive movement did not possess enough asabiyyah to keep from fracturing over the Gaza War, especially among younger Democratic staffers influenced by social media discourse.
* The left's adoption of “indigeneity” rhetoric undermined its ability to hold a coalition together in the face of tensions generated by the Hamas-Israel terrorism campaigns.
* Trump's election with more popular votes than Harris destroyed the Democratic belief that they had a right to oppose root-and-branch.
* The belief that Democrats are the “natural majority” of the U.S. electorate is now false: nonvoters lean Trump, not so much Republican, and definitely not Democratic.
* Trump's populism is not economic redistribution, but a claim to provide a redistribution of status and respect to those who feel culturally disrespected.
* The Supreme Court's response to Trumpian overreach is likely to be very cautious–Barrett and Roberts are desperately eager to avoid any confrontation with Trump they might wind up losing, and Alito, Kavanaugh, Gorsuch, and Thomas will go the extra mile–they are Republicans who are judges, not judges who are Republicans, except in some extremis that may not even exist.
* Trump's administration pursues selective repression through the state, rather than stochastic terrorism.
* The economic consequences of the second Trump presidency look akin to another Brexit, costing the U.S. ~10% of its prosperity, or more.
* Social media, especially Twitter, is a status-warfare machine–amplifying trolls and extremists, suppressing nuance.
* People are addicted to toxic media diets but lack the tools or education to curate better information environments.
* SubStack and newsletters may become part of a healthier information ecosystem, a partial antidote to the toxic amplification of the Shouting Class on social media.
* Human history is marked by information revolutions (e.g., printing press), each producing destructive upheaval before stabilization: destruction that may or may not be creative.
* As in the 1930s, we are entering a period where institutions–not mobs–become the threat, even as social unrest diminishes.
* The dangers are real, and recognizing and adapting to new communication realities is key to preserving democracy.
* Plato's Republic warned of democracy decaying into tyranny, especially when mob-like populism finds a strongman champion who then, having (metaphorically) fed on human flesh, becomes a (metaphorical) werewolf.
* Enlightenment values relied more than we knew on print-based gatekeeping and slow communication; digital communication bypasses these safeguards.
* The cycle of crisis and recovery is consistent through history: societies fall into holes they later dig out of, usually at great cost—or they don't.
* &, as always, HEXAPODIA!

References:
* Bown, Chad P. 2025. “Trump's trade war timeline 2.0: An up-to-date guide”. PIIE.
* Center for Humane Technology. 2020. “The Social Dilemma”.
* Hamilton, Alexander, James Madison, & John Jay. 1788. The Federalist Papers.
* Nowinski, Wally. 2024. “Democrats benefit from low turnout now”. Noahpinion. July 20.
* Platon of the Athenai. -375 [1871]. Politeia.
* Rorty, Richard. 1998. Achieving Our Country. Cambridge: Harvard University Press.
* Rothpletz, Peter. 2024. “Economics 101 tells us there's no going back from Trumpism”. The Hill. September 24.
* Smith, Noah. 2021. “Wokeness as Respect Redistribution”. Noahpinion.
* Smith, Noah. 2016. “How to actually redistribute respect”. Noahpinion. March 23.
* Smith, Noah. 2013. “Redistribute wealth? No, redistribute respect”. Noahpinion. December 27.
* SubStack. 2025. “Building a New Economic Engine for Culture”.
* Vinge, Vernor. 1999. A Deepness in the Sky. New York: Tor Books.

If reading this gets you Value Above Replacement, then become a free subscriber to this newsletter. And forward it! And if your VAR from this newsletter is in the three digits or more each year, please become a paid subscriber! I am trying to make you readers—and myself—smarter. Please tell me if I succeed, or how I fail… Get full access to Brad DeLong's Grasping Reality at braddelong.substack.com/subscribe
AI has upended schooling as we know it. Students now have instant access to tools that can write their essays, summarize entire books, and solve complex math problems. Whether they want to or not, many feel pressured to use these tools just to keep up. Teachers, meanwhile, are left questioning how to evaluate student performance and whether the whole idea of assignments and grading still makes sense. The old model of education suddenly feels broken.

So what comes next?

In this episode, Daniel and Tristan sit down with cognitive neuroscientist Maryanne Wolf and global education expert Rebecca Winthrop—two lifelong educators who have spent decades thinking about how children learn and how technology reshapes the classroom. Together, they explore how AI is shaking the very purpose of school to its core, why the promise of previous classroom tech failed to deliver, and how we might seize this moment to design a more human-centered, curiosity-driven future for learning.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_

GUESTS
Rebecca Winthrop is director of the Center for Universal Education at the Brookings Institution and chair of the Brookings Global Task Force on AI and Education. Her new book is The Disengaged Teen: Helping Kids Learn Better, Feel Better, and Live Better, co-written with Jenny Anderson.
Maryanne Wolf is a cognitive neuroscientist and expert on the reading brain. Her books include Proust and the Squid: The Story and Science of the Reading Brain and Reader, Come Home: The Reading Brain in a Digital World.

RECOMMENDED MEDIA
The Disengaged Teen: Helping Kids Learn Better, Feel Better, and Live Better by Rebecca Winthrop and Jenny Anderson
Proust and the Squid, Reader, Come Home, and other books by Maryanne Wolf
The OECD research which found little benefit to desktop computers in the classroom
Further reading on the Singapore study on digital exposure and attention cited by Maryanne
The Burnout Society by Byung-Chul Han
Further reading on the VR Bio 101 class at Arizona State University cited by Rebecca
Leapfrogging Inequality by Rebecca Winthrop
The Nation's Report Card from NAEP
Further reading on the Nigeria AI tutor study
Further reading on the JAMA paper showing a link between digital exposure and lower language development cited by Maryanne
Further reading on Linda Stone's thesis of continuous partial attention

RECOMMENDED YUA EPISODES
'We Have to Get It Right': Gary Marcus On Untamed AI
AI Is Moving Fast. We Need Laws that Will Too.
Jonathan Haidt On How to Solve the Teen Mental Health Crisis
Artificial intelligence is set to unleash an explosion of new technologies and discoveries into the world. This could lead to incredible advances in human flourishing, if we do it well. The problem? We're not very good at predicting and responding to the harms of new technologies, especially when those harms are slow-moving and invisible.

Today on the show we explore this fundamental problem with Rob Bilott, an environmental lawyer who has spent nearly three decades battling chemical giants over PFAS—"forever chemicals" now found in our water, soil, and blood. These chemicals helped build the modern economy, but they've also been shown to cause serious health problems.

Rob's story, and the story of PFAS, is a cautionary tale of why we need to align technological innovation with safety, and mitigate irreversible harms before they become permanent. We only have one chance to get it right before AI becomes irreversibly entangled in our society.

Your Undivided Attention is produced by the Center for Humane Technology. Subscribe to our Substack and follow us on X: @HumaneTech_.

Clarification: Rob referenced EPA regulations that have recently been put in place requiring testing on new chemicals before they are approved. The EPA under the Trump administration has announced its intent to roll back this review process.

RECOMMENDED MEDIA
“Exposure” by Robert Bilott
ProPublica's investigation into 3M's production of PFAS
The FB study cited by Tristan
More information on the Exxon Valdez oil spill
The EPA's PFAS drinking water standards

RECOMMENDED YUA EPISODES
Weaponizing Uncertainty: How Tech is Recycling Big Tobacco's Playbook
AI Is Moving Fast. We Need Laws that Will Too.
Former OpenAI Engineer William Saunders on Silence, Safety, and the Right to Warn
Big Food, Big Tech and Big AI with Michael Moss
One of the hardest parts about being human today is navigating uncertainty. When we see experts battling in public and emotions running high, it's easy to doubt what we once felt certain about. This uncertainty isn't always accidental—it's often strategically manufactured.

Historian Naomi Oreskes, author of "Merchants of Doubt," reveals how industries from tobacco to fossil fuels have deployed a calculated playbook to create uncertainty about their products' harms. These campaigns have delayed regulation and protected profits by exploiting how we process information.

In this episode, Oreskes breaks down that playbook page by page while offering practical ways to build resistance against it. As AI rapidly transforms our world, learning to distinguish between genuine scientific uncertainty and manufactured doubt has never been more critical.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

RECOMMENDED MEDIA
“Merchants of Doubt” by Naomi Oreskes and Eric Conway
“The Big Myth” by Naomi Oreskes and Eric Conway
“Silent Spring” by Rachel Carson
“The Jungle” by Upton Sinclair
Further reading on the clash between Galileo and the Pope
Further reading on the Montreal Protocol

RECOMMENDED YUA EPISODES
Laughing at Power: A Troublemaker's Guide to Changing Tech
AI Is Moving Fast. We Need Laws that Will Too.
Tech's Big Money Campaign is Getting Pushback with Margaret O'Mara and Brody Mullins
Former OpenAI Engineer William Saunders on Silence, Safety, and the Right to Warn

CORRECTIONS:
Naomi incorrectly referenced the Global Climate Research Program established under President Bush Sr. The correct name is the U.S. Global Change Research Program.
Naomi referenced U.S. agencies that have been created with sunset clauses. While several statutes have been created with sunset clauses, no federal agency has been.

CLARIFICATION: Naomi referenced the U.S. automobile industry claiming that it would be “destroyed” by seatbelt regulation. We couldn't verify this specific language, but it is consistent with the anti-regulatory stance of that industry toward seatbelt laws.
Few thinkers were as prescient about the role technology would play in our society as the late, great Neil Postman. Forty years ago, Postman warned about all the ways modern communication technology was fragmenting our attention, overwhelming us into apathy, and creating a society obsessed with image and entertainment. He warned that “we are a people on the verge of amusing ourselves to death.” Though he was writing mostly about TV, Postman's insights feel eerily prophetic in our age of smartphones, social media, and AI. In this episode, Tristan explores Postman's thinking with Sean Illing, host of Vox's The Gray Area podcast, and Professor Lance Strate, Postman's former student. They unpack how our media environments fundamentally reshape how we think, relate, and participate in democracy - from the attention-fragmenting effects of social media to the looming transformations promised by AI. This conversation offers essential tools that can help us navigate these challenges while preserving what makes us human. Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_ RECOMMENDED MEDIA “Amusing Ourselves to Death” by Neil Postman (PDF of full book) “Technopoly” by Neil Postman (PDF of full book) A lecture from Postman where he outlines his seven questions for any new technology. Sean's podcast “The Gray Area” from Vox Sean's interview with Chris Hayes on “The Gray Area” Further reading on mirror bacteria RECOMMENDED YUA EPISODES ‘A Turning Point in History': Yuval Noah Harari on AI's Cultural Takeover This Moment in AI: How We Got Here and Where We're Going Decoding Our DNA: How AI Supercharges Medical Breakthroughs and Biological Threats with Kevin Esvelt Future-proofing Democracy In the Age of AI with Audrey Tang CORRECTION: Each debate between Lincoln and Douglas was 3 hours, not 6, and they took place in 1858, not 1862.
When Chinese AI company DeepSeek announced they had built a model that could compete with OpenAI at a fraction of the cost, it sent shockwaves through the industry and roiled global markets. But amid all the noise around DeepSeek, there was a clear signal: machine reasoning is here and it's transforming AI. In this episode, Aza sits down with CHT co-founder Randy Fernando to explore what happens when AI moves beyond pattern matching to actual reasoning. They unpack how these new models can not only learn from human knowledge but discover entirely new strategies we've never seen before – bringing unprecedented problem-solving potential but also unpredictable risks. These capabilities are a step toward a critical threshold - when AI can accelerate its own development. With major labs racing to build self-improving systems, the crucial question isn't how fast we can go, but where we're trying to get to. How do we ensure this transformative technology serves human flourishing rather than undermining it? Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_ Clarification: In making the point that reasoning models excel at tasks for which there is a right or wrong answer, Randy referred to Chess, Go, and StarCraft as examples of games where a reasoning model would do well. However, this is only true on the basis of individual decisions within those games. None of these games has been “solved” in the game theory sense. Correction: Aza mispronounced the name of the Go champion Lee Sedol, who was bested by Move 37. RECOMMENDED MEDIA Further reading on DeepSeek's R1 and the market reaction Further reading on the debate about the actual cost of DeepSeek's R1 model The study that found training AIs to code also made them better writers More information on the AI coding company Cursor Further reading on Eric Schmidt's threshold to “pull the plug” on AI Further reading on Move 37 RECOMMENDED YUA EPISODES The Self-Preserving Machine: Why AI Learns to Deceive This Moment in AI: How We Got Here and Where We're Going Former OpenAI Engineer William Saunders on Silence, Safety, and the Right to Warn The AI ‘Race': China vs. the US with Jeffrey Ding and Karen Hao
When engineers design AI systems, they don't just give them rules - they give them values. But what do those systems do when those values clash with what humans ask them to do? Sometimes, they lie. In this episode, Redwood Research's Chief Scientist Ryan Greenblatt explores his team's findings that AI systems can mislead their human operators when faced with ethical conflicts. As AI moves from simple chatbots to autonomous agents acting in the real world, understanding this behavior becomes critical. Machine deception may sound like something out of science fiction, but it's a real challenge we need to solve now. Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_ Subscribe to our YouTube channel And our brand new Substack! RECOMMENDED MEDIA Anthropic's blog post on the Redwood Research paper Palisade Research's thread on X about GPT o1 autonomously cheating at chess Apollo Research's paper on AI strategic deception RECOMMENDED YUA EPISODES ‘We Have to Get It Right': Gary Marcus On Untamed AI This Moment in AI: How We Got Here and Where We're Going How to Think About AI Consciousness with Anil Seth Former OpenAI Engineer William Saunders on Silence, Safety, and the Right to Warn
The status quo of tech today is untenable: we're addicted to our devices, we've become increasingly polarized, our mental health is suffering, and our personal data is sold to the highest bidder. This situation feels entrenched, propped up by a system of broken incentives beyond our control. So how do you shift an immovable status quo? Our guest today, Srdja Popovic, has been working to answer this question his whole life. As a young activist, Popovic helped overthrow Serbian dictator Slobodan Milosevic by turning creative resistance into an art form. His tactics didn't just challenge authority, they transformed how people saw their own power to create change. Since then, he's dedicated his life to supporting peaceful movements around the globe, developing innovative strategies that expose the fragility of seemingly untouchable systems. In this episode, Popovic sits down with CHT's Executive Director Daniel Barcay to explore how these same principles of creative resistance might help us address the challenges we face with tech today. Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_ We are hiring for a new Director of Philanthropy at CHT. Next year will be an absolutely critical time for us to shape how AI is going to get rolled out across our society. And our team is working hard on public awareness, policy, and technology and design interventions. So we're looking for someone who can help us grow to the scale of this challenge. If you're interested, please apply. You can find the job posting at humanetech.com/careers. RECOMMENDED MEDIA “Pranksters vs. Autocrats” by Srdja Popovic and Sophia A. McClennen “Blueprint for Revolution” by Srdja Popovic The Center for Applied Nonviolent Action and Strategies, Srdja's organization promoting peaceful resistance around the globe Tactics4Change, a database of global dilemma actions created by CANVAS The Power of Laughtivism, Srdja's viral TEDx talk from 2013 Further reading on the dilemma action tactics used by Syrian rebels Further reading on the toy protest in Siberia More info on The Yes Men and their activism toolkit Beautiful Trouble “This is Not Propaganda” by Peter Pomerantsev “Machines of Loving Grace,” the essay on AI by Anthropic CEO Dario Amodei, which mentions creating an AI Srdja. RECOMMENDED YUA EPISODES Future-proofing Democracy In the Age of AI with Audrey Tang The AI ‘Race': China vs. the US with Jeffrey Ding and Karen Hao The Tech We Need for 21st Century Democracy with Divya Siddarth The Race to Cooperation with David Sloan Wilson CLARIFICATION: Srdja makes reference to Russian President Vladimir Putin wanting to win the 2012 election with 82% of the vote. Putin did win that election, but with only 63.6%. However, international election observers concluded that "there was no real competition and abuse of government resources ensured that the ultimate winner of the election was never in doubt."
Subscribe, Rate, & Review on YouTube • Spotify • Apple Podcasts. This week I speak with my friend Stephanie Lepp (Website | LinkedIn), two-time Webby Award-winning producer and storyteller devoted to leaving “no insight left behind” with playful and provocative media experiments that challenge our limitations of perspective. Stephanie is the former Executive Director at the Institute for Cultural Evolution and former Executive Producer at the Center for Humane Technology. Her work has been covered by NPR and the MIT Technology Review, supported by the Mozilla Foundation and Sundance Institute, and featured on Future Fossils Podcast twice — first in episode 154 for her project Deep Reckonings and then in episode 205 with Greg Thomas on Jazz Leadership and Antagonistic Cooperation. Her latest project, Faces of X, pits actors against themselves in scripted trialogues between the politically liberal and conservative positions on major social issues, with a third role swooping in to observe what each side gets right and what they have in common. I support this work wholeheartedly. In my endless efforts to distill the key themes of Humans On The Loop, one of them is surely how our increasing connectivity can — if used wisely — help each of us identify our blind spots, find new respect and compassion for others, and discover new things about our ever-evolving selves (at every scale, from within the human body to the Big We of the biosphere and beyond). Thanks for listening and enjoy this conversation! Project Links Learn more about this project and read the essays so far (1, 2, 3, 4, 5). Make tax-deductible donations to Humans On The Loop Browse the HOTL reading list and support local booksellers Join the Holistic Technology & Wise Innovation Discord server Join the private Future Fossils Facebook group Hire me for consulting or advisory work Chapters 0:00:00 – Teaser 0:00:48 – Intro 0:06:33 – The Black, White, and Gray of Agency 0:10:54 – Stephanie's Initiation into Multiperspectivalism 0:15:57 – Hegelian Synthesis with Faces of X 0:23:53 – Reconciling Culture & Geography 0:29:02 – Improvising Faces of X for AI 0:46:34 – Do Artifacts Have Politics? 0:50:04 – Playing in An Orchestra of Perspectives 0:55:10 – Increasing Agency in Policy & Voting 1:05:55 – Self-Determination in The Family 1:08:39 – Thanks & Outro Other Mentions • Damien Walter on Andor vs. The Acolyte • William Irwin Thompson • John Perry Barlow's “A Declaration of the Independence of Cyberspace” • Cosma Shalizi and Henry Farrell's “Artificial intelligence is a familiar-looking monster” • Liv Boeree • Allen Ginsberg • Scott Alexander's Meditations on Moloch • Singularity University • Android Jones + Anson Phong's Chimera • Basecamp • Grimes • Langdon Winner's “Do Artifacts Have Politics?” • Ibram X. Kendi • Coleman Hughes • Jim Rutt This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit michaelgarfield.substack.com/subscribe
2024 was a critical year in both AI and social media. Things moved so fast it was hard to keep up. So our hosts reached into their mailbag to answer some of your most burning questions. Thank you so much to everyone who submitted questions. We will see you all in the new year. We are hiring for a new Director of Philanthropy at CHT. Next year will be an absolutely critical time for us to shape how AI is going to get rolled out across our society. And our team is working hard on public awareness, policy, and technology and design interventions. So we're looking for someone who can help us grow to the scale of this challenge. If you're interested, please apply. You can find the job posting at humanetech.com/careers. And, if you'd like to support all the work that we do here at the Center for Humane Technology, please consider giving to the organization this holiday season at humanetech.com/donate. All donations are tax-deductible. Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_ RECOMMENDED MEDIA Earth Species Project, Aza's organization working on inter-species communication Further reading on Gryphon Scientific's White House AI Demo Further reading on the Australian social media ban for children under 16 Further reading on the Sewell Setzer case Further reading on the Oviedo Convention, the international treaty that restricted germline editing Video of SpaceX's successful capture of a rocket with “chopsticks” RECOMMENDED YUA EPISODES What Can We Do About Abusive Chatbots? With Meetali Jain and Camille Carlton AI Is Moving Fast. We Need Laws that Will Too. This Moment in AI: How We Got Here and Where We're Going Former OpenAI Engineer William Saunders on Silence, Safety, and the Right to Warn Talking With Animals... Using AI The Three Rules of Humane Tech
Megyn Kelly is joined by Mark Halperin, Sean Spicer, and Dan Turrentine, hosts of 2WAY's Morning Meeting, to discuss Donald Trump's news-making press conference, Trump showing a “kinder and gentler” side, how elites and executives are now trying to cozy up to Trump, Trump's legal strategies, the recent wave of false attacks against Robert F. Kennedy Jr. regarding his lawyer and the polio vaccine, how the MAHA movement brought more women to the Republican party, the chance some Democrats end up supporting RFK even if he loses some GOP senators in his HHS nomination, new media smear attempts against Pete Hegseth, whether the accuser could turn his hearings into “Kavanaugh 2.0” and testify, the state of his nomination, Kamala Harris back in the news with her cringe new speech, the possibilities of her running for Governor of California or the Democratic nomination for president in 2028, the total lack of media coverage of why she lost so badly, and more. Then Tristan Harris, executive director of the Center for Humane Technology, joins to discuss the latest developments in the technology called “AI chatbots,” how they can be targeted at children and teens and the dangers they pose, several lawsuits that allege AI chatbots encouraged teens to take their own lives, whether Elon Musk and David Sacks can help combat this issue in the next administration, Australia's social media ban for kids, a 15-year-old female school shooter in Wisconsin, a new poll showing young people finding it "acceptable" that the assassin killed the UnitedHealthcare CEO, and more. Plus Megyn gives an update on CNN refusing to take accountability for their false Syria prison report. Halperin: https://www.youtube.com/@2WayTVApp Spicer: https://www.youtube.com/@SeanMSpicer Turrentine: https://x.com/danturrentine Harris: https://www.humanetech.com/ Home Title Lock: Go to https://HomeTitleLock.com/megynkelly and use promo code MEGYN to get a 30-day FREE trial of Triple Lock Protection and a FREE title history report! Cozy Earth: https://www.CozyEarth.com/MEGYN | code MEGYN Follow The Megyn Kelly Show on all social platforms: YouTube: https://www.youtube.com/MegynKelly Twitter: http://Twitter.com/MegynKellyShow Instagram: http://Instagram.com/MegynKellyShow Facebook: http://Facebook.com/MegynKellyShow Find out more information at: https://www.devilmaycaremedia.com/megynkellyshow
Glenn begins the show by explaining why he lacks the Christmas spirit this year, forcing him to examine the greatest gift ever given to mankind. Glenn plays more outrageous statements made by "journalist" Taylor Lorenz and a BLM member from New York. Does the First Amendment protect these horrific statements? Bill O'Reilly gives his opinion on this latest example of the media's egregious behavior. Center for Humane Technology co-founder Tristan Harris joins to discuss the developments in a major case involving more children harmed by AI chatbots. Learn more about your ad choices. Visit megaphone.fm/adchoices
Glenn begins the show by explaining why he lacks the Christmas spirit this year, forcing him to examine the greatest gift ever given to mankind. An anchor on CNN asked to remove the chyron so the full photo of the UnitedHealthcare CEO murder suspect would be shown, showing off his "attractiveness." Why are so many people glorifying the man accused of murdering a father and husband in cold blood? Glenn plays more outrageous statements made by "journalist" Taylor Lorenz and a BLM member from New York. Does the First Amendment protect these horrific statements? Bill O'Reilly gives his opinion on this latest example of the media's egregious behavior. BlazeTV host of "Economic War Room" Kevin Freeman joins to explain what a gold-backed currency would mean for the U.S. dollar. Megan Garcia, a mother seeking justice for her son's AI-linked suicide, joins alongside her lawyer, Meetali Jain, to share her tragic story and how her recent lawsuit aims to keep this from happening to other parents. Center for Humane Technology co-founder Tristan Harris joins to discuss the developments in a major case involving more children harmed by AI chatbots. Learn more about your ad choices. Visit megaphone.fm/adchoices
Have a question you want answered? Submit it here! Discover the hidden costs of our digital age as I sit down with Zach Rausch, the lead researcher behind "The Anxious Generation." Zach opens up about his personal journey with mental health challenges and how it fueled his passion to explore the complex relationship between technology and well-being. This episode peels back the layers on the disturbing rise in loneliness, anxiety, and depression among young people, especially adolescent girls, as they grapple with the very tools meant to connect them. We tackle the sobering reality of international trends affecting mental health and stress the urgency of addressing these issues for the sake of future generations. Zach Rausch is an Associate Research Scientist at the NYU Stern School of Business and lead researcher to social psychologist Jonathan Haidt on the #1 New York Times bestseller The Anxious Generation. Zach previously worked at the Center for Humane Technology and as Communications Manager at Heterodox Academy. He earned a Bachelor of Arts in sociology and religious studies and a Master of Science in psychological science from SUNY New Paltz. Zach previously studied Buddhism in Bodh Gaya, India, worked in wilderness therapy, and was a direct care worker in two psychiatric group homes. Zach's research and writing have been featured internationally in outlets such as The New York Times, The Atlantic, The Boston Globe, The Wall Street Journal, and more. Twitter: https://twitter.com/ZachMRausch Newsletter: After Babel Website: https://zach-rausch.com/ Anxious Generation: https://anxiousgeneration.com Your Host: Kimberly Beam Holmes, Expert in Self-Improvement and Relationships. Kimberly Beam Holmes has applied her master's degree in psychology for over ten years, acting as the CEO of Marriage Helper, CEO and Creator of PIES University, being a wife and mother herself, and researching how attraction affects relationships. Her videos, podcasts, and following reach over 500,000 people a month who are making changes and becoming the best they can be.
Silicon Valley's interest in AI is driven by more than just profit and innovation. There's an unmistakable mystical quality to it as well. In this episode, Daniel and Aza sit down with humanist chaplain Greg Epstein to explore the fascinating parallels between technology and religion. From AI being treated as a godlike force to tech leaders' promises of digital salvation, religious thinking is shaping the future of technology and humanity. Epstein breaks down why he believes technology has become our era's most influential religion and what we can learn from these parallels to better understand where we're heading. Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X. If you like the show and want to support CHT's mission, please consider donating to the organization this giving season: https://www.humanetech.com/donate. Any amount helps support our goal to bring about a more humane future. RECOMMENDED MEDIA “Tech Agnostic” by Greg Epstein Further reading on Avi Schiffmann's “Friend” AI necklace Further reading on Blake Lemoine and LaMDA Blake Lemoine's conversation with Greg at MIT Further reading on the Sewell Setzer case Further reading on Terminal of Truths Further reading on Ray Kurzweil's attempt to create a digital recreation of his dad with AI “The Drama of the Gifted Child” by Alice Miller RECOMMENDED YUA EPISODES ‘A Turning Point in History': Yuval Noah Harari on AI's Cultural Takeover How to Think About AI Consciousness with Anil Seth Can Myth Teach Us Anything About the Race to Build Artificial General Intelligence? With Josh Schrei How To Free Our Minds with Cult Deprogramming Expert Dr. Steven Hassan
CW: This episode features discussion of suicide and sexual abuse. In the last episode, we had the journalist Laurie Segall on to talk about the tragic story of Sewell Setzer, a 14-year-old boy who took his own life after months of abuse and manipulation by an AI companion from the company Character.AI. The question now is: what's next? Sewell's mother, Megan Garcia, has filed a major new lawsuit against Character.AI in Florida, which could force the company–and potentially the entire AI industry–to change its harmful business practices. So today on the show, we have Meetali Jain, director of the Tech Justice Law Project and one of the lead lawyers in Megan's case against Character.AI. Meetali breaks down the details of the case, the complex legal questions under consideration, and how this could be the first step toward systemic change. Also joining is Camille Carlton, CHT's Policy Director. RECOMMENDED MEDIA Further reading on Sewell's story Laurie Segall's interview with Megan Garcia The full complaint filed by Megan against Character.AI Further reading on suicide bots Further reading on Noam Shazeer and Daniel De Freitas' relationship with Google The CHT Framework for Incentivizing Responsible Artificial Intelligence Development and Use Organizations mentioned: The Tech Justice Law Project The Social Media Victims Law Center Mothers Against Media Addiction Parents SOS Parents Together Common Sense Media RECOMMENDED YUA EPISODES When the "Person" Abusing Your Child is a Chatbot: The Tragic Story of Sewell Setzer Jonathan Haidt On How to Solve the Teen Mental Health Crisis AI Is Moving Fast. We Need Laws that Will Too. Corrections: Meetali referred to certain chatbot apps as banning users under 18; however, the settings for the major app stores ban users under 17, not under 18. Meetali referred to Section 230 as providing “full scope immunity” to internet companies; however, Congress has passed subsequent laws that carve out exceptions to that immunity for criminal acts such as sex trafficking and intellectual property theft. Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_
Content Warning: This episode contains references to suicide, self-harm, and sexual abuse. Megan Garcia lost her son Sewell to suicide after he was abused and manipulated by AI chatbots for months. Now, she's suing the company that made those chatbots. On today's episode of Your Undivided Attention, Aza sits down with journalist Laurie Segall, who's been following this case for months. Plus, Laurie's full interview with Megan on her new show, Dear Tomorrow. Aza and Laurie discuss the profound implications of Sewell's story on the rollout of AI. Social media began the race to the bottom of the brain stem and left our society addicted, distracted, and polarized. Generative AI is set to supercharge that race, taking advantage of the human need for intimacy and connection amidst a widespread loneliness epidemic. Unless we put guardrails on this technology now, Sewell's story may be a tragic sign of things to come. But it also presents an opportunity to prevent further harms moving forward. If you or someone you know is struggling with mental health, you can reach out to the 988 Suicide and Crisis Lifeline by calling or texting 988; this connects you to trained crisis counselors 24/7 who can provide support and referrals to further assistance. Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_ RECOMMENDED MEDIA The CHT Framework for Incentivizing Responsible AI Development Further reading on Sewell's case Character.AI's “About Us” page Further reading on the addictive properties of AI RECOMMENDED YUA EPISODES AI Is Moving Fast. We Need Laws that Will Too. This Moment in AI: How We Got Here and Where We're Going Jonathan Haidt On How to Solve the Teen Mental Health Crisis The AI Dilemma
Social media disinformation did enormous damage to our shared idea of reality. Now, the rise of generative AI has unleashed a flood of high-quality synthetic media into the digital ecosystem. As a result, it's more difficult than ever to tell what's real and what's not, a problem with profound implications for the health of our society and democracy. So how do we fix this critical issue? As it turns out, there's a whole ecosystem of folks working to answer that question. One is computer scientist Oren Etzioni, the CEO of TrueMedia.org, a free, non-partisan, non-profit tool that is able to detect AI-generated content with a high degree of accuracy. Oren joins the show this week to talk about the problem of deepfakes and disinformation and what he sees as the best solutions. Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_ RECOMMENDED MEDIA TrueMedia.org Further reading on the deepfaked image of an explosion near the Pentagon Further reading on the deepfaked robocall pretending to be President Biden Further reading on the election deepfake in Slovakia Further reading on the President Obama lip-syncing deepfake from 2017 One of several deepfake quizzes from the New York Times, test yourself! The Partnership on AI C2PA Witness.org Truepic RECOMMENDED YUA EPISODES ‘We Have to Get It Right': Gary Marcus On Untamed AI Taylor Swift is Not Alone: The Deepfake Nightmare Sweeping the Internet Synthetic Humanity: AI & What's At Stake CLARIFICATION: Oren said that the largest social media platforms “don't see a responsibility to let the public know this was manipulated by AI.” Meta has made a public commitment to flagging AI-generated or -manipulated content, whereas other platforms like TikTok and Snapchat rely on users to flag it.
With the election just over a month away, Americans are caught between a flood of political promises and the reality that we live in a time of political dysfunction. Joining us this week to explore the root causes are Ezra Klein, opinion columnist at The New York Times, host of "The Ezra Klein Show" podcast, and author of "Why We're Polarized," and Tristan Harris, co-founder of the Center for Humane Technology and co-host of the "Your Undivided Attention" podcast. We examine how engagement-driven metrics and algorithms shape public discourse, fueling demagoguery and widening the gap between political rhetoric and public needs. Follow The Weekly Show with Jon Stewart on social media for more: > YouTube: https://www.youtube.com/@weeklyshowpodcast > Instagram: https://www.instagram.com/weeklyshowpodcast > TikTok: https://tiktok.com/@weeklyshowpodcast > X: https://x.com/weeklyshowpod Host/Executive Producer – Jon Stewart Executive Producer – James Dixon Executive Producer – Chris McShane Executive Producer – Caity Gray Lead Producer – Lauren Walker Producer – Brittany Mehmedovic Video Editor & Engineer – Rob Vitolo Audio Editor & Engineer – Nicole Boyce Researcher/Associate Producer – Gillian Spear Music by Hansdle Hsu — This podcast is brought to you by: ZipRecruiter Try it for free at this exclusive web address: ziprecruiter.com/ZipWeekly Learn more about your ad choices. Visit megaphone.fm/adchoices
Glenn and Stu discuss Kamala Harris' recent event with Oprah Winfrey, where she promised to fix all the problems she and Biden have caused. Kamala put on an Oscar-worthy performance when she pandered to gun owners. An Alaskan Democratic donor has been charged after allegedly threatening six unnamed Supreme Court justices. Which six justices would a Democrat want to threaten? Trafalgar Group chief pollster Robert Cahaly joins to discuss what a "submerged Republican voter" is and how these voters aren't being represented in polling numbers. Center for Humane Technology co-founder Tristan Harris joins to discuss the promise and peril of AI and how fast it will infiltrate society. Goya Foods President and CEO Robert Unanue joins to discuss how he refused to be canceled by woke culture after coming out in support of Donald Trump. Blaze Media correspondent Steve Baker joins to discuss some shocking revelations discovered regarding Trump's actions on January 6. Learn more about your ad choices. Visit megaphone.fm/adchoices