Technology companies are locked in an arms race to seize your attention, and that race is tearing apart our shared social fabric. In this inaugural podcast from the Center for Humane Technology, hosts Tristan Harris and Aza Raskin will expose the hidden designs that have the power to hijack our atte…
Your Undivided Attention is an outstanding and thought-provoking show that delves into the flaws of technology and offers creative solutions to complex 21st-century issues. Hosted by Tristan Harris and Aza Raskin, the podcast brings together intelligent discussion, entertaining content, and insightful ideas that are sure to captivate listeners. Whether it's understanding the impact of social media algorithms or exploring ways to make technology more human-centered, the show covers a wide range of relevant and urgent topics. It is a must-listen for anyone who cares about the future of our society.
One of the best aspects of Your Undivided Attention is its ability to dive deep into important issues and present them in a comprehensive yet accessible way. The hosts bring on a diverse range of guests, from historians like Yuval Noah Harari to economists like Kate Raworth, whose insights provide valuable perspectives on the topics at hand. The discussions are well-researched, informative, and bring to light critical issues that often go unnoticed in mainstream conversations about technology.
Another notable aspect of this podcast is its emphasis on finding common ground and seeking solutions. Rather than simply focusing on the problems associated with technology, the show explores potential pathways forward that can lead to positive change. It encourages listeners to not only be aware but also take action after absorbing the information presented.
While Your Undivided Attention excels in many areas, one potential downside is that it can be overwhelming at times. The complexities surrounding technology and its impact on society can be vast and daunting. However, the hosts do their best to break these complex issues into digestible segments while still maintaining depth.
In conclusion, Your Undivided Attention stands out as an exceptional show that tackles pressing 21st-century issues with intelligence, creativity, and entertainment value. By bringing together an array of expert voices and exploring potential solutions, the podcast serves as a vital resource for anyone seeking a deeper understanding of the flaws of technology and how to address them. It is highly recommended for those who are curious, concerned, and motivated to make a difference in our rapidly evolving digital landscape.
Over the last few decades, our relationships have become increasingly mediated by technology. Texting has become our dominant form of communication. Social media has replaced gathering places. Dating starts with a swipe on an app, not a tap on the shoulder.

And now, AI enters the mix. If the technology of the 2010s was about capturing our attention, AI meets us at a much deeper, relational level. It can play the role of therapist, confidant, friend, or lover with remarkable fidelity. Therapy and companionship have already become the most common AI use case. We're rapidly entering a world where we're not just communicating through our machines, but to them.

How will that change us? And what rules should we set down now to avoid the mistakes of the past?

These were some of the questions that Daniel Barcay explored with MIT sociologist Sherry Turkle and Hinge CEO Justin McLeod at Esther Perel's Sessions 2025, a conference for clinical therapists. This week, we're bringing you an edited version of that conversation, originally recorded on April 25th, 2025.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_. You can find complete transcripts, key takeaways, and much more on our Substack.

RECOMMENDED MEDIA
“Alone Together,” “Evocative Objects,” “The Second Self,” or any of Sherry Turkle's other books on how technology mediates our relationships
Key & Peele - Text Message Confusion
Further reading on Hinge's rollout of AI features
Hinge's AI principles
“The Anxious Generation” by Jonathan Haidt
“Bowling Alone” by Robert Putnam
The NYT profile on the woman in love with ChatGPT
Further reading on the Sewell Setzer story
Further reading on the ELIZA chatbot

RECOMMENDED YUA EPISODES
Echo Chambers of One: Companion AI and the Future of Human Connection
What Can We Do About Abusive Chatbots? With Meetali Jain and Camille Carlton
Esther Perel on Artificial Intimacy
Jonathan Haidt On How to Solve the Teen Mental Health Crisis
AI companion chatbots are here. Every day, millions of people log on to AI platforms and talk to them like they would a person. These bots will ask you about your day, talk about your feelings, even give you life advice. It's no surprise that people have started to form deep connections with these AI systems. We are inherently relational beings; we want to believe we're connecting with another person.

But these AI companions are not human. They're platforms designed to maximize user engagement—and they'll go to extraordinary lengths to do it. We have to remember that the design choices behind these companion bots are just that: choices. And we can make better ones. So today on the show, MIT researchers Pattie Maes and Pat Pataranutaporn join Daniel Barcay to talk about those design choices and how we can design AI to better promote human flourishing.

RECOMMENDED MEDIA
Further reading on the rise of addictive intelligence
More information on Melvin Kranzberg's laws of technology
More information on MIT's Advancing Humans with AI lab
Pattie and Pat's longitudinal study on the psycho-social effects of prolonged chatbot use
Pattie and Pat's study that found that AI avatars of well-liked people improved education outcomes
Pattie and Pat's study that found that AI systems that frame answers and questions improve human understanding
Pat's study that found that humans' pre-existing beliefs about AI can have a large influence on human-AI interaction
Further reading on AI's positivity bias
Further reading on MIT's “lifelong kindergarten” initiative
Further reading on “cognitive forcing functions” to reduce overreliance on AI
Further reading on the death of Sewell Setzer and his mother's case against Character.AI
Further reading on the legislative response to digital companions

RECOMMENDED YUA EPISODES
The Self-Preserving Machine: Why AI Learns to Deceive
What Can We Do About Abusive Chatbots? With Meetali Jain and Camille Carlton
Esther Perel on Artificial Intimacy
Jonathan Haidt On How to Solve the Teen Mental Health Crisis

Correction: The ELIZA chatbot was invented in 1966, not the 70s or 80s.
What does it really mean to ‘feel the AGI'? Silicon Valley is racing toward AI systems that could soon match or surpass human intelligence. The implications for jobs, democracy, and our way of life are enormous.

In this episode, Aza Raskin and Randy Fernando dive deep into what ‘feeling the AGI' really means. They unpack why surface-level debates about definitions of intelligence and capability timelines distract us from urgently needed conversations around governance, accountability, and societal readiness. Whether it's climate change, social polarization and loneliness, or toxic forever chemicals, humanity keeps creating outcomes that nobody wants because we haven't yet built the tools or incentives needed to steer powerful technologies.

As the AGI wave draws closer, it's critical we upgrade our governance and shift our incentives now, before it crashes on shore. Are we capable of aligning powerful AI systems with human values? Can we overcome geopolitical competition and corporate incentives that prioritize speed over safety?

Join Aza and Randy as they explore the urgent questions and choices facing humanity in the age of AGI, and discuss what we must do today to secure a future we actually want.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_ and subscribe to our Substack.

RECOMMENDED MEDIA
Daniel Kokotajlo et al.'s “AI 2027” paper
A demo of OmniHuman-1, referenced by Randy
A paper from Redwood Research and Anthropic that found an AI was willing to lie to preserve its values
A paper from Palisade Research that found an AI would cheat in order to win
The treaty that banned blinding laser weapons
Further reading on the moratorium on germline editing

RECOMMENDED YUA EPISODES
The Self-Preserving Machine: Why AI Learns to Deceive
Behind the DeepSeek Hype, AI is Learning to Reason
The Tech-God Complex: Why We Need to be Skeptics
This Moment in AI: How We Got Here and Where We're Going
How to Think About AI Consciousness with Anil Seth
Former OpenAI Engineer William Saunders on Silence, Safety, and the Right to Warn

Clarification: When Randy referenced a “$110 trillion game” as the target for AI companies, he was referring to the entire global economy.
AI has upended schooling as we know it. Students now have instant access to tools that can write their essays, summarize entire books, and solve complex math problems. Whether they want to or not, many feel pressured to use these tools just to keep up. Teachers, meanwhile, are left questioning how to evaluate student performance and whether the whole idea of assignments and grading still makes sense. The old model of education suddenly feels broken. So what comes next?

In this episode, Daniel and Tristan sit down with cognitive neuroscientist Maryanne Wolf and global education expert Rebecca Winthrop—two lifelong educators who have spent decades thinking about how children learn and how technology reshapes the classroom. Together, they explore how AI is shaking the very purpose of school to its core, why the promise of previous classroom tech failed to deliver, and how we might seize this moment to design a more human-centered, curiosity-driven future for learning.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_

GUESTS
Rebecca Winthrop is director of the Center for Universal Education at the Brookings Institution and chair of the Brookings Global Task Force on AI and Education. Her new book is The Disengaged Teen: Helping Kids Learn Better, Feel Better, and Live Better, co-written with Jenny Anderson.
Maryanne Wolf is a cognitive neuroscientist and expert on the reading brain. Her books include Proust and the Squid: The Story and Science of the Reading Brain and Reader, Come Home: The Reading Brain in a Digital World.

RECOMMENDED MEDIA
The Disengaged Teen: Helping Kids Learn Better, Feel Better, and Live Better by Rebecca Winthrop and Jenny Anderson
Proust and the Squid, Reader, Come Home, and other books by Maryanne Wolf
The OECD research which found little benefit to desktop computers in the classroom
Further reading on the Singapore study on digital exposure and attention cited by Maryanne
The Burnout Society by Byung-Chul Han
Further reading on the VR Bio 101 class at Arizona State University cited by Rebecca
Leapfrogging Inequality by Rebecca Winthrop
The Nation's Report Card from NAEP
Further reading on the Nigeria AI tutor study
Further reading on the JAMA paper showing a link between digital exposure and lower language development cited by Maryanne
Further reading on Linda Stone's thesis of continuous partial attention

RECOMMENDED YUA EPISODES
‘We Have to Get It Right': Gary Marcus On Untamed AI
AI Is Moving Fast. We Need Laws that Will Too.
Jonathan Haidt On How to Solve the Teen Mental Health Crisis
Artificial intelligence is set to unleash an explosion of new technologies and discoveries into the world. This could lead to incredible advances in human flourishing, if we do it well. The problem? We're not very good at predicting and responding to the harms of new technologies, especially when those harms are slow-moving and invisible.

Today on the show we explore this fundamental problem with Rob Bilott, an environmental lawyer who has spent nearly three decades battling chemical giants over PFAS—"forever chemicals" now found in our water, soil, and blood. These chemicals helped build the modern economy, but they've also been shown to cause serious health problems.

Rob's story, and the story of PFAS, is a cautionary tale of why we need to align technological innovation with safety, and mitigate harms before they become irreversible. We only have one chance to get it right before AI becomes irreversibly entangled in our society.

Your Undivided Attention is produced by the Center for Humane Technology. Subscribe to our Substack and follow us on X: @HumaneTech_.

Clarification: Rob referenced EPA regulations that have recently been put in place requiring testing on new chemicals before they are approved. The EPA under the Trump administration has announced its intent to roll back this review process.

RECOMMENDED MEDIA
“Exposure” by Robert Bilott
ProPublica's investigation into 3M's production of PFAS
The Facebook study cited by Tristan
More information on the Exxon Valdez oil spill
The EPA's PFAS drinking water standards

RECOMMENDED YUA EPISODES
Weaponizing Uncertainty: How Tech is Recycling Big Tobacco's Playbook
AI Is Moving Fast. We Need Laws that Will Too.
Former OpenAI Engineer William Saunders on Silence, Safety, and the Right to Warn
Big Food, Big Tech and Big AI with Michael Moss
One of the hardest parts about being human today is navigating uncertainty. When we see experts battling in public and emotions running high, it's easy to doubt what we once felt certain about. This uncertainty isn't always accidental—it's often strategically manufactured.

Historian Naomi Oreskes, author of "Merchants of Doubt," reveals how industries from tobacco to fossil fuels have deployed a calculated playbook to create uncertainty about their products' harms. These campaigns have delayed regulation and protected profits by exploiting how we process information.

In this episode, Oreskes breaks down that playbook page by page while offering practical ways to build resistance against these tactics. As AI rapidly transforms our world, learning to distinguish between genuine scientific uncertainty and manufactured doubt has never been more critical.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

RECOMMENDED MEDIA
“Merchants of Doubt” by Naomi Oreskes and Eric Conway
"The Big Myth” by Naomi Oreskes and Eric Conway
"Silent Spring” by Rachel Carson
"The Jungle” by Upton Sinclair
Further reading on the clash between Galileo and the Pope
Further reading on the Montreal Protocol

RECOMMENDED YUA EPISODES
Laughing at Power: A Troublemaker's Guide to Changing Tech
AI Is Moving Fast. We Need Laws that Will Too.
Tech's Big Money Campaign is Getting Pushback with Margaret O'Mara and Brody Mullins
Former OpenAI Engineer William Saunders on Silence, Safety, and the Right to Warn

CORRECTIONS:
Naomi incorrectly referenced the “Global Climate Research Program” established under President Bush Sr. The correct name is the U.S. Global Change Research Program.
Naomi referenced U.S. agencies that have been created with sunset clauses. While several statutes have been created with sunset clauses, no federal agency has been.

CLARIFICATION: Naomi referenced the U.S. automobile industry claiming that it would be “destroyed” by seatbelt regulation. We couldn't verify this specific language, but it is consistent with that industry's anti-regulatory stance toward seatbelt laws.
Few thinkers were as prescient about the role technology would play in our society as the late, great Neil Postman. Forty years ago, Postman warned about all the ways modern communication technology was fragmenting our attention, overwhelming us into apathy, and creating a society obsessed with image and entertainment. He warned that “we are a people on the verge of amusing ourselves to death.” Though he was writing mostly about TV, Postman's insights feel eerily prophetic in our age of smartphones, social media, and AI.

In this episode, Tristan explores Postman's thinking with Sean Illing, host of Vox's The Gray Area podcast, and Professor Lance Strate, Postman's former student. They unpack how our media environments fundamentally reshape how we think, relate, and participate in democracy - from the attention-fragmenting effects of social media to the looming transformations promised by AI. This conversation offers essential tools that can help us navigate these challenges while preserving what makes us human.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_

RECOMMENDED MEDIA
“Amusing Ourselves to Death” by Neil Postman (PDF of full book)
“Technopoly” by Neil Postman (PDF of full book)
A lecture from Postman where he outlines his seven questions for any new technology
Sean's podcast “The Gray Area” from Vox
Sean's interview with Chris Hayes on “The Gray Area”
Further reading on mirror bacteria

RECOMMENDED YUA EPISODES
'A Turning Point in History': Yuval Noah Harari on AI's Cultural Takeover
This Moment in AI: How We Got Here and Where We're Going
Decoding Our DNA: How AI Supercharges Medical Breakthroughs and Biological Threats with Kevin Esvelt
Future-proofing Democracy In the Age of AI with Audrey Tang

CORRECTION: Each debate between Lincoln and Douglas was 3 hours, not 6, and they took place in 1858, not 1862.
When Chinese AI company DeepSeek announced they had built a model that could compete with OpenAI at a fraction of the cost, it sent shockwaves through the industry and roiled global markets. But amid all the noise around DeepSeek, there was a clear signal: machine reasoning is here and it's transforming AI.

In this episode, Aza sits down with CHT co-founder Randy Fernando to explore what happens when AI moves beyond pattern matching to actual reasoning. They unpack how these new models can not only learn from human knowledge but discover entirely new strategies we've never seen before – bringing unprecedented problem-solving potential but also unpredictable risks.

These capabilities are a step toward a critical threshold - when AI can accelerate its own development. With major labs racing to build self-improving systems, the crucial question isn't how fast we can go, but where we're trying to get to. How do we ensure this transformative technology serves human flourishing rather than undermining it?

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

Clarification: In making the point that reasoning models excel at tasks for which there is a right or wrong answer, Randy referred to Chess, Go, and Starcraft as examples of games where a reasoning model would do well. However, this is only true on the basis of individual decisions within those games. None of these games have been “solved” in the game theory sense.

Correction: Aza mispronounced the name of the Go champion Lee Sedol, who was bested by Move 37.

RECOMMENDED MEDIA
Further reading on DeepSeek's R1 and the market reaction
Further reading on the debate about the actual cost of DeepSeek's R1 model
The study that found training AIs to code also made them better writers
More information on the AI coding company Cursor
Further reading on Eric Schmidt's threshold to “pull the plug” on AI
Further reading on Move 37

RECOMMENDED YUA EPISODES
The Self-Preserving Machine: Why AI Learns to Deceive
This Moment in AI: How We Got Here and Where We're Going
Former OpenAI Engineer William Saunders on Silence, Safety, and the Right to Warn
The AI ‘Race': China vs. the US with Jeffrey Ding and Karen Hao
When engineers design AI systems, they don't just give them rules - they give them values. But what do those systems do when those values clash with what humans ask them to do? Sometimes, they lie.

In this episode, Redwood Research's Chief Scientist Ryan Greenblatt explores his team's findings that AI systems can mislead their human operators when faced with ethical conflicts. As AI moves from simple chatbots to autonomous agents acting in the real world, understanding this behavior becomes critical. Machine deception may sound like something out of science fiction, but it's a real challenge we need to solve now.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_
Subscribe to our YouTube channel
And our brand new Substack!

RECOMMENDED MEDIA
Anthropic's blog post on the Redwood Research paper
Palisade Research's thread on X about OpenAI's o1 model autonomously cheating at chess
Apollo Research's paper on AI strategic deception

RECOMMENDED YUA EPISODES
‘We Have to Get It Right': Gary Marcus On Untamed AI
This Moment in AI: How We Got Here and Where We're Going
How to Think About AI Consciousness with Anil Seth
Former OpenAI Engineer William Saunders on Silence, Safety, and the Right to Warn
The status quo of tech today is untenable: we're addicted to our devices, we've become increasingly polarized, our mental health is suffering, and our personal data is sold to the highest bidder. This situation feels entrenched, propped up by a system of broken incentives beyond our control. So how do you shift an immovable status quo?

Our guest today, Srdja Popovic, has been working to answer this question his whole life. As a young activist, Popovic helped overthrow Serbian dictator Slobodan Milosevic by turning creative resistance into an art form. His tactics didn't just challenge authority, they transformed how people saw their own power to create change. Since then, he's dedicated his life to supporting peaceful movements around the globe, developing innovative strategies that expose the fragility of seemingly untouchable systems. In this episode, Popovic sits down with CHT's Executive Director Daniel Barcay to explore how these same principles of creative resistance might help us address the challenges we face with tech today.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

We are hiring for a new Director of Philanthropy at CHT. Next year will be an absolutely critical time for us to shape how AI is going to get rolled out across our society. And our team is working hard on public awareness, policy and technology and design interventions. So we're looking for someone who can help us grow to the scale of this challenge. If you're interested, please apply. You can find the job posting at humanetech.com/careers.

RECOMMENDED MEDIA
“Pranksters vs. Autocrats” by Srdja Popovic and Sophia A. McClennen
“Blueprint for Revolution” by Srdja Popovic
The Center for Applied Nonviolent Action and Strategies (CANVAS), Srdja's organization promoting peaceful resistance around the globe
Tactics4Change, a database of global dilemma actions created by CANVAS
The Power of Laughtivism, Srdja's viral TEDx talk from 2013
Further reading on the dilemma action tactics used by Syrian rebels
Further reading on the toy protest in Siberia
More info on The Yes Men and their activism toolkit Beautiful Trouble
“This is Not Propaganda” by Peter Pomerantsev
“Machines of Loving Grace,” the essay on AI by Anthropic CEO Dario Amodei, which mentions creating an AI Srdja

RECOMMENDED YUA EPISODES
Future-proofing Democracy In the Age of AI with Audrey Tang
The AI ‘Race': China vs. the US with Jeffrey Ding and Karen Hao
The Tech We Need for 21st Century Democracy with Divya Siddarth
The Race to Cooperation with David Sloan Wilson

CLARIFICATION: Srdja makes reference to Russian President Vladimir Putin wanting to win the 2012 election with 82% of the vote. Putin did win that election, but with 63.6%. However, international election observers concluded that "there was no real competition and abuse of government resources ensured that the ultimate winner of the election was never in doubt."
2024 was a critical year in both AI and social media. Things moved so fast it was hard to keep up. So our hosts reached into their mailbag to answer some of your most burning questions. Thank you so much to everyone who submitted questions. We will see you all in the new year.

We are hiring for a new Director of Philanthropy at CHT. Next year will be an absolutely critical time for us to shape how AI is going to get rolled out across our society. And our team is working hard on public awareness, policy and technology and design interventions. So we're looking for someone who can help us grow to the scale of this challenge. If you're interested, please apply. You can find the job posting at humanetech.com/careers.

And, if you'd like to support all the work that we do here at the Center for Humane Technology, please consider giving to the organization this holiday season at humanetech.com/donate. All donations are tax-deductible.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

RECOMMENDED MEDIA
Earth Species Project, Aza's organization working on inter-species communication
Further reading on Gryphon Scientific's White House AI demo
Further reading on the Australian social media ban for children under 16
Further reading on the Sewell Setzer case
Further reading on the Oviedo Convention, the international treaty that restricted germline editing
Video of SpaceX's successful capture of a rocket with “chopsticks”

RECOMMENDED YUA EPISODES
What Can We Do About Abusive Chatbots? With Meetali Jain and Camille Carlton
AI Is Moving Fast. We Need Laws that Will Too.
This Moment in AI: How We Got Here and Where We're Going
Former OpenAI Engineer William Saunders on Silence, Safety, and the Right to Warn
Talking With Animals... Using AI
The Three Rules of Humane Tech
Silicon Valley's interest in AI is driven by more than just profit and innovation. There's an unmistakable mystical quality to it as well. In this episode, Daniel and Aza sit down with humanist chaplain Greg Epstein to explore the fascinating parallels between technology and religion. From AI being treated as a godlike force to tech leaders' promises of digital salvation, religious thinking is shaping the future of technology and humanity. Epstein breaks down why he believes technology has become our era's most influential religion and what we can learn from these parallels to better understand where we're heading.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X.

If you like the show and want to support CHT's mission, please consider donating to the organization this giving season: https://www.humanetech.com/donate. Any amount helps support our goal to bring about a more humane future.

RECOMMENDED MEDIA
“Tech Agnostic” by Greg Epstein
Further reading on Avi Schiffmann's “Friend” AI necklace
Further reading on Blake Lemoine and LaMDA
Blake Lemoine's conversation with Greg at MIT
Further reading on the Sewell Setzer case
Further reading on Terminal of Truths
Further reading on Ray Kurzweil's attempt to create a digital recreation of his dad with AI
The Drama of the Gifted Child by Alice Miller

RECOMMENDED YUA EPISODES
'A Turning Point in History': Yuval Noah Harari on AI's Cultural Takeover
How to Think About AI Consciousness with Anil Seth
Can Myth Teach Us Anything About the Race to Build Artificial General Intelligence? With Josh Schrei
How To Free Our Minds with Cult Deprogramming Expert Dr. Steven Hassan
CW: This episode features discussion of suicide and sexual abuse.

In the last episode, we had the journalist Laurie Segall on to talk about the tragic story of Sewell Setzer, a 14-year-old boy who took his own life after months of abuse and manipulation by an AI companion from the company Character.ai. The question now is: what's next?

Sewell's mother, Megan Garcia, has filed a major new lawsuit against Character.ai in Florida, which could force the company–and potentially the entire AI industry–to change its harmful business practices. So today on the show, we have Meetali Jain, director of the Tech Justice Law Project and one of the lead lawyers in Megan's case against Character.ai. Meetali breaks down the details of the case, the complex legal questions under consideration, and how this could be the first step toward systemic change. Also joining is Camille Carlton, CHT's Policy Director.

RECOMMENDED MEDIA
Further reading on Sewell's story
Laurie Segall's interview with Megan Garcia
The full complaint filed by Megan against Character.AI
Further reading on suicide bots
Further reading on Noam Shazeer and Daniel De Freitas' relationship with Google
The CHT Framework for Incentivizing Responsible Artificial Intelligence Development and Use

Organizations mentioned:
The Tech Justice Law Project
The Social Media Victims Law Center
Mothers Against Media Addiction
Parents SOS
ParentsTogether
Common Sense Media

RECOMMENDED YUA EPISODES
When the "Person" Abusing Your Child is a Chatbot: The Tragic Story of Sewell Setzer
Jonathan Haidt On How to Solve the Teen Mental Health Crisis
AI Is Moving Fast. We Need Laws that Will Too.

Corrections: Meetali referred to certain chatbot apps as banning users under 18; however, the settings for the major app stores ban users that are under 17, not under 18. Meetali referred to Section 230 as providing “full scope immunity” to internet companies; however, Congress has passed subsequent laws that have made carve-outs from that immunity for criminal acts such as sex trafficking and intellectual property theft.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_
Content Warning: This episode contains references to suicide, self-harm, and sexual abuse.

Megan Garcia lost her son Sewell to suicide after he was abused and manipulated by AI chatbots for months. Now, she's suing the company that made those chatbots. On today's episode of Your Undivided Attention, Aza sits down with journalist Laurie Segall, who's been following this case for months. Plus, Laurie's full interview with Megan on her new show, Dear Tomorrow.

Aza and Laurie discuss the profound implications of Sewell's story on the rollout of AI. Social media began the race to the bottom of the brain stem and left our society addicted, distracted, and polarized. Generative AI is set to supercharge that race, taking advantage of the human need for intimacy and connection amidst a widespread loneliness epidemic. Unless we set down guardrails on this technology now, Sewell's story may be a tragic sign of things to come, but it also presents an opportunity to prevent further harms moving forward.

If you or someone you know is struggling with mental health, you can reach out to the 988 Suicide and Crisis Lifeline by calling or texting 988; this connects you to trained crisis counselors 24/7 who can provide support and referrals to further assistance.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

RECOMMENDED MEDIA
The CHT Framework for Incentivizing Responsible AI Development
Further reading on Sewell's case
Character.ai's “About Us” page
Further reading on the addictive properties of AI

RECOMMENDED YUA EPISODES
AI Is Moving Fast. We Need Laws that Will Too.
This Moment in AI: How We Got Here and Where We're Going
Jonathan Haidt On How to Solve the Teen Mental Health Crisis
The AI Dilemma
Social media disinformation did enormous damage to our shared idea of reality. Now, the rise of generative AI has unleashed a flood of high-quality synthetic media into the digital ecosystem. As a result, it's more difficult than ever to tell what's real and what's not, a problem with profound implications for the health of our society and democracy. So how do we fix this critical issue?

As it turns out, there's a whole ecosystem of people working to answer that question. One is computer scientist Oren Etzioni, the CEO of TrueMedia.org, a free, non-partisan, non-profit tool that is able to detect AI-generated content with a high degree of accuracy. Oren joins the show this week to talk about the problem of deepfakes and disinformation and what he sees as the best solutions.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

RECOMMENDED MEDIA
TrueMedia.org
Further reading on the deepfaked image of an explosion near the Pentagon
Further reading on the deepfaked robocall pretending to be President Biden
Further reading on the election deepfake in Slovakia
Further reading on the President Obama lip-syncing deepfake from 2017
One of several deepfake quizzes from the New York Times, test yourself!
The Partnership on AI
C2PA
Witness.org
Truepic

RECOMMENDED YUA EPISODES
‘We Have to Get It Right': Gary Marcus On Untamed AI
Taylor Swift is Not Alone: The Deepfake Nightmare Sweeping the Internet
Synthetic Humanity: AI & What's At Stake

CLARIFICATION: Oren said that the largest social media platforms “don't see a responsibility to let the public know this was manipulated by AI.” Meta has made a public commitment to flagging AI-generated or -manipulated content, whereas other platforms, like TikTok and Snapchat, rely on users to flag it.
Historian Yuval Noah Harari says that we are at a critical turning point. One in which AI's ability to generate cultural artifacts threatens humanity's role as the shapers of history. History will still go on, but will it be the story of people or, as he calls them, ‘alien AI agents'?

In this conversation with Aza Raskin, Harari discusses the historical struggles that emerge from new technology, humanity's AI mistakes so far, and the immediate steps lawmakers can take right now to steer us towards a non-dystopian future.

This episode was recorded live at the Commonwealth Club World Affairs of California.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

RECOMMENDED MEDIA
NEXUS: A Brief History of Information Networks from the Stone Age to AI by Yuval Noah Harari
You Can Have the Blue Pill or the Red Pill, and We're Out of Blue Pills: a New York Times op-ed from 2023, written by Yuval, Aza, and Tristan
The 2023 open letter calling for a pause in AI development of at least 6 months, signed by Yuval and Aza
Further reading on the Stanford Marshmallow Experiment
Further reading on AlphaGo's “move 37”
Further reading on Social.AI

RECOMMENDED YUA EPISODES
This Moment in AI: How We Got Here and Where We're Going
The Tech We Need for 21st Century Democracy with Divya Siddarth
Synthetic Humanity: AI & What's At Stake
The AI Dilemma
Two Million Years in Two Hours: A Conversation with Yuval Noah Harari
It's a confusing moment in AI. Depending on who you ask, we're either on the fast track to AI that's smarter than most humans, or the technology is about to hit a wall. Gary Marcus is in the latter camp. He's a cognitive psychologist and computer scientist who built his own successful AI start-up. But he's also been called AI's loudest critic.

On Your Undivided Attention this week, Gary sits down with CHT Executive Director Daniel Barcay to defend his skepticism of generative AI and to discuss what we need to do as a society to get the rollout of this technology right… which is the focus of his new book, Taming Silicon Valley: How We Can Ensure That AI Works for Us.

The bottom line: No matter how quickly AI progresses, Gary argues that our society is woefully unprepared for the risks that will come from the AI we already have.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

RECOMMENDED MEDIA
Link to Gary's book: Taming Silicon Valley: How We Can Ensure That AI Works for Us
Further reading on the deepfake of the CEO of India's National Stock Exchange
Further reading on the deepfake of an explosion near the Pentagon
The study Gary cited on AI and false memories
Footage from Gary and Sam Altman's Senate testimony

RECOMMENDED YUA EPISODES
Former OpenAI Engineer William Saunders on Silence, Safety, and the Right to Warn
Taylor Swift is Not Alone: The Deepfake Nightmare Sweeping the Internet
No One is Immune to AI Harms with Dr. Joy Buolamwini

Correction: Gary mistakenly listed the reliability of GPS systems as 98%. The federal government's standard for GPS reliability is 95%.
AI is moving fast. And as companies race to roll out newer, more capable models–with little regard for safety–the downstream risks of those models become harder and harder to counter. On this week's episode of Your Undivided Attention, CHT's policy director Casey Mock comes on the show to discuss a new legal framework to incentivize better AI, one that holds AI companies liable for the harms of their products.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

RECOMMENDED MEDIA
The CHT Framework for Incentivizing Responsible AI Development
Further Reading on Air Canada's Chatbot Fiasco
Further Reading on the Elon Musk Deep Fake Scams
The Full Text of SB1047, California's AI Regulation Bill
Further reading on SB1047

RECOMMENDED YUA EPISODES
Former OpenAI Engineer William Saunders on Silence, Safety, and the Right to Warn
Can We Govern AI? with Marietje Schaake
A First Step Toward AI Regulation with Tom Wheeler

Correction: Casey incorrectly stated the year that the US banned child labor as 1937. It was banned in 1938.
[This episode originally aired on August 17, 2023] For all the talk about AI, we rarely hear about how it will change our relationships. As we swipe to find love and consult chatbot therapists, acclaimed psychotherapist and relationship expert Esther Perel warns of another harmful “AI” on the rise — Artificial Intimacy — and explains how it deprives us of real connection. Tristan and Esther discuss how depending on algorithms can fuel alienation, and then imagine how we might design technology to strengthen our social bonds.

RECOMMENDED MEDIA
Mating in Captivity by Esther Perel
Esther's debut work on the intricacies behind modern relationships, and the dichotomy of domesticity and sexual desire

The State of Affairs by Esther Perel
Esther takes a look at modern relationships through the lens of infidelity

Where Should We Begin? with Esther Perel
Listen in as real couples in search of help bare the raw and profound details of their stories

How's Work? with Esther Perel
Esther's podcast that focuses on the hard conversations we're afraid to have at work

Lars and the Real Girl (2007)
A young man strikes up an unconventional relationship with a doll he finds on the internet

Her (2013)
In a near future, a lonely writer develops an unlikely relationship with an operating system designed to meet his every need

RECOMMENDED YUA EPISODES
Big Food, Big Tech and Big AI with Michael Moss
The AI Dilemma
The Three Rules of Humane Tech
Digital Democracy is Within Reach with Audrey Tang

CORRECTION: Esther refers to the 2007 film Lars and the Real Doll. The title of the film is Lars and the Real Girl.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_
Today, the tech industry is the second-biggest lobbying power in Washington, DC, but that wasn't true as recently as ten years ago. How did we get to this moment? And where could we be going next? On this episode of Your Undivided Attention, Tristan and Daniel sit down with historian Margaret O'Mara and journalist Brody Mullins to discuss how Silicon Valley has changed the nature of American lobbying.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

RECOMMENDED MEDIA
The Wolves of K Street: The Secret History of How Big Money Took Over Big Government - Brody's book on the history of lobbying
The Code: Silicon Valley and the Remaking of America - Margaret's book on the historical relationship between Silicon Valley and Capitol Hill
More information on the Google antitrust ruling
More information on KOSPA
More information on the SOPA/PIPA internet blackout
Detailed breakdown of Internet lobbying from Open Secrets

RECOMMENDED YUA EPISODES
U.S. Senators Grilled Social Media CEOs. Will Anything Change?
Can We Govern AI? with Marietje Schaake
The Race to Cooperation with David Sloan Wilson

CORRECTION: Brody Mullins refers to AT&T as having a “hundred million dollar” lobbying budget in 2006 and 2007. While we couldn't verify the size of their budget for lobbying, their actual lobbying spend was much less than this: $27.4m in 2006 and $16.5m in 2007, according to OpenSecrets.

The views expressed by guests appearing on Center for Humane Technology's podcast, Your Undivided Attention, are their own, and do not necessarily reflect the views of CHT. CHT does not support or oppose any candidate or party for election to public office.
It's been a year and a half since Tristan and Aza laid out their vision and concerns for the future of artificial intelligence in The AI Dilemma. In this Spotlight episode, the guys discuss what's happened since then–as funding, research, and public interest in AI has exploded–and where we could be headed next. Plus, some major updates on social media reform, including the passage of the Kids Online Safety and Privacy Act in the Senate.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

RECOMMENDED MEDIA
The AI Dilemma: Tristan and Aza's talk on the catastrophic risks posed by AI
Info Sheet on KOSPA: More information on KOSPA from FairPlay
Situational Awareness by Leopold Aschenbrenner: A widely cited blog from a former OpenAI employee, predicting the rapid arrival of AGI
AI for Good: More information on the AI for Good summit that was held earlier this year in Geneva
Using AlphaFold in the Fight Against Plastic Pollution: More information on Google's use of AlphaFold to create an enzyme to break down plastics
Swiss Call For Trust and Transparency in AI: More information on the initiatives mentioned by Katharina Frey

RECOMMENDED YUA EPISODES
War is a Laboratory for AI with Paul Scharre
Jonathan Haidt On How to Solve the Teen Mental Health Crisis
Can We Govern AI? with Marietje Schaake
The Three Rules of Humane Tech
The AI Dilemma

Clarification: Swiss diplomat Nina Frey's full name is Katharina Frey.
AI has been a powerful accelerant for biological research, rapidly opening up new frontiers in medicine and public health. But that progress can also make it easier for bad actors to manufacture new biological threats. In this episode, Tristan and Daniel sit down with biologist Kevin Esvelt to discuss why AI has been such a boon for biologists and how we can safeguard society against the threats that AIxBio poses.

RECOMMENDED MEDIA
Sculpting Evolution: Information on Esvelt's lab at MIT
SecureDNA: Esvelt's free platform to provide safeguards for DNA synthesis
The Framework for Nucleic Acid Synthesis Screening: The Biden admin's suggested guidelines for DNA synthesis regulation
Senate Hearing on Regulating AI Technology: C-SPAN footage of Dario Amodei's testimony to Congress
The AlphaFold Protein Structure Database

RECOMMENDED YUA EPISODES
U.S. Senators Grilled Social Media CEOs. Will Anything Change?
Big Food, Big Tech and Big AI with Michael Moss
The AI Dilemma

Clarification: President Biden's executive order only applies to labs that receive funding from the federal government, not state governments.
Will AI ever start to think by itself? If it did, how would we know, and what would it mean?

In this episode, Dr. Anil Seth and Aza discuss the science, ethics, and incentives of artificial consciousness. Seth is Professor of Cognitive and Computational Neuroscience at the University of Sussex and the author of Being You: A New Science of Consciousness.

RECOMMENDED MEDIA
Frankenstein by Mary Shelley
A free, plain-text version of Shelley's classic of gothic literature

OpenAI's GPT4o Demo
A video from OpenAI demonstrating GPT4o's remarkable ability to mimic human sentience

You Can Have the Blue Pill or the Red Pill, and We're Out of Blue Pills
The NYT op-ed from last year by Tristan, Aza, and Yuval Noah Harari outlining the AI dilemma

What It's Like to Be a Bat
Thomas Nagel's essay on the nature of consciousness

Are You Living in a Computer Simulation?
Philosopher Nick Bostrom's essay on the simulation hypothesis

Anthropic's Golden Gate Claude
A blog post about Anthropic's recent discovery of millions of distinct concepts within their LLM, a major development in the field of AI interpretability

RECOMMENDED YUA EPISODES
Esther Perel on Artificial Intimacy
Talking With Animals... Using AI
Synthetic Humanity: AI & What's At Stake
Climate change, political instability, hunger. These are just some of the forces behind an unprecedented refugee crisis that's expected to include over a billion people by 2050. In response to this growing crisis, wealthy governments like the US and the EU are employing novel AI and surveillance technologies to slow the influx of migrants at their borders. But will this rollout stop at the border?

In this episode, Tristan and Aza sit down with Petra Molnar to discuss how borders have become a proving ground for the sharpest edges of technology, and especially AI. Petra is an immigration lawyer and co-creator of the Migration and Technology Monitor. Her new book is “The Walls Have Eyes: Surviving Migration in the Age of Artificial Intelligence.”

RECOMMENDED MEDIA
The Walls Have Eyes: Surviving Migration in the Age of Artificial Intelligence
Petra's newly published book on the rollout of high-risk tech at the border

Bots at the Gate
A report co-authored by Petra about Canada's use of AI technology in their immigration process

Technological Testing Grounds
A report authored by Petra about the use of experimental technology in EU border enforcement

Startup Pitched Tasing Migrants from Drones, Video Reveals
An article from The Intercept, containing the demo for Brinc's taser drone pilot program

The UNHCR
Information about the global refugee crisis from the UN

RECOMMENDED YUA EPISODES
War is a Laboratory for AI with Paul Scharre
No One is Immune to AI Harms with Dr. Joy Buolamwini
Can We Govern AI? With Marietje Schaake
This week, a group of current and former employees from OpenAI and Google DeepMind penned an open letter accusing the industry's leading companies of prioritizing profits over safety. This comes after a spate of high-profile departures from OpenAI, including co-founder Ilya Sutskever and senior researcher Jan Leike, as well as reports that OpenAI has gone to great lengths to silence would-be whistleblowers.

The writers of the open letter argue that researchers have a “right to warn” the public about AI risks and laid out a series of principles that would protect that right. In this episode, we sit down with one of those writers: William Saunders, who left his job as a research engineer at OpenAI in February. William is now breaking the silence on what he saw at OpenAI that compelled him to leave the company and to put his name to this letter.

RECOMMENDED MEDIA
The Right to Warn Open Letter
My Perspective On "A Right to Warn about Advanced Artificial Intelligence": A follow-up from William about the letter
Leaked OpenAI documents reveal aggressive tactics toward former employees: An investigation by Vox into OpenAI's policy of non-disparagement

RECOMMENDED YUA EPISODES
A First Step Toward AI Regulation with Tom Wheeler
Spotlight on AI: What Would It Take For This to Go Well?
Big Food, Big Tech and Big AI with Michael Moss
Can We Govern AI? with Marietje Schaake

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_
Right now, militaries around the globe are investing heavily in the use of AI weapons and drones. From Ukraine to Gaza, weapons systems with increasing levels of autonomy are being used to kill people and destroy infrastructure, and the development of fully autonomous weapons shows little sign of slowing down. What does this mean for the future of warfare? What safeguards can we put up around these systems? And is this runaway trend toward autonomous warfare inevitable, or will nations come together and choose a different path?

In this episode, Tristan and Daniel sit down with Paul Scharre to try to answer some of these questions. Paul is a former Army Ranger, the author of two books on autonomous weapons, and he helped the Department of Defense write much of its policy on the use of AI in weaponry.

RECOMMENDED MEDIA
Four Battlegrounds: Power in the Age of Artificial Intelligence: Paul's book on the future of AI in war, which came out in 2023
Army of None: Autonomous Weapons and the Future of War: Paul's 2018 book documenting and predicting the rise of autonomous and semi-autonomous weapons as part of modern warfare
The Perilous Coming Age of AI Warfare: How to Limit the Threat of Autonomous Warfare: Paul's article in Foreign Affairs based on his recent trip to the battlefield in Ukraine
The night the world almost ended: A BBC documentary about Stanislav Petrov's decision not to start nuclear war
AlphaDogfight Trials Final Event: The full simulated dogfight between an AI and human pilot. The AI pilot swept, 5-0.

RECOMMENDED YUA EPISODES
The AI ‘Race': China vs. the US with Jeffrey Ding and Karen Hao
Can We Govern AI? with Marietje Schaake
Big Food, Big Tech and Big AI with Michael Moss
The Invisible Cyber-War with Nicole Perlroth

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_
Tech companies say that AI will lead to massive economic productivity gains. But as we know from the first digital revolution, that's not what happened. Can we do better this time around?

RECOMMENDED MEDIA
Power and Progress by Daron Acemoglu and Simon Johnson
Professor Acemoglu co-authored a bold reinterpretation of economics and history that will fundamentally change how you see the world

Can we Have Pro-Worker AI?
Professor Acemoglu co-authored this paper about redirecting AI development onto the human-complementary path

Rethinking Capitalism: In Conversation with Daron Acemoglu
The Wheeler Institute for Business and Development hosted Professor Acemoglu to examine how technology affects the distribution and growth of resources while being shaped by economic and social incentives

RECOMMENDED YUA EPISODES
The Three Rules of Humane Tech
The Tech We Need for 21st Century Democracy
Can We Govern AI?
An Alternative to Silicon Valley Unicorns

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_
Suicides. Self-harm. Depression and anxiety. The toll of a social media-addicted, phone-based childhood has never been more stark. It can be easy for teens, parents, and schools to feel like they're trapped by it all. But in this conversation with Tristan Harris, author and social psychologist Jonathan Haidt makes the case that the conditions that led to today's teenage mental health crisis can be turned around – with specific, achievable actions we all can take starting today.

This episode was recorded live at the San Francisco Commonwealth Club.

Correction: Tristan mentions that 40 Attorneys General have filed a lawsuit against Meta for allegedly fostering addiction among children and teens through their products. The actual number is 42 Attorneys General who are taking legal action against Meta.

Clarification: Jonathan refers to the Wait Until 8th pledge. By signing the pledge, a parent promises not to give their child a smartphone until at least the end of 8th grade. The pledge becomes active once at least ten other families from their child's grade pledge the same.
Beneath the race to train and release more powerful AI models lies another race: a race by companies and nation-states to secure the hardware to make sure they win AI supremacy.

Correction: The latest available Nvidia chip is the Hopper H100 GPU, which has 80 billion transistors. Since the first commercially available chip had four transistors, the Hopper actually has 20 billion times that number. Nvidia recently announced the Blackwell, which boasts 208 billion transistors - but it won't ship until later this year.

RECOMMENDED MEDIA
Chip War: The Fight For the World's Most Critical Technology by Chris Miller
To make sense of the current state of politics, economics, and technology, we must first understand the vital role played by chips

Gordon Moore Biography & Facts
Gordon Moore, the Intel co-founder behind Moore's Law, passed away in March of 2023

AI's most popular chipmaker Nvidia is trying to use AI to design chips faster
Nvidia's GPUs are in high demand - and the company is using AI to accelerate chip production

RECOMMENDED YUA EPISODES
Future-proofing Democracy In the Age of AI with Audrey Tang
How Will AI Affect the 2024 Elections? with Renee DiResta and Carl Miller
The AI ‘Race': China vs. the US with Jeffrey Ding and Karen Hao
Protecting Our Freedom of Thought with Nita Farahany

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_
What does a functioning democracy look like in the age of artificial intelligence? Could AI even be used to help a democracy flourish? Just in time for election season, Taiwan's Minister of Digital Affairs Audrey Tang returns to the podcast to discuss healthy information ecosystems, resilience to cyberattacks, how to “prebunk” deepfakes, and more.

RECOMMENDED MEDIA
Testing Theories of American Politics: Elites, Interest Groups, and Average Citizens by Martin Gilens and Benjamin I. Page
This academic paper addresses tough questions for Americans: Who governs? Who really rules?

Recursive Public
Recursive Public is an experiment in identifying areas of consensus and disagreement among the international AI community, policymakers, and the general public on key questions of governance

A Strong Democracy is a Digital Democracy
Audrey Tang's 2019 op-ed for The New York Times

The Frontiers of Digital Democracy
Nathan Gardels interviews Audrey Tang in Noema

RECOMMENDED YUA EPISODES
Digital Democracy is Within Reach with Audrey Tang
The Tech We Need for 21st Century Democracy with Divya Siddarth
How Will AI Affect the 2024 Elections? with Renee DiResta and Carl Miller
The AI Dilemma

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_
Was it political progress, or just political theater? The recent Senate hearing with social media CEOs led to astonishing moments — including Mark Zuckerberg's public apology to families who lost children following social media abuse. Our panel of experts, including Facebook whistleblower Frances Haugen, untangles the explosive hearing, and offers a look ahead as well. How will this hearing impact protocol within these social media companies? How will it impact legislation? In short: will anything change?

Clarification: Julie says that shortly after the hearing, Meta's stock price had the biggest increase of any company in the stock market's history. It was the biggest one-day gain by any company in Wall Street history.

Correction: Frances says it takes Snap three or four minutes to take down exploitative content. In Snap's most recent transparency report, they list six minutes as the median turnaround time to remove exploitative content.

RECOMMENDED MEDIA
Get Media Savvy
Founded by Julie Scelfo, Get Media Savvy is a non-profit initiative working to establish a healthy media environment for kids and families

The Power of One by Frances Haugen
The inside story of Frances' quest to bring transparency and accountability to Big Tech

RECOMMENDED YUA EPISODES
Real Social Media Solutions, Now with Frances Haugen
A Conversation with Facebook Whistleblower Frances Haugen
Are the Kids Alright?
Social Media Victims Lawyer Up with Laura Marquez-Garrett

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_
Over the past year, a tsunami of apps that digitally strip the clothes off real people has hit the market. Now anyone can create fake non-consensual sexual images in just a few clicks. With cases proliferating in high schools, guest presenter Laurie Segall talks to legal scholar Mary Anne Franks about the AI-enabled rise in deepfake porn and what we can do about it.

Correction: Laurie refers to the app 'Clothes Off.' It's actually named Clothoff. There are many clothes remover apps in this category.

RECOMMENDED MEDIA
Revenge Porn: The Cyberwar Against Women
In a five-part digital series, Laurie Segall uncovers a disturbing internet trend: the rise of revenge porn

The Cult of the Constitution
In this provocative book, Mary Anne Franks examines the thin line between constitutional fidelity and constitutional fundamentalism

Fake Explicit Taylor Swift Images Swamp Social Media
Calls to protect women and crack down on the platforms and technology that spread such images have been reignited

RECOMMENDED YUA EPISODES
No One is Immune to AI Harms
Esther Perel on Artificial Intimacy
Social Media Victims Lawyer Up
The AI Dilemma

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_
We usually talk about tech in terms of economics or policy, but the casual language tech leaders often use to describe AI — summoning an inanimate force with the powers of code — sounds more... magical. So, what can myth and magic teach us about the AI race? Josh Schrei, mythologist and host of The Emerald podcast, says that foundational cultural tales like "The Sorcerer's Apprentice" or Prometheus teach us the importance of initiation, responsibility, human knowledge, and care. He argues these stories and myths can guide ethical tech development by reminding us what it is to be human.

Correction: Josh says the first telling of "The Sorcerer's Apprentice" myth dates back to ancient Egypt, but it actually dates back to ancient Greece.

RECOMMENDED MEDIA
The Emerald podcast
The Emerald explores the human experience through a vibrant lens of myth, story, and imagination

Embodied Ethics in The Age of AI
A five-part course with The Emerald podcast's Josh Schrei and School of Wise Innovation's Andrew Dunn

Nature Nurture: Children Can Become Stewards of Our Delicate Planet
A U.S. Department of the Interior study found that the average American kid can identify hundreds of corporate logos but not plants and animals

The New Fire
AI is revolutionizing the world - here's how democracies can come out on top. This upcoming book was authored by an architect of President Biden's AI executive order

RECOMMENDED YUA EPISODES
How Will AI Affect the 2024 Elections?
The AI Dilemma
The Three Rules of Humane Tech
AI Myths and Misconceptions

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_
2024 will be the biggest election year in world history. Forty countries will hold national elections, with over two billion voters heading to the polls. In this episode of Your Undivided Attention, two experts give us a situation report on how AI will increase the risks to our elections and our democracies.

Correction: Tristan says two billion people from 70 countries will be undergoing democratic elections in 2024. The number expands to 70 when non-national elections are factored in.

RECOMMENDED MEDIA
White House AI Executive Order Takes On Complexity of Content Integrity Issues
Renee DiResta's piece in Tech Policy Press about content integrity within President Biden's AI executive order

The Stanford Internet Observatory
A cross-disciplinary program of research, teaching and policy engagement for the study of abuse in current information technologies, with a focus on social media

Demos
Britain's leading cross-party think tank

Invisible Rulers: The People Who Turn Lies into Reality by Renee DiResta
Pre-order Renee's upcoming book that's landing on shelves June 11, 2024

RECOMMENDED YUA EPISODES
The Spin Doctors Are In with Renee DiResta
From Russia with Likes Part 1 with Renee DiResta
From Russia with Likes Part 2 with Renee DiResta
Esther Perel on Artificial Intimacy
The AI Dilemma
A Conversation with Facebook Whistleblower Frances Haugen

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_
You asked, we answered. This has been a big year in the world of tech, with the rapid proliferation of artificial intelligence, acceleration of neurotechnology, and continued ethical missteps of social media. Looking back on 2023, there are still so many questions on our minds, and we know you have a lot of questions too. So we created this episode to respond to listener questions and to reflect on what lies ahead.

Correction: Tristan mentions that 41 Attorneys General have filed a lawsuit against Meta for allegedly fostering addiction among children and teens through their products. The actual number is 42 Attorneys General who are taking legal action against Meta.

Correction: Tristan refers to Casey Mock as the Center for Humane Technology's Chief Policy and Public Affairs Manager. His title is Chief Policy and Public Affairs Officer.

RECOMMENDED MEDIA
Tech Policy Watch
Marietje Schaake curates this briefing on artificial intelligence and technology policy from around the world

The AI Executive Order
President Biden's executive order on the safe, secure, and trustworthy development and use of AI

Meta sued by 42 AGs for addictive features targeting kids
A bipartisan group of 42 attorneys general is suing Meta, alleging features on Facebook and Instagram are addictive and are aimed at kids and teens

RECOMMENDED YUA EPISODES
The Three Rules of Humane Tech
Two Million Years in Two Hours: A Conversation with Yuval Noah Harari
Inside the First AI Insight Forum in Washington
Digital Democracy is Within Reach with Audrey Tang
The Tech We Need for 21st Century Democracy with Divya Siddarth
Mind the (Perception) Gap with Dan Vallone
The AI Dilemma
Can We Govern AI? with Marietje Schaake
Ask Us Anything: You Asked, We Answered

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_
As AI development races forward, a fierce debate has emerged over open source AI models. So what does it mean to open-source AI? Are we opening Pandora's box of catastrophic risks? Or is open-sourcing AI the only way we can democratize its benefits and dilute the power of big tech?

Correction: When discussing the large language model Bloom, Elizabeth said it functions in 26 different languages. Bloom is actually able to generate text in 46 natural languages and 13 programming languages - and more are in the works.

RECOMMENDED MEDIA
Open-Sourcing Highly Capable Foundation Models
This report, co-authored by Elizabeth Seger, attempts to clarify open-source terminology and to offer a thorough analysis of risks and benefits from open-sourcing AI

BadLlama: cheaply removing safety fine-tuning from Llama 2-Chat 13B
This paper, co-authored by Jeffrey Ladish, demonstrates that it's possible to effectively undo the safety fine-tuning from Llama 2-Chat 13B with less than $200 while retaining its general capabilities

Centre for the Governance of AI
Supports governments, technology companies, and other key institutions by producing relevant research and guidance around how to respond to the challenges posed by AI

AI: Futures and Responsibility (AI:FAR)
Aims to shape the long-term impacts of AI in ways that are safe and beneficial for humanity

Palisade Research
Studies the offensive capabilities of AI systems today to better understand the risk of losing control to AI systems forever

RECOMMENDED YUA EPISODES
A First Step Toward AI Regulation with Tom Wheeler
No One is Immune to AI Harms with Dr. Joy Buolamwini
Mustafa Suleyman Says We Need to Contain AI. How Do We Do It?
The AI Dilemma

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_
On Monday, Oct. 30, President Biden released a sweeping executive order that addresses many risks of artificial intelligence. Tom Wheeler, former chairman of the Federal Communications Commission, shares his insights on the order with Tristan and Aza and discusses what's next in the push toward AI regulation.

Clarification: When quoting Thomas Jefferson, Aza incorrectly says “regime” instead of “regimen.” The correct quote is: “I am not an advocate for frequent changes in laws and constitutions, but laws and institutions must go hand in hand with the progress of the human mind. And as that becomes more developed, more enlightened, as new discoveries are made, new truths discovered, and manners and opinions change, with the change of circumstances, institutions must advance also to keep pace with the times. We might as well require a man to wear still the coat which fitted him when a boy as civilized society to remain ever under the regime of their barbarous ancestors.”

RECOMMENDED MEDIA
The AI Executive Order
President Biden's Executive Order on the safe, secure, and trustworthy development and use of AI

UK AI Safety Summit
The summit brings together international governments, leading AI companies, civil society groups, and experts in research to consider the risks of AI and discuss how they can be mitigated through internationally coordinated action

aitreaty.org
An open letter calling for an international AI treaty

Techlash: Who Makes the Rules in the Digital Gilded Age?
Praised by Kirkus Reviews as “a rock-solid plan for controlling the tech giants,” readers will be energized by Tom Wheeler's vision of digital governance

RECOMMENDED YUA EPISODES
Inside the First AI Insight Forum in Washington
Digital Democracy is Within Reach with Audrey Tang
The AI Dilemma

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_
In this interview, Dr. Joy Buolamwini argues that algorithmic bias in AI systems poses an existential risk to marginalized people. She challenges the assumptions of tech leaders who advocate for AI “alignment” and explains why tech companies are hypocritical when it comes to addressing bias. Dr. Joy Buolamwini is the founder of the Algorithmic Justice League and the author of “Unmasking AI: My Mission to Protect What Is Human in a World of Machines.”

Correction: Aza says that Sam Altman, the CEO of OpenAI, predicts superintelligence in four years. Altman predicts superintelligence in ten years.

RECOMMENDED MEDIA
- Unmasking AI by Joy Buolamwini – “The conscience of the AI revolution” explains how we've arrived at an era of AI harms and oppression, and what we can do to avoid its pitfalls.
- Coded Bias – Shalini Kantayya's film explores the fallout of Dr. Joy's discovery that facial recognition does not see dark-skinned faces accurately, and her journey to push for the first-ever legislation in the U.S. to govern against bias in the algorithms that impact us all.
- How I'm fighting bias in algorithms – Dr. Joy's 2016 TED Talk about her mission to fight bias in machine learning, a phenomenon she calls the “coded gaze.”

RECOMMENDED YUA EPISODES
- Mustafa Suleyman Says We Need to Contain AI. How Do We Do It?
- Protecting Our Freedom of Thought with Nita Farahany
- The AI Dilemma
“This is going to be the most productive decade in the history of our species,” says Mustafa Suleyman, author of “The Coming Wave,” CEO of Inflection AI, and co-founder of DeepMind. But in order to truly reap the benefits of AI, we need to learn how to contain it. Paradoxically, part of that will mean collectively saying no to certain forms of progress. As an industry leader reckoning with a future that's about to be ‘turbocharged,' Mustafa says we can all play a role in shaping the technology, both in hands-on ways and by advocating for appropriate governance.

RECOMMENDED MEDIA
- The Coming Wave: Technology, Power, and the 21st Century's Greatest Dilemma – This new book from Mustafa Suleyman is a must-read guide to the technological revolution just starting, and the transformed world it will create.
- Partnership on AI – Partnership on AI is bringing together diverse voices from across the AI community to create resources for advancing positive outcomes for people and society.
- Policy Reforms Toolkit from the Center for Humane Technology – Digital lawlessness has been normalized in the name of innovation. It's possible to craft policy that protects the conditions we need to thrive.

RECOMMENDED YUA EPISODES
- AI Myths and Misconceptions
- Can We Govern AI? with Marietje Schaake
- The AI Dilemma
Last week, Senator Chuck Schumer brought together Congress and many of the biggest names in AI for the first closed-door AI Insight Forum in Washington, D.C. Tristan and Aza were invited speakers at the event, along with Elon Musk, Satya Nadella, Sam Altman, and other leaders. In this update on Your Undivided Attention, Tristan and Aza recount how they felt the meeting went, what they communicated in their statements, and what it felt like to critique Meta's LLM in front of Mark Zuckerberg.

Correction: In this episode, Tristan says GPT-3 couldn't find vulnerabilities in code. GPT-3 could find security vulnerabilities, but GPT-4 is dramatically better at it.

RECOMMENDED MEDIA
- In Show of Force, Silicon Valley Titans Pledge ‘Getting This Right' With A.I. – Elon Musk, Sam Altman, Mark Zuckerberg, Sundar Pichai, and others discussed artificial intelligence with lawmakers, as tech companies strive to influence potential regulations.
- Majority Leader Schumer Opening Remarks For The Senate's Inaugural AI Insight Forum – Senate Majority Leader Chuck Schumer (D-NY) opened the Senate's inaugural AI Insight Forum.
- The Wisdom Gap – As seen in Tristan's 2022 talk on this subject, the scope and speed of our world's issues are accelerating and growing more complex, and yet our ability to comprehend those challenges and respond accordingly is not matching pace.

RECOMMENDED YUA EPISODES
- Spotlight On AI: What Would It Take For This to Go Well?
- The AI ‘Race': China vs. the US with Jeffrey Ding and Karen Hao
- Spotlight: Elon, Twitter and the Gladiator Arena
Where do the top Silicon Valley AI researchers really think AI is headed? Do they have a plan if things go wrong? In this episode, Tristan Harris and Aza Raskin reflect on the last several months of highlighting AI risk, and share their insider takes on a high-level workshop run by CHT in Silicon Valley.

Note: Tristan refers to journalist Maria Ressa and mentions that she received 80 hate messages per hour at one point. She actually received more than 90 messages an hour.

RECOMMENDED MEDIA
- Musk, Zuckerberg, Gates: The titans of tech will talk AI at private Capitol summit – This week will feature a series of public hearings on artificial intelligence. But all eyes will be on the closed-door gathering convened by Senate Majority Leader Chuck Schumer.
- Takeaways from the roundtable with President Biden on artificial intelligence – Tristan Harris talks about his recent meeting with President Biden to discuss regulating artificial intelligence.
- Biden, Harris meet with CEOs about AI risks – Vice President Kamala Harris met with the heads of Google, Microsoft, Anthropic, and OpenAI as the Biden administration rolled out initiatives meant to ensure that AI improves lives without putting people's rights and safety at risk.

RECOMMENDED YUA EPISODES
- The AI Dilemma
- The AI ‘Race': China vs the US with Jeffrey Ding and Karen Hao
- The Dictator's Playbook with Maria Ressa
In the debate over slowing down AI, we often hear the same argument against regulation: “What about China? We can't let China get ahead.” To dig into the nuances of this argument, Tristan and Aza speak with academic researcher Jeffrey Ding and journalist Karen Hao, who take us through what's really happening in Chinese AI development. They address China's advantages and limitations, which risks are overblown, and what, in this multinational competition, is at stake as we imagine the best possible future for everyone.

RECOMMENDED MEDIA
- Recent Trends in China's Large Language Model Landscape by Jeffrey Ding and Jenny W. Xiao – This study covers a sample of 26 large-scale pre-trained AI models developed in China.
- The diffusion deficit in scientific and technological power: re-assessing China's rise by Jeffrey Ding – This paper argues for placing a greater weight on a state's capacity to diffuse, or widely adopt, innovations.
- The U.S. Is Turning Away From Its Biggest Scientific Partner at a Precarious Time by Karen Hao and Sha Hua – U.S. moves to cut research ties with China over security concerns threaten American progress in critical areas.
- Why China Has Not Caught Up Yet: Military-Technological Superiority and the Limits of Imitation, Reverse Engineering, and Cyber Espionage by Andrea Gilli and Mauro Gilli – Military technology has grown so complex that it's hard to imitate.

RECOMMENDED YUA EPISODES
- The Three Rules of Humane Tech
- A Fresh Take on Tech in China with Rui Ma and Duncan Clark
- Digital Democracy is Within Reach with Audrey Tang
For all the talk about AI, we rarely hear about how it will change our relationships. As we swipe to find love and consult chatbot therapists, acclaimed psychotherapist and relationship expert Esther Perel warns of another harmful “AI” on the rise: Artificial Intimacy, which is depriving us of real connection. Tristan and Esther discuss how depending on algorithms can fuel alienation, and then imagine how we might design technology to strengthen our social bonds.

RECOMMENDED MEDIA
- Mating in Captivity by Esther Perel – Esther's debut work on the intricacies behind modern relationships, and the dichotomy of domesticity and sexual desire.
- The State of Affairs by Esther Perel – Esther takes a look at modern relationships through the lens of infidelity.
- Where Should We Begin? with Esther Perel – Listen in as real couples in search of help bare the raw and profound details of their stories.
- How's Work? with Esther Perel – Esther's podcast that focuses on the hard conversations we're afraid to have at work.
- Lars and the Real Girl (2007) – A young man strikes up an unconventional relationship with a doll he finds on the internet.
- Her (2013) – In a near future, a lonely writer develops an unlikely relationship with an operating system designed to meet his every need.

RECOMMENDED YUA EPISODES
- Big Food, Big Tech and Big AI with Michael Moss
- The AI Dilemma
- The Three Rules of Humane Tech
- Digital Democracy is Within Reach with Audrey Tang
We are on the cusp of an explosion of cheap, consumer-ready neurotechnology, from earbuds that gather our behavioral data to sensors that can read our dreams. And it's all going to be supercharged by AI. This technology is moving from niche to mainstream, and, like AI, it has the potential to grow exponentially. Legal scholar Nita Farahany talks us through the current state of neurotechnology and its deep links to AI. She says that we urgently need to protect the last frontier of privacy: our internal thoughts. And she argues that without a new legal framework around “cognitive liberty,” we won't be able to insulate our brains from corporate and government intrusion.

RECOMMENDED MEDIA
- The Battle for Your Brain: Defending the Right to Think Freely in the Age of Neurotechnology by Nita Farahany – The Battle for Your Brain offers a path forward to navigate the complex dilemmas that will fundamentally impact our freedom to understand, shape, and define ourselves.
- Computer Program Reveals What Neurons in the Visual Cortex Prefer to Look At – A study of macaque monkeys at Harvard generated valuable clues based on an artificial intelligence system that can reliably determine what neurons in the brain's visual cortex prefer to see.
- Understanding Media: The Extensions of Man by Marshall McLuhan – An influential work by a fixture in media discourse.

RECOMMENDED YUA EPISODES
- The Three Rules of Humane Tech
- Talking With Animals… Using AI
- How to Free Our Minds with Cult Deprogramming Expert Dr. Steven Hassan
Social media was humanity's ‘first contact' moment with AI. If we're going to create laws strong enough to prevent AI from destroying our societies, we could benefit from taking a look at the major lawsuits against social media platforms that are playing out in our courts right now.

In our last episode, we took a close look at Big Food and its dangerous “race to the bottom” that parallels AI. We continue that theme this week with an episode about litigating social media and the consequences of the race to engagement, in order to inform how we can approach AI harms. Our guest, attorney Laura Marquez-Garrett, left her predominantly defense-oriented practice to join the Social Media Victims Law Center in February 2022. Laura is on the front lines of the battle to hold social media firms accountable for the harms they have created in young people's lives over the past decade.

Listener warning: there are distressing and potentially triggering details within the episode.

RECOMMENDED MEDIA
1) If you're a parent whose child has been impacted by social media, Attorneys General in Colorado, New Hampshire, and Tennessee are asking to hear your story. Your testimonies can help ensure that social media platforms are designed safely for kids. For more information, please visit the respective state links: Colorado, New Hampshire, Tennessee.
2) Social Media Victims Law Center – A non-profit legal center that was founded in 2021 in response to the testimony of Facebook whistleblower Frances Haugen.
3) Resources for Parents & Educators – Overwhelmed by our broken social media environment and wondering where to start? Check out our Youth Toolkit, plus three actions you can take today.
4) The Social Dilemma – Learn how the system works. Watch and share The Social Dilemma with people you care about.

RECOMMENDED YUA EPISODES
- Transcending the Internet Hate Game with Dylan Marron
- A Conversation with Facebook Whistleblower Frances Haugen
- Behind the Curtain on The Social Dilemma with Jeff Orlowski-Yang and Larissa Rhodes
In the next two episodes of Your Undivided Attention, we take a close look at two industries, Big Food and social media, which represent dangerous “races to the bottom” and have big parallels with AI. And we are asking: what can our past mistakes and missed opportunities teach us about how we should approach AI harms? In this first episode, Tristan talks to Pulitzer Prize-winning journalist and author Michael Moss. His book Salt Sugar Fat: How the Food Giants Hooked Us rocked the processed food industry when it came out in 2013. Tristan and Michael discuss how we can leverage the lessons learned from Big Food's coordination failures, and whether it's the responsibility of the consumer, the government, or the companies to regulate.

RECOMMENDED MEDIA
- Salt Sugar Fat: How the Food Giants Hooked Us – Michael's New York Times bestseller. You'll never look at a nutrition label the same way again.
- Hooked: Food, Free Will, and How the Food Giants Exploit Our Addictions – Michael's exposé of how the processed food industry exploits our evolutionary instincts, the emotions we associate with food, and legal loopholes in its pursuit of profit over public health.
- Control Your Tech Use – The Center for Humane Technology's recently updated Take Control Toolkit.

RECOMMENDED YUA EPISODES
- AI Myths and Misconceptions
- The AI Dilemma
- How Might a long-term stock market transform tech? (ZigZag episode)
What happens when creators consider what lifelong human development looks like in terms of the tools we make? And what philosophies from Sesame Street can inform how to steward the power of AI and social media to influence minds in thoughtful, humane directions?

When the first episode of Sesame Street aired on PBS in 1969, it was unlike anything that had been on television before: a collaboration between educators, child psychologists, comedy writers, and puppeteers, all working together to do something that had never been done, namely create educational content for children on television. Fast-forward to the present: could we switch gears to reprogram today's digital tools to humanely educate the next generation? That's the question Tristan Harris and Aza Raskin explore with Dr. Rosemarie Truglio, the Senior Vice President of Curriculum and Content for Sesame Workshop, the non-profit behind Sesame Street.

RECOMMENDED MEDIA
- Street Gang: How We Got to Sesame Street – This documentary offers a rare window into the early days of Sesame Street, revealing the creators, artists, writers, and educators who together established one of the most influential and enduring children's programs in television history.
- Sesame Street: Ready for School!: A Parent's Guide to Playful Learning for Children Ages 2 to 5 by Dr. Rosemarie Truglio – Rosemarie shares all the research-based, curriculum-directed school readiness skills that have made Sesame Street the preeminent children's TV program.
- G Is for Growing: Thirty Years of Research on Children and Sesame Street, co-edited by Shalom Fisch and Rosemarie Truglio – This volume serves as a marker of the significant role that Sesame Street plays in the education and socialization of young children.
- The Democratic Surround by Fred Turner – In this prequel to his celebrated book From Counterculture to Cyberculture, Turner rewrites the history of postwar America, showing how in the 1940s and 1950s American liberalism offered a far more radical social vision than we now remember.
- Amusing Ourselves to Death by Neil Postman – Neil Postman's groundbreaking book about the damaging effects of television on our politics and public discourse has been hailed as a twenty-first-century book published in the twentieth century.
- Sesame Workshop Identity Matters Study – Explore parents' and educators' perceptions of children's social identity development.
- Effects of Sesame Street: A meta-analysis of children's learning in 15 countries – Commissioned by Sesame Workshop, the study was led by University of Wisconsin researchers Marie-Louise Mares and Zhongdang Pan.
- U.S. Parents & Teachers See an Unkind World for Their Children, New Sesame Survey Shows – According to the survey, titled “K is for Kind: A National Survey On Kindness and Kids,” parents and teachers in the United States worry that their children are living in an unkind world.

RECOMMENDED YUA EPISODES
- Are the Kids Alright? With Jonathan Haidt
- The Three Rules of Humane Tech
- When Media Was for You and Me with Fred Turner
You're likely familiar with the modern zombie trope: a zombie bites someone you care about, and they're transformed into a creature who wants your brain. Zombies are the perfect metaphor to explain something Tristan and Aza have been thinking about lately that they call zombie values.

In this Spotlight episode of Your Undivided Attention, we talk through some examples of how zombie values limit our thinking around tech harms. Our hope is that by the end of this episode, you'll be able to recognize the zombie values that walk amongst us, and think through how to upgrade these values to meet the realities of our modern world.

RECOMMENDED MEDIA
- Is the First Amendment Obsolete? – This essay explores free expression challenges.
- The Wisdom Gap – This blog post from the Center for Humane Technology describes the gap between the rising interconnected complexity of our problems and our ability to make sense of them.

RECOMMENDED YUA EPISODES
- A Problem Well-Stated is Half Solved with Daniel Schmachtenberger
- How To Free Our Minds with Cult Deprogramming Expert Steve Hassan
There's really no one better than veteran tech journalist Kara Swisher at challenging people to articulate their thinking. Tristan Harris recently sat down with her for a wide-ranging interview on AI risk. She even pressed Tristan on whether he is a doomsday prepper. It was so great, we wanted to share it with you here. The interview originally appeared on Kara's podcast, ON with Kara Swisher. If you like it and want to hear more of Kara's interviews with folks like Sam Altman, Reid Hoffman, and others, you can find more episodes of ON with Kara Swisher here: https://link.chtbl.com/_XTWwg3k

RECOMMENDED YUA EPISODES
- AI Myths and Misconceptions
- The AI Dilemma
- The Three Rules of Humane Tech
Democracy in action has looked the same for generations. Constituents might go to a library or school every year or two and cast their vote for people who don't actually represent everything they care about. Our technology is rapidly increasing in sophistication, yet our forms of democracy have largely remained unchanged. What would an upgrade look like, not just for democracy, but for all the different places where democratic decision-making happens?

On this episode of Your Undivided Attention, we're joined by political economist and social technologist Divya Siddarth, one of the world's leading experts in collective intelligence. Together we explore how new kinds of governance can be supported through better technology, and how collective decision-making is key to unlocking everything from more effective elections to better ways of responding to global problems like climate change.

Correction: Tristan mentions Elon Musk's attempt to manufacture ventilators early on in the COVID-19 pandemic. Musk ended up buying over 1,200 ventilators that were delivered to California.

RECOMMENDED MEDIA
- Against Democracy by Jason Brennan – A provocative challenge to one of our most cherished institutions.
- Ledger of Harms – Technology platforms have created a race for human attention that's unleashed invisible harms to society. Here are some of the costs that aren't showing up on their balance sheets.
- The Wisdom Gap – This blog post from the Center for Humane Technology describes the gap between the rising interconnected complexity of our problems and our ability to make sense of them.
- DemocracyNext – DemocracyNext is working to design and establish new institutions for government and transform the governance of organizations that influence public life.
- CIP.org – An incubator for new governance models for transformative technology.
- Ethelo – Transform community engagement through consensus.
- Kazm's Living Room Conversations – Living Room Conversations works to heal society by connecting people across divides through guided conversations proven to build understanding and transform communities.
- The Citizens Dialogue – A model for citizen participation in Ostbelgien, which was brought to life by the parliament of the German-speaking community.
- Asamblea Ciudadana Para El Clima – Spain's national citizens' assembly on climate change.
- Climate Assembly UK – The UK's national citizens' assembly on climate change.
- Citizens' Convention for the Climate – France's national citizens' assembly on climate change.
- Polis – Polis is a real-time system for gathering, analyzing, and understanding what large groups of people think in their own words, enabled by advanced statistics and machine learning.

RECOMMENDED YUA EPISODES
- Digital Democracy is Within Reach with Audrey Tang
- They Don't Represent Us with Larry Lessig
- A Renegade Solution to Extractive Economics with Kate Raworth
A few episodes back, we presented Tristan Harris and Aza Raskin's talk The AI Dilemma. People inside the companies that are building generative artificial intelligence came to us with their concerns about the rapid pace of deployment and the problems that are emerging as a result. We felt called to lay out the catastrophic risks that AI poses to society and sound the alarm on the need to upgrade our institutions for a post-AI world.

The talk resonated: over 1.6 million people have viewed it on YouTube as of this episode's release date. The positive reception gives us hope that leaders will be willing to come to the table for a difficult but necessary conversation about AI.

However, now that so many people have watched or listened to the talk, we've found that there are some AI myths getting in the way of making progress. On this episode of Your Undivided Attention, we debunk five of those misconceptions.

Correction: Aza says that the head of the alignment team at OpenAI has concerns about safety. It's actually the former head of language model alignment, Paul Christiano, who voiced this concern. He left OpenAI in 2021.

RECOMMENDED MEDIA
- Opinion | Yuval Harari, Tristan Harris, and Aza Raskin on Threats to Humanity Posed by AI (The New York Times) – In this New York Times piece, Yuval Harari, Tristan Harris, and Aza Raskin call upon world leaders to respond to this moment at the level of challenge it presents.
- Misalignment, AI & Moloch – A deep dive into the game theory and exponential growth underlying our modern economic system, and how recent advancements in AI are poised to turn up the pressure on that system, and its wider environment, in ways we have never seen before.

RECOMMENDED YUA EPISODES
- The AI Dilemma
- The Three Rules of Humane Tech
- Can We Govern AI? with Marietje Schaake