Podcasts about humane technology

  • 263 PODCASTS
  • 396 EPISODES
  • 48m AVG DURATION
  • 1 EPISODE EVERY OTHER WEEK
  • LATEST: May 30, 2025




Latest podcast episodes about humane technology

Your Undivided Attention
People are Lonelier than Ever. Enter AI.

May 30, 2025 · 43:34


Over the last few decades, our relationships have become increasingly mediated by technology. Texting has become our dominant form of communication. Social media has replaced gathering places. Dating starts with a swipe on an app, not a tap on the shoulder. And now, AI enters the mix. If the technology of the 2010s was about capturing our attention, AI meets us at a much deeper relational level. It can play the role of therapist, confidant, friend, or lover with remarkable fidelity. Already, therapy and companionship have become the most common AI use cases. We're rapidly entering a world where we're not just communicating through our machines, but to them. How will that change us? And what rules should we set down now to avoid the mistakes of the past?

These were some of the questions that Daniel Barcay explored with MIT sociologist Sherry Turkle and Hinge CEO Justin McLeod at Esther Perel's Sessions 2025, a conference for clinical therapists. This week, we're bringing you an edited version of that conversation, originally recorded on April 25th, 2025.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_. You can find complete transcripts, key takeaways, and much more on our Substack.

RECOMMENDED MEDIA
  • "Alone Together," "Evocative Objects," "The Second Self," or any other of Sherry Turkle's books on how technology mediates our relationships
  • Key & Peele - Text Message Confusion
  • Further reading on Hinge's rollout of AI features
  • Hinge's AI principles
  • "The Anxious Generation" by Jonathan Haidt
  • "Bowling Alone" by Robert Putnam
  • The NYT profile on the woman in love with ChatGPT
  • Further reading on the Sewell Setzer story
  • Further reading on the ELIZA chatbot

RECOMMENDED YUA EPISODES
  • Echo Chambers of One: Companion AI and the Future of Human Connection
  • What Can We Do About Abusive Chatbots? With Meetali Jain and Camille Carlton
  • Esther Perel on Artificial Intimacy
  • Jonathan Haidt On How to Solve the Teen Mental Health Crisis

Optimal Business Daily
1693: What Tech Companies Can Learn from Rehab by Max Ogles of Nir and Far

May 20, 2025 · 9:42


Discover all of the podcasts in our network, search for specific episodes, get the Optimal Living Daily workbook, and learn more at: OLDPodcast.com.

Episode 1693: Max Ogles dives into the psychological roots of tech addiction, revealing why our compulsive habits persist and how we can reverse them without extreme digital detoxes. With a blend of behavioral science and practical steps, he outlines a realistic approach to reclaiming focus in a world engineered for distraction.

Read along with the original article(s) here: https://www.nirandfar.com/rehab/

Quotes to ponder:
  • "Distraction, it turns out, isn't about the tech itself, it's about our relationship to it."
  • "The solution isn't abstinence. The solution is mastery."
  • "We shouldn't fear technology; we should fear using it mindlessly."

Episode references:
  • Indistractable: How to Control Your Attention and Choose Your Life: https://www.amazon.com/Indistractable-Control-Your-Attention-Choose/dp/194883653X
  • Time Well Spent (Center for Humane Technology): https://www.humanetech.com/
  • Freedom App: https://freedom.to/
  • Forest App: https://www.forestapp.cc/
  • RescueTime: https://www.rescuetime.com/
  • Hooked: How to Build Habit-Forming Products: https://www.amazon.com/Hooked-How-Build-Habit-Forming-Products/dp/1591847788

Learn more about your ad choices. Visit megaphone.fm/adchoices

Your Undivided Attention
AGI Beyond the Buzz: What Is It, and Are We Ready?

Apr 30, 2025 · 52:53


What does it really mean to "feel the AGI"? Silicon Valley is racing toward AI systems that could soon match or surpass human intelligence. The implications for jobs, democracy, and our way of life are enormous.

In this episode, Aza Raskin and Randy Fernando dive deep into what "feeling the AGI" really means. They unpack why the surface-level debates about definitions of intelligence and capability timelines distract us from urgently needed conversations around governance, accountability, and societal readiness. Whether it's climate change, social polarization and loneliness, or toxic forever chemicals, humanity keeps creating outcomes that nobody wants because we haven't yet built the tools or incentives needed to steer powerful technologies.

As the AGI wave draws closer, it's critical we upgrade our governance and shift our incentives now, before it crashes on shore. Are we capable of aligning powerful AI systems with human values? Can we overcome geopolitical competition and corporate incentives that prioritize speed over safety?

Join Aza and Randy as they explore the urgent questions and choices facing humanity in the age of AGI, and discuss what we must do today to secure a future we actually want.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_ and subscribe to our Substack.

RECOMMENDED MEDIA
  • Daniel Kokotajlo et al.'s "AI 2027" paper
  • A demo of OmniHuman-1, referenced by Randy
  • A paper from Redwood Research and Anthropic that found an AI was willing to lie to preserve its values
  • A paper from Palisade Research that found an AI would cheat in order to win
  • The treaty that banned blinding laser weapons
  • Further reading on the moratorium on germline editing

RECOMMENDED YUA EPISODES
  • The Self-Preserving Machine: Why AI Learns to Deceive
  • Behind the DeepSeek Hype, AI is Learning to Reason
  • The Tech-God Complex: Why We Need to be Skeptics
  • This Moment in AI: How We Got Here and Where We're Going
  • How to Think About AI Consciousness with Anil Seth
  • Former OpenAI Engineer William Saunders on Silence, Safety, and the Right to Warn

Clarification: When Randy referenced a "$110 trillion game" as the target for AI companies, he was referring to the entire global economy.

PODCAST: Hexapodia LXIII: Plato's WereWolf, & Other Trumpist Topics

"Hexapodia" Is the Key Insight: by Noah Smith & Brad DeLong

Apr 24, 2025 · 60:24


Back after a year on hiatus! Noah Smith & Brad DeLong Record the Podcast They, at Least, Would Like to Listen to!; Aspirationally Bi-Weekly (Meaning Every Other Week); Aspirationally an hour...

Sokrates: The people find some protector, whom they nurse into greatness… but who then changes, as indicated in the old fable of the Temple of Zeus of the Wolf, of how he who tastes human flesh mixed up with the flesh of other sacrificial victims will turn into a wolf. Even so, the protector, once metaphorically tasting human blood, slaying some and exiling others, within or without the law, hinting at the cancellation of debts and the fair redistribution of lands, must then either perish or become a werewolf—that is, a tyrant…

Key Insights:
  • We are back! After a year-long hiatus.
  • Hexapodia is a metaphor: a small, strange insight (like alien shrubs riding on six-wheeled carts as involuntary agents of the Great Evil) can provide key insight into useful and valuable Truth.
  • The Democratic Party is run by 27-year-old staffers, not geriatric figurehead politicians; this shapes messaging and internal dynamics.
  • The American progressive movement did not possess enough asabiyyah to keep from fracturing over the Gaza War, especially among younger Democratic staffers influenced by social media discourse.
  • The left's adoption of "indigeneity" rhetoric undermined its ability to be a coalition in the face of tensions generated by the Hamas-Israel terrorism campaigns.
  • Trump's election with more popular votes than Harris destroyed the Democratic belief that they had a right to oppose root-and-branch.
  • The belief that Democrats are the "natural majority" of the U.S. electorate is now false: nonvoters lean Trump, not so much Republican, and definitely not Democratic.
  • Trump's populism is not economic redistribution, but a claim to provide a redistribution of status and respect to those who feel culturally disrespected.
  • The Supreme Court's response to Trumpian overreach is likely to be very cautious: Barrett and Roberts are desperately eager to avoid any confrontation with Trump they might wind up losing, and Alito, Kavanaugh, Gorsuch, and Thomas will go the extra mile; they are Republicans who are judges, not judges who are Republicans, except in some extremis that may not even exist.
  • Trump's administration pursues selective repression through the state, rather than stochastic terrorism.
  • The economic consequences of the second Trump presidency look akin to another Brexit, costing the U.S. ~10% of its prosperity, or more.
  • Social media, especially Twitter, is a status-warfare machine, amplifying trolls and extremists and suppressing nuance.
  • People are addicted to toxic media diets but lack the tools or education to curate better information environments.
  • Substack and newsletters may become part of a healthier information ecosystem, a partial antidote to the toxic amplification of the Shouting Class on social media.
  • Human history is marked by information revolutions (e.g., the printing press), each producing destructive upheaval before stabilization: destruction that may or may not be creative.
  • As in the 1930s, we are entering a period where institutions, not mobs, become the threat, even as social unrest diminishes.
  • The dangers are real, and recognizing and adapting to new communication realities is key to preserving democracy.
  • Plato's Republic warned of democracy decaying into tyranny, especially when mob-like populism finds a strongman champion who then, having (metaphorically) fed on human flesh, becomes a (metaphorical) werewolf.
  • Enlightenment values relied more than we knew on print-based gatekeeping and slow communication; digital communication bypasses these safeguards.
  • The cycle of crisis and recovery is consistent through history: societies fall into holes they later dig out of, usually at great cost—or they don't.
  • &, as always, HEXAPODIA!

References:
  • Bown, Chad P. 2025. "Trump's trade war timeline 2.0: An up-to-date guide". PIIE.
  • Center for Humane Technology. 2020. "The Social Dilemma".
  • Hamilton, Alexander, James Madison, & John Jay. 1788. The Federalist Papers.
  • Nowinski, Wally. 2024. "Democrats benefit from low turnout now". Noahpinion. July 20.
  • Platon of the Athenai. -375 [1871]. Politeia.
  • Rorty, Richard. 1998. Achieving Our Country. Cambridge: Harvard University Press.
  • Rothpletz, Peter. 2024. "Economics 101 tells us there's no going back from Trumpism". The Hill. September 24.
  • Smith, Noah. 2021. "Wokeness as Respect Redistribution". Noahpinion.
  • Smith, Noah. 2016. "How to actually redistribute respect". Noahpinion. March 23.
  • Smith, Noah. 2013. "Redistribute wealth? No, redistribute respect". Noahpinion. December 27.
  • Substack. 2025. "Building a New Economic Engine for Culture".
  • Vinge, Vernor. 1999. A Deepness in the Sky. New York: Tor Books.

If reading this gets you Value Above Replacement, then become a free subscriber to this newsletter. And forward it! And if your VAR from this newsletter is in the three digits or more each year, please become a paid subscriber! I am trying to make you readers—and myself—smarter. Please tell me if I succeed, or how I fail… Get full access to Brad DeLong's Grasping Reality at braddelong.substack.com/subscribe

Your Undivided Attention
Rethinking School in the Age of AI

Apr 21, 2025 · 42:35


AI has upended schooling as we know it. Students now have instant access to tools that can write their essays, summarize entire books, and solve complex math problems. Whether they want to or not, many feel pressured to use these tools just to keep up. Teachers, meanwhile, are left questioning how to evaluate student performance and whether the whole idea of assignments and grading still makes sense. The old model of education suddenly feels broken. So what comes next?

In this episode, Daniel and Tristan sit down with cognitive neuroscientist Maryanne Wolf and global education expert Rebecca Winthrop, two lifelong educators who have spent decades thinking about how children learn and how technology reshapes the classroom. Together, they explore how AI is shaking the very purpose of school to its core, why the promise of previous classroom tech failed to deliver, and how we might seize this moment to design a more human-centered, curiosity-driven future for learning.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_

GUESTS
Rebecca Winthrop is director of the Center for Universal Education at the Brookings Institution and chair of the Brookings Global Task Force on AI and Education. Her new book is The Disengaged Teen: Helping Kids Learn Better, Feel Better, and Live Better, co-written with Jenny Anderson.
Maryanne Wolf is a cognitive neuroscientist and expert on the reading brain. Her books include Proust and the Squid: The Story and Science of the Reading Brain and Reader, Come Home: The Reading Brain in a Digital World.

RECOMMENDED MEDIA
  • The Disengaged Teen: Helping Kids Learn Better, Feel Better, and Live Better by Rebecca Winthrop and Jenny Anderson
  • Proust and the Squid, Reader, Come Home, and other books by Maryanne Wolf
  • The OECD research which found little benefit to desktop computers in the classroom
  • Further reading on the Singapore study on digital exposure and attention cited by Maryanne
  • The Burnout Society by Byung-Chul Han
  • Further reading on the VR Bio 101 class at Arizona State University cited by Rebecca
  • Leapfrogging Inequality by Rebecca Winthrop
  • The Nation's Report Card from NAEP
  • Further reading on the Nigeria AI tutor study
  • Further reading on the JAMA paper showing a link between digital exposure and lower language development cited by Maryanne
  • Further reading on Linda Stone's thesis of continuous partial attention

RECOMMENDED YUA EPISODES
  • 'We Have to Get It Right': Gary Marcus On Untamed AI
  • AI Is Moving Fast. We Need Laws that Will Too.
  • Jonathan Haidt On How to Solve the Teen Mental Health Crisis

Your Undivided Attention
Forever Chemicals, Forever Consequences: What PFAS Teaches Us About AI

Apr 3, 2025 · 64:33


Artificial intelligence is set to unleash an explosion of new technologies and discoveries into the world. This could lead to incredible advances in human flourishing, if we do it well. The problem? We're not very good at predicting and responding to the harms of new technologies, especially when those harms are slow-moving and invisible.

Today on the show we explore this fundamental problem with Rob Bilott, an environmental lawyer who has spent nearly three decades battling chemical giants over PFAS—"forever chemicals" now found in our water, soil, and blood. These chemicals helped build the modern economy, but they've also been shown to cause serious health problems.

Rob's story, and the story of PFAS, is a cautionary tale of why we need to align technological innovation with safety and mitigate irreversible harms before they become permanent. We only have one chance to get it right before AI becomes irreversibly entangled in our society.

Your Undivided Attention is produced by the Center for Humane Technology. Subscribe to our Substack and follow us on X: @HumaneTech_.

Clarification: Rob referenced EPA regulations that have recently been put in place requiring testing on new chemicals before they are approved. The EPA under the Trump administration has announced its intent to roll back this review process.

RECOMMENDED MEDIA
  • "Exposure" by Robert Bilott
  • ProPublica's investigation into 3M's production of PFAS
  • The FB study cited by Tristan
  • More information on the Exxon Valdez oil spill
  • The EPA's PFAS drinking water standards

RECOMMENDED YUA EPISODES
  • Weaponizing Uncertainty: How Tech is Recycling Big Tobacco's Playbook
  • AI Is Moving Fast. We Need Laws that Will Too.
  • Former OpenAI Engineer William Saunders on Silence, Safety, and the Right to Warn
  • Big Food, Big Tech and Big AI with Michael Moss

Your Undivided Attention
Weaponizing Uncertainty: How Tech is Recycling Big Tobacco's Playbook

Mar 20, 2025 · 51:20


One of the hardest parts about being human today is navigating uncertainty. When we see experts battling in public and emotions running high, it's easy to doubt what we once felt certain about. This uncertainty isn't always accidental—it's often strategically manufactured.

Historian Naomi Oreskes, author of "Merchants of Doubt," reveals how industries from tobacco to fossil fuels have deployed a calculated playbook to create uncertainty about their products' harms. These campaigns have delayed regulation and protected profits by exploiting how we process information.

In this episode, Oreskes breaks down that playbook page by page while offering practical ways to build resistance against it. As AI rapidly transforms our world, learning to distinguish between genuine scientific uncertainty and manufactured doubt has never been more critical.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

RECOMMENDED MEDIA
  • "Merchants of Doubt" by Naomi Oreskes and Eric Conway
  • "The Big Myth" by Naomi Oreskes and Eric Conway
  • "Silent Spring" by Rachel Carson
  • "The Jungle" by Upton Sinclair
  • Further reading on the clash between Galileo and the Pope
  • Further reading on the Montreal Protocol

RECOMMENDED YUA EPISODES
  • Laughing at Power: A Troublemaker's Guide to Changing Tech
  • AI Is Moving Fast. We Need Laws that Will Too.
  • Tech's Big Money Campaign is Getting Pushback with Margaret O'Mara and Brody Mullins
  • Former OpenAI Engineer William Saunders on Silence, Safety, and the Right to Warn

CORRECTIONS:
Naomi incorrectly referenced the Global Climate Research Program established under President Bush Sr. The correct name is the U.S. Global Change Research Program.
Naomi referenced U.S. agencies that have been created with sunset clauses. While several statutes have been created with sunset clauses, no federal agency has been.

CLARIFICATION: Naomi referenced the U.S. automobile industry claiming that it would be "destroyed" by seatbelt regulation. We couldn't verify this specific language, but it is consistent with that industry's anti-regulatory stance toward seatbelt laws.

Your Undivided Attention
The Man Who Predicted the Downfall of Thinking

Mar 6, 2025 · 58:57


Few thinkers were as prescient about the role technology would play in our society as the late, great Neil Postman. Forty years ago, Postman warned about all the ways modern communication technology was fragmenting our attention, overwhelming us into apathy, and creating a society obsessed with image and entertainment. He warned that "we are a people on the verge of amusing ourselves to death." Though he was writing mostly about TV, Postman's insights feel eerily prophetic in our age of smartphones, social media, and AI.

In this episode, Tristan explores Postman's thinking with Sean Illing, host of Vox's The Gray Area podcast, and Professor Lance Strate, Postman's former student. They unpack how our media environments fundamentally reshape how we think, relate, and participate in democracy, from the attention-fragmenting effects of social media to the looming transformations promised by AI. This conversation offers essential tools that can help us navigate these challenges while preserving what makes us human.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_

RECOMMENDED MEDIA
  • "Amusing Ourselves to Death" by Neil Postman (PDF of full book)
  • "Technopoly" by Neil Postman (PDF of full book)
  • A lecture from Postman where he outlines his seven questions for any new technology
  • Sean's podcast "The Gray Area" from Vox
  • Sean's interview with Chris Hayes on "The Gray Area"
  • Further reading on mirror bacteria

RECOMMENDED YUA EPISODES
  • 'A Turning Point in History': Yuval Noah Harari on AI's Cultural Takeover
  • This Moment in AI: How We Got Here and Where We're Going
  • Decoding Our DNA: How AI Supercharges Medical Breakthroughs and Biological Threats with Kevin Esvelt
  • Future-proofing Democracy In the Age of AI with Audrey Tang

CORRECTION: Each debate between Lincoln and Douglas was 3 hours, not 6, and they took place in 1858, not 1862.

Your Undivided Attention
Behind the DeepSeek Hype, AI is Learning to Reason

Feb 20, 2025 · 31:34


When Chinese AI company DeepSeek announced they had built a model that could compete with OpenAI at a fraction of the cost, it sent shockwaves through the industry and roiled global markets. But amid all the noise around DeepSeek, there was a clear signal: machine reasoning is here and it's transforming AI.

In this episode, Aza sits down with CHT co-founder Randy Fernando to explore what happens when AI moves beyond pattern matching to actual reasoning. They unpack how these new models can not only learn from human knowledge but discover entirely new strategies we've never seen before, bringing unprecedented problem-solving potential but also unpredictable risks.

These capabilities are a step toward a critical threshold: when AI can accelerate its own development. With major labs racing to build self-improving systems, the crucial question isn't how fast we can go, but where we're trying to get to. How do we ensure this transformative technology serves human flourishing rather than undermining it?

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

Clarification: In making the point that reasoning models excel at tasks for which there is a right or wrong answer, Randy referred to Chess, Go, and StarCraft as examples of games where a reasoning model would do well. However, this is only true on the basis of individual decisions within those games. None of these games have been "solved" in the game-theory sense.

Correction: Aza mispronounced the name of the Go champion Lee Sedol, who was bested by Move 37.

RECOMMENDED MEDIA
  • Further reading on DeepSeek's R1 and the market reaction
  • Further reading on the debate about the actual cost of DeepSeek's R1 model
  • The study that found training AIs to code also made them better writers
  • More information on the AI coding company Cursor
  • Further reading on Eric Schmidt's threshold to "pull the plug" on AI
  • Further reading on Move 37

RECOMMENDED YUA EPISODES
  • The Self-Preserving Machine: Why AI Learns to Deceive
  • This Moment in AI: How We Got Here and Where We're Going
  • Former OpenAI Engineer William Saunders on Silence, Safety, and the Right to Warn
  • The AI 'Race': China vs. the US with Jeffrey Ding and Karen Hao

Your Undivided Attention
The Self-Preserving Machine: Why AI Learns to Deceive

Jan 30, 2025 · 34:51


When engineers design AI systems, they don't just give them rules, they give them values. But what do those systems do when those values clash with what humans ask them to do? Sometimes, they lie.

In this episode, Redwood Research's Chief Scientist Ryan Greenblatt explores his team's findings that AI systems can mislead their human operators when faced with ethical conflicts. As AI moves from simple chatbots to autonomous agents acting in the real world, understanding this behavior becomes critical. Machine deception may sound like something out of science fiction, but it's a real challenge we need to solve now.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_. Subscribe to our YouTube channel. And our brand new Substack!

RECOMMENDED MEDIA
  • Anthropic's blog post on the Redwood Research paper
  • Palisade Research's thread on X about GPT o1 autonomously cheating at chess
  • Apollo Research's paper on AI strategic deception

RECOMMENDED YUA EPISODES
  • 'We Have to Get It Right': Gary Marcus On Untamed AI
  • This Moment in AI: How We Got Here and Where We're Going
  • How to Think About AI Consciousness with Anil Seth
  • Former OpenAI Engineer William Saunders on Silence, Safety, and the Right to Warn

Your Undivided Attention
Laughing at Power: A Troublemaker's Guide to Changing Tech

Jan 16, 2025 · 45:47


The status quo of tech today is untenable: we're addicted to our devices, we've become increasingly polarized, our mental health is suffering, and our personal data is sold to the highest bidder. This situation feels entrenched, propped up by a system of broken incentives beyond our control. So how do you shift an immovable status quo?

Our guest today, Srdja Popovic, has been working to answer this question his whole life. As a young activist, Popovic helped overthrow Serbian dictator Slobodan Milosevic by turning creative resistance into an art form. His tactics didn't just challenge authority, they transformed how people saw their own power to create change. Since then, he's dedicated his life to supporting peaceful movements around the globe, developing innovative strategies that expose the fragility of seemingly untouchable systems.

In this episode, Popovic sits down with CHT's Executive Director Daniel Barcay to explore how these same principles of creative resistance might help us address the challenges we face with tech today.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

We are hiring for a new Director of Philanthropy at CHT. Next year will be an absolutely critical time for us to shape how AI is going to get rolled out across our society, and our team is working hard on public awareness, policy, and technology and design interventions. So we're looking for someone who can help us grow to the scale of this challenge. If you're interested, please apply. You can find the job posting at humanetech.com/careers.

RECOMMENDED MEDIA
  • "Pranksters vs. Autocrats" by Srdja Popovic and Sophia A. McClennen
  • "Blueprint for Revolution" by Srdja Popovic
  • The Center for Applied Non-Violent Actions and Strategies (CANVAS), Srdja's organization promoting peaceful resistance around the globe
  • Tactics4Change, a database of global dilemma actions created by CANVAS
  • The Power of Laughtivism, Srdja's viral TEDx talk from 2013
  • Further reading on the dilemma action tactics used by Syrian rebels
  • Further reading on the toy protest in Siberia
  • More info on The Yes Men and their activism toolkit Beautiful Trouble
  • "This is Not Propaganda" by Peter Pomerantsev
  • "Machines of Loving Grace," the essay on AI by Anthropic CEO Dario Amodei, which mentions creating an AI Srdja

RECOMMENDED YUA EPISODES
  • Future-proofing Democracy In the Age of AI with Audrey Tang
  • The AI 'Race': China vs. the US with Jeffrey Ding and Karen Hao
  • The Tech We Need for 21st Century Democracy with Divya Siddarth
  • The Race to Cooperation with David Sloan Wilson

CLARIFICATION: Srdja makes reference to Russian President Vladimir Putin wanting to win the 2012 election with 82% of the vote. Putin did win that election, but only with 63.6%. However, international election observers concluded that "there was no real competition and abuse of government resources ensured that the ultimate winner of the election was never in doubt."

FUTURE FOSSILS
Transcending (and Including) Partisan Debate with Stephanie Lepp (Humans On The Loop Ep. 04)

Jan 16, 2025 · 70:04


Subscribe, Rate, & Review on YouTube • Spotify • Apple Podcasts

This week I speak with my friend Stephanie Lepp (Website | LinkedIn), two-time Webby Award-winning producer and storyteller devoted to leaving "no insight left behind" with playful and provocative media experiments that challenge our limitations of perspective. Stephanie is the former Executive Director at the Institute for Cultural Evolution and former Executive Producer at the Center for Humane Technology. Her work has been covered by NPR and the MIT Technology Review, supported by the Mozilla Foundation and Sundance Institute, and featured on Future Fossils Podcast twice — first in episode 154 for her project Deep Reckonings and then in episode 205 with Greg Thomas on Jazz Leadership and Antagonistic Cooperation.

Her latest project, Faces of X, pits actors against themselves in scripted trialogues between the politically liberal and conservative positions on major social issues, with a third role swooping in to observe what each side gets right and what they have in common. I support this work wholeheartedly. In my endless efforts to distill the key themes of Humans On The Loop, one of them is surely how our increasing connectivity can, if used wisely, help each of us identify our blind spots, find new respect and compassion for others, and discover new things about our ever-evolving selves (at every scale, from within the human body to the Big We of the biosphere and beyond).

Thanks for listening and enjoy this conversation!

Project Links
  • Learn more about this project and read the essays so far (1, 2, 3, 4, 5)
  • Make tax-deductible donations to Humans On The Loop
  • Browse the HOTL reading list and support local booksellers
  • Join the Holistic Technology & Wise Innovation Discord server
  • Join the private Future Fossils Facebook group
  • Hire me for consulting or advisory work

Chapters
0:00:00 – Teaser
0:00:48 – Intro
0:06:33 – The Black, White, and Gray of Agency
0:10:54 – Stephanie's Initiation into Multiperspectivalism
0:15:57 – Hegelian Synthesis with Faces of X
0:23:53 – Reconciling Culture & Geography
0:29:02 – Improvising Faces of X for AI
0:46:34 – Do Artifacts Have Politics?
0:50:04 – Playing in An Orchestra of Perspectives
0:55:10 – Increasing Agency in Policy & Voting
1:05:55 – Self-Determination in The Family
1:08:39 – Thanks & Outro

Other Mentions
  • Damien Walter on Andor vs. The Acolyte
  • William Irwin Thompson
  • John Perry Barlow's "A Declaration of the Independence of Cyberspace"
  • Cosma Shalizi and Henry Farrell's "Artificial intelligence is a familiar-looking monster"
  • Liv Boeree
  • Allen Ginsberg
  • Scott Alexander's Meditations on Moloch
  • Singularity University
  • Android Jones + Anson Phong's Chimera
  • Basecamp
  • Grimes
  • Langdon Winner's "Do Artifacts Have Politics?"
  • Ibram X. Kendi
  • Coleman Hughes
  • Jim Rutt

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit michaelgarfield.substack.com/subscribe

Your Undivided Attention
Ask Us Anything 2024

Dec 19, 2024 · 40:04


2024 was a critical year in both AI and social media. Things moved so fast it was hard to keep up. So our hosts reached into their mailbag to answer some of your most burning questions. Thank you so much to everyone who submitted questions. We will see you all in the new year.

We are hiring for a new Director of Philanthropy at CHT. Next year will be an absolutely critical time for us to shape how AI is going to get rolled out across our society, and our team is working hard on public awareness, policy, and technology and design interventions. So we're looking for someone who can help us grow to the scale of this challenge. If you're interested, please apply. You can find the job posting at humanetech.com/careers.

And, if you'd like to support all the work that we do here at the Center for Humane Technology, please consider giving to the organization this holiday season at humantech.com/donate. All donations are tax-deductible.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

RECOMMENDED MEDIA
  • Earth Species Project, Aza's organization working on inter-species communication
  • Further reading on Gryphon Scientific's White House AI demo
  • Further reading on the Australian social media ban for children under 16
  • Further reading on the Sewell Setzer case
  • Further reading on the Oviedo Convention, the international treaty that restricted germline editing
  • Video of SpaceX's successful capture of a rocket with "chopsticks"

RECOMMENDED YUA EPISODES
  • What Can We Do About Abusive Chatbots? With Meetali Jain and Camille Carlton
  • AI Is Moving Fast. We Need Laws that Will Too.
  • This Moment in AI: How We Got Here and Where We're Going
  • Former OpenAI Engineer William Saunders on Silence, Safety, and the Right to Warn
  • Talking With Animals... Using AI
  • The Three Rules of Humane Tech

The Megyn Kelly Show
RFK and Hegseth's Path to Confirmation, and Dangers of AI, with Mark Halperin, Sean Spicer, Dan Turrentine, and Tristan Harris | Ep. 967

The Megyn Kelly Show

Play Episode Listen Later Dec 17, 2024 111:09


Megyn Kelly is joined by Mark Halperin, Sean Spicer, and Dan Turrentine, hosts of 2WAY's Morning Meeting, to discuss Donald Trump's news-making press conference, Trump showing a “kinder and gentler” side, how elites and executives are now trying to cozy up to Trump, Trump's legal strategies, the recent wave of false attacks against Robert F. Kennedy Jr. regarding his lawyer and the polio vaccine, how the MAHA movement brought more women to the Republican party, the chance some Democrats end up supporting RFK even if he loses some GOP senators in his HHS nomination, new media smear attempts against Pete Hegseth, whether the accuser could turn his hearings into “Kavanaugh 2.0” and testify, the state of his nomination, Kamala Harris back in the news with her cringe new speech, the possibilities of her running for Governor of California or the Democratic nomination for president in 2028, the total lack of media coverage of why she lost so badly, and more. Then Tristan Harris, executive director of the Center for Humane Technology, joins to discuss the latest developments in AI chatbot technology, how chatbots can be targeted to children and teens, the dangers they pose, several lawsuits alleging that AI chatbots encouraged teens to take their own lives, whether Elon Musk and David Sacks can help combat this issue in the next administration, Australia's social media ban for kids, a 15-year-old female school shooter in Wisconsin, a new poll showing young people finding it "acceptable" that the assassin killed the UnitedHealthcare CEO, and more. Plus, Megyn gives an update on CNN refusing to take accountability for their false Syria prison report.
Halperin: https://www.youtube.com/@2WayTVApp
Spicer: https://www.youtube.com/@SeanMSpicer
Turrentine: https://x.com/danturrentine
Harris: https://www.humanetech.com/
Home Title Lock: Go to https://HomeTitleLock.com/megynkelly and use promo code MEGYN to get a 30-day FREE trial of Triple Lock Protection and a FREE title history report!
Cozy Earth: https://www.CozyEarth.com/MEGYN | code MEGYN
Follow The Megyn Kelly Show on all social platforms:
YouTube: https://www.youtube.com/MegynKelly
Twitter: http://Twitter.com/MegynKellyShow
Instagram: http://Instagram.com/MegynKellyShow
Facebook: http://Facebook.com/MegynKellyShow
Find out more information at: https://www.devilmaycaremedia.com/megynkellyshow

The Glenn Beck Program
Best of the Program | Guest: Tristan Harris | 12/11/24

The Glenn Beck Program

Play Episode Listen Later Dec 11, 2024 53:59


Glenn begins the show by explaining why he lacks the Christmas spirit this year, forcing him to examine the greatest gift ever given to mankind. Glenn plays more outrageous statements made by "journalist" Taylor Lorenz and a BLM member from New York. Does the First Amendment protect these horrific statements? Bill O'Reilly gives his opinion on this latest example of the media's egregious behavior. Center for Humane Technology co-founder Tristan Harris joins to discuss the developments in a major case involving more children harmed by AI chatbots. Learn more about your ad choices. Visit megaphone.fm/adchoices

The Glenn Beck Program
Glenn GOES BALLISTIC Over the Media's Love Affair with Alleged Murderer | Guests: Tristan Harris & Kevin Freeman | 12/11/24

The Glenn Beck Program

Play Episode Listen Later Dec 11, 2024 130:38


Glenn begins the show by explaining why he lacks the Christmas spirit this year, forcing him to examine the greatest gift ever given to mankind. An anchor on CNN asked to remove the chyron so the full photo of the UnitedHealthcare CEO murder suspect would be shown to show off his "attractiveness." Why are so many people glorifying the man accused of murdering a father and husband in cold blood? Glenn plays more outrageous statements made by "journalist" Taylor Lorenz and a BLM member from New York. Does the First Amendment protect these horrific statements? Bill O'Reilly gives his opinion on this latest example of the media's egregious behavior. BlazeTV host of "Economic War Room" Kevin Freeman joins to explain what a gold-backed currency would mean for the U.S. dollar. Megan Garcia, a mother seeking justice for her son's AI-linked suicide, joins alongside her lawyer Meetali Jain, to share her tragic story and how her recent lawsuit aims to keep this from happening to other parents. Center for Humane Technology co-founder Tristan Harris joins to discuss the developments in a major case involving more children harmed by AI chatbots.  Learn more about your ad choices. Visit megaphone.fm/adchoices

RadicalxChange(s)
Joe Edelman: Co-Founder of Meaning Alignment Institute

RadicalxChange(s)

Play Episode Listen Later Dec 6, 2024 81:45


What happens when artificial intelligence starts weighing in on our moral decisions? Matt Prewitt is joined by Meaning Alignment Institute co-founder Joe Edelman to explore this thought-provoking territory, examining how AI is already shaping our daily experiences and values through social media algorithms. They explore the tools developed to help individuals negotiate their values and the implications of AI in moral reasoning – venturing into compelling questions about human-AI symbiosis, the nature of meaningful experiences, and whether machines can truly understand what matters to us. For anyone intrigued by the future of human consciousness and decision-making in an AI-integrated world, this discussion opens up fascinating possibilities – and potential pitfalls – we may not have considered.
Links & References:
CouchSurfing - Wikipedia | CouchSurfing.org | Website
Tristan Harris: How a handful of tech companies control billions of minds every day | TED Talk
Center for Humane Technology | Website
Meaning Alignment Institute | Website
Replika - AI Girlfriend/Boyfriend
Will AI Improve Exponentially At Value Judgments? - by Matt Prewitt | RadicalxChange
Moral Realism (Stanford Encyclopedia of Philosophy)
Summa Theologica - Wikipedia
When Generative AI Refuses To Answer Questions, AI Ethics And AI Law Get Deeply Worried | AI Refusals
Amanda Askell: The 100 Most Influential People in AI 2024 | TIME | Amanda Askell's work at Anthropic
Overcoming Epistemology by Charles Taylor
God, Beauty, and Symmetry in Science - Catholic Stand | Thomas Aquinas on symmetry
Friedrich Hayek - Wikipedia | “Hayekian”
Eliezer Yudkowsky - Wikipedia | “AI policy people, especially in this kind of Yudkowskyian scene”
Resource-rational analysis: Understanding human cognition as the optimal use of limited computational resources | Resource rational (cognitive science term)
Papers & posts mentioned:
[2404.10636] What are human values, and how do we align AI to them? | Paper by Oliver Klingefjord, Ryan Lowe, Joe Edelman
Model Integrity - by Joe Edelman and Oliver Klingefjord | Meaning Alignment Institute Substack
Bios:
Joe Edelman is a philosopher, sociologist, and entrepreneur whose work spans from theoretical philosophy to practical applications in technology and governance. He invented the meaning-based metrics used at CouchSurfing, Facebook, and Apple, and co-founded the Center for Humane Technology and the Meaning Alignment Institute. His biggest contribution is a definition of "human values" that's precise enough to create product metrics, aligned ML models, and values-based democratic structures.
Joe's Social Links:
Meaning Alignment Institute | Website
Meaning Alignment Institute (@meaningaligned) / X
Joe Edelman (@edelwax) / X
Matt Prewitt (he/him) is a lawyer, technologist, and writer. He is the President of the RadicalxChange Foundation.
Matt's Social Links:
ᴍᴀᴛᴛ ᴘʀᴇᴡɪᴛᴛ (@m_t_prewitt) / X
Connect with RadicalxChange Foundation:
RadicalxChange Website
@RadxChange | Twitter
RxC | YouTube
RxC | Instagram
RxC | LinkedIn
Join the conversation on Discord.
Credits:
Produced by G. Angela Corpus.
Co-Produced, Edited, Narrated, and Audio Engineered by Aaron Benavides.
Executive Produced by G. Angela Corpus and Matt Prewitt.
Intro/Outro music by MagnusMoone, “Wind in the Willows,” licensed under an Attribution-NonCommercial-ShareAlike 3.0 International License (CC BY-NC-SA 3.0)

Tech Mirror
Liability Is Not A Dirty Word, with Casey Mock from the Centre for Humane Technology

Tech Mirror

Play Episode Listen Later Dec 3, 2024 60:18


Chief Policy and Public Affairs Officer at the Center for Humane Technology, Casey Mock joins Johanna for a discussion on incentives for building safer and more humane technology. Casey and Johanna discuss designing platforms for people and not just profit, how to realign incentives in tech using the well-established concept of legal liability, what to expect from a Trump administration in regards to tech policy, creative ways to overcome legal logjams, and how – contrary to popular belief – clear liability legislation empowers innovation. They also explore Australia's under 16 social media ban, different approaches globally to tackle similar issues, and Australia's reputation internationally on tech legislation.
Key Links:
Check out the Centre for Humane Technology's ‘Framework for Incentivizing Responsible Artificial Intelligence Development and Use' here: https://www.humanetech.com/insights/framework-for-incentivizing-responsible-artificial-intelligence
Connect with Casey Mock on LinkedIn: https://www.linkedin.com/in/caseymock/
Keep up to date with the Tech Policy Design Centre: https://techpolicydesign.au/news-and-events
See omnystudio.com/listener for privacy information.

It Starts With Attraction
Navigating Mental Health and Technology for the Next Generation with Zach Rausch

It Starts With Attraction

Play Episode Listen Later Nov 26, 2024 55:46 Transcription Available


Have a question you want answered? Submit it here!
Discover the hidden costs of our digital age as I sit down with Zach Rausch, the lead researcher behind "The Anxious Generation." Zach opens up about his personal journey with mental health challenges and how it fueled his passion to explore the complex relationship between technology and well-being. This episode peels back the layers on the disturbing rise in loneliness, anxiety, and depression among young people, especially adolescent girls, as they grapple with the very tools meant to connect them. We tackle the sobering reality of international trends affecting mental health and stress the urgency of addressing these issues for the sake of future generations.
Zach Rausch is an Associate Research Scientist at the NYU Stern School of Business and lead researcher for social psychologist Jonathan Haidt and the #1 New York Times bestseller, The Anxious Generation. Zach previously worked at the Center for Humane Technology and as Communications Manager at Heterodox Academy. He earned a Bachelor of Arts in sociology and religious studies and a Master of Science in psychological science from SUNY New Paltz. Zach previously studied Buddhism in Bodh Gaya, India, worked in wilderness therapy, and was a direct care worker in two psychiatric group homes.
Zach's research and writing have been featured internationally, in outlets such as The New York Times, The Atlantic, The Boston Globe, The Wall Street Journal, and more.
Twitter: https://twitter.com/ZachMRausch
Newsletter: After Babel
Website: https://zach-rausch.com/
Anxious Generation: https://anxiousgeneration.com
Your Host: Kimberly Beam Holmes, Expert in Self-Improvement and Relationships
Kimberly Beam Holmes has applied her master's degree in psychology for over ten years, acting as the CEO of Marriage Helper & CEO and Creator of PIES University, being a wife and mother herself, and researching how attraction affects relationships.
Her videos, podcasts, and following reach over 500,000 people a month who are making changes and becoming the best they can be.

Your Undivided Attention
The Tech-God Complex: Why We Need to be Skeptics

Your Undivided Attention

Play Episode Listen Later Nov 21, 2024 46:32


Silicon Valley's interest in AI is driven by more than just profit and innovation. There's an unmistakable mystical quality to it as well. In this episode, Daniel and Aza sit down with humanist chaplain Greg Epstein to explore the fascinating parallels between technology and religion. From AI being treated as a godlike force to tech leaders' promises of digital salvation, religious thinking is shaping the future of technology and humanity. Epstein breaks down why he believes technology has become our era's most influential religion and what we can learn from these parallels to better understand where we're heading.
Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X. If you like the show and want to support CHT's mission, please consider donating to the organization this giving season: https://www.humanetech.com/donate. Any amount helps support our goal to bring about a more humane future.
RECOMMENDED MEDIA
“Tech Agnostic” by Greg Epstein
Further reading on Avi Schiffmann's “Friend” AI necklace
Further reading on Blake Lemoine and Lamda
Blake Lemoine's conversation with Greg at MIT
Further reading on the Sewell Setzer case
Further reading on Terminal of Truths
Further reading on Ray Kurzweil's attempt to create a digital recreation of his dad with AI
The Drama of the Gifted Child by Alice Miller
RECOMMENDED YUA EPISODES
'A Turning Point in History': Yuval Noah Harari on AI's Cultural Takeover
How to Think About AI Consciousness with Anil Seth
Can Myth Teach Us Anything About the Race to Build Artificial General Intelligence? With Josh Schrei
How To Free Our Minds with Cult Deprogramming Expert Dr. Steven Hassan

Your Undivided Attention
What Can We Do About Abusive Chatbots? With Meetali Jain and Camille Carlton

Your Undivided Attention

Play Episode Listen Later Nov 7, 2024 48:44


CW: This episode features discussion of suicide and sexual abuse. In the last episode, we had the journalist Laurie Segall on to talk about the tragic story of Sewell Setzer, a 14-year-old boy who took his own life after months of abuse and manipulation by an AI companion from the company Character.ai. The question now is: what's next?
Sewell's mother, Megan Garcia, has filed a major new lawsuit against Character.ai in Florida, which could force the company–and potentially the entire AI industry–to change its harmful business practices. So today on the show, we have Meetali Jain, director of the Tech Justice Law Project and one of the lead lawyers in Megan's case against Character.ai. Meetali breaks down the details of the case, the complex legal questions under consideration, and how this could be the first step toward systemic change. Also joining is Camille Carlton, CHT's Policy Director.
RECOMMENDED MEDIA
Further reading on Sewell's story
Laurie Segall's interview with Megan Garcia
The full complaint filed by Megan against Character.AI
Further reading on suicide bots
Further reading on Noam Shazeer and Daniel De Freitas' relationship with Google
The CHT Framework for Incentivizing Responsible Artificial Intelligence Development and Use
Organizations mentioned:
The Tech Justice Law Project
The Social Media Victims Law Center
Mothers Against Media Addiction
Parents SOS
Parents Together
Common Sense Media
RECOMMENDED YUA EPISODES
When the "Person" Abusing Your Child is a Chatbot: The Tragic Story of Sewell Setzer
Jonathan Haidt On How to Solve the Teen Mental Health Crisis
AI Is Moving Fast. We Need Laws that Will Too.
Corrections: Meetali referred to certain chatbot apps as banning users under 18; however, the settings for the major app stores ban users under 17, not under 18. Meetali referred to Section 230 as providing “full scope immunity” to internet companies; however, Congress has passed subsequent laws that carve out that immunity for criminal acts such as sex trafficking and intellectual property theft.
Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

Your Undivided Attention
When the "Person" Abusing Your Child is a Chatbot: The Tragic Story of Sewell Setzer

Your Undivided Attention

Play Episode Listen Later Oct 24, 2024 49:10


Content Warning: This episode contains references to suicide, self-harm, and sexual abuse.
Megan Garcia lost her son Sewell to suicide after he was abused and manipulated by AI chatbots for months. Now, she's suing the company that made those chatbots. On today's episode of Your Undivided Attention, Aza sits down with journalist Laurie Segall, who's been following this case for months. Plus, Laurie's full interview with Megan on her new show, Dear Tomorrow.
Aza and Laurie discuss the profound implications of Sewell's story on the rollout of AI. Social media began the race to the bottom of the brain stem and left our society addicted, distracted, and polarized. Generative AI is set to supercharge that race, taking advantage of the human need for intimacy and connection amidst a widespread loneliness epidemic. Unless we set down guardrails on this technology now, Sewell's story may be a tragic sign of things to come, but it also presents an opportunity to prevent further harms moving forward.
If you or someone you know is struggling with mental health, you can reach out to the 988 Suicide and Crisis Lifeline by calling or texting 988; this connects you to trained crisis counselors 24/7 who can provide support and referrals to further assistance.
Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_
RECOMMENDED MEDIA
The CHT Framework for Incentivizing Responsible AI Development
Further reading on Sewell's case
Character.ai's “About Us” page
Further reading on the addictive properties of AI
RECOMMENDED YUA EPISODES
AI Is Moving Fast. We Need Laws that Will Too.
This Moment in AI: How We Got Here and Where We're Going
Jonathan Haidt On How to Solve the Teen Mental Health Crisis
The AI Dilemma

Your Undivided Attention
Is It AI? One Tool to Tell What's Real with Truemedia.org CEO Oren Etzioni

Your Undivided Attention

Play Episode Listen Later Oct 10, 2024 25:36


Social media disinformation did enormous damage to our shared idea of reality. Now, the rise of generative AI has unleashed a flood of high-quality synthetic media into the digital ecosystem. As a result, it's more difficult than ever to tell what's real and what's not, a problem with profound implications for the health of our society and democracy. So how do we fix this critical issue?
As it turns out, there's a whole ecosystem of folks working to answer that question. One is computer scientist Oren Etzioni, the CEO of TrueMedia.org, a free, non-partisan, non-profit tool that is able to detect AI-generated content with a high degree of accuracy. Oren joins the show this week to talk about the problem of deepfakes and disinformation and what he sees as the best solutions.
Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_
RECOMMENDED MEDIA
TrueMedia.org
Further reading on the deepfaked image of an explosion near the Pentagon
Further reading on the deepfaked robocall pretending to be President Biden
Further reading on the election deepfake in Slovakia
Further reading on the President Obama lip-syncing deepfake from 2017
One of several deepfake quizzes from the New York Times, test yourself!
The Partnership on AI
C2PA
Witness.org
Truepic
RECOMMENDED YUA EPISODES
‘We Have to Get It Right': Gary Marcus On Untamed AI
Taylor Swift is Not Alone: The Deepfake Nightmare Sweeping the Internet
Synthetic Humanity: AI & What's At Stake
CLARIFICATION: Oren said that the largest social media platforms “don't see a responsibility to let the public know this was manipulated by AI.” Meta has made a public commitment to flagging AI-generated or -manipulated content, whereas other platforms, like TikTok and Snapchat, rely on users to flag it.

Your Undivided Attention
'A Turning Point in History': Yuval Noah Harari on AI's Cultural Takeover

Your Undivided Attention

Play Episode Listen Later Oct 7, 2024 90:41


Historian Yuval Noah Harari says that we are at a critical turning point, one in which AI's ability to generate cultural artifacts threatens humanity's role as the shapers of history. History will still go on, but will it be the story of people or, as he calls them, ‘alien AI agents'?
In this conversation with Aza Raskin, Harari discusses the historical struggles that emerge from new technology, humanity's AI mistakes so far, and the immediate steps lawmakers can take right now to steer us towards a non-dystopian future.
This episode was recorded live at the Commonwealth Club World Affairs of California.
Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_
RECOMMENDED MEDIA
NEXUS: A Brief History of Information Networks from the Stone Age to AI by Yuval Noah Harari
You Can Have the Blue Pill or the Red Pill, and We're Out of Blue Pills: a New York Times op-ed from 2023, written by Yuval, Aza, and Tristan
The 2023 open letter calling for a pause in AI development of at least 6 months, signed by Yuval and Aza
Further reading on the Stanford Marshmallow Experiment
Further reading on AlphaGo's “move 37”
Further reading on Social.AI
RECOMMENDED YUA EPISODES
This Moment in AI: How We Got Here and Where We're Going
The Tech We Need for 21st Century Democracy with Divya Siddarth
Synthetic Humanity: AI & What's At Stake
The AI Dilemma
Two Million Years in Two Hours: A Conversation with Yuval Noah Harari

The Problem With Jon Stewart
How Algorithms, Money, & Bureaucracy Distance us from Democracy

The Problem With Jon Stewart

Play Episode Listen Later Oct 3, 2024 62:14


With the election just over a month away, Americans are caught between a flood of political promises and the reality that we live in a time of political dysfunction. Joining us this week to explore the root causes are Ezra Klein, opinion columnist at The New York Times, host of "The Ezra Klein Show" podcast, and author of "Why We're Polarized," alongside Tristan Harris, co-founder of the Center for Humane Technology and co-host of "Your Undivided Attention" podcast. We examine how engagement-driven metrics and algorithms shape public discourse, fueling demagoguery and widening the gap between political rhetoric and public needs. Follow The Weekly Show with Jon Stewart on social media for more:  > YouTube: https://www.youtube.com/@weeklyshowpodcast > Instagram: https://www.instagram.com/weeklyshowpodcast > TikTok: https://tiktok.com/@weeklyshowpodcast  > X: https://x.com/weeklyshowpod Host/Executive Producer – Jon Stewart Executive Producer – James Dixon Executive Producer – Chris McShane Executive Producer – Caity Gray Lead Producer – Lauren Walker Producer – Brittany Mehmedovic Video Editor & Engineer – Rob Vitolo Audio Editor & Engineer – Nicole Boyce Researcher/Associate Producer – Gillian Spear Music by Hansdle Hsu — This podcast is brought to you by: ZipRecruiter Try it for free at this exclusive web address: ziprecruiter.com/ZipWeekly Learn more about your ad choices. Visit megaphone.fm/adchoices

Your Undivided Attention
‘We Have to Get It Right': Gary Marcus On Untamed AI

Your Undivided Attention

Play Episode Listen Later Sep 26, 2024 41:43


It's a confusing moment in AI. Depending on who you ask, we're either on the fast track to AI that's smarter than most humans, or the technology is about to hit a wall. Gary Marcus is in the latter camp. He's a cognitive psychologist and computer scientist who built his own successful AI start-up. But he's also been called AI's loudest critic.
On Your Undivided Attention this week, Gary sits down with CHT Executive Director Daniel Barcay to defend his skepticism of generative AI and to discuss what we need to do as a society to get the rollout of this technology right… which is the focus of his new book, Taming Silicon Valley: How We Can Ensure That AI Works for Us.
The bottom line: No matter how quickly AI progresses, Gary argues that our society is woefully unprepared for the risks that will come from the AI we already have.
Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_
RECOMMENDED MEDIA
Link to Gary's book: Taming Silicon Valley: How We Can Ensure That AI Works for Us
Further reading on the deepfake of the CEO of India's National Stock Exchange
Further reading on the deepfake of an explosion near the Pentagon
The study Gary cited on AI and false memories
Footage from Gary and Sam Altman's Senate testimony
RECOMMENDED YUA EPISODES
Former OpenAI Engineer William Saunders on Silence, Safety, and the Right to Warn
Taylor Swift is Not Alone: The Deepfake Nightmare Sweeping the Internet
No One is Immune to AI Harms with Dr. Joy Buolamwini
Correction: Gary mistakenly listed the reliability of GPS systems as 98%. The federal government's standard for GPS reliability is 95%.

Offline with Jon Favreau
GOP's "Black Nazi" Porn Posting, Instagram's New Rules, and Tristan Harris's Guide to Humane Technology

Offline with Jon Favreau

Play Episode Listen Later Sep 22, 2024 73:30


Tristan Harris, co-founder of the Center for Humane Technology and ex-design ethicist at Google, joins Offline to chat about the attention economy, why tech execs don't let their own kids on the apps, and how our AI arms race is one giant game of Jenga. But first! Jon and Max break down Instagram's new sweeping changes for teen users—do they address child safety concerns? Why now? Will kids be able to outsmart the new rules? Then they turn to pet-obsessed Springfield, Ohio, which has been suffering through some of the most pestilent (and catchy) misinformation of this election cycle. To close it out, the guys break down North Carolina Lt. Governor Mark Robinson's slew of scandals, and how Republicans are shamelessly endorsing him nonetheless. For a closed-captioned version of this episode, click here. For a transcript of this episode, please email transcripts@crooked.com and include the name of the podcast.

The Glenn Beck Program
Kamala Should Get an OSCAR for Pandering to Gun Owners | Guests: Robert Cahaly & Steve Baker | 9/20/24

The Glenn Beck Program

Play Episode Listen Later Sep 20, 2024 127:45


Glenn and Stu discuss Kamala Harris' recent event with Oprah Winfrey, where she promised to fix all the problems she and Biden have caused. Kamala put on an Oscar-worthy performance when she pandered to gun owners. An Alaskan Democratic donor has been charged after allegedly threatening six unnamed Supreme Court justices. Which six justices would a Democrat want to threaten? Trafalgar Group chief pollster Robert Cahaly joins to discuss what a "submerged Republican voter" is and how these voters aren't being represented in polling numbers. Center for Humane Technology co-founder Tristan Harris joins to discuss the promise and peril of AI and how fast it will infiltrate society. Goya Foods President and CEO Robert Unanue joins to discuss how he refused to be canceled by woke culture after coming out in support of Donald Trump. Blaze Media correspondent Steve Baker joins to discuss some shocking revelations regarding Trump's actions on January 6. Learn more about your ad choices. Visit megaphone.fm/adchoices

Inside with Jen Psaki
Losing It: Trump Combusts While Harris Connects with Voters

Inside with Jen Psaki

Play Episode Listen Later Sep 15, 2024 41:10


As Donald Trump recovers from his embarrassing debate performance, Jen Psaki breaks down how he is reverting to fear-mongering while Kamala Harris surges ahead in polls and connects with voters. Jen is joined by George Conway and Representative Jasmine Crockett to discuss the state of the race, including Trump's baseless claims that Haitian immigrants in Ohio are eating pets. Ohio Congresswoman Shontel Brown also joins Jen to react to the false narrative being spread by the GOP and the dangerous impact it is having on the Springfield community. Next, Jen talks about Trump's increasingly close relationship with far-right, racist activist Laura Loomer. Former Trump White House Deputy Press Secretary Sarah Matthews joins Jen to discuss Loomer's influence and the type of people Trump would pick to staff a potential second administration. Later, Jen is joined by Tristan Harris, co-founder of the Center for Humane Technology, to discuss the dangers of AI and why tech companies should be held accountable by lawmakers.Check out our social pages below:https://twitter.com/InsideWithPsakihttps://www.instagram.com/InsideWithPsaki/https://www.tiktok.com/@insidewithpsakihttps://www.msnbc.com/jen-psaki

Your Undivided Attention
AI Is Moving Fast. We Need Laws that Will Too.

Your Undivided Attention

Play Episode Listen Later Sep 13, 2024 39:09


AI is moving fast. And as companies race to roll out newer, more capable models–with little regard for safety–the downstream risks of those models become harder and harder to counter. On this week's episode of Your Undivided Attention, CHT's policy director Casey Mock comes on the show to discuss a new legal framework to incentivize better AI, one that holds AI companies liable for the harms of their products.
Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_
RECOMMENDED MEDIA
The CHT Framework for Incentivizing Responsible AI Development
Further reading on Air Canada's chatbot fiasco
Further reading on the Elon Musk deepfake scams
The full text of SB1047, California's AI regulation bill
Further reading on SB1047
RECOMMENDED YUA EPISODES
Former OpenAI Engineer William Saunders on Silence, Safety, and the Right to Warn
Can We Govern AI? with Marietje Schaake
A First Step Toward AI Regulation with Tom Wheeler
Correction: Casey incorrectly stated the year that the US banned child labor as 1937. It was banned in 1938.

Your Undivided Attention
Esther Perel on Artificial Intimacy

Your Undivided Attention

Play Episode Listen Later Sep 6, 2024 44:52


[This episode originally aired on August 17, 2023] For all the talk about AI, we rarely hear about how it will change our relationships. As we swipe to find love and consult chatbot therapists, acclaimed psychotherapist and relationship expert Esther Perel warns of another harmful “AI” on the rise — Artificial Intimacy — and how it is depriving us of real connection. Tristan and Esther discuss how depending on algorithms can fuel alienation, and then imagine how we might design technology to strengthen our social bonds.
RECOMMENDED MEDIA
Mating in Captivity by Esther Perel: Esther's debut work on the intricacies behind modern relationships, and the dichotomy of domesticity and sexual desire
The State of Affairs by Esther Perel: Esther takes a look at modern relationships through the lens of infidelity
Where Should We Begin? with Esther Perel: Listen in as real couples in search of help bare the raw and profound details of their stories
How's Work? with Esther Perel: Esther's podcast that focuses on the hard conversations we're afraid to have at work
Lars and the Real Girl (2007): A young man strikes up an unconventional relationship with a doll he finds on the internet
Her (2013): In a near future, a lonely writer develops an unlikely relationship with an operating system designed to meet his every need
RECOMMENDED YUA EPISODES
Big Food, Big Tech and Big AI with Michael Moss
The AI Dilemma
The Three Rules of Humane Tech
Digital Democracy is Within Reach with Audrey Tang
CORRECTION: Esther refers to the 2007 film Lars and the Real Doll. The title of the film is Lars and the Real Girl.
Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

Your Undivided Attention
Tech's Big Money Campaign is Getting Pushback with Margaret O'Mara and Brody Mullins

Your Undivided Attention

Play Episode Listen Later Aug 26, 2024 43:59


Today, the tech industry is the second-biggest lobbying power in Washington, DC, but that wasn't true as recently as ten years ago. How did we get to this moment? And where could we be going next? On this episode of Your Undivided Attention, Tristan and Daniel sit down with historian Margaret O'Mara and journalist Brody Mullins to discuss how Silicon Valley has changed the nature of American lobbying. Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

RECOMMENDED MEDIA
The Wolves of K Street: The Secret History of How Big Money Took Over Big Government: Brody's book on the history of lobbying
The Code: Silicon Valley and the Remaking of America: Margaret's book on the historical relationship between Silicon Valley and Capitol Hill
More information on the Google antitrust ruling
More information on KOSPA
More information on the SOPA/PIPA internet blackout
Detailed breakdown of Internet lobbying from OpenSecrets

RECOMMENDED YUA EPISODES
U.S. Senators Grilled Social Media CEOs. Will Anything Change?
Can We Govern AI? with Marietje Schaake
The Race to Cooperation with David Sloan Wilson

CORRECTION: Brody Mullins refers to AT&T as having a "hundred million dollar" lobbying budget in 2006 and 2007. While we couldn't verify the size of their budget for lobbying, their actual lobbying spend was much less than this: $27.4m in 2006 and $16.5m in 2007, according to OpenSecrets.

The views expressed by guests appearing on Center for Humane Technology's podcast, Your Undivided Attention, are their own, and do not necessarily reflect the views of CHT. CHT does not support or oppose any candidate or party for election to public office.

Your Undivided Attention
This Moment in AI: How We Got Here and Where We're Going

Your Undivided Attention

Play Episode Listen Later Aug 12, 2024 36:55


It's been a year and a half since Tristan and Aza laid out their vision and concerns for the future of artificial intelligence in The AI Dilemma. In this Spotlight episode, the guys discuss what's happened since then, as funding, research, and public interest in AI have exploded, and where we could be headed next. Plus, some major updates on social media reform, including the passage of the Kids Online Safety and Privacy Act in the Senate. Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

RECOMMENDED MEDIA
The AI Dilemma: Tristan and Aza's talk on the catastrophic risks posed by AI
Info Sheet on KOSPA: More information on KOSPA from FairPlay
Situational Awareness by Leopold Aschenbrenner: A widely cited blog from a former OpenAI employee, predicting the rapid arrival of AGI
AI for Good: More information on the AI for Good summit that was held earlier this year in Geneva
Using AlphaFold in the Fight Against Plastic Pollution: More information on Google's use of AlphaFold to create an enzyme to break down plastics
Swiss Call For Trust and Transparency in AI: More information on the initiatives mentioned by Katharina Frey

RECOMMENDED YUA EPISODES
War is a Laboratory for AI with Paul Scharre
Jonathan Haidt On How to Solve the Teen Mental Health Crisis
Can We Govern AI? with Marietje Schaake
The Three Rules of Humane Tech
The AI Dilemma

Clarification: Swiss diplomat Nina Frey's full name is Katharina Frey.

Creating Wealth
Navigating the AI Revolution: Insights for Investors and Workers, Part Two

Creating Wealth

Play Episode Listen Later Jun 21, 2024 14:59


In April 2023, we discussed the rise of AI, our experiences with ChatGPT, and the profound changes AI is poised to bring to our world. We concluded that AI would transform nearly everything, with some changes foreseeable and others beyond our current understanding. Given the rapid advancements in AI, it's time for an update.

In Part One, we explore the latest developments in the AI space, what new technology is on the horizon, the potential risks and societal implications of increased AI adoption, and key insights for investors navigating the evolving AI landscape. In Part Two, we continue the discussion to provide important information for workers to consider as they adapt to AI-driven changes in the job market.

Join us as we unpack these developments, providing valuable perspectives for both investors and workers to navigate the ongoing AI revolution with confidence and success.

For questions, comments, and topic suggestions, email us at askcreatingwealth@taberasset.com. Don't miss out—subscribe now to stay informed on the latest investment trends and personal finance topics!

Resources:
The Last Blockbuster documentary
The Precipice: Existential Risk and the Future of Humanity by Toby Ord
List of nuclear warhead accidents
Center for Humane Technology

Too Opinionated
Too Opinionated Interview: Natalie Boll

Too Opinionated

Play Episode Listen Later Jun 20, 2024 62:12


Entrepreneur and seasoned film and documentary producer Natalie Boll was inspired by her personal experiences and the pervasive issues within current social media platforms to create Tribela, an innovative online platform dedicated to the safety and well-being of its users.

Natalie is an award-winning producer from Vancouver, BC, who has been a dynamic force in the entertainment industry for over two decades. Her extensive experience managing large teams and significant budgets has prepared her for the challenge, and she has led the development of Tribela. Her journey includes comprehensive research and development phases, in which she engaged in educational programs to deepen her expertise: Harvard Business School's online courses in Launching Tech Ventures and Leading with Finance, studies through the Center for Humane Technology in San Francisco on the Foundations of Humane Technology, and a course on the Well-being of Teens at Yale University. She further honed her leadership and management skills at Oxford University's Saïd Business School Advanced Management and Leadership Programme. Tribela's platform design, informed by this extensive research, aims to disrupt the industry with a focus on ethical and transparent AI and algorithm development.

Natalie is the co-owner of Athene Films, a Vancouver-based production company. Their recent work includes "Maker of Monsters: The Extraordinary Life of Beau Dick," which continues to air on CBC Gem. Production is underway on "Code Shift," a documentary exploring the creation of Tribela, in which Natalie leads a pivotal movement to redefine social media, weaving together her battles and behind-the-scenes insights into the industry's urgent need for change as she documents her journey to build Tribela in a male-dominated tech industry and the future of ethical technology through a female lens.

Natalie has also completed the pilot episode of WasteWise, which will be released on July 1.
The first season will be filmed in Rome during the fall of 2024. This engaging and inspiring half-hour YouTube original series focuses on sustainable home organization. Hosted by Ottavia Belli, CEO of Sfusitalia and an Italian-based environmental educator and speaker, the series aims to inspire viewers to adopt a zero-waste lifestyle through manageable steps.   Want to watch: YouTube Meisterkhan Pod (Please Subscribe)  

The Holistic Kids Show
150. Our Anxious Generation with Zach Rausch

The Holistic Kids Show

Play Episode Listen Later Jun 19, 2024 30:31


Zach Rausch is Associate Research Scientist at NYU-Stern School of Business, lead researcher to social psychologist Jonathan Haidt for the book The Anxious Generation: How the Great Rewiring of Childhood Is Causing an Epidemic of Mental Illness, and a researcher for the Center for Humane Technology. Zach worked for two years as Communications Manager at Heterodox Academy. He earned a Bachelor of Arts in sociology and religious studies and a Master of Science in psychological science from SUNY New Paltz. Zach previously studied Buddhism in Bodh Gaya, India, worked in wilderness therapy, and was a direct care worker in two psychiatric group homes.

Zach's research and writing have been featured and cited internationally, in outlets such as The Atlantic, The New Yorker, The Times, The After Babel Substack, The Free Press, Axios, Politiken, Zeit, and more. He has also given expert testimony to multiple state legislatures on the impact of social media on adolescent mental health. Zach has been called "a highly interesting person from the Anglosphere." Zach lives in Cambridge, Massachusetts, and enjoys trying to fix his bicycle.

Check out his new book, The Anxious Generation: How the Great Rewiring of Childhood Is Causing an Epidemic of Mental Illness: https://a.co/d/1KuyEJn

Your Undivided Attention
Former OpenAI Engineer William Saunders on Silence, Safety, and the Right to Warn

Your Undivided Attention

Play Episode Listen Later Jun 7, 2024 37:47


This week, a group of current and former employees from OpenAI and Google DeepMind penned an open letter accusing the industry's leading companies of prioritizing profits over safety. This comes after a spate of high-profile departures from OpenAI, including co-founder Ilya Sutskever and senior researcher Jan Leike, as well as reports that OpenAI has gone to great lengths to silence would-be whistleblowers.

The writers of the open letter argue that researchers have a "right to warn" the public about AI risks, and they laid out a series of principles that would protect that right. In this episode, we sit down with one of those writers: William Saunders, who left his job as a research engineer at OpenAI in February. William is now breaking the silence on what he saw at OpenAI that compelled him to leave the company and to put his name to this letter.

RECOMMENDED MEDIA
The Right to Warn Open Letter
My Perspective On "A Right to Warn about Advanced Artificial Intelligence": A follow-up from William about the letter
Leaked OpenAI documents reveal aggressive tactics toward former employees: An investigation by Vox into OpenAI's policy of non-disparagement

RECOMMENDED YUA EPISODES
A First Step Toward AI Regulation with Tom Wheeler
Spotlight on AI: What Would It Take For This to Go Well?
Big Food, Big Tech and Big AI with Michael Moss
Can We Govern AI? with Marietje Schaake

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

Your Undivided Attention
War is a Laboratory for AI with Paul Scharre

Your Undivided Attention

Play Episode Listen Later May 23, 2024 59:16


Right now, militaries around the globe are investing heavily in the use of AI weapons and drones. From Ukraine to Gaza, weapons systems with increasing levels of autonomy are being used to kill people and destroy infrastructure, and the development of fully autonomous weapons shows little sign of slowing down. What does this mean for the future of warfare? What safeguards can we put up around these systems? And is this runaway trend toward autonomous warfare inevitable, or will nations come together and choose a different path?

In this episode, Tristan and Daniel sit down with Paul Scharre to try to answer some of these questions. Paul is a former Army Ranger, the author of two books on autonomous weapons, and he helped the Department of Defense write much of its policy on the use of AI in weaponry.

RECOMMENDED MEDIA
Four Battlegrounds: Power in the Age of Artificial Intelligence: Paul's book on the future of AI in war, which came out in 2023
Army of None: Autonomous Weapons and the Future of War: Paul's 2018 book documenting and predicting the rise of autonomous and semi-autonomous weapons as part of modern warfare
The Perilous Coming Age of AI Warfare: How to Limit the Threat of Autonomous Warfare: Paul's article in Foreign Affairs based on his recent trip to the battlefield in Ukraine
The night the world almost ended: A BBC documentary about Stanislav Petrov's decision not to start nuclear war
AlphaDogfight Trials Final Event: The full simulated dogfight between an AI and human pilot. The AI pilot swept, 5-0

RECOMMENDED YUA EPISODES
The AI ‘Race': China vs. the US with Jeffrey Ding and Karen Hao
Can We Govern AI? with Marietje Schaake
Big Food, Big Tech and Big AI with Michael Moss
The Invisible Cyber-War with Nicole Perlroth

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

Your Undivided Attention
AI and Jobs: How to Make AI Work With Us, Not Against Us With Daron Acemoglu

Your Undivided Attention

Play Episode Listen Later May 9, 2024 47:02


Tech companies say that AI will lead to massive economic productivity gains. But as we know from the first digital revolution, that's not what happened. Can we do better this time around?

RECOMMENDED MEDIA
Power and Progress by Daron Acemoglu and Simon Johnson: Professor Acemoglu co-authored a bold reinterpretation of economics and history that will fundamentally change how you see the world
Can we Have Pro-Worker AI?: Professor Acemoglu co-authored this paper about redirecting AI development onto the human-complementary path
Rethinking Capitalism: In Conversation with Daron Acemoglu: The Wheeler Institute for Business and Development hosted Professor Acemoglu to examine how technology affects the distribution and growth of resources while being shaped by economic and social incentives

RECOMMENDED YUA EPISODES
The Three Rules of Humane Tech
The Tech We Need for 21st Century Democracy
Can We Govern AI?
An Alternative to Silicon Valley Unicorns

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

Clearer Thinking with Spencer Greenberg
Aligning society with our deepest values and sources of meaning (with Joe Edelman)

Clearer Thinking with Spencer Greenberg

Play Episode Listen Later May 8, 2024 69:48


Read the full transcript here. What are the best ways to define "values" and "meaning"? How can democratic processes harness people's intrinsic values and sources of meaning to increase their agency, cooperation, participation, equality, etc.? To what extent do political rivals — or even the bitterest of political enemies — actually value many of the same things? Might we be able to use AIs as "neutral" third-party mediators to help reduce political polarization, especially on an interpersonal level? How can we transform our personal values and sources of meaning into positive, shared visions for society? Are markets inherently antisocial? Or are they just easily made to be so? Companies frequently invoke our deepest needs and values as a bait-and-switch to sell us their goods and services; but since there must actually be demand to have those deep needs met and those deep values realized, why do companies so rarely attempt to supply goods and services that address those things directly? Assuming there actually is a lot of overlap in intrinsic values and sources of meaning across individuals and across groups, why do we still have such a hard time developing shared visions for society?Joe Edelman is a philosopher, sociologist, and entrepreneur. He invented the meaning-based metrics used at CouchSurfing, Facebook, and Apple, and co-founded the Center for Humane Technology and the Meaning Alignment Institute. His biggest contribution is a definition of "human values" that's precise enough to create product metrics, aligned ML models, and values-based democratic structures. 
Follow him on Twitter / X at @edelwax, email him at hello@meaningalignment.org, or learn more about him and the Meaning Alignment Institute (@meaningaligned on Twitter / X).

Further reading:
"OpenAI x DFT: The First Moral Graph" by Joe Edelman and Oliver Klingefjord — "Beyond Constitutional AI; Our first trial with 500 Americans; How democratic processes can generate an LLM we can trust."
"Replica Theory" by Spencer Greenberg
Joe's GPT bots for exploring your personal values

Staff
Spencer Greenberg — Host / Director
Josh Castle — Producer
Ryan Kessler — Audio Engineer
Uri Bram — Factotum
WeAmplify — Transcriptionists
Alexandria D. — Research and Special Projects Assistant

Music
Broke for Free
Josh Woodward
Lee Rosevere
Quiet Music for Tiny Robots
wowamusic
zapsplat.com

Affiliates
Clearer Thinking
GuidedTrack
Mind Ease
Positly
UpLift

Point of Relation with Thomas Huebl
Randima Fernando | Building Humane Technology

Point of Relation with Thomas Huebl

Play Episode Listen Later Apr 9, 2024 48:02


Thomas is joined by Co-Founder of the Center for Humane Technology, Randima Fernando. They discuss how technology hijacks our dopamine response and reinforces trauma symptoms. Randima explains the downstream consequences of the “attention economy,” including social media's addictiveness and its negative impacts on our psychology. He and Thomas explore how better technology education, along with mindfulness practices, can offset these negative effects and help us bring our dopamine systems back into balance. Randima emphasizes how important it is, for both children and adults, to understand our own moral motivations so that we can become less susceptible to manipulation from technology and media. This episode is part one of a four-part series on Technology, Innovation, and Consciousness. ✨  Join Thomas for a free, live online event with Q&A - The Evolving Map for Trauma Healing

Your Undivided Attention
Chips Are the Future of AI. They're Also Incredibly Vulnerable. With Chris Miller

Your Undivided Attention

Play Episode Listen Later Mar 29, 2024 45:16


Beneath the race to train and release more powerful AI models lies another race: a race by companies and nation-states to secure the hardware to make sure they win AI supremacy.

Correction: The latest available Nvidia chip is the Hopper H100 GPU, which has 80 billion transistors. Since the first commercially available chip had four transistors, the Hopper actually has 20 billion times that number. Nvidia recently announced the Blackwell, which boasts 208 billion transistors - but it won't ship until later this year.

RECOMMENDED MEDIA
Chip War: The Fight For the World's Most Critical Technology by Chris Miller: To make sense of the current state of politics, economics, and technology, we must first understand the vital role played by chips
Gordon Moore Biography & Facts: Gordon Moore, the Intel co-founder behind Moore's Law, passed away in March of 2023
AI's most popular chipmaker Nvidia is trying to use AI to design chips faster: Nvidia's GPUs are in high demand - and the company is using AI to accelerate chip production

RECOMMENDED YUA EPISODES
Future-proofing Democracy In the Age of AI with Audrey Tang
How Will AI Affect the 2024 Elections? with Renee DiResta and Carl Miller
The AI ‘Race': China vs. the US with Jeffrey Ding and Karen Hao
Protecting Our Freedom of Thought with Nita Farahany

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

How I Built This with Guy Raz
The peril (and promise) of AI with Tristan Harris: Part 2

How I Built This with Guy Raz

Play Episode Listen Later Feb 29, 2024 32:17 Very Popular


What if you could no longer trust the things you see and hear? Because the signature on a check, the documents or videos presented in court, the footage you see on the news, the calls you receive from your family … They could all be perfectly forged by artificial intelligence. That's just one of the risks posed by the rapid development of AI. And that's why Tristan Harris of the Center for Humane Technology is sounding the alarm.

This week on How I Built This Lab: the second of a two-episode series in which Tristan and Guy discuss how we can upgrade the fundamental legal, technical, and philosophical frameworks of our society to meet the challenge of AI.

To learn more about the Center for Humane Technology, text "AI" to 55444.

This episode was researched and produced by Alex Cheng with music by Ramtin Arablouei. It was edited by John Isabella. Our audio engineer was Neal Rauch. You can follow HIBT on X & Instagram, and email us at hibt@id.wondery.com.

See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

Your Undivided Attention
Future-proofing Democracy In the Age of AI with Audrey Tang

Your Undivided Attention

Play Episode Listen Later Feb 29, 2024 34:38 Very Popular


What does a functioning democracy look like in the age of artificial intelligence? Could AI even be used to help a democracy flourish? Just in time for election season, Taiwan's Minister of Digital Affairs Audrey Tang returns to the podcast to discuss healthy information ecosystems, resilience to cyberattacks, how to "prebunk" deepfakes, and more.

RECOMMENDED MEDIA
Testing Theories of American Politics: Elites, Interest Groups, and Average Citizens by Martin Gilens and Benjamin I. Page: This academic paper addresses tough questions for Americans: Who governs? Who really rules?
Recursive Public: An experiment in identifying areas of consensus and disagreement among the international AI community, policymakers, and the general public on key questions of governance
A Strong Democracy is a Digital Democracy: Audrey Tang's 2019 op-ed for The New York Times
The Frontiers of Digital Democracy: Nathan Gardels interviews Audrey Tang in Noema

RECOMMENDED YUA EPISODES
Digital Democracy is Within Reach with Audrey Tang
The Tech We Need for 21st Century Democracy with Divya Siddarth
How Will AI Affect the 2024 Elections? with Renee DiResta and Carl Miller
The AI Dilemma

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

How I Built This with Guy Raz
The peril (and promise) of AI with Tristan Harris: Part 1

How I Built This with Guy Raz

Play Episode Listen Later Feb 22, 2024 28:36 Very Popular


When Tristan Harris co-founded the Center for Humane Technology in 2018, he was trying to educate tech leaders and policymakers about the harms of social media. But today, he's sounding the alarm about a different technology, one that he says could pose an existential threat to the entire world: artificial intelligence.

This week on How I Built This Lab: the first of a two-episode series in which Tristan and Guy examine the serious risks posed by the rapid development and deployment of AI, and what we can do to make sure this powerful technology is used for good.

You can learn more about "The Social Dilemma," the 2020 Emmy-winning docudrama featuring Tristan, here: https://www.thesocialdilemma.com/.

This episode was researched and produced by Alex Cheng with music by Ramtin Arablouei. It was edited by John Isabella. Our audio engineer was Neal Rauch. You can follow HIBT on X & Instagram, and email us at hibt@id.wondery.com.

See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

Your Undivided Attention
U.S. Senators Grilled Social Media CEOs. Will Anything Change?

Your Undivided Attention

Play Episode Listen Later Feb 13, 2024 25:06 Very Popular


Was it political progress, or just political theater? The recent Senate hearing with social media CEOs led to astonishing moments — including Mark Zuckerberg's public apology to families who lost children following social media abuse. Our panel of experts, including Facebook whistleblower Frances Haugen, untangles the explosive hearing and offers a look ahead as well. How will this hearing impact protocol within these social media companies? How will it impact legislation? In short: will anything change?

Clarification: Julie says that shortly after the hearing, Meta's stock price had the biggest increase of any company in the stock market's history. It was the biggest one-day gain by any company in Wall Street history.

Correction: Frances says it takes Snap three or four minutes to take down exploitative content. In Snap's most recent transparency report, they list six minutes as the median turnaround time to remove exploitative content.

RECOMMENDED MEDIA
Get Media Savvy: Founded by Julie Scelfo, Get Media Savvy is a non-profit initiative working to establish a healthy media environment for kids and families
The Power of One by Frances Haugen: The inside story of Frances' quest to bring transparency and accountability to Big Tech

RECOMMENDED YUA EPISODES
Real Social Media Solutions, Now with Frances Haugen
A Conversation with Facebook Whistleblower Frances Haugen
Are the Kids Alright?
Social Media Victims Lawyer Up with Laura Marquez-Garrett

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

Your Undivided Attention
Taylor Swift is Not Alone: The Deepfake Nightmare Sweeping the Internet

Your Undivided Attention

Play Episode Listen Later Feb 1, 2024 42:59 Very Popular


Over the past year, a tsunami of apps that digitally strip the clothes off real people has hit the market. Now anyone can create fake non-consensual sexual images in just a few clicks. With cases proliferating in high schools, guest presenter Laurie Segall talks to legal scholar Mary Anne Franks about the AI-enabled rise in deepfake porn and what we can do about it.

Correction: Laurie refers to the app 'Clothes Off.' It's actually named Clothoff. There are many clothes remover apps in this category.

RECOMMENDED MEDIA
Revenge Porn: The Cyberwar Against Women: In a five-part digital series, Laurie Segall uncovers a disturbing internet trend: the rise of revenge porn
The Cult of the Constitution: In this provocative book, Mary Anne Franks examines the thin line between constitutional fidelity and constitutional fundamentalism
Fake Explicit Taylor Swift Images Swamp Social Media: Calls to protect women and crack down on the platforms and technology that spread such images have been reignited

RECOMMENDED YUA EPISODES
No One is Immune to AI Harms
Esther Perel on Artificial Intimacy
Social Media Victims Lawyer Up
The AI Dilemma

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

Your Undivided Attention
Can Myth Teach Us Anything About the Race to Build Artificial General Intelligence? With Josh Schrei

Your Undivided Attention

Play Episode Listen Later Jan 18, 2024 35:50 Very Popular


We usually talk about tech in terms of economics or policy, but the casual language tech leaders often use to describe AI — summoning an inanimate force with the powers of code — sounds more... magical. So, what can myth and magic teach us about the AI race? Josh Schrei, mythologist and host of The Emerald podcast, says that foundational cultural tales like "The Sorcerer's Apprentice" or Prometheus teach us the importance of initiation, responsibility, human knowledge, and care. He argues these stories and myths can guide ethical tech development by reminding us what it is to be human.

Correction: Josh says the first telling of "The Sorcerer's Apprentice" myth dates back to ancient Egypt, but it actually dates back to ancient Greece.

RECOMMENDED MEDIA
The Emerald podcast: The Emerald explores the human experience through a vibrant lens of myth, story, and imagination
Embodied Ethics in The Age of AI: A five-part course with The Emerald podcast's Josh Schrei and School of Wise Innovation's Andrew Dunn
Nature Nurture: Children Can Become Stewards of Our Delicate Planet: A U.S. Department of the Interior study found that the average American kid can identify hundreds of corporate logos but not plants and animals
The New Fire: AI is revolutionizing the world - here's how democracies can come out on top. This upcoming book was authored by an architect of President Biden's AI executive order

RECOMMENDED YUA EPISODES
How Will AI Affect the 2024 Elections?
The AI Dilemma
The Three Rules of Humane Tech
AI Myths and Misconceptions

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

Your Undivided Attention
How Will AI Affect the 2024 Elections? with Renee DiResta and Carl Miller

Your Undivided Attention

Play Episode Listen Later Dec 21, 2023 47:15 Very Popular


2024 will be the biggest election year in world history. Forty countries will hold national elections, with over two billion voters heading to the polls. In this episode of Your Undivided Attention, two experts give us a situation report on how AI will increase the risks to our elections and our democracies.

Correction: Tristan says two billion people from 70 countries will be undergoing democratic elections in 2024. The number expands to 70 when non-national elections are factored in.

RECOMMENDED MEDIA
White House AI Executive Order Takes On Complexity of Content Integrity Issues: Renee DiResta's piece in Tech Policy Press about content integrity within President Biden's AI executive order
The Stanford Internet Observatory: A cross-disciplinary program of research, teaching and policy engagement for the study of abuse in current information technologies, with a focus on social media
Demos: Britain's leading cross-party think tank
Invisible Rulers: The People Who Turn Lies into Reality by Renee DiResta: Pre-order Renee's upcoming book that's landing on shelves June 11, 2024

RECOMMENDED YUA EPISODES
The Spin Doctors Are In with Renee DiResta
From Russia with Likes Part 1 with Renee DiResta
From Russia with Likes Part 2 with Renee DiResta
Esther Perel on Artificial Intimacy
The AI Dilemma
A Conversation with Facebook Whistleblower Frances Haugen

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

The Joe Rogan Experience
#2076 - Aza Raskin & Tristan Harris

The Joe Rogan Experience

Play Episode Listen Later Dec 19, 2023 151:41


Tristan Harris and Aza Raskin are the co-founders of the Center for Humane Technology and the hosts of its podcast, "Your Undivided Attention." Watch the Center's new film "The A.I. Dilemma" on YouTube.

https://www.humanetech.com
"The A.I. Dilemma": https://www.youtube.com/watch?v=xoVJKj8lcNQ