For Humanity: An AI Safety Podcast


For Humanity: An AI Safety Podcast is the AI safety podcast for regular people. Peabody, duPont-Columbia, and multi-Emmy Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, possibly within as little as 2-10 years. This podcast is solely about the threat of human extinction from AGI. We'll name and meet the heroes and villains, explore the issues and ideas, and look at what you can do to help save humanity.

John Sherman


    • Jun 5, 2025 LATEST EPISODE
    • every other week NEW EPISODES
    • 46m AVG DURATION
    • 117 EPISODES



    Latest episodes from For Humanity: An AI Safety Podcast

    Kevin Roose Talks AI Risk | Episode #65 | For Humanity: An AI Risk Podcast

    May 12, 2025 | 85:20


    For Humanity Episode #65: Kevin Roose on AGI, AI Risk, and What Comes Next

    Seventh Grader vs AI Risk | Episode #64 | For Humanity: An AI Risk Podcast

    Apr 22, 2025 | 102:04


    In Episode #64, host John Sherman interviews seventh grader Dylan Pothier, his mom Bridget, and his teacher Renee DiPietro. Dylan is an award-winning student author who is concerned about AI risk. (FULL INTERVIEW STARTS AT 00:33:34)

    Sam Altman/Chris Anderson @ TED: https://www.youtube.com/watch?v=5MWT_doo68k
    Check out our partner channel: Lethal Intelligence AI: https://lethalintelligence.ai
    BUY LOUIS BERMAN'S NEW BOOK ON AMAZON!!! https://a.co/d/8WSNNuo

    FOR HUMANITY MONTHLY DONATION SUBSCRIPTION LINKS:
    $1 MONTH: https://buy.stripe.com/7sI3cje3x2Zk9SodQT
    $10 MONTH: https://buy.stripe.com/5kAbIP9Nh0Rc4y46oo
    $25 MONTH: https://buy.stripe.com/3cs9AHf7B9nIggM4gh
    $100 MONTH: https://buy.stripe.com/aEU007bVp7fAfcI5km

    Get Involved!
    EMAIL JOHN: forhumanitypodcast@gmail.com
    SUPPORT PAUSE AI: https://pauseai.info/
    SUPPORT STOP AI: https://www.stopai.info/
    SUBSCRIBE TO LIRON SHAPIRA'S DOOM DEBATES ON YOUTUBE: https://www.youtube.com/@DoomDebates

    Justice For Suchir | Episode #63 | For Humanity: An AI Risk Podcast

    Apr 11, 2025 | 79:43


    In an emotional interview, host John Sherman interviews Poornima Rao and Balaji Ramamurthy, the parents of Suchir Balaji. (FULL INTERVIEW STARTS AT 00:18:38)

    Suchir Balaji was a 26-year-old artificial intelligence researcher who worked at OpenAI, where he was involved in developing models like GPT-4 and WebGPT. In October 2024, he publicly accused OpenAI of violating U.S. copyright laws by using proprietary data to train AI models, arguing that such practices harmed original content creators. His essay, "When does generative AI qualify for fair use?", gained attention and was cited in ongoing lawsuits against OpenAI. Suchir left OpenAI in August 2024, expressing concerns about the company's ethics and the potential harm of AI to humanity. He planned to start a nonprofit focused on machine learning and neuroscience. On October 23, 2024, he was featured in the New York Times speaking out against OpenAI.

    On November 26, 2024, he was found dead in his San Francisco apartment from a gunshot wound. The initial autopsy ruled it a suicide, noting the presence of alcohol, amphetamines, and GHB in his system. However, his parents contested this finding, commissioning a second autopsy that suggested a second gunshot wound was missed in the initial examination. They also pointed to other injuries and questioned the presence of GHB, suggesting foul play. Despite these claims, authorities reaffirmed the suicide ruling. The case has attracted public attention, with figures like Elon Musk and Congressman Ro Khanna calling for further investigation. Suchir's parents continue to push for justice and truth.

    Suchir's website: https://suchir.net/fair_use.html

    Keep The Future Human | Episode #62 | For Humanity: An AI Risk Podcast

    Mar 26, 2025 | 107:12


    Host John Sherman conducts an important interview with Anthony Aguirre, Executive Director of the Future of Life Institute. The Future of Life Institute reached out to For Humanity to see if Anthony could come on to promote his very impressive new campaign, Keep The Future Human. The campaign includes a book, an essay, a website, and a video; it's all incredible work. Please check it out: https://keepthefuturehuman.ai/

    John and Anthony have a broad-ranging AI risk conversation, covering in some detail Anthony's four essential measures for a human future. They also discuss parenting into this unknown future.

    In 2021, the Future of Life Institute received a cryptocurrency donation of more than $650 million from a single donor. With AGI doom bearing down on humanity, arriving any day now, AI risk communications floundering, the public still in the dark, and that massive war chest gathering dust in a bank, John asks Anthony the uncomfortable but necessary question: what is FLI waiting for to spend the money? Then John asks Anthony for $10 million to fund creative media projects under John's direction. John is convinced that with $10M and six months he could make AI existential risk a dinner-table conversation on every street in America. He has developed a detailed plan that would launch within 24 hours of a grant award. We don't have a single day to lose.

    https://futureoflife.org/

    Dark Patterns In AI | Episode #61 | For Humanity: An AI Risk Podcast

    Mar 12, 2025 | 91:02


    Host John Sherman interviews Esben Kran, CEO of Apart Research, about a broad range of AI risk topics. Most importantly, the discussion covers the growing for-profit AI risk business landscape and Apart's recent report on dark patterns in LLMs. We hear about the benchmarking of new models all the time, but this project has successfully identified some key dark patterns in these models. (FULL INTERVIEW STARTS AT 00:09:30)

    MORE FROM OUR SPONSOR: https://www.resist-ai.agency/
    Apart Research DarkBench report: https://www.apartresearch.com/post/uncovering-model-manipulation-with-darkbench

    AI Risk Rising | Episode #60 | For Humanity: An AI Risk Podcast

    Feb 28, 2025 | 103:01


    Host John Sherman interviews Pause AI Global Founder Joep Meindertsma following the AI summits in Paris. The discussion begins with the dire moment we are in, the stakes, and the failure of our institutions to respond, before turning into a far-ranging conversation about AI risk reduction communications strategies.

    CHECK OUT MAX WINGA'S FULL PODCAST: Communicating AI Extinction Risk to the Public - w/ Prof. Will Fithian

    Smarter-Than-Human Robots? | Episode #59 | For Humanity: An AI Risk Podcast

    Feb 11, 2025 | 102:14


    Host John Sherman interviews Jad Tarifi, CEO of Integral AI, about his company's work to try to create a world of trillions of AGI-enabled robots by 2035. Jad was a leader on Google's first generative AI team, and his former colleague Geoffrey Hinton's views on existential risk from advanced AI come up more than once.

    RESOURCES:
    Integral AI: https://www.integral.ai/
    John's chat with ChatGPT: https://chatgpt.com/share/679ee549-2c38-8003-9c1e-260764da1a53

    Protecting Our Kids From AI Risk | Episode #58

    Jan 27, 2025 | 102:46


    Host John Sherman interviews Tara Steele, Director of The Safe AI For Children Alliance, about her work to protect children from AI risks such as deepfakes, her concern about AI causing human extinction, and what we can do about all of it.

    RESOURCES:
    BENGIO/NG DAVOS VIDEO: https://www.youtube.com/watch?v=w5iuHJh3_Gk&t=8s
    STUART RUSSELL VIDEO: https://www.youtube.com/watch?v=KnDY7ABmsds&t=5s
    AL GREEN VIDEO (WATCH ALL 39 MINUTES THEN REPLAY): https://youtu.be/SOrHdFXfXds?si=s_nlDdDpYN0RR_Yc
    BUY STEPHEN HANSON'S BEAUTIFUL AI RISK BOOK!!! https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
    22 Word Statement from Center for AI Safety: Statement on AI Risk | CAIS: https://www.safe.ai/work/statement-on-ai-risk
    Best Account on Twitter: AI Notkilleveryoneism Memes: https://twitter.com/AISafetyMemes

    2025 AI Risk Preview | For Humanity: An AI Risk Podcast | Episode #57

    Jan 13, 2025 | 100:10


    What will 2025 bring? Sam Altman says AGI is coming in 2025. Agents will arrive for sure. Military use will expand greatly. Will we get a warning shot? Will we survive the year? In Episode #57, host John Sherman interviews AI Safety Research Engineer Max Winga about the latest AI advances and risks, and the year to come.

    Anthropic Alignment Faking Video: https://www.youtube.com/watch?v=9eXV64O2Xp8&t=1s
    Neil deGrasse Tyson Video: https://www.youtube.com/watch?v=JRQDc55Aido&t=579s
    Max Winga's Amazing Speech: https://www.youtube.com/watch?v=kDcPW5WtD58

    AGI Goes To Washington | For Humanity: An AI Risk Podcast | Episode #56

    Dec 19, 2024 | 74:21


    In Episode #56, host John Sherman travels to Washington, DC to lobby House and Senate staffers for AI regulation, along with Felix De Simone and Louis Berman of Pause AI. We unpack what we saw and heard as we presented AI risk to the people who have the power to make real change.

    AI Risk Special | "Near Midnight in Suicide City" | Episode #55

    Dec 5, 2024 | 91:34


    In a special episode of For Humanity: An AI Risk Podcast, host John Sherman travels to San Francisco. Episode #55, "Near Midnight in Suicide City," is a set of short pieces from our trip out west, where we met with Pause AI, Stop AI, and Liron Shapira, and stopped by OpenAI, among other events. Big, huge, massive thanks to Beau Kershaw, Director of Photography, and my biz partner and best friend, who made this journey with me through the work side and the emotional side of this. The work is beautiful and the days were wet and long and heavy. Thank you, Beau.

    Connor Leahy Interview | Helping People Understand AI Risk | Episode #54

    Nov 25, 2024 | 144:58


    In Episode #54, John Sherman interviews Connor Leahy, CEO of Conjecture. (FULL INTERVIEW STARTS AT 00:06:46)

    Human Augmentation Incoming | The Coming Age Of Humachines | Episode #53

    Nov 19, 2024 | 102:01


    In Episode #53, John Sherman interviews Michael DB Harvey, author of The Age of Humachines. The discussion covers the coming spectre of humans putting digital implants inside their own bodies to try to compete with AI.

    AI Risk Update | One Year of For Humanity | Episode #52

    Nov 19, 2024 | 78:11


    In Episode #52, host John Sherman looks back on the first year of For Humanity. Select shows are featured, as well as a very special celebration of life at the end.

    AI Risk Funding | Big Tech vs. Small Safety I Episode #51

    Oct 23, 2024 | 66:03


    In Episode #51, host John Sherman talks with Tom Barnes, an Applied Researcher at Founders Pledge, about the reality of AI risk funding, and about the need for emergency planning for AI to be much more robust and detailed than it is now. We are currently woefully underprepared.

    Learn More About Founders Pledge: https://www.founderspledge.com/

    No celebration of life this week!! YouTube finally got me with a copyright flag; I had to edit the song out.

    THURSDAY NIGHTS--LIVE FOR HUMANITY COMMUNITY MEETINGS--8:30PM EST
    Join Zoom Meeting: https://storyfarm.zoom.us/j/816517210... Passcode: 829191

    Please Donate Here To Help Promote For Humanity: https://www.paypal.com/paypalme/forhumanitypodcast

    This podcast is not journalism. But it's not opinion either. This is a long-form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth. For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and look at what you can do to help save humanity.

    RESOURCES:
    Join the Pause AI Weekly Discord, Thursdays at 2pm EST: https://discord.com/invite/pVMWjddaW7
    For Humanity Theme Music by Josef Ebner. YouTube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg | Website: https://josef.pictures

    AI Risk Funding | Big Tech vs. Small Safety | Episode #51 TRAILER

    Oct 21, 2024 | 6:03


    In Episode #51 Trailer, host John Sherman talks with Tom Barnes, an Applied Researcher at Founders Pledge, about the reality of AI risk funding, and about the need for emergency planning for AI to be much more robust and detailed than it is now. We are currently woefully underprepared.

    Learn More About Founders Pledge: https://www.founderspledge.com/

    Accurately Predicting Doom | What Insight Can Metaculus Reveal About AI Risk? | Episode #50

    Oct 21, 2024 | 78:58


    In Episode #50, host John Sherman talks with Deger Turan, CEO of Metaculus, about what his prediction market reveals about the AI future we are all heading towards.

    LEARN MORE: www.metaculus.com

    Accurately Predicting Doom | What Insight Can Metaculus Reveal About AI Risk? | Episode #50 TRAILER

    Oct 14, 2024 | 5:03


    In Episode #50 TRAILER, host John Sherman talks with Deger Turan, CEO of Metaculus, about what his prediction market reveals about the AI future we are all heading towards.

    LEARN MORE–AND JOIN STOP AI: www.stopai.info

    Episode #49: “Go To Jail To Stop AI” For Humanity: An AI Risk Podcast

    Oct 14, 2024 | 77:08


    In Episode #49, host John Sherman talks with Sam Kirchner and Remmelt Ellen, co-founders of Stop AI. Stop AI is a new AI risk protest organization, coming at it with different tactics and goals than Pause AI.

    LEARN MORE–AND JOIN STOP AI: www.stopai.info

    Go To Jail To Stop AI | Stopping AI | Episode #49 TRAILER

    Oct 8, 2024 | 4:53


    In Episode #49 TRAILER, host John Sherman talks with Sam Kirchner and Remmelt Ellen, co-founders of Stop AI. Stop AI is a new AI risk protest organization, coming at it with different tactics and goals than Pause AI.

    What Is The Origin Of AI Safety? | AI Safety Movement | Episode #48

    Oct 8, 2024 | 69:29


    In Episode #48, host John Sherman talks with Pause AI US Founder Holly Elmore about the limiting origins of the AI safety movement. Polls show 60-80% of the public are opposed to building artificial superintelligence. So why is the movement to stop it still so small? The roots of the AI safety movement have a lot to do with it. Holly and John explore the present-day issues created by the movement's origins.

    Let's build community! Live For Humanity Community Meetings via Zoom, Thursdays at 8:30pm EST (explanation during the full show!). USE THIS LINK: https://storyfarm.zoom.us/j/88987072403 PASSCODE: 789742

    LEARN HOW TO HELP RAISE AI RISK AWARENESS IN YOUR COMMUNITY HERE: https://pauseai.info/local-organizing

    JOIN THE FIGHT, help Pause AI!!!! https://pauseai.info/

    AI Safety's Limiting Origins: For Humanity, An AI Risk Podcast, Episode #48 Trailer

    Sep 30, 2024 | 7:40


    In Episode #48 Trailer, host John Sherman talks with Pause AI US Founder Holly Elmore about the limiting origins of the AI safety movement. Polls show 60-80% of the public are opposed to building artificial superintelligence. So why is the movement to stop it still so small? The roots of the AI safety movement have a lot to do with it. Holly and John explore the present-day issues created by the movement's origins.

    Episode #47: “Can AI Be Controlled?” For Humanity: An AI Risk Podcast

    Sep 25, 2024 | 79:39


    In Episode #47, host John Sherman talks with Buck Shlegeris, CEO of Redwood Research, a non-profit working on technical AI risk challenges. The discussion includes Buck's thoughts on the new OpenAI o1-preview model, but centers on two questions: is there a way to control AI models before alignment is achieved, if it can be achieved at all? And how would the system that's supposed to save the world actually work if an AI lab found a model scheming?

    Check out Buck's writing on these topics:
    https://redwoodresearch.substack.com/p/the-case-for-ensuring-that-powerful
    https://redwoodresearch.substack.com/p/would-catching-your-ais-trying-to

    Senate Hearing: https://www.judiciary.senate.gov/committee-activity/hearings/oversight-of-ai-insiders-perspectives
    Harry Mack's YouTube Channel: https://www.youtube.com/channel/UC59ZRYCHev_IqjUhremZ8Tg

    Episode #47 Trailer: “Can AI Be Controlled?” For Humanity: An AI Risk Podcast

    Sep 25, 2024 | 4:35


    In Episode #47 Trailer, host John Sherman talks with Buck Shlegeris, CEO of Redwood Research, a non-profit working on technical AI risk challenges. The discussion includes Buck's thoughts on the new OpenAI o1-preview model, but centers on two questions: is there a way to control AI models before alignment is achieved, if it can be achieved at all? And how would the system that's supposed to save the world actually work if an AI lab found a model scheming?

    Episode #46: “Is AI Humanity's Worthy Successor?” For Humanity: An AI Risk Podcast

    Sep 18, 2024 | 77:26


    In Episode #46, host John Sherman talks with Daniel Faggella, Founder and Head of Research at Emerj Artificial Intelligence Research. Dan has been speaking out about AI risk for a long time, but comes at it from a different perspective than many. Dan thinks we need to talk about how we can make AGI, and whatever comes after, humanity's worthy successor.

    More about Daniel Faggella: https://danfaggella.com/

    Episode #46 Trailer: “Is AI Humanity's Worthy Successor?” For Humanity: An AI Risk Podcast

    Sep 16, 2024 | 5:53


    In Episode #46 Trailer, host John Sherman talks with Daniel Faggella, Founder and Head of Research at Emerj Artificial Intelligence Research. Dan has been speaking out about AI risk for a long time, but comes at it from a different perspective than many. Dan thinks we need to talk about how we can make AGI, and whatever comes after, humanity's worthy successor.

    Episode #45: “AI Risk And Child Psychology” For Humanity: An AI Risk Podcast

    Sep 11, 2024 | 84:24


    In Episode #45, host John Sherman talks with Dr. Mike Brooks, a psychologist focusing on kids and technology. The conversation is broad-ranging, touching on parenting, happiness and screens, the need for human unity, and the psychology of humans facing an ever more unknown future. (FULL INTERVIEW STARTS AT 00:05:28)

    Mike's book: Tech Generation: Raising Balanced Kids in a Hyper-Connected World
    An article from Mike in Psychology Today: The Happiness Illusion: Facing the Dark Side of Progress

    Find Dr. Brooks on social media:
    LinkedIn: https://www.linkedin.com/in/dr-mike-brooks-b1164120
    X/Twitter: https://x.com/drmikebrooks
    YouTube: https://www.youtube.com/@connectwithdrmikebrooks
    TikTok: https://www.tiktok.com/@connectwithdrmikebrooks?lang=en
    Instagram: https://www.instagram.com/drmikebrooks/?hl=en

    Chris Gerrby's Twitter: https://x.com/ChrisGerrby

    Episode #45 TRAILER: “AI Risk And Child Psychology” For Humanity: An AI Risk Podcast

    Sep 9, 2024 | 6:42


    In Episode #45 TRAILER, host John Sherman talks with Dr. Mike Brooks, a psychologist focusing on kids and technology. The conversation is broad-ranging, touching on parenting, happiness and screens, the need for human unity, and the psychology of humans facing an ever more unknown future.

    Mike's book: Tech Generation: Raising Balanced Kids in a Hyper-Connected World
    An article from Mike in Psychology Today: The Happiness Illusion: Facing the Dark Side of Progress

    Episode #44: “AI P-Doom Debate: 50% vs 99.999%” For Humanity: An AI Risk Podcast

    Play Episode Listen Later Sep 4, 2024 91:05


    In Episode #44, host John Sherman brings back friends of For Humanity Dr. Roman Yampolskiy and Liron Shapira. Roman is an influential AI Safety researcher, thought leader, and Associate Professor at the University of Louisville. Liron is a tech CEO and host of the excellent Doom Debates podcast. Roman famously holds a 99.999% p-doom; Liron has a nuanced 50%. John starts out at 75%, independent of their numbers. Where are you? Did Roman or Liron move you in their direction at all? Let us know in the comments!
    LEARN HOW TO HELP RAISE AI RISK AWARENESS IN YOUR COMMUNITY HERE: https://pauseai.info/local-organizing
    Please Donate Here To Help Promote For Humanity: https://www.paypal.com/paypalme/forhumanitypodcast
    EMAIL JOHN: forhumanitypodcast@gmail.com
    This podcast is not journalism. But it's not opinion either. This is a long form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth. For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and what you can do to help save humanity.
    RESOURCES:
    BUY ROMAN'S NEW BOOK ON AMAZON: https://a.co/d/fPG6lOB
    SUBSCRIBE TO LIRON SHAPIRA'S DOOM DEBATES on YOUTUBE!! https://www.youtube.com/@DoomDebates
    JOIN THE FIGHT, help Pause AI!!!! Pause AI
    Join the Pause AI Weekly Discord Thursdays at 2pm EST: https://discord.com/invite/pVMWjddaW7
    Max Winga's "A Stark Warning About Extinction": https://youtu.be/kDcPW5WtD58?si=i6IRy82xZ2PUOp22
    For Humanity Theme Music by Josef Ebner. Youtube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg | Website: https://josef.pictures
    BUY STEPHEN HANSON'S BEAUTIFUL AI RISK BOOK!!! https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
    22 Word Statement from Center for AI Safety: Statement on AI Risk | CAIS: https://www.safe.ai/work/statement-on-ai-risk
    Best Account on Twitter: AI Notkilleveryoneism Memes: https://twitter.com/AISafetyMemes

    Episode #43: “So what exactly is the good case for AI?” For Humanity: An AI Risk Podcast

    Play Episode Listen Later Sep 2, 2024 76:06


    In Episode #43, host John Sherman talks with DevOps Engineer Aubrey Blackburn about the vague, elusive case the big AI companies and accelerationists make for a good AI future.
    LEARN HOW TO HELP RAISE AI RISK AWARENESS IN YOUR COMMUNITY HERE: https://pauseai.info/local-organizing
    Please Donate Here To Help Promote For Humanity: https://www.paypal.com/paypalme/forhumanitypodcast
    EMAIL JOHN: forhumanitypodcast@gmail.com
    This podcast is not journalism. But it's not opinion either. This is a long form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth. For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and what you can do to help save humanity.
    RESOURCES:
    JOIN THE FIGHT, help Pause AI!!!! Pause AI
    Join the Pause AI Weekly Discord Thursdays at 2pm EST: https://discord.com/invite/pVMWjddaW7
    Max Winga's "A Stark Warning About Extinction": https://youtu.be/kDcPW5WtD58?si=i6IRy82xZ2PUOp22
    For Humanity Theme Music by Josef Ebner. Youtube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg | Website: https://josef.pictures
    SUBSCRIBE TO LIRON SHAPIRA'S DOOM DEBATES on YOUTUBE!! https://www.youtube.com/@DoomDebates
    BUY STEPHEN HANSON'S BEAUTIFUL AI RISK BOOK!!! https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
    22 Word Statement from Center for AI Safety: Statement on AI Risk | CAIS: https://www.safe.ai/work/statement-on-ai-risk
    Best Account on Twitter: AI Notkilleveryoneism Memes: https://twitter.com/AISafetyMemes

    Episode #44 Trailer: “AI P-Doom Debate: 50% vs 99.999%” For Humanity: An AI Risk Podcast

    Play Episode Listen Later Sep 2, 2024 7:58


    In Episode #44 Trailer, host John Sherman brings back friends of For Humanity Dr. Roman Yampolskiy and Liron Shapira. Roman is an influential AI Safety researcher, thought leader, and Associate Professor at the University of Louisville. Liron is a tech CEO and host of the excellent Doom Debates podcast. Roman famously holds a 99.999% p-doom; Liron has a nuanced 50%. John starts out at 75%, independent of their numbers. Where are you? Did Roman or Liron move you in their direction at all? Watch the full episode and let us know in the comments.
    LEARN HOW TO HELP RAISE AI RISK AWARENESS IN YOUR COMMUNITY HERE: https://pauseai.info/local-organizing
    Please Donate Here To Help Promote For Humanity: https://www.paypal.com/paypalme/forhumanitypodcast
    EMAIL JOHN: forhumanitypodcast@gmail.com
    This podcast is not journalism. But it's not opinion either. This is a long form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth. For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and what you can do to help save humanity.
    RESOURCES:
    BUY ROMAN'S NEW BOOK ON AMAZON: https://a.co/d/fPG6lOB
    SUBSCRIBE TO LIRON SHAPIRA'S DOOM DEBATES on YOUTUBE!! https://www.youtube.com/@DoomDebates
    JOIN THE FIGHT, help Pause AI!!!! Pause AI
    Join the Pause AI Weekly Discord Thursdays at 2pm EST: https://discord.com/invite/pVMWjddaW7
    Max Winga's "A Stark Warning About Extinction": https://youtu.be/kDcPW5WtD58?si=i6IRy82xZ2PUOp22
    For Humanity Theme Music by Josef Ebner. Youtube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg | Website: https://josef.pictures
    BUY STEPHEN HANSON'S BEAUTIFUL AI RISK BOOK!!! https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
    22 Word Statement from Center for AI Safety: Statement on AI Risk | CAIS: https://www.safe.ai/work/statement-on-ai-risk
    Best Account on Twitter: AI Notkilleveryoneism Memes: https://twitter.com/AISafetyMemes

    Episode #43 TRAILER: “So what exactly is the good case for AI?” For Humanity: An AI Risk Podcast

    Play Episode Listen Later Aug 26, 2024 7:34


    In Episode #43 TRAILER, host John Sherman talks with DevOps Engineer Aubrey Blackburn about the vague, elusive case the big AI companies and accelerationists make for a good AI future.
    Please Donate Here To Help Promote For Humanity: https://www.paypal.com/paypalme/forhumanitypodcast
    EMAIL JOHN: forhumanitypodcast@gmail.com
    This podcast is not journalism. But it's not opinion either. This is a long form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth. For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and what you can do to help save humanity.
    RESOURCES:
    JOIN THE FIGHT, help Pause AI!!!! Pause AI
    Join the Pause AI Weekly Discord Thursdays at 2pm EST: https://discord.com/invite/pVMWjddaW7
    Max Winga's "A Stark Warning About Extinction": https://youtu.be/kDcPW5WtD58?si=i6IRy82xZ2PUOp22
    For Humanity Theme Music by Josef Ebner. Youtube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg | Website: https://josef.pictures
    SUBSCRIBE TO LIRON SHAPIRA'S DOOM DEBATES on YOUTUBE!! https://www.youtube.com/@DoomDebates
    BUY STEPHEN HANSON'S BEAUTIFUL AI RISK BOOK!!! https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
    22 Word Statement from Center for AI Safety: Statement on AI Risk | CAIS: https://www.safe.ai/work/statement-on-ai-risk
    Best Account on Twitter: AI Notkilleveryoneism Memes: https://twitter.com/AISafetyMemes

    Episode #42: “Actors vs. AI” For Humanity: An AI Risk Podcast

    Play Episode Listen Later Aug 21, 2024 83:19


    In Episode #42, host John Sherman talks with actor Erik Passoja about AI's impact on Hollywood, the fight to protect people's digital identities, and the vibes in LA about existential risk.
    Please Donate Here To Help Promote For Humanity: https://www.paypal.com/paypalme/forhumanitypodcast
    EMAIL JOHN: forhumanitypodcast@gmail.com
    This podcast is not journalism. But it's not opinion either. This is a long form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth. For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and what you can do to help save humanity.
    RESOURCES:
    JOIN THE FIGHT, help Pause AI!!!! Pause AI
    Join the Pause AI Weekly Discord Thursdays at 2pm EST: https://discord.com/invite/pVMWjddaW7
    Max Winga's "A Stark Warning About Extinction": https://youtu.be/kDcPW5WtD58?si=i6IRy82xZ2PUOp22
    For Humanity Theme Music by Josef Ebner. Youtube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg | Website: https://josef.pictures
    SUBSCRIBE TO LIRON SHAPIRA'S DOOM DEBATES on YOUTUBE!! https://www.youtube.com/@DoomDebates
    BUY STEPHEN HANSON'S BEAUTIFUL AI RISK BOOK!!! https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
    22 Word Statement from Center for AI Safety: Statement on AI Risk | CAIS: https://www.safe.ai/work/statement-on-ai-risk
    Best Account on Twitter: AI Notkilleveryoneism Memes: https://twitter.com/AISafetyMemes

    Episode #42 TRAILER: “Actors vs. AI” For Humanity: An AI Risk Podcast

    Play Episode Listen Later Aug 19, 2024 3:11


    In Episode #42 Trailer, host John Sherman talks with actor Erik Passoja about AI's impact on Hollywood, the fight to protect people's digital identities, and the vibes in LA about existential risk.
    Please Donate Here To Help Promote For Humanity: https://www.paypal.com/paypalme/forhumanitypodcast
    EMAIL JOHN: forhumanitypodcast@gmail.com
    This podcast is not journalism. But it's not opinion either. This is a long form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth. For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and what you can do to help save humanity.
    RESOURCES:
    JOIN THE FIGHT, help Pause AI!!!! Pause AI
    Join the Pause AI Weekly Discord Thursdays at 2pm EST: https://discord.com/invite/pVMWjddaW7
    Max Winga's "A Stark Warning About Extinction": https://youtu.be/kDcPW5WtD58?si=i6IRy82xZ2PUOp22
    For Humanity Theme Music by Josef Ebner. Youtube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg | Website: https://josef.pictures
    SUBSCRIBE TO LIRON SHAPIRA'S DOOM DEBATES on YOUTUBE!! https://www.youtube.com/@DoomDebates
    BUY STEPHEN HANSON'S BEAUTIFUL AI RISK BOOK!!! https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
    22 Word Statement from Center for AI Safety: Statement on AI Risk | CAIS: https://www.safe.ai/work/statement-on-ai-risk
    Best Account on Twitter: AI Notkilleveryoneism Memes: https://twitter.com/AISafetyMemes

    Episode #41 “David Brooks: Dead Wrong on AI” For Humanity: An AI Risk Podcast

    Play Episode Listen Later Aug 14, 2024 48:39


    In Episode #41, host John Sherman begins with a personal message to David Brooks of the New York Times. Brooks wrote an article titled “Many People Fear AI: They Shouldn't,” and in full candor it pissed John off quite a bit. During this episode, John and Doom Debates host Liron Shapira go line by line through David Brooks's 7/31/24 piece in the New York Times.
    Please Donate Here To Help Promote For Humanity: https://www.paypal.com/paypalme/forhumanitypodcast
    EMAIL JOHN: forhumanitypodcast@gmail.com
    This podcast is not journalism. But it's not opinion either. This is a long form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth. For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and what you can do to help save humanity.
    RESOURCES:
    Max Winga's "A Stark Warning About Extinction": https://youtu.be/kDcPW5WtD58?si=i6IRy82xZ2PUOp22
    For Humanity Theme Music by Josef Ebner. Youtube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg | Website: https://josef.pictures
    SUBSCRIBE TO LIRON SHAPIRA'S DOOM DEBATES on YOUTUBE!! https://www.youtube.com/@DoomDebates
    BUY STEPHEN HANSON'S BEAUTIFUL AI RISK BOOK!!! https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
    JOIN THE FIGHT, help Pause AI!!!! Pause AI
    Join the Pause AI Weekly Discord Thursdays at 2pm EST: https://discord.com/invite/pVMWjddaW7
    22 Word Statement from Center for AI Safety: Statement on AI Risk | CAIS: https://www.safe.ai/work/statement-on-ai-risk
    Best Account on Twitter: AI Notkilleveryoneism Memes: https://twitter.com/AISafetyMemes

    Episode #41 TRAILER “David Brooks: Dead Wrong on AI” For Humanity: An AI Risk Podcast

    Play Episode Listen Later Aug 12, 2024 9:18


    In Episode #41 TRAILER, host John Sherman previews the full show with a personal message to David Brooks of the New York Times. Brooks wrote something that, in full candor, pissed John off quite a bit. During the full episode, John and Doom Debates host Liron Shapira go line by line through David Brooks's 7/31/24 piece in the New York Times.
    Please Donate Here To Help Promote For Humanity: https://www.paypal.com/paypalme/forhumanitypodcast
    EMAIL JOHN: forhumanitypodcast@gmail.com
    This podcast is not journalism. But it's not opinion either. This is a long form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth. For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and what you can do to help save humanity.
    RESOURCES:
    Max Winga's "A Stark Warning About Extinction": https://youtu.be/kDcPW5WtD58?si=i6IRy82xZ2PUOp22
    For Humanity Theme Music by Josef Ebner. Youtube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg | Website: https://josef.pictures
    SUBSCRIBE TO LIRON SHAPIRA'S DOOM DEBATES on YOUTUBE!! https://www.youtube.com/@DoomDebates
    BUY STEPHEN HANSON'S BEAUTIFUL AI RISK BOOK!!! https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
    JOIN THE FIGHT, help Pause AI!!!! Pause AI
    Join the Pause AI Weekly Discord Thursdays at 2pm EST: https://discord.com/invite/pVMWjddaW7
    22 Word Statement from Center for AI Safety: Statement on AI Risk | CAIS: https://www.safe.ai/work/statement-on-ai-risk
    Best Account on Twitter: AI Notkilleveryoneism Memes: https://twitter.com/AISafetyMemes

    Episode #40 “Surviving Doom” For Humanity: An AI Risk Podcast

    Play Episode Listen Later Aug 7, 2024 90:53


    In Episode #40, host John Sherman talks with James Norris, CEO of Upgradable and longtime AI safety proponent. James has been concerned about AI x-risk for 26 years. He now lives in Bali and has become an expert in prepping for a very different world after a warning shot or other major AI-related disaster, and he's helping others do the same. James shares his insight, long-held awareness, and expertise in helping others find a way to survive and rebuild after a post-AGI disaster or warning shot.
    Please Donate Here To Help Promote For Humanity: https://www.paypal.com/paypalme/forhumanitypodcast
    EMAIL JOHN: forhumanitypodcast@gmail.com
    This podcast is not journalism. But it's not opinion either. This is a long form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth. For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and what you can do to help save humanity.
    RESOURCES:
    Max Winga's "A Stark Warning About Extinction": https://youtu.be/kDcPW5WtD58?si=i6IRy82xZ2PUOp22
    For Humanity Theme Music by Josef Ebner. Youtube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg | Website: https://josef.pictures
    SUBSCRIBE TO LIRON SHAPIRA'S DOOM DEBATES on YOUTUBE!! https://www.youtube.com/@DoomDebates
    BUY STEPHEN HANSON'S BEAUTIFUL AI RISK BOOK!!! https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
    JOIN THE FIGHT, help Pause AI!!!! Pause AI
    Join the Pause AI Weekly Discord Thursdays at 2pm EST: https://discord.com/invite/pVMWjddaW7
    22 Word Statement from Center for AI Safety: Statement on AI Risk | CAIS: https://www.safe.ai/work/statement-on-ai-risk
    Best Account on Twitter: AI Notkilleveryoneism Memes: https://twitter.com/AISafetyMemes

    Episode #40 TRAILER “Surviving Doom” For Humanity: An AI Risk Podcast

    Play Episode Listen Later Aug 5, 2024 6:17


    In Episode #40 TRAILER, host John Sherman talks with James Norris, CEO of Upgradable and longtime AI safety proponent. James has been concerned about AI x-risk for 26 years. He now lives in Bali and has become an expert in prepping for a very different world after a warning shot or other major AI-related disaster, and he's helping others do the same.
    Please Donate Here To Help Promote For Humanity: https://www.paypal.com/paypalme/forhumanitypodcast
    EMAIL JOHN: forhumanitypodcast@gmail.com
    This podcast is not journalism. But it's not opinion either. This is a long form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth. For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and what you can do to help save humanity.
    RESOURCES:
    Max Winga's "A Stark Warning About Extinction": https://youtu.be/kDcPW5WtD58?si=i6IRy82xZ2PUOp22
    For Humanity Theme Music by Josef Ebner. Youtube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg | Website: https://josef.pictures
    SUBSCRIBE TO LIRON SHAPIRA'S DOOM DEBATES on YOUTUBE!! https://www.youtube.com/@DoomDebates
    BUY STEPHEN HANSON'S BEAUTIFUL AI RISK BOOK!!! https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
    JOIN THE FIGHT, help Pause AI!!!! Pause AI
    Join the Pause AI Weekly Discord Thursdays at 2pm EST: https://discord.com/invite/pVMWjddaW7
    22 Word Statement from Center for AI Safety: Statement on AI Risk | CAIS: https://www.safe.ai/work/statement-on-ai-risk
    Best Account on Twitter: AI Notkilleveryoneism Memes: https://twitter.com/AISafetyMemes
    Timestamps:
    Prepping Perspectives (00:00:00) Discussion on how to characterize preparedness efforts, ranging from common sense to doomsday prepping.
    Personal Experience in Emergency Management (00:00:06) Speaker shares background in emergency management and Red Cross, reflecting on past preparation efforts.
    Vision of AGI and Societal Collapse (00:00:58) Exploration of potential outcomes of AGI development and societal disruptions, including chaos and extinction.
    Geopolitical Safety in the Philippines (00:02:14) Consideration of living in the Philippines as a safer option during global conflicts and crises.
    Self-Reliance and Supply Chain Concerns (00:03:15) Importance of self-reliance and being off-grid to mitigate risks from supply chain breakdowns.
    Escaping Potential Threats (00:04:11) Discussion on the plausibility of escaping threats posed by advanced AI and the implications of being tracked.
    Nuclear Threats and Personal Safety (00:05:34) Speculation on the potential for nuclear conflict while maintaining a sense of safety in the Philippines.

    Episode #39 “Did AI-Risk Just Get Partisan?” For Humanity: An AI Risk Podcast

    Play Episode Listen Later Jul 31, 2024 83:01


    In Episode #39, host John Sherman talks with Matthew Taber, founder, advocate, and expert in AI-risk legislation. The conversation starts out with the various state AI laws that are coming up and moves into the shifting political landscape around AI-risk legislation in America in July 2024.
    Please Donate Here To Help Promote For Humanity: https://www.paypal.com/paypalme/forhumanitypodcast
    EMAIL JOHN: forhumanitypodcast@gmail.com
    This podcast is not journalism. But it's not opinion either. This is a long form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth. For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and what you can do to help save humanity.
    RESOURCES:
    Max Winga's "A Stark Warning About Extinction": https://youtu.be/kDcPW5WtD58?si=i6IRy82xZ2PUOp22
    For Humanity Theme Music by Josef Ebner. Youtube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg | Website: https://josef.pictures
    SUBSCRIBE TO LIRON SHAPIRA'S DOOM DEBATES on YOUTUBE!! https://www.youtube.com/@DoomDebates
    BUY STEPHEN HANSON'S BEAUTIFUL AI RISK BOOK!!! https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
    JOIN THE FIGHT, help Pause AI!!!! Pause AI
    Join the Pause AI Weekly Discord Thursdays at 2pm EST: https://discord.com/invite/pVMWjddaW7
    22 Word Statement from Center for AI Safety: Statement on AI Risk | CAIS: https://www.safe.ai/work/statement-on-ai-risk
    Best Account on Twitter: AI Notkilleveryoneism Memes: https://twitter.com/AISafetyMemes
    Timestamps:
    GOP's AI Regulation Stance (00:00:41)
    Welcome to Episode 39 (00:01:41)
    Trump's Assassination Attempt (00:03:41)
    Partisan Shift in AI Risk (00:04:09)
    Matthew Taber's Background (00:06:32)
    Tennessee's "ELVIS" Law (00:13:55)
    Bipartisan Support for ELVIS (00:15:49)
    California's Legislative Actions (00:18:58)
    Overview of California Bills (00:20:50)
    Lobbying Influence in California (00:23:15)
    Challenges of AI Training Data (00:24:26)
    The Original Sin of AI (00:25:19)
    Congress and AI Regulation (00:27:29)
    Investigations into AI Companies (00:28:48)
    The New York Times Lawsuit (00:29:39)
    Political Developments in AI Risk (00:30:24)
    GOP Platform and AI Regulation (00:31:35)
    Local vs. National AI Regulation (00:32:58)
    Public Awareness of AI Regulation (00:33:38)
    Engaging with Lawmakers (00:41:05)
    Roleplay Demonstration (00:43:48)
    Legislative Frameworks for AI (00:46:20)
    Coalition Against AI Development (00:49:28)
    Understanding AI Risks in Hollywood (00:51:00)
    Generative AI in Film Production (00:53:32)
    Impact of AI on Authenticity in Entertainment (00:56:14)
    The Future of AI-Generated Content (00:57:31)
    AI Legislation and Political Dynamics (01:00:43)
    Partisan Issues in AI Regulation (01:02:22)
    Influence of Celebrity Advocacy on AI Legislation (01:04:11)
    Understanding Legislative Processes for AI Bills (01:09:23)
    Presidential Approach to AI Regulation (01:11:47)
    State-Level Initiatives for AI Legislation (01:14:09)
    State vs. Congressional Regulation (01:15:05)
    Engaging Lawmakers (01:15:29)
    YouTube Video Views Explanation (01:15:37)
    Algorithm Challenges (01:16:48)
    Celebration of Life (01:18:08)
    Final Thoughts and Call to Action (01:19:13)

    Episode #39 Trailer “Did AI-Risk Just Get Partisan?” For Humanity: An AI Risk Podcast

    Play Episode Listen Later Jul 29, 2024 4:04


    In Episode #39 Trailer, host John Sherman talks with Matthew Taber, founder, advocate, and expert in AI-risk legislation. The conversation addresses the shifting political landscape around AI-risk legislation in America in July 2024.
    Please Donate Here To Help Promote For Humanity: https://www.paypal.com/paypalme/forhumanitypodcast
    EMAIL JOHN: forhumanitypodcast@gmail.com
    This podcast is not journalism. But it's not opinion either. This is a long form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth. For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and what you can do to help save humanity.
    RESOURCES:
    Max Winga's "A Stark Warning About Extinction": https://youtu.be/kDcPW5WtD58?si=i6IRy82xZ2PUOp22
    For Humanity Theme Music by Josef Ebner. Youtube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg | Website: https://josef.pictures
    SUBSCRIBE TO LIRON SHAPIRA'S DOOM DEBATES on YOUTUBE!! https://www.youtube.com/@DoomDebates
    BUY STEPHEN HANSON'S BEAUTIFUL AI RISK BOOK!!! https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
    JOIN THE FIGHT, help Pause AI!!!! Pause AI
    Join the Pause AI Weekly Discord Thursdays at 2pm EST: https://discord.com/invite/pVMWjddaW7
    22 Word Statement from Center for AI Safety: Statement on AI Risk | CAIS: https://www.safe.ai/work/statement-on-ai-risk
    Best Account on Twitter: AI Notkilleveryoneism Memes: https://twitter.com/AISafetyMemes
    Timestamps:
    Republican Party's AI Regulation Stance (00:00:41) The GOP platform aims to eliminate existing AI regulations, reflecting a shift in political dynamics.
    Bipartisanship in AI Issues (00:01:21) AI is initially a bipartisan concern, but quickly becomes a partisan issue amidst political maneuvering.
    Tech Companies' Frustration with Legislation (00:01:55) Major tech companies express dissatisfaction with California's AI bills, indicating a push for regulatory rollback.
    Public Sentiment vs. Party Platform (00:02:42) Discrepancy between the GOP platform on AI and average voter opinions, highlighting a disconnect in priorities.
    Polling on AI Regulation (00:03:26) Polling shows strong public support for AI regulation, raising questions about political implications and citizen engagement.

    Episode #38 “France vs. AGI” For Humanity: An AI Risk Podcast

    Play Episode Listen Later Jul 24, 2024 80:19


    In Episode #38, host John Sherman talks with Maxime Fournes, Founder, Pause AI France. With the third AI "Safety" Summit coming up in Paris in February 2025, we examine France's role in AI safety, revealing France to be among the very worst countries when it comes to taking AI risk seriously. How deep is madman Yann LeCun's influence in French society and government? And would France even join an international treaty? The conversation covers the potential for international treaties on AI safety, the psychological factors influencing public perception, and the power dynamics shaping AI's future.
    Please Donate Here To Help Promote For Humanity: https://www.paypal.com/paypalme/forhumanitypodcast
    EMAIL JOHN: forhumanitypodcast@gmail.com
    This podcast is not journalism. But it's not opinion either. This is a long form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth. For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and what you can do to help save humanity.
    Max Winga's "A Stark Warning About Extinction": https://youtu.be/kDcPW5WtD58?si=i6IRy82xZ2PUOp22
    For Humanity Theme Music by Josef Ebner. Youtube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg | Website: https://josef.pictures
    RESOURCES:
    SUBSCRIBE TO LIRON SHAPIRA'S DOOM DEBATES on YOUTUBE!! https://www.youtube.com/@DoomDebates
    BUY STEPHEN HANSON'S BEAUTIFUL AI RISK BOOK!!! https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
    JOIN THE FIGHT, help Pause AI!!!! Pause AI
    Join the Pause AI Weekly Discord Thursdays at 2pm EST: https://discord.com/invite/pVMWjddaW7
    22 Word Statement from Center for AI Safety: Statement on AI Risk | CAIS: https://www.safe.ai/work/statement-on-ai-risk
    Best Account on Twitter: AI Notkilleveryoneism Memes: https://twitter.com/AISafetyMemes
    TIMESTAMPS:
    Concerns about AI Risks in France (00:00:00)
    Optimism in AI Solutions (00:01:15)
    Introduction to the Episode (00:01:51)
    Max Winga's Powerful Clip (00:02:29)
    AI Safety Summit Context (00:04:20)
    Personal Journey into AI Safety (00:07:02)
    Commitment to AI Risk Work (00:21:33)
    France's AI Sacrifice (00:21:49)
    Impact of Efforts (00:21:54)
    Existential Risks and Choices (00:22:12)
    Underestimating Impact (00:22:25)
    Researching AI Risks (00:22:34)
    Weak Counterarguments (00:23:14)
    Existential Dread Theory (00:23:56)
    Global Awareness of AI Risks (00:24:16)
    France's AI Leadership Role (00:25:09)
    AI Policy in France (00:26:17)
    Influential Figures in AI (00:27:16)
    EU Regulation Sabotage (00:28:18)
    Committee's Risk Perception (00:30:24)
    Concerns about France's AI Development (00:32:03)
    International AI Treaties (00:32:36)
    Sabotaging AI Safety Summit (00:33:26)
    Quality of France's AI Report (00:34:19)
    Misleading Risk Analyses (00:36:06)
    Comparison to Historical Innovations (00:39:33)
    Rhetoric and Misinformation (00:40:06)
    Existential Fear and Rationality (00:41:08)
    Position of AI Leaders (00:42:38)
    Challenges of Volunteer Management (00:46:54)

    Episode #38 TRAILER “France vs. AGI” For Humanity: An AI Risk Podcast

    Play Episode Listen Later Jul 22, 2024 6:42


    In Episode #38 TRAILER, host John Sherman talks with Maxime Fournes, Founder, Pause AI France. With the third AI "Safety" Summit coming up in Paris in February 2025, we examine France's role in AI safety, revealing France to be among the very worst countries when it comes to taking AI risk seriously. How deep is madman Yann LeCun's influence in French society and government? And would France even join an international treaty?
    Please Donate Here To Help Promote For Humanity: https://www.paypal.com/paypalme/forhumanitypodcast
    EMAIL JOHN: forhumanitypodcast@gmail.com
    This podcast is not journalism. But it's not opinion either. This is a long form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth. For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and what you can do to help save humanity.
    For Humanity Theme Music by Josef Ebner. Youtube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg | Website: https://josef.pictures
    RESOURCES:
    SUBSCRIBE TO LIRON SHAPIRA'S DOOM DEBATES on YOUTUBE!! https://www.youtube.com/@DoomDebates
    BUY STEPHEN HANSON'S BEAUTIFUL AI RISK BOOK!!! https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
    JOIN THE FIGHT, help Pause AI!!!! Pause AI
    Join the Pause AI Weekly Discord Thursdays at 2pm EST: https://discord.com/invite/pVMWjddaW7
    22 Word Statement from Center for AI Safety: Statement on AI Risk | CAIS: https://www.safe.ai/work/statement-on-ai-risk
    Best Account on Twitter: AI Notkilleveryoneism Memes: https://twitter.com/AISafetyMemes
    TIMESTAMPS:
    Trust in AI Awareness in France (00:00:00) Discussion on France being uninformed about AI risks compared to other countries with AI labs.
    International Treaty Concerns (00:00:46) Speculation on France's reluctance to sign an international AI safety treaty.
    Personal Reflections on AI Risks (00:00:57) Speaker reflects on the dilemma of believing in AI risks and choosing between action or enjoyment.
    Underestimating Impact (00:01:13) The tendency of people to underestimate their potential impact on global issues.
    Researching AI Risks (00:01:50) Speaker shares their journey of researching AI risks and finding weak counterarguments.
    Critique of Counterarguments (00:02:23) Discussion on the absurdity of opposing views on AI risks and societal implications.
    Existential Dread and Rationality (00:02:42) Connection between existential fear and irrationality in discussions about AI safety.
    Shift in AI Safety Focus (00:03:17) Concerns about the diminishing focus on AI safety in upcoming summits.
    Quality of AI Strategy Report (00:04:11) Criticism of a recent French AI strategy report and plans to respond critically.
    Optimism about AI Awareness (00:05:04) Belief that understanding among key individuals can resolve AI safety issues.
    Power Dynamics in AI Decision-Making (00:05:38) Discussion on the disproportionate influence of a small group on global AI decisions.
    Cultural Perception of Impact (00:06:01) Reflection on societal beliefs that inhibit individual agency in effecting change.

    Episode #37 “Christianity vs. AGI” For Humanity: An AI Risk Podcast

    Play Episode Listen Later Jul 17, 2024 81:14


    In Episode #37, host John Sherman talks with writer Peter Biles. Peter is a Christian who often writes from that perspective. He is a prolific fiction writer and has written stories and essays for a variety of publications. He was born and raised in Ada, Oklahoma, and is a contributing writer and editor for Mind Matters. The conversation centers on the intersection of Christianity and AGI, and questions like: What is the role of faith in a world where no one works? Could religions unite to oppose AGI?
    Some of Peter Biles's related writing:
    https://mindmatters.ai/2024/07/ai-is-becoming-a-mass-tool-of-persuasion/
    https://mindmatters.ai/2022/10/technology-as-the-new-god-before-whom-all-others-bow/
    https://substack.com/@peterbiles
    Please Donate Here To Help Promote For Humanity: https://www.paypal.com/paypalme/forhumanitypodcast
    EMAIL JOHN: forhumanitypodcast@gmail.com
    This podcast is not journalism. But it's not opinion either. This is a long form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth. For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and what you can do to help save humanity.
    For Humanity Theme Music by Josef Ebner. Youtube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg | Website: https://josef.pictures
    RESOURCES:
    SUBSCRIBE TO LIRON SHAPIRA'S DOOM DEBATES on YOUTUBE!! https://www.youtube.com/@DoomDebates
    BUY STEPHEN HANSON'S BEAUTIFUL AI RISK BOOK!!! https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
    JOIN THE FIGHT, help Pause AI!!!! Pause AI
    Join the Pause AI Weekly Discord Thursdays at 2pm EST: https://discord.com/invite/pVMWjddaW7
    22 Word Statement from Center for AI Safety: Statement on AI Risk | CAIS: https://www.safe.ai/work/statement-on-ai-risk
    Best Account on Twitter: AI Notkilleveryoneism Memes: https://twitter.com/AISafetyMemes
    Matt Andersen - 'Magnolia' (JJ Cale Cover) LIVE at SiriusXM, Flagstaff, AZ 2004
    TIMESTAMPS:
    Christianity versus AGI (00:00:39)
    Concerns about AI (00:02:45)
    Christianity and Technology (00:05:30)
    Interview with Peter Biles (00:11:09)
    Effects of Social Media (00:18:03)
    Religious Perspective on AI (00:23:57)
    The implications of AI on Christian faith (00:24:05)
    The Tower of Babel metaphor (00:25:09)
    The role of humans as sub-creators (00:27:23)
    The impact of AI on human culture and society (00:30:33)
    The limitations of AI in storytelling and human connection (00:32:33)
    The intersection of faith and AI in a future world (00:41:35)
    Religious Leaders and AI (00:45:34)
    Human Exceptionalism (00:46:51)
    Interfaith Dialogue and AI (00:50:26)
    Religion and Abundance (00:53:42)
    Apocalyptic Language and AI (00:58:26)
    Hope in Human-Oriented Culture (01:04:32)
    Worshipping AI (01:07:55)
    Religion and AI (01:08:17)
    Celebration of Life (01:09:49)

    Episode #37 Trailer “Christianity vs. AGI” For Humanity: An AI Risk Podcast

    Play Episode Listen Later Jul 15, 2024 9:11


    In Episode #37 Trailer, host John Sherman talks with writer Peter Biles. Peter is a Christian who often writes from that perspective. He is a prolific fiction writer and has written stories and essays for a variety of publications. He was born and raised in Ada, Oklahoma, and is a contributing writer and editor for Mind Matters. The conversation centers on the intersection of Christianity and AGI, and questions like: What is the role of faith in a world where no one works? Could religions unite to oppose AGI?
    Some of Peter Biles's related writing:
    https://mindmatters.ai/2024/07/ai-is-becoming-a-mass-tool-of-persuasion/
    https://mindmatters.ai/2022/10/technology-as-the-new-god-before-whom-all-others-bow/
    https://substack.com/@peterbiles
    Please Donate Here To Help Promote For Humanity: https://www.paypal.com/paypalme/forhumanitypodcast
    This podcast is not journalism. But it's not opinion either. This is a long form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth. For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and what you can do to help save humanity.
    For Humanity Theme Music by Josef Ebner. Youtube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg | Website: https://josef.pictures
    RESOURCES:
    SUBSCRIBE TO LIRON SHAPIRA'S DOOM DEBATES on YOUTUBE!! https://www.youtube.com/@DoomDebates
    BUY STEPHEN HANSON'S BEAUTIFUL AI RISK BOOK!!! https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
    JOIN THE FIGHT, help Pause AI!!!! Pause AI
    Join the Pause AI Weekly Discord Thursdays at 2pm EST: https://discord.com/invite/pVMWjddaW7
    22 Word Statement from Center for AI Safety: Statement on AI Risk | CAIS: https://www.safe.ai/work/statement-on-ai-risk
    Best Account on Twitter: AI Notkilleveryoneism Memes: https://twitter.com/AISafetyMemes
    TIMESTAMPS:
    The impact of technology on human dignity (00:00:00) The speaker discusses the potential negative impact of technology on human dignity and the divine image.
    The embodiment of souls and human dignity (00:01:00) The speaker emphasizes the spiritual nature of human beings and the importance of human dignity, regardless of religion or ethnicity.
    The concept of a "sand god" and technological superiority (00:02:09) The conversation explores the cultural and religious implications of creating an intelligence superior to humans and the reference to a "sand god."
    The Tower of Babel and technology (00:03:25) The speaker references the story of the Tower of Babel from the book of Genesis and its metaphorical implications for technological advancements and human hubris.
    The impact of AI on communication and storytelling (00:05:26) The discussion delves into the impersonal nature of AI in communication and storytelling, highlighting the absence of human intention and soul.
    Human nature, materialism, and work (00:07:38) The conversation explores the deeper understanding of human nature, the restlessness of humans, and the significance of work and creativity.

    Episode #36 “The AI Risk Investigators: Inside Gladstone AI, Part 2” For Humanity: An AI Risk Podcast

    Play Episode Listen Later Jul 10, 2024 85:28


    In Episode #36, host John Sherman talks with Jeremie and Eduard Harris, CEO and CTO of Gladstone AI. Gladstone AI is the private company working most closely with the US government on assessing AI risk. The Gladstone Report, published in February, was the first public acknowledgment of the reality of AI risk by the US government in any way. These are two very important people doing incredibly important work. The full interview lasts more than 2 hours and will be broken into two shows; this is the second of the two.
    Gladstone AI Action Plan: https://www.gladstone.ai/action-plan
    TIME MAGAZINE ON THE GLADSTONE REPORT: https://time.com/6898967/ai-extinction-national-security-risks-report/
    SUBSCRIBE TO LIRON SHAPIRA'S DOOM DEBATES on YOUTUBE!! https://www.youtube.com/@DoomDebates
    Please Donate Here To Help Promote For Humanity: https://www.paypal.com/paypalme/forhumanitypodcast
    This podcast is not journalism. But it's not opinion either. This is a long form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth. For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and what you can do to help save humanity.
    For Humanity Theme Music by Josef Ebner. Youtube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg | Website: https://josef.pictures
    RESOURCES:
    BUY STEPHEN HANSON'S BEAUTIFUL AI RISK BOOK!!! https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
    JOIN THE FIGHT, help Pause AI!!!! Pause AI
    Join the Pause AI Weekly Discord Thursdays at 2pm EST: https://discord.com/invite/pVMWjddaW7
    22 Word Statement from Center for AI Safety: Statement on AI Risk | CAIS: https://www.safe.ai/work/statement-on-ai-risk
    Best Account on Twitter: AI Notkilleveryoneism Memes: https://twitter.com/AISafetyMemes
    TIMESTAMPS:
    The whistleblower's concerns (00:00:00)
    Introduction to the podcast (00:01:09)
    The urgency of addressing AI risk (00:02:18)
    The potential consequences of falling behind in AI (00:04:36)
    Transitioning to working on AI risk (00:06:33)
    Engagement with the State Department (00:08:07)
    Project assessment and public visibility (00:10:10)
    Motivation for taking on the detective work (00:13:16)
    Alignment with the government's safety culture (00:17:03)
    Potential government oversight of AI labs (00:20:50)
    The whistleblowers' concerns (00:21:52)
    Shifting control to the government (00:22:47)
    Elite group within the government (00:24:12)
    Government competence and allocation of resources (00:25:34)
    Political level and tech expertise (00:27:58)
    Challenges in government engagement (00:29:41)
    State department's engagement and assessment (00:31:33)
    Recognition of government competence (00:34:36)
    Engagement with frontier labs (00:35:04)
    Whistleblower insights and concerns (00:37:33)
    Whistleblower motivations (00:41:58)
    Engagements with AI Labs (00:42:54)
    Emotional Impact of the Work (00:43:49)
    Workshop with Government Officials (00:44:46)
    Challenges in Policy Implementation (00:45:46)
    Expertise and Insights (00:49:11)
    Future Engagement with US Government (00:50:51)
    Flexibility of Private Sector Entity (00:52:57)
    Impact on Whistleblowing Culture (00:55:23)
    Key Recommendations (00:57:03)
    Security and Governance of AI Technology (01:00:11)
    Obstacles and Timing in Hardware Development (01:04:26)
    The AI Lab Security Measures (01:04:50)
    Nvidia's Stance on Regulations (01:05:44)
    Export Controls and Governance Failures (01:07:26)
    Concerns about AGI and Alignment (01:13:16)
    Implications for Future Generations (01:16:33)
    Personal Transformation and Mental Health (01:19:23)
    Starting a Nonprofit for AI Risk Awareness (01:21:51)

    Episode #36 Trailer “The AI Risk Investigators: Inside Gladstone AI, Part 2” For Humanity: An AI Risk Podcast

    Play Episode Listen Later Jul 8, 2024 5:34


    In Episode #36 Trailer, host John Sherman talks with Jeremie and Eduard Harris, CEO and CTO of Gladstone AI. Gladstone AI is the private company working most closely with the US government on assessing AI risk. The Gladstone Report, published in February, was the first public acknowledgment of the reality of AI risk by the US government in any way. These are two very important people doing incredibly important work. The full interview lasts more than 2 hours and will be broken into two shows; this is the second of the two.
    Gladstone AI Action Plan: https://www.gladstone.ai/action-plan
    TIME MAGAZINE ON THE GLADSTONE REPORT: https://time.com/6898967/ai-extinction-national-security-risks-report/
    SUBSCRIBE TO LIRON SHAPIRA'S DOOM DEBATES on YOUTUBE!! https://www.youtube.com/@DoomDebates
    Please Donate Here To Help Promote For Humanity: https://www.paypal.com/paypalme/forhumanitypodcast
    This podcast is not journalism. But it's not opinion either. This is a long form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth. For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and what you can do to help save humanity.
    For Humanity Theme Music by Josef Ebner. Youtube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg | Website: https://josef.pictures
    RESOURCES:
    BUY STEPHEN HANSON'S BEAUTIFUL AI RISK BOOK!!! https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
    JOIN THE FIGHT, help Pause AI!!!! Pause AI
    Join the Pause AI Weekly Discord Thursdays at 2pm EST: https://discord.com/invite/pVMWjddaW7
    22 Word Statement from Center for AI Safety: Statement on AI Risk | CAIS: https://www.safe.ai/work/statement-on-ai-risk
    Best Account on Twitter: AI Notkilleveryoneism Memes: https://twitter.com/AISafetyMemes
    TIMESTAMPS:
    The assignment from the State Department (00:00:00) Discussion about the task given by the State Department team regarding the assessment of safety and security in frontier AI and advanced AI systems.
    Transition to detective work (00:00:30) The transition to a detective-like approach in gathering information and engaging with whistleblowers and clandestine meetings.
    Assessment of the AI safety community (00:01:05) A critique of the lack of action orientation and proactive approach in the AI safety community.
    Engagement with the Department of Defense (DoD) (00:02:57) Discussion about the engagement with the DoD, its existing safety culture, and the organizations involved in testing and evaluations.
    Shifting control to the government (00:03:54) Exploration of the need to shift control to the government and regulatory level for effective steering of the development of AI technology.
    Concerns about weaponization and loss of control (00:04:45) A discussion about concerns regarding weaponization and loss of control in AI labs and the need for more ambitious recommendations.

    Episode #35 “The AI Risk Investigators: Inside Gladstone AI, Part 1” For Humanity: An AI Risk Podcast

    Play Episode Listen Later Jul 3, 2024 61:19


    In Episode #35, host John Sherman talks with Jeremie and Eduard Harris, CEO and CTO of Gladstone AI. Gladstone AI is the private company working most closely with the US government on assessing AI risk. The Gladstone Report, published in February, was the first public acknowledgment of the reality of AI risk by the US government in any way. These are two very important people doing incredibly important work. The full interview lasts more than 2 hours and will be broken into two shows.
    Gladstone AI Action Plan: https://www.gladstone.ai/action-plan
    TIME MAGAZINE ON THE GLADSTONE REPORT: https://time.com/6898967/ai-extinction-national-security-risks-report/
    SUBSCRIBE TO LIRON SHAPIRA'S DOOM DEBATES on YOUTUBE!! https://www.youtube.com/@DoomDebates
    Please Donate Here To Help Promote For Humanity: https://www.paypal.com/paypalme/forhumanitypodcast
    This podcast is not journalism. But it's not opinion either. This is a long form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth. For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and what you can do to help save humanity.
    For Humanity Theme Music by Josef Ebner. Youtube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg | Website: https://josef.pictures
    RESOURCES:
    BUY STEPHEN HANSON'S BEAUTIFUL AI RISK BOOK!!! https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
    JOIN THE FIGHT, help Pause AI!!!! Pause AI
    Join the Pause AI Weekly Discord Thursdays at 2pm EST: https://discord.com/invite/pVMWjddaW7
    22 Word Statement from Center for AI Safety: Statement on AI Risk | CAIS: https://www.safe.ai/work/statement-on-ai-risk
    Best Account on Twitter: AI Notkilleveryoneism Memes: https://twitter.com/AISafetyMemes
    TIMESTAMPS:
    Sincerity and Sam Altman (00:00:00) Discussion on the perceived sincerity of Sam Altman and his actions, including insights into his character and motivations.
    Introduction to Gladstone AI (00:01:14) Introduction to Gladstone AI, its involvement with the US government on AI risk, and the purpose of the podcast episode.
    Doom Debates on YouTube (00:02:17) Promotion of the "Doom Debates" YouTube channel and its content, featuring discussions on AI doom and various perspectives on the topic.
    YC Experience and Sincerity in Startups (00:08:13) Insight into the Y Combinator (YC) experience and the emphasis on sincerity in startups, with personal experiences and observations shared.
    OpenAI and Sincerity (00:11:51) Exploration of sincerity in relation to OpenAI, including evaluations of the company's mission, actions, and the challenges it faces in the AI landscape.
    The scaling story (00:21:33) Discussion of the scaling story related to AI capabilities and the impact of increasing data, processing power, and training models.
    The call about GPT-3 (00:22:29) Eduard Harris receiving a call about the scaling story and the significance of GPT-3's capabilities, leading to a decision to focus on AI development.
    Transition from Y Combinator (00:24:42) Jeremie and Eduard Harris leaving their previous company and transitioning from Y Combinator to focus on AI development.
    Security concerns and exfiltration (00:31:35) Discussion about the security vulnerabilities and potential exfiltration of AI models from top labs, highlighting the inadequacy of security measures.
    Government intervention and security (00:38:18) Exploration of the potential for government involvement in providing security assets to protect AI technology from exfiltration and the need for a pause in development until labs are secure.
    Resource reallocation for safety and security (00:40:03) Discussion about the need to reallocate resources for safety, security, and alignment technology to ensure the responsible development of AI.
    OpenAI's computational resource allocation (00:42:10) Concerns about OpenAI's failure to allocate computational resources for safety and alignment efforts, as well as the departure of a safety-minded board member.
    China's Strategic Moves (00:43:07) Discussion on potential aggressive actions by China to prevent a permanent disadvantage in AI technology.
    China's Sincerity in AI Safety (00:44:29) Debate on the sincerity of China's commitment to AI safety and the influence of the CCP.
    Taiwan Semiconductor Manufacturing Company (TSMC) (00:47:47) Explanation of TSMC's role in fabricating advanced semiconductor chips and its impact on the AI race.
    US and China's Power Constraints (00:51:30) Comparison of the constraints faced by the US and China in terms of advanced chips and grid power.
    Nuclear Power and Renewable Energy (00:52:23) Discussion on the power sources being pursued by China and the US to address their respective constraints.
    Future Scenarios (00:56:20) Exploration of potential outcomes if China overtakes the US in AI technology.

    Episode #35 TRAILER “The AI Risk Investigators: Inside Gladstone AI, Part 1” For Humanity: An AI Risk Podcast

    Play Episode Listen Later Jul 1, 2024 4:35


In the Episode #35 TRAILER, host John Sherman talks with Jeremie and Eduard Harris, CEO and CTO of Gladstone AI. Gladstone AI is the private company working most closely with the US government on assessing AI risk. The Gladstone Report, published in February, was the first public acknowledgment by the US government of the reality of AI risk. These are two very important people doing incredibly important work. The full interview runs more than two hours and will be split across two shows.

TIME Magazine on the Gladstone Report: https://time.com/6898967/ai-extinction-national-security-risks-report/
Please Donate Here To Help Promote For Humanity: https://www.paypal.com/paypalme/forhumanitypodcast

This podcast is not journalism. But it's not opinion either. It is a long-form public service announcement. The show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth. For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

For Humanity Theme Music by Josef Ebner
YouTube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures

RESOURCES:
BUY STEPHEN HANSON'S BEAUTIFUL AI RISK BOOK!!! https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
JOIN THE FIGHT, help Pause AI!!!! https://pauseai.info
Join the Pause AI Weekly Discord, Thursdays at 2pm EST: https://discord.com/invite/pVMWjddaW7
22 Word Statement from Center for AI Safety: Statement on AI Risk | CAIS https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes https://twitter.com/AISafetyMemes

TIMESTAMPS:
Sam Altman's intensity (00:00:10) Sam Altman's intense demeanor and competence, as observed by the speaker.
Security risks of superintelligent AI (00:01:02) Concerns about losing control of superintelligent systems and the security vulnerabilities at top AI labs.
Silicon Valley's security hubris (00:02:04) A critique of Silicon Valley's overconfidence in technology and its lack of security measures against nation-state-level cyber threats.
China's AI capabilities (00:02:36) The security deficiency in the United States and the potential for China to gain better AI capabilities through security leaks.
Foreign actors' capacity for exfiltration (00:03:08) Foreign actors' incentives and capacity to exfiltrate frontier models, and the need to secure infrastructure before scaling and accelerating AI capabilities.

    Episode #34 - “The Threat of AI Autonomous Replication” For Humanity: An AI Risk Podcast

    Play Episode Listen Later Jun 26, 2024 77:23


In Episode #34, host John Sherman talks with Charbel-Raphaël Segerie, Executive Director of the Centre pour la sécurité de l'IA (France's Center for AI Security). Among the very important topics covered: autonomous AI self-replication, the potential for warning shots to go unnoticed because the public and the press are uneducated on AI risk, and the potential for a disastrous Yann LeCunnification of the upcoming February 2025 Paris AI Safety Summit.

Please Donate Here To Help Promote For Humanity: https://www.paypal.com/paypalme/forhumanitypodcast

This podcast is not journalism. But it's not opinion either. It is a long-form public service announcement. The show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth. For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

For Humanity Theme Music by Josef Ebner
YouTube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures

RESOURCES:
Charbel-Raphaël Segerie's LessWrong writing, with much more on many topics we covered: https://www.lesswrong.com/users/charbel-raphael
BUY STEPHEN HANSON'S BEAUTIFUL AI RISK BOOK!!! https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
JOIN THE FIGHT, help Pause AI!!!! https://pauseai.info
Join the Pause AI Weekly Discord, Thursdays at 2pm EST: https://discord.com/invite/pVMWjddaW7
22 Word Statement from Center for AI Safety: Statement on AI Risk | CAIS https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes https://twitter.com/AISafetyMemes

TIMESTAMPS:
The threat of AI autonomous replication (00:00:43)
Introduction to France's Center for AI Security (00:01:23)
Challenges in AI risk awareness in France (00:09:36)
The influence of Yann LeCun on AI risk perception in France (00:12:53)
Autonomous replication and adaptation of AI (00:15:25)
The potential impact of autonomous replication (00:27:24)
The dead internet scenario (00:27:38)
The potential existential threat (00:29:02)
Fast takeoff scenario (00:30:54)
Dangers of autonomous replication and adaptation (00:34:39)
Difficulty in recognizing warning shots (00:40:00)
Defining red lines for AI development (00:42:44)
Effective education strategies (00:46:36)
Impact on computer science students (00:51:27)
AI safety summit in Paris (00:53:53)
The summit and AI safety report (00:55:02)
Potential impact of key figures (00:56:24)
Political influence on AI risk (00:57:32)
Accelerationism in political context (01:00:37)
Optimism and hope for the future (01:04:25)
Chances of a meaningful pause (01:08:43)

    Episode #34 TRAILER - “The Threat of AI Autonomous Replication” For Humanity: An AI Risk Podcast

    Play Episode Listen Later Jun 24, 2024 4:43


In the Episode #34 TRAILER, host John Sherman talks with Charbel-Raphaël Segerie, Executive Director of the Centre pour la sécurité de l'IA (France's Center for AI Security). Among the very important topics covered: autonomous AI self-replication, the potential for warning shots to go unnoticed because the public and the press are uneducated on AI risk, and the potential for a disastrous Yann LeCunnification of the upcoming February 2025 Paris AI Safety Summit.

Please Donate Here To Help Promote For Humanity: https://www.paypal.com/paypalme/forhumanitypodcast

This podcast is not journalism. But it's not opinion either. It is a long-form public service announcement. The show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth. For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

For Humanity Theme Music by Josef Ebner
YouTube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures

RESOURCES:
BUY STEPHEN HANSON'S BEAUTIFUL AI RISK BOOK!!! https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
JOIN THE FIGHT, help Pause AI!!!! https://pauseai.info
Join the Pause AI Weekly Discord, Thursdays at 2pm EST: https://discord.com/invite/pVMWjddaW7
22 Word Statement from Center for AI Safety: Statement on AI Risk | CAIS https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes https://twitter.com/AISafetyMemes

TIMESTAMPS:
The exponential growth of AI (00:00:00) The potential exponential growth of AI and its implications for the future.
The mass of AI systems as an existential threat (00:01:05) The potential threat posed by the sheer mass of AI systems and its impact on existential risk.
The concept of warning shots (00:01:32) Warning shots in the context of AI safety and the need for public understanding.
The importance of advocacy and public understanding (00:02:30) The significance of advocacy, public awareness, and the safety community's role in creating and recognizing warning shots.
OpenAI's superalignment team resignation (00:04:00) The resignation of OpenAI's superalignment team and its potential significance as a warning shot.
