Podcasts about risk

  • 56 PODCASTS
  • 259 EPISODES
  • 45m AVG DURATION
  • 5 WEEKLY NEW EPISODES
  • LATEST EPISODE: May 14, 2025

POPULARITY

[Popularity trend chart, 2017–2024]


Latest podcast episodes about risk

IRMI Podcast
Navigating the Future of Agribusiness: Insights on Insurance, Technology, and Family Farms

May 14, 2025 · 24:40


Why are affordability and tech advances important to farmers? This 24-minute episode of The Edge of Risk Podcast by IRMI features Craig Smith, vice president of Agribusiness and Food Services for Alliant Insurance Services and a keynote panelist for the 2025 IRMI Emmett J. Vaughan Agribusiness Conference. Listen to this impactful discussion of these challenges and what's happening on the family farm. After this podcast, you will have even more appreciation for the importance of strong advocates for farmers and ranchers.

IRMI Podcast
How Private Equity Is Impacting the Construction Industry

May 12, 2025 · 15:41


Private equity is investing in the construction industry—but what does that mean? In The Edge of Risk Podcast by IRMI, join Rob Langtry, global private equity director at Liberty Mutual, as he explains why private equity is interested in the construction industry. In this 15-minute podcast, gain an understanding of the typical structure of a private equity program in construction and learn how insurance is generally addressed in such transactions.

For Humanity: An AI Safety Podcast
Kevin Roose Talks AI Risk | Episode #65 | For Humanity: An AI Risk Podcast

May 12, 2025 · 85:20


For Humanity Episode #65: Kevin Roose on AGI, AI Risk, and What Comes Next

IRMI Podcast
What Tariffs Are Really Doing to Insurance Costs

May 2, 2025 · 12:03


Join Joel Appelbaum, chief content officer at IRMI, as he dives into the real economic impact of global trade and tariffs with special guest Bob Passmore, department vice president of personal lines at the American Property Casualty Insurance Association. In this short, 12-minute episode of The Edge of Risk Podcast by IRMI, you'll explore how tariffs influence claims costs, repair cycles, and underwriting strategies.

For Humanity: An AI Safety Podcast
Seventh Grader vs AI Risk | Episode #64 | For Humanity: An AI Risk Podcast

Apr 22, 2025 · 102:04


In Episode #64, host John Sherman interviews seventh grader Dylan Pothier, his mom Bridget, and his teacher Renee DiPietro. Dylan is an award-winning student author who is concerned about AI risk. (FULL INTERVIEW STARTS AT 00:33:34)

Sam Altman/Chris Anderson @ TED: https://www.youtube.com/watch?v=5MWT_doo68k

Check out our partner channel: Lethal Intelligence AI
Lethal Intelligence AI - Home: https://lethalintelligence.ai

FOR HUMANITY MONTHLY DONATION SUBSCRIPTION LINKS:
$1 MONTH https://buy.stripe.com/7sI3cje3x2Zk9SodQT
$10 MONTH https://buy.stripe.com/5kAbIP9Nh0Rc4y46oo
$25 MONTH https://buy.stripe.com/3cs9AHf7B9nIggM4gh
$100 MONTH https://buy.stripe.com/aEU007bVp7fAfcI5km

BUY LOUIS BERMAN'S NEW BOOK ON AMAZON!!! https://a.co/d/8WSNNuo

Get Involved!
EMAIL JOHN: forhumanitypodcast@gmail.com
SUPPORT PAUSE AI: https://pauseai.info/
SUPPORT STOP AI: https://www.stopai.info/

SUBSCRIBE TO LIRON SHAPIRA'S DOOM DEBATES on YOUTUBE! / @doomdebates

IRMI Podcast
Strategic Approaches to Global Captive Insurance

Apr 17, 2025 · 15:06


In this episode of The Edge of Risk Podcast, host Joel Appelbaum speaks with Steven Bauman, head of global programs and captive practice for AXA XL in North America. With over 35 years in the insurance industry, Steven brings a depth of expertise on how multinational organizations can structure captives to navigate regulatory complexity, diversify risk, and support global growth strategies. The conversation covers best practices for aligning captive programs with parent company objectives, the importance of selecting the right partners, and how emerging risks—from cyber threats to climate exposures—are influencing the future of captive insurance. Steven also shares reflections on his long career and offers final advice for companies looking to launch or expand their global captive footprint.

IRMI Podcast
Water Damage Mitigation: Trends and Risk Management Strategies

Apr 14, 2025 · 26:54


Water damage is a perpetual risk for all insureds, but especially so for contractors. This month, The Edge of Risk Podcast by IRMI welcomes Tony Grieser, technical director of construction loss control at Nationwide, for a thorough discussion of water damage mitigation programs. In this episode, learn about key tools a construction insured can utilize to reduce water damage losses and gain clear action items to establish and implement a water damage mitigation program for contractors of every size.

IRMI Podcast
Empowering Farmers: Insights from California Farm Bureau

Apr 11, 2025 · 24:24


What's the raison d'être of American Farm Bureau, and how does California Farm Bureau fit into that? This 24-minute episode of The Edge of Risk Podcast by IRMI features Dan Durheim, chief operating officer of the California Farm Bureau and a keynote panelist at the 2025 IRMI Emmett J. Vaughan Agribusiness Conference (AgriCon). Listen as Mr. Durheim explains the value proposition of Farm Bureau and why it has remained strong since its start in 1919. After this podcast, you will have even more appreciation for the organization that serves as the "voice of the farmer."

For Humanity: An AI Safety Podcast
Justice For Suchir | Episode #63 | For Humanity: An AI Risk Podcast

Apr 11, 2025 · 79:43


In an emotional interview, host John Sherman interviews Poornima Rao and Balaji Ramamurthy, the parents of Suchir Balaji. (FULL INTERVIEW STARTS AT 00:18:38)

Suchir Balaji was a 26-year-old artificial intelligence researcher who worked at OpenAI. He was involved in developing models like GPT-4 and WebGPT. In October 2024, he publicly accused OpenAI of violating U.S. copyright laws by using proprietary data to train AI models, arguing that such practices harmed original content creators. His essay, "When does generative AI qualify for fair use?", gained attention and was cited in ongoing lawsuits against OpenAI. Suchir left OpenAI in August 2024, expressing concerns about the company's ethics and the potential harm of AI to humanity. He planned to start a nonprofit focused on machine learning and neuroscience. On October 23, 2024, he was featured in the New York Times speaking out against OpenAI.

On November 26, 2024, he was found dead in his San Francisco apartment from a gunshot wound. The initial autopsy ruled it a suicide, noting the presence of alcohol, amphetamines, and GHB in his system. However, his parents contested this finding, commissioning a second autopsy that suggested a second gunshot wound was missed in the initial examination. They also pointed to other injuries and questioned the presence of GHB, suggesting foul play. Despite these claims, authorities reaffirmed the suicide ruling. The case has attracted public attention, with figures like Elon Musk and Congressman Ro Khanna calling for further investigation. Suchir's parents continue to push for justice and truth.

Suchir's Website: https://suchir.net/fair_use.html

FOR HUMANITY MONTHLY DONATION SUBSCRIPTION LINKS:
$1 MONTH https://buy.stripe.com/7sI3cje3x2Zk9SodQT
$10 MONTH https://buy.stripe.com/5kAbIP9Nh0Rc4y46oo
$25 MONTH https://buy.stripe.com/3cs9AHf7B9nIggM4gh
$100 MONTH https://buy.stripe.com/aEU007bVp7fAfcI5km

Lethal Intelligence AI - Home: https://lethalintelligence.ai

BUY LOUIS BERMAN'S NEW BOOK ON AMAZON!!! https://a.co/d/8WSNNuo

Get Involved!
EMAIL JOHN: forhumanitypodcast@gmail.com
SUPPORT PAUSE AI: https://pauseai.info/
SUPPORT STOP AI: https://www.stopai.info/

SUBSCRIBE TO LIRON SHAPIRA'S DOOM DEBATES on YOUTUBE! / @doomdebates

IRMI Podcast
Innovative Captive Insurance Solutions: Fronting and Nonadmitted Paper for Hard-To-Place Risks

Mar 28, 2025 · 16:13


In this episode of The Edge of Risk Podcast by IRMI, host Joel Appelbaum sits down with Jeremy Colombik, president of Management Services International (MSI), to explore how captive insurance can solve hard-to-place risk challenges. Drawing from over 2 decades of industry experience, Jeremy unpacks the advantages of integrating unrated and nonadmitted paper into captive structures, including cost efficiency, greater flexibility, and custom coverage solutions. The conversation dives deep into fronting arrangements, regulatory considerations, and the evolving use of captives across high-risk sectors. Whether it's environmental exposures, cyber liability, or excess layer risks, Jeremy explains how businesses can use captives to craft tailored strategies that go beyond what the traditional market offers. Tune in for insights on quarterbacking a successful captive, structuring reinsurance relationships, and ensuring compliance while building innovative programs that stand the test of time.

For Humanity: An AI Safety Podcast
Keep The Future Human | Episode #62 | For Humanity: An AI Risk Podcast

Mar 26, 2025 · 107:12


Host John Sherman conducts an important interview with Anthony Aguirre, Executive Director of the Future of Life Institute. The Future of Life Institute reached out to For Humanity to see if Anthony could come on to promote his very impressive new campaign called Keep The Future Human. The campaign includes a book, an essay, a website, and a video; it's all incredible work. Please check it out: https://keepthefuturehuman.ai/

John and Anthony have a broad-ranging AI risk conversation, covering in some detail Anthony's four essential measures for a human future. They also discuss parenting into this unknown future.

In 2021, the Future of Life Institute received a donation in cryptocurrency of more than $650 million from a single donor. With AGI doom bearing down on humanity, arriving any day now, AI risk communications floundering, the public in the dark still, and that massive war chest gathering dust in a bank, John asks Anthony the uncomfortable but necessary question: What is FLI waiting for to spend the money? Then John asks Anthony for $10 million to fund creative media projects under John's direction. John is convinced that with $10M he could, within six months, make AI existential risk dinner-table conversation on every street in America. John has developed a detailed plan that would launch within 24 hours of the grant award. We don't have a single day to lose.

https://futureoflife.org/

BUY LOUIS BERMAN'S NEW BOOK ON AMAZON!!! https://a.co/d/8WSNNuo

FOR HUMANITY MONTHLY DONATION SUBSCRIPTION LINKS:
$1 MONTH https://buy.stripe.com/7sI3cje3x2Zk9SodQT
$10 MONTH https://buy.stripe.com/5kAbIP9Nh0Rc4y46oo
$25 MONTH https://buy.stripe.com/3cs9AHf7B9nIggM4gh
$100 MONTH https://buy.stripe.com/aEU007bVp7fAfcI5km

Get Involved!
EMAIL JOHN: forhumanitypodcast@gmail.com
SUPPORT PAUSE AI: https://pauseai.info/
SUPPORT STOP AI: https://www.stopai.info/

Check out our partner channel: Lethal Intelligence AI
Lethal Intelligence AI - Home: https://lethalintelligence.ai

SUBSCRIBE TO LIRON SHAPIRA'S DOOM DEBATES on YOUTUBE! / @doomdebates

Explore our other video content on YouTube, where you'll find more insights into AI risk along with relevant social media links.
YouTube: / @forhumanitypodcast

Know Your Risk Radio with Zach Abraham, Chief Investment Officer, Bulwark Capital Management

Zach and Chase break down this week in the market.

IRMI Podcast
Know about the Agricultural Claims Association?

Mar 17, 2025 · 8:17


How different is a "regular" property claim from an agricultural property claim? The answer could be—significant. This 8-minute episode of The Edge of Risk Podcast by IRMI features a Snap Talk by Julie Overdeer, executive director and partner at the Agricultural Claims Association, LLC (ACA). The mission of the ACA is to provide agricultural claims professionals with a resource that offers training, education, and support. After this Snap Talk, you will have even more appreciation for the importance of this specialized knowledge.

Cybercrime Magazine Podcast
BEHAVE: A Human Risk Podcast. Adam Marre, CISO at Arctic Wolf. Sponsored By CybSafe.

Mar 12, 2025 · 19:15


Adam Marre is the CISO at Arctic Wolf. In this episode, he joins Oz Alashe, founder and CEO at CybSafe, and host Scott Schober to discuss security awareness training and human risk management, including his experience as a special agent with the FBI, how organizations can implement successful strategies, and more. BEHAVE: A Human Risk Podcast is brought to you by CybSafe, developers of the Human Risk Management Platform. Learn more at https://cybsafe.com.

IRMI Podcast
Insurance Considerations for Adaptive Reuse Projects in Construction

Mar 12, 2025 · 16:18


The Edge of Risk Podcast by IRMI welcomes Kerry Powers, national director of large accounts at Gallagher, and Ted Way, senior vice president at Gallagher, for an insightful discussion on insurance considerations for adaptive reuse projects in construction. In this 16-minute episode, gain valuable insights into the increasing prevalence of these projects and the tax incentives some municipalities may provide for adaptive reuse developments. Learn the importance of the insurance market in keeping pace with this trend and the role of technology in providing insurers with the confidence to cover adaptive reuse projects.

For Humanity: An AI Safety Podcast
Dark Patterns In AI | Episode #61 | For Humanity: An AI Risk Podcast

Mar 12, 2025 · 91:02


Host John Sherman interviews Esben Kran, CEO of Apart Research, about a broad range of AI risk topics. Most importantly, the discussion covers a growing for-profit AI risk business landscape and Apart's recent report on dark patterns in LLMs. We hear about the benchmarking of new models all the time, but this project has successfully identified some key dark patterns in these models.

MORE FROM OUR SPONSOR: https://www.resist-ai.agency/

BUY LOUIS BERMAN'S NEW BOOK ON AMAZON!!! https://a.co/d/8WSNNuo

Apart Research DarkBench Report: https://www.apartresearch.com/post/uncovering-model-manipulation-with-darkbench

(FULL INTERVIEW STARTS AT 00:09:30)

FOR HUMANITY MONTHLY DONATION SUBSCRIPTION LINKS:
$1 MONTH https://buy.stripe.com/7sI3cje3x2Zk9SodQT
$10 MONTH https://buy.stripe.com/5kAbIP9Nh0Rc4y46oo
$25 MONTH https://buy.stripe.com/3cs9AHf7B9nIggM4gh
$100 MONTH https://buy.stripe.com/aEU007bVp7fAfcI5km

Get Involved!
EMAIL JOHN: forhumanitypodcast@gmail.com
SUPPORT PAUSE AI: https://pauseai.info/
SUPPORT STOP AI: https://www.stopai.info/

Check out our partner channel: Lethal Intelligence AI
Lethal Intelligence AI - Home: https://lethalintelligence.ai

SUBSCRIBE TO LIRON SHAPIRA'S DOOM DEBATES on YOUTUBE! / @doomdebates

Explore our other video content on YouTube, where you'll find more insights into AI risk along with relevant social media links.
YouTube: / @forhumanitypodcast

Cybercrime Magazine Podcast
BEHAVE: A Human Risk Podcast. Ariel Saldin Weintraub, CISO at Aon. Sponsored By CybSafe.

Mar 3, 2025 · 21:47


Ariel Saldin Weintraub is the CISO at Aon. In this episode, she joins Oz Alashe, founder and CEO at CybSafe, and host Scott Schober to discuss security awareness training and human risk management, including her experience in the CISO role at MassMutual, how being a leader in the industry has influenced her approach to human cybersecurity efforts, and more. BEHAVE: A Human Risk Podcast is brought to you by CybSafe, developers of the Human Risk Management Platform. Learn more at https://cybsafe.com.

For Humanity: An AI Safety Podcast
AI Risk Rising | Episode #60 | For Humanity: An AI Risk Podcast

Feb 28, 2025 · 103:01


Host John Sherman interviews Pause AI Global founder Joep Meindertsma following the AI summits in Paris. The discussion begins with the dire moment we are in, the stakes, and the failure of our institutions to respond, before turning into a far-ranging discussion of AI risk reduction communications strategies. (FULL INTERVIEW STARTS AT)

FOR HUMANITY MONTHLY DONATION SUBSCRIPTION LINKS:
$1 MONTH https://buy.stripe.com/7sI3cje3x2Zk9SodQT
$10 MONTH https://buy.stripe.com/5kAbIP9Nh0Rc4y46oo
$25 MONTH https://buy.stripe.com/3cs9AHf7B9nIggM4gh
$100 MONTH https://buy.stripe.com/aEU007bVp7fAfcI5km

Get Involved!
SUPPORT PAUSE AI: https://pauseai.info/
SUPPORT STOP AI: https://www.stopai.info/about
EMAIL JOHN: forhumanitypodcast@gmail.com

RESOURCES:
BUY LOUIS BERMAN'S NEW BOOK ON AMAZON!!! https://a.co/d/8WSNNuo
CHECK OUT MAX WINGA'S FULL PODCAST: Communicating AI Extinction Risk to the Public - w/ Prof. Will Fithian

Subscribe to our partner channel: Lethal Intelligence AI
Lethal Intelligence AI - Home
https://www.youtube.com/@lethal-intelligence
https://lethalintelligence.ai

SUBSCRIBE TO LIRON SHAPIRA'S DOOM DEBATES on YOUTUBE! https://www.youtube.com/@DoomDebates

To learn more about AI risk rising, please feel free to visit our YouTube channel. Topics covered in this video: AI, AI risk, AI safety, robots, humanoid robots, AGI.

Explore our other video content on YouTube, where you'll find more insights into AI risk along with relevant social media links.
YouTube: / @forhumanitypodcast

Cybercrime Magazine Podcast
BEHAVE: A Human Risk Podcast. Rinki Sethi, Chief Information Security Officer. Sponsored By CybSafe.

Feb 27, 2025 · 15:00


Rinki Sethi is an experienced CISO (Chief Information Security Officer) and board member in the cybersecurity industry. In this episode, she joins Oz Alashe, founder and CEO at CybSafe, and host Scott Schober to discuss security awareness training and human risk management, including effective strategies, innovative approaches, and more. BEHAVE: A Human Risk Podcast is brought to you by CybSafe, developers of the Human Risk Management Platform. Learn more at https://cybsafe.com.

Cybercrime Magazine Podcast
BEHAVE: A Human Risk Podcast. Tammy Klotz, CISO at Trinseo. Sponsored By CybSafe.

Feb 24, 2025 · 17:32


Tammy Klotz, CISO at Trinseo, has over three decades of diverse experience in the manufacturing industry, specializing in cybersecurity and transformational leadership. In this episode, she joins Oz Alashe, founder and CEO at CybSafe, and host Heather Engel to discuss security awareness training and human risk management, including how organizations can prioritize human risk management and security awareness training for employees alongside other organizational security concerns, and more. BEHAVE: A Human Risk Podcast is brought to you by CybSafe, developers of the Human Risk Management Platform. Learn more at https://cybsafe.com.

Cybercrime Magazine Podcast
BEHAVE: A Human Risk Podcast. Joe Aiello, Suffolk Credit Union. Sponsored By CybSafe.

Feb 12, 2025 · 18:21


Joe Aiello is the Vice President of Infrastructure & Cybersecurity at Suffolk Credit Union, an award-winning Long Island credit union. In this episode, he joins Oz Alashe, founder and CEO at CybSafe, and host Scott Schober to discuss security awareness training and human risk management, including the unique needs of credit unions when it comes to cybersecurity, how leaders can protect and empower employees, and more. BEHAVE: A Human Risk Podcast is brought to you by CybSafe, developers of the Human Risk Management Platform. Learn more at https://cybsafe.com.

For Humanity: An AI Safety Podcast
Smarter-Than-Human Robots? | Episode #59 | For Humanity: An AI Risk Podcast

Feb 11, 2025 · 102:14


Host John Sherman interviews Jad Tarifi, CEO of Integral AI, about his company's work to try to create a world of trillions of AGI-enabled robots by 2035. Jad led Google's first generative AI team, and the views of his former colleague Geoffrey Hinton on existential risk from advanced AI come up more than once.

FOR HUMANITY MONTHLY DONATION SUBSCRIPTION LINKS:
$1 MONTH https://buy.stripe.com/7sI3cje3x2Zk9SodQT
$10 MONTH https://buy.stripe.com/5kAbIP9Nh0Rc4y46oo
$25 MONTH https://buy.stripe.com/3cs9AHf7B9nIggM4gh
$100 MONTH https://buy.stripe.com/aEU007bVp7fAfcI5km

Get Involved!
EMAIL JOHN: forhumanitypodcast@gmail.com
SUPPORT PAUSE AI: https://pauseai.info/
SUPPORT STOP AI: https://www.stopai.info/about

RESOURCES:
Integral AI: https://www.integral.ai/
John's chat with ChatGPT: https://chatgpt.com/share/679ee549-2c38-8003-9c1e-260764da1a53

Check out our partner channel: Lethal Intelligence AI
Lethal Intelligence AI - Home: https://lethalintelligence.ai

SUBSCRIBE TO LIRON SHAPIRA'S DOOM DEBATES on YOUTUBE! https://www.youtube.com/@DoomDebates

To learn more about smarter-than-human robots, please feel free to visit our YouTube channel. Topics covered in this video: AI, AI risk, AI safety, robots, humanoid robots, AGI.

Explore our other video content on YouTube, where you'll find more insights into AI risk along with relevant social media links.
YouTube: / @forhumanitypodcast

Cybercrime Magazine Podcast
BEHAVE: A Human Risk Podcast. Kirsten Davies, Institute for Cyber Civics. Sponsored By CybSafe.

Feb 7, 2025 · 32:01


Kirsten Davies is the founder and CEO of the Institute for Cyber Civics and the former CISO of many well-known organizations, including Unilever and The Estée Lauder Companies Inc. In this episode, she joins Oz Alashe, founder and CEO at CybSafe, and host Charlie Osborne to discuss security awareness training and human risk management, including best practices for CISOs and security leaders at large enterprises, and more. BEHAVE: A Human Risk Podcast is brought to you by CybSafe, developers of the Human Risk Management Platform. Learn more at https://cybsafe.com.

Cybercrime Magazine Podcast
BEHAVE: A Human Risk Podcast. Adeel Saeed, CTO, Kyndryl. Sponsored By CybSafe.

Jan 28, 2025 · 18:14


Adeel Saeed was a CISO in his last 2 roles and is now the CTO at Kyndryl. In this episode, he joins Oz Alashe, founder and CEO at CybSafe, and host Scott Schober to discuss security awareness training and human risk management, including best practices for CISOs and security leaders at large enterprises, new risks posed by AI-powered phishing, and more. BEHAVE: A Human Risk Podcast is brought to you by CybSafe, developers of the Human Risk Management Platform. Learn more at https://cybsafe.com

For Humanity: An AI Safety Podcast
2025 AI Risk Preview | For Humanity: An AI Risk Podcast | Episode #57

Jan 13, 2025 · 100:10


What will 2025 bring? Sam Altman says AGI is coming in 2025. Agents will arrive for sure. Military use will expand greatly. Will we get a warning shot? Will we survive the year? In Episode #57, host John Sherman interviews AI Safety Research Engineer Max Winga about the latest in AI advances and risks and the year to come.

FOR HUMANITY MONTHLY DONATION SUBSCRIPTION LINKS:
$1 MONTH https://buy.stripe.com/7sI3cje3x2Zk9SodQT
$10 MONTH https://buy.stripe.com/5kAbIP9Nh0Rc4y46oo
$25 MONTH https://buy.stripe.com/3cs9AHf7B9nIggM4gh
$100 MONTH https://buy.stripe.com/aEU007bVp7fAfcI5km

Anthropic Alignment Faking Video: https://www.youtube.com/watch?v=9eXV64O2Xp8&t=1s
Neil DeGrasse Tyson Video: https://www.youtube.com/watch?v=JRQDc55Aido&t=579s
Max Winga's Amazing Speech: https://www.youtube.com/watch?v=kDcPW5WtD58

Get Involved!
EMAIL JOHN: forhumanitypodcast@gmail.com
SUPPORT PAUSE AI: https://pauseai.info/
SUPPORT STOP AI: https://www.stopai.info/about

Check out our partner channel: Lethal Intelligence AI
Lethal Intelligence AI - Home: https://lethalintelligence.ai

SUBSCRIBE TO LIRON SHAPIRA'S DOOM DEBATES on YOUTUBE!! https://www.youtube.com/@DoomDebates

BUY STEPHEN HANSON'S BEAUTIFUL AI RISK BOOK!!! https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom

22 Word Statement from Center for AI Safety: Statement on AI Risk | CAIS https://www.safe.ai/work/statement-on-ai-risk

Best Account on Twitter: AI Notkilleveryoneism Memes https://twitter.com/AISafetyMemes

For Humanity: An AI Safety Podcast
AGI Goes To Washington | For Humanity: An AI Risk Podcast | Episode #56

Dec 19, 2024 · 74:21


FOR HUMANITY MONTHLY DONATION SUBSCRIPTION LINKS:
$1 MONTH https://buy.stripe.com/7sI3cje3x2Zk9S...
$10 MONTH https://buy.stripe.com/5kAbIP9Nh0Rc4y...
$25 MONTH https://buy.stripe.com/3cs9AHf7B9nIgg...
$100 MONTH https://buy.stripe.com/aEU007bVp7fAfc...

In Episode #56, host John Sherman travels to Washington DC to lobby House and Senate staffers for AI regulation along with Felix De Simone and Louis Berman of Pause AI. We unpack what we saw and heard as we presented AI risk to the people who have the power to make real change.

SUPPORT PAUSE AI: https://pauseai.info/
SUPPORT STOP AI: https://www.stopai.info/about
EMAIL JOHN: forhumanitypodcast@gmail.com

Check out our partner channel: Lethal Intelligence AI
Lethal Intelligence AI - Home: https://lethalintelligence.ai

SUBSCRIBE TO LIRON SHAPIRA'S DOOM DEBATES on YOUTUBE!! / @doomdebates

BUY STEPHEN HANSON'S BEAUTIFUL AI RISK BOOK!!! https://stephenhansonart.bigcartel.co...

22 Word Statement from Center for AI Safety: Statement on AI Risk | CAIS https://www.safe.ai/work/statement-on...

Best Account on Twitter: AI Notkilleveryoneism Memes / aisafetymemes

Quantcast – a Risk.net Cutting Edge podcast
11/12/24 Risk Podcast - Alexei Kondratyev

Dec 19, 2024 · 50:05


Alexei Kondratyev on quantum computing

Mentors on the Mic
The 7 "Rules" That Actors Should Break w/ Joshua Morgan of the Creative Risk podcast

Dec 9, 2024 · 40:51


Joshua Morgan is a diverse singing actor who has worked on Broadway and regionally at esteemed theaters such as Woolly Mammoth Theatre Company, Arena Stage, Theatre Under the Stars, Folger Theatre, Triad Stage, and Signature Theatre. His television appearances include "FBI," "Law and Order," "Law and Order: Organized Crime," "Law and Order: SVU," "Lincoln Rhyme," and HBO's "Paterno." Joshua co-founded and served as Artistic Director of the Helen Hayes Award-winning No Rules Theatre Company in Washington, DC, and Winston-Salem, NC.

Mike Labbadia (co-host) is an actor, producer, writer, and filmmaker, a "multipassionate," you could say. As a company member of BEDLAM, he's appeared Off-Broadway in shows from Arcadia to Julius Caesar. Regional work includes Alabama Shakespeare Festival, Gulfshore Playhouse, Penguin Rep, and Virginia Rep. He's in the upcoming film "On the End" with Tim Blake Nelson. As a producer, he's had films premiere at prestigious festivals around the world.

In this episode, we discuss these "rules" actors should break:
06:15 - Rule 1 - Don't pitch yourself (Joshua sent 37 self-pitching emails, without an agent, and booked a Broadway show)
09:03 - Rule 2 - Don't complain to our reps
12:45 - Rule 3 - We can only be one "kind of artist"
17:15 - Rule 4 - Wait until you have a big project to grow an audience
21:20 - Rule 5 - Don't actively pursue industry relationships
15:52 - Rule 6 - Take on gig work to support your career
30:29 - Rule 7 - Let the industry define you as an artist

PODCAST DESCRIPTION: "CREATIVE RISK" is a new podcast hosted by actors Joshua Morgan and Mike Labbadia of Artist's Strategy, where they explore all things art, entrepreneurialism, and everything in between. The acting industry is more volatile and competitive than ever before; therefore, the artist must evolve in order to take radical ownership over their creative businesses. Each episode, Mike and Joshua get raw and unfiltered, giving hot takes and cutting-edge strategies on how to build a sustainable career in the arts.

Guest links for Joshua Morgan: Website | IMDb | Instagram | Broadway World
For the podcast: Spotify | Apple Podcasts | YouTube | Instagram | TikTok

Host links:
Instagram: @MentorsontheMic | @MichelleSimoneMiller
Twitter: @MentorsontheMic | @MichelleSimoneM
Facebook page: https://www.facebook.com/mentorsonthemic
Website: www.michellesimonemiller.com and www.mentorsonthemic.com
YouTube: https://www.youtube.com/user/24mmichelle

Stay tuned to the end of the episode for a clip of the Creative Risk podcast. If you like this episode, check out my episode on their podcast: Apple Podcasts | Spotify

--- Support this podcast: https://podcasters.spotify.com/pod/show/michelle-miller4/support

Cybercrime Magazine Podcast
BEHAVE: A Human Risk Podcast. Teresa Zielinski, Global CISO at GE Vernova. Sponsored By CybSafe.

Nov 20, 2024 · 19:15


Teresa Zielinski, CISSP, is the Global CISO at GE Vernova. In this episode, she joins Oz Alashe, founder and CEO at CybSafe, and host Paul John Spaulding to discuss security awareness training and human risk management, including where large organizations are in the shift, how the risk landscape has evolved, and more. BEHAVE: A Human Risk Podcast is brought to you by CybSafe, developers of the Human Risk Management Platform. Learn more at https://cybsafe.com

For Humanity: An AI Safety Podcast
Episode #49: “Go To Jail To Stop AI” For Humanity: An AI Risk Podcast

Oct 14, 2024 · 77:08


In Episode #49, host John Sherman talks with Sam Kirchner and Remmelt Ellen, co-founders of Stop AI. Stop AI is a new AI risk protest organization, coming at it with different tactics and goals than Pause AI.

LEARN MORE–AND JOIN STOP AI: www.stopai.info
Please Donate Here To Help Promote For Humanity: https://www.paypal.com/paypalme/forhumanitypodcast
EMAIL JOHN: forhumanitypodcast@gmail.com

This podcast is not journalism. But it's not opinion either. This is a long-form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

RESOURCES:
SUBSCRIBE TO LIRON SHAPIRA'S DOOM DEBATES on YOUTUBE!! https://www.youtube.com/@DoomDebates
Join the Pause AI Weekly Discord, Thursdays at 2pm EST: https://discord.com/invite/pVMWjddaW7
Max Winga's "A Stark Warning About Extinction": https://youtu.be/kDcPW5WtD58?si=i6IRy82xZ2PUOp22
For Humanity Theme Music by Josef Ebner. YouTube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg | Website: https://josef.pictures
BUY STEPHEN HANSON'S BEAUTIFUL AI RISK BOOK!!! https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
22 Word Statement from Center for AI Safety: Statement on AI Risk | CAIS https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes https://twitter.com/AISafetyMemes

For Humanity: An AI Safety Podcast
AI Safety's Limiting Origins: For Humanity, An AI Risk Podcast, Episode #48 Trailer

Sep 30, 2024 · 7:40


In Episode #48 Trailer, host John Sherman talks with Pause AI US Founder Holly Elmore about the limiting origins of the AI safety movement. Polls show 60-80% of the public are opposed to building artificial superintelligence. So why is the movement to stop it still so small? The roots of the AI safety movement have a lot to do with it. Holly and John explore the present-day issues created by the movement's origins.

Let's build community! Live For Humanity Zoom Community Meeting, Thursdays at 8:30pm EST (explanation during the full show!)
USE THIS LINK: https://storyfarm.zoom.us/j/88987072403

LEARN HOW TO HELP RAISE AI RISK AWARENESS IN YOUR COMMUNITY HERE: https://pauseai.info/local-organizing
Please Donate Here To Help Promote For Humanity: https://www.paypal.com/paypalme/forhumanitypodcast
EMAIL JOHN: forhumanitypodcast@gmail.com

This podcast is not journalism. But it's not opinion either. This is a long-form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

RESOURCES:
JOIN THE FIGHT, help Pause AI!!!! https://pauseai.info/
SUBSCRIBE TO LIRON SHAPIRA'S DOOM DEBATES on YOUTUBE!! https://www.youtube.com/@DoomDebates
Join the Pause AI Weekly Discord, Thursdays at 2pm EST: https://discord.com/invite/pVMWjddaW7
Max Winga's "A Stark Warning About Extinction": https://youtu.be/kDcPW5WtD58?si=i6IRy82xZ2PUOp22
For Humanity Theme Music by Josef Ebner. YouTube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg | Website: https://josef.pictures
BUY STEPHEN HANSON'S BEAUTIFUL AI RISK BOOK!!! https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
22 Word Statement from Center for AI Safety: Statement on AI Risk | CAIS https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes https://twitter.com/AISafetyMemes

For Humanity: An AI Safety Podcast
Episode #47 Trailer : “Can AI Be Controlled?“ For Humanity: An AI Risk Podcast

Sep 25, 2024 · 4:35


In Episode #47 Trailer, host John Sherman talks with Buck Shlegeris, CEO of Redwood Research, a non-profit company working on technical AI risk challenges. The discussion includes Buck's thoughts on the new OpenAI o1-preview model, but centers on two questions: is there a way to control AI models before alignment is achieved (if it can be), and how would the system that's supposed to save the world actually work if an AI lab found a model scheming?

Check out these links to Buck's writing on these topics:
https://redwoodresearch.substack.com/p/the-case-for-ensuring-that-powerful
https://redwoodresearch.substack.com/p/would-catching-your-ais-trying-to

Senate Hearing: https://www.judiciary.senate.gov/committee-activity/hearings/oversight-of-ai-insiders-perspectives
Harry Mack's YouTube Channel: https://www.youtube.com/channel/UC59ZRYCHev_IqjUhremZ8Tg

LEARN HOW TO HELP RAISE AI RISK AWARENESS IN YOUR COMMUNITY HERE: https://pauseai.info/local-organizing
Please Donate Here To Help Promote For Humanity: https://www.paypal.com/paypalme/forhumanitypodcast
EMAIL JOHN: forhumanitypodcast@gmail.com

This podcast is not journalism. But it's not opinion either. This is a long-form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

RESOURCES:
JOIN THE FIGHT, help Pause AI!!!! https://pauseai.info/
SUBSCRIBE TO LIRON SHAPIRA'S DOOM DEBATES on YOUTUBE!! https://www.youtube.com/@DoomDebates
Join the Pause AI Weekly Discord, Thursdays at 2pm EST: https://discord.com/invite/pVMWjddaW7
Max Winga's "A Stark Warning About Extinction": https://youtu.be/kDcPW5WtD58?si=i6IRy82xZ2PUOp22
For Humanity Theme Music by Josef Ebner. YouTube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg | Website: https://josef.pictures
BUY STEPHEN HANSON'S BEAUTIFUL AI RISK BOOK!!! https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
22 Word Statement from Center for AI Safety: Statement on AI Risk | CAIS https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes https://twitter.com/AISafetyMemes

For Humanity: An AI Safety Podcast
Episode #46: “Is AI Humanity's Worthy Successor?“ For Humanity: An AI Risk Podcast

Sep 18, 2024 · 77:26


In Episode #46, host John Sherman talks with Daniel Faggella, Founder and Head of Research at Emerj Artificial Intelligence Research. Dan has been speaking out about AI risk for a long time, but comes at it from a different perspective than many. Dan thinks we need to talk about how we can make AGI, and whatever comes after, become humanity's worthy successor.

More About Daniel Faggella: https://danfaggella.com/

LEARN HOW TO HELP RAISE AI RISK AWARENESS IN YOUR COMMUNITY HERE: https://pauseai.info/local-organizing
Please Donate Here To Help Promote For Humanity: https://www.paypal.com/paypalme/forhumanitypodcast
EMAIL JOHN: forhumanitypodcast@gmail.com

This podcast is not journalism. But it's not opinion either. This is a long-form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

RESOURCES:
JOIN THE FIGHT, help Pause AI!!!! https://pauseai.info/
SUBSCRIBE TO LIRON SHAPIRA'S DOOM DEBATES on YOUTUBE!! https://www.youtube.com/@DoomDebates
Join the Pause AI Weekly Discord, Thursdays at 2pm EST: https://discord.com/invite/pVMWjddaW7
Max Winga's "A Stark Warning About Extinction": https://youtu.be/kDcPW5WtD58?si=i6IRy82xZ2PUOp22
For Humanity Theme Music by Josef Ebner. YouTube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg | Website: https://josef.pictures
BUY STEPHEN HANSON'S BEAUTIFUL AI RISK BOOK!!! https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
22 Word Statement from Center for AI Safety: Statement on AI Risk | CAIS https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes https://twitter.com/AISafetyMemes

For Humanity: An AI Safety Podcast
Episode 46 Trailer: “Is AI Humanity's Worthy Successor?“ For Humanity: An AI Risk Podcast

Sep 16, 2024 · 5:53


In Episode #46 Trailer, host John Sherman talks with Daniel Faggella, Founder and Head of Research at Emerj Artificial Intelligence Research. Dan has been speaking out about AI risk for a long time, but comes at it from a different perspective than many. Dan thinks we need to talk about how we can make AGI, and whatever comes after, become humanity's worthy successor.

More About Daniel Faggella: https://danfaggella.com/

LEARN HOW TO HELP RAISE AI RISK AWARENESS IN YOUR COMMUNITY HERE: https://pauseai.info/local-organizing
Please Donate Here To Help Promote For Humanity: https://www.paypal.com/paypalme/forhumanitypodcast
EMAIL JOHN: forhumanitypodcast@gmail.com

This podcast is not journalism. But it's not opinion either. This is a long-form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

RESOURCES:
JOIN THE FIGHT, help Pause AI!!!! https://pauseai.info/
SUBSCRIBE TO LIRON SHAPIRA'S DOOM DEBATES on YOUTUBE!! https://www.youtube.com/@DoomDebates
Join the Pause AI Weekly Discord, Thursdays at 2pm EST: https://discord.com/invite/pVMWjddaW7
Max Winga's "A Stark Warning About Extinction": https://youtu.be/kDcPW5WtD58?si=i6IRy82xZ2PUOp22
For Humanity Theme Music by Josef Ebner. YouTube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg | Website: https://josef.pictures
BUY STEPHEN HANSON'S BEAUTIFUL AI RISK BOOK!!! https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
22 Word Statement from Center for AI Safety: Statement on AI Risk | CAIS https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes https://twitter.com/AISafetyMemes

For Humanity: An AI Safety Podcast
Episode #45: “AI Risk And Child Psychology” For Humanity: An AI Risk Podcast

Sep 11, 2024 · 84:24


In Episode #45, host John Sherman talks with Dr. Mike Brooks, a psychologist focusing on kids and technology. The conversation is broad-ranging, touching on parenting, happiness and screens, the need for human unity, and the psychology of humans facing an ever more unknown future. (FULL INTERVIEW STARTS AT 00:05:28)

Mike's book: Tech Generation: Raising Balanced Kids in a Hyper-Connected World
An article from Mike in Psychology Today: The Happiness Illusion: Facing the Dark Side of Progress

Find Dr. Brooks on Social Media: LinkedIn | X/Twitter | YouTube | TikTok | Instagram | Facebook
https://www.linkedin.com/in/dr-mike-brooks-b1164120
https://x.com/drmikebrooks
https://www.youtube.com/@connectwithdrmikebrooks
https://www.tiktok.com/@connectwithdrmikebrooks?lang=en
https://www.instagram.com/drmikebrooks/?hl=en

Chris Gerrby's Twitter: https://x.com/ChrisGerrby

LEARN HOW TO HELP RAISE AI RISK AWARENESS IN YOUR COMMUNITY HERE: https://pauseai.info/local-organizing
Please Donate Here To Help Promote For Humanity: https://www.paypal.com/paypalme/forhumanitypodcast
EMAIL JOHN: forhumanitypodcast@gmail.com

This podcast is not journalism. But it's not opinion either. This is a long-form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

RESOURCES:
JOIN THE FIGHT, help Pause AI!!!! https://pauseai.info/
SUBSCRIBE TO LIRON SHAPIRA'S DOOM DEBATES on YOUTUBE!! https://www.youtube.com/@DoomDebates
Join the Pause AI Weekly Discord, Thursdays at 2pm EST: https://discord.com/invite/pVMWjddaW7
Max Winga's "A Stark Warning About Extinction": https://youtu.be/kDcPW5WtD58?si=i6IRy82xZ2PUOp22
For Humanity Theme Music by Josef Ebner. YouTube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg | Website: https://josef.pictures
BUY STEPHEN HANSON'S BEAUTIFUL AI RISK BOOK!!! https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
22 Word Statement from Center for AI Safety: Statement on AI Risk | CAIS https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes https://twitter.com/AISafetyMemes

For Humanity: An AI Safety Podcast
Episode #45 TRAILER: “AI Risk And Child Psychology” For Humanity: An AI Risk Podcast

Sep 9, 2024 · 6:42


In Episode #45 TRAILER, host John Sherman talks with Dr. Mike Brooks, a psychologist focusing on kids and technology. The conversation is broad-ranging, touching on parenting, happiness and screens, the need for human unity, and the psychology of humans facing an ever more unknown future.

Mike's book: Tech Generation: Raising Balanced Kids in a Hyper-Connected World
An article from Mike in Psychology Today: The Happiness Illusion: Facing the Dark Side of Progress

LEARN HOW TO HELP RAISE AI RISK AWARENESS IN YOUR COMMUNITY HERE: https://pauseai.info/local-organizing
Please Donate Here To Help Promote For Humanity: https://www.paypal.com/paypalme/forhumanitypodcast
EMAIL JOHN: forhumanitypodcast@gmail.com

This podcast is not journalism. But it's not opinion either. This is a long-form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

RESOURCES:
JOIN THE FIGHT, help Pause AI!!!! https://pauseai.info/
SUBSCRIBE TO LIRON SHAPIRA'S DOOM DEBATES on YOUTUBE!! https://www.youtube.com/@DoomDebates
Join the Pause AI Weekly Discord, Thursdays at 2pm EST: https://discord.com/invite/pVMWjddaW7
Max Winga's "A Stark Warning About Extinction": https://youtu.be/kDcPW5WtD58?si=i6IRy82xZ2PUOp22
For Humanity Theme Music by Josef Ebner. YouTube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg | Website: https://josef.pictures
BUY STEPHEN HANSON'S BEAUTIFUL AI RISK BOOK!!! https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
22 Word Statement from Center for AI Safety: Statement on AI Risk | CAIS https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes https://twitter.com/AISafetyMemes

For Humanity: An AI Safety Podcast
Episode #44: “AI P-Doom Debate: 50% vs 99.999%” For Humanity: An AI Risk Podcast

Sep 4, 2024 · 91:05


In Episode #44, host John Sherman brings back friends of For Humanity Dr. Roman Yampolskiy and Liron Shapira. Roman is an influential AI safety researcher, thought leader, and Associate Professor at the University of Louisville. Liron is a tech CEO and host of the excellent Doom Debates podcast. Roman famously holds a 99.999% p-doom; Liron has a nuanced 50%. John starts out at 75%, unrelated to their numbers. Where are you? Did Roman or Liron move you in their direction at all? Let us know in the comments!

LEARN HOW TO HELP RAISE AI RISK AWARENESS IN YOUR COMMUNITY HERE: https://pauseai.info/local-organizing
Please Donate Here To Help Promote For Humanity: https://www.paypal.com/paypalme/forhumanitypodcast
EMAIL JOHN: forhumanitypodcast@gmail.com

This podcast is not journalism. But it's not opinion either. This is a long-form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

RESOURCES:
BUY ROMAN'S NEW BOOK ON AMAZON: https://a.co/d/fPG6lOB
SUBSCRIBE TO LIRON SHAPIRA'S DOOM DEBATES on YOUTUBE!! https://www.youtube.com/@DoomDebates
JOIN THE FIGHT, help Pause AI!!!! https://pauseai.info/
Join the Pause AI Weekly Discord, Thursdays at 2pm EST: https://discord.com/invite/pVMWjddaW7
Max Winga's "A Stark Warning About Extinction": https://youtu.be/kDcPW5WtD58?si=i6IRy82xZ2PUOp22
For Humanity Theme Music by Josef Ebner. YouTube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg | Website: https://josef.pictures
BUY STEPHEN HANSON'S BEAUTIFUL AI RISK BOOK!!! https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
22 Word Statement from Center for AI Safety: Statement on AI Risk | CAIS https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes https://twitter.com/AISafetyMemes

For Humanity: An AI Safety Podcast
Episode #44 Trailer: “AI P-Doom Debate: 50% vs 99.999%” For Humanity: An AI Risk Podcast

Sep 2, 2024 · 7:58


In Episode #44 Trailer, host John Sherman brings back friends of For Humanity Dr. Roman Yampolskiy and Liron Shapira. Roman is an influential AI safety researcher, thought leader, and Associate Professor at the University of Louisville. Liron is a tech CEO and host of the excellent Doom Debates podcast. Roman famously holds a 99.999% p-doom; Liron has a nuanced 50%. John starts out at 75%, unrelated to their numbers. Where are you? Did Roman or Liron move you in their direction at all? Watch the full episode and let us know in the comments.

LEARN HOW TO HELP RAISE AI RISK AWARENESS IN YOUR COMMUNITY HERE: https://pauseai.info/local-organizing
Please Donate Here To Help Promote For Humanity: https://www.paypal.com/paypalme/forhumanitypodcast
EMAIL JOHN: forhumanitypodcast@gmail.com

This podcast is not journalism. But it's not opinion either. This is a long-form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

RESOURCES:
BUY ROMAN'S NEW BOOK ON AMAZON: https://a.co/d/fPG6lOB
SUBSCRIBE TO LIRON SHAPIRA'S DOOM DEBATES on YOUTUBE!! https://www.youtube.com/@DoomDebates
JOIN THE FIGHT, help Pause AI!!!! https://pauseai.info/
Join the Pause AI Weekly Discord, Thursdays at 2pm EST: https://discord.com/invite/pVMWjddaW7
Max Winga's "A Stark Warning About Extinction": https://youtu.be/kDcPW5WtD58?si=i6IRy82xZ2PUOp22
For Humanity Theme Music by Josef Ebner. YouTube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg | Website: https://josef.pictures
BUY STEPHEN HANSON'S BEAUTIFUL AI RISK BOOK!!! https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
22 Word Statement from Center for AI Safety: Statement on AI Risk | CAIS https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes https://twitter.com/AISafetyMemes

For Humanity: An AI Safety Podcast
Episode #43: “So what exactly is the good case for AI?” For Humanity: An AI Risk Podcast

Sep 2, 2024 · 76:06


In Episode #43, host John Sherman talks with DevOps engineer Aubrey Blackburn about the vague, elusive case the big AI companies and accelerationists make for a good AI future.

LEARN HOW TO HELP RAISE AI RISK AWARENESS IN YOUR COMMUNITY HERE: https://pauseai.info/local-organizing
Please Donate Here To Help Promote For Humanity: https://www.paypal.com/paypalme/forhumanitypodcast
EMAIL JOHN: forhumanitypodcast@gmail.com

This podcast is not journalism. But it's not opinion either. This is a long-form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

RESOURCES:
JOIN THE FIGHT, help Pause AI!!!! https://pauseai.info/
Join the Pause AI Weekly Discord, Thursdays at 2pm EST: https://discord.com/invite/pVMWjddaW7
Max Winga's "A Stark Warning About Extinction": https://youtu.be/kDcPW5WtD58?si=i6IRy82xZ2PUOp22
For Humanity Theme Music by Josef Ebner. YouTube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg | Website: https://josef.pictures
SUBSCRIBE TO LIRON SHAPIRA'S DOOM DEBATES on YOUTUBE!! https://www.youtube.com/@DoomDebates
BUY STEPHEN HANSON'S BEAUTIFUL AI RISK BOOK!!! https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
22 Word Statement from Center for AI Safety: Statement on AI Risk | CAIS https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes https://twitter.com/AISafetyMemes

For Humanity: An AI Safety Podcast
Episode #43 TRAILER: “So what exactly is the good case for AI?” For Humanity: An AI Risk Podcast

Aug 26, 2024 · 7:34


In Episode #43 TRAILER, host John Sherman talks with DevOps engineer Aubrey Blackburn about the vague, elusive case the big AI companies and accelerationists make for a good AI future.

Please Donate Here To Help Promote For Humanity: https://www.paypal.com/paypalme/forhumanitypodcast
EMAIL JOHN: forhumanitypodcast@gmail.com

This podcast is not journalism. But it's not opinion either. This is a long-form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

RESOURCES:
JOIN THE FIGHT, help Pause AI!!!! https://pauseai.info/
Join the Pause AI Weekly Discord, Thursdays at 2pm EST: https://discord.com/invite/pVMWjddaW7
Max Winga's "A Stark Warning About Extinction": https://youtu.be/kDcPW5WtD58?si=i6IRy82xZ2PUOp22
For Humanity Theme Music by Josef Ebner. YouTube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg | Website: https://josef.pictures
SUBSCRIBE TO LIRON SHAPIRA'S DOOM DEBATES on YOUTUBE!! https://www.youtube.com/@DoomDebates
BUY STEPHEN HANSON'S BEAUTIFUL AI RISK BOOK!!! https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
22 Word Statement from Center for AI Safety: Statement on AI Risk | CAIS https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes https://twitter.com/AISafetyMemes

Cybercrime Magazine Podcast
BEHAVE: A Human Risk Podcast. Susan Koski, CISO at PNC. Sponsored By CybSafe.

Cybercrime Magazine Podcast

Play Episode Listen Later Aug 21, 2024 19:42


Susan Koski is the Chief Information Security Officer (CISO) at PNC. In this episode, she joins Oz Alashe, founder and CEO at CybSafe, and host Scott Schober to discuss human risk management and the importance of security awareness training. BEHAVE: A Human Risk Podcast is brought to you by CybSafe, developers of the Human Risk Management Platform. Learn more at https://cybsafe.com

For Humanity: An AI Safety Podcast
Episode #42: “Actors vs. AI” For Humanity: An AI Risk Podcast

For Humanity: An AI Safety Podcast

Play Episode Listen Later Aug 21, 2024 83:19


In Episode #42, host John Sherman talks with actor Erik Passoja about AI's impact on Hollywood, the fight to protect people's digital identities, and the vibes in LA about existential risk. Please Donate Here To Help Promote For Humanity https://www.paypal.com/paypalme/forhumanitypodcast EMAIL JOHN: forhumanitypodcast@gmail.com This podcast is not journalism. But it's not opinion either. This is a long-form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth. For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, perhaps in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity. RESOURCES: JOIN THE FIGHT, help Pause AI!!!! Join the Pause AI Weekly Discord, Thursdays at 2pm EST: https://discord.com/invite/pVMWjddaW7 Max Winga's “A Stark Warning About Extinction” https://youtu.be/kDcPW5WtD58?si=i6IRy82xZ2PUOp22 For Humanity Theme Music by Josef Ebner Youtube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg Website: https://josef.pictures SUBSCRIBE TO LIRON SHAPIRA'S DOOM DEBATES on YOUTUBE!! https://www.youtube.com/@DoomDebates BUY STEPHEN HANSON'S BEAUTIFUL AI RISK BOOK!!! https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom 22 Word Statement from Center for AI Safety: Statement on AI Risk | CAIS https://www.safe.ai/work/statement-on-ai-risk Best Account on Twitter: AI Notkilleveryoneism Memes https://twitter.com/AISafetyMemes

For Humanity: An AI Safety Podcast
Episode #42 TRAILER: “Actors vs. AI” For Humanity: An AI Risk Podcast

For Humanity: An AI Safety Podcast

Play Episode Listen Later Aug 19, 2024 3:11


In Episode #42 TRAILER, host John Sherman talks with actor Erik Passoja about AI's impact on Hollywood, the fight to protect people's digital identities, and the vibes in LA about existential risk. Please Donate Here To Help Promote For Humanity https://www.paypal.com/paypalme/forhumanitypodcast EMAIL JOHN: forhumanitypodcast@gmail.com This podcast is not journalism. But it's not opinion either. This is a long-form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth. For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, perhaps in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity. RESOURCES: JOIN THE FIGHT, help Pause AI!!!! Join the Pause AI Weekly Discord, Thursdays at 2pm EST: https://discord.com/invite/pVMWjddaW7 Max Winga's “A Stark Warning About Extinction” https://youtu.be/kDcPW5WtD58?si=i6IRy82xZ2PUOp22 For Humanity Theme Music by Josef Ebner Youtube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg Website: https://josef.pictures SUBSCRIBE TO LIRON SHAPIRA'S DOOM DEBATES on YOUTUBE!! https://www.youtube.com/@DoomDebates BUY STEPHEN HANSON'S BEAUTIFUL AI RISK BOOK!!! https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom 22 Word Statement from Center for AI Safety: Statement on AI Risk | CAIS https://www.safe.ai/work/statement-on-ai-risk Best Account on Twitter: AI Notkilleveryoneism Memes https://twitter.com/AISafetyMemes

For Humanity: An AI Safety Podcast
Episode #41 “David Brooks: Dead Wrong on AI” For Humanity: An AI Risk Podcast

For Humanity: An AI Safety Podcast

Play Episode Listen Later Aug 14, 2024 48:39


In Episode #41, host John Sherman begins with a personal message to David Brooks of the New York Times. Brooks wrote an article titled “Many People Fear AI: They Shouldn't,” and in full candor, it pissed John off quite a bit. During this episode, John and Doom Debates host Liron Shapira go line by line through David Brooks's 7/31/24 piece in the New York Times. Please Donate Here To Help Promote For Humanity https://www.paypal.com/paypalme/forhumanitypodcast EMAIL JOHN: forhumanitypodcast@gmail.com This podcast is not journalism. But it's not opinion either. This is a long-form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth. For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, perhaps in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity. RESOURCES: Max Winga's “A Stark Warning About Extinction” https://youtu.be/kDcPW5WtD58?si=i6IRy82xZ2PUOp22 For Humanity Theme Music by Josef Ebner Youtube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg Website: https://josef.pictures SUBSCRIBE TO LIRON SHAPIRA'S DOOM DEBATES on YOUTUBE!! https://www.youtube.com/@DoomDebates BUY STEPHEN HANSON'S BEAUTIFUL AI RISK BOOK!!! https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom JOIN THE FIGHT, help Pause AI!!!! Join the Pause AI Weekly Discord, Thursdays at 2pm EST: https://discord.com/invite/pVMWjddaW7 22 Word Statement from Center for AI Safety: Statement on AI Risk | CAIS https://www.safe.ai/work/statement-on-ai-risk Best Account on Twitter: AI Notkilleveryoneism Memes https://twitter.com/AISafetyMemes

For Humanity: An AI Safety Podcast
Episode #41 TRAILER “David Brooks: Dead Wrong on AI” For Humanity: An AI Risk Podcast

For Humanity: An AI Safety Podcast

Play Episode Listen Later Aug 12, 2024 9:18


In Episode #41 TRAILER, host John Sherman previews the full show with a personal message to David Brooks of the New York Times. Brooks wrote something that, in full candor, pissed John off quite a bit. During the full episode, John and Doom Debates host Liron Shapira go line by line through David Brooks's 7/31/24 piece in the New York Times. Please Donate Here To Help Promote For Humanity https://www.paypal.com/paypalme/forhumanitypodcast EMAIL JOHN: forhumanitypodcast@gmail.com This podcast is not journalism. But it's not opinion either. This is a long-form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth. For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, perhaps in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity. RESOURCES: Max Winga's “A Stark Warning About Extinction” https://youtu.be/kDcPW5WtD58?si=i6IRy82xZ2PUOp22 For Humanity Theme Music by Josef Ebner Youtube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg Website: https://josef.pictures SUBSCRIBE TO LIRON SHAPIRA'S DOOM DEBATES on YOUTUBE!! https://www.youtube.com/@DoomDebates BUY STEPHEN HANSON'S BEAUTIFUL AI RISK BOOK!!! https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom JOIN THE FIGHT, help Pause AI!!!! Join the Pause AI Weekly Discord, Thursdays at 2pm EST: https://discord.com/invite/pVMWjddaW7 22 Word Statement from Center for AI Safety: Statement on AI Risk | CAIS https://www.safe.ai/work/statement-on-ai-risk Best Account on Twitter: AI Notkilleveryoneism Memes https://twitter.com/AISafetyMemes

For Humanity: An AI Safety Podcast
Episode #40 “Surviving Doom” For Humanity: An AI Risk Podcast

For Humanity: An AI Safety Podcast

Play Episode Listen Later Aug 7, 2024 90:53


In Episode #40, host John Sherman talks with James Norris, CEO of Upgradable and longtime AI safety proponent. James has been concerned about AI x-risk for 26 years. He now lives in Bali and has become an expert in prepping for a very different world after a warning shot or other major AI-related disaster, and he's helping others do the same. James shares his powerful insight, long-held awareness, and expertise in helping others find a way to survive and rebuild after a post-AGI disaster warning shot. Please Donate Here To Help Promote For Humanity https://www.paypal.com/paypalme/forhumanitypodcast EMAIL JOHN: forhumanitypodcast@gmail.com This podcast is not journalism. But it's not opinion either. This is a long-form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth. For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, perhaps in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity. RESOURCES: Max Winga's “A Stark Warning About Extinction” https://youtu.be/kDcPW5WtD58?si=i6IRy82xZ2PUOp22 For Humanity Theme Music by Josef Ebner Youtube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg Website: https://josef.pictures SUBSCRIBE TO LIRON SHAPIRA'S DOOM DEBATES on YOUTUBE!! https://www.youtube.com/@DoomDebates BUY STEPHEN HANSON'S BEAUTIFUL AI RISK BOOK!!! https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom JOIN THE FIGHT, help Pause AI!!!! Join the Pause AI Weekly Discord, Thursdays at 2pm EST: https://discord.com/invite/pVMWjddaW7 22 Word Statement from Center for AI Safety: Statement on AI Risk | CAIS https://www.safe.ai/work/statement-on-ai-risk Best Account on Twitter: AI Notkilleveryoneism Memes https://twitter.com/AISafetyMemes

For Humanity: An AI Safety Podcast
Episode #40 TRAILER “Surviving Doom” For Humanity: An AI Risk Podcast

For Humanity: An AI Safety Podcast

Play Episode Listen Later Aug 5, 2024 6:17


In Episode #40 TRAILER, host John Sherman talks with James Norris, CEO of Upgradable and longtime AI safety proponent. James has been concerned about AI x-risk for 26 years. He now lives in Bali and has become an expert in prepping for a very different world after a warning shot or other major AI-related disaster, and he's helping others do the same. Please Donate Here To Help Promote For Humanity https://www.paypal.com/paypalme/forhumanitypodcast EMAIL JOHN: forhumanitypodcast@gmail.com This podcast is not journalism. But it's not opinion either. This is a long-form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth. For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, perhaps in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity. RESOURCES: Max Winga's “A Stark Warning About Extinction” https://youtu.be/kDcPW5WtD58?si=i6IRy82xZ2PUOp22 For Humanity Theme Music by Josef Ebner Youtube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg Website: https://josef.pictures SUBSCRIBE TO LIRON SHAPIRA'S DOOM DEBATES on YOUTUBE!! https://www.youtube.com/@DoomDebates BUY STEPHEN HANSON'S BEAUTIFUL AI RISK BOOK!!! https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom JOIN THE FIGHT, help Pause AI!!!! Join the Pause AI Weekly Discord, Thursdays at 2pm EST: https://discord.com/invite/pVMWjddaW7 22 Word Statement from Center for AI Safety: Statement on AI Risk | CAIS https://www.safe.ai/work/statement-on-ai-risk Best Account on Twitter: AI Notkilleveryoneism Memes https://twitter.com/AISafetyMemes
Timestamps:
Prepping Perspectives (00:00:00) Discussion on how to characterize preparedness efforts, ranging from common sense to doomsday prepping.
Personal Experience in Emergency Management (00:00:06) Speaker shares his background in emergency management and the Red Cross, reflecting on past preparation efforts.
Vision of AGI and Societal Collapse (00:00:58) Exploration of potential outcomes of AGI development and societal disruptions, including chaos and extinction.
Geopolitical Safety in the Philippines (00:02:14) Consideration of living in the Philippines as a safer option during global conflicts and crises.
Self-Reliance and Supply Chain Concerns (00:03:15) Importance of self-reliance and being off-grid to mitigate risks from supply chain breakdowns.
Escaping Potential Threats (00:04:11) Discussion on the plausibility of escaping threats posed by advanced AI and the implications of being tracked.
Nuclear Threats and Personal Safety (00:05:34) Speculation on the potential for nuclear conflict while maintaining a sense of safety in the Philippines.

Compliance Perspectives
Sam Logan on Human Trafficking and Modern Slavery Risk [Podcast]

Compliance Perspectives

Play Episode Listen Later Jul 2, 2024 14:52


By Adam Turteltaub. As the risk of human trafficking and modern slavery rises on the radar, compliance teams need to start their risk assessment by looking at the map, says Sam Logan, CEO and founder of Evidencity. The number of jurisdictions with laws in this area is increasing. In addition, some countries carry far greater risk than others, with long histories of exploitation. Remember, though, that there is no such thing as a safe geography. A janitorial service in the US was found to be using child labor, and an Italian luxury goods maker's contractor is alleged to have subcontracted with a business using Chinese laborers illegally in Italy. The key lesson from these cases: look closely at your suppliers to better understand where and how they do business. Be sure to review them not just when beginning a relationship but on an ongoing basis. Take a risk-based approach, focusing your efforts where the likelihood of modern slavery and human trafficking is greater. Finally, don't forget about your customers. No organization wants to see its products used by forced or child labor.

Cybercrime Magazine Podcast
BEHAVE: A Human Risk Podcast. Oritse J. Uku, BISO at Northwestern Mutual. Sponsored By CybSafe.

Cybercrime Magazine Podcast

Play Episode Listen Later Jun 20, 2024 20:22


Oritse J. Uku is the Business Information Security Officer (BISO) for IT Governance, Risk, and Compliance at Northwestern Mutual. In this episode, he joins Oz Alashe, founder and CEO at CybSafe, and host Heather Engel to discuss security awareness training and human risk management, particularly phishing simulation and what it can do for organizations. BEHAVE: A Human Risk Podcast is brought to you by CybSafe, developers of the Human Risk Management Platform. Learn more at https://cybsafe.com

Cybercrime Magazine Podcast
BEHAVE: A Human Risk Podcast. Adam Keown, CISO at Eastman & Oz Alashe. Sponsored By CybSafe.

Cybercrime Magazine Podcast

Play Episode Listen Later May 24, 2024 23:37


Adam Keown is the CISO at Eastman. In this episode, he joins Oz Alashe, founder and CEO at CybSafe, and host Scott Schober to discuss their shared background in law enforcement and how it prepared them for careers in cybersecurity, as well as the difference between security awareness training and human risk management, the future of the industry, and more. BEHAVE: A Human Risk Podcast is brought to you by CybSafe, developers of the Human Risk Management Platform. Learn more at https://cybsafe.com