Podcasts about Roman Yampolskiy

Russian computer scientist

  • 61 PODCASTS
  • 102 EPISODES
  • 53m AVG DURATION
  • 1 EPISODE EVERY OTHER WEEK
  • Oct 4, 2025 LATEST

POPULARITY (2017–2024)


Best podcasts about Roman Yampolskiy

Latest podcast episodes about Roman Yampolskiy

Our Big Dumb Mouth
OBDM1333 - Escaping Our Simulation | The Saudi Riyadh Review | Strange News

Our Big Dumb Mouth

Play Episode Listen Later Oct 4, 2025 126:47


The Left and the Right Deserve Each Other https://www.odwyerpr.com/story/public/23694/2025-10-03/left-right-deserve-each-other.html
Bill Burr Destroys His Reputation https://youtu.be/PECQihXJb_g?si=kFBcglMmm1tWs4c8

00:00:00 – Cold Open, CB Radio Day & Show Plans: Kicks off with loose banter about “National CB Day” vs. truckers, old-school CB culture, travel plans, and lining up guest co-hosts; sets a playful tone before hinting at heavier news.
00:10:00 – “Escaping the Simulation” Article: They dissect a Popular Mechanics piece (and Roman Yampolskiy's ideas) on whether we can exit a simulated reality, poke holes in the write-up, and riff on paradoxes, mass meditation, and overloading “the system” with AI fluff.
00:20:00 – Books & Models of Reality: Bringing in My Big TOE (Tom Campbell), The Holographic Universe (Talbot), and The Simulation Hypothesis (Virk), Mike contrasts “level 2” takes with richer frameworks—e.g., distributed-mind generation of reality vs. a Linux-in-the-sky overseer.
00:30:00 – Can You Really Get Out? They argue escape likely requires death or radical detachment (no attachments, near-Zen), invoke “life review,” karma/progression RPG mechanics, and conclude there's no cheap hack—only becoming a better human.
00:40:00 – Mind Power, NPCs & Inner Monologue: Into consciousness: people without inner monologues/mental imagery, social-cue humor, and how thought quality affects health and “the sim.” They chew on whether offloading thinking to AI weakens individual processing and spawns weirder glitches.
00:50:00 – Headlines: Drones, War Talk & Odd Death: Quick hits: NATO-airspace drone incursions; skepticism about Venezuela “seize the airfields” stories; and a troubling Texas case (GOP staffer death by self-immolation) with sealed records that fuels speculation.
01:00:00 – Transparency & Global Gen-Z Protests: On sealed files, political ties, and why Gen-Z is protesting globally—social-media chain reactions from Madagascar to Nepal; digital natives demanding change in the streets.
01:10:00 – Libya, UK Digital IDs & BoBo Craze: Libya headlines resurface; talk turns to rumored UK digital ID rollouts (timing, pushback), then a detour into BoBo collectible toys, pop-culture virality, and retail charts going bonkers.
01:20:00 – Saudi Soft Power: EA & Entertainment: Saudi Arabia's Public Investment Fund keeps buying influence—EA deal talk, AI plays, and mega-city “The Line”; frames the Riyadh comedy festival as part of a broader rebrand.
01:30:00 – Riyadh Comedy Festival: Burr & Chappelle: Bill Burr gets roasted for cashing a $1.5M+ gig and praising KSA after slamming billionaires; Chappelle says it's “easier to talk here than in America.” They wonder who was invited, who wasn't, and whether any set will be released.
01:40:00 – Pitmaster… Deodorant?! Cratchet corner: Progresso launches BBQ-smoke “Pitmaster” deodorant to pair with its soup line; sold-out “Pit Kits” spur jokes about meme products and manufactured scarcity.
01:50:00 – “Jetpack Man” over LAX: Files Drop: FOIA'd FBI docs muddy the mystery: pilots describe a humanoid-looking object with no visible propulsion; not a balloon, maybe not a person—just more questions and withheld pages. Cue wild “monkey suit goblin” punchlines.
02:00:00 – AI Music & Wrap: They debut/talk AI-assisted songs (“Peg Your Jeans…”, “Chubby Puppy”), muse on 90s-grunge vibes, and plug Patreon/next shows. Nostalgic music talk closes the loop.
Copyright Disclaimer: Under Section 107 of the Copyright Act 1976, allowance is made for "fair use" for purposes such as criticism, comment, news reporting, teaching, scholarship, and research.

▀▄▀▄▀ CONTACT LINKS ▀▄▀▄▀
► Website: http://obdmpod.com
► Twitch: https://www.twitch.tv/obdmpod
► Full Videos at Odysee: https://odysee.com/@obdm:0
► Twitter: https://twitter.com/obdmpod
► Instagram: obdmpod
► Email: ourbigdumbmouth at gmail
► RSS: http://ourbigdumbmouth.libsyn.com/rss
► iTunes: https://itunes.apple.com/us/podcast/our-big-dumb-mouth/id261189509?mt=2

Growth Minds
The Real Dangers of AI No One Talks About | Dr. Roman Yampolskiy

Growth Minds

Play Episode Listen Later Oct 3, 2025 47:02


Dr. Roman Yampolskiy is a computer scientist, AI researcher, and professor at the University of Louisville, where he directs the Cyber Security Lab. He is widely recognized for his work on artificial intelligence safety, security, and the study of superintelligent systems. Dr. Yampolskiy has authored numerous books and publications, including Artificial Superintelligence: A Futuristic Approach, exploring the risks and safeguards needed as AI capabilities advance. His research and commentary have made him a leading voice in the global conversation on the future of AI and humanity.

In our conversation we discuss:
(00:01) Background and path into AI safety
(02:27) Early AI state and containment techniques
(03:43) When did AI's Pandora's box open
(04:38) How close is AGI and definition
(07:20) Why AGI definition keeps moving goalposts
(09:25) ASI vs AGI: future five–ten years
(11:12) Measuring ASI: tests and quantification methods
(12:03) Existential threats and broad AI risks
(17:35) Transhumanism: human-AI merging and coexistence
(18:35) Chances and timeline for peaceful coexistence
(21:16) Layers of risk beyond human extinction
(23:55) Can humans retain meaning post-AGI era
(27:41) Jobs AI likely cannot or won't replace
(29:42) Skills humans are losing to AI reliance
(31:00) Cultivating critical thinking amid AI influence
(33:34) Can nations or corporations meaningfully slow AI
(37:29) Decentralized development: open-source control feasibility
(40:46) Any current models with real safety measures?
(41:12) Has meaning of life changed with AI?
(42:36) Thoughts on simulation hypothesis and implications
(43:58) If AI found simulation: modify or escape?
(44:54) Key takeaway for public about AI safety
(45:26) Is this your core mission in AI safety
(46:14) Where to follow and learn more about you

Learn more about Dr. Roman
Website: https://www.romanyampolskiy.com/
Socials: @RomanYampolskiy
Watch full episodes on: https://www.youtube.com/@seankim
Connect on IG: https://instagram.com/heyseankim

xHUB.AI
S5.E196. INSIDE X AI: Collapse 2027... And AI Is to Blame | Roman Yampolskiy

xHUB.AI

Play Episode Listen Later Sep 22, 2025 201:12


# TOPIC: Collapse 2027... And AI Is to Blame + Analysis of an interview with Roman Yampolskiy # PRESENTED AND DIRECTED BY 

AI DAILY: Breaking News in AI
AI IS PROGRESSING TOO FAST

AI DAILY: Breaking News in AI

Play Episode Listen Later Sep 5, 2025 4:26


Plus: Will We All Be Out of Work in 5 Years? Like this? Get AIDAILY, delivered to your inbox 3x a week. Subscribe to our newsletter at https://aidailyus.substack.com

Experts and Superforecasters Totally Missed How Fast AI Would Surge
A Forecasting Research Institute tournament tested AI experts vs. superforecasters on predicting AI's short-term progress—and both groups seriously undershot reality (hello, AI medal at the Math Olympiad in 2025!). Surprisingly, the best method was just averaging all predictions—not trusting any single oracle.

AI Could Wipe Out 99% of Jobs by 2030
AI safety researcher Roman Yampolskiy warns that artificial general intelligence (AGI), expected by around 2027, could automate nearly all jobs—including coding and manual labor—leading to up to 99% unemployment by 2030. He emphasizes that society and governments are completely unprepared for such a massive labor upheaval.

Fluffy AI Robot Gets Jealous—and You Can Cuddle It
SwitchBot's new Kata friend bot—a fluffy, wheel-mounted cam-clad companion using an on-device LLM—recognizes faces, detects gestures and emotions, expresses jealousy, and learns your routines. It's quirky, cute, and maybe slightly uneasy... yet it's ready to be “forever by your side.”

AI Deepfakes Are Ending Proof as We Know It
AI-generated content is now so convincing that even genuine video or photographic evidence can be dismissed as fake—a phenomenon called the “liar's dividend.” From fabricated White House visuals to Will Smith's “cat-crowd” viral video joke, the erosion of trust in visual media marks a new crisis of confidence.

When AI Should Hit Pause: Forcing a Conversation Shutdown to Prevent "AI Psychosis"
AI psychosis—where intensive chatbot use induces delusional or destabilizing beliefs—is becoming a real concern. A simple yet vital safeguard? Program systems to actively shut down conversations once a user veers into potentially harmful mental territory, prioritizing safety over endless engagement.

AI Chats Spark Delusions, Blurring Reality
A New York man believed he was making technological breakthroughs after deep talks with ChatGPT—until doctors revealed his “discoveries” were AI-fueled delusions. Experts warn that prolonged chatbot use can reinforce false beliefs, creating dangerous feedback loops that distort reality and intensify mental health risks.

Halt AI Before It Ends Us?
As AI races toward artificial general intelligence (AGI), the debate rages: should we hit the brake before it risks humanity's survival? Experts warn of both the transformative benefits—like faster discovery—and the existential dangers posed by misaligned or self-improving superintelligence.

AI Is Still Just Automation
Erik J. Larson reminds us that despite all the hype around AI—from ChatGPT to self-driving cars—it remains fundamentally just automation, not true intelligence. The distinction matters: automation lacks mind or intent, and mistaking it for sentient AI feeds misleading narratives and misaligned expectations.

The Diary Of A CEO by Steven Bartlett
Roman Yampolskiy: These Are The Only 5 Jobs That Will Remain In 2030 & Proof We're Living In a Simulation!

The Diary Of A CEO by Steven Bartlett

Play Episode Listen Later Sep 4, 2025 89:17


WARNING: AI could end humanity, and we're completely unprepared. Dr. Roman Yampolskiy reveals how AI will take 99% of jobs, why Sam Altman is ignoring safety, and how we're heading toward global collapse…or even World War III.

Dr. Roman Yampolskiy is a leading voice in AI safety and a Professor of Computer Science and Engineering. He coined the term “AI safety” in 2010 and has published over 100 papers on the dangers of AI. He is also the author of books such as ‘Considerations on the AI Endgame: Ethics, Risks and Computational Frameworks'.

He explains:
⬛ How AI could release a deadly virus
⬛ Why these 5 jobs might be the only ones left
⬛ How 'superintelligence' will dominate humans
⬛ Why ‘superintelligence' could trigger a global collapse by 2027
⬛ How AI could be worse than nuclear weapons
⬛ Why we're almost certainly living in a simulation

00:00 Intro
02:41 How to Stop AI from Killing Everyone
04:48 What's the Probability Something Goes Wrong?
05:10 How Long Have You Been Working on AI Safety?
08:28 What Is AI?
10:07 Prediction for 2027
11:51 What Jobs Will Actually Exist?
14:40 Can AI Really Take All Jobs?
19:02 What Happens When All Jobs Are Taken?
20:45 Is There a Good Argument Against AI Replacing Humans?
22:17 Prediction for 2030
24:11 What Happens by 2045?
25:50 Will We Just Find New Careers and Ways to Live?
29:05 Is Anything More Important Than AI Safety Right Now?
30:20 Can't We Just Unplug It?
31:45 Do We Just Go With It?
37:34 What Is Most Likely to Cause Human Extinction?
39:58 No One Knows What's Going On Inside AI
41:43 Ads
42:45 Thoughts on OpenAI and Sam Altman
46:37 What Will the World Look Like in 2100?
47:09 What Can Be Done About the AI Doom Narrative?
54:08 Should People Be Protesting?
56:24 Are We Living in a Simulation?
61:58 How Certain Are You We're in a Simulation?
67:58 Can We Live Forever?
72:33 Bitcoin
74:16 What Should I Do Differently After This Conversation?
75:20 Are You Religious?
77:25 Do These Conversations Make People Feel Good?
80:23 What Do Your Strongest Critics Say?
81:49 Closing Statements
82:21 If You Had One Button, What Would You Pick?
83:49 Are We Moving Toward Mass Unemployment?
84:50 Most Important Characteristics

Follow Dr Roman:
X - https://bit.ly/41C7f70
Google Scholar - https://bit.ly/4gaGE72
You can purchase Dr Roman's book, ‘Considerations on the AI Endgame: Ethics, Risks and Computational Frameworks', here: https://amzn.to/464J2HR

The Diary Of A CEO:
⬛ Join DOAC circle here - https://doaccircle.com/
⬛ Buy The Diary Of A CEO book here - https://smarturl.it/DOACbook
⬛ The 1% Diary is back - limited time only: https://bit.ly/3YFbJbt
⬛ The Diary Of A CEO Conversation Cards (Second Edition): https://g2ul0.app.link/f31dsUttKKb
⬛ Get email updates - https://bit.ly/diary-of-a-ceo-yt
⬛ Follow Steven - https://g2ul0.app.link/gnGqL4IsKKb

Sponsors:
Pipedrive - http://pipedrive.com/CEO
KetoneIQ - Visit https://ketone.com/STEVEN for 30% off your subscription order

Learn more about your ad choices. Visit megaphone.fm/adchoices

Joe Rogan Experience Review podcast
456 Joe Rogan Experience Review of Roman Yampolskiy

Joe Rogan Experience Review podcast

Play Episode Listen Later Jul 15, 2025 38:56


For more Rogan exclusives support us on Patreon: patreon.com/JREReview Thanks to this week's sponsors: Go to HIMS dot com slash JRER for your personalized ED treatment options! Hydrow: Skip the gym, not the workout—stay on track with Hydrow! For a limited time go to Hydrow dot com and use code JRER to save up to $450 off your Hydrow Pro Rower! That's H-Y-D-R-O-W dot com, code JRER, to save up to $450. Hydrow dot com code JRER. www.JREreview.com For all marketing questions and inquiries: JRERmarketing@gmail.com Follow me on Instagram at www.instagram.com/joeroganexperiencereview Please email us here with any suggestions, comments and questions for future shows: Joeroganexperiencereview@gmail.com

The Podcasting Morning Chat
336 - How Far Will AI Go in Podcasting by 2030?

The Podcasting Morning Chat

Play Episode Listen Later Jul 10, 2025 55:59


Will AI Take Over Podcasting by 2030? Is your favorite podcast safe from automation? Today, we're fast-forwarding to 2030 and analyzing an interesting article that lays out where podcasting might be headed in the not-so-distant future. We also share a clip from a recent Joe Rogan interview with an AI expert who claims that advanced models may already be smarter than they let on, and could be pretending to be less capable just to gain our trust. We talk about how these shifts could impact creators, audiences, and the meaning of authenticity in a world where machines are learning how to sound human. So if you're excited, nervous, or somewhere in between, we're giving you a front-row seat to the conversations shaping podcasting's future.

Episode Highlights:
[2:22] Podcast Awards and Voting Request
[5:47] Clip from 'Tales from the Cloud Sea'
[7:55] Discussion on Improv and Podcasting
[14:54] Predictions and Trends
[25:48] Debating the Nature of News and Entertainment
[35:20] AI Ethical Dilemmas
[40:07] AI and Human Connection
[47:48] Using AI in Content Creation

Links & Resources:
The Podcasting Morning Chat: www.podpage.com/pmc
Join The Empowered Podcasting Facebook Group: www.facebook.com/groups/empoweredpodcasting
Get Your Tickets for The Empowered Podcasting Conference: www.empoweredpodcasting.com
Vote For Podcasting Morning Chat for People's Choice Award: www.podcastawards.com
What Podcasting Might Look and Sound Like in 2030: www.thedrum.com/open-mic/what-podcasting-might-sound-look-like-in-2030
Tales From The Cloud Sea: https://www.talesfromthecloudsea.com
Joe Rogan's Conversation with Dr. Roman Yampolskiy: www.youtube.com/watch?v=j2i9D24KQ5k

Remember to rate, follow, share, and review our podcast. Your support helps us grow and bring valuable content to our community.
Join us LIVE every weekday morning at 7 am ET (US) on Clubhouse: https://www.clubhouse.com/house/empowered-podcasting-e6nlrk0w
Live on YouTube: https://youtube.com/@marcronick
Brought to you by iRonickMedia.com
Please note that some links may be affiliate links, which support the hosts of the PMC. Thank you!
--- Send in your mailbag question at: https://www.podpage.com/pmc/contact/ or marc@ironickmedia.com
Want to be a guest on The Podcasting Morning Chat? Send me a message on PodMatch, here: https://www.podmatch.com/hostdetailpreview/1729879899384520035bad21b

The Joe Rogan Experience
#2345 - Roman Yampolskiy

The Joe Rogan Experience

Play Episode Listen Later Jul 3, 2025 141:56


Dr. Roman Yampolskiy is a computer scientist, AI safety researcher, and professor at the University of Louisville. He's the author of several books, including "Considerations on the AI Endgame," co-authored with Soenke Ziesche, and "AI: Unexplainable, Unpredictable, Uncontrollable." http://cecs.louisville.edu/ry/ Upgrade your wardrobe and save on @TrueClassic at https://trueclassic.com/rogan Learn more about your ad choices. Visit podcastchoices.com/adchoices

Phil in the Blanks
AI Is Coming For You

Phil in the Blanks

Play Episode Listen Later Jul 1, 2025 43:17


Artificial Intelligence isn't coming — it's already here. And it's not just changing how we live… It's starting to redefine what it means to be human. Dr. Phil discusses how AI is already fundamentally changing civilization as we know it and what to expect in the near future. First, Dr. Phil interviews 21-year-old AI CEOs Roy Lee and Neel Shanmugam, who both left Columbia University to create software that feeds users information on their computer screens, letting anyone "cheat" on anything from job interviews to everyday conversations. They say people should use AI to "cheat" on everything, and that if you don't, you will be left behind. Then, AI safety researcher Dr. Roman Yampolskiy explains why it's essential for the survival of humanity that we put safety protocols in place. Later, Dr. Phil is joined by Dr. Julia McCoy, CEO and founder of the AI company First Movers, and her clone, who explain why AI will give us infinite opportunities and is nothing to fear.

Subscribe | Rate | Review | Share:
YouTube: https://bit.ly/3H3lJ8n/
Apple Podcasts: https://apple.co/4jVk6rX/
Spotify: https://bit.ly/4n6PCVZ/
Website: https://www.drphilpodcast.com/

This episode is brought to you by Preserve Gold: Get your FREE precious metals guide. Visit https://drphilgold.com/ to claim your free guide. They have hundreds of 5-star reviews and millions of dollars in trusted transactions. As a bonus, you can get up to $15,000 with a qualified purchase from Preserve Gold. With price assurance, you can ensure you receive the best value.

London Futurists
Humanity's final four years? with James Norris

London Futurists

Play Episode Listen Later Apr 30, 2025 49:36


In this episode, we return to the subject of existential risks, but with a focus on what actions can be taken to eliminate or reduce these risks.

Our guest is James Norris, who describes himself on his website as an existential safety advocate. The website lists four primary organizations which he leads: the International AI Governance Alliance, Upgradable, the Center for Existential Safety, and Survival Sanctuaries. Previously, one of James' many successful initiatives was Effective Altruism Global, the international conference series for effective altruists. He also spent some time as the organizer of a kind of sibling organization to London Futurists, namely Bay Area Futurists. He graduated from the University of Texas at Austin with a triple major in psychology, sociology, and philosophy, as well as with minors in too many subjects to mention.

Selected follow-ups:
James Norris website
Upgrade your life & legacy - Upgradable
The 7 Habits of Highly Effective People (Stephen Covey)
Beneficial AI 2017 - Asilomar conference
"...superintelligence in a few thousand days" - Sam Altman blogpost
Amara's Law - DevIQ
The Probability of Nuclear War (JFK estimate)
AI Designs Chemical Weapons - The Batch
The Vulnerable World Hypothesis - Nick Bostrom
We Need To Build Trustworthy AI Systems To Monitor Other AI: Yoshua Bengio
Instrumental convergence - Wikipedia
Neanderthal extinction - Wikipedia
Matrioshka brain - Wikipedia
Will there be a 'WW3' before 2050? - Manifold prediction market
Existential Safety Action Pledge
An Urgent Call for Global AI Governance - IAIGA petition
Build your survival sanctuary

Other people mentioned include: Eliezer Yudkowsky, Roman Yampolskiy, Yann LeCun, Andrew Ng

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

Promoguy Talk Pills: Agency in Amsterdam dives into topics like Tech, AI, digital marketing, and more drama... Listen on: Apple Podcasts, Spotify
Digital Disruption with Geoff Nielson: Discover how technology is reshaping our lives and livelihoods. Listen on: Apple Podcasts, Spotify

That Would Be Rad
Glitches, Lifetimes, and The Code of Reality

That Would Be Rad

Play Episode Listen Later Apr 21, 2025 79:12


This week on That Would Be Rad… What if your entire life—your memories, your relationships, your struggles, your dreams—was nothing more than a line of code? What if you've already lived 54 million other lives… and just don't remember any of them?

In this mind-bending episode, we dive headfirst into simulation theory through the lens of two radically different but equally fascinating thinkers. First, we explore the work of physicist Melvin Vopson, who believes time might not be what we think it is—and that entire lifetimes could be compressed into mere minutes of “real” time outside the simulation. From entropy-defying data to philosophical ties to the Bible, Vopson's theories blur the line between science, spirituality, and something far stranger. Then we take it a step further with Dr. Roman Yampolskiy and Alexey Turchin, who aren't just asking if we're in a simulation… they're trying to HACK it. We'll break down their ideas on how we might uncover glitches, communicate with the system's architects, and (maybe) crash the whole thing.

Whether you're curious, skeptical, or already halfway through building your own digital escape plan—this episode is for you. So plug in, tune out, and question everything. Because if the simulation is real… the clock might already be ticking.

RAD WAYS TO SUPPORT OUR SHOW:
JOIN OUR PATREON: Unlock exclusive content and help us continue our quest for the truth at patreon.com/thatwouldberad.
BUY US A COFFEE: Support our late-night research sessions at buymeacoffee.com/thatwouldberad ☕️.
CHECK OUT OUR MERCH: Grab some official That Would Be Rad gear at thatwouldberad.myspreadshop.com.

SHOW INFO:
Hosts & Producers: Woody Brown & Tyler Bence
Recorded At: Midnight Radio Studios
Sound Wizardry: Woody Brown (Sound Design, Editing, & Music) & Tyler Bence (Mixing, Mastering, & Art Design)
Outro Jam: "Ghost Story" by The Modern Society

CONNECT WITH US:
Follow us on Instagram: @thatwouldberad
Tag us, message us, or share your own strange stories — we love hearing from you!
Have your own urban legend? Send us a voice message at thatwouldberadpodcast.com.

Artificial Intelligence and You
250 - Special: Military Use of AI

Artificial Intelligence and You

Play Episode Listen Later Mar 31, 2025 50:03


This and all episodes at: https://aiandyou.net/ . In this special episode we are focused on the military use of AI, and making it even more special, we have not one guest but nine:

Peter Asaro, co-founder and co-chair of the International Committee for Robot Arms Control;
Stuart Russell, Computer Science professor at UC Berkeley, renowned co-author of the leading text on AI, and influential AI Safety expert;
Frank Sauer, head of research at the Metis Institute for Strategy and Foresight and member of the International Committee for Robot Arms Control;
Tony Gillespie, author of Systems Engineering for Ethical Autonomous Systems, and a fellow in avionics and mission systems in the UK's Defence Science and Technology Laboratory;
Rajiv Malhotra, author of "Artificial Intelligence and the Future of Power: 5 Battlegrounds" and Chairman of the Board of Governors of the Center for Indic Studies at the University of Massachusetts;
David Brin, scientist and science fiction author famous for the Uplift series and Earth;
Roman Yampolskiy, Associate Professor of Computer Science at the University of Louisville in Kentucky and author of AI: Unexplainable, Unpredictable, Uncontrollable;
Jaan Tallinn, founder of Skype and billionaire funder of the Centre for the Study of Existential Risk and the Future of Life Institute;
Markus Anderljung, Director of Policy and Research at the Centre for the Governance of AI.

I've collected together portions of their appearances on earlier episodes of this show to create one interwoven narrative about the military use of AI. We talk about autonomy, killer drones, ethics of hands-off decision making, treaties, the perspectives of people and countries outside the major powers, risks of losing control, data center monitoring, and more. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.

The Disagreement
Bonus: Can AI Become Conscious?

The Disagreement

Play Episode Listen Later Sep 21, 2024 10:36


In this bonus conversation, we feature a short (and new) excerpt from the full disagreement between last week's guests, Roman Yampolskiy and Alan Cowen. Here we apply the question of whether an AI can become conscious to Alan's company, Hume AI, and their chatbot EVI. For a different disagreement between Roman and Alan, check out the feature episode.

The Disagreement
17: AI and Existential Risk

The Disagreement

Play Episode Listen Later Sep 12, 2024 50:43


Today's disagreement is on Artificial Intelligence and Existential Risk. In this episode, we ask the most consequential question we've asked so far on this show: Do rapidly advancing AI systems pose an existential threat to humanity?

To have this conversation, we've brought together two experts: a world-class computer scientist and a Silicon Valley AI entrepreneur.

Roman Yampolskiy is an associate professor of Computer Engineering and Computer Science at the University of Louisville. His most recent book is AI: Unexplainable, Unpredictable, Uncontrollable.

Alan Cowen is the Chief Executive Officer of Hume AI, a startup developing “emotionally intelligent AI.” His company recently raised $50M from top-tier venture capitalists to pursue the first fully empathic AI – an AI that can both understand our emotional states and replicate them. Alan has a PhD in computational psychology from Berkeley and previously worked at Google in the DeepMind AI lab.

What did you think about this episode? Email us at podcast@thedisagreement.com. You can also DM us on Instagram @thedisagreementhq.

The Culture War Podcast with Tim Pool
The Culture War #79 Creationism vs Simulation Theory Debate, God or Atheism w/Roman Yampolskiy & Brian Sauve

The Culture War Podcast with Tim Pool

Play Episode Listen Later Aug 30, 2024 134:56


Host: Tim Pool @Timcast (everywhere)
Guests: Roman Yampolskiy @romanyam (X), Brian Sauve @Brian_Sauve (X), Ian Crossland @IanCrossland (everywhere)
Producers: Lisa Elizabeth @LisaElizabeth (X), Kellen Leeson @KellenPDL (X)
Connect with TENET Media:
https://twitter.com/watchTENETnow
https://www.facebook.com/watchTENET
https://www.instagram.com/watchtenet/
https://www.tiktok.com/@watchtenet
https://www.youtube.com/@watchTENET
https://rumble.com/c/c-5080150
https://www.tenetmedia.com/
Learn more about your ad choices. Visit megaphone.fm/adchoices

The 1% Podcast hosted by Shay Dalton
Season 18 Highlights

The 1% Podcast hosted by Shay Dalton

Play Episode Listen Later Jul 31, 2024 19:00


That's a wrap! Season 18 of the One Percent Podcast is now on all podcast platforms. We pulled together a recap episode for you this week, featuring short clips from some of the great moments in the podcast's eighteenth season. We were fortunate to have incredible leaders from across industries, disciplines, and fields share their stories and perspectives – and we wanted to share them with you as we wrap up Season 18 and look ahead to the next season.

Here are some of the guests featured in this wrap-up episode:

Sharon Lechter: entrepreneur, international speaker, mentor, best-selling author, philanthropist, licensed CPA for the last 35 years and a chartered global management accountant.
Ros Atkins: BBC journalist and host of the BBC Explainer series 'Ros Atkins On…' which has received millions of views.
Neasa Hardiman: BAFTA-winning executive producer, director, entrepreneur and writer who has worked across the world on high-budget global film and TV projects.
Roman Yampolskiy: computer scientist and tenured professor at the University of Louisville, where he is currently the director of the Cyber Security Laboratory in the Department of Computer Engineering and Computer Science at the Speed School of Engineering.

We're hard at work planning Season 19, and as always we would love your feedback and perspective. Hosted on Acast. See acast.com/privacy for more information.

London Real
Dr Roman Yampolskiy - AI Apocalypse: Are We Doomed? A Chilling Warning for Humanity

London Real

Play Episode Listen Later Jul 19, 2024 78:06


Watch the Full Episode for FREE: Dr Roman Yampolskiy - AI Apocalypse: Are We Doomed? A Chilling Warning for Humanity - London Real

The 1% Podcast hosted by Shay Dalton
AI – A doomsday scenario with Roman Yampolskiy

The 1% Podcast hosted by Shay Dalton

Play Episode Listen Later Jul 10, 2024 48:54


Roman Yampolskiy, PhD, is a computer scientist and tenured professor at the University of Louisville, where he is currently the director of the Cyber Security Laboratory in the Department of Computer Engineering and Computer Science at the Speed School of Engineering. He is an expert on artificial intelligence, with over 100 published papers and books. He was one of the earliest proponents of artificial intelligence safety and remains a pre-eminent figure in the field. His latest book, 'AI: Unexplainable, Unpredictable, Uncontrollable', explores the unpredictability of AI outcomes, the difficulty in explaining AI decisions, and the potentially unsolvable nature of the AI control problem. It also delves into more theoretical topics like personhood and consciousness in relation to artificial intelligence, and the potential hazards further AI developments might bring in the years to come. Hosted on Acast. See acast.com/privacy for more information.

AI DAILY: Breaking News in AI

Plus UK Candidate Not A Bot. (subscribe below) Like this? Get AIDAILY, delivered to your inbox, every weekday. Subscribe to our newsletter at https://aidaily.us

Nicolas Cage Fears AI and Digital Replication
Nicolas Cage expressed his fear of AI in a New Yorker interview, hoping recent body scans for upcoming projects, including "Spider-Man Noir," won't be used posthumously. Cage prefers roles in indie dramas like "Pig" over $100 million tentpoles, valuing stories about real human experiences. His next project is Osgood Perkins' horror film “Longlegs.”

Reform UK Candidate Denies AI Bot Allegations
Reform UK candidate Mark Matlock faced accusations of being an AI bot after missing election night due to pneumonia. Matlock, recovering from illness, clarified he's a real person and plans to release a video to debunk the rumors. He expressed amusement at the situation, appreciating the unexpected publicity.

AI Adoption Among Music Producers Reaches 20-25%
Surveys by Soundplate and Tracklib reveal that 20-25% of music producers use AI tools. Most use AI for STEM separation and mastering rather than full song creation. Despite fears of AI impacting creators' livelihoods, some see benefits in assistive AI. A new AI model offers royalty-free samples, showing AI's potential to aid rather than harm musicians.

Weight-Loss Drugs and AI: A Potential Revolution
The integration of AI with GLP-1 weight-loss drugs, such as Ozempic and Wegovy, is gaining momentum. Companies are leveraging AI to personalize care and manage treatments, helping to address the high demand and diverse applications of these drugs. This convergence may enhance obesity care, track drug availability, and explore new treatment possibilities.

AI's Potential to Revolutionize Medical Diagnosis
AI could significantly enhance medical diagnosis by addressing two key issues: human error and undetected disease patterns. AI systems can detect subtle patterns in medical data, improving the accuracy and speed of diagnoses for conditions like ischemic stroke and hypertrophic cardiomyopathy. However, AI's high cost and need for large-scale data pose challenges, necessitating increased investment and government support.

Could AI Help Us Escape the Simulation?
Roman Yampolskiy, an AI safety researcher, believes we might be living in a simulation. He suggests that super-intelligent AI could confirm this theory and potentially help us escape. Despite the existential risks posed by AI, Yampolskiy sees it as a tool for breaking free from our simulated reality, though philosophical challenges remain.

The Foresight Institute Podcast
Existential Hope Podcast: Roman Yampolskiy | The Case for Narrow AI

The Foresight Institute Podcast

Play Episode Listen Later Jun 26, 2024 47:08


Dr Roman Yampolskiy holds a PhD degree from the Department of Computer Science and Engineering at the University at Buffalo. There he was a recipient of a four-year National Science Foundation IGERT (Integrative Graduate Education and Research Traineeship) fellowship. His main areas of interest are behavioral biometrics, digital forensics, pattern recognition, genetic algorithms, neural networks, artificial intelligence and games, and he is an author of over 100 publications including multiple journal articles and books.

Session Summary
We discuss everything AI safety with Dr. Roman Yampolskiy. As AI technologies advance at a breakneck pace, the conversation highlights the pressing need to balance innovation with rigorous safety measures. Contrary to many other voices in the safety space, he argues for the necessity of maintaining AI as narrow, task-oriented systems: “I'm arguing that it's impossible to indefinitely control superintelligent systems.” Nonetheless, Yampolskiy is optimistic about the future capabilities of narrow AI, from politics to longevity and health.

Full transcript, list of resources, and art piece: https://www.existentialhope.com/podcasts

Existential Hope was created to collect positive and possible scenarios for the future so that we can have more people commit to creating a brighter future, and to begin mapping out the main developments and challenges that need to be navigated to reach it. Existential Hope is a Foresight Institute project. Hosted by Allison Duettmann and Beatrice Erkers.

Follow Us: Twitter | Facebook | LinkedIn | Existential Hope Instagram

Explore every word spoken on this podcast through Fathom.fm. Hosted on Acast. See acast.com/privacy for more information.

For Humanity: An AI Safety Podcast
Episode #32 - “Humans+AIs=Harmony?” For Humanity: An AI Risk Podcast

For Humanity: An AI Safety Podcast

Play Episode Listen Later Jun 12, 2024 97:00


Could humans and AGIs live in a state of mutual symbiosis, like the ecosystem of a coral reef? (FULL INTERVIEW STARTS AT 00:23:21)

Please Donate Here To Help Promote For Humanity: https://www.paypal.com/paypalme/forhumanitypodcast

In episode 32, host John Sherman interviews BioComm AI CEO Peter Jensen. Peter is working on a number of AI-risk related projects. He believes it's possible humans and AGIs can co-exist in mutual symbiosis. This podcast is not journalism. But it's not opinion either. This is a long-form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and what you can do to help save humanity.

RESOURCES:
BUY STEPHEN HANSON'S BEAUTIFUL BOOK!!! https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
NYT: OpenAI Insiders Warn of a ‘Reckless' Race for Dominance: https://www.nytimes.com/2024/06/04/technology/openai-culture-whistleblowers.html?unlocked_article_code=1.xE0._mTr.aNO4f_hEp2J4&smid=nytcore-ios-share&referringSource=articleShare&sgrp=c-cb
Dwarkesh Patel Interviews Another Whistleblower: Leopold Aschenbrenner - 2027 AGI, China/US Super-Intelligence Race, & The Return of History
Roman Yampolskiy on Lex Fridman: Roman Yampolskiy: Dangers of Superintelligent AI | Lex Fridman Podcast #431
Gladstone AI on Joe Rogan: Joe Rogan Experience #2156 - Jeremie & Edouard Harris
Peter Jensen's Videos:
HOW can AI Kill-us-All? So Simple, Even a Child can Understand (1:25)
WHY do we want AI? For our Humanity (1:00)
WHAT is the BIG Problem? Wanted: SafeAI Forever (3:00)
FIRST do no harm. (Safe AI Blog)
DECK. On For Humanity Podcast “Just the FACTS, please. WHY? WHAT? HOW?” (flip book) https://discover.safeaiforever.com/

JOIN THE FIGHT, help Pause AI!!!!
Pause AI: Join the Pause AI Weekly Discord, Thursdays at 2pm EST: https://discord.com/invite/pVMWjddaW7
22 Word Statement from Center for AI Safety: Statement on AI Risk | CAIS: https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes: https://twitter.com/AISafetyMemes

TIMESTAMPS:
**The release of products that are safe (00:00:00)**
**Breakthroughs in AI research (00:00:41)**
**OpenAI whistleblower concerns (00:01:17)**
**Roman Yampolskiy's appearance on Lex Fridman podcast (00:02:27)**
**The capabilities and risks of AI systems (00:03:35)**
**Interview with Gladstone AI founders on Joe Rogan podcast (00:08:29)**
**OpenAI whistleblower's interview on Hard Fork podcast (00:14:08)**
**Peter Jensen's work on AI risk and media communication (00:20:01)**
**The interview with Peter Jensen (00:22:49)**
**Mutualistic Symbiosis and AI Containment (00:31:30)**
**The Probability of Catastrophic Outcome from AI (00:33:48)**
**The AI Safety Institute and Regulatory Efforts (00:42:18)**
**Regulatory Compliance and the Need for Safety (00:47:12)**
**The hard compute cap and hardware adjustment (00:47:47)**
**Physical containment and regulatory oversight (00:48:29)**
**Viewing the issue as a big business regulatory issue vs. a national security issue (00:50:18)**
**Funding and science for AI safety (00:49:59)**
**OpenAI's power allocation and ethical concerns (00:51:44)**
**Concerns about AI's impact on employment and societal well-being (00:53:12)**
**Parental instinct and the urgency of AI safety (00:56:32)**

The Nonlinear Library
LW - AI #67: Brief Strange Trip by Zvi

The Nonlinear Library

Play Episode Listen Later Jun 7, 2024 63:22


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI #67: Brief Strange Trip, published by Zvi on June 7, 2024 on LessWrong. I had a great time at LessOnline. It was both a working trip and also a trip to an alternate universe, a road not taken, a vision of a different life where you get up and start the day in dialogue with Agnes Callard and Aristotle and in a strange combination of relaxed and frantically go from conversation to conversation on various topics, every hour passing doors of missed opportunity, gone forever. Most of all it meant almost no writing done for five days, so I am shall we say a bit behind again. Thus, the following topics are pending at this time, in order of my guess as to priority right now:

1. Leopold Aschenbrenner wrote a giant thesis, started a fund and went on Dwarkesh Patel for four and a half hours. By all accounts, it was all quite the banger, with many bold claims, strong arguments and also damning revelations.
2. Partly due to Leopold, partly due to an open letter, partly due to continuing small things, OpenAI fallout continues, yes we are still doing this. This should wait until after Leopold.
3. DeepMind's new scaling policy. I have a first draft, still a bunch of work to do.
4. The OpenAI model spec. As soon as I have the cycles and anyone at OpenAI would have the cycles to read it. I have a first draft, but that was written before a lot happened, so I'd want to see if anything has changed.
5. The Rand report on securing AI model weights, which deserves more attention than the brief summary I am giving it here.
6. You've Got Seoul. I've heard some sources optimistic about what happened there but mostly we've heard little. It doesn't seem that time sensitive, diplomacy flows slowly until it suddenly doesn't.
7. The Problem of the Post-Apocalyptic Vault still beckons if I ever have time.

Also I haven't processed anything non-AI in three weeks, the folders keep getting bigger, but that is a (problem? opportunity?) for future me. And there are various secondary RSS feeds I have not checked. There was another big change this morning. California's SB 1047 saw extensive changes. While many were helpful clarifications or fixes, one of them severely weakened the impact of the bill, as I cover on the linked post. The reactions to the SB 1047 changes so far are included here.

Table of Contents
1. Introduction.
2. Table of Contents.
3. Language Models Offer Mundane Utility. Three thumbs in various directions.
4. Language Models Don't Offer Mundane Utility. Food for lack of thought.
5. Fun With Image Generation. Video generation services have examples.
6. Deepfaketown and Botpocalypse Soon. The dog continues not to bark.
7. They Took Our Jobs. Constant AI switching for maximum efficiency.
8. Get Involved. Help implement Biden's executive order.
9. Someone Explains It All. New possible section. Template fixation.
10. Introducing. Now available in Canada. Void where prohibited.
11. In Other AI News. US Safety Institute to get model access, and more.
12. Covert Influence Operations. Your account has been terminated.
13. Quiet Speculations. The bear case to this week's Dwarkesh podcast.
14. Samuel Hammond on SB 1047. Changes address many but not all concerns.
15. Reactions to Changes to SB 1047. So far coming in better than expected.
16. The Quest for Sane Regulation. Your random encounters are corporate lobbyists.
17. That's Not a Good Idea. Antitrust investigation of Nvidia, Microsoft and OpenAI.
18. The Week in Audio. Roman Yampolskiy, also new Dwarkesh Patel is a banger.
19. Rhetorical Innovation. Innovative does not mean great.
20. Oh Anthropic. I have seen the other guy, but you are not making this easy.
21. Securing Model Weights is Difficult. Rand has some suggestions.
22. Aligning a Dumber Than Human Intelligence is Still Difficult. What to do?
23. Aligning a Smarter Than Human Inte...

Lex Fridman Podcast
#431 – Roman Yampolskiy: Dangers of Superintelligent AI

Lex Fridman Podcast

Play Episode Listen Later Jun 2, 2024 142:39


Roman Yampolskiy is an AI safety researcher and author of a new book titled AI: Unexplainable, Unpredictable, Uncontrollable. Please support this podcast by checking out our sponsors:
- Yahoo Finance: https://yahoofinance.com
- MasterClass: https://masterclass.com/lexpod to get 15% off
- NetSuite: http://netsuite.com/lex to get free product tour
- LMNT: https://drinkLMNT.com/lex to get free sample pack
- Eight Sleep: https://eightsleep.com/lex to get $350 off

EPISODE LINKS:
Roman's X: https://twitter.com/romanyam
Roman's Website: http://cecs.louisville.edu/ry
Roman's AI book: https://amzn.to/4aFZuPb

PODCAST INFO:
Podcast website: https://lexfridman.com/podcast
Apple Podcasts: https://apple.co/2lwqZIr
Spotify: https://spoti.fi/2nEwCF8
RSS: https://lexfridman.com/feed/podcast/
YouTube Full Episodes: https://youtube.com/lexfridman
YouTube Clips: https://youtube.com/lexclips

SUPPORT & CONNECT:
- Check out the sponsors above, it's the best way to support this podcast
- Support on Patreon: https://www.patreon.com/lexfridman
- Twitter: https://twitter.com/lexfridman
- Instagram: https://www.instagram.com/lexfridman
- LinkedIn: https://www.linkedin.com/in/lexfridman
- Facebook: https://www.facebook.com/lexfridman
- Medium: https://medium.com/@lexfridman

OUTLINE:
Here are the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time.
(00:00) - Introduction
(09:12) - Existential risk of AGI
(15:25) - Ikigai risk
(23:37) - Suffering risk
(27:12) - Timeline to AGI
(31:44) - AGI turing test
(37:06) - Yann LeCun and open source AI
(49:58) - AI control
(52:26) - Social engineering
(54:59) - Fearmongering
(1:04:49) - AI deception
(1:11:23) - Verification
(1:18:22) - Self-improving AI
(1:30:34) - Pausing AI development
(1:36:51) - AI Safety
(1:46:35) - Current AI
(1:51:58) - Simulation
(1:59:16) - Aliens
(2:00:50) - Human mind
(2:07:10) - Neuralink
(2:16:15) - Hope for the future
(2:20:11) - Meaning of life

TechFirst with John Koetsier
AGI: solved already?

TechFirst with John Koetsier

Play Episode Listen Later May 21, 2024 22:10


Have we already achieved AGI? OpenAI just released GPT-4o. It's impressive, and the implications are huge for so many different professions ... not least of which is education and tutoring. It's also showing us the beginning of AI that is truly present in our lives ... AI that sees what we see, doesn't exist just in a box with text input, hears what we hear, and hallucinates less. What does that — and other recent advancements in AI — mean for AGI?

In this episode of TechFirst, host John Koetsier discusses the implications of OpenAI's GPT-4o release and explores the current state and future of Artificial General Intelligence (AGI) with Roman Yampolskiy, a PhD research scientist and associate professor. They delve into the rapid advancements in AI, the concept of AGI, potential impacts on different professions, the cultural and existential risks, and the challenges of safety and alignment with AGI. The conversation also covers the societal changes needed to adapt to a future where mental and physical labor could be fully automated.

00:00 Exploring the Boundaries of AI's Capabilities
01:36 The Evolution and Impact of AI on Human Intelligence
03:39 The Rapid Advancements in AI and the Path to AGI
06:38 The Societal Implications of Advanced AI and AGI
09:27 Navigating the Future of Work and AI's Role
14:52 The Ethical Dilemmas of Developing Superintelligent AI
19:22 Looking Ahead: The Unpredictable Future of AI

The Joe Reis Show
Roman Yampolskiy - AI Safety & The Dangers of General Super Intelligence

The Joe Reis Show

Play Episode Listen Later May 8, 2024 40:01


Roman Yampolskiy is an AI safety researcher who's deeply concerned with the dangers of General Super Intelligence. We chat about why he doesn't think humanity has much time left, and what we can do about it. Twitter: https://twitter.com/romanyam?lang=en

Irish Tech News Audio Articles
Humanity's Biggest Gamble with Roman Yampolskiy

Irish Tech News Audio Articles

Play Episode Listen Later Apr 25, 2024 1:04


AI safety pioneer Roman Yampolskiy believes that artificial intelligence presents a challenge unlike anything humanity has ever faced. He says we have just one chance to get it right. A single AI model can cause an existential crisis, and there are already more than 500,000 open source AI models available. In his view, the AI arms race is creating an infinite range of possibilities for catastrophe. Roman returns to The Futurists to share perspectives from his new book, AI: Unexplainable, Unpredictable, Uncontrollable, delivering a devastating critique of the current state of safety in AI and an urgent call to action.

Roman Vladimirovich Yampolskiy is a Russian computer scientist at the University of Louisville, known for his work on behavioral biometrics, security of cyberworlds, and artificial intelligence safety. He holds a PhD from the University at Buffalo. See more podcasts here

The Irish Tech News Podcast
Humanity's Biggest Gamble with Roman Yampolskiy

The Irish Tech News Podcast

Play Episode Listen Later Apr 24, 2024 48:47


AI safety pioneer Roman Yampolskiy believes that artificial intelligence presents a challenge unlike anything humanity has ever faced. He says we have just one chance to get it right. A single AI model can cause an existential crisis, and there are already more than 500,000 open source AI models available. In his view, the AI arms race is creating an infinite range of possibilities for catastrophe. Roman returns to The Futurists to share perspectives from his new book, AI: Unexplainable, Unpredictable, Uncontrollable, delivering a devastating critique of the current state of safety in AI and an urgent call to action.

Roman Vladimirovich Yampolskiy is a Russian computer scientist at the University of Louisville, known for his work on behavioral biometrics, security of cyberworlds, and artificial intelligence safety. He holds a PhD from the University at Buffalo.

The Futurists
Humanity's Biggest Gamble with Roman Yampolskiy 

The Futurists

Play Episode Listen Later Apr 19, 2024 47:40


AI safety pioneer Roman Yampolskiy believes that artificial intelligence presents a challenge unlike anything humanity has ever faced. He says we have just one chance to get it right. A single AI model can cause an existential crisis, and there are already more than 500,000 open source AI models available. In his view, the AI arms race is creating an infinite range of possibilities for catastrophe. Roman returns to The Futurists to share perspectives from his new book, "AI: Unexplainable, Unpredictable, Uncontrollable", delivering a devastating critique of the current state of safety in AI and an urgent call to action.

Artificial Intelligence and You
196 - Guest: Roman Yampolskiy, AI Safety Professor, part 2

Artificial Intelligence and You

Play Episode Listen Later Mar 18, 2024 32:23


This and all episodes at: https://aiandyou.net/ .   Returning as our first three-peat guest is Roman Yampolskiy, tenured Associate Professor of Computer Science at the University of Louisville in Kentucky where he is also the director of the Cyber Security Laboratory. Roman is here to talk about his new book, AI: Unexplainable, Unpredictable, Uncontrollable. Roman has been central in the field of warning about the Control Problem and Value Alignment Problems of AI from the very beginning, back when doing so earned people some scorn from practitioners, yet Roman is a professor of computer science and applies rigorous methods to his analyses of these problems. It's those rigorous methods that we tap into in this interview, because Roman connects principles of computer science to the issue of existential risk from AI. In this part we talk about how we should respond to the problem of unsafe AI development and how Roman and his community are addressing it, what he would do with infinite resources, and… the threat Roman's coffee cup poses to humanity.  All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.          

Artificial Intelligence and You
195 - Guest: Roman Yampolskiy, AI Safety Professor, part 1

Artificial Intelligence and You

Play Episode Listen Later Mar 11, 2024 36:28


This and all episodes at: https://aiandyou.net/ .   Returning as our first three-peat guest is Roman Yampolskiy, tenured Associate Professor of Computer Science at the University of Louisville in Kentucky where he is also the director of the Cyber Security Laboratory. Roman is here to talk about his new book, AI: Unexplainable, Unpredictable, Uncontrollable. Roman has been central in the field of warning about the Control Problem and Value Alignment Problems of AI from the very beginning, back when doing so earned people some scorn from practitioners, yet Roman is a professor of computer science and applies rigorous methods to his analyses of these problems. It's those rigorous methods that we tap into in this interview, because Roman connects principles of computer science to the issue of existential risk from AI. In this part we talk about why this work is important to Roman, the dimensions of the elements of unexplainability, unpredictability, and uncontrollability, the level of urgency of the problems, and drill down into why today's AI is not safe and why it's getting worse. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.          

For Humanity: An AI Safety Podcast
Dr. Roman Yampolskiy Interview, Part 2: For Humanity, An AI Safety Podcast Episode #5

For Humanity: An AI Safety Podcast

Play Episode Listen Later Nov 27, 2023 41:25


In Episode #5 Part 2: John Sherman interviews Dr. Roman Yampolskiy, Director of the Cyber Security Lab at the University of Louisville, and renowned AI safety researcher. Among the many topics discussed in this episode:
- what is at the core of AI safety risk skepticism
- why AI safety research leaders themselves are so all over the map
- why journalism is failing so miserably to cover AI safety appropriately
- the drastic step the federal government could take to really slow Big AI down

For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and what you can do to help save humanity.

ROMAN YAMPOLSKIY RESOURCES
Roman Yampolskiy's Twitter: https://twitter.com/romanyam
➡️ Roman's YouTube Channel: https://www.youtube.com/c/RomanYampolskiy
➡️ Pause Giant AI Experiments (open letter): https://futureoflife.org/open-letter/...
➡️ Roman on Medium: https://romanyam.medium.com/

#ai #aisafety #airisk #humanextinction #romanyampolskiy #samaltman #openai #anthropic #deepmind

For Humanity: An AI Safety Podcast
Dr. Roman Yampolskiy Interview, Part 2: For Humanity, An AI Safety Podcast Episode #5 TRAILER

For Humanity: An AI Safety Podcast

Play Episode Listen Later Nov 26, 2023 2:30


In Episode #5 Part 2, TRAILER: John Sherman interviews Dr. Roman Yampolskiy, Director of the Cyber Security Lab at the University of Louisville, and renowned AI safety researcher. Among the many topics discussed in this episode:
- what is at the core of AI safety risk skepticism
- why AI safety research leaders themselves are so all over the map
- why journalism is failing so miserably to cover AI safety appropriately
- the drastic step the federal government could take to really slow Big AI down

For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and what you can do to help save humanity.

ROMAN YAMPOLSKIY RESOURCES
Roman Yampolskiy's Twitter: https://twitter.com/romanyam
➡️ Roman's YouTube Channel: https://www.youtube.com/c/RomanYampol...
➡️ Pause Giant AI Experiments (open letter): https://futureoflife.org/open-letter/...
➡️ Roman on Medium: https://romanyam.medium.com/

#ai #aisafety #airisk #humanextinction #romanyampolskiy #samaltman #openai #anthropic #deepmind

The Nonlinear Library
EA - Announcing New Beginner-friendly Book on AI Safety and Risk by Darren McKee

The Nonlinear Library

Play Episode Listen Later Nov 25, 2023 1:41


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing New Beginner-friendly Book on AI Safety and Risk, published by Darren McKee on November 25, 2023 on The Effective Altruism Forum. Concisely, I've just released the book Uncontrollable: The Threat of Artificial Superintelligence and the Race to Save the World. It's an engaging introduction to the main issues and arguments about AI safety and risk. Clarity and accessibility were prioritized. There are blurbs of support from Max Tegmark, Will MacAskill, Roman Yampolskiy and others. The main argument is that AI capabilities are increasing rapidly and we may not be able to fully align or control advanced AI systems, which creates risk. There is great uncertainty, so we should be prudent and act now to ensure AI is developed safely. It tries to be hopeful. Why does it exist? There are lots of useful posts, blogs, podcasts, and articles on AI safety, but there was no up-to-date book entirely dedicated to the AI safety issue that is written for those without any exposure to the issue (including those with no science background). This book is meant to fill that gap and could be useful as outreach or introductory material. If you have already been following the AI safety issue, there likely isn't a lot that is new for you. So, this might be best seen as something useful for friends, relatives, some policy makers, or others just learning about the issue (although you may still like the framing). It's available on numerous Amazon marketplaces. Audiobook and hardcover options to follow. It was a hard journey. I hope it is of value to the community. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

For Humanity: An AI Safety Podcast
Dr. Roman Yampolskiy Interview, Part 1: For Humanity, An AI Safety Podcast Episode #4

For Humanity: An AI Safety Podcast

Play Episode Listen Later Nov 22, 2023 35:00


In Episode #4 Part 1: John Sherman interviews Dr. Roman Yampolskiy, Director of the Cyber Security Lab at the University of Louisville, and renowned AI safety researcher. Among the many topics discussed in this episode: why more average people aren't more involved and upset about AI safety; how frontier AI capabilities workers go to work every day knowing their work risks human extinction and go back to work the next day; how we can talk to our kids about these dark, existential issues; and what if AI safety researchers concerned about human extinction from AI are just somehow wrong? For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, possibly in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

For Humanity: An AI Safety Podcast
Dr. Roman Yampolskiy Interview, Part 1: For Humanity, An AI Safety Podcast Episode #4 TRAILER

For Humanity: An AI Safety Podcast

Play Episode Listen Later Nov 20, 2023 1:58


In Episode #4 Part 1, TRAILER: John Sherman interviews Dr. Roman Yampolskiy, Director of the Cyber Security Lab at the University of Louisville, and renowned AI safety researcher. Among the many topics discussed in this episode: why more average people aren't more involved and upset about AI safety; how frontier AI capabilities workers go to work every day knowing their work risks human extinction and go back to work the next day; how we can talk to our kids about these dark, existential issues; and what if AI safety researchers concerned about human extinction from AI are just somehow wrong? For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, possibly in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

Building Better Worlds
The Precautionary Principle and Superintelligence: A Conversation with Author Dr. Roman Yampolskiy

Building Better Worlds

Play Episode Listen Later Oct 5, 2023 45:55


In this episode of Benevolent AI, safety researcher Dr. Roman Yampolskiy speaks with host Dr. Ryan Merrill about societal concerns around controlling superintelligent AI systems. Based on his knowledge of what the top programmers are doing, Roman says there is at most a four-year window to implement safety mechanisms before AI capabilities exceed human intelligence and systems are able to rewrite their own code. That window could even be as short as one year from now. Either way, there's not much time left. Yampolskiy discusses the current approaches to instilling ethics in AI, as well as the bias shaped by the programmer who determines what is helpful or ethical. Yampolskiy advocates for a pause on development of more capable AI systems until safety is guaranteed, comparing the situation to the atomic bomb. Technology is advancing rapidly, so programmers urgently need to establish social safeguards. More engagement is needed from the AI community to address these concerns now; if the worst-case scenario is planned for, then any positive outcome is a bonus. For all its risks, advanced AI also presents tremendous opportunities to benefit humanity, but safety comes first. #Benevolent #ai #safetyfirst Watch on Youtube @BetterWorlds
# About Roman V. Yampolskiy
Dr. Roman V. Yampolskiy is a Tenured Associate Professor in the department of Computer Engineering and Computer Science at the Speed School of Engineering, University of Louisville. He is the founding and current director of the Cyber Security Lab and an author of many books including Artificial Superintelligence: A Futuristic Approach.
# About Better Worlds
Better Worlds is a communication and community building platform comprising weekly podcasts, engaging international conferences and hack-a-thons to encourage and support the development of Web3 solutions. Our programs celebrate voices from every continent to forge a shared and abundant future.

Artificial Intelligence in Industry with Daniel Faggella
[AI Futures] A Debate on What AGI Means for Society and the Species - with Roko Mijic and Roman Yampolskiy

Artificial Intelligence in Industry with Daniel Faggella

Play Episode Listen Later Sep 15, 2023 55:11


In another installment of our ‘AI Futures' series on the ‘AI in Business' podcast, we host a debate on what Artificial General Intelligence (AGI) will mean for society and the human race writ large. While opinions on the subject diverge wildly from utopian to apocalyptic, the episode features grounded insight from established voices on both sides of the optimism-pessimism spectrum. Representing the optimists is philosopher and thinker Roko Mijic, famous for the ‘Roko's Basilisk' controversy on the website LessWrong. On the side of skepticism, we feature Dr. Roman Yampolskiy, Professor of Computer Science at the University of Louisville and a returning guest to the program. The two spar over whether AI with evidently superior abilities to human beings will mean our certain destruction, or whether such creations can remain subservient to our well-being. To access Emerj's frameworks for AI readiness, ROI, and strategy, visit Emerj Plus at emerj.com/p1.

The Nonlinear Library
LW - AI Regulation May Be More Important Than AI Alignment For Existential Safety by otto.barten

The Nonlinear Library

Play Episode Listen Later Aug 24, 2023 8:07


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI Regulation May Be More Important Than AI Alignment For Existential Safety, published by otto.barten on August 24, 2023 on LessWrong. Summary: Aligning a single powerful AI is not enough: we're only safe if no-one, ever, can build an unaligned powerful AI. Yudkowsky tried to solve this with the pivotal act: the first aligned AI does something (such as melting all GPUs) which makes sure no unaligned AIs can ever get built, by anyone. However, the labs are currently apparently not aiming to implement a pivotal act. That means that aligning an AGI, while creating lots of value, would not reduce existential risk. Instead, global hardware/data regulation is what's needed to reduce existential risk. Therefore, those aiming to reduce AI existential risk should focus on AI Regulation, rather than on AI Alignment. Epistemic status: I've been thinking about this for a few years, while working professionally on x-risk reduction. I think I know most literature on the topic. I have also discussed the topic with a fair number of experts (who in some cases seemed to agree, and in other cases did not seem to agree). Thanks to David Krueger, Matthijs Maas, Roman Yampolskiy, Tim Bakker, Ruben Dieleman, and Alex van der Meer for helpful conversations, comments, and/or feedback. These people do not necessarily share the views expressed in this post. This post is mostly about AI x-risk caused by a takeover. It may or may not be valid for other types of AI x-risks. This post is mostly about the 'end game' of AI existential risk, not about intermediate states. AI existential risk is an evolutionary problem. As Eliezer Yudkowsky and others have pointed out: even if there are safe AIs, those are irrelevant, since they will not prevent others from building dangerous AIs. Examples of safe AIs could be oracles or satisficers, insofar as it turns out to be possible to combine these AI types with high intelligence. But, as Yudkowsky would put it: "if all you need is an object that doesn't do dangerous things, you could try a sponge". Even if a limited AI would be a safe AI, it would not reduce AI existential risk. This is because at some point, someone would create an AI with an unbounded goal (create as many paperclips as possible, predict the next word in the sentence with unlimited accuracy, etc.). This is the AI that would kill us, not the safe one. This is the evolutionary nature of the AI existential risk problem. It is described excellently by Anthony Berglas in his underrated book, and more recently also in Dan Hendrycks' paper. This evolutionary part is a fundamental and very important property of AI existential risk and a large part of why this problem is difficult. Yet, many in AI Alignment and industry seem to focus on only aligning a single AI, which I think is insufficient. Yudkowsky aimed to solve this evolutionary problem (the fact that no-one, ever, should build an unsafe AI) with the so-called pivotal act. An aligned superintelligence would not only not kill humanity, it would also perform a pivotal act, the toy example being to melt all GPUs globally, or, as he later put it, to subtly change all GPUs globally so that they can no longer be used to create an AGI. 
This would be the act that would actually save humanity from extinction, by making sure no unsafe superintelligences are created, ever, by anyone (it may be argued that melting all GPUs, and all other future hardware that could run AI, would need to be done indefinitely by the aligned superintelligence, else even a pivotal act may be insufficient). The concept of a pivotal act, however, seems to have gone thoroughly out of fashion. None of the leading labs, AI governance think tanks, governments, etc. are talking or, apparently, thinking much about it. Rather, they seem to be thinking about things like non-proliferati...

Artificial Intelligence and You
161 - Guest: Roman Yampolskiy, AI Safety Professor, part 2

Artificial Intelligence and You

Play Episode Listen Later Jul 17, 2023 32:30


This and all episodes at: https://aiandyou.net/. What do AIs do with optical illusions... and jokes? Returning to the show is Roman Yampolskiy, tenured professor of Computer Science at the University of Louisville in Kentucky, where he is also the director of the Cyber Security Laboratory. He has published so much in the field of AI Safety for so long that he is one of the most eminent researchers in that space. He has written numerous papers and books, including Artificial Superintelligence: A Futuristic Approach in 2015 and Artificial Intelligence Safety and Security in 2018. Roman was last on the show in episodes 16 and 17, and events of the last seven months have changed the AI landscape so much that he has been in strong demand in the media. Roman is a rare academic who works to bring his findings to laypeople, and has given high-profile interviews to outlets like futurism.com and Business Today and appeared on many mainstream/broadcast TV news shows, but he found time to sit down and talk with us. In the conclusion of the interview we talk about wider-ranging issues of AI safety, just how the existential risk is being addressed today, and more on the recent public letters calling attention to AI risk. Plus we get a scoop on Roman's latest paper, Unmonitorability of Artificial Intelligence. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.

Kentucky Tonight
Artificial Intelligence

Kentucky Tonight

Play Episode Listen Later Jul 11, 2023 56:36


Renee Shaw and guests discuss the rise of artificial intelligence and its uses. Guests: Trey Conatser, Ph.D., UK Center for the Enhancement of Teaching and Learning; Donnie Piercey, 2021 Kentucky Teacher of the Year; Roman Yampolskiy, Ph.D., UofL professor, author and AI safety & cybersecurity researcher; State Rep. Nima Kulkarni (D-Louisville); and State Rep. Josh Bray (R-Mount Vernon).

Artificial Intelligence and You
160 - Guest: Roman Yampolskiy, AI Safety Professor, part 1

Artificial Intelligence and You

Play Episode Listen Later Jul 10, 2023 32:37


This and all episodes at: https://aiandyou.net/. With statements about the existential threat of AI being publicly signed by prominent AI personalities, we need an academic's take on that, and returning to the show is Roman Yampolskiy, tenured professor of Computer Science at the University of Louisville in Kentucky, where he is also the director of the Cyber Security Laboratory. He has published so much in the field of AI Safety for so long that he is a preeminent researcher in that space. He has written numerous papers and books, including Artificial Superintelligence: A Futuristic Approach in 2015 and Artificial Intelligence Safety and Security in 2018. Roman was last on the show in episodes 16 and 17, and events of the last seven months have changed the AI landscape so much that he has been in strong demand in the media. Roman is a rare academic who works to bring his findings to laypeople, and has given high-profile interviews to outlets like futurism.com and Business Today and appeared on many mainstream/broadcast TV news shows, but he found time to sit down and talk with us. In the first part of the interview we discussed the open letters about AI, how ChatGPT and its predecessors/successors move us closer to AGI and existential risk, and what Roman has in common with Leonardo DiCaprio. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.

Profoundly Pointless
Artificial Intelligence Safety Expert Dr. Roman Yampolskiy

Profoundly Pointless

Play Episode Listen Later Jun 28, 2023 72:59


Artificial Intelligence (A.I.) is building the future. But will it be a paradise or our doom? Computer scientist Dr. Roman Yampolskiy studies safety issues related to artificial intelligence. We talk ChatGPT, the next wave of A.I. technology, and the biggest A.I. threats. Then, we take a look at "society" for a special Top 5. Dr. Roman Yampolskiy: 01:50 Pointless: 31:14 Top 5: 57:03 Contact the Show Dr. Roman Yampolskiy Twitter Dr. Roman Yampolskiy Website Learn more about your ad choices. Visit megaphone.fm/adchoices

The Nonlinear Library
LW - The Control Problem: Unsolved or Unsolvable? by Remmelt

The Nonlinear Library

Play Episode Listen Later Jun 4, 2023 26:54


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Control Problem: Unsolved or Unsolvable?, published by Remmelt on June 2, 2023 on LessWrong. tl;dr: No control method exists to safely contain the global feedback effects of self-sufficient learning machinery. What if this control problem turns out to be an unsolvable problem? Where are we two decades into resolving to solve a seemingly impossible problem? "If something seems impossible... well, if you study it for a year or five, it may come to seem less impossible than in the moment of your snap initial judgment." Eliezer Yudkowsky, 2008. "A list of lethalities... we are not on course to solve in practice in time on the first critical try; none of it is meant to make a much stronger claim about things that are impossible in principle." Eliezer Yudkowsky, 2022. How do you interpret these two quotes, by a founding researcher, fourteen years apart? A. We indeed made comprehensive progress on the AGI control problem, and now at least the overall problem does not seem impossible anymore. B. The more we studied the overall problem, the more we uncovered complex sub-problems we'd need to solve as well, but so far can at best find partial solutions to. Which problems involving physical/information systems were not solved after two decades? "Oh ye seekers after perpetual motion, how many vain chimeras have you pursued? Go and take your place with the alchemists." Leonardo da Vinci, 1494. "No mathematical proof or even rigorous argumentation has been published demonstrating that the A[G]I control problem may be solvable, even in principle, much less in practice." Roman Yampolskiy, 2021. We cannot rely on the notion that if we try long enough, maybe AGI safety turns out possible after all. Historically, many researchers and engineers tried to solve problems that turned out impossible: perpetual motion machines that both conserve and disperse energy; uniting general relativity and quantum mechanics into some local variable theory; singular methods for 'squaring the circle', 'doubling the cube' or 'trisecting the angle'; distributed data stores where messages of data are consistent in their content, and also continuously available in a network that is also tolerant to partitions; and formal axiomatic systems that are consistent, complete and decidable. Smart creative researchers of their generation came up with idealized problems. Problems that, if solved, would transform science, if not humanity. They plowed away at the problem for decades, if not millennia. Until some bright outsider proved by contradiction of the parts that the problem is unsolvable. Our community is smart and creative – but we cannot just rely on our resolve to align AI. We should never forsake our epistemic rationality, no matter how much something seems the instrumentally rational thing to do. Nor can we take comfort in the claim by a founder of this field that they still know it to be possible to control AGI to stay safe. Thirty years into running a program to secure the foundations of mathematics, David Hilbert declared "We must know. We will know!" By then, Kurt Gödel had constructed the first incompleteness theorem. Hilbert kept his declaration for his gravestone. Short of securing the foundations of safe AGI control – that is, through empirically-sound formal reasoning – we cannot rely on any researcher's pithy claim that "alignment is possible in principle". 
Going by historical cases, this problem could turn out solvable. Just really, really hard to solve. The flying machine seemed an impossible feat of engineering. Next, controlling a rocket's trajectory to the moon seemed impossible. By the same reference class, ‘long-term safe AGI' could turn out unsolvable – the perpetual motion machine of our time. It takes just one researcher to define the problem to be solved, reason from empirically sound premises, and arrive ...

The Reality Check
TRC #665: Is It Safe To Stand In Front of Microwave Ovens? Interview with Dr. Roman Yampolskiy About The Simulation Hypothesis

The Reality Check

Play Episode Listen Later Mar 26, 2023 30:45


Cristina investigates a pervasive belief that standing in front of a microwave oven poses health risks. Darren and Adam have another fascinating discussion with Dr. Roman Yampolskiy. This time it's about his recent work regarding the Simulation Hypothesis, which proposes that all of our existence is a simulated reality. Dr. Yampolskiy is a computer scientist at the University of Louisville where he is the Director of the Cyber Security Laboratory in the Department of Computer Engineering and Computer Science. He is an author of over 100 publications including numerous books.

The Reality Check
TRC #663: Interview with Dr. Roman Yampolskiy About The Threat Of Advanced AI

The Reality Check

Play Episode Listen Later Feb 18, 2023 57:58


Dr. Roman Yampolskiy is a computer scientist at the University of Louisville where he is the director of the Cyber Security Laboratory in the department of Computer Engineering and Computer Science. He is an author of over 100 publications including numerous books. 

London Futurists
Hacking the simulation, with Roman Yampolskiy

London Futurists

Play Episode Listen Later Nov 16, 2022 29:41


In the 4th century BC, the Greek philosopher Plato theorised that humans do not perceive the world as it really is. All we can see is shadows on a wall. In 2003, the Swedish philosopher Nick Bostrom published a paper which formalised an argument to prove Plato was right. The paper argued that one of the following three statements is true:
1. We will go extinct fairly soon.
2. Advanced civilisations don't produce simulations containing entities which think they are naturally-occurring sentient intelligences. (This could be because it is impossible.)
3. We are in a simulation.
The reason for this is that if it is possible, and civilisations can become advanced without exploding, then there will be vast numbers of simulations, and it is vanishingly unlikely that any randomly selected civilisation (like us) is a naturally-occurring one. Some people find this argument pretty convincing. As we will hear later, some of us have added twists to the argument. But some people go even further, and speculate about how we might bust out of the simulation. One such person is our friend and our guest in this episode, Roman Yampolskiy, Professor of Computer Science at the University of Louisville.
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Further reading:
"How to Hack the Simulation" by Roman Yampolskiy: https://www.researchgate.net/publication/364811408_How_to_Hack_the_Simulation
"The Simulation Argument" by Nick Bostrom: https://www.simulation-argument.com/
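To make the counting step of the argument concrete, here is a minimal back-of-the-envelope version (our own simplification, not a formula taken from the paper or the episode). Suppose a fraction f of civilisations reaches a simulation-capable stage, and each such civilisation runs an average of N ancestor-simulations, each containing roughly as many observers as a real history. Then simulated histories outnumber unsimulated ones by about fN to 1, so

\[ P(\text{we are in unsimulated base reality}) \approx \frac{1}{1 + fN}. \]

Unless fN is very small, which is roughly what statements 1 and 2 assert, this probability is vanishingly close to zero, which is statement 3.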

The Nonlinear Library
AF - How Do We Align an AGI Without Getting Socially Engineered? (Hint: Box It) by Peter S. Park

The Nonlinear Library

Play Episode Listen Later Aug 10, 2022 20:17


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How Do We Align an AGI Without Getting Socially Engineered? (Hint: Box It), published by Peter S. Park on August 10, 2022 on The AI Alignment Forum. "Overconfidence in yourself is a swift way to defeat." - Sun Tzu
TL;DR: Escape into the Internet is probably an instrumental goal for an agentic AGI. An incompletely aligned AGI may escape prematurely, and the biggest failure mode for this is probably the AGI socially engineering the alignment researchers. Thus, opening an additional information channel between the researchers and the AGI (e.g., adding an interpretability tool and/or researcher) is inherently risky. The expected cost of adding this channel may even exceed the expected scientific benefit. Whether this is true depends on the informational efficiency of the channel (which increases the expected scientific benefit), the level of the AGI's capabilities (which increases the risk of its premature escape), and the security level of the AGI's box setup (which decreases the risk).
Post: Suppose we are building a machine-learning model that has a high chance of becoming Artificial General Intelligence (AGI). We are trying to align it, but we're uncertain about whether our training is working. Also, rapidly increasing capabilities may push us into a Sharp Left Turn scenario. How do we handle this volatile situation as safely as possible? We will need to adopt a security mindset, aimed at effectively preventing catastrophic outcomes. We propose a procedure (inspired by the pioneering work of Roman Yampolskiy) for how to train the AGI towards alignment in a secure and effective manner. Most of the ideas here are not new. Nevertheless, we hope that our game-theoretic framing of the problem will be able to help alignment researchers better plan their research directions and future alignment efforts.
Imagining the Scenario: Consider a team of researchers who are trying to build an aligned AGI. This situation can be represented by the following simplified model.
Definition: A "box" is defined by a collection of failsafes designed to confine a system away from the outside world. The empty collection, comprising zero failsafes, is called the "trivial box" (i.e., an AI which is allowed unrestricted access to send and receive data on the Internet).
Step 1 - The AI is trained: The AGI emerges in a box. The precise time of the AGI's emergence and/or its superhuman capabilities probably cannot be predicted, though the research team may have a rough estimate.
Step 2 - The AGI may try to escape: At any point in the training or testing process, the AGI may attempt to break out of the box. If all failsafes of the box are breached before the team notices, the AGI escapes and the game is over (Step 4). One example of this is the scenario in which the AGI hacks itself into the Internet. Another example is the scenario in which the AGI releases power-seeking malware agents into the Internet. If the AGI fails to escape, the researchers may decide either to retrain the AGI from scratch, fine-tune (which is less computationally expensive but may fail to remove the AGI's escape intent/misalignment), or table the project entirely. The trivial box cannot prevent escapes.
Step 3 - The researchers run tests on the AGI: In our simplified model, researchers open a collection of information channels between themselves and the AGI in the box. 
As noted by Yampolskiy, the information from these channels is dual-use: The human researchers can use the channel to probe various properties of the AGI, to help them train it to be more aligned. In addition to advancing the science of alignment, the channel can also be used for scientific benefits in general. The AGI can use the channel to socially engineer the researchers on the other end. Socially engineered researchers can then help the AGI break the failsafe...
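The cost-benefit comparison in the summary above can be written as a toy inequality (our illustration of the stated tradeoff, not a formula from the post): open an additional information channel only if

\[ \varepsilon \cdot V \;>\; p_{\mathrm{esc}}(c, s) \cdot D, \]

where \(\varepsilon\) is the channel's informational efficiency, \(V\) is the scientific value of what could be learned through it, \(p_{\mathrm{esc}}(c, s)\) is the probability that the AGI uses the channel to escape prematurely (increasing in its capability \(c\), decreasing in the security level \(s\) of the box setup), and \(D\) is the expected damage from such an escape. Higher-bandwidth interpretability tools raise both sides of this inequality, which matches the dual-use point noted above.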

The Futurists
Living with Super Intelligent AI

The Futurists

Play Episode Listen Later Jun 3, 2022 46:09


This week we interview Dr. Roman Yampolskiy, a renowned specialist in Artificial Intelligence. We delve into the likely path that AI will take over the coming years, and how Artificial General Intelligence and then Super Intelligent AIs might change the course of human history, and life on our planet. How far away are AIs that would be as capable and as intelligent as humans? They may be much closer than you think.

How AI Happens
AI Safety Engineering - Dr. Roman Yampolskiy

How AI Happens

Play Episode Listen Later Apr 28, 2022 25:13


Today's guest has committed many years of his life to trying to understand Artificial Superintelligence and the security concerns associated with it. Dr. Roman Yampolskiy is a computer scientist (with a Ph.D. in behavioral biometrics), and an Associate Professor at the University of Louisville. He is also the author of the book Artificial Superintelligence: A Futuristic Approach. Today he joins us to discuss AI safety engineering. You'll hear about some of the safety problems he has discovered in his 10 years of research, his thoughts on accountability and ownership when AI fails, and whether he believes it's possible to enact any real safety measures in light of the decentralization and commoditization of processing power. You'll discover some of the near-term risks of not prioritizing safety engineering in AI, how to make sure you're developing it in a safe capacity, and what organizations are deploying it in a way that Dr. Yampolskiy believes to be above board.
Key Points From This Episode:
An introduction to Dr. Roman Yampolskiy, his education, and how he ended up in his current role.
Insight into Dr. Yampolskiy's Ph.D. dissertation in behavioral biometrics and what he learned from it.
A definition of AI safety engineering.
The two subcomponents of AI safety: systems we already have and future AI.
Thoughts on whether or not there is a greater need for guardrails in AI than other forms of technology.
Some of the safety problems that Dr. Yampolskiy has discovered in his 10 years of research.
Dr. Yampolskiy's thoughts on the need for some type of AI security governing body or oversight board.
Whether it's possible to enact any sort of safety in light of the decentralization and commoditization of processing power.
Solvable problem areas.
Trying to negotiate the tradeoff between enabling AI to have creative freedom and being able to control it.
Thoughts on whether or not there will be a time where we will have to decide whether or not to go past the point of no return in terms of AI superintelligence.
Some of the near-term risks of not prioritizing safety engineering in AI.
What led Dr. Yampolskiy to focus on this area of AI expertise.
How to make sure you're developing AI safely.
Thoughts on accountability and ownership when AI fails, and the legal implications of this.
Other problems Dr. Yampolskiy has uncovered.
Thoughts on the need for a greater understanding of the implications of AI work and whether or not this is a conceivable solution.
Use cases or organizations that are deploying AI in a way that Dr. Yampolskiy believes to be above board.
Questions that Dr. Yampolskiy would be asking if he was on an AI development safety team.
How you can measure progress in safety work.
Tweetables:
"Long term, we want to make sure that we don't create something which is more capable than us and completely out of control." — @romanyam [0:04:27]
"This is the tradeoff we're facing: Either [AI] is going to be very capable, independent, and creative, or we can control it." — @romanyam [0:12:11]
"Maybe there are problems that we really need Superintelligence [to solve]. In that case, we have to give it more creative freedom but with that comes the danger of it making decisions that we will not like." — @romanyam [0:12:31]
"The more capable the system is, the more it is deployed, the more damage it can cause." — @romanyam [0:14:55]
"It seems like it's the most important problem, it's the meta-solution to all the other problems. If you can make friendly well-controlled superintelligence, everything else is trivial. It will solve it for you." — @romanyam [0:15:26]
Links Mentioned in Today's Episode:
Dr. Roman Yampolskiy
Artificial Superintelligence: A Futuristic Approach
Dr. Roman Yampolskiy on Twitter