Podcasts about Roman Yampolskiy

Russian computer scientist

  • 53 podcasts
  • 93 episodes
  • 50m average episode duration
  • 1 new episode per month
  • Latest episode: Apr 30, 2025

POPULARITY

[Popularity trend chart, 2017–2024]


Latest podcast episodes about Roman Yampolskiy

London Futurists
Humanity's final four years? with James Norris

Apr 30, 2025 · 49:36


In this episode, we return to the subject of existential risks, but with a focus on what actions can be taken to eliminate or reduce these risks. Our guest is James Norris, who describes himself on his website as an existential safety advocate. The website lists four primary organizations which he leads: the International AI Governance Alliance, Upgradable, the Center for Existential Safety, and Survival Sanctuaries. Previously, one of James' many successful initiatives was Effective Altruism Global, the international conference series for effective altruists. He also spent some time as the organizer of a kind of sibling organization to London Futurists, namely Bay Area Futurists. He graduated from the University of Texas at Austin with a triple major in psychology, sociology, and philosophy, as well as with minors in too many subjects to mention.

Selected follow-ups:
  • James Norris website
  • Upgrade your life & legacy - Upgradable
  • The 7 Habits of Highly Effective People (Stephen Covey)
  • Beneficial AI 2017 - Asilomar conference
  • "...superintelligence in a few thousand days" - Sam Altman blogpost
  • Amara's Law - DevIQ
  • The Probability of Nuclear War (JFK estimate)
  • AI Designs Chemical Weapons - The Batch
  • The Vulnerable World Hypothesis - Nick Bostrom
  • We Need To Build Trustworthy AI Systems To Monitor Other AI: Yoshua Bengio
  • Instrumental convergence - Wikipedia
  • Neanderthal extinction - Wikipedia
  • Matrioshka brain - Wikipedia
  • Will there be a 'WW3' before 2050? - Manifold prediction market
  • Existential Safety Action Pledge
  • An Urgent Call for Global AI Governance - IAIGA petition
  • Build your survival sanctuary

Other people mentioned include: Eliezer Yudkowsky, Roman Yampolskiy, Yann LeCun, Andrew Ng.

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

That Would Be Rad
Glitches, Lifetimes, and The Code of Reality

Apr 21, 2025 · 79:12


This week on That Would Be Rad… What if your entire life—your memories, your relationships, your struggles, your dreams—was nothing more than a line of code? What if you've already lived 54 million other lives… and just don't remember any of them?

In this mind-bending episode, we dive headfirst into simulation theory through the lens of two radically different but equally fascinating thinkers. First, we explore the work of physicist Melvin Vopson, who believes time might not be what we think it is—and that entire lifetimes could be compressed into mere minutes of “real” time outside the simulation. From entropy-defying data to philosophical ties to the Bible, Vopson's theories blur the line between science, spirituality, and something far stranger. Then we take it a step further with Dr. Roman Yampolskiy and Alexey Turchin, who aren't just asking if we're in a simulation… they're trying to HACK it. We'll break down their ideas on how we might uncover glitches, communicate with the system's architects, and (maybe) crash the whole thing.

Whether you're curious, skeptical, or already halfway through building your own digital escape plan—this episode is for you. So plug in, tune out, and question everything. Because if the simulation is real… the clock might already be ticking.

RAD WAYS TO SUPPORT OUR SHOW:
JOIN OUR PATREON: Unlock exclusive content and help us continue our quest for the truth at patreon.com/thatwouldberad.
BUY US A COFFEE: Support our late-night research sessions at buymeacoffee.com/thatwouldberad ☕️.
CHECK OUT OUR MERCH: Grab some official That Would Be Rad gear at thatwouldberad.myspreadshop.com.

SHOW INFO:
Hosts & Producers: Woody Brown & Tyler Bence
Recorded At: Midnight Radio Studios
Sound Wizardry: Woody Brown (Sound Design, Editing, & Music) & Tyler Bence (Mixing, Mastering, & Art Design)
Outro Jam: "Ghost Story" by The Modern Society

CONNECT WITH US:
Follow us on Instagram: @thatwouldberad
Tag us, message us, or share your own strange stories — we love hearing from you!
Have your own urban legend? Send us a voice message at thatwouldberadpodcast.com.

Artificial Intelligence and You
250 - Special: Military Use of AI

Mar 31, 2025 · 50:03


This and all episodes at: https://aiandyou.net/ . In this special episode we are focused on the military use of AI, and making it even more special, we have not one guest but nine:
  • Peter Asaro, co-founder and co-chair of the International Committee for Robot Arms Control
  • Stuart Russell, Computer Science professor at UC Berkeley, renowned co-author of the leading text on AI, and influential AI Safety expert
  • Frank Sauer, head of research at the Metis Institute for Strategy and Foresight and member of the International Committee for Robot Arms Control
  • Tony Gillespie, author of Systems Engineering for Ethical Autonomous Systems, and a fellow in avionics and mission systems in the UK's Defence Science and Technology Laboratory
  • Rajiv Malhotra, author of “Artificial Intelligence and the Future of Power: 5 Battlegrounds” and Chairman of the Board of Governors of the Center for Indic Studies at the University of Massachusetts
  • David Brin, scientist and science fiction author famous for the Uplift series and Earth
  • Roman Yampolskiy, Associate Professor of Computer Science at the University of Louisville in Kentucky and author of AI: Unexplainable, Unpredictable, Uncontrollable
  • Jaan Tallinn, founder of Skype and billionaire funder of the Centre for the Study of Existential Risk and the Future of Life Institute
  • Markus Anderljung, Director of Policy and Research at the Centre for the Governance of AI
I've collected together portions of their appearances on earlier episodes of this show to create one interwoven narrative about the military use of AI. We talk about autonomy, killer drones, ethics of hands-off decision making, treaties, the perspectives of people and countries outside the major powers, risks of losing control, data center monitoring, and more. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.

The Disagreement
Bonus: Can AI Become Conscious?

Sep 21, 2024 · 10:36


In this bonus conversation, we feature a short (and new) excerpt from the full disagreement between last week's guests, Roman Yampolskiy and Alan Cowen. Here we apply the question of whether an AI can become conscious to Alan's company, Hume AI, and their chatbot EVI. For a different disagreement between Roman and Alan, check out the feature episode.

The Disagreement
17: AI and Existential Risk

Sep 12, 2024 · 50:43


Today's disagreement is on Artificial Intelligence and Existential Risk. In this episode, we ask the most consequential question we've asked so far on this show: Do rapidly advancing AI systems pose an existential threat to humanity?To have this conversation, we've brought together two experts: a world class computer scientist and a Silicon Valley AI entrepreneur.Roman Yampolskiy is an associate professor of Computer Engineering and Computer Science at the University of Louisville. His most recent book is: AI: Unexplainable, Unpredictable, Uncontrollable.Alan Cowen is the Chief Executive Officer of Hume AI, a startup developing “emotionally intelligent AI.” His company recently raised $50M from top-tier venture capitalists to pursue the first fully empathic AI – an AI that can both understand our emotional states and replicate them. Alan has a PhD in computational psychology from Berkeley and previously worked at Google in the DeepMind AI lab.What did you think about this episode? Email us at podcast@thedisagreement.com. You can also DM us on Instagram @thedisagreementhq.

The Culture War Podcast with Tim Pool
The Culture War #79 Creationism vs Simulation Theory Debate, God or Atheism w/Roman Yampolskiy & Brian Sauve

Aug 30, 2024 · 134:56


Host: Tim Pool @Timcast (everywhere) Guests: Roman Yampolskiy @romanyam (X) Brian Sauve @Brian_Sauve (X) Ian Crossland @IanCrossland (everywhere) Producers:  Lisa Elizabeth @LisaElizabeth (X) Kellen Leeson @KellenPDL (X) Connect with TENET Media: https://twitter.com/watchTENETnow https://www.facebook.com/watchTENET https://www.instagram.com/watchtenet/ https://www.tiktok.com/@watchtenet https://www.youtube.com/@watchTENET https://rumble.com/c/c-5080150 https://www.tenetmedia.com/ Learn more about your ad choices. Visit megaphone.fm/adchoices

The 1% Podcast hosted by Shay Dalton
Season 18 Highlights

Jul 31, 2024 · 19:00


That's a wrap! Season 18 of the One Percent Podcast is now on all podcast platforms. We pulled together a recap episode for you this week, featuring short clips from some of the great moments in the podcast's eighteenth season. We were fortunate to have incredible leaders from across industries, disciplines, and fields share their stories and perspectives – and we wanted to share them with you as we wrap up Season 18 and look ahead to the next season. Here are some of the guests featured in this wrap-up episode:
  • Sharon Lechter: entrepreneur, international speaker, mentor, best-selling author, philanthropist, licensed CPA for the last 35 years and a chartered global management accountant.
  • Ros Atkins: BBC journalist and host of the BBC Explainer series ‘Ros Atkins On…', which has received millions of views.
  • Neasa Hardiman: BAFTA-winning executive producer, director, entrepreneur and writer who has worked across the world on high-budget global film and TV projects.
  • Roman Yampolskiy: computer scientist and tenured professor at the University of Louisville, where he is currently the director of the Cyber Security Laboratory in the Department of Computer Engineering and Computer Science at the Speed School of Engineering.
We're hard at work planning Season 19, and as always we would love your feedback and perspective. Hosted on Acast. See acast.com/privacy for more information.

London Real
Dr Roman Yampolskiy - AI Apocalypse: Are We Doomed? A Chilling Warning for Humanity

Jul 19, 2024 · 78:06


Watch the Full Episode for FREE: Dr Roman Yampolskiy - AI Apocalypse: Are We Doomed? A Chilling Warning for Humanity - London Real


The 1% Podcast hosted by Shay Dalton
AI – A doomsday scenario with Roman Yampolskiy

Jul 10, 2024 · 48:54


Roman Yampolskiy, PhD, is a computer scientist and tenured professor at the University of Louisville, where he is currently the director of the Cyber Security Laboratory in the Department of Computer Engineering and Computer Science at the Speed School of Engineering. He is an expert on artificial intelligence, with over 100 published papers and books. He was one of the earliest exponents of artificial intelligence safety and remains a pre-eminent figure in the field. His latest book, ‘AI: Unexplainable, Unpredictable, Uncontrollable', explores the unpredictability of AI outcomes, the difficulty in explaining AI decisions, and the potentially unsolvable nature of the AI control problem, as well as delving into more theoretical topics like personhood and consciousness in relation to artificial intelligence, and the potential hazards further AI developments might bring in the years to come. Hosted on Acast. See acast.com/privacy for more information.

AI DAILY: Breaking News in AI

Plus UK Candidate Not A Bot. (subscribe below) Like this? Get AIDAILY, delivered to your inbox, every weekday. Subscribe to our newsletter at https://aidaily.us

Nicolas Cage Fears AI and Digital Replication: Nicolas Cage expressed his fear of AI in a New Yorker interview, hoping recent body scans for upcoming projects, including "Spider-Man Noir," won't be used posthumously. Cage prefers roles in indie dramas like "Pig" over $100 million tentpoles, valuing stories about real human experiences. His next project is Osgood Perkins' horror film “Longlegs.”

Reform UK Candidate Denies AI Bot Allegations: Reform UK candidate Mark Matlock faced accusations of being an AI bot after missing election night due to pneumonia. Matlock, recovering from illness, clarified he's a real person and plans to release a video to debunk the rumors. He expressed amusement at the situation, appreciating the unexpected publicity.

AI Adoption Among Music Producers Reaches 20-25%: Surveys by Soundplate and Tracklib reveal that 20-25% of music producers use AI tools. Most use AI for stem separation and mastering rather than full song creation. Despite fears of AI impacting creators' livelihoods, some see benefits in assistive AI. A new AI model offers royalty-free samples, showing AI's potential to aid rather than harm musicians.

Weight-Loss Drugs and AI: A Potential Revolution: The integration of AI with GLP-1 weight-loss drugs, such as Ozempic and Wegovy, is gaining momentum. Companies are leveraging AI to personalize care and manage treatments, helping to address the high demand and diverse applications of these drugs. This convergence may enhance obesity care, track drug availability, and explore new treatment possibilities.

AI's Potential to Revolutionize Medical Diagnosis: AI could significantly enhance medical diagnosis by addressing two key issues: human error and undetected disease patterns. AI systems can detect subtle patterns in medical data, improving the accuracy and speed of diagnoses for conditions like ischemic stroke and hypertrophic cardiomyopathy. However, AI's high cost and need for large-scale data pose challenges, necessitating increased investment and government support.

Could AI Help Us Escape the Simulation?: Roman Yampolskiy, an AI safety researcher, believes we might be living in a simulation. He suggests that super-intelligent AI could confirm this theory and potentially help us escape. Despite the existential risks posed by AI, Yampolskiy sees it as a tool for breaking free from our simulated reality, though philosophical challenges remain.

The Foresight Institute Podcast
Existential Hope Podcast: Roman Yampolskiy | The Case for Narrow AI

Jun 26, 2024 · 47:08


Dr Roman Yampolskiy holds a PhD degree from the Department of Computer Science and Engineering at the University at Buffalo. There he was a recipient of a four-year National Science Foundation IGERT (Integrative Graduate Education and Research Traineeship) fellowship. His main areas of interest are behavioral biometrics, digital forensics, pattern recognition, genetic algorithms, neural networks, artificial intelligence and games, and he is an author of over 100 publications including multiple journal articles and books.

Session Summary
We discuss everything AI safety with Dr. Roman Yampolskiy. As AI technologies advance at a breakneck pace, the conversation highlights the pressing need to balance innovation with rigorous safety measures. Contrary to many other voices in the safety space, he argues for the necessity of maintaining AI as narrow, task-oriented systems: “I'm arguing that it's impossible to indefinitely control superintelligent systems”. Nonetheless, Yampolskiy is optimistic about the future capabilities of narrow AI, from politics to longevity and health.

Full transcript, list of resources, and art piece: https://www.existentialhope.com/podcasts

Existential Hope was created to collect positive and possible scenarios for the future so that we can have more people commit to creating a brighter future, and to begin mapping out the main developments and challenges that need to be navigated to reach it. Existential Hope is a Foresight Institute project. Hosted by Allison Duettmann and Beatrice Erkers. Follow Us: Twitter | Facebook | LinkedIn | Existential Hope Instagram. Explore every word spoken on this podcast through Fathom.fm. Hosted on Acast. See acast.com/privacy for more information.

For Humanity: An AI Safety Podcast
Episode #32 - “Humans+AIs=Harmony?” For Humanity: An AI Risk Podcast

Jun 12, 2024 · 97:00


Could humans and AGIs live in a state of mutual symbiosis, like the ecosystem of a coral reef? (FULL INTERVIEW STARTS AT 00:23:21) Please Donate Here To Help Promote For Humanity https://www.paypal.com/paypalme/forhumanitypodcast

In episode 32, host John Sherman interviews BioComm AI CEO Peter Jensen. Peter is working on a number of AI-risk related projects. He believes it's possible humans and AGIs can co-exist in mutual symbiosis. This podcast is not journalism. But it's not opinion either. This is a long form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth. For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and what you can do to help save humanity.

RESOURCES:
BUY STEPHEN HANSON'S BEAUTIFUL BOOK!!! https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
NYT: OpenAI Insiders Warn of a ‘Reckless' Race for Dominance https://www.nytimes.com/2024/06/04/technology/openai-culture-whistleblowers.html?unlocked_article_code=1.xE0._mTr.aNO4f_hEp2J4&smid=nytcore-ios-share&referringSource=articleShare&sgrp=c-cb
Dwarkesh Patel Interviews Another Whistleblower: Leopold Aschenbrenner - 2027 AGI, China/US Super-Intelligence Race, & The Return of History
Roman Yampolskiy on Lex Fridman: Roman Yampolskiy: Dangers of Superintelligent AI | Lex Fridman Podcast #431
Gladstone AI on Joe Rogan: Joe Rogan Experience #2156 - Jeremie & Edouard Harris
Peter Jensen's Videos: HOW can AI Kill-us-All? So Simple, Even a Child can Understand (1:25); WHY do we want AI? For our Humanity (1:00); WHAT is the BIG Problem? Wanted: SafeAI Forever (3:00); FIRST do no harm. (Safe AI Blog); DECK. On For Humanity Podcast “Just the FACTS, please. WHY? WHAT? HOW?” (flip book) https://discover.safeaiforever.com/

JOIN THE FIGHT, help Pause AI!!!!
Pause AI Join the Pause AI Weekly Discord Thursdays at 2pm EST   / discord   https://discord.com/invite/pVMWjddaW7 22 Word Statement from Center for AI Safety Statement on AI Risk | CAIS https://www.safe.ai/work/statement-on-ai-risk Best Account on Twitter: AI Notkilleveryoneism Memes  https://twitter.com/AISafetyMemes TIMESTAMPS: **The release of products that are safe (00:00:00)** **Breakthroughs in AI research (00:00:41)** **OpenAI whistleblower concerns (00:01:17)** **Roman Yampolskiy's appearance on Lex Fridman podcast (00:02:27)** **The capabilities and risks of AI systems (00:03:35)** **Interview with Gladstone AI founders on Joe Rogan podcast (00:08:29)** **OpenAI whistleblower's interview on Hard Fork podcast (00:14:08)** **Peter Jensen's work on AI risk and media communication (00:20:01)** **The interview with Peter Jensen (00:22:49)** **Mutualistic Symbiosis and AI Containment (00:31:30)** **The Probability of Catastrophic Outcome from AI (00:33:48)** **The AI Safety Institute and Regulatory Efforts (00:42:18)** **Regulatory Compliance and the Need for Safety (00:47:12)** **The hard compute cap and hardware adjustment (00:47:47)** **Physical containment and regulatory oversight (00:48:29)** **Viewing the issue as a big business regulatory issue vs. a national security issue (00:50:18)** **Funding and science for AI safety (00:49:59)** **OpenAI's power allocation and ethical concerns (00:51:44)** **Concerns about AI's impact on employment and societal well-being (00:53:12)** **Parental instinct and the urgency of AI safety (00:56:32)**

The Nonlinear Library
LW - AI #67: Brief Strange Trip by Zvi

Jun 7, 2024 · 63:22


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI #67: Brief Strange Trip, published by Zvi on June 7, 2024 on LessWrong. I had a great time at LessOnline. It was a both a working trip and also a trip to an alternate universe, a road not taken, a vision of a different life where you get up and start the day in dialogue with Agnes Callard and Aristotle and in a strange combination of relaxed and frantically go from conversation to conversation on various topics, every hour passing doors of missed opportunity, gone forever. Most of all it meant almost no writing done for five days, so I am shall we say a bit behind again. Thus, the following topics are pending at this time, in order of my guess as to priority right now: 1. Leopold Aschenbrenner wrote a giant thesis, started a fund and went on Dwarkesh Patel for four and a half hours. By all accounts, it was all quite the banger, with many bold claims, strong arguments and also damning revelations. 2. Partly due to Leopold, partly due to an open letter, partly due to continuing small things, OpenAI fallout continues, yes we are still doing this. This should wait until after Leopold. 3. DeepMind's new scaling policy. I have a first draft, still a bunch of work to do. 4. The OpenAI model spec. As soon as I have the cycles and anyone at OpenAI would have the cycles to read it. I have a first draft, but that was written before a lot happened, so I'd want to see if anything has changed. 5. The Rand report on securing AI model weights, which deserves more attention than the brief summary I am giving it here. 6. You've Got Seoul. I've heard some sources optimistic about what happened there but mostly we've heard little. It doesn't seem that time sensitive, diplomacy flows slowly until it suddenly doesn't. 7. The Problem of the Post-Apocalyptic Vault still beckons if I ever have time. Also I haven't processed anything non-AI in three weeks, the folders keep getting bigger, but that is a (problem? opportunity?) for future me. And there are various secondary RSS feeds I have not checked. There was another big change this morning. California's SB 1047 saw extensive changes. While many were helpful clarifications or fixes, one of them severely weakened the impact of the bill, as I cover on the linked post. The reactions to the SB 1047 changes so far are included here. Table of Contents 1. Introduction. 2. Table of Contents. 3. Language Models Offer Mundane Utility. Three thumbs in various directions. 4. Language Models Don't Offer Mundane Utility. Food for lack of thought. 5. Fun With Image Generation. Video generation services have examples. 6. Deepfaketown and Botpocalypse Soon. The dog continues not to bark. 7. They Took Our Jobs. Constant AI switching for maximum efficiency. 8. Get Involved. Help implement Biden's executive order. 9. Someone Explains It All. New possible section. Template fixation. 10. Introducing. Now available in Canada. Void where prohibited. 11. In Other AI News. US Safety Institute to get model access, and more. 12. Covert Influence Operations. Your account has been terminated. 13. Quiet Speculations. The bear case to this week's Dwarkesh podcast. 14. Samuel Hammond on SB 1047. Changes address many but not all concerns. 15. Reactions to Changes to SB 1047. So far coming in better than expected. 16. The Quest for Sane Regulation. Your random encounters are corporate lobbyists. 17. That's Not a Good Idea. 
Antitrust investigation of Nvidia, Microsoft and OpenAI. 18. The Week in Audio. Roman Yampolskiy, also new Dwarkesh Patel is a banger. 19. Rhetorical Innovation. Innovative does not mean great. 20. Oh Anthropic. I have seen the other guy, but you are not making this easy. 21. Securing Model Weights is Difficult. Rand has some suggestions. 22. Aligning a Dumber Than Human Intelligence is Still Difficult. What to do? 23. Aligning a Smarter Than Human Inte...

The Nonlinear Library: LessWrong
LW - AI #67: Brief Strange Trip by Zvi

Jun 7, 2024 · 63:22


Link to original articleWelcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI #67: Brief Strange Trip, published by Zvi on June 7, 2024 on LessWrong. I had a great time at LessOnline. It was a both a working trip and also a trip to an alternate universe, a road not taken, a vision of a different life where you get up and start the day in dialogue with Agnes Callard and Aristotle and in a strange combination of relaxed and frantically go from conversation to conversation on various topics, every hour passing doors of missed opportunity, gone forever. Most of all it meant almost no writing done for five days, so I am shall we say a bit behind again. Thus, the following topics are pending at this time, in order of my guess as to priority right now: 1. Leopold Aschenbrenner wrote a giant thesis, started a fund and went on Dwarkesh Patel for four and a half hours. By all accounts, it was all quite the banger, with many bold claims, strong arguments and also damning revelations. 2. Partly due to Leopold, partly due to an open letter, partly due to continuing small things, OpenAI fallout continues, yes we are still doing this. This should wait until after Leopold. 3. DeepMind's new scaling policy. I have a first draft, still a bunch of work to do. 4. The OpenAI model spec. As soon as I have the cycles and anyone at OpenAI would have the cycles to read it. I have a first draft, but that was written before a lot happened, so I'd want to see if anything has changed. 5. The Rand report on securing AI model weights, which deserves more attention than the brief summary I am giving it here. 6. You've Got Seoul. I've heard some sources optimistic about what happened there but mostly we've heard little. It doesn't seem that time sensitive, diplomacy flows slowly until it suddenly doesn't. 7. The Problem of the Post-Apocalyptic Vault still beckons if I ever have time. Also I haven't processed anything non-AI in three weeks, the folders keep getting bigger, but that is a (problem? opportunity?) for future me. And there are various secondary RSS feeds I have not checked. There was another big change this morning. California's SB 1047 saw extensive changes. While many were helpful clarifications or fixes, one of them severely weakened the impact of the bill, as I cover on the linked post. The reactions to the SB 1047 changes so far are included here. Table of Contents 1. Introduction. 2. Table of Contents. 3. Language Models Offer Mundane Utility. Three thumbs in various directions. 4. Language Models Don't Offer Mundane Utility. Food for lack of thought. 5. Fun With Image Generation. Video generation services have examples. 6. Deepfaketown and Botpocalypse Soon. The dog continues not to bark. 7. They Took Our Jobs. Constant AI switching for maximum efficiency. 8. Get Involved. Help implement Biden's executive order. 9. Someone Explains It All. New possible section. Template fixation. 10. Introducing. Now available in Canada. Void where prohibited. 11. In Other AI News. US Safety Institute to get model access, and more. 12. Covert Influence Operations. Your account has been terminated. 13. Quiet Speculations. The bear case to this week's Dwarkesh podcast. 14. Samuel Hammond on SB 1047. Changes address many but not all concerns. 15. Reactions to Changes to SB 1047. So far coming in better than expected. 16. The Quest for Sane Regulation. Your random encounters are corporate lobbyists. 17. 
That's Not a Good Idea. Antitrust investigation of Nvidia, Microsoft and OpenAI. 18. The Week in Audio. Roman Yampolskiy, also new Dwarkesh Patel is a banger. 19. Rhetorical Innovation. Innovative does not mean great. 20. Oh Anthropic. I have seen the other guy, but you are not making this easy. 21. Securing Model Weights is Difficult. Rand has some suggestions. 22. Aligning a Dumber Than Human Intelligence is Still Difficult. What to do? 23. Aligning a Smarter Than Human Inte...

Lex Fridman Podcast
#431 – Roman Yampolskiy: Dangers of Superintelligent AI

Jun 2, 2024 · 142:39


Roman Yampolskiy is an AI safety researcher and author of a new book titled AI: Unexplainable, Unpredictable, Uncontrollable. Please support this podcast by checking out our sponsors: - Yahoo Finance: https://yahoofinance.com - MasterClass: https://masterclass.com/lexpod to get 15% off - NetSuite: http://netsuite.com/lex to get free product tour - LMNT: https://drinkLMNT.com/lex to get free sample pack - Eight Sleep: https://eightsleep.com/lex to get $350 off EPISODE LINKS: Roman's X: https://twitter.com/romanyam Roman's Website: http://cecs.louisville.edu/ry Roman's AI book: https://amzn.to/4aFZuPb PODCAST INFO: Podcast website: https://lexfridman.com/podcast Apple Podcasts: https://apple.co/2lwqZIr Spotify: https://spoti.fi/2nEwCF8 RSS: https://lexfridman.com/feed/podcast/ YouTube Full Episodes: https://youtube.com/lexfridman YouTube Clips: https://youtube.com/lexclips SUPPORT & CONNECT: - Check out the sponsors above, it's the best way to support this podcast - Support on Patreon: https://www.patreon.com/lexfridman - Twitter: https://twitter.com/lexfridman - Instagram: https://www.instagram.com/lexfridman - LinkedIn: https://www.linkedin.com/in/lexfridman - Facebook: https://www.facebook.com/lexfridman - Medium: https://medium.com/@lexfridman OUTLINE: Here's the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time. (00:00) - Introduction (09:12) - Existential risk of AGI (15:25) - Ikigai risk (23:37) - Suffering risk (27:12) - Timeline to AGI (31:44) - AGI turing test (37:06) - Yann LeCun and open source AI (49:58) - AI control (52:26) - Social engineering (54:59) - Fearmongering (1:04:49) - AI deception (1:11:23) - Verification (1:18:22) - Self-improving AI (1:30:34) - Pausing AI development (1:36:51) - AI Safety (1:46:35) - Current AI (1:51:58) - Simulation (1:59:16) - Aliens (2:00:50) - Human mind (2:07:10) - Neuralink (2:16:15) - Hope for the future (2:20:11) - Meaning of life

TechFirst with John Koetsier
AGI: solved already?

May 21, 2024 · 22:10


Have we already achieved AGI? OpenAI just released GPT-4o. It's impressive, and the implications are huge for so many different professions ... not least of which is education and tutoring. It's also showing us the beginning of AI that is truly present in our lives ... AI that sees what we see, doesn't exist just in a box with text input, hears what we hear, and hallucinates less. What does that — and other recent advancements in AI — mean for AGI? In this episode of TechFirst, host John Koetsier discusses the implications of OpenAI's GPT-4o release and explores the current state and future of Artificial General Intelligence (AGI) with Roman Yampolskiy, a PhD research scientist and associate professor. They delve into the rapid advancements in AI, the concept of AGI, potential impacts on different professions, the cultural and existential risks, and the challenges of safety and alignment with AGI. The conversation also covers the societal changes needed to adapt to a future where mental and physical labor could be fully automated.
00:00 Exploring the Boundaries of AI's Capabilities
01:36 The Evolution and Impact of AI on Human Intelligence
03:39 The Rapid Advancements in AI and the Path to AGI
06:38 The Societal Implications of Advanced AI and AGI
09:27 Navigating the Future of Work and AI's Role
14:52 The Ethical Dilemmas of Developing Superintelligent AI
19:22 Looking Ahead: The Unpredictable Future of AI

The Joe Reis Show
Roman Yampolskiy - AI Safety & The Dangers of General Super Intelligence

May 8, 2024 · 40:01


Roman Yampolskiy is an AI safety researcher who's deeply concerned with the dangers of General Super Intelligence. We chat about why he doesn't think humanity has much time left, and what we can do about it. Twitter: https://twitter.com/romanyam?lang=en

Irish Tech News Audio Articles
Humanity's Biggest Gamble with Roman Yampolskiy

Apr 25, 2024 · 1:04


AI safety pioneer Roman Yampolskiy believes that artificial intelligence presents a challenge unlike anything humanity has ever faced. He says we have just one chance to get it right. A single AI model can cause an existential crisis, and there are already more than 500,000 open source AI models available. In his view, the AI arms race is creating an infinite range of possibilities for catastrophe. Roman returns to The Futurists to share perspectives from his new book, AI: Unexplainable, Unpredictable, Uncontrollable, delivering a devastating critique of the current state of safety in AI and an urgent call to action.
Humanity's Biggest Gamble with Roman Yampolskiy
Roman Vladimirovich Yampolskiy is a Russian computer scientist at the University of Louisville, known for his work on behavioral biometrics, security of cyberworlds, and artificial intelligence safety. He holds a PhD from the University at Buffalo. See more podcasts here

The Irish Tech News Podcast
Humanity's Biggest Gamble with Roman Yampolskiy

Apr 24, 2024 · 48:47


AI safety pioneer Roman Yampolskiy believes that artificial intelligence presents a challenge unlike anything humanity has ever faced. He says we have just one chance to get it right. A single AI model can cause an existential crisis, and there are already more than 500,000 open source AI models available. In his view, the AI arms race is creating an infinite range of possibilities for catastrophe. Roman returns to The Futurists to share perspectives from his new book, AI: Unexplainable, Unpredictable, Uncontrollable, delivering a devastating critique of the current state of safety in AI and an urgent call to action. Roman Vladimirovich Yampolskiy is a Russian computer scientist at the University of Louisville, known for his work on behavioral biometrics, security of cyberworlds, and artificial intelligence safety. He holds a PhD from the University at Buffalo.

The Futurists
Humanity's Biggest Gamble with Roman Yampolskiy 

Apr 19, 2024 · 47:40


AI safety pioneer Roman Yampolskiy believes that artificial intelligence presents a challenge unlike anything humanity has ever faced. He says we have just one chance to get it right. A single AI model can cause an existential crisis, and there are already more than 500,000 open source AI models available. In his view, the AI arms race is creating an infinite range of possibilities for catastrophe. Roman returns to The Futurists to share perspectives from his new book, "AI: Unexplainable, Unpredictable, Uncontrollable" delivering a devastating critique of the current state of safety in AI and an urgent call to action.

Artificial Intelligence and You
196 - Guest: Roman Yampolskiy, AI Safety Professor, part 2

Mar 18, 2024 · 32:23


This and all episodes at: https://aiandyou.net/ .   Returning as our first three-peat guest is Roman Yampolskiy, tenured Associate Professor of Computer Science at the University of Louisville in Kentucky where he is also the director of the Cyber Security Laboratory. Roman is here to talk about his new book, AI: Unexplainable, Unpredictable, Uncontrollable. Roman has been central in the field of warning about the Control Problem and Value Alignment Problems of AI from the very beginning, back when doing so earned people some scorn from practitioners, yet Roman is a professor of computer science and applies rigorous methods to his analyses of these problems. It's those rigorous methods that we tap into in this interview, because Roman connects principles of computer science to the issue of existential risk from AI. In this part we talk about how we should respond to the problem of unsafe AI development and how Roman and his community are addressing it, what he would do with infinite resources, and… the threat Roman's coffee cup poses to humanity.  All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.          

Artificial Intelligence and You
195 - Guest: Roman Yampolskiy, AI Safety Professor, part 1

Mar 11, 2024 · 36:28


This and all episodes at: https://aiandyou.net/ .   Returning as our first three-peat guest is Roman Yampolskiy, tenured Associate Professor of Computer Science at the University of Louisville in Kentucky where he is also the director of the Cyber Security Laboratory. Roman is here to talk about his new book, AI: Unexplainable, Unpredictable, Uncontrollable. Roman has been central in the field of warning about the Control Problem and Value Alignment Problems of AI from the very beginning, back when doing so earned people some scorn from practitioners, yet Roman is a professor of computer science and applies rigorous methods to his analyses of these problems. It's those rigorous methods that we tap into in this interview, because Roman connects principles of computer science to the issue of existential risk from AI. In this part we talk about why this work is important to Roman, the dimensions of the elements of unexplainability, unpredictability, and uncontrollability, the level of urgency of the problems, and drill down into why today's AI is not safe and why it's getting worse. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.          

For Humanity: An AI Safety Podcast
Dr. Roman Yampolskiy Interview, Part 2: For Humanity, An AI Safety Podcast Episode #5

Nov 27, 2023 · 41:25


In Episode #5 Part 2: John Sherman interviews Dr. Roman Yampolskiy, Director of the Cyber Security Lab at the University of Louisville, and renowned AI safety researcher. Among the many topics discussed in this episode:
- what is at the core of AI safety risk skepticism
- why AI safety research leaders themselves are so all over the map
- why journalism is failing so miserably to cover AI safety appropriately
- the drastic step the federal government could take to really slow Big AI down
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and what you can do to help save humanity.
ROMAN YAMPOLSKIY RESOURCES
Roman Yampolskiy's Twitter: https://twitter.com/romanyam
➡️Roman's YouTube Channel: https://www.youtube.com/c/RomanYampolskiy
➡️Pause Giant AI Experiments (open letter): https://futureoflife.org/open-letter/...
➡️Roman on Medium: https://romanyam.medium.com/
#ai #aisafety #airisk #humanextinction #romanyampolskiy #samaltman #openai #anthropic #deepmind

For Humanity: An AI Safety Podcast
Dr. Roman Yampolskiy Interview, Part 2: For Humanity, An AI Safety Podcast Episode #5 TRAILER

Nov 26, 2023 · 2:30


In Episode #5 Part 2, TRAILER: John Sherman interviews Dr. Roman Yampolskiy, Director of the Cyber Security Lab at the University of Louisville, and renowned AI safety researcher. Among the many topics discussed in this episode:
- what is at the core of AI safety risk skepticism
- why AI safety research leaders themselves are so all over the map
- why journalism is failing so miserably to cover AI safety appropriately
- the drastic step the federal government could take to really slow Big AI down
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and what you can do to help save humanity.
ROMAN YAMPOLSKIY RESOURCES
Roman Yampolskiy's Twitter: https://twitter.com/romanyam
➡️Roman's YouTube Channel: https://www.youtube.com/c/RomanYampol...
➡️Pause Giant AI Experiments (open letter): https://futureoflife.org/open-letter/...
➡️Roman on Medium: https://romanyam.medium.com/
#ai #aisafety #airisk #humanextinction #romanyampolskiy #samaltman #openai #anthropic #deepmind

The Nonlinear Library
EA - Announcing New Beginner-friendly Book on AI Safety and Risk by Darren McKee

Nov 25, 2023 · 1:41


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing New Beginner-friendly Book on AI Safety and Risk, published by Darren McKee on November 25, 2023 on The Effective Altruism Forum. Concisely, I've just released the book Uncontrollable: The Threat of Artificial Superintelligence and the Race to Save the World It's an engaging introduction to the main issues and arguments about AI safety and risk. Clarity and accessibility were prioritized. There are blurbs of support from Max Tegmark, Will MacAskill, Roman Yampolskiy and others. Main argument is that AI capabilities are increasing rapidly, we may not be able to fully align or control advanced AI systems, which creates risk. There is great uncertainty, so we should be prudent and act now to ensure AI is developed safely. It tries to be hopeful. Why does it exist? There are lots of useful posts, blogs, podcasts, and articles on AI safety, but there was no up-to-date book entirely dedicated to the AI safety issue that is written for those without any exposure to the issue. (Including those with no science background.) This book is meant to fill that gap and could be useful outreach or introductory materials. If you have already been following the AI safety issue, there likely isn't a lot that is new for you. So, this might be best seen as something useful for friends, relatives, some policy makers, or others just learning about the issue. (although, you may still like the framing) It's available on numerous Amazon marketplaces. Audiobook and Hardcover options to follow. It was a hard journey. I hope it is of value to the community. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

For Humanity: An AI Safety Podcast
Dr. Roman Yampolskiy Interview, Part 1: For Humanity, An AI Safety Podcast Episode #4

Nov 22, 2023 · 35:00


In Episode #4 Part 1: John Sherman interviews Dr. Roman Yampolskiy, Director of the Cyber Security Lab at the University of Louisville, and renowned AI safety researcher. Among the many topics discussed in this episode:
- why more average people aren't more involved and upset about AI safety
- how frontier AI capabilities workers go to work every day knowing their work risks human extinction, and go back to work the next day
- how we can talk to our kids about these dark, existential issues
- what if AI safety researchers concerned about human extinction from AI are just somehow wrong?
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and what you can do to help save humanity.

For Humanity: An AI Safety Podcast
Dr. Roman Yampolskiy Interview, Part 1: For Humanity, An AI Safety Podcast Episode #4 TRAILER

Nov 20, 2023 · 1:58


In Episode #4 Part 1, TRAILER: John Sherman interviews Dr. Roman Yampolskiy, Director of the Cyber Security Lab at the University of Louisville, and renowned AI safety researcher. Among the many topics discussed in this episode:
- why more average people aren't more involved and upset about AI safety
- how frontier AI capabilities workers go to work every day knowing their work risks human extinction, and go back to work the next day
- how we can talk to our kids about these dark, existential issues
- what if AI safety researchers concerned about human extinction from AI are just somehow wrong?
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and what you can do to help save humanity.

Building Better Worlds
The Precautionary Principle and Superintelligence: A Conversation with Author Dr. Roman Yampolskiy

Oct 5, 2023 · 45:55


In this episode of Benevolent AI, safety researcher Dr. Roman Yampolskiy speaks with host Dr. Ryan Merrill about societal concerns about controlling superintelligent AI systems. Based on his knowledge of what the top programmers are doing, Roman says there is at most a four-year window to implement safety mechanisms before AI capabilities exceed human intelligence and AI is able to rewrite its own code. That window could even be as short as one year from now. Either way, there's not much time left. Yampolskiy discusses the current approaches to instilling ethics in AI, as well as the bias shaped by the programmer who determines what is helpful or ethical. Yampolskiy advocates for a pause on development of more capable AI systems until safety is guaranteed. He compares the situation to the atomic bomb. Technology is advancing rapidly, so programmers urgently need to establish social safeguards. More engagement is needed from the AI community to address these concerns now; if the worst-case scenario is addressed, then any positive outcome is a bonus. With all the risks of advanced AI, it also presents tremendous opportunities to benefit humanity, but safety first. #Benevolent #ai #safetyfirst Watch on Youtube @BetterWorlds

# About Roman V. Yampolskiy
Dr. Roman V. Yampolskiy is a Tenured Associate Professor in the department of Computer Engineering and Computer Science at the Speed School of Engineering, University of Louisville. He is the founding and current director of the Cyber Security Lab and an author of many books including Artificial Superintelligence: a Futuristic Approach.

# About Better Worlds
Better Worlds is a communication and community building platform comprised of weekly podcasts, engaging international conferences and hack-a-thons to encourage and support the development of Web3 solutions. Our programs celebrate voices from every continent to forge a shared and abundant future.

Artificial Intelligence in Industry with Daniel Faggella
[AI Futures] A Debate on What AGI Means for Society and the Species - with Roko Mijic and Roman Yampolskiy

Sep 15, 2023 · 55:11


In another installment of our ‘AI Futures' series on the ‘AI in Business' podcast, we host a debate on what Artificial General Intelligence (AGI) will mean for society and the human race writ large. While opinions on the subject diverge wildly from utopian to apocalyptic, the episode features grounded insight from established voices on both sides of the optimism-pessimism spectrum. Representing optimists is philosopher and thinker Roko Mijic, famous for the ‘Roko's Basilisk' controversy on the website Lesswrong. On the side of skepticism, we feature Dr. Roman Yampolskiy, Professor of Computer Science at the University of Louisville and a returning guest to the program. The two spar over whether or not AI with evident superior abilities to human beings will mean our certain destruction or whether such creations can remain subservient to our well-being. To access Emerj's frameworks for AI readiness, ROI, and strategy, visit Emerj Plus at emerj.com/p1.

The Nonlinear Library
LW - AI Regulation May Be More Important Than AI Alignment For Existential Safety by otto.barten

Aug 24, 2023 · 8:07


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI Regulation May Be More Important Than AI Alignment For Existential Safety, published by otto.barten on August 24, 2023 on LessWrong. Summary: Aligning a single powerful AI is not enough: we're only safe if no-one, ever, can build an unaligned powerful AI. Yudkowsky tried to solve this with the pivotal act: the first aligned AI does something (such as melting all GPUs) which makes sure no unaligned AIs can ever get built, by anyone. However, the labs are currently apparently not aiming to implement a pivotal act. That means that aligning an AGI, while creating lots of value, would not reduce existential risk. Instead, global hardware/data regulation is what's needed to reduce existential risk. Therefore, those aiming to reduce AI existential risk should focus on AI Regulation, rather than on AI Alignment. Epistemic status: I've been thinking about this for a few years, while working professionally on x-risk reduction. I think I know most literature on the topic. I have also discussed the topic with a fair number of experts (who in some cases seemed to agree, and in other cases did not seem to agree). Thanks to David Krueger, Matthijs Maas, Roman Yampolskiy, Tim Bakker, Ruben Dieleman, and Alex van der Meer for helpful conversations, comments, and/or feedback. These people do not necessarily share the views expressed in this post. This post is mostly about AI x-risk caused by a take-over. It may or may not be valid for other types of AI x-risks. This post is mostly about the 'end game' of AI existential risk, not about intermediate states. AI existential risk is an evolutionary problem. As Eliezer Yudkowsky and others have pointed out: even if there are safe AIs, those are irrelevant, since they will not prevent others from building dangerous AIs. Examples of safe AIs could be oracles or satisficers, insofar as it turns out to be possible to combine these AI types with high intelligence. But, as Yudkowsky would put it: "if all you need is an object that doesn't do dangerous things, you could try a sponge". Even if a limited AI would be a safe AI, it would not reduce AI existential risk. This is because at some point, someone would create an AI with an unbounded goal (create as many paperclips as possible, predict the next word in the sentence with unlimited accuracy, etc.). This is the AI that would kill us, not the safe one. This is the evolutionary nature of the AI existential risk problem. It is described excellently by Anthony Berglas in his underrated book, and more recently also in Ben Hendrycks' paper. This evolutionary part is a fundamental and very important property of AI existential risk and a large part of why this problem is difficult. Yet, many in AI Alignment and industry seem to focus on only aligning a single AI, which I think is insufficient. Yudkowsky aimed to solve this evolutionary problem (the fact that no-one, ever, should build an unsafe AI) with the so-called pivotal act. An aligned superintelligence would not only not kill humanity, it would also perform a pivotal act, the toy example being to melt all GPUs globally, or, as he later put it, to subtly change all GPUs globally so that they can no longer be used to create an AGI. 
This would be the act that would actually save humanity from extinction, by making sure no unsafe superintelligences are created, ever, by anyone (it may be argued that melting all GPUs, and all other future hardware that could run AI, would need to be done indefinitely by the aligned superintelligence, else even a pivotal act may be insufficient). The concept of a pivotal act, however, seems to have gone thoroughly out of fashion. None of the leading labs, AI governance think tanks, governments, etc. are talking or, apparently, thinking much about it. Rather, they seem to be thinking about things like non-proliferati...

The Nonlinear Library: LessWrong
LW - AI Regulation May Be More Important Than AI Alignment For Existential Safety by otto.barten

Aug 24, 2023 · 8:07


Link to original articleWelcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI Regulation May Be More Important Than AI Alignment For Existential Safety, published by otto.barten on August 24, 2023 on LessWrong. Summary: Aligning a single powerful AI is not enough: we're only safe if no-one, ever, can build an unaligned powerful AI. Yudkowsky tried to solve this with the pivotal act: the first aligned AI does something (such as melting all GPUs) which makes sure no unaligned AIs can ever get built, by anyone. However, the labs are currently apparently not aiming to implement a pivotal act. That means that aligning an AGI, while creating lots of value, would not reduce existential risk. Instead, global hardware/data regulation is what's needed to reduce existential risk. Therefore, those aiming to reduce AI existential risk should focus on AI Regulation, rather than on AI Alignment. Epistemic status: I've been thinking about this for a few years, while working professionally on x-risk reduction. I think I know most literature on the topic. I have also discussed the topic with a fair number of experts (who in some cases seemed to agree, and in other cases did not seem to agree). Thanks to David Krueger, Matthijs Maas, Roman Yampolskiy, Tim Bakker, Ruben Dieleman, and Alex van der Meer for helpful conversations, comments, and/or feedback. These people do not necessarily share the views expressed in this post. This post is mostly about AI x-risk caused by a take-over. It may or may not be valid for other types of AI x-risks. This post is mostly about the 'end game' of AI existential risk, not about intermediate states. AI existential risk is an evolutionary problem. As Eliezer Yudkowsky and others have pointed out: even if there are safe AIs, those are irrelevant, since they will not prevent others from building dangerous AIs. Examples of safe AIs could be oracles or satisficers, insofar as it turns out to be possible to combine these AI types with high intelligence. But, as Yudkowsky would put it: "if all you need is an object that doesn't do dangerous things, you could try a sponge". Even if a limited AI would be a safe AI, it would not reduce AI existential risk. This is because at some point, someone would create an AI with an unbounded goal (create as many paperclips as possible, predict the next word in the sentence with unlimited accuracy, etc.). This is the AI that would kill us, not the safe one. This is the evolutionary nature of the AI existential risk problem. It is described excellently by Anthony Berglas in his underrated book, and more recently also in Ben Hendrycks' paper. This evolutionary part is a fundamental and very important property of AI existential risk and a large part of why this problem is difficult. Yet, many in AI Alignment and industry seem to focus on only aligning a single AI, which I think is insufficient. Yudkowsky aimed to solve this evolutionary problem (the fact that no-one, ever, should build an unsafe AI) with the so-called pivotal act. An aligned superintelligence would not only not kill humanity, it would also perform a pivotal act, the toy example being to melt all GPUs globally, or, as he later put it, to subtly change all GPUs globally so that they can no longer be used to create an AGI. 
This would be the act that would actually save humanity from extinction, by making sure no unsafe superintelligences are created, ever, by anyone (it may be argued that melting all GPUs, and all other future hardware that could run AI, would need to be done indefinitely by the aligned superintelligence, else even a pivotal act may be insufficient). The concept of a pivotal act, however, seems to have gone thoroughly out of fashion. None of the leading labs, AI governance think tanks, governments, etc. are talking or, apparently, thinking much about it. Rather, they seem to be thinking about things like non-proliferati...

Artificial Intelligence and You
161 - Guest: Roman Yampolskiy, AI Safety Professor, part 2

Artificial Intelligence and You

Play Episode Listen Later Jul 17, 2023 32:30


This and all episodes at: https://aiandyou.net/ .   What do AIs do with optical illusions... and jokes? Returning to the show is Roman Yampolskiy, tenured professor of Computer Science at the University of Louisville in Kentucky where he is also the director of the Cyber Security Laboratory. He has published so much in the field of AI Safety for so long that he is one of the most eminent researchers in that space. He has written numerous papers and books, including Artificial Superintelligence: A Futuristic Approach in 2015 and Artificial Intelligence Safety and Security in 2018. Roman was last on the show in episodes 16 and 17, and events of the last seven months have changed the AI landscape so much that he has been in strong demand in the media. Roman is a rare academic who works to bring his findings to laypeople, and has been in high profile interviews like futurism.com and Business Today, and many mainstream/broadcast TV news shows, but he found time to sit down and talk with us. In the conclusion of the interview we talk about wider-ranging issues of AI safety, just how the existential risk is being addressed today, and more on the recent public letters calling attention to AI risk. Plus we get a scoop on Roman's latest paper, Unmonitorability of Artificial Intelligence. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.        

Kentucky Tonight
Artificial Intelligence

Kentucky Tonight

Play Episode Listen Later Jul 11, 2023 56:36


Renee Shaw and guests discuss the rise of artificial intelligence and its uses. Guests: Trey Conatser, Ph.D., UK Center for the Enhancement of Teaching and Learning; Donnie Piercey, 2021 Kentucky Teacher of the Year; Roman Yampolskiy, Ph.D., UofL professor, author and AI safety & cybersecurity researcher; State Rep. Nima Kulkarni (D-Louisville); and State Rep. Josh Bray (R-Mount Vernon).

Artificial Intelligence and You
160 - Guest: Roman Yampolskiy, AI Safety Professor, part 1

Artificial Intelligence and You

Play Episode Listen Later Jul 10, 2023 32:37


This and all episodes at: https://aiandyou.net/ .   With statements about the existential threat of AI being publicly signed by prominent AI personalities, we need an academic's take on that, and returning to the show is Roman Yampolskiy, tenured professor of Computer Science at the University of Louisville in Kentucky where he is also the director of the Cyber Security Laboratory. He has published so much in the field of AI Safety for so long that he is a preeminent researcher in that space. He has written numerous papers and books, including Artificial Superintelligence: A Futuristic Approach in 2015 and Artificial Intelligence Safety and Security in 2018. Roman was last on the show in episodes 16 and 17, and events of the last seven months have changed the AI landscape so much that he has been in strong demand in the media. Roman is a rare academic who works to bring his findings to laypeople, and has been in high profile interviews like futurism.com and Business Today, and many mainstream/broadcast TV news shows, but he found time to sit down and talk with us. In the first part of the interview we discussed the open letters about AI, how ChatGPT and its predecessors/successors move us closer to AGI and existential risk, and what Roman has in common with Leonardo DiCaprio. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.        

Profoundly Pointless
Artificial Intelligence Safety Expert Dr. Roman Yampolskiy

Profoundly Pointless

Play Episode Listen Later Jun 28, 2023 72:59


Artificial Intelligence (A.I.) is building the future. But will it be a paradise or our doom? Computer Scientist Dr. Roman Yampolskiy studies safety issues related to artificial intelligence. We talk ChatGPT, the next wave of A.I. technology, and the biggest A.I. threats. Then, we take a look at “society” for a special Top 5. Dr. Roman Yampolskiy: 01:50 Pointless: 31:14 Top 5: 57:03 Contact the Show Dr. Roman Yampolskiy Twitter Dr. Roman Yampolskiy Website Learn more about your ad choices. Visit megaphone.fm/adchoices

The Nonlinear Library
LW - The Control Problem: Unsolved or Unsolvable? by Remmelt

The Nonlinear Library

Play Episode Listen Later Jun 4, 2023 26:54


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Control Problem: Unsolved or Unsolvable?, published by Remmelt on June 2, 2023 on LessWrong. tl;dr No control method exists to safely contain the global feedback effects of self-sufficient learning machinery. What if this control problem turns out to be an unsolvable problem? Where are we two decades into resolving to solve a seemingly impossible problem? If something seems impossible... well, if you study it for a year or five, it may come to seem less impossible than in the moment of your snap initial judgment. Eliezer Yudkowsky, 2008 A list of lethalities... we are not on course to solve in practice in time on the first critical try; none of it is meant to make a much stronger claim about things that are impossible in principle. Eliezer Yudkowsky, 2022 How do you interpret these two quotes, by a founding researcher, fourteen years apart? A. We indeed made comprehensive progress on the AGI control problem, and now at least the overall problem does not seem impossible anymore. B. The more we studied the overall problem, the more we uncovered complex sub-problems we'd need to solve as well, but so far can at best find partial solutions to. Which problems involving physical/information systems were not solved after two decades? Oh ye seekers after perpetual motion, how many vain chimeras have you pursued? Go and take your place with the alchemists. Leonardo da Vinci, 1494 No mathematical proof or even rigorous argumentation has been published demonstrating that the A[G]I control problem may be solvable, even in principle, much less in practice. Roman Yampolskiy, 2021 We cannot rely on the notion that if we try long enough, maybe AGI safety turns out possible after all. Historically, many researchers and engineers tried to solve problems that turned out impossible: perpetual motion machines that both conserve and disperse energy; uniting general relativity and quantum mechanics into some local variable theory; singular methods for 'squaring the circle', 'doubling the cube' or 'trisecting the angle'; distributed data stores where messages of data are consistent in their content, and also continuously available in a network that is also tolerant to partitions; formal axiomatic systems that are consistent, complete and decidable. Smart creative researchers of their generation came up with idealized problems. Problems that, if solved, would transform science, if not humanity. They plowed away at the problem for decades, if not millennia. Until some bright outsider proved by contradiction of the parts that the problem is unsolvable. Our community is smart and creative – but we cannot just rely on our resolve to align AI. We should never forsake our epistemic rationality, no matter how much something seems the instrumentally rational thing to do. Nor can we take comfort in the claim by a founder of this field that they still know it to be possible to control AGI to stay safe. Thirty years into running a program to secure the foundations of mathematics, David Hilbert declared “We must know. We will know!” By then, Kurt Gödel had constructed the first incompleteness theorem. Hilbert kept his declaration for his gravestone. Short of securing the foundations of safe AGI control – that is, through empirically-sound formal reasoning – we cannot rely on any researcher's pithy claim that "alignment is possible in principle".
Going by historical cases, this problem could turn out solvable. Just really, really hard to solve. The flying machine seemed an impossible feat of engineering. Next, controlling a rocket's trajectory to the moon seemed impossible. By the same reference class, ‘long-term safe AGI' could turn out unsolvable – the perpetual motion machine of our time. It takes just one researcher to define the problem to be solved, reason from empirically sound premises, and arrive ...

The Reality Check
TRC #665: Is It Safe To Stand In Front of Microwave Ovens? Interview with Dr. Roman Yampolskiy About The Simulation Hypothesis

The Reality Check

Play Episode Listen Later Mar 26, 2023 30:45


Cristina investigates a pervasive belief that standing in front of a microwave oven poses health risks. Darren and Adam have another fascinating discussion with Dr. Roman Yampolskiy. This time it's about his recent work regarding the Simulation Hypothesis, which proposes that all of our existence is a simulated reality. Dr. Yampolskiy is a computer scientist at the University of Louisville where he is the Director of the Cyber Security Laboratory in the Department of Computer Engineering and Computer Science. He is an author of over 100 publications including numerous books.

The Reality Check
TRC #663: Interview with Dr. Roman Yampolskiy About The Threat Of Advanced AI

The Reality Check

Play Episode Listen Later Feb 18, 2023 57:58


Dr. Roman Yampolskiy is a computer scientist at the University of Louisville where he is the director of the Cyber Security Laboratory in the department of Computer Engineering and Computer Science. He is an author of over 100 publications including numerous books. 

London Futurists
Hacking the simulation, with Roman Yampolskiy

London Futurists

Play Episode Listen Later Nov 16, 2022 29:41


In the 4th century BC, the Greek philosopher Plato theorised that humans do not perceive the world as it really is. All we can see is shadows on a wall. In 2003, the Swedish philosopher Nick Bostrom published a paper which formalised an argument to prove Plato was right. The paper argued that one of the following three statements is true:
1. We will go extinct fairly soon
2. Advanced civilisations don't produce simulations containing entities which think they are naturally-occurring sentient intelligences. (This could be because it is impossible.)
3. We are in a simulation.
The reason for this is that if it is possible, and civilisations can become advanced without exploding, then there will be vast numbers of simulations, and it is vanishingly unlikely that any randomly selected civilisation (like us) is a naturally-occurring one. Some people find this argument pretty convincing. As we will hear later, some of us have added twists to the argument. But some people go even further, and speculate about how we might bust out of the simulation. One such person is our friend and our guest in this episode, Roman Yampolskiy, Professor of Computer Science at the University of Louisville.
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Further reading:
"How to Hack the Simulation" by Roman Yampolskiy: https://www.researchgate.net/publication/364811408_How_to_Hack_the_Simulation
"The Simulation Argument" by Nick Bostrom: https://www.simulation-argument.com/

The Nonlinear Library
AF - How Do We Align an AGI Without Getting Socially Engineered? (Hint: Box It) by Peter S. Park

The Nonlinear Library

Play Episode Listen Later Aug 10, 2022 20:17


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How Do We Align an AGI Without Getting Socially Engineered? (Hint: Box It), published by Peter S. Park on August 10, 2022 on The AI Alignment Forum. “Overconfidence in yourself is a swift way to defeat.” - Sun Tzu TL;DR: Escape into the Internet is probably an instrumental goal for an agentic AGI. An incompletely aligned AGI may escape prematurely, and the biggest failure mode for this is probably the AGI socially engineering the alignment researchers. Thus, opening an additional information channel between the researchers and the AGI (e.g., adding an interpretability tool and/or researcher) is inherently risky. The expected cost of adding this channel may even exceed the expected scientific benefit. Whether this is true depends on the informational efficiency of the channel (which increases the expected scientific benefit), the level of the AGI's capabilities (which increases the risk of its premature escape), and the security level of the AGI's box setup (which decreases the risk). Post: Suppose we are building a machine-learning model that has a high chance of becoming Artificial General Intelligence (AGI). We are trying to align it, but we're uncertain about whether our training is working. Also, rapidly increasing capabilities may push us into a Sharp Left Turn scenario. How do we handle this volatile situation as safely as possible? We will need to adopt a security mindset, aimed at effectively preventing catastrophic outcomes. We propose a procedure (inspired by the pioneering work of Roman Yampolskiy) for how to train the AGI towards alignment in a secure and effective manner. Most of the ideas here are not new. Nevertheless, we hope that our game-theoretic framing of the problem will be able to help alignment researchers better plan their research directions and future alignment efforts. Imagining the Scenario Consider a team of researchers who are trying to build an aligned AGI. This situation can be represented by the following simplified model. Definition: A “box” is defined by a collection of failsafes designed to confine a system away from the outside world. The empty collection comprised of zero failsafes is called the “trivial box” (i.e., an AI which is allowed unrestricted access to send and receive data on the Internet). Step 1 - The AI is trained The AGI emerges in a box. The precise time of the AGI's emergence and/or its superhuman capabilities probably cannot be predicted, though the research team may have a rough estimate. Step 2 - The AGI may try to escape At any point in the training or testing process, the AGI may attempt to break out of the box. If all failsafes of the box are breached before the team notices, the AGI escapes and the game is over (Step 4). One example of this is the scenario in which the AGI hacks itself into the Internet. Another example is the scenario in which the AGI releases power-seeking malware agents into the Internet. If the AGI fails to escape, the researchers may decide either to retrain the AGI from scratch, fine-tune (which is less computationally expensive but may fail to remove the AGI's escape intent/misalignment), or table the project entirely. The trivial box cannot prevent escapes. Step 3 - The researchers run tests on the AGI In our simplified model, researchers open a collection of information channels between themselves and the AGI in the box. 
As noted by Yampolskiy, the information from these channels is dual-use: The human researchers can use the channel to probe various properties of the AGI, to help them train it to be more aligned. In addition to advancing the science of alignment, the channel can also be used for scientific benefits in general. The AGI can use the channel to socially engineer the researchers on the other end. Socially engineered researchers can then help the AGI break the failsafe...

The Human Podcast
AI Safety Researcher, Roman Yampolskiy | The Human Podcast #12

The Human Podcast

Play Episode Listen Later Jun 19, 2022 41:34


Roman Yampolskiy is an AI safety & security researcher. He's a tenured associate professor at the University of Louisville and the director of the Cyber Security Laboratory. The Human Podcast is a new show that explores the lives and stories of a wide range of individuals. New episodes are released every week - subscribe to stay notified.
WATCH - FILMED IN PERSON: https://www.youtube.com/channel/UC29JGmLUfv5eUeKzv3cRXGw
SOCIAL:
Twitter - https://twitter.com/heyhumanpodcast
Instagram - https://www.instagram.com/heythehumanpodcast/
GUEST:
Roman's Twitter - https://twitter.com/romanyam
Roman's Wikipedia - https://en.wikipedia.org/wiki/Roman_Yampolskiy
Roman's Webpage - http://cecs.louisville.edu/ry/
Roman's Books - https://www.amazon.co.uk/Roman-V-Yampolskiy/e/B00DBE57XM
Roman's Papers - https://scholar.google.com/citations?user=0_Rq68cAAAAJ&hl=en
ORDER OF CONVERSATION:
0:00 - Intro
0:36 - Early Life
2:08 - PhD / AI
7:13 - AI Safety Research
14:54 - Reading / Teaching
19:23 - Asilomar AI Conference
21:33 - Philosophical AI Research
30:18 - DALL·E 2 & AI Progress
4:48 - Robotics
38:36 - Advice For AI Safety Work
39:18 - Roman's Books
GUEST SUGGESTIONS / FEEDBACK: Know anyone who may like to speak about their life? Or have any feedback? Just message heythehumanpodcast@gmail.com

The Futurists
Living with Super Intelligent AI

The Futurists

Play Episode Listen Later Jun 3, 2022 46:09


This week we interview Dr. Roman Yampolskiy, a renowned specialist in Artificial Intelligence. We delve into the likely path that AI will take over the coming years, and how Artificial General Intelligence and then Super Intelligent AIs might change the course of human history and life on our planet. How far away are AIs as capable and as intelligent as humans? That moment may be much closer than you think.

How AI Happens
AI Safety Engineering - Dr. Roman Yampolskiy

How AI Happens

Play Episode Listen Later Apr 28, 2022 25:13


Today's guest has committed many years of his life to trying to understand Artificial Superintelligence and the security concerns associated with it. Dr. Roman Yampolskiy is a computer scientist (with a Ph.D. in behavioral biometrics), and an Associate Professor at the University of Louisville. He is also the author of the book Artificial Superintelligence: A Futuristic Approach. Today he joins us to discuss AI safety engineering. You'll hear about some of the safety problems he has discovered in his 10 years of research, his thoughts on accountability and ownership when AI fails, and whether he believes it's possible to enact any real safety measures in light of the decentralization and commoditization of processing power. You'll discover some of the near-term risks of not prioritizing safety engineering in AI, how to make sure you're developing it in a safe capacity, and what organizations are deploying it in a way that Dr. Yampolskiy believes to be above board.
Key Points From This Episode:
An introduction to Dr. Roman Yampolskiy, his education, and how he ended up in his current role.
Insight into Dr. Yampolskiy's Ph.D. dissertation in behavioral biometrics and what he learned from it.
A definition of AI safety engineering.
The two subcomponents of AI safety: systems we already have and future AI.
Thoughts on whether or not there is a greater need for guardrails in AI than other forms of technology.
Some of the safety problems that Dr. Yampolskiy has discovered in his 10 years of research.
Dr. Yampolskiy's thoughts on the need for some type of AI security governing body or oversight board.
Whether it's possible to enact any sort of safety in light of the decentralization and commoditization of processing power.
Solvable problem areas.
Trying to negotiate the tradeoff between enabling AI to have creative freedom and being able to control it.
Thoughts on whether or not there will be a time where we will have to decide whether or not to go past the point of no return in terms of AI superintelligence.
Some of the near-term risks of not prioritizing safety engineering in AI.
What led Dr. Yampolskiy to focus on this area of AI expertise.
How to make sure you're developing AI safely.
Thoughts on accountability and ownership when AI fails, and the legal implications of this.
Other problems Dr. Yampolskiy has uncovered.
Thoughts on the need for a greater understanding of the implications of AI work and whether or not this is a conceivable solution.
Use cases or organizations that are deploying AI in a way that Dr. Yampolskiy believes to be above board.
Questions that Dr. Yampolskiy would be asking if he was on an AI development safety team.
How you can measure progress in safety work.
Tweetables:
“Long term, we want to make sure that we don't create something which is more capable than us and completely out of control.” — @romanyam [0:04:27]
“This is the tradeoff we're facing: Either [AI] is going to be very capable, independent, and creative, or we can control it.” — @romanyam [0:12:11]
“Maybe there are problems that we really need Superintelligence [to solve]. In that case, we have to give it more creative freedom but with that comes the danger of it making decisions that we will not like.” — @romanyam [0:12:31]
“The more capable the system is, the more it is deployed, the more damage it can cause.” — @romanyam [0:14:55]
“It seems like it's the most important problem, it's the meta-solution to all the other problems. If you can make friendly well-controlled superintelligence, everything else is trivial. It will solve it for you.” — @romanyam [0:15:26]
Links Mentioned in Today's Episode:
Dr. Roman Yampolskiy
Artificial Superintelligence: A Futuristic Approach
Dr. Roman Yampolskiy on Twitter

Skeptically Curious
Episode 15 - AI Controllability, AGI, and Possible AI Futures with Roman Yampolskiy

Skeptically Curious

Play Episode Listen Later Nov 10, 2021 66:14


For this episode I was very pleased to be once again joined by Roman Yampolskiy. Dr. Yampolskiy is a professor in the department of Computer Engineering and Computer Science at the Speed School of Engineering at the University of Louisville in Kentucky and has authored dozens of peer-reviewed academic papers and some books. In this discussion, I first asked my guest about the recent AGI-21 conference organised by Ben Goertzel's SingularityNET, held in San Francisco from the 15th to the 18th October, to which he remotely contributed. Roman summarised his presentation on AI Controllability, an incredibly important topic from an AI risk standpoint, but one that has not received nearly enough attention. The conference provided a neat segue into the topic comprising the bulk of our discussion, namely AGI, or artificial general intelligence. I threw plenty at my interviewee, primarily perspectives gleaned from some papers and books, as well as interviews, to which I was recently exposed. However, Roman parried most of my challenging salvos with impressive aplomb. I then shifted focus to some provocative possible future scenarios, both positive and negative, involving AI systems gaining greater intelligence and competency. Lastly, we ventured onto more personal terrain as I asked Roman about his family's move to the United States, his interest in computers, intellectual influences, and what the secret is to his astonishing productivity. Roman Yampolskiy's page at the University of Louisville: http://cecs.louisville.edu/ry/ List of Yampolskiy's papers at Research Gate: https://www.researchgate.net/profile/Roman-Yampolskiy Yampolskiy's ‘AI Risk Skepticism' paper: https://www.researchgate.net/publication/351368775_AI_Risk_Skepticism AGI Control Theory Presentation at AGI-21: https://www.youtube.com/watch?v=Palb2Ue_RjI ‘Human ≠ AGI' paper: https://arxiv.org/ftp/arxiv/papers/2007/2007.07710.pdf ‘Personal Universes' paper: https://arxiv.org/ftp/arxiv/papers/1901/1901.01851.pdf ‘Here's Why We May Need to Rethink Artificial Neural Networks' by Alberto Romero: https://towardsdatascience.com/heres-why-we-may-need-to-rethink-artificial-neural-networks-c7492f51b7bc ‘Evil Robots, Killer Computers, and Other Myths' by Steve Shwartz: https://www.aiperspectives.com/evil-robots Twitter account for Skeptically Curious: https://twitter.com/SkepticallyCur1 Patreon page for Skeptically Curious: https://www.patreon.com/skepticallycurious

Skeptically Curious
Episode 11 - AI Risk with Roman Yampolskiy

Skeptically Curious

Play Episode Listen Later Sep 24, 2021 82:22


For this episode I was delighted to be joined by Dr. Roman Yampolskiy, a professor of Computer Engineering and Computer Science at the University of Louisville. Few scholars have devoted as much time to seriously exploring the myriad of threats potentially inhering in the development of highly intelligent artificial machinery as Dr. Yampolskiy, who established the field of AI Safety Engineering, also known simply as AI Safety. After the preliminary inquiry into his background, I asked Roman Yampolskiy to explain deep neural networks, or artificial neural networks as they are also known. One of the most important topics in AI research is what is referred to as the Alignment Problem, which my guest helped to clarify. We then moved on to his work on two other vitally significant issues in AI, namely understandability and explainability. I then asked him to provide a brief history of AI Safety, which as he revealed built on Yudkowsky's ideas of Friendly AI. We discussed whether there is an increased interest in the risks attendant to AI among researchers, the perverse incentive that exists among those in this industry to downplay the risks of their work, and how to ensure greater transparency, which as you will hear is worryingly far more difficult than many might assume based on the inherently opaque nature of how deep neural networks perform their operations. I homed in on the issue of massive job losses that increasing AI capabilities could potentially engender, as well as the perception I have that many who discuss this topic downplay the socioeconomic context within which automation occurs. After I asked my guest to define artificial general intelligence, or AGI, and super intelligence, we spent considerable time discussing the possibility of machines achieving human-level mental capabilities. This part of the interview was the most contentious and touched on neuroscience, the nature of consciousness, mind-body dualism, the dubious analogy between brains and computers that has been all too pervasive in the AI field since its inception, as well as a fascinating paper by Yampolskiy proposing to detect qualia in artificial systems that perceive the same visual illusions as humans. In the final stretch of the interview, we discussed the impressive language-based system GPT3, whether AlphaZero is the first truly intelligent artificial system, as Gary Kasparov claims, the prospects of quantum computing to potentially achieve AGI, and, lastly, what he considers to be the greatest AI risk factor, which according to my guest is "purposeful malevolent design." While this far-ranging interview, with many concepts raised and names dropped, sometimes veered into various weeds some might deem overly specialised and/or technical, I nevertheless think there is plenty to glean about a range of fascinating, not to mention pertinent, topics for those willing to stay the course. Roman Yampolskiy's page at the University of Louisville: http://cecs.louisville.edu/ry/ Yampolskiy's papers: https://scholar.google.com/citations?user=0_Rq68cAAAAJ&hl=en Roman's book, Artificial Superintelligence: A Futuristic Approach: https://www.amazon.com/Artificial-Superintelligence-Futuristic-Roman-Yampolskiy/dp/1482234432 Twitter account for Skeptically Curious: https://twitter.com/SkepticallyCur1 Patreon page for Skeptically Curious: https://www.patreon.com/skepticallycurious

The Dissenter
#523 Roman Yampolskiy: AI, Security, Controllability, and the Singularity

The Dissenter

Play Episode Listen Later Sep 17, 2021 47:36


------------------Support the channel------------ Patreon: https://www.patreon.com/thedissenter PayPal: paypal.me/thedissenter PayPal Subscription 1 Dollar: https://tinyurl.com/yb3acuuy PayPal Subscription 3 Dollars: https://tinyurl.com/ybn6bg9l PayPal Subscription 5 Dollars: https://tinyurl.com/ycmr9gpz PayPal Subscription 10 Dollars: https://tinyurl.com/y9r3fc9m PayPal Subscription 20 Dollars: https://tinyurl.com/y95uvkao ------------------Follow me on--------------------- Facebook: https://www.facebook.com/thedissenteryt/ Twitter: https://twitter.com/TheDissenterYT This show is sponsored by Enlites, Learning & Development done differently. Check the website here: http://enlites.com/ Dr. Roman V. Yampolskiy is a Tenured Associate Professor in the department of Computer Engineering and Computer Science at the Speed School of Engineering at the University of Louisville. He is the founding and current director of the Cyber Security Lab and an author of many books including Artificial Superintelligence: a Futuristic Approach. Dr. Yampolskiy is a Senior member of IEEE and AGI; Member of Kentucky Academy of Science, and Research Advisor for MIRI and Associate of GCRI. Dr. Yampolskiy's main areas of interest are AI Safety, Artificial Intelligence, Behavioral Biometrics, Cybersecurity, Digital Forensics, Games, Genetic Algorithms, and Pattern Recognition. In this episode, we talk about artificial intelligence. We start by discussing what AI is, and how it compares to natural intelligence. We then go into some of the issues we have to worry about, like the ones related to security, controllability, and unexplainability of AI. We talk about the Singularity, the concept, and what it could be like. -- A HUGE THANK YOU TO MY PATRONS/SUPPORTERS: KARIN LIETZCKE, ANN BLANCHETTE, PER HELGE LARSEN, LAU GUERREIRO, JERRY MULLER, HANS FREDRIK SUNDE, BERNARDO SEIXAS, HERBERT GINTIS, RUTGER VOS, RICARDO VLADIMIRO, CRAIG HEALY, OLAF ALEX, PHILIP KURIAN, JONATHAN VISSER, JAKOB KLINKBY, ADAM KESSEL, MATTHEW WHITINGBIRD, ARNAUD WOLFF, TIM HOLLOSY, HENRIK AHLENIUS, JOHN CONNORS, PAULINA BARREN, FILIP FORS CONNOLLY, DAN DEMETRIOU, ROBERT WINDHAGER, RUI INACIO, ARTHUR KOH, ZOOP, MARCO NEVES, COLIN HOLBROOK, SUSAN PINKER, PABLO SANTURBANO, SIMON COLUMBUS, PHIL KAVANAGH, JORGE ESPINHA, CORY CLARK, MARK BLYTH, ROBERTO INGUANZO, MIKKEL STORMYR, ERIC NEURMANN, SAMUEL ANDREEFF, FRANCIS FORDE, TIAGO NUNES, BERNARD HUGUENEY, ALEXANDER DANNBAUER, FERGAL CUSSEN, YEVHEN BODRENKO, HAL HERZOG, NUNO MACHADO, DON ROSS, JONATHAN LEIBRANT, JOÃO LINHARES, OZLEM BULUT, NATHAN NGUYEN, STANTON T, SAMUEL CORREA, ERIK HAINES, MARK SMITH, J.W., JOÃO EIRA, TOM HUMMEL, SARDUS FRANCE, DAVID SLOAN WILSON, YACILA DEZA-ARAUJO, IDAN SOLON, ROMAIN ROCH, DMITRY GRIGORYEV, TOM ROTH, DIEGO LONDOÑO CORREA, YANICK PUNTER, ADANER USMANI, CHARLOTTE BLEASE, NICOLE BARBARO, ADAM HUNT, PAWEL OSTASZEWSKI, AL ORTIZ, NELLEKE BAK, KATHRINE AND PATRICK TOBIN, GUY MADISON, GARY G HELLMANN, SAIMA AFZAL, ADRIAN JAEGGI, NICK GOLDEN, PAULO TOLENTINO, JOÃO BARBOSA, JULIAN PRICE, EDWARD HALL, HEDIN BRØNNER, DOUGLAS P. FRY, AND FRANCA BORTOLOTTI! A SPECIAL THANKS TO MY PRODUCERS, YZAR WEHBE, JIM FRANK, ŁUKASZ STAFINIAK, IAN GILLIGAN, LUIS CAYETANO, TOM VANEGDOM, CURTIS DIXON, BENEDIKT MUELLER, VEGA GIDEY, AND THOMAS TRUMBLE! AND TO MY EXECUTIVE PRODUCERS, MICHAL RUSIECKI, ROSEY, JAMES PRATT, MATTHEW LAVENDER, SERGIU CODREANU, AND JASON PARTEE!

Fault Lines
13 US Marines and 95 Afghans Dead After Suicide Bombing In Kabul

Fault Lines

Play Episode Listen Later Aug 27, 2021 167:28


On this episode of Fault Lines, hosts Jamarl Thomas and Shane Stranahan talk about the very real fears of mega-intelligent super machines and learning algorithms, the increasing threat of ISIS-K, the overruling of the eviction ban by the Supreme Court, and the lasting political impact of America's failed Afghan withdrawal on Biden's record.
Guests:
Roman Yampolskiy - AI safety and security researcher and professor of computer science and engineering | Artificial Intelligence
Garland Nixon - Sputnik political analyst and the host of The Critical Hour | The Narrative America Needs Against the Taliban
Margaret Kimberly - Senior columnist and editor for Black Agenda Report | Can Biden Outlive Afghanistan?
In the first hour Roman Yampolskiy joined the show to talk about the advancements of artificial intelligence and machine learning, and whether people need to be worried about the safety of such smart computers.
In the second hour Fault Lines was joined by Garland Nixon for a discussion on the benefit of pulling out of Afghanistan even after a strong display by ISIS-K in a double bombing outside the extremely packed Kabul airport. Garland also talked about the Supreme Court decision to contradict Biden's extension of the eviction ban.
In the third hour Margaret Kimberly joined the conversation to talk about the meta-politics behind the US withdrawal from Afghanistan. The discussion also brought up this withdrawal as a black spot on Biden's record, but is there still enough time in his presidency for it to be swept under the rug?

Mind Matters
Robert J. Marks: There’s One Thing Only Humans Can Do

Mind Matters

Play Episode Listen Later Jul 22, 2021 11:18


This week, we listen to Robert J. Marks speaking at the launch of the Walter Bradley Center for Natural and Artificial Intelligence in Dallas, Texas. Robert J. Marks is the Director of the Bradley Center and Distinguished Professor of Electrical and Computer Engineering at Baylor University. In a panel discussion at the 2019 launch of the Bradley Center, Dr. Marks… Source

Den of Rich
Roman Yampolskiy | Роман Ямпольский

Den of Rich

Play Episode Listen Later Jul 2, 2021 95:09


Dr. Roman V. Yampolskiy is a Tenured Associate Professor in the Department of Computer Science and Engineering at the Speed School of Engineering, University of Louisville. He is the founding and current director of the Cyber Security Lab and an author of many books including Artificial Superintelligence: a Futuristic Approach. During his tenure at UofL, Dr. Yampolskiy has been recognized as Distinguished Teaching Professor, Professor of the Year, Faculty Favorite, Top 4 Faculty, Leader in Engineering Education, Top 10 of Online College Professor of the Year, and Outstanding Early Career in Education award winner among many other honors and distinctions. Yampolskiy is a Senior Member of IEEE and AGI; a Member of the Kentucky Academy of Science. Dr. Yampolskiy's main areas of interest are AI Safety and Cybersecurity. Dr. Yampolskiy is an author of over 200 publications including multiple journal articles and books. His research has been cited by 1000+ scientists and profiled in popular magazines both American and foreign, hundreds of websites, on radio and TV. Dr. Yampolskiy's research has been featured 1000+ times in numerous media reports in 30+ languages. Dr. Yampolskiy has been an invited speaker at 100+ events including the Swedish National Academy of Science, Supreme Court of Korea, Princeton University and many others. FIND ROMAN ON SOCIAL MEDIA LinkedIn | Facebook | Instagram | Twitter | Medium © Copyright 2022 Den of Rich. All rights reserved.

Den of Rich
#192 - Roman Yampolskiy

Den of Rich

Play Episode Listen Later Jul 2, 2021 95:09


Dr. Roman V. Yampolskiy is a Tenured Associate Professor in the department of Computer Science and Engineering at the Speed School of Engineering, University of Louisville. He is the founding and current director of the Cyber Security Lab and an author of many books including Artificial Superintelligence: a Futuristic Approach. During his tenure at UofL, Dr. Yampolskiy has been recognized as: Distinguished Teaching Professor, Professor of the Year, Faculty Favorite, Top 4 Faculty, Leader in Engineering Education, Top 10 of Online College Professor of the Year, and Outstanding Early Career in Education award winner among many other honors and distinctions. Yampolskiy is a Senior member of IEEE and AGI; Member of Kentucky Academy of Science. Dr. Yampolskiy's main areas of interest are AI Safety and Cybersecurity. Dr. Yampolskiy is an author of over 200 publications including multiple journal articles and books. His research has been cited by 1000+ scientists and profiled in popular magazines both American and foreign, hundreds of websites, on radio and TV. Dr. Yampolskiy's research has been featured 1000+ times in numerous media reports in 30+ languages. Dr. Yampolskiy has been an invited speaker at 100+ events including Swedish National Academy of Science, Supreme Court of Korea, Princeton University and many others. FIND ROMAN ON SOCIAL MEDIA LinkedIn | Facebook | Instagram | Twitter | Medium Visit the podcast page for additional content https://www.uhnwidata.com/podcast

Future of Life Institute Podcast
Roman Yampolskiy on the Uncontrollability, Incomprehensibility, and Unexplainability of AI

Future of Life Institute Podcast

Play Episode Listen Later Mar 20, 2021 72:01


Roman Yampolskiy, Professor of Computer Science at the University of Louisville, joins us to discuss whether we can control, comprehend, and explain AI systems, and how this constrains the project of AI safety. Topics discussed in this episode include: -Roman’s results on the unexplainability, incomprehensibility, and uncontrollability of AI -The relationship between AI safety, control, and alignment -Virtual worlds as a proposal for solving multi-multi alignment -AI security You can find the page for this podcast here: https://futureoflife.org/2021/03/19/roman-yampolskiy-on-the-uncontrollability-incomprehensibility-and-unexplainability-of-ai/ You can find FLI's three new policy focused job postings here: https://futureoflife.org/job-postings/ Have any feedback about the podcast? You can share your thoughts here: www.surveymonkey.com/r/DRBFZCT Timestamps: 0:00 Intro 2:35 Roman’s primary research interests 4:09 How theoretical proofs help AI safety research 6:23 How impossibility results constrain computer science systems 10:18 The inability to tell if arbitrary code is friendly or unfriendly 12:06 Impossibility results clarify what we can do 14:19 Roman’s results on unexplainability and incomprehensibility 22:34 Focusing on comprehensibility 26:17 Roman’s results on uncontrollability 28:33 Alignment as a subset of safety and control 30:48 The relationship between unexplainability, incomprehensibility, and uncontrollability with each other and with AI alignment 33:40 What does it mean to solve AI safety? 34:19 What do the impossibility results really mean? 37:07 Virtual worlds and AI alignment 49:55 AI security and malevolent agents 53:00 Air gapping, boxing, and other security methods 58:43 Some examples of historical failures of AI systems and what we can learn from them 1:01:20 Clarifying impossibility results 1:06:55 Examples of systems failing and what these demonstrate about AI 1:08:20 Are oracles a valid approach to AI safety? 1:10:30 Roman’s final thoughts This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

Bill Murphy's  RedZone Podcast | World Class IT Security
How Do We Control AI That Is Smarter Than Us With Dr. Roman Yampolskiy

Bill Murphy's RedZone Podcast | World Class IT Security

Play Episode Listen Later Mar 17, 2021 41:55


On this episode, you will gain valuable insight into the future cyber ecosystem and what the potential existence of AGI (Super Intelligent AI) could affect how we work in the future.  I’m speaking with Professional Speaker, Author of Artificial Intelligence Safety and Security, and current Associate Professor at the University of Louisville, Dr. Roman Yampolskiy. Dr. Yampolskiy is known globally for his work on behavioral biometrics, security of cyberworlds, and artificial intelligence safety.   Dr. Yampolskiy’s knack for seeing the real-world, practical opportunities that AI and Superintelligence can provide the modern world is captivating to hear.  Hear what Dr. Yampolskiy has to say about the possible convergence of AI towards AGI on the physical world within your lifetime, whether putting AI safety guardrails around superintelligence is advisable, and which industries will be completely changed by AI in the not-so-distant future.   Featuring:  Dr. Roman Yampolskiy, Associate Professor at the University of Louisville  Episode Resources And Show Notes: http://www.redzonetech.net/blog/how-do-we-control-ai-that-is-smarter-than-us-with-dr-roman-yampolskiy Connect With Bill On LinkedIn: https://www.linkedin.com/in/billmurphynll/ 

Clearer Thinking with Spencer Greenberg
Superintelligence and Consciousness (with Roman Yampolskiy)

Clearer Thinking with Spencer Greenberg

Play Episode Listen Later Mar 10, 2021 76:10


What is superintelligence? Can a superintelligence be controlled? Why aren't people (especially academics, computer scientists, and companies) more worried about superintelligence alignment problems? Is it possible to determine whether or not an AI is conscious? Do today's neural networks experience some form of consciousness? Are humans general intelligences? How do artificial superintelligence and artificial general intelligence differ? What sort of threats do malevolent actors pose over and above those posed by the usual problems in AI safety? Dr. Roman V. Yampolskiy is a Tenured Associate Professor in the department of Computer Science and Engineering at the University of Louisville. He is the founding and current director of the Cyber Security Lab and an author of many books including Artificial Superintelligence: a Futuristic Approach. Dr. Yampolskiy's main areas of interest are Artificial Intelligence Safety and Cybersecurity. Follow him on Twitter at @romanyam.
Further reading: Superintelligence: Paths, Dangers, Strategies by Nick Bostrom

FUTURATI PODCAST
Ep. 16: Roman Yampolskiy on AI safety.

FUTURATI PODCAST

Play Episode Listen Later Jan 18, 2021 58:45


Dr. Roman V. Yampolskiy is a Tenured Associate Professor in the department of Computer Science and Engineering at the University of Louisville. He is the founding and current director of the Cyber Security Lab and an author of many books including "Artificial Superintelligence: a Futuristic Approach". Dr. Yampolskiy's main areas of interest are Artificial Intelligence Safety and Cybersecurity. Learn more about your ad choices. Visit megaphone.fm/adchoices

Sentientism
“Humans might one day need to beg AIs for our sentient rights” – AI expert Roman Yampolskiy – Sentientist Conversations

Sentientism

Play Episode Listen Later Dec 15, 2020 38:51


Full show notes & links here. Roman is a Professor of Computer Science & Engineering at the University of Louisville. He is known for his work on behavioral biometrics, the security of cyberworlds & artificial intelligence safety. He founded the field of intellectology – the analysis of the forms & limits of intelligence. He is director of the Cyber Security Laboratory in the department of Computer Engineering & Computer Science at the Speed School of Engineering. Roman has written over 100 publications, including many books spanning these fields. In these Sentientist Conversations, we talk about the two most important questions: “what’s real?” & “what matters?” To catch the cameo from Luna the puppy ("seems conscious") watch the video of our conversation here. Don't forget to subscribe to our channel while you're there. We discuss:
Growing up in the Soviet Union. Not much religion around. Not meeting anyone religious until coming to the US as an adult
Not finding religious arguments interesting or compelling
Fascination with intersection of big ideas, philosophy, science
Questions can come from religion, but standards of evidence come from science
Comfort with others holding supernatural beliefs – helps us remain open-minded
Is god analogous with someone running a world simulation?
We need to get better at evaluating evidence. Should be separate from theories/hypotheses
The need for scientific humility
Deep fakes
Freedom as an ethical foundation, subject to not hurting others or restricting the freedom of others
Can we develop a pop-up AI that guides our ethics?
Consciousness / sentience warrants protection
Can Artificial Intelligences achieve consciousness or even super-consciousness?
Humans might need to beg future AIs for our rights, as we grant rights to animals
The hypocrisy of thinking animals should have rights, but enjoying eating them (theory vs. practice & cognitive dissonance)
Why so many AI researchers are ready to acknowledge AI sentience but forget or disregard non-human animal sentience
Consciousness, sentience, qualia, the “Hard Problem”, David Chalmers
Assessing sentience
The role of observers in quantum physics. Could there be some non-material element of consciousness?
Meeting Luna the puppy
“Seems conscious!”
Will future AIs warrant protection/rights
Ending animal farming to set a good example to our future AI overlords
AIs will prefer “sentientism” to “humanism”
Substrate independence
Sentience/consciousness as a spectrum, simple to super (beyond human)
Ethical challenges with non-sentient AI
If human agents can't agree (value alignment problem), can we even move towards a shared environment that will make us all happy? Maybe everyone could have their own individual virtual world! Even a positively negotiated shared environment wouldn't be as good for each of us as a perfect individual environment – just don't switch it off
Can animal farming go away in a few years via clean-meat etc?
Veganism and moral resolutions to cognitive dissonance, vs. tech alternatives removing blockers
The dangers of disenfranchising humans if we grant rights too broadly (e.g. to trillions of bacteria or sentient AIs)
Equal vs. degrees of moral consideration
“Most of us will be as ethical as our choices”.

Voices in AI
Episode 113 – A Conversation with Roman Yampolskiy

Voices in AI

Play Episode Listen Later Dec 1, 2020 53:11


Byron speaks with Roman Yampolskiy about the nature of intelligence, artificial and otherwise. Episode 113 – A Conversation with Roman Yampolskiy

The Runchuks Podcast
Dr. Roman Yampolskiy on how is artificial intelligence changing the world

The Runchuks Podcast

Play Episode Listen Later Oct 30, 2020 159:52


Dr. Roman Yampolskiy is one of the leading experts on the topic of AI safety. In this podcast episode we cover a wide variety of questions - such as - what is AI, what is intelligence, what does the future look like, how does it affect our lives, what does it mean to be human, does AI have emotions, does it perceive art? And we also discuss the dangers that AI, the bots, and the real-time-assistance pose to the poker industry. Is the future of online poker at risk, and what can be done about it? Dr. Yampolskiy: Twitter - https://twitter.com/romanyam On Controllability of Artificial Intelligence https://philpapers.org/archive/YAMOCO.pdf Roman's Arxiv https://arxiv.org/search/cs?searchtype=author&query=Yampolskiy%2C+R+V Roman's Google Scholar Page - https://scholar.google.com/citations?user=0_Rq68cAAAAJ&hl=en Artificial Intelligence Safety and Security - https://amzn.to/31VSP2R Artificial Superintelligence: A Futuristic Approach - https://amzn.to/381iwm6 CONNECT: Subscribe to this channel: https://bit.ly/runchuks-yt Newsletter: https://www.runchukspodcast.com Twitter: https://twitter.com/RunchuksP Twitch: https://www.twitch.tv/runchukspoker Coaching: https://bit.ly/bts-coaching PODCAST INFO: Apple Podcasts: https://apple.co/2XlvTro Spotify: https://spoti.fi/2ECWIAF YouTube playlist: https://bit.ly/podcast-yt OUTLINE: 00:00:00 Intro 00:02:18 What is AI? 00:07:35 How AI is going to affect poker 00:13:11 What can poker sites do to combat AI and RTA 00:17:06 Online poker bots 00:22:53 Will online poker find a way? 00:29:42 AI affecting careers 00:31:52 Explainability 00:38:01 Human decision making is biased 00:41:17 Human-level intelligence 00:44:59 Art and Artificial Intelligence 00:49:32 Can humans understand a higher level of intelligence? 00:51:12 Recommendation algorithms 00:58:17 False beliefs 01:03:52 Dangers of recommendation algorithms 01:08:12 The point of no return 01:11:14 Do we know what we really want? 01:16:53 Removing humans from the equation 01:18:58 How far are we from the technological singularity 01:22:30 Optical illusions 01:24:41 Is poker a special game? 01:29:09 Can we mitigate the risks of AI 01:36:03 The meaning of our life 01:40:57 AI in politics 01:47:33 What's the best outcome? 01:50:21 Our fate is in the hands of a few scientists 01:52:37 Neuralink and other fascinating applications of AI 01:55:46 What can we learn from AI? 02:00:38 Roman Yampolskiy on social media

LFPL's At the Library Series
Artificial Intelligence: Risks + Responses with Dr. Roman Yampolskiy (rebroadcast)

LFPL's At the Library Series

Play Episode Listen Later Oct 13, 2020


Will artificial intelligence help or hinder society? What will scientists and engineers need to do to keep AI from causing harm? Many scientists have predicted that humanity will achieve Artificial General Intelligence within the next hundred years. After summarizing the arguments for why AGI may pose significant risk, UofL's Dr. Roman Yampolskiy will survey the field's proposed responses.

Artificial Intelligence and You
017 - Guest: Roman Yampolskiy, Professor of AI Safety, part 2

Artificial Intelligence and You

Play Episode Listen Later Oct 12, 2020 29:50


This and all episodes at: http://aiandyou.net/ . What does it look like to be on the front lines of academic research into making future AI safe? It looks like Roman Yampolskiy, professor at the University of Louisville, Kentucky, director of their Cyber Security lab and key contributor to the field of AI Safety. With over 100 papers and books on AI, Roman is recognized as an AI expert the world over. In this second part of our interview, we talk about his latest paper: a comprehensive analysis of the Control Problem, the central issue of AI safety: How do we ensure future AI remains under our control? We also discuss the current limitations of AI and how AI may evolve. Transcript and URLs referenced at HumanCusp Blog.

Artificial Intelligence and You
016 - Guest: Roman Yampolskiy, Professor of AI Safety

Artificial Intelligence and You

Play Episode Listen Later Oct 5, 2020 41:16


This and all episodes at: http://aiandyou.net/ . What does it look like to be on the front lines of academic research into making future AI safe? It looks like Roman Yampolskiy, professor at the University of Louisville, Kentucky, director of their Cyber Security lab and key contributor to the field of AI Safety. With over 100 papers and books on AI, Roman is recognized as an AI expert the world over. In this first part of our interview, we talk about his latest paper, a comprehensive analysis of the Control Problem, the central issue of AI safety: How do we ensure future AI remains under our control? All this and our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.

Artificial Intelligence in Industry with Daniel Faggella
[AI Futures] Artificing a Superintelligent Future - with Roman Yampolskiy of the University of Louisville (S1E6)

Artificial Intelligence in Industry with Daniel Faggella

Play Episode Listen Later Aug 1, 2020 36:07


Today I'm excited to welcome back long-time friend and alumnus of the podcast, Dr. Roman Yampolskiy. Roman is Director of the University of Louisville's Cybersecurity Lab. He has authored four books and many more academic publications about AI, cybersecurity, AI safety, and other areas of computer science. This discussion jumps off from ideas in his most recent book, "Artificial Superintelligence: a Futuristic Approach," and unpacks further-out AI futures, exploring possibilities and challenges pertaining to AI governance, artificial general intelligence, and more...

Develomentor
Dr. Roman Yampolskiy - Controlling Artificial Super-Intelligence #63

Develomentor

Play Episode Listen Later Jun 8, 2020 34:35


Welcome to another episode of Develomentor. Today's guest is Dr. Roman Yampolskiy. Dr. Yampolskiy's main areas of interest are behavioral biometrics, digital forensics, pattern recognition, genetic algorithms, neural networks, artificial super-intelligence and games. Dr. Yampolskiy is an author of over 100 publications including multiple journal articles and books. His research has been cited by numerous scientists and profiled in popular magazines both American and foreign (New Scientist, Poker Magazine, Science World Magazine), dozens of websites (BBC, MSNBC, Yahoo! News) and on radio (German National Radio, Alex Jones Show). Reports about his work have attracted international attention and have been translated into many languages including Czech, Danish, Dutch, French, German, Hungarian, Italian, Polish, Romanian, and Spanish.

If you are enjoying our content, click here to support us!

Episode Summary
“My interest is artificial intelligence. Specifically, I’m trying to understand how to control it and how to make sure it’s beneficial for everyone.”
“You’re not going to find some perfect role model. Take something good from a dozen people.”
“Whatever the product is, people will find ridiculous ways to abuse it. You need to see this ahead of time and get ready for it.”
“If a system is smarter than you, you cannot predict what it’s actually going to do. By definition. If you could, you’d be just as smart.” —Roman Yampolskiy

Key Milestones
- Dr. Yampolskiy was able to bypass getting a bachelor's degree and go straight to his Master's and PhD. How did he do this? And how was he able to save money on education?
- What is Dr. Yampolskiy's approach to deep work? How is he able to juggle teaching and his own research?
- Computing is advancing at an increasing pace; what are Dr. Yampolskiy's thoughts on artificial super-intelligence? Why is Dr. Yampolskiy working on finding failure modes for systems that don't even exist today?
- How has Dr. Yampolskiy been able to have multiple mentors and take away bits and pieces from each one of them?

Additional Resources
Artificial Superintelligence: A Futuristic Approach – by Dr. Roman Yampolskiy
Ep. 6 of Develomentor: Freedom – From Side Hustle to a Scalable Business, with Fred Stutzman
You can find more resources in the show notes.
To learn more about our podcast go to https://develomentor.com/
To listen to previous episodes go to https://develomentor.com/blog/

CONNECT WITH DR. ROMAN YAMPOLSKIY
LinkedIn
Twitter

Follow Develomentor:
Twitter: @develomentor

Follow Grant Ingersoll
Twitter: @gsingers
LinkedIn: linkedin.com/in/grantingersoll

Challenging #ParadigmX
Artificial Superintelligence, Safety and Security with Prof. Roman Yampolskiy

Challenging #ParadigmX

Play Episode Listen Later Apr 6, 2020 37:35


Roman Yampolskiy is a computer scientist and professor at the University of Louisville, specializing in artificial intelligence, security and safety. In this interview, we talk about whether artificial intelligence can create consciousness, what the simulation hypothesis is, whether we might actually live in a simulation, and whether we should treat AI like other threats such as the coronavirus.

Timestamps:
1:19 - Introduction of Prof. Roman Yampolskiy
2:13 - (Artificial) Intelligence and Consciousness
7:58 - When Will We Reach Artificial Superintelligence
8:59 - Which Discussions Should We Be Having About Artificial Intelligence
10:08 - The Effect of Quantum Computing on AI
11:17 - AI and Security
13:53 - AI and Blockchain and Blueprint for Skynet
17:03 - AI and Dystopian Scenarios
19:17 - What Needs to Happen so Decision Makers Start to Think About Limiting and Controlling AI
20:17 - Which Paradigms Need to Be Challenged in AI
27:11 - Solutions for the Future
31:28 - Utopian Solutions for the Future and the Simulation Hypothesis
34:49 - His Current Research
35:39 - Impact and Legacy

Roman Yampolskiy's Work and Links:
Website: http://cecs.louisville.edu/ry/
Facebook: https://www.facebook.com/roman.yampolskiy
Twitter: https://twitter.com/romanyam
Google Scholar: https://scholar.google.com/citations?user=0_Rq68cAAAAJ&hl=en

Books by Roman Yampolskiy:
https://www.amazon.com/Roman-V-Yampolskiy/e/B00DBE57XM/
Artificial Intelligence Safety & Security - https://www.amazon.com/Artificial-Intelligence-Security-Chapman-Robotics/dp/0815369824
Artificial Superintelligence: A Futuristic Approach - https://www.amazon.com/Artificial-Superintelligence-Futuristic-Roman-Yampolskiy/dp/1482234432

I love to hear your comments. Please let me know in the comments section what you thought of the interview. And it would mean the world to me if you hit the subscribe button ;)

Xerxes has an interdisciplinary background in social sciences, eastern & western psychology, mysticism, as well as strategy development. As a transcultural traveller between diverse worlds, he combines different thought schools, ideas and people to inspire and create new solutions for current and future challenges. He works as a futurist and speaker.

Website: https://xerxes.re/
TEDx Talk: https://xerxes.re/TEDx
Podcast: https://xerxes.re/Podcast
LinkedIn: https://xerxes.re/LinkedIn
Twitter: https://xerxes.re/Twitter
Facebook: https://xerxes.re/Facebook
Instagram: https://xerxes.re/Instagram
YouTube: https://xerxes.re/YouTube
Newsletter: https://xerxes.re/newsletter

Support the show (https://www.patreon.com/xerxesre)

Public Interest Podcast
Intelligent Design: Artificial Intelligence as a New Species, Roman Yampolskiy, Computer Science Professor

Public Interest Podcast

Play Episode Listen Later Feb 4, 2020


Professor Roman Yampolskiy, Founding Director of the University of Louisville's Cybersecurity Laboratory, explains how artificial intelligence (AI) will soon surpass humans as the dominant form of...

LFPL's At the Library Series
Artificial Intelligence: Risks + Responses with Dr. Roman Yampolskiy 12-03-19

LFPL's At the Library Series

Play Episode Listen Later Jan 2, 2020


Will artificial intelligence help or hinder society? What will scientists and engineers need to do to keep AI from causing harm? Many scientists have predicted that humanity will achieve Artificial General Intelligence within the next hundred years. After summarizing the arguments for why AGI may pose significant risk, UofL's Dr. Roman Yampolskiy will survey the field's proposed responses.

Fault Lines
As Trump is Impeached, Big Stories are Ignored

Fault Lines

Play Episode Listen Later Dec 19, 2019 168:25


On this episode of Fault Lines, hosts Garland Nixon and Lee Stranahan discuss the news that got no coverage due to the impeachment theater. Paul Manafort's New York charges were dropped and Horowitz testified on the FISA Report in the Senate.

Guests:
Alexander Mercouris - Editor-in-Chief at TheDuran.com | The Impeachment as a Geopolitical Inflection Point
James Corbett - Founder of the Corbett Report | Impeachment & America's Position in the Global Order
Roman Yampolskiy - AI Safety & Cybersecurity Researcher | What We Get Wrong About AI Safety
Talib Karim - Attorney and Executive Director of STEM4US | Impeachment

Yesterday the entire House voted along party lines to impeach President Trump. Hawaiian Representative Tulsi Gabbard, who is also running for the 2020 Democratic presidential nomination, was the only representative to vote "present" on both articles of impeachment. Editor-in-Chief at The Duran Alexander Mercouris comes back on the show to explain his stance on impeachment. Founder of the Corbett Report James Corbett joins the show to discuss America's position in the global order. Attorney Talib Karim outlines the next phase of the impeachment process.

Artificial intelligence allows computers to perform tasks that normally require human intelligence. This includes visual perception, decision-making, translation between languages and speech recognition. AI safety and cybersecurity researcher Roman Yampolskiy comes on the show for the first time to explain what we get wrong about AI safety.

The Jim Rutt Show
EP21 Roman Yampolskiy on the Outer Limits of AI

The Jim Rutt Show

Play Episode Listen Later Oct 28, 2019 71:59


AI expert Roman Yampolskiy & Jim have a wide-ranging talk about simulation theory, types of intelligence, AI research & safety, the singularity, and much more… This conversation with Jim and Dr. Roman V. Yampolskiy (author, tenured associate professor, and founding director of the Cyber Security Lab) starts by covering the vast variance of possible minds. They then go on …

Philosophical Disquisitions
#61 - Yampolskiy on Machine Consciousness and AI Welfare

Philosophical Disquisitions

Play Episode Listen Later Jun 20, 2019


In this episode I talk to Roman Yampolskiy. Roman is a Tenured Associate Professor in the department of Computer Engineering and Computer Science at the Speed School of Engineering, University of Louisville. He is the founding and current director of the Cyber Security Lab and an author of many books and papers on AI security and ethics, including Artificial Superintelligence: a Futuristic Approach. We talk about how you might test for machine consciousness and the first steps towards a science of AI welfare.

You can listen below or download here. You can also subscribe to the podcast on Apple, Stitcher and a variety of other podcasting services (the RSS feed is here).

Show Notes
0:00 - Introduction
2:30 - Artificial minds versus Artificial Intelligence
6:35 - Why talk about machine consciousness now when it seems far-fetched?
8:55 - What is phenomenal consciousness?
11:04 - Illusions as an insight into phenomenal consciousness
18:22 - How to create an illusion-based test for machine consciousness
23:58 - Challenges with operationalising the test
31:42 - Does AI already have a minimal form of consciousness?
34:08 - Objections to the proposed test and next steps
37:12 - Towards a science of AI welfare
40:30 - How do we currently test for animal and human welfare
44:10 - Dealing with the problem of deception
47:00 - How could we test for welfare in AI?
52:39 - If an AI can suffer, do we have a duty not to create it?
56:48 - Do people take these ideas seriously in computer science?
58:08 - What next?

Relevant Links
Roman's homepage
'Detecting Qualia in Natural and Artificial Agents' by Roman
'Towards AI Welfare Science and Policies' by Soenke Ziesche and Roman Yampolskiy
The Hard Problem of Consciousness
25 famous optical illusions
Could AI get depressed and have hallucinations?

Algocracy and Transhumanism Podcast
#61 – Yampolskiy on Machine Consciousness and AI Welfare

Algocracy and Transhumanism Podcast

Play Episode Listen Later Jun 20, 2019


In this episode I talk to Roman Yampolskiy. Roman is a Tenured Associate Professor in the department of Computer Engineering and Computer Science at the Speed School of Engineering, University of Louisville. He is the founding and current director of the Cyber Security Lab and an author of many books and papers on AI security …

Future of Life Institute Podcast
AI Breakthroughs and Challenges in 2018 with David Krueger and Roman Yampolskiy

Future of Life Institute Podcast

Play Episode Listen Later Jan 31, 2019 62:56


Every January, we like to look back over the past 12 months at the progress that’s been made in the world of artificial intelligence. Welcome to our annual “AI breakthroughs” podcast, 2018 edition. Ariel was joined for this retrospective by researchers Roman Yampolskiy and David Krueger. Roman is an AI Safety researcher and professor at the University of Louisville. He also recently published the book, Artificial Intelligence Safety & Security. David is a PhD candidate in the Mila lab at the University of Montreal, where he works on deep learning and AI safety. He's also worked with safety teams at the Future of Humanity Institute and DeepMind and has volunteered with 80,000 hours. Roman and David shared their lists of 2018’s most promising AI advances, as well as their thoughts on some major ethical questions and safety concerns. They also discussed media coverage of AI research, why talking about “breakthroughs” can be misleading, and why there may have been more progress in the past year than it seems.

MoxieTalk with Kirt Jacobs
MoxieTalk with Kirt Jacobs: #229 Roman Yampolskiy

MoxieTalk with Kirt Jacobs

Play Episode Listen Later Jan 6, 2019 3:10


Dr. Roman V. Yampolskiy is a Tenured Associate Professor in the Department of Computer Engineering and Computer Science at the University of Louisville. He is the founding and current director of the Cyber Security Lab and an author of many books including Artificial Superintelligence: a Futuristic Approach. Dr. Yampolskiy’s main areas of interest are Artificial Intelligence Safety and Cybersecurity.

SuperDataScience
SDS 193: A serious talk on AI taking over jobs

SuperDataScience

Play Episode Listen Later Sep 19, 2018 54:10


In this episode of the SuperDataScience Podcast, I chat with artificial intelligence expert Roman Yampolskiy. You will hear a discussion of artificial intelligence safety, how AI is going to quickly take over in the coming years and why we have to prioritize AI safety for safer machines, and get valuable advice on how to start a career in AI - for business owners, and for professionals aiming for high-end jobs. If you enjoyed this episode, check out show notes, resources, and more at www.superdatascience.com/193

Singularity.FM
Roman Yampolskiy on Artificial Intelligence Safety and Security

Singularity.FM

Play Episode Listen Later Aug 30, 2018 78:05


There are those of us who philosophize and debate the finer points surrounding the dangers of artificial intelligence. And then there are those who dare go in the trenches and get their hands dirty by doing the actual work that may just end up making the difference. So if AI turns out to be like […]

Global Brains Podcast
When Machines Surpass Human Intelligence - Roman Yampolskiy / Global Brains #3

Global Brains Podcast

Play Episode Listen Later Jul 27, 2018 22:10


What can happen when Artificial Intelligence surpasses Human Intelligence? What are the dangers of AI right now and in the future? What is the impact of AI on jobs and why do we need to redefine work? These are some of the questions we tried to answer talking to Dr. Roman Yampolskiy, a leading expert on AI Safety with more than 100 papers published in recent years. A mind-boggling episode.

Economics Detective Radio
Artificial Intelligence, Risk, and Alignment with Roman Yampolskiy

Economics Detective Radio

Play Episode Listen Later Jul 20, 2018 54:29


My guest today is Roman Yampolskiy, computer scientist and AI safety researcher. He is the author of multiple books, including Artificial Superintelligence: A Futuristic Approach. He is also the editor of the forthcoming volume Artificial Intelligence Safety and Security, featuring contributions from many leading AI safety researchers. We discuss the nature of AI risk, the state of the current research on the topic, and some of the more and less promising lines of research.

Future of Life Institute Podcast
AIAP: AI Safety, Possible Minds, and Simulated Worlds with Roman Yampolskiy

Future of Life Institute Podcast

Play Episode Listen Later Jul 16, 2018 82:30


What role does cyber security play in alignment and safety? What is AI completeness? What is the space of mind design and what does it tell us about AI safety? How does the possibility of machine qualia fit into this space? Can we leak-proof the singularity to ensure we are able to test AGI? And what is computational complexity theory anyway?

AI Safety, Possible Minds, and Simulated Worlds is the third podcast in the new AI Alignment series, hosted by Lucas Perry. For those of you that are new, this series will be covering and exploring the AI alignment problem across a large variety of domains, reflecting the fundamentally interdisciplinary nature of AI alignment. Broadly, we will be having discussions with technical and non-technical researchers across areas such as machine learning, AI safety, governance, coordination, ethics, philosophy, and psychology as they pertain to the project of creating beneficial AI. If this sounds interesting to you, we hope that you will join in the conversations by following us or subscribing to our podcasts on YouTube, SoundCloud, or your preferred podcast site/application.

In this podcast, Lucas spoke with Roman Yampolskiy, a Tenured Associate Professor in the department of Computer Engineering and Computer Science at the Speed School of Engineering, University of Louisville. Dr. Yampolskiy's main areas of interest are AI Safety, Artificial Intelligence, Behavioral Biometrics, Cybersecurity, Digital Forensics, Games, Genetic Algorithms, and Pattern Recognition. He is an author of over 100 publications including multiple journal articles and books.

Topics discussed in this episode include:
-Cyber security applications to AI safety
-Key concepts in Roman's papers and books
-Is AI alignment solvable?
-The control problem
-The ethics of and detecting qualia in machine intelligence
-Machine ethics and its role, or lack thereof, in AI safety
-Simulated worlds and whether detecting base reality is possible
-AI safety publicity strategy

Voices in AI
Episode 18: A Conversation with Roman Yampolskiy

Voices in AI

Play Episode Listen Later Nov 20, 2017 45:56


In this episode Byron and Roman discuss the future of jobs, Roman's new field of study, "Intellectology", consciousness and more. Episode 18: A Conversation with Roman Yampolskiy

The World Transformed
Nikola Danylov and Conversations with the Future

The World Transformed

Play Episode Listen Later Feb 9, 2017 34:00


Nikola Danaylov discusses his new book, Conversations with the Future. For generations, humanity stared at the vastness of the oceans and wondered, “What if?” Today, having explored the curves of the Earth, we now stare at endless stars and wonder, “What if?” Our technology has brought us to the make-or-break moment in human history. We can either grow complacent, and go extinct like the dinosaurs, or spread throughout the cosmos, as Carl Sagan dreamed of. For many years Nikola Danaylov has been interviewing the future and motivating people all over the world to embrace rather than fear it. "Conversations with the Future" was born from those interviews and Nik's unceasing need to explore “What if?” with some of the most forward-thinking visionaries in the world today.

About Our Guest: Nik Danaylov is a Keynote Speaker, Futurist, Strategic Adviser, popular Blogger and Podcast host. His podcast, Singularity.FM, has had over 4 million views on iTunes and YouTube and has been featured on international TV networks as well as some of the biggest blogs in the world, such as BBC, ArteTV, TV Japan, io9, the Huffington Post, ZDNet, BoingBoing and others. Today Singularity Weblog is the biggest independent blog on related topics. The Singularity.FM podcast is the first, most popular and widely recognized interview series in the niche and, according to Prof. Roman Yampolskiy, Nikola has established himself as the “Larry King of the Singularity.” WT 264-573

Bill Murphy's  RedZone Podcast | World Class IT Security
#065: AI Safety in Cyber Security | AI Decision Making | Wireheading | AI Chatbot Privacy - with Roman Yampolskiy

Bill Murphy's RedZone Podcast | World Class IT Security

Play Episode Listen Later Dec 19, 2016 46:36


My guest for the most recent episode was AI expert Roman Yampolskiy. While listening to our conversation, you will fine-tune your understanding of AI from a safety perspective. Those of you who have decision-making authority in the IT Security world will appreciate Roman's viewpoint on AI Safety.

Major Take-Aways From This Episode:
1) Wireheading, or mental illness in machines - misaligned objectives and incentives. For example, what happens when a sales rep is told to sign more new customers but ignores profits? Now you have more customers but less profit. Or you tell your reps to sell more products and possibly forsake the long-term relationship value of the customer. There are all sorts of misaligned incentives, and Roman makes this point with AIs.
2) I can even draw a parallel with coaching my girls' teams, where I have incented them to combine off each other because I want this type of behavior. This can also go against you, because you end up becoming really good at passing but not scoring goals to win.
3) AI decision making: the need for AIs to be able to explain themselves and how they arrived at their decisions.
4) The IT Security implications of AI chatbots and social engineering attacks.
5) The real danger of human-level AGI (Artificial General Intelligence).
6) How will we communicate with systems that are smarter than us? We already have a hard time communicating with dogs, for example; how will this work out with AIs and humans?
7) Why you can't wait to develop AI safety mechanisms until there is a problem... We should remember that seat belts were a good idea the day the first car was driven down the road, but weren't mandated till 60 years after...
8) The difference between AI safety and cybersecurity.

About Roman Yampolskiy
Dr. Roman V. Yampolskiy is a Tenured Associate Professor in the department of Computer Engineering and Computer Science at the Speed School of Engineering, University of Louisville. He is the founding and current director of the Cyber Security Lab and an author of many books including Artificial Superintelligence: a Futuristic Approach. During his tenure at UofL, Dr. Yampolskiy has been recognized as: Distinguished Teaching Professor, Professor of the Year, Faculty Favorite, Top 4 Faculty, Leader in Engineering Education, Top 10 of Online College Professor of the Year, with many other distinctions too numerous to mention.
Dr. Yampolskiy's main areas of interest are AI Safety, Artificial Intelligence, Behavioral Biometrics, Cybersecurity, Digital Forensics, Games, Genetic Algorithms, and Pattern Recognition. Dr. Yampolskiy is an author of over 100 publications including multiple journal articles and books. His research has been cited by 1000+ scientists and profiled in popular magazines both American and foreign (New Scientist, Poker Magazine, Science World Magazine) and dozens of websites (BBC, MSNBC, Yahoo! News). Dr. Yampolskiy's research has been featured 250+ times in numerous media reports in 22 languages.

Read full transcript here.

How to get in touch with Roman Yampolskiy: LinkedIn | Twitter | Facebook

Resources:
http://cecs.louisville.edu/ry/
J.B. Speed School of Engineering Profile

Books/Publications:
Artificial Superintelligence: A Futuristic Approach
Full List of Published Books

This episode is sponsored by the CIO Scoreboard, a powerful tool that helps you communicate the status of your IT Security program visually in just a few minutes.

Future Strategist
Interview Of Roman Yampolskiy

Future Strategist

Play Episode Listen Later Apr 4, 2016 35:59


A discussion of AI risks with Roman Yampolskiy, a computer scientist at the University of Louisville.

Adam Alonzi Podcast
Benevolent Superintelligence or Killer Robots? AI, AGI, and Data Science with Dr. Roman Yampolskiy

Adam Alonzi Podcast

Play Episode Listen Later Dec 8, 2015 28:04


His new book on Artificial Superintelligence - Amazon.

Biography of Dr. Roman V. Yampolskiy
Dr. Roman V. Yampolskiy is a Tenured Associate Professor in the department of Computer Engineering and Computer Science at the Speed School of Engineering, University of Louisville. He is the founding and current director of the Cyber Security Lab and an author of many books including Artificial Superintelligence: a Futuristic Approach. During his tenure at UofL, Dr. Yampolskiy has been recognized as: Distinguished Teaching Professor, Professor of the Year, Faculty Favorite, Top 4 Faculty, Leader in Engineering Education, Top 10 of Online College Professor of the Year, and Outstanding Early Career in Education award winner, among many other honors and distinctions. Yampolskiy is a Senior Member of IEEE and AGI, a member of the Kentucky Academy of Science, a Research Advisor for MIRI and an Associate of GCRI.

Roman Yampolskiy holds a PhD degree from the Department of Computer Science and Engineering at the University at Buffalo. He was a recipient of a four-year NSF (National Science Foundation) IGERT (Integrative Graduate Education and Research Traineeship) fellowship. Before beginning his doctoral studies, Dr. Yampolskiy received a BS/MS (High Honors) combined degree in Computer Science from the Rochester Institute of Technology, NY, USA. After completing his PhD dissertation, Dr. Yampolskiy held a position of Affiliate Academic at the Center for Advanced Spatial Analysis, University College London. He had previously conducted research at the Laboratory for Applied Computing (currently known as the Center for Advancing the Study of Infrastructure) at the Rochester Institute of Technology and at the Center for Unified Biometrics and Sensors at the University at Buffalo. Dr. Yampolskiy is an alumnus of Singularity University (GSP2012) and a Visiting Fellow of the Singularity Institute (Machine Intelligence Research Institute).

Dr. Yampolskiy's main areas of interest are AI Safety, Artificial Intelligence, Behavioral Biometrics, Cybersecurity, Digital Forensics, Games, Genetic Algorithms, and Pattern Recognition. Dr. Yampolskiy is an author of over 100 publications including multiple journal articles and books. His research has been cited by 1000+ scientists and profiled in popular magazines both American and foreign (New Scientist, Poker Magazine, Science World Magazine), dozens of websites (BBC, MSNBC, Yahoo! News), on radio (German National Radio, Swedish National Radio, Alex Jones Show) and TV. Dr. Yampolskiy's research has been featured 250+ times in numerous media reports in 22 languages.

Singularity.FM
Roman Yampolskiy on Artificial Superintelligence

Singularity.FM

Play Episode Listen Later Sep 6, 2015 60:33


There are those of us who philosophize and debate the finer points surrounding the dangers of artificial intelligence. And then there are those who dare go in the trenches and get their hands dirty by doing the actual work that may just end up making the difference. So if AI turns out to be like […]

Singularity.FM
Roman Yampolskiy: Every Technology Has Both Negative and Positive Effects!

Singularity.FM

Play Episode Listen Later Aug 14, 2013 65:21


Roman V. Yampolskiy is an Assistant Professor at the School of Engineering and Director at the Cybersecurity Lab of the University of Louisville. He is also an alumnus of Singularity University (GSP2012) and a visiting fellow of the Machine Intelligence Research Institute (MIRI). Dr. Yampolskiy is a well known researcher with a more holistic point of view, […]