Podcasts about the Machine Intelligence Research Institute

  • 34 PODCASTS
  • 55 EPISODES
  • 55m AVG DURATION
  • INFREQUENT EPISODES
  • LATEST: Jan 25, 2025

POPULARITY (2017-2024)


Best podcasts about the Machine Intelligence Research Institute

Latest podcast episodes about the Machine Intelligence Research Institute

Artificial Intelligence in Industry with Daniel Faggella
Understanding AGI Alignment Challenges and Solutions - with Eliezer Yudkowsky of the Machine Intelligence Research Institute

Jan 25, 2025 · 43:03


Today's episode is a special addition to our AI Futures series, featuring a sneak peek at an upcoming episode of our Trajectory podcast with guest Eliezer Yudkowsky, AI researcher, co-founder, and research fellow at the Machine Intelligence Research Institute. Eliezer joins Emerj CEO and Head of Research Daniel Faggella to discuss the governance challenges of increasingly powerful AI systems—and what it might take to ensure a safe and beneficial trajectory for humanity. If you've enjoyed or benefited from some of the insights of this episode, consider leaving us a five-star review on Apple Podcasts, and let us know what you learned, found helpful, or liked most about this show!

RNZ: Sunday Morning
Eliezer Yudkowsky: The AI academic warning

Mar 16, 2024 · 23:35


Eliezer Yudkowsky, artificial intelligence researcher, decision theorist and co-founder of the Machine Intelligence Research Institute, has a stark warning that we're moving too fast in the field of AI.

London Futurists
Provably safe AGI, with Steve Omohundro

Feb 13, 2024 · 42:59


AI systems have become more powerful in the last few years, and are expected to become even more powerful in the years ahead. The question naturally arises: what, if anything, should humanity be doing to increase the likelihood that these forthcoming powerful systems will be safe, rather than destructive? Our guest in this episode has a long and distinguished history of analysing that question, and he has some new proposals to share with us. He is Steve Omohundro, the CEO of Beneficial AI Research, an organisation which is working to ensure that artificial intelligence is safe and beneficial for humanity. Steve has degrees in Physics and Mathematics from Stanford and a Ph.D. in Physics from U.C. Berkeley. He went on to be an award-winning computer science professor at the University of Illinois. At that time, he developed the notion of basic AI drives, which we talk about shortly, as well as a number of potential key AI safety mechanisms. Among many other roles which are too numerous to mention here, Steve served as a Research Scientist at Meta, the parent company of Facebook, where he worked on generative models and AI-based simulation, and he is an advisor to MIRI, the Machine Intelligence Research Institute.
Selected follow-ups:
  • Steve Omohundro: Innovative ideas for a better world
  • Metaculus forecast for the date of weak AGI
  • "The Basic AI Drives" (PDF, 2008)
  • TED Talk by Max Tegmark: How to Keep AI Under Control
  • Apple Secure Enclave
  • Meta Research: Teaching AI advanced mathematical reasoning
  • DeepMind AlphaGeometry
  • Microsoft Lean theorem prover
  • Terence Tao (Wikipedia)
  • NeurIPS Tutorial on Machine Learning for Theorem Proving (2023)
  • The team at MIRI
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

POLITICO Dispatch
'Humans are not some peak of cognitive ability': The existential risks of AI

Dec 12, 2023 · 18:18


Malo Bourgon doesn't know exactly what existential threats AI poses, but he says we should be preparing for them anyway. Bourgon is the CEO of the Machine Intelligence Research Institute and among those who fear AI could go deeply awry if it falls into the wrong hands. On today's show, Bourgon tells host Steven Overly what tech restrictions he wants to see in place.

FT Tech Tonic
Superintelligent AI: The Doomers

Nov 14, 2023 · 28:47


In the first episode of a new, five-part series of Tech Tonic, FT journalists Madhumita Murgia and John Thornhill ask how close we are to building human-level artificial intelligence and whether 'superintelligent' AI poses an existential risk to humanity. John and Madhu speak to Yoshua Bengio, a pioneer of generative AI, who is concerned, and to his colleague Yann LeCun, now head of AI at Meta, who isn't. Plus, they hear from Eliezer Yudkowsky, research lead at the Machine Intelligence Research Institute, who's been sounding the alarm about superintelligent AI for more than two decades. Register here for the FT's Future of AI summit on November 15-16.
Free links to read more on this topic:
  • How Sunak's Bletchley Park summit aims to shape global AI safety
  • OpenAI chief seeks new Microsoft funds to build 'superintelligence'
  • We must slow down the race to God-like AI
  • The sceptical case on generative AI
  • AI will never threaten humans, says top AI scientist
Tech Tonic is presented by Madhumita Murgia and John Thornhill. Senior producer is Edwin Lane and the producer is Josh Gabert-Doyon. Executive producer is Manuela Saragosa. Sound design by Breen Turner and Samantha Giovinco. Original music by Metaphor Music. The FT's head of audio is Cheryl Brumley. Read a transcript of this episode on FT.com. Hosted on Acast. See acast.com/privacy for more information.

Artificial Intelligence and You
178 - Guest: Jaan Tallinn, AI Existential Risk Philanthropist, part 1

Nov 13, 2023 · 33:59


This and all episodes at: https://aiandyou.net/. The attention of the world to the potential impact of AI owes a huge debt to my guest Jaan Tallinn. He was one of the founding developers of Skype and the file-sharing application Kazaa, and that alone makes him noteworthy to most of the world. But he leveraged his billionaire status conferred by that success to pursue a goal uncommon among technology entrepreneurs: reducing existential risk. In other words, saving the human race from possible extinction through our own foolhardiness or fate. He has co-founded and funded the Centre for the Study of Existential Risk, in Cambridge, England, and the Future of Life Institute, in Cambridge, Massachusetts. He's also a member of the board of sponsors of the Bulletin of the Atomic Scientists, and a key funder of the Machine Intelligence Research Institute. In this first part, we talk about the problems with current AI frontier models, Jaan's reaction to GPT-4, the letter calling for a pause in AI training, Jaan's motivations in starting CSER and FLI, how individuals and governments should react to AI risk, and Jaan's idea for how to enforce constraints on AI development. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.

The Foresight Institute Podcast
Jan Leike | Superintelligent Alignment

Nov 3, 2023 · 9:57


Jan Leike is a leading voice in AI alignment, formerly a Research Scientist at DeepMind, with affiliations at the Future of Humanity Institute and the Machine Intelligence Research Institute. At OpenAI, he co-leads the Superalignment Team, contributing to AI advancements such as InstructGPT and ChatGPT. Holding a PhD from the Australian National University, Jan's work focuses on ensuring AI alignment.
Key highlights:
  • The launch of OpenAI's Superalignment team, targeting the alignment of superintelligence in four years.
  • The aim to automate alignment research, currently leveraging 20% of OpenAI's computational power.
  • How traditional reinforcement learning from human feedback may fall short in scaling language model alignment.
  • Why there is a focus on scalable oversight, generalization, automated interpretability, and adversarial testing to ensure alignment reliability.
  • Experimentation with intentionally misaligned models to evaluate alignment strategies.
Dive deeper into the session: Full Summary
About Foresight Institute: Foresight Institute is a research organization and non-profit that supports the beneficial development of high-impact technologies. Since our founding in 1987 on a vision of guiding powerful technologies, we have continued to evolve into a many-armed organization that focuses on several fields of science and technology that are too ambitious for legacy institutions to support.
Allison Duettmann: The President and CEO of Foresight Institute, Allison Duettmann directs the Intelligent Cooperation, Molecular Machines, Biotech & Health Extension, Neurotech, and Space Programs, alongside Fellowships, Prizes, and Tech Trees. She has also been pivotal in co-initiating the Longevity Prize, pioneering initiatives like Existentialhope.com, and contributing to notable works like "Superintelligence: Coordination & Strategy" and "Gaming the Future".
Get involved with Foresight:
  • Apply: Virtual Salons & in-person Workshops
  • Donate: Support Our Work – If you enjoy what we do, please consider this, as we are entirely funded by your donations!
  • Follow us: Twitter | Facebook | LinkedIn
Note: Explore every word spoken on this podcast through Fathom.fm, an innovative podcast search engine. Hosted on Acast. See acast.com/privacy for more information.

Conversations With Coleman
Will AI Destroy Us? - AI Virtual Roundtable

Jul 28, 2023 · 95:34


Today's episode is a roundtable discussion about AI safety with Eliezer Yudkowsky, Gary Marcus, and Scott Aaronson. Eliezer Yudkowsky is a prominent AI researcher and writer known for co-founding the Machine Intelligence Research Institute, where he spearheaded research on AI safety. He's also widely recognized for his influential writings on the topic of rationality. Scott Aaronson is a theoretical computer scientist and author, celebrated for his pioneering work in the field of quantum computation. He also holds a chair in computer science at UT Austin, but is currently taking a leave of absence to work at OpenAI. Gary Marcus is a cognitive scientist, author, and entrepreneur known for his work at the intersection of psychology, linguistics, and AI. He's also authored several books, including "Kluge" and "Rebooting AI: Building Artificial Intelligence We Can Trust". This episode is all about AI safety. We talk about the alignment problem. We talk about the possibility of human extinction due to AI. We talk about what intelligence actually is. We talk about the notion of a singularity or an AI takeoff event and much more. It was really great to get these three guys in the same virtual room and I think you'll find that this conversation brings something a bit fresh to a topic that has admittedly been beaten to death on certain corners of the internet. Learn more about your ad choices. Visit megaphone.fm/adchoices

Conversations With Coleman
Will AI Destroy Us? - AI Virtual Roundtable

Jul 28, 2023 · 91:04


Today's episode is a roundtable discussion about AI safety with Eliezer Yudkowsky, Gary Marcus, and Scott Aaronson. Eliezer Yudkowsky is a prominent AI researcher and writer known for co-founding the Machine Intelligence Research Institute, where he spearheaded research on AI safety. He's also widely recognized for his influential writings on the topic of rationality. Scott Aaronson is a theoretical computer scientist and author, celebrated for his pioneering work in the field of quantum computation. He also holds a chair in computer science at UT Austin, but is currently taking a leave of absence to work at OpenAI. Gary Marcus is a cognitive scientist, author, and entrepreneur known for his work at the intersection of psychology, linguistics, and AI. He's also authored several books, including "Kluge" and "Rebooting AI: Building Artificial Intelligence We Can Trust". This episode is all about AI safety. We talk about the alignment problem. We talk about the possibility of human extinction due to AI. We talk about what intelligence actually is. We talk about the notion of a singularity or an AI takeoff event and much more. It was really great to get these three guys in the same virtual room and I think you'll find that this conversation brings something a bit fresh to a topic that has admittedly been beaten to death on certain corners of the internet.

Hold These Truths with Dan Crenshaw
Can We Stop the AI Apocalypse? | Eliezer Yudkowsky

Jul 13, 2023 · 61:06


Artificial Intelligence (AI) researcher Eliezer Yudkowsky makes the case for why we should view AI as an existential threat to humanity. Rep. Crenshaw gets into the basics of AI and how the new AI program, GPT-4, is a revolutionary leap forward in the tech. Eliezer hypothesizes the most likely scenarios if AI becomes self-aware and unconstrained – from rogue programs that blackmail targets to self-replicating nano robots. They discuss building global coalitions to rein in AI development and how China views AI. And they explore first steps Congress could take to limit AI's capabilities for harm while still enabling its promising advances in research and development. Eliezer Yudkowsky is a co-founder and research fellow at the Machine Intelligence Research Institute, a private research nonprofit based in Berkeley, California. Follow him on Twitter @ESYudkowsky

The Nonlinear Library
LW - TED talk by Eliezer Yudkowsky: Unleashing the Power of Artificial Intelligence by bayesed

May 7, 2023 · 1:16


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: TED talk by Eliezer Yudkowsky: Unleashing the Power of Artificial Intelligence, published by bayesed on May 7, 2023 on LessWrong. This is a TED talk published early by a random TEDx channel. Tweet by EY: Looks like my Sudden Unexpected TED Talk got posted early by a TEDx account. YouTube description: Eliezer Yudkowsky is a foundational thinker on the long-term future of artificial intelligence. With more than 20 years of experience in the world of AI, Eliezer Yudkowsky is the founder and senior research fellow of the Machine Intelligence Research Institute, an organization dedicated to ensuring smarter-than-human AI has a positive impact on the world. His writings, both fiction and nonfiction, frequently warn of the dangers of unchecked AI and its philosophical significance in today's world. Yudkowsky is the founder of LessWrong, an online forum and community dedicated to improving human reasoning and decision-making, and the coinventor of the "functional decision theory," which states that decisions should be the output of a fixed mathematical function answering the question: "Which output of this very function would yield the best outcome?" Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

The Nonlinear Library: LessWrong
LW - TED talk by Eliezer Yudkowsky: Unleashing the Power of Artificial Intelligence by bayesed

May 7, 2023 · 1:16


Link to original articleWelcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: TED talk by Eliezer Yudkowsky: Unleashing the Power of Artificial Intelligence, published by bayesed on May 7, 2023 on LessWrong. This is a TED talk published early by a random TEDx channel. Tweet by EY: Looks like my Sudden Unexpected TED Talk got posted early by a TEDx account. YouTube description: Eliezer Yudkowsky is a foundational thinker on the long-term future of artificial intelligence. With more than 20 years of experience in the world of AI, Eliezer Yudkowsky is the founder and senior research fellow of the Machine Intelligence Research Institute, an organization dedicated to ensuring smarter-than-human AI has a positive impact on the world. His writings, both fiction and nonfiction, frequently warn of the dangers of unchecked AI and its philosophical significance in today's world. Yudkowsky is the founder of LessWrong, an online forum and community dedicated to improving human reasoning and decision-making, and the coinventor of the "functional decision theory," which states that decisions should be the output of a fixed mathematical function answering the question: "Which output of this very function would yield the best outcome?" Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

The FOX News Rundown
Extra: Why A Renowned AI Expert Says We May Be "Headed For A Catastrophe"

May 6, 2023 · 23:59


Artificial intelligence may be the technology of the future, but could it bring us more harm than good? That is what some AI researchers and tech experts fear, and that is why more than 1,000 of them signed a letter earlier this year urging a ‘pause' on some AI development. Eliezer Yudkowsky, a decision theorist at the Machine Intelligence Research Institute who has studied AI for more than 20 years, thinks that letter does not go far enough. He says AI could actually lead to the end of humanity, and we need international agreements now to prevent the technology from advancing too far. He even believes we may need a ‘shooting war' to stop a country that refuses to comply. Yudkowsky recently spoke with FOX News Rundown's Jessica Rosenthal about why he is trying to raise the alarm over AI and why action is needed now. He offered some blunt warnings and compared AI's threat to that of nuclear war. Due to time limitations, we could not include the conversation in our weekday editions of the Fox News Rundown. In a FOX News Rundown Extra exclusive, you will hear our entire unedited interview with AI expert Eliezer Yudkowsky. Learn more about your ad choices. Visit megaphone.fm/adchoices

From Washington – FOX News Radio
Extra: Why A Renowned AI Expert Says We May Be "Headed For A Catastrophe"

May 6, 2023 · 23:59


Artificial intelligence may be the technology of the future, but could it bring us more harm than good? That is what some AI researchers and tech experts fear, and that is why more than 1,000 of them signed a letter earlier this year urging a ‘pause' on some AI development. Eliezer Yudkowsky, a decision theorist at the Machine Intelligence Research Institute who has studied AI for more than 20 years, thinks that letter does not go far enough. He says AI could actually lead to the end of humanity, and we need international agreements now to prevent the technology from advancing too far. He even believes we may need a ‘shooting war' to stop a country that refuses to comply. Yudkowsky recently spoke with FOX News Rundown's Jessica Rosenthal about why he is trying to raise the alarm over AI and why action is needed now. He offered some blunt warnings and compared AI's threat to that of nuclear war. Due to time limitations, we could not include the conversation in our weekday editions of the Fox News Rundown. In a FOX News Rundown Extra exclusive, you will hear our entire unedited interview with AI expert Eliezer Yudkowsky. Learn more about your ad choices. Visit megaphone.fm/adchoices

Fox News Rundown Evening Edition
Extra: Why A Renowned AI Expert Says We May Be "Headed For A Catastrophe"

May 6, 2023 · 23:59


Artificial intelligence may be the technology of the future, but could it bring us more harm than good? That is what some AI researchers and tech experts fear, and that is why more than 1,000 of them signed a letter earlier this year urging a ‘pause' on some AI development. Eliezer Yudkowsky, a decision theorist at the Machine Intelligence Research Institute who has studied AI for more than 20 years, thinks that letter does not go far enough. He says AI could actually lead to the end of humanity, and we need international agreements now to prevent the technology from advancing too far. He even believes we may need a ‘shooting war' to stop a country that refuses to comply. Yudkowsky recently spoke with FOX News Rundown's Jessica Rosenthal about why he is trying to raise the alarm over AI and why action is needed now. He offered some blunt warnings and compared AI's threat to that of nuclear war. Due to time limitations, we could not include the conversation in our weekday editions of the Fox News Rundown. In a FOX News Rundown Extra exclusive, you will hear our entire unedited interview with AI expert Eliezer Yudkowsky. Learn more about your ad choices. Visit megaphone.fm/adchoices

Lexman Artificial
guests Ben Goertzel and Greg Stockman talk about rehearsers and ecads

Oct 28, 2022 · 5:45


Ben Goertzel (@bengoertzel) is a philosopher, AI researcher, and the CEO of the Machine Intelligence Research Institute. In this episode, he talks about rehearsers and ecads, and gives an intriguing discussion on the potential for barbitone in fornixes.

The Nonlinear Library
EA - Book a chat with an EA professional by Clifford

Jul 19, 2022 · 3:19


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Book a chat with an EA professional, published by Clifford on July 19, 2022 on The Effective Altruism Forum. People might underestimate how willing others are to give advice or soundboard ideas. I made a quick list of people who told me they'd be happy to chat - some have more availability than others. If you'd like to speak to one of the people below, please book a call at this link and I'll try to connect you.
Caveats: I'm writing this post as an experiment. If there's lots of interest and it's a good use of everyone's time, I might make this into a more developed product. While it's exciting that these people are willing to give their time, the supply is still quite limited - nothing here should be taken as an obligation on their part to talk to anyone in particular, nor a guarantee that they'll still have free slots at any point in the future. But if you are interested, use the forms below to request a call. Note: I picked people I happened to have spoken to recently: this is not a hand-picked, exclusive club. If you're also happy to speak to people, then feel free to post a comment below following the same format (similar to the “Who's hiring” thread).
People:
  • Eirin Evjen – Operations associate at Forethought Foundation. Previously ran EA Norway. LinkedIn. Ask about: careers in operations; running an EA national group. Request a call.
  • Eli Rose – EA community-building grantmaking and projects at Open Phil. LinkedIn. Talk about: EA 101 questions; in-depth questions about longtermist cause areas. Request a call.
  • Elika Somani – Deputy Head of Events at Atlas Fellowship, incoming Bioethics and Biosecurity Research Fellow at the National Institutes of Health, EA Programme Facilitator and Community Builder. LinkedIn. Chat about: EA 101 questions and being a welcoming face to the community :); careers (or interest) in public health, disease control, and biosecurity; careers (or interest) in operations and events; community building!! There are no dumb questions, I'm always happy to meet new people and talk about EA! Request a call.
  • Evan Hubinger – Research Fellow at the Machine Intelligence Research Institute. LinkedIn. Chat about: AI alignment careers. Request a call.
  • John Halstead – Research Fellow at the Forethought Foundation. Formerly Head of Applied Research at Founders Pledge and researcher at Centre for Effective Altruism, DPhil in political philosophy from Oxford. LinkedIn. Ask about: careers in research; climate change. Request a call.
  • Sarah Cheng – Software engineer at Centre for Effective Altruism and Intro to EA facilitator. LinkedIn. Ask about: basic questions in EA; software engineering careers. Request a call.
  • Seren Kell – Science and Technology Manager at Good Food Institute. LinkedIn. Ask about: plant-based and cultivated meat science; biochemistry; Europe's sustainable protein research ecosystem; research funding. Request a call.
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

The Nonlinear Library
LW - I applied for a MIRI job in 2020. Here's what happened next. by ViktoriaMalyasova

Jun 16, 2022 · 11:46


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: I applied for a MIRI job in 2020. Here's what happened next., published by ViktoriaMalyasova on June 15, 2022 on LessWrong. I recently saw Rob Bensinger asking what MIRI could do to improve hiring. I tried to get a job at MIRI in February 2020 and I have some feedback to give. I am sorry, because I am about to say some harsh things, but I think they might be important for you to hear, and my previous more low-key attempts to point out problems had little to no effect. tl,dr: MIRI promised to contact me once the pandemic no longer prevents them from hiring people. They never did. What happened I applied for a Software Engineer role at MIRI on February 26th. I hoped to end up as a researcher eventually, but this position seemed like the easiest way in for a foreigner. The first stage was a multiple choice test from Triplebyte. I took it on March 2nd. After a week, on the 9th of March (why does it take a week to grade a multiple choice quiz?!), I got the following letter from Triplebyte: Hi Viktoriya, Triplebyte partners with Machine Intelligence Research Institute to help candidates that go through their process find the job that's right for them. If the opportunity with Machine Intelligence Research Institute is not the right fit, we believe we can help you find a great fit somewhere else. Seeing as you did particularly well on Machine Intelligence Research Institute's assessment, we are going to fast-track you to the final stage of our application process! All that is left is our technical interview. If you pass, you will be matched with top tech companies and fast-tracked to final round interviews. [...] This gave me the impression that I passed the quiz, so I was surprised to get a rejection letter from Buck Shlegeris on the same day. The letter left me with a way forward: Buck sent me a list of ways to impress him and get back into the interview. The tasks looked about a week of work each. I wondered if doing one is worth the opportunity cost. I lived in Russia at the time. Russia was fighting two wars, in Syria and Ukraine. I think it's fair to estimate that all marginal tax payments were spent on war. It didn't cost much to hire an additional soldier to fight in Ukraine. There's a word "15-тысячники" for a mercenary hired to fight for DNR for 15000 rubles a month. I easily paid that sum monthly in taxes. Staying in Russia was costing human lives. Once I stopped trying to overthrow the government the normal way, there was no reason to stay. Wasn't it better to spend the time applying to other US companies then, to increase the chances of landing a job before the H-1B application deadline? I thought about it and decided to go for MIRI. I didn't want to spend time on companies that weren't doing alignment research. I was reasonably confident I'd pass the interview if I was able to get in. (I think this was justified. I interviewed for a FAANG company last year and passed.) So I solved a task and sent it to Buck while reminding about the H-1B application deadline. He said he'd get back to me about further steps. The H-1B application deadline passed. There was no reply. I later discovered that MIRI can actually apply for H-1B all year round. Well, at least that was true 6 years ago. Buck never told me about this. If I knew, I could have applied for another job. 
The ML living library position was still open (at least that's what the job page said), and I'd been working in ML for 2 years. Two months later I got a letter from Buck. He said that MIRI's application process is messed up by COVID, and that he put me on the list of people to contact when things are starting up again. I asked Buck if MIRI does green card sponsorship and he said he isn't sure how all of that works. I asked who I should contact to find out, and got no reply. This is weird, how can an interviewe...

The Nonlinear Library: LessWrong
LW - I applied for a MIRI job in 2020. Here's what happened next. by ViktoriaMalyasova

Jun 16, 2022 · 11:46


Link to original articleWelcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: I applied for a MIRI job in 2020. Here's what happened next., published by ViktoriaMalyasova on June 15, 2022 on LessWrong. I recently saw Rob Bensinger asking what MIRI could do to improve hiring. I tried to get a job at MIRI in February 2020 and I have some feedback to give. I am sorry, because I am about to say some harsh things, but I think they might be important for you to hear, and my previous more low-key attempts to point out problems had little to no effect. tl,dr: MIRI promised to contact me once the pandemic no longer prevents them from hiring people. They never did. What happened I applied for a Software Engineer role at MIRI on February 26th. I hoped to end up as a researcher eventually, but this position seemed like the easiest way in for a foreigner. The first stage was a multiple choice test from Triplebyte. I took it on March 2nd. After a week, on the 9th of March (why does it take a week to grade a multiple choice quiz?!), I got the following letter from Triplebyte: Hi Viktoriya, Triplebyte partners with Machine Intelligence Research Institute to help candidates that go through their process find the job that's right for them. If the opportunity with Machine Intelligence Research Institute is not the right fit, we believe we can help you find a great fit somewhere else. Seeing as you did particularly well on Machine Intelligence Research Institute's assessment, we are going to fast-track you to the final stage of our application process! All that is left is our technical interview. If you pass, you will be matched with top tech companies and fast-tracked to final round interviews. [...] This gave me the impression that I passed the quiz, so I was surprised to get a rejection letter from Buck Shlegeris on the same day. The letter left me with a way forward: Buck sent me a list of ways to impress him and get back into the interview. The tasks looked about a week of work each. I wondered if doing one is worth the opportunity cost. I lived in Russia at the time. Russia was fighting two wars, in Syria and Ukraine. I think it's fair to estimate that all marginal tax payments were spent on war. It didn't cost much to hire an additional soldier to fight in Ukraine. There's a word "15-тысячники" for a mercenary hired to fight for DNR for 15000 rubles a month. I easily paid that sum monthly in taxes. Staying in Russia was costing human lives. Once I stopped trying to overthrow the government the normal way, there was no reason to stay. Wasn't it better to spend the time applying to other US companies then, to increase the chances of landing a job before the H-1B application deadline? I thought about it and decided to go for MIRI. I didn't want to spend time on companies that weren't doing alignment research. I was reasonably confident I'd pass the interview if I was able to get in. (I think this was justified. I interviewed for a FAANG company last year and passed.) So I solved a task and sent it to Buck while reminding about the H-1B application deadline. He said he'd get back to me about further steps. The H-1B application deadline passed. There was no reply. I later discovered that MIRI can actually apply for H-1B all year round. Well, at least that was true 6 years ago. Buck never told me about this. If I knew, I could have applied for another job. 
The ML living library position was still open (at least that's what the job page said), and I'd been working in ML for 2 years. Two months later I got a letter from Buck. He said that MIRI's application process is messed up by COVID, and that he put me on the list of people to contact when things are starting up again. I asked Buck if MIRI does green card sponsorship and he said he isn't sure how all of that works. I asked who I should contact to find out, and got no reply. This is weird, how can an interviewe...

Clearer Thinking with Spencer Greenberg
Taking pleasure in being wrong (with Buck Shlegeris)

Jun 8, 2022 · 76:10


How hard is it to arrive at true beliefs about the world? How can you find enjoyment in being wrong? When presenting claims that will be scrutinized by others, is it better to hedge and pad the claims in lots of caveats and uncertainty, or to strive for a tone that matches (or perhaps even exaggerates) the intensity with which you hold your beliefs? Why should you maybe focus on drilling small skills when learning a new skill set? What counts as a "simple" question? How can you tell when you actually understand something and when you don't? What is "cargo culting"? Which features of AI are likely in the future to become existential threats? What are the hardest parts of AI research? What skills will we probably really wish we had on the eve of deploying superintelligent AIs? Buck Shlegeris is the CTO of Redwood Research, an independent AI alignment research organization. He currently leads their interpretability research. He previously worked on research and outreach at the Machine Intelligence Research Institute. His website is shlegeris.com.

The Nonlinear Library
AF - AXRP Episode 14 - Infra-Bayesian Physicalism with Vanessa Kosoy by DanielFilan

Apr 5, 2022 · 81:06


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AXRP Episode 14 - Infra-Bayesian Physicalism with Vanessa Kosoy, published by DanielFilan on April 5, 2022 on The AI Alignment Forum. Google Podcasts link This podcast is called AXRP, pronounced axe-urp and short for the AI X-risk Research Podcast. Here, I (Daniel Filan) have conversations with researchers about their research. We discuss their work and hopefully get a sense of why it's been written and how it might reduce the risk of artificial intelligence causing an existential catastrophe: that is, permanently and drastically curtailing humanity's future potential. Late last year, Vanessa Kosoy and Alexander Appel published some research under the heading of “Infra-Bayesian physicalism”. But wait - what was infra-Bayesianism again? Why should we care? And what does any of this have to do with physicalism? In this episode, I talk with Vanessa Kosoy about these questions, and get a technical overview of how infra-Bayesian physicalism works and what its implications are. Topics we discuss: The basics of infra-Bayes An invitation to infra-Bayes What is naturalized induction? How infra-Bayesian physicalism helps with naturalized induction Bridge rules Logical uncertainty Open source game theory Logical counterfactuals Self-improvement How infra-Bayesian physicalism works World models Priors Counterfactuals Anthropics Loss functions The monotonicity principle How to care about various things Decision theory Follow-up research Infra-Bayesian physicalist quantum mechanics Infra-Bayesian physicalist agreement theorems The production of infra-Bayesianism research Bridge rules and malign priors Following Vanessa's work Daniel Filan: Hello everybody. Today, I'm going to be talking with Vanessa Kosoy. She is a research associate at the Machine Intelligence Research Institute, and she's worked for over 15 years in software engineering. About seven years ago, she started AI alignment research, and is now doing that full-time. Back in episode five, she was on the show to talk about a sequence of posts introducing Infra-Bayesianism. But today, we're going to be talking about her recent post, Infra-Bayesian Physicalism: a Formal Theory of Naturalized Induction, co-authored with Alex Appel. For links to what we're discussing, you can check the description of this episode, and you can read the transcript at axrp.net. Vanessa, welcome to AXRP. Vanessa Kosoy: Thank you for inviting me. The basics of infra-Bayes Daniel Filan: Cool. So, this episode is about Infra-Bayesian physicalism. Can you remind us of the basics of just what Infra-Bayesianism is? Vanessa Kosoy: Yes. Infra-Bayesianism is a theory we came up with to solve the problem of non-realizability, which is how to do theoretical analysis of reinforcement learning algorithms in situations where you cannot assume that the environment is in your hypothesis class, which is something that has not been studied much in the literature for reinforcement learning specifically. And the way we approach this is by bringing in concepts from so-called imprecise probability theory, which is something that's mostly decision theorists and economists have been using. And the basic idea is, instead of thinking of a probability distribution, you could be working with a convex set of probability distributions. That's what's called a credal set in imprecise probability theory. 
And then, when you are making decisions, instead of just maximizing the expected value of your utility function, with respect to some probability distribution, you are maximizing the minimal expected value where you minimize over the set. That's as if you imagine an adversary is selecting some distribution out of the set. Vanessa Kosoy: The nice thing about it is that you can start with this basic idea, and on the one hand, construct an entire theory analogous to classical pro...
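To make the maximin rule Vanessa describes concrete, here is a minimal illustrative sketch in Python. It is not code from the episode or from MIRI: the two-distribution credal set, the utility table, and the maximin_action helper are all invented for illustration. It picks the action whose worst-case expected utility over the credal set is highest, which matches the "adversary selects a distribution from the set" reading above.

```python
import numpy as np

# Hypothetical finite "credal set": two distributions over two outcomes.
# Infra-Bayesianism works with convex sets of distributions; a few extreme
# points are enough to illustrate the maximin decision rule.
credal_set = [
    np.array([0.7, 0.3]),
    np.array([0.4, 0.6]),
]

# utility[action, outcome]: made-up payoffs for two candidate actions.
utility = np.array([
    [1.0, 0.0],  # action 0: great in outcome 0, useless in outcome 1
    [0.6, 0.5],  # action 1: decent in both outcomes
])

def maximin_action(utility, credal_set):
    """Return the action maximizing worst-case expected utility over the set."""
    worst_case = [min(float(p @ u) for p in credal_set) for u in utility]
    return int(np.argmax(worst_case)), worst_case

best, scores = maximin_action(utility, credal_set)
print(best, scores)  # -> 1, [0.4, 0.54]: action 1's worst case beats action 0's
```

Under this rule the robustly decent action wins even though the other action has a higher expected utility against one particular distribution, which is the point of minimizing over the whole set rather than trusting a single prior.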

The Nonlinear Library
AF - Arguments about Highly Reliable Agent Designs as a Useful Path to Artificial Intelligence Safety by Issa Rice

Jan 27, 2022 · 1:54


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Arguments about Highly Reliable Agent Designs as a Useful Path to Artificial Intelligence Safety, published by Issa Rice on January 27, 2022 on The AI Alignment Forum. This paper is a revised and expanded version of my blog post Plausible cases for HRAD work, and locating the crux in the "realism about rationality" debate, now with David Manheim as co-author. Abstract: Several different approaches exist for ensuring the safety of future Transformative Artificial Intelligence (TAI) or Artificial Superintelligence (ASI) systems, and proponents of different approaches have made different and debated claims about the importance or usefulness of their work in the near term, and for future systems. Highly Reliable Agent Designs (HRAD) is one of the most controversial and ambitious approaches, championed by the Machine Intelligence Research Institute, among others, and various arguments have been made about whether and how it reduces risks from future AI systems. In order to reduce confusion in the debate about AI safety, here we build on a previous discussion by Rice which collects and presents four central arguments which are used to justify HRAD as a path towards safety of AI systems. We have titled the arguments (1) incidental utility,(2) deconfusion, (3) precise specification, and (4) prediction. Each of these makes different, partly conflicting claims about how future AI systems can be risky. We have explained the assumptions and claims based on a review of published and informal literature, along with consultation with experts who have stated positions on the topic. Finally, we have briefly outlined arguments against each approach and against the agenda overall. See also this Twitter thread where David summarizes the paper. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

The Nonlinear Library: LessWrong Top Posts
Pseudorandomness contest: prizes, results, and analysis by UnexpectedValues

Dec 11, 2021 · 35:28


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Pseudorandomness contest: prizes, results, and analysis , published by UnexpectedValues on the LessWrong. This is a linkpost for/ (Previously in this series: Round 1, Round 2) In December I ran a pseudorandomness contest. Here's how it worked: In Round 1, participants were invited to submit 150-bit strings of their own devising. They had 10 minutes to write down their string while using nothing but their own minds. I received 62 submissions. I then used a computer to generate 62 random 150-bit strings, and put all 124 strings in a random order. In Round 2, participants had to figure out which strings were human-generated (I'm going to call these strings fake from now on) and which were “truly” random (I'm going to call these real). In particular, I asked for probabilities that each string was real, so participants could express their confidence rather than guessing “real” or “fake” for each string. I received 27 submissions for Round 2. This post is long because there are lots of fascinating things to talk about. So, feel free to skip around to whichever sections you find most interesting; I've done my best to give descriptive labels. But first: Prizes Round 1 Thank you to the 62 of you who submitted strings in Round 1! Your strings were scored by the average probability of being real assigned by Round 2 participants, weighted by their Round 2 score. (Entries with negative Round 2 scores received no weight). The top three scores in Round 1 were: Jenny Kaufmann, with a score of 69.4%. That is, even though Jenny's string was fake, Round 2 participants on average gave her string a 69.4% chance of being real. For winning Round 1, Jenny was given the opportunity to allocate $50 to charity, which she chose to give to the GiveWell Maximum Impact Fund. Reed Jacobs, with a score of 68.8%. Reed allocated $25 to Canada/USA Mathcamp. Eric Fletcher, with a score of 68.6%. Eric allocated $25 to the Poor People's Campaign. Congratulations to Jenny, Reed, and Eric! Round 2 A big thanks to the 27 of you (well, 28 — 26 plus a team of two) who submitted Round 2 entries. I estimate that the average participant put in a few hours of work, and that some put in more than 10. Entries were graded using a quadratic scoring rule (see here for details). When describing Round 2, I did a back-of-the-envelope estimate that a score of 15 on this round would be good. I was really impressed by the top two scores: Scy Yoon and William Ehlhardt, who were the only team, received a score of 28.5, honestly higher than I thought possible. They allocated $150 to the GiveWell Maximum Impact Fund. Ben Edelman received a score of 25.8. He allocated $75 to the Humane League. Three other participants received a score of over 15: simon received a score of 21.0. He allocated $25 to the Machine Intelligence Research Institute. Adam Hesterberg received a score of 19.5. He allocated $25 to the Sierra Club Beyond Coal campaign. Viktor Bowallius received a score of 17.3. He allocated $25 to the EA Long Term Future Fund. Congratulations to Scy, William, Ben, simon, Adam, and Viktor! All right, let's take a look at what people did and how well it worked! Round 1 analysis Summary statistics Recall that the score of a Round 1 entry is a weighted average of the probabilities assigned by Round 2 participants to the entry being real (i.e. truly random). 
The average score was 39.4% (this is well below 50%, as expected). The median score was 45.7%. Here's the full distribution: Figure 1: Histogram of Round 1 scores Interesting: the distribution is bimodal! Some people basically succeeded at fooling Round 2 participants, and most of the rest came up with strings that were pretty detectable as fakes. Methods I asked participants to describe the method they used to generate their string. Of the 58 participants who told me what the...

The Nonlinear Library: EA Forum Top Posts
EA Leaders Forum: Survey on EA priorities (data and analysis) by Aaron Gertler

Dec 11, 2021 · 27:38


welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio. This is: EA Leaders Forum: Survey on EA priorities (data and analysis), published by Aaron Gertler on the effective altruism forum. Thanks to Alexander Gordon-Brown, Amy Labenz, Ben Todd, Jenna Peters, Joan Gass, Julia Wise, Rob Wiblin, Sky Mayhew, and Will MacAskill for assisting in various parts of this project, from finalizing survey questions to providing feedback on the final post. Clarification on pronouns: “We” refers to the group of people who worked on the survey and helped with the writeup. “I” refers to me; I use it to note some specific decisions I made about presenting the data and my observations from attending the event. This post is the second in a series of posts where we aim to share summaries of the feedback we have received about our own work and about the effective altruism community more generally. The first can be found here. Overview Each year, the EA Leaders Forum, organized by CEA, brings together executives, researchers, and other experienced staffers from a variety of EA-aligned organizations. At the event, they share ideas and discuss the present state (and possible futures) of effective altruism. This year (during a date range centered around ~1 July), invitees were asked to complete a “Priorities for Effective Altruism” survey, compiled by CEA and 80,000 Hours, which covered the following broad topics: The resources and talents most needed by the community How EA's resources should be allocated between different cause areas Bottlenecks on the community's progress and impact Problems the community is facing, and mistakes we could be making now This post is a summary of the survey's findings (N = 33; 56 people received the survey). Here's a list of organizations respondents worked for, with the number of respondents from each organization in parentheses. Respondents included both leadership and other staff (an organization appearing on this list doesn't mean that the org's leader responded). 80,000 Hours (3) Animal Charity Evaluators (1) Center for Applied Rationality (1) Centre for Effective Altruism (3) Centre for the Study of Existential Risk (1) DeepMind (1) Effective Altruism Foundation (2) Effective Giving (1) Future of Humanity Institute (4) Global Priorities Institute (2) Good Food Institute (1) Machine Intelligence Research Institute (1) Open Philanthropy Project (6) Three respondents work at organizations small enough that naming the organizations would be likely to de-anonymize the respondents. Three respondents don't work at an EA-aligned organization, but are large donors and/or advisors to one or more such organizations. What this data does and does not represent This is a snapshot of some views held by a small group of people (albeit people with broad networks and a lot of experience with EA) as of July 2019. We're sharing it as a conversation-starter, and because we felt that some people might be interested in seeing the data. These results shouldn't be taken as an authoritative or consensus view of effective altruism as a whole. They don't represent everyone in EA, or even every leader of an EA organization. If you're interested in seeing data that comes closer to this kind of representativeness, consider the 2018 EA Survey Series, which compiles responses from thousands of people. 
Talent Needs What types of talent do you currently think [your organization // EA as a whole] will need more of over the next 5 years? (Pick up to 6) This question was the same as a question asked to Leaders Forum participants in 2018 (see 80,000 Hours' summary of the 2018 Talent Gaps survey for more). Here's a graph showing how the most common responses from 2019 compare to the same categories in the 2018 talent needs survey from 80,000 Hours, for EA as a whole: And for the respondent's organization: The following table contains data on every category ...

The Foresight Institute Podcast
Christine Peterson | Meatspace & Cyberspace: How Can We Get the Best of Both

Oct 1, 2021 · 15:24


“The reality is some people would like to be uploaded and live in cyberspace.” Meatspace and cyberspace often conflict. But how can we make the best out of both of them? This episode features Christine Peterson, Co-founder and former President of Foresight Institute. She lectures and writes about nanotechnology, AI, and longevity. She advises the Machine Intelligence Research Institute, Global Healthspan Policy Institute, National Space Society, startup Ligandal, and the Voice & Exit conference. She coined the term 'open source software.' She holds a bachelor's degree in chemistry from MIT.
If you enjoy what we do please support us via Patreon: https://www.patreon.com/foresightinstitute. If you are interested in joining these meetings consider donating through our donation page: https://foresight.org/donate/
Music: I Knew a Guy by Kevin MacLeod is licensed under a Creative Commons Attribution 4.0 license. https://creativecommons.org/licenses/by/4.0/
Session Summary: Christine Peterson | Meatspace & Cyberspace: How Can We Get the Best of Both | VISION WEEKEND 2019 - Foresight Institute
The Foresight Institute is a research organization and non-profit that supports the beneficial development of high-impact technologies. Since our founding in 1987 on a vision of guiding powerful technologies, we have continued to evolve into a many-armed organization that focuses on several fields of science and technology that are too ambitious for legacy institutions to support.
Allison Duettmann is the president and CEO of Foresight Institute. She directs the Intelligent Cooperation, Molecular Machines, Biotech & Health Extension, Neurotech, and Space Programs, Fellowships, Prizes, and Tech Trees, and shares this work with the public. She founded Existentialhope.com, co-edited Superintelligence: Coordination & Strategy, co-authored Gaming the Future, and co-initiated The Longevity Prize.
Apply to Foresight's virtual salons and in person workshops here! We are entirely funded by your donations. If you enjoy what we do please consider donating through our donation page.
Visit our website for more content, or join us here: Twitter | Facebook | LinkedIn
Every word ever spoken on this podcast is now AI-searchable using Fathom.fm, a search engine for podcasts. Hosted on Acast. See acast.com/privacy for more information.

The Naked Dialogue
TND EP#31: Michael Vassar | On Lacan, Obscurantism, Artificial Unconscious Intelligence, Philosophy & More

Sep 12, 2021 · 78:51


Michael Vassar is an American futurist, activist, and entrepreneur. He is the Co-founder and Chief Science Officer of MetaMed Research. He was the President of the Machine Intelligence Research Institute until January 2012. Vassar advocates safe development of new technologies for the benefit of humankind. He has co-authored papers on the risks of advanced molecular manufacturing with Robert Freitas, and has written the special report "Corporate Cornucopia: Examining the Special Implications of Commercial MNT Development" for the Center for Responsible Nanotechnology Task Force.
Michael Vassar: https://en.wikipedia.org/wiki/MetaMed ; https://twitter.com/michaelvassar
Sanjana Singh (The Host): linktr.ee/sanjanasingh
--- Support this podcast: https://anchor.fm/sanjanasinghx/support

Clearer Thinking with Spencer Greenberg
AI Safety and Solutions (with Robert Miles)

May 22, 2021 · 84:30


Why is YouTube such a great way to communicate research findings? Why is AI safety (or alignment) a problem? Why is it an important problem? Why is the creation of AGI (artificial general intelligence) existentially risky for us? Why is it so hard for us to specify what we want in utility functions? What are some of the proposed strategies (and their limitations) for controlling AGI? What is instrumental convergence? What is the unilateralist's curse? Rob Miles is a science communicator focused on AI Safety and Alignment. He has a YouTube channel called Rob Miles AI and runs The Alignment Newsletter Podcast, which presents summaries of the week's research. He also collaborates with research organizations like the Machine Intelligence Research Institute and the Future of Humanity Institute to help them communicate their work.

EARadio
How I think students should orient to AI safety | Buck Shlegeris

EARadio

Play Episode Listen Later Feb 26, 2021 14:12


Buck argues that students should engage with AI safety by trying to actually assess the arguments and the safety proposals. He claims that this is doable and useful. Buck Shlegeris is a researcher at the Machine Intelligence Research Institute. Buck works to make the future good for sentient beings; at the moment he believes that …

Retraice
Re10: Living to Guess Another Day

Retraice

Play Episode Listen Later Nov 3, 2020 47:20


On guessing, checking and fighting.   Subscribe at: paid.retraice.com    Details: natural intelligence; you've realized you're dumb; what (natural) intelligence might be; historical candidates for smartish and dumb; the truth comes out; guessing; intelligence and learning might be unconnected; intelligence might be about perception; an absolute measure of intelligence; IQ; checking; Feynman's Cargo Cult Science; side note—for machines, easy is hard and hard is easy; pseudoscience; a shining example of science—and disappointment; he took away the information; reasons to reproduce checks; fighting; fighting for guesses; Darwin's belated fight; science is so recent, but babies aren't; recap. Complete notes and video at: https://www.retraice.com/segments/re10   Air date: Monday, 2nd Nov. 2020, 12 : 00 PM Pacific/US.   Chapters: 00:00 natural intelligence; 01:47 you've realized you're dumb; 06:41 what (natural) intelligence might be; 07:29 historical candidates for smartish and dumb; 13:21 the truth comes out; 14:48 guessing; 16:25 intelligence and learning might be unconnected; 19:37 intelligence might be about perception; 20:21 an absolute measure of intelligence; 24:30 IQ; 25:22 checking; 27:23 Feynman's Cargo Cult Science; 29:08 side note—for machines, easy is hard and hard is easy; 30:00 pseudoscience; 32:44 a shining example of science—and disappointment; 34:54 he took away the information; 37:56 reasons to reproduce checks; 39:55 fighting; 41:32 fighting for guesses; 43:04 Darwin's belated fight; 43:43 science is so recent, but babies aren't; 45:44 recap.   References:     Barlow, H. B. (2004). Guessing and intelligence. (pp. 382–384). In Gregory (2004).      BBC Two (2014). Brian Cox visits the world’s biggest vacuum — Human Universe - BBC. Uploaded 24th Oct. 2014. https://youtu.be/E43-CfukEgs Retrieved 2nd Nov. 2020.      Copi, I. M. (1972). Introduction to Logic. Macmillan, 4th ed. No ISBN. Webpages: https://www.amazon.com/Introduction-Logic-Irving-M-Copi/dp/B000J54UWU https://books.google.com/books/about/Introduction_to_Logic.html?id=sxbszAEACAAJ https://lccn.loc.gov/70171565      Deary, I. J. (2001). Intelligence: A Very Short Introduction. Oxford. ISBN: 978-0192893215. Searches: https://www.amazon.com/s?k=978-0192893215 https://www.google.com/search?q=isbn+978-0192893215 https://lccn.loc.gov/2001269139      Feynman, R. (1974). Cargo cult science. Engineering and Science, 7(37), 10–13. http://calteches.library.caltech.edu/3043/1/CargoCult.pdf Retrieved 20th Mar. 2019.      Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep Learning. MIT Press. ISBN 978-0262035613. Ebook available at: https://www.deeplearningbook.org/ Searches: https://www.amazon.com/s?k=978-0262035613 https://www.google.com/search?q=isbn+978-0262035613 https://lccn.loc.gov/2016022992      Gopnik, A., Meltzoff, A. N., & Kuhl, P. K. (1999). The Scientist in the Crib: What Early Learning Tells Us About the Mind. Perennial / HarperCollins. ISBN: 0688159885. Searches: https://www.amazon.com/s?k=0688159885 https://www.google.com/search?q=isbn+0688159885 https://lccn.loc.gov/99024247      Gregory, R. L. (Ed.) (2004). The Oxford Companion to the Mind. Oxford University Press, 2nd ed. ISBN: 0198662246. Searches: https://www.amazon.com/s?k=0198662246 https://www.google.com/search?q=isbn+0198662246 https://lccn.loc.gov/2004275127      Hart-Davis, A. (Ed.) (2009). Science: The Definitive Visual Guide. DK. ISBN 978-0756689018. 
Searches: https://www.amazon.com/s?k=978-0756689018 https://www.google.com/search?q=isbn+978-0756689018      Legg, S., & Hutter, M. (2007). Universal intelligence: A definition of machine intelligence. Minds & Machines, 17(4), 391–444. December 2007. https://arxiv.org/abs/0712.3329 Retrieved ca. 10 Mar. 2019.      Macphail, E. M. (1982). Brain and Intelligence in Vertebrates. Oxford. Book pending receipt by Retraice. ISBN 978-0198545514. Searches: https://www.amazon.com/s?k=978-0198545514 https://www.google.com/search?q=isbn+978-0198545514 https://lccn.loc.gov/82166301      Margin (2020/10/26). Ma7: Reading and Writing. retraice.com. https://www.retraice.com/segments/ma7 Retrieved 27th Oct. 2020.      Retraice (2020/09/07). Re1: Three Kinds of Intelligence. retraice.com. https://www.retraice.com/segments/re1 Retrieved 22nd Sep. 2020.      Retraice (2020/10/28). Re8: Strange Machines. retraice.com. https://www.retraice.com/segments/re8 Retrieved 29th Oct. 2020.      Shettleworth, S. J. (2010). Cognition, Evolution, and Behavior. Oxford, 2nd ed. ISBN: 978-0195319842. Searches: https://www.amazon.com/s?k=978-0195319842 https://www.google.com/search?q=isbn+978-0195319842 https://lccn.loc.gov/2009017840      van Wyhe, J. (2007). Mind the gap: did Darwin avoid publishing his theory for many years? Notes Rec. R. Soc., 61, 177–205. https://royalsocietypublishing.org/doi/10.1098/rsnr.2006.0171 Retrieved 2nd Nov. 2020.      Yudkowsky, E. (2013). Intelligence explosion microeconomics. Machine Intelligence Research Institute. Technical report 2013-1. https://intelligence.org/files/IEM.pdf Retrieved ca. 9th Dec. 2018.   Copyright: 2020 Retraice, Inc. https://retraice.com

Retraice
Re8: Strange Machines

Retraice

Play Episode Listen Later Oct 29, 2020 41:40


A survey of the idea that technology is creatures.    Subscribe at: paid.retraice.com    Details: we should call them something else; high-altitude fruit; Simon—the rules are the same; Grey Walter's tortoises; Butler—war to the death; Dyson—they're *not* imaginary; Wolfram's simple programs; Yudkowsky on fire alarms; I. J. Good—take science fiction seriously; `unquestionably'; Yudkowsky—smartish stuff; S. Russell and Norvig—operating on their own; two meanings of `the singularity'; a moral challenge; S. Russell—the user's mind; Dyson—worry less about intelligence; Smallberg—energy sources and replication; a digression on search; Dietterich—reproduction with autonomy; the work; Bostrom—deferred gratification; our civilization is evidence of capacity; skyscrapers seem taller than they are. Complete notes and video at: https://www.retraice.com/segments/re8   Air date: Wednesday, 28th Oct. 2020, 3 : 30 PM Pacific/US.   Chapters:  00:00 we should call them something else; 00:35 high-altitude fruit; 02:55 Simon—the rules are the same; 04:06 Grey Walter's tortoises; 08:19 Butler—war to the death; 11:16 Dyson—they're *not* imaginary; 14:02 Wolfram's simple programs; 15:49 Yudkowsky on fire alarms; 17:14 I. J. Good—take science fiction seriously; 18:36 `unquestionably'; 19:29 Yudkowsky—smartish stuff; 23:00 S. Russell and Norvig—operating on their own; 24:55 two meanings of `the singularity'; 25:41 a moral challenge; 26:52 S. Russell—the user's mind; 28:46 Dyson—worry less about intelligence; 30:24 Smallberg—energy sources and replication; 31:13 a digression on search; 34:02 Dietterich—reproduction with autonomy; 35:55 the work; 36:46 Bostrom—deferred gratification; 39:13 our civilization is evidence of capacity; 39:52 skyscrapers seem taller than they are.   References:      Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford. First published in 2014. Citations are from the pbk. edition, 2016. ISBN: 978-0198739838. Searches: https://www.amazon.com/s?k=978-0198739838 https://www.google.com/search?q=isbn+978-0198739838 https://lccn.loc.gov/2015956648      Brockman, J. (Ed.) (2015). What to Think About Machines That Think: Today’s Leading Thinkers on the Age of Machine Intelligence. Harper Perennial. ISBN: 978-0062425652. Searches: https://www.amazon.com/s?k=978-0062425652 https://www.google.com/search?q=isbn+978-0062425652 https://lccn.loc.gov/2016303054      Brockman, J. (Ed.) (2019). Possible Minds: Twenty-Five Ways of Looking at AI. Penguin. ISBN: 978-0525557999. Searches: https://www.amazon.com/s?k=978-0525557999 https://www.google.com/search?q=isbn+978-0525557999 https://lccn.loc.gov/2018032888      Butler, S. (1863). Darwin among the machines. The Press (Canterbury, New Zealand). Reprinted in Butler et al. (1923).      Butler, S., Jones, H., & Bartholomew, A. (1923). The Shrewsbury Edition of the Works of Samuel Butler Vol. 1. J. Cape. No ISBN. https://books.google.com/books?id=B-LQAAAAMAAJ Retrieved 27th Oct. 2020.      de Garis, H. (2005). The Artilect War: Cosmists vs. Terrans: A Bitter Controversy Concerning Whether Humanity Should Build Godlike Massively Intelligent Machines. ETC Publications. ISBN: 0882801546. Searches: https://www.amazon.com/s?k=0882801546 https://www.google.com/search?q=isbn+0882801546      Dietterich, T. G. (2015). How to prevent an intelligence explosion. (pp. 380–383). In Brockman (2015).      Dyson, G. (2019). The third law. (pp. 31–40). In Brockman (2019).      Dyson, G. B. (1997). 
Darwin Among The Machines: The Evolution Of Global Intelligence. Basic Books. ISBN: 978-0465031627. Searches: https://www.amazon.com/s?k=978-0465031627 https://www.google.com/search?q=isbn+978-0465031627 https://lccn.loc.gov/2012943208      Good, I. J. (1965). Speculations concerning the first ultraintelligent machine. Advances in Computers, 6, 31–88. https://exhibits.stanford.edu/feigenbaum/catalog/gz727rg3869 Retrieved 27th Oct. 2020.      Harris, S. (2016). Can we build AI without losing control over it? — Sam Harris. TED. https://youtu.be/8nt3edWLgIg Retrieved 28th Oct. 2020.      Holland, O. (2003). Exploration and high adventure: the legacy of Grey Walter. Phil. Trans. R. Soc. Lond. A, 361, 2085–2121. https://www.researchgate.net/publication/9025611 Retrieved 22nd Nov. 2019. See also: https://www.youtube.com/results?search_query=grey+walter+tortoise+      Jackson, R. E., & Cormack, L. K. (2008). Evolved navigation theory and the environmental vertical illusion. Evolution and Human Behavior, 29, 299–304. https://liberalarts.utexas.edu/cps/_files/cormack-pdf/12Evolved_navigation_theory2009.pdf Retrieved 29th Oct. 2020.      Kurzweil, R. (2005). The Singularity Is Near: When Humans Transcend Biology. Penguin. ISBN: 978-0143037880. Searches: https://www.amazon.com/s?k=978-0143037880 https://www.google.com/search?q=isbn+978-0143037880 https://lccn.loc.gov/2004061231      Legg, S., & Hutter, M. (2007a). A collection of definitions of intelligence. Frontiers in Artificial Intelligence and Applications, 157, 17–24. June 2007. https://arxiv.org/abs/0706.3639 Retrieved ca. 10 Mar. 2019.      Legg, S., & Hutter, M. (2007b). Universal intelligence: A definition of machine intelligence. Minds & Machines, 17(4), 391–444. December 2007. https://arxiv.org/abs/0712.3329 Retrieved ca. 10 Mar. 2019.      Retraice (2020/09/07). Re1: Three Kinds of Intelligence. retraice.com. https://www.retraice.com/segments/re1 Retrieved 22nd Sep. 2020.      Retraice (2020/09/08). Re2: Tell the People, Tell Foes. retraice.com. https://www.retraice.com/segments/re2 Retrieved 22nd Sep. 2020.      Russell, S. (2019). Human Compatible: Artificial Intelligence and the Problem of Control. Viking. ISBN: 978-0525558613. Searches: https://www.amazon.com/s?k=978-0525558613 https://www.google.com/search?q=isbn+978-0525558613 https://lccn.loc.gov/2019029688      Russell, S., & Norvig, P. (2020). Artificial Intelligence: A Modern Approach. Pearson, 4th ed. ISBN: 978-0134610993. Searches: https://www.amazon.com/s?k=978-0134610993 https://www.google.com/search?q=isbn+978-0134610993 https://lccn.loc.gov/2019047498      Simon, H. A. (1996). The Sciences of the Artificial. MIT, 3rd ed. ISBN: 0262691914. Searches: https://www.amazon.com/s?k=0262691914 https://www.google.com/search?q=isbn+0262691914 https://lccn.loc.gov/96012633 Previous editions available at: https://archive.org/search.php?query=The%20sciences%20of%20the%20artificial      Smallberg, G. (2015). No shared theory of mind. (pp. 297–299). In Brockman (2015).      Ulam, S. (1958). John von Neumann 1903-1957. Bull. Amer. Math. Soc., 64, 1–49. https://doi.org/10.1090/S0002-9904-1958-10189-5 Retrieved 29th Oct. 2020.      Weizenbaum, J. (1976). Computer Power and Human Reason: From Judgment to Calculation. W. H. Freeman and Company. ISBN: 0716704633. Also available at: https://archive.org/details/computerpowerhum0000weiz      Wolfram, S. (Ed.) (2002). A New Kind of Science. Wolfram Media, Inc. ISBN: 1579550088. 
Searches: https://www.amazon.com/s?k=1579550088 https://www.google.com/search?q=isbn+1579550088 https://lccn.loc.gov/2001046603      Yudkowsky, E. (2013). Intelligence explosion microeconomics. Machine Intelligence Research Institute. Technical report 2013-1. https://intelligence.org/files/IEM.pdf Retrieved ca. 9th Dec. 2018.      Yudkowsky, E. (2017). There’s no fire alarm for artificial general intelligence. Machine Intelligence Research Institute. 13th Oct. 2017. https://intelligence.org/2017/10/13/fire-alarm/ Retrieved 9th Dec. 2018.   Copyright: 2020 Retraice, Inc. https://retraice.com

Clearer Thinking with Spencer Greenberg
Lines of Retreat and Incomplete Maps with Anna Salamon

Clearer Thinking with Spencer Greenberg

Play Episode Listen Later Oct 14, 2020 86:32


What does it mean to leave lines of retreat in social contexts? How can we make sense of the current state of the world? What happens when we run out of map? How does the book Elephant in the Brain apply to the above questions? Anna Salamon does work with the Center for Applied Rationality and the Machine Intelligence Research Institute. She studied math and great books in undergrad, and philosophy of science for a small bit of grad school before leaving to work on AI-related existential risk. Fav. books include: R:AZ; HPMOR; “Zen and the Art of Motorcycle Maintenance,” and “The Closing of the American Mind” (as an intro to the practice of reading books from other places and times, not to evaluate the books, but to gain alternate hypotheses about ourselves by asking how the authors might perceive us). She blogs a bit at lesswrong.com.

Future of Life Institute Podcast
Evan Hubinger on Inner Alignment, Outer Alignment, and Proposals for Building Safe Advanced AI

Future of Life Institute Podcast

Play Episode Listen Later Jul 1, 2020 97:05


It's well-established in the AI alignment literature what happens when an AI system learns or is given an objective that doesn't fully capture what we want.  Human preferences and values are inevitably left out and the AI, likely being a powerful optimizer, will take advantage of the dimensions of freedom afforded by the misspecified objective and set them to extreme values. This may allow for better optimization on the goals in the objective function, but can have catastrophic consequences for human preferences and values the system fails to consider. Is it possible for misalignment to also occur between the model being trained and the objective function used for training? The answer looks like yes. Evan Hubinger from the Machine Intelligence Research Institute joins us on this episode of the AI Alignment Podcast to discuss how to ensure alignment between a model being trained and the objective function used to train it, as well as to evaluate three proposals for building safe advanced AI.  Topics discussed in this episode include: -Inner and outer alignment -How and why inner alignment can fail -Training competitiveness and performance competitiveness -Evaluating imitative amplification, AI safety via debate, and microscope AI You can find the page for this podcast here: https://futureoflife.org/2020/07/01/evan-hubinger-on-inner-alignment-outer-alignment-and-proposals-for-building-safe-advanced-ai/ Timestamps:  0:00 Intro  2:07 How Evan got into AI alignment research 4:42 What is AI alignment? 7:30 How Evan approaches AI alignment 13:05 What are inner alignment and outer alignment? 24:23 Gradient descent 36:30 Testing for inner alignment 38:38 Wrapping up on outer alignment 44:24 Why is inner alignment a priority? 45:30 How inner alignment fails 01:11:12 Training competitiveness and performance competitiveness 01:16:17 Evaluating proposals for building safe and advanced AI via inner and outer alignment, as well as training and performance competitiveness 01:17:30 Imitative amplification 01:23:00 AI safety via debate 01:26:32 Microscope AI 01:30:19 AGI timelines and humanity's prospects for succeeding in AI alignment 01:34:45 Where to follow Evan and find more of his work This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

Lex Fridman Podcast
#103 – Ben Goertzel: Artificial General Intelligence

Lex Fridman Podcast

Play Episode Listen Later Jun 22, 2020 249:25


Ben Goertzel is one of the most interesting minds in the artificial intelligence community. He is the founder of SingularityNET, designer of OpenCog AI framework, formerly a director of the Machine Intelligence Research Institute, Chief Scientist of Hanson Robotics, the company that created the Sophia Robot. He has been a central figure in the AGI community for many years, including in the Conference on Artificial General Intelligence. Support this podcast by supporting these sponsors: – Jordan Harbinger Show: https://jordanharbinger.com/lex/ – MasterClass: https://masterclass.com/lex This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this

Dinis Guarda citiesabc openbusinesscouncil Thought Leadership Interviews
citiesabc Interview: Ben Goertzel AI Mastermind, Founder SingularityNet - What kind of mind can we engineer?

Dinis Guarda citiesabc openbusinesscouncil Thought Leadership Interviews

Play Episode Listen Later May 15, 2020 79:36


Ben Goertzel is a leading, world-recognised artificial intelligence researcher, thinker, software engineer and serial entrepreneur. Ben is the founder and CEO of SingularityNET, the Chairman of OpenCog Foundation, the Chairman of Artificial General Intelligence Society, the Chief Scientist of Mozi Health and Vice Chairman of Humanity+, and his work, writing and ideas are influencing the way we perceive AI, technology and blockchain.

_____________________________________________

Ben Goertzel Interview focus questions:
1. An introduction from Ben - background, overview, education...
2. The way you look at AI, AGI, open AI?
3. Career highlights?
4. Your companies and focus
5. What is your main focus as a doer?
6. Can you show an example of open AI?
7. You have a focus on open AI and Artificial General Intelligence? Can you elaborate on this?
8. You are a thinker and philosopher; as we look at evolution, how do you see the Singularity, especially when it comes to the 2 visions of humanity, the light side and the dark side?
9. What are Ben's goals and focus as a thinker and a doer?
10. With Covid-19, how can you look at this as a way to redesign our society?
11. As a science fiction reader, writer and enthusiast, what is your vision of the future you would like to create?

_____________________________________________

Ben Goertzel Bio:
Ben Goertzel is a leading, world-recognised authority in artificial intelligence research, an inventor and serial entrepreneur. Ben is a deep thinker and also a man of action as the founder and CEO of SingularityNET, the Chairman of OpenCog Foundation, the Chairman of Artificial General Intelligence Society, the Chief Scientist of Mozi Health and Vice Chairman of Humanity+.

Goertzel is the chief scientist and chairman of AI software company Novamente LLC; chairman of the OpenCog Foundation; and advisor to Singularity University. He was Director of Research of the Machine Intelligence Research Institute (formerly the Singularity Institute). Goertzel is the son of Ted Goertzel, a former professor of sociology at Rutgers University. He left high school after the tenth grade to attend Bard College at Simon's Rock, where he graduated with a bachelor's degree in Quantitative Studies.

Views on AI:
Ben Goertzel's focus these days is the SingularityNET project, which brings AI and blockchain together to create a decentralized open market for AIs. It's a medium for the creation and emergence of AGI, a way to roll out superior AI-as-a-service to every vertical market, and a way to enable everyone in the world to contribute to and benefit from AI. Ben's passions are numerous, including AGI, life extension biology, philosophy of mind, psi, consciousness, complex systems, improvisational music, experimental fiction, theoretical physics and metaphysics.

References and sources:
https://www.linkedin.com/in/bengoertzel/
https://www.youtube.com/watch?v=-qfB8...
https://singularitynet.io/
https://singularitynet.io/team/

Newочём
Part 3. A Response to Tim Urban on Artificial Superintelligence [LukeMuehlhauser]

Newочём

Play Episode Listen Later Jan 18, 2019 27:17


The concluding part of our mini-series on AI. In the previous parts Tim Urban introduced the topic in detail; now Luke Muehlhauser, former executive director of the Machine Intelligence Research Institute, gives his own assessment and walks through the mistakes that were made. Narrated by: Valentin Tarasov. Translated by: Alexander Ivankov. Author: Luke Muehlhauser. Translated text: https://goo.gl/Ko6Lfe Original text: https://goo.gl/dKwQuM Want to hear our podcasts more often? Support the project: Patreon https://www.patreon.com/join/newochem Sberbank 5469 4100 1191 4078 Tinkoff 5536 9137 8391 1874 Rocketbank 5321 3003 1271 6181 Alfa-Bank 5486 7328 1231 5455 Yandex.Money 410015483148917 PayPal https://paypal.me/vsilaev

80,000 Hours Podcast with Rob Wiblin
#47 - Catherine Olsson & Daniel Ziegler on the fast path into high-impact ML engineering roles

80,000 Hours Podcast with Rob Wiblin

Play Episode Listen Later Nov 2, 2018 124:49


After dropping out of a machine learning PhD at Stanford, Daniel Ziegler needed to decide what to do next. He’d always enjoyed building stuff and wanted to shape the development of AI, so he thought a research engineering position at an org dedicated to aligning AI with human interests could be his best option. He decided to apply to OpenAI, and spent about 6 weeks preparing for the interview before landing the job. His PhD, by contrast, might have taken 6 years. Daniel thinks this highly accelerated career path may be possible for many others. On today’s episode Daniel is joined by Catherine Olsson, who has also worked at OpenAI, and left her computational neuroscience PhD to become a research engineer at Google Brain. She and Daniel share this piece of advice for those curious about this career path: just dive in. If you're trying to get good at something, just start doing that thing, and figure out that way what's necessary to be able to do it well. Catherine has even created a simple step-by-step guide for 80,000 Hours, to make it as easy as possible for others to copy her and Daniel's success. Please let us know how we've helped you: fill out our 2018 annual impact survey so that 80,000 Hours can continue to operate and grow. Blog post with links to learn more, a summary & full transcript. Daniel thinks the key for him was nailing the job interview. OpenAI needed him to be able to demonstrate the ability to do the kind of stuff he'd be working on day-to-day. So his approach was to take a list of 50 key deep reinforcement learning papers, read one or two a day, and pick a handful to actually reproduce. He spent a bunch of time coding in Python and TensorFlow, sometimes 12 hours a day, trying to debug and tune things until they were actually working. Daniel emphasizes that the most important thing was to practice *exactly* those things that he knew he needed to be able to do. His dedicated preparation also led to an offer from the Machine Intelligence Research Institute, and so he had the opportunity to decide between two organisations focused on the global problem that most concerns him. Daniel’s path might seem unusual, but both he and Catherine expect it can be replicated by others. If they're right, it could greatly increase our ability to get new people into important ML roles in which they can make a difference, as quickly as possible. Catherine says that her move from OpenAI to an ML research team at Google now allows her to bring a different set of skills to the table. Technical AI safety is a multifaceted area of research, and the many sub-questions in areas such as reward learning, robustness, and interpretability all need to be answered to maximize the probability that AI development goes well for humanity. Today’s episode combines the expertise of two pioneers and is a key resource for anyone wanting to follow in their footsteps. We cover: * What are OpenAI and Google Brain doing? * Why work on AI? * Do you learn more on the job, or while doing a PhD? * Controversial issues within ML * Is replicating papers a good way of determining suitability? * What % of software developers could make similar transitions? * How in-demand are research engineers? * The development of Dota 2 bots * Do research scientists have more influence on the vision of an org? * Has learning more made you more or less worried about the future? Get this episode by subscribing: type '80,000 Hours' into your podcasting app. The 80,000 Hours Podcast is produced by Keiran Harris.

Future of Life Institute Podcast
AIAP: Astronomical Future Suffering and Superintelligence with Kaj Sotala

Future of Life Institute Podcast

Play Episode Listen Later Jun 14, 2018 74:40


In the classic taxonomy of risks developed by Nick Bostrom, existential risks are characterized as risks which are both terminal in severity and transgenerational in scope. If we were to maintain the scope of a risk as transgenerational and increase its severity past terminal, what would such a risk look like? What would it mean for a risk to be transgenerational in scope and hellish in severity? In this podcast, Lucas spoke with Kaj Sotala, an associate researcher at the Foundational Research Institute. He has previously worked for the Machine Intelligence Research Institute, and has publications on AI safety, AI timeline forecasting, and consciousness research. Topics discussed in this episode include: -The definition of and a taxonomy of suffering risks -How superintelligence has special leverage for generating or mitigating suffering risks -How different moral systems view suffering risks -What is possible of minds in general and how this plays into suffering risks -The probability of suffering risks -What we can do to mitigate suffering risks

Making Sense with Sam Harris - Subscriber Content
Bonus Questions: Eliezer Yudkowsky

Making Sense with Sam Harris - Subscriber Content

Play Episode Listen Later Feb 7, 2018 8:19


Eliezer Yudkowsky is a decision theorist and computer scientist at the Machine Intelligence Research Institute in Berkeley, California who is known for his work in technological forecasting. His publications include the Cambridge Handbook of Artificial Intelligence chapter “The Ethics of Artificial Intelligence,” co-authored with Nick Bostrom. Yudkowsky’s writings have helped spark a number of ongoing academic and public debates about the long-term impact of AI, and he has written a number of popular introductions to topics in cognitive science and formal epistemology, such as Rationality: From AI to Zombies and “Harry Potter and the Methods of Rationality.” His latest book is Inadequate Equilibria: Where and How Civilizations Get Stuck. Twitter: @ESYudkowsky

Making Sense with Sam Harris - Subscriber Content
#116 - AI: Racing Toward the Brink

Making Sense with Sam Harris - Subscriber Content

Play Episode Listen Later Feb 6, 2018 127:40


Sam Harris speaks with Eliezer Yudkowsky about the nature of intelligence, different types of AI, the “alignment problem,” IS vs OUGHT, the possibility that future AI might deceive us, the AI arms race, conscious AI, coordination problems, and other topics. Eliezer Yudkowsky is a decision theorist and computer scientist at the Machine Intelligence Research Institute in Berkeley, California who is known for his work in technological forecasting. His publications include the Cambridge Handbook of Artificial Intelligence chapter “The Ethics of Artificial Intelligence,” co-authored with Nick Bostrom. Yudkowsky’s writings have helped spark a number of ongoing academic and public debates about the long-term impact of AI, and he has written a number of popular introductions to topics in cognitive science and formal epistemology, such as Rationality: From AI to Zombies and “Harry Potter and the Methods of Rationality.” His latest book is Inadequate Equilibria: Where and How Civilizations Get Stuck. Twitter: @ESYudkowsky Facebook: facebook.com/yudkowsky Episodes that have been re-released as part of the Best of Making Sense series may have been edited for relevance since their original airing.

Building Jerusalem
#6 - Joshua Fox

Building Jerusalem

Play Episode Listen Later Nov 14, 2017


Joshua is the principal software architect at Freightos. He has worked with the Machine Intelligence Research Institute on problems of artificial intelligence, and currently organizes open-to-the-public Rationality meetups in Jerusalem.

The World Transformed
Fast Forward -- The Nanotech Debate Is Over

The World Transformed

Play Episode Listen Later Nov 7, 2017 31:00


In this edition of Fast Forward, Christine Peterson, co-founder of the Foresight Institute, talks with our hosts Phil Bowermaster and Stephen Gordon about her ongoing work exploring and educating the public about coming powerful technologies, including nanotechnology and related technologies. Are we witnessing the dawn of a new industrial revolution? If so, how will it impact the economy, the environment, and our day-to-day lives?

Topics:
Defining nanotechnology
The end of the nanotechnology debate
A new industrial revolution?
Nanotechnology and space exploration
Nanotechnology and medicine
The relationship between nanotechnology, open source software, and artificial intelligence
The Foresight Institute Vision Weekend

About our guest:
Christine writes, lectures, and briefs the media on world-transforming technologies. She is the Co-Founder and former President of Foresight Institute, the leading nanotech public interest group. Foresight educates the public, technical community, and policymakers on nanotechnology and its long-term effects. She serves on the Advisory Board of the Machine Intelligence Research Institute, and has served on California's Blue Ribbon Task Force on Nanotechnology and the Editorial Advisory Board of NASA's Nanotech Briefs. She has directed numerous Foresight Conferences on Molecular Nanotechnology, organized Foresight Institute Feynman Prizes, and chaired Foresight Vision Weekends.

Learn more:
About the Foresight Institute
About the Foresight Vision Weekend

Music: www.bensound.com

FF 002-676

Strange Attractor
Episode 6: You stay way over there you human idiot

Strange Attractor

Play Episode Listen Later May 20, 2016 59:11


What is artificial intelligence? What is artifical intelligence? And great answers to most of what we talked about, by a proper computer guy from Stanford University (Formal Reasoning Group) What is Skynet? (Wikia) What is computer chess? (Wikipedia) Google computer wins final game against South Korean Go master (Physics.org) Google has gotten very good at predicting traffic (Tech Insider) When will AI be created? (Machine Intelligence Research Institute) What is intelligence? (Machine Intelligence Research Institute) What is consciousness? (big think) What it will take for computers to be conscious (MIT Technology Review) Learning how little we know about the brain (The New York Times) Google traffic (Google) What is artifical consciousness? (Wikipedia) Kegan's 'orders of mind' (NZCR) Kegan's theory of the evolution of consciousness (Stanford University) Consciousness may be an 'emergent property' of the brain (Quora) A good discussion between Sam Harris and Neil deGrasse Tyson (Sam Harris' podcast) There are billions of connections in your brain (The Astronomist) The 'Go' game (Wikipedia) The number of possible Go games is reeeeally large...potentially more than the number of atoms in the universe (Sensei's Library) A comparison of chess & Go (British Go Association) Go & maths...the number of positions is scary (Wikipedia) There are also a lot of chess moves (Chess.com) What is a brute force attack? (Technopedia) How many moves ahead can hard core chess players see? (Quora) Deep learning in a nutshell – what it is, how it works, why care? (KDnuggets) Deep learning with massive amounts of computational power, machines can now recognize objects & translate speech in real time (MIT Technology Review) Google's 'DeepMind' deep learning start up (techworld) The Google Brain project (Wired) The Go computer was trained with 160,000 real-life games (Scientific American) Evolutionary computation & AI (Wikipedia) Genetic programming & AI (Wikipedia) So what's a robot then? (Galileo Educational Network) Professor reveals to students that his assistant was an AI all along (SMH) Hate Siri? Meet Viv - the future of chatbots and artificial intelligence (SMH) What is the connection between AI & robotics (wiseGEEK) Robotic limbs that plug into the brain (MIT Technology review) The Roomba vacuum robot (iRobot) The 'Robot or Not' podcast (The Incomparable) Expert predictions on when we'll see conscious machines: When will the machines wake up? (TechCrunch) Google AI: What if Google became self-aware? (wattpad) Will Google create the first conscious computer? (Daily Mail Australia) Google Consciousness...not affiliated with Google (Google Consciousness) Elon Musk does indeed have an AI company: Open AI (Wired) Evil genius with a fluffy cat (Regmedia) The Maltesers gift box (Mars) Will machines eventually take on every job? (BBC) When robots take all the work, what'll be left for us to do? (Wired) The travelling salesman maths problem (Wikipedia) A bunch of stuff about the travelling salesman maths problem (University of Waterloo) GPS became fully operational in 1995, but was proposed in 1973 (Wikipedia) How does GPS work? (Wikipedia) Digital diagnosis: intelligent machines do a better job than humans (The Conversation) What is Lyme disease? 
(Lyme Disease Association of Australia) Stuttgart (Wikipedia) Robot B-9: the robot from Lost in Space (Lost in Space Wiki) Robot B-9 in action (YouTube) Surgical robots (All About Robotic Surgery) Commercial planes are basically just big drones (Esquire) The AI in Google's self-driving cars qualifies as legal driver (Fortune) All the self-driving cars are learning from each other (The Oatmeal) Your future self-driving car will be way more hackable (MIT Technology Review) Google self-driving cars have driven more than 2 million km & have ony had 14 minor collisions (Wikipedia) Crazy animation of self-driving cars at an intersection (Co.Design) Self-driving cars could get their own lanes (wtop) Self-driving cars could lower insurance premiums (The Telegraph) Self-driving cars could lower insurance premiums (Wired) Australia's new National Broadband Network (nbnco) Tesla's cars now drive themselves, kinda (Wired) Australia's first autonomous vehicle test (Motoring) How AI is driving the next industrial revolution (InformationAge) Why bots are the next industrial revolution (Huffington Post) Humans need not apply: short video (C.G.P. Grey) Self-driving trucks are on the way (Basic Income) In the 2015 census there were 94,975 articulated trucks registered in Australia (Australian Bureau of Statistics) Driverless trucks move all iron ore at Rio Tinto's Pilbara mines, in world first (ABC Australia) Rio Tinto pushes ahead with driverless trains in Pilbara (SMH) Can Star Trek's world with no money work? (CNN Money) The economics of Star Trek (Medium) Jeff Bezos from Amazon (Wikipedia) Yes, the robots will steal our jobs. And that's fine (The Washington Post) I fear 'low-cost country sourcing' more than robots taking my job (Wikipedia) Flight prices are calculated by robots doing maths (Mathematical Association of America) Are airline passengers getting ripped off by robots? (Fortune) Is it true that once you search for a flight the algorithm will remember & put the price up? (Quora) Mac users may see pricier options (ABC America) Naked Wines Sir James Dyson (Encyclopaedia Britannica) Cheeky review? (If we may be so bold) It'd be amazing if you gave us a short review...it'll make us easier to find in iTunes: Click here for instructions. You're the best! We owe you a free hug and/or a glass of wine from our cellar

Get Yourself Optimized
6: A Glimpse at the Future Lifespan of Humans with Christine Peterson

Get Yourself Optimized

Play Episode Listen Later Oct 1, 2015 55:09


Christine Peterson is co-founder and past president of Foresight Institute, the leading nanotech public interest group. She writes, lectures, and briefs the media on coming powerful technologies, especially nanotechnology and life extension. Peterson holds a bachelor's degree in chemistry from MIT.

Christine Peterson, future tech expert and board advisor for the Machine Intelligence Research Institute, sees into the future in a way that most cannot. In the constantly advancing world of technology and research, it is her job to look at what is happening and frame it in the larger concept of societal development: how it might affect humanity, and what it might mean for the human body itself. There are a lot of amazing technologies being developed under the radar that have the potential to drastically change the quality of life for humanity within decades. Christine and I sit down to discuss everything from nanotechnology to cryogenics to finding a soulmate, all in the effort of getting the most out of your body, your health, and your life. Machines are getting smarter, and we are getting more informed about the decisions we make that affect our bodies. Because of it, some of the breakthroughs are absolutely amazing. And listening in, you might just hear some speculations that may startle you - in a fantastic way.

For instance, Christine paints a potential picture of what the future might hold that involves:
· nanobots that make repairs to DNA on a cell-by-cell basis
· the end of cancer - and all disease
· a 10,000-year life span
· finding a life partner through manipulation of chemical processes in the brain

For complete show notes and more, please head over to www.optimizedgeek.com/christinepeterson

Get Optimized!
Order a blood test and set up an appointment to go over what your deficiencies are.
Check out examine.com to see what type of supplements might be best for you.
Test your hormones to determine the source of sleep problems.

LINKS & RESOURCES MENTIONED:
BulletProof
The Quantified Self
SENS
ALCOR
MIRI (Machine Intelligence Research Institute)

THANK YOU FOR LISTENING! As always, thank you for tuning in. Please feel free to drop by the website to contact me or leave a comment. If you enjoyed this episode, please share it! -Stephan

STAY CONNECTED
Reboot and Improve Your Life - Free Guide | Twitter | Facebook

Big Picture Science
Meet Your Replacements

Big Picture Science

Play Episode Listen Later Jan 5, 2015 51:31


ENCORE There’s no one like you. At least, not yet. But in some visions of the future, androids can do just about everything, computers will hook directly into your brain, and genetic human-hybrids with exotic traits will be walking the streets. So could humans become an endangered species? Be prepared to meet the new-and-improved you. But how much human would actually remain in the humanoids of the future? Plus, tips for preventing our own extinction in the face of inevitable natural catastrophes. Guests: •   Robin Hanson – Associate professor of economics, George Mason University •   Luke Muehlhauser – Executive director of the Machine Intelligence Research Institute •   Stuart Newman – Professor of cell biology and anatomy, New York Medical College •   Annalee Newitz – Editor of io9.com, and author of Scatter, Adapt, and Remember: How Humans Will Survive a Mass Extinction   First released July 1, 2013.

Artificial Intelligence in Industry with Daniel Faggella
Dr. Robin Hanson - The Path Forward for a Better Human Future

Artificial Intelligence in Industry with Daniel Faggella

Play Episode Listen Later Sep 16, 2013 55:22


Dr. Robin Hanson is a recognized thought leader on the ramifications of emerging technology for economics and society. He has been a featured speaker at the Singularity Summit, a research advisor for the Machine Intelligence Research Institute, and a prolific writer on the topics of existential risk and emerging technology. In this interview, we cover the trouble of honestly looking at future topics, and how current "future" issues are often just sensationalized versions of present concerns. Dr. Hanson says "Few people are ACTUALLY focused on the ACTUAL future." How can we improve, innovate, and collaborate despite this? Listen in. For additional insights from this interview, visit: http://sentientpotential.com/dr-robin-hanson-on-the-difficulty-of-making-progress-towards-beneficial-future/

The David Pakman Show
6/29/23: Bidenomics speech explodes, Trump fantasized about Ivanka

The David Pakman Show

Play Episode Listen Later Jan 1, 1970 64:11


-- On the Show:
-- Eliezer Yudkowsky, Founder and Senior Research Fellow of the Machine Intelligence Research Institute, joins David to discuss artificial intelligence, machine learning, and much more
-- President Joe Biden delivers a major speech on Bidenomics in Chicago, crushing the nonsense of right wing trickle down economics
-- Due to concerns about President Joe Biden's indented face, the White House makes a medical disclosure about Biden wearing a CPAP mask for sleep apnea
-- 2024 Democratic presidential candidate Robert F. Kennedy Jr wildly claims that "vaccine research" is responsible for HIV, the Spanish Flu, RSV, and Lyme Disease
-- Yet another report that Donald Trump fantasized about having sex with his own daughter, Ivanka Trump
-- 2024 Republican presidential candidate Chris Christie does what other Republicans won't, calling out Donald Trump for his obvious grift of MAGA cultists, even targeting Jared Kushner
-- 2024 Republican presidential candidate Ron DeSantis claims he will eliminate the IRS, Department of Education, Commerce, and Energy in an interview so absurd, even Fox News host Martha MacCallum is visibly skeptical
-- Voicemail from a great-grandmother explains that she LOVES THE SHOW!
-- On the Bonus Show: Bill Cosby is sued by 9 more women, Joe Biden says he "isn't big" on abortion but believes it should be legal, new Denver restaurant eliminates tipping and raises wages, much more...

✉️ StartMail: Get 50% OFF a year subscription at https://startmail.com/pakman