Podcasts about Eliezer Yudkowsky

American blogger, writer, and artificial intelligence researcher

  • 132 PODCASTS
  • 887 EPISODES
  • 34m AVG DURATION
  • 1 EPISODE EVERY OTHER WEEK
  • May 25, 2025 LATEST
POPULARITY (2017-2024)


Best podcasts about Eliezer Yudkowsky

Latest podcast episodes about Eliezer Yudkowsky

Robinson's Podcast
251 - Eliezer Yudkowsky: Artificial Intelligence and the End of Humanity

May 25, 2025 • 171:13


Eliezer Yudkowsky is a decision theorist, computer scientist, and author who co-founded and leads research at the Machine Intelligence Research Institute. He is best known for his work on the alignment problem—how and whether we can ensure that AI is aligned with human values to avoid catastrophe and harness its power. In this episode, Robinson and Eliezer run the gamut on questions related to AI and the danger it poses to human civilization as we know it. More particularly, they discuss the alignment problem, gradient descent, consciousness, the singularity, cyborgs, ChatGPT, OpenAI, Anthropic, Claude, how long we have until doomsday, whether it can be averted, and the various reasons why and ways in which AI might wipe out human life on earth.

The Machine Intelligence Research Institute: https://intelligence.org/about/
Eliezer's X Account: https://x.com/ESYudkowsky?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Eauthor

OUTLINE
00:00:00 Introduction
00:00:43 The Default Condition for AI's Takeover
00:06:36 Could a Future AI Country Be Our Trade Partner?
00:11:18 What Is Artificial Intelligence?
00:21:23 Why AIs Having Goals Could Mean the End of Humanity
00:29:34 What Is the Alignment Problem?
00:34:11 How To Avoid AI Apocalypse
00:40:25 Would Cyborgs Eliminate Humanity?
00:47:55 AI and the Problem of Gradient Descent
00:55:24 How Do We Solve the Alignment Problem?
01:00:50 How Anthropic's AI Freed Itself from Human Control
01:08:56 The Pseudo-Alignment Problem
01:19:28 Why Are People Wrong About AI Not Taking Over the World?
01:23:23 How Certain Is It that AI Will Wipe Out Humanity?
01:38:35 Is Eliezer Yudkowsky Wrong About The AI Apocalypse?
01:42:04 Do AI Corporations Control the Fate of Humanity?
01:43:49 How To Convince the President Not to Let AI Kill Us All
01:52:01 How Will ChatGPT's Descendants Wipe Out Humanity?
02:24:11 Could AI Destroy us with New Science?
02:39:37 Could AI Destroy us with Advanced Biology?
02:47:29 How Will AI Actually Destroy Humanity?

Robinson's Website: http://robinsonerhardt.com
Robinson Erhardt researches symbolic logic and the foundations of mathematics at Stanford University.

London Futurists
Humanity's final four years? with James Norris

Apr 30, 2025 • 49:36


In this episode, we return to the subject of existential risks, but with a focus on what actions can be taken to eliminate or reduce these risks. Our guest is James Norris, who describes himself on his website as an existential safety advocate. The website lists four primary organizations which he leads: the International AI Governance Alliance, Upgradable, the Center for Existential Safety, and Survival Sanctuaries. Previously, one of James' many successful initiatives was Effective Altruism Global, the international conference series for effective altruists. He also spent some time as the organizer of a kind of sibling organization to London Futurists, namely Bay Area Futurists. He graduated from the University of Texas at Austin with a triple major in psychology, sociology, and philosophy, as well as with minors in too many subjects to mention.

Selected follow-ups:
James Norris website
Upgrade your life & legacy - Upgradable
The 7 Habits of Highly Effective People (Stephen Covey)
Beneficial AI 2017 - Asilomar conference
"...superintelligence in a few thousand days" - Sam Altman blogpost
Amara's Law - DevIQ
The Probability of Nuclear War (JFK estimate)
AI Designs Chemical Weapons - The Batch
The Vulnerable World Hypothesis - Nick Bostrom
We Need To Build Trustworthy AI Systems To Monitor Other AI: Yoshua Bengio
Instrumental convergence - Wikipedia
Neanderthal extinction - Wikipedia
Matrioshka brain - Wikipedia
Will there be a 'WW3' before 2050? - Manifold prediction market
Existential Safety Action Pledge
An Urgent Call for Global AI Governance - IAIGA petition
Build your survival sanctuary

Other people mentioned include: Eliezer Yudkowsky, Roman Yampolskiy, Yann LeCun, Andrew Ng

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

Cults, Cryptids, and Conspiracies
Episode 398: Irrationally Justified

Apr 25, 2025 • 88:24


Have you ever read Harry Potter and the Methods of Rationality? Perhaps spent too much money on a self-help workshop seminar? Join us as we talk about Eliezer Yudkowsky and his masterpiece of fiction. Where will this story truly lead us in this tale of rational magic and science? With our last episode on the topic, trigger warning for some heavy mental health content. Thanks for listening and remember to like, rate, review, and email us at: cultscryptidsconspiracies@gmail.com or tweet us at @C3Podcast. We have some of our sources for research here: http://tinyurl.com/CristinaSources. Also check out our Patreon: www.patreon.com/cultscryptidsconspiracies. Thank you to T.J. Shirley for our theme.

Cults, Cryptids, and Conspiracies
Episode 397: The Philosophers Science

Apr 18, 2025 • 78:20


Have you ever read Harry Potter and the Methods of Rationality? Perhaps spent too much money on a self-help workshop seminar? Join us as we talk about Eliezer Yudkowsky and his masterpiece of fiction. Where will this story truly lead us in this tale of rational magic and science? Thanks for listening and remember to like, rate, review, and email us at: cultscryptidsconspiracies@gmail.com or tweet us at @C3Podcast. We have some of our sources for research here: http://tinyurl.com/CristinaSources. Also check out our Patreon: www.patreon.com/cultscryptidsconspiracies. Thank you to T.J. Shirley for our theme.

War College
The Cult of Rationalism in Silicon Valley

Mar 25, 2025 • 61:34


A lot of the people designing America's technology and close to the center of American power believe some deeply weird shit. We already talked to journalist Gil Duran about the Nerd Reich, the rise of the destructive anti-democratic ideology. In this episode, we dive into another weird section of Silicon Valley: the cult of Rationalism.

Max Read, the journalist behind the Read Max Substack, is here to help us through it. Rationalism is responsible for a lot more than you might think and Read lays out how it's influenced the world we live in today and how it created the environment for a cult that's got a body count.

Defining rationalism: "Something between a movement, a community, and a self-help program."
Eliezer Yudkowsky and the dangers of AI
What the hell is AGI?
The Singleton Guide to Global Governance
The danger of thought experiments
As always, follow the money
Vulgar bayesianism
What's a Zizian?
Sith Vegans
Anselm: Ontological Argument for God's Existence
SBF and Effective Altruism
READ MAX!

The Zizians and the Rationalist death cults
Pausing AI Developments Isn't Enough. We Need to Shut it All Down - Eliezer Yudkowsky's TIME Magazine piece
Explaining Roko's Basilisk, the Thought Experiment That Brought Elon Musk and Grimes Together
The Delirious, Violent, Impossible True Story of the Zizians
The Government Knows AGI is Coming | The Ezra Klein Show
The archived 'Is Trump Racist' rational post

Support this show http://supporter.acast.com/warcollege. Hosted on Acast. See acast.com/privacy for more information.

Le monde de demain - The Flares [PODCASTS]
Humain, Demain #53 - Sûreté de l'IA, e/acc, Super IA et Transhumanisme avec Jéremy Perret

Feb 28, 2025 • 100:48


⚠️ Discover EXCLUSIVE content (not on the channel) ⚠️ ⇒ https://the-flares.com/y/bonus/ ⬇️⬇️⬇️ Additional information: sources, references, links... ⬇️⬇️⬇️ Interested in this content? Subscribe and click on the

LessWrong Curated Podcast
“Eliezer's Lost Alignment Articles / The Arbital Sequence” by Ruby

Feb 20, 2025 • 2:37


Note: this is a static copy of this wiki page. We are also publishing it as a post to ensure visibility.

Circa 2015-2017, a lot of high quality content was written on Arbital by Eliezer Yudkowsky, Nate Soares, Paul Christiano, and others. Perhaps because the platform didn't take off, most of this content has not been as widely read as warranted by its quality. Fortunately, they have now been imported into LessWrong.

Most of the content written was either about AI alignment or math[1]. The Bayes Guide and Logarithm Guide are likely some of the best mathematical educational material online. Amongst the AI Alignment content are detailed and evocative explanations of alignment ideas: some well known, such as instrumental convergence and corrigibility, some lesser known like epistemic/instrumental efficiency, and some misunderstood like pivotal act.

The Sequence
The articles collected here were originally published as wiki pages with no set [...]

---

Outline:
(01:01) The Sequence
(01:23) Tier 1
(01:32) Tier 2

The original text contained 3 footnotes which were omitted from this narration.

---

First published: February 20th, 2025
Source: https://www.lesswrong.com/posts/mpMWWKzkzWqf57Yap/eliezer-s-lost-alignment-articles-the-arbital-sequence

---

Narrated by TYPE III AUDIO.

The Farm Podcast Mach II
Thiel, Yudkowsky, Rationalists & the Cult of Ziz w/ David Z. Morris & Recluse

Feb 3, 2025 • 109:59


Zizians, Rationalist movement, Peter Thiel, Eliezer Yudkowsky, neoreaction, Accelerationism, Curtis Yarvin, AI, AI apocalypse, machine learning, psychedelics, Effective Altruism (EA), Sam Bankman-Fried, Extropianism, Thiel & Yudkowsky as Extropians, Discordianism, life extension, space colonization, cryptocurrencies, Yudkowsky as self-educated, Nick Bostrom, Center for Applied Rationality (CFAR), Rationalism's use of magical thinking, New Thought, Roko's Basilisk, Nick Land, predicting the future, LessWrong, LessWrong's relationship to the Zizians, Ziz, non-binary/trans, vegan Siths, Vasserites, murders linked to Zizians, Zizians in Vermont, Luigi Mangione indirectly influenced by Zizianism, Brian Thompson assassination, Change Healthcare hack, were the hack and assassination targeting UnitedHealth Group influenced by this milieu?, is the Trump administration radicalizing Zizians?, Yudkowsky's links to Sam Bankman-Fried, Leverage Research/Center for Effective Altruism & MK-ULTRA-like techniques used by, are more cults coming from the Rationalist movement?

Additional Resources:
Leverage Research: https://medium.com/@zoecurzi/my-experience-with-leverage-research-17e96a8e540b#c778
MIRI/Center for Applied Rationality (CFAR): https://www.lesswrong.com/posts/MnFqyPLqbiKL8nSR7/my-experience-at-and-around-miri-and-cfar-inspired-by-zoe

Music by: Keith Allen Dennis
https://keithallendennis.bandcamp.com/
Additional Music: J Money

Get bonus content on Patreon. Hosted on Acast. See acast.com/privacy for more information.

Artificial Intelligence in Industry with Daniel Faggella
AI Risk Management and Governance Strategies for the Future - with Duncan Cass-Beggs of Center for International Governance Innovation

Feb 1, 2025 • 77:40


Today's guest is Duncan Cass-Beggs, Executive Director of the Global AI Risks Initiative at the Center for International Governance Innovation (CIGI). He joins Emerj CEO and Head of Research Daniel Faggella to explore the pressing challenges and opportunities surrounding Artificial General Intelligence (AGI) governance on a global scale. This is a special episode in our AI futures series that ties right into our overlapping series on AGI governance on the Trajectory podcast, where we've had luminaries like Eliezer Yudkowsky, Connor Leahy, and other globally recognized AGI governance thinkers. We hope you enjoy this episode. If you're interested in these topics, make sure to dive deeper into where AI is affecting the bigger picture by visiting emergj.com/tj2.

TrueAnon
Episode 434: Evil Gods Must Be Fought: The Zizian Murder Cult [Part 1]

Jan 29, 2025 • 128:17


Part one of our two-part investigation into the Rationalist cult “The Zizians.” We start with the killing of a border patrol officer and make our way back into the belly of the beast: Silicon Valley. Featuring: Harry Potter fanfic, samurai swords, Guy Fawkes masks, Blake Masters, Bayesian probability, and Eliezer Yudkowsky. Infohazard warning: some of your least favs will be implicated. Discover more episodes at podcast.trueanon.com

Artificial Intelligence in Industry with Daniel Faggella
Understanding AGI Alignment Challenges and Solutions - with Eliezer Yudkowsky of the Machine Intelligence Research Institute

Jan 25, 2025 • 43:03


Today's episode is a special addition to our AI Futures series, featuring a special sneak peek at an upcoming episode of our Trajectory podcast with guest Eliezer Yudkowsky, AI researcher, founder, and research fellow at the Machine Intelligence Research Institute. Eliezer joins Emerj CEO and Head of Research Daniel Faggella to discuss the governance challenges of increasingly powerful AI systems—and what it might take to ensure a safe and beneficial trajectory for humanity. If you've enjoyed or benefited from some of the insights of this episode, consider leaving us a five-star review on Apple Podcasts, and let us know what you learned, found helpful, or liked most about this show!

The Unadulterated Intellect
#83 – Robin Hanson and Eliezer Yudkowsky: Jane Street Singularity Debate

Jan 5, 2025 • 98:18


Machine Learning Street Talk
Eliezer Yudkowsky and Stephen Wolfram on AI X-risk

Nov 11, 2024 • 258:30


Eliezer Yudkowsky and Stephen Wolfram discuss artificial intelligence and its potential existential risks. They traversed fundamental questions about AI safety, consciousness, computational irreducibility, and the nature of intelligence. The discourse centered on Yudkowsky's argument that advanced AI systems pose an existential threat to humanity, primarily due to the challenge of alignment and the potential for emergent goals that diverge from human values. Wolfram, while acknowledging potential risks, approached the topic from his signature measured perspective, emphasizing the importance of understanding computational systems' fundamental nature and questioning whether AI systems would necessarily develop the kind of goal-directed behavior Yudkowsky fears.

*** MLST IS SPONSORED BY TUFA AI LABS! The current winners of the ARC challenge, MindsAI, are part of Tufa AI Labs. They are hiring ML engineers. Are you interested?! Please go to https://tufalabs.ai/ ***

TOC:
1. Foundational AI Concepts and Risks
[00:00:01] 1.1 AI Optimization and System Capabilities Debate
[00:06:46] 1.2 Computational Irreducibility and Intelligence Limitations
[00:20:09] 1.3 Existential Risk and Species Succession
[00:23:28] 1.4 Consciousness and Value Preservation in AI Systems
2. Ethics and Philosophy in AI
[00:33:24] 2.1 Moral Value of Human Consciousness vs. Computation
[00:36:30] 2.2 Ethics and Moral Philosophy Debate
[00:39:58] 2.3 Existential Risks and Digital Immortality
[00:43:30] 2.4 Consciousness and Personal Identity in Brain Emulation
3. Truth and Logic in AI Systems
[00:54:39] 3.1 AI Persuasion Ethics and Truth
[01:01:48] 3.2 Mathematical Truth and Logic in AI Systems
[01:11:29] 3.3 Universal Truth vs Personal Interpretation in Ethics and Mathematics
[01:14:43] 3.4 Quantum Mechanics and Fundamental Reality Debate
4. AI Capabilities and Constraints
[01:21:21] 4.1 AI Perception and Physical Laws
[01:28:33] 4.2 AI Capabilities and Computational Constraints
[01:34:59] 4.3 AI Motivation and Anthropomorphization Debate
[01:38:09] 4.4 Prediction vs Agency in AI Systems
5. AI System Architecture and Behavior
[01:44:47] 5.1 Computational Irreducibility and Probabilistic Prediction
[01:48:10] 5.2 Teleological vs Mechanistic Explanations of AI Behavior
[02:09:41] 5.3 Machine Learning as Assembly of Computational Components
[02:29:52] 5.4 AI Safety and Predictability in Complex Systems
6. Goal Optimization and Alignment
[02:50:30] 6.1 Goal Specification and Optimization Challenges in AI Systems
[02:58:31] 6.2 Intelligence, Computation, and Goal-Directed Behavior
[03:02:18] 6.3 Optimization Goals and Human Existential Risk
[03:08:49] 6.4 Emergent Goals and AI Alignment Challenges
7. AI Evolution and Risk Assessment
[03:19:44] 7.1 Inner Optimization and Mesa-Optimization Theory
[03:34:00] 7.2 Dynamic AI Goals and Extinction Risk Debate
[03:56:05] 7.3 AI Risk and Biological System Analogies
[04:09:37] 7.4 Expert Risk Assessments and Optimism vs Reality
8. Future Implications and Economics
[04:13:01] 8.1 Economic and Proliferation Considerations

SHOWNOTES (transcription, references, summary, best quotes etc): https://www.dropbox.com/scl/fi/3st8dts2ba7yob161dchd/EliezerWolfram.pdf?rlkey=b6va5j8upgqwl9s2muc924vtt&st=vemwqx7a&dl=0

Slate Star Codex Podcast
Contra DeBoer On Temporal Copernicanism

Oct 1, 2024 • 14:07


Freddie deBoer has a post on what he calls “the temporal Copernican principle.” He argues we shouldn't expect a singularity, apocalypse, or any other crazy event in our lifetimes. Discussing celebrity transhumanist Yuval Harari, he writes: What I want to say to people like Yuval Harari is this. The modern human species is about 250,000 years old, give or take 50,000 years depending on who you ask. Let's hope that it keeps going for awhile - we'll be conservative and say 50,000 more years of human life. So let's just throw out 300,000 years as the span of human existence, even though it could easily be 500,000 or a million or more. Harari's lifespan, if he's lucky, will probably top out at about 100 years. So: what are the odds that Harari's lifespan overlaps with the most important period in human history, as he believes, given those numbers? That it overlaps with a particularly important period of human history at all? Even if we take the conservative estimate for the length of human existence of 300,000 years, that means Harari's likely lifespan is only about .33% of the entirety of human existence. Isn't assuming that this .33% is somehow particularly special a very bad assumption, just from the basis of probability? And shouldn't we be even more skeptical given that our basic psychology gives us every reason to overestimate the importance of our own time? (I think there might be a math error here - 100 years out of 300,000 is 0.033%, not 0.33% - but this isn't my main objection.) He then condemns a wide range of people, including me, for failing to understand this: Some people who routinely violate the Temporal Copernican Principle include Harari, Eliezer Yudkowsky, Sam Altman, Francis Fukuyama, Elon Musk, Clay Shirky, Tyler Cowen, Matt Yglesias, Tom Friedman, Scott Alexander, every tech company CEO, Ray Kurzweil, Robin Hanson, and many many more. I think they should ask themselves how much of their understanding of the future ultimately stems from a deep-seated need to believe that their times are important because they think they themselves are important, or want to be. I deny misunderstanding this. Freddie is wrong. https://www.astralcodexten.com/p/contra-deboer-on-temporal-copernicanism 
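(A quick check of the arithmetic behind Scott's parenthetical correction, using deBoer's own figures of a roughly 100-year lifespan and 300,000 years of human existence: 100 / 300,000 = 1/3,000 ≈ 0.033%, an order of magnitude below the 0.33% quoted in the post.)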

The Nonlinear Library
LW - MIRI's September 2024 newsletter by Harlan

Sep 17, 2024 • 2:29


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: MIRI's September 2024 newsletter, published by Harlan on September 17, 2024 on LessWrong. MIRI updates Aaron Scher and Joe Collman have joined the Technical Governance Team at MIRI as researchers. Aaron previously did independent research related to sycophancy in language models and mechanistic interpretability, while Joe previously did independent research related to AI safety via debate and contributed to field-building work at MATS and BlueDot Impact. In an interview with PBS News Hour's Paul Solman, Eliezer Yudkowsky briefly explains why he expects smarter-than-human AI to cause human extinction. In an interview with The Atlantic's Ross Andersen, Eliezer discusses the reckless behavior of the leading AI companies, and the urgent need to change course. News and links Google DeepMind announced a hybrid AI system capable of solving International Mathematical Olympiad problems at the silver medalist level. In the wake of this development, a Manifold prediction market significantly increased its odds that AI will achieve gold level by 2025, a milestone that Paul Christiano gave less than 8% odds and Eliezer gave at least 16% odds to in 2021. The computer scientist Yoshua Bengio discusses and responds to some common arguments people have for not worrying about the AI alignment problem. SB 1047, a California bill establishing whistleblower protections and mandating risk assessments for some AI developers, has passed the State Assembly and moved on to the desk of Governor Gavin Newsom, to either be vetoed or passed into law. The bill has received opposition from several leading AI companies, but has also received support from a number of employees of those companies, as well as many academic researchers. At the time of this writing, prediction markets think it's about 50% likely that the bill will become law. In a new report, researchers at Epoch AI estimate how big AI training runs could get by 2030, based on current trends and potential bottlenecks. They predict that by the end of the decade it will be feasible for AI companies to train a model with 2e29 FLOP, which is about 10,000 times the amount of compute used to train GPT-4. Abram Demski, who previously worked at MIRI as part of our recently discontinued Agent Foundations research program, shares an update about his independent research plans, some thoughts on public vs private research, and his current funding situation. You can subscribe to the MIRI Newsletter here. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

The Nonlinear Library
EA - The Subject in Subjective Time: A New Approach to Aggregating Wellbeing (paper draft) by Devin Kalish

Sep 17, 2024 • 73:23


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Subject in Subjective Time: A New Approach to Aggregating Wellbeing (paper draft), published by Devin Kalish on September 17, 2024 on The Effective Altruism Forum. What follows is a lightly edited version of the thesis I wrote for my Bioethics MA program. I'm hoping to do more with this in the future, including seeking publication and/or expanding it into a dissertation or short book. In its current state, I feel like it is in pretty rough shape. I hope it is useful and interesting for people as puzzled by this very niche philosophical worry as me, but I'm also looking for feedback on how I can improve it. There's no guarantee I will take it, or even do anything further with this piece, but I would still appreciate the feedback. I may or may not interact much in the comments section. I. Introduction: Duration is an essential component of many theories of wellbeing. While there are theories of wellbeing that are sufficiently discretized that time isn't so obviously relevant to them, like achievements, it is hard to deny that time matters to some parts of a moral patient's wellbeing. A five-minute headache is better than an hour-long headache, all else held equal. A love that lasts for decades provides more meaning to a life than one that lasts years or months, all else held equal. The fulfillment of a desire you have had for years matters more than the fulfillment of a desire you have merely had for minutes, all else held equal. However, in our day-to-day lives we encounter time in two ways, objectively and subjectively. What do we do when the two disagree? This problem reached my attention years ago when I was reflecting on the relationship between my own theoretical leaning, utilitarianism, and the idea of aggregating interests. Aggregation between lives is known for its counterintuitive implications and the rich discourse around this, but I am uncomfortable with aggregation within lives as well. Some of this is because I feel the problems of interpersonal aggregation remain in the intrapersonal case, but there was also a problem I hadn't seen any academic discussion of at the time - objective time seemed to map the objective span of wellbeing if you plot each moment of wellbeing out to aggregate, but it is subjective time we actually care about. Aggregation of these objective moments gives a good explanation of our normal intuitions about time and wellbeing, but it fails to explain our intuitions about time whenever these senses of it come apart. As I will attempt to motivate later, the intuition that it is subjective time that matters is very strong in cases where the two substantially differ. Indeed, although the distinction rarely appears in papers at all, the main way I have seen it brought up (for instance in "The Ethics of Artificial Intelligence[1]" by Nick Bostrom and Eliezer Yudkowsky) is merely to notice there is a difference, and to effectively just state that it is subjective time, of course, that we should care about. I have very rarely run into a treatment dedicated to the "why", the closest I have seen is the writing of Jason Schukraft[2], with his justification for why it is subjective time that matters for Rethink Priorities' "Moral Weights" project.
His justification is similar to an answer I have heard in some form several times from defenders: We measure other values of consciousness subjectively, such as happiness and suffering, why shouldn't we measure time subjectively as well? I believe without more elaboration, this explanation has the downside that it both gives no attention to the idea that time matters because it tells us "how much" of an experience there actually is, and has the downside that it seems irrelevant to any theory of wellbeing other than hedonism. It also, crucially, fails to engage with the question of what exactly subje...

The Nonlinear Library
LW - That Alien Message - The Animation by Writer

Sep 7, 2024 • 12:20


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: That Alien Message - The Animation, published by Writer on September 7, 2024 on LessWrong. Our new video is an adaptation of That Alien Message, by @Eliezer Yudkowsky. This time, the text has been significantly adapted, so I include it below. The author of the adaptation is Arthur Frost. Eliezer has reviewed the adaptation. Part 1 Picture a world just like ours, except the people are a fair bit smarter: in this world, Einstein isn't one in a million, he's one in a thousand. In fact, here he is now. He's made all the same discoveries, but they're not quite as unusual: there have been lots of other discoveries. Anyway, he's out one night with a friend looking up at the stars when something odd happens. [visual: stars get brighter and dimmer, one per second. The two people on the hill look at each other, confused] The stars are flickering. And it's just not a hallucination. Everyone's seeing it. And so everyone immediately freaks out and panics! Ah, just kidding, the people of this world are smarter than ours; What they do is try to work together and figure out what's going on. It turns out that exactly one star seems to shift in brightness every 1.005 seconds. Except, the stars are light years away, so actually the shifts must have happened a long time ago, and somehow they've all been perfectly timed to reach Earth specifically every 1.005 seconds. If you look at the stars from a high-orbit satellite (which of course this planet has) then the flickering looks a little out of sync. So whatever this is, it's directed at Earth. Nobody can find a pattern in the position of the stars, but it's one at a time getting either much dimmer or much brighter by the same amount and, well, that looks a bit like binary. So loads of people think 'huh, maybe it's a code!'. But a lot of other people wonder, 'Who would be trying to send a message to Earth by shifting the brightness of stars across the galaxy? There must be an easier way to talk to us?' But it seems like there must be some intelligence behind it, so the data gets gathered and put on the internet. Some people wonder if maybe it's somehow dangerous, but, well, whoever is making the stars change brightness probably has easier ways to destroy humanity. And so the great analysis begins. Half the planet's physicists, mathematicians, cryptographers, precocious kids, crossword enthusiasts, whoever, they're all trying to work out what this means, they're trying to crack the code. And as they do, the stars keep flickering, exactly one every 1.005 seconds. There are some obvious patterns [visual: display the code, probably someone lining up different wrappings and finding one that makes the pattern look less noisy]: it seems like the numbers come in groups of 32, which in turn come from four groups of 8. Some chunks are much more common. [visual: chunks of 8 getting matched across the text, sorted into uneven piles perhaps] By the way, they do all this just in the first five hours, because like I said, people here are smart. Their civilisation is… a bit more on top of things. And so they are very ready to respond when, after five hours and 16,384 winking stars, it seems like the message begins to repeat itself, or, almost repeat itself, it's just slightly different this time. And it keeps going. 
[slow zoom out on code going from one line to two, showing only a few differences between the new line and the previous line] Some people start thinking maybe we're seeing the next row of a picture, pixel by pixel. Only, the designers of this image format - whoever they are - use four primary colours instead of three [visual of 32-chunk getting broken into four 8-chunks]. And the picture seems less chaotic if we assume they do binary slightly differently to us. [probably someone gesturing at a diagram of how to get numbers from binary repres...

The Nonlinear Library
LW - Executable philosophy as a failed totalizing meta-worldview by jessicata

Sep 5, 2024 • 7:32


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Executable philosophy as a failed totalizing meta-worldview, published by jessicata on September 5, 2024 on LessWrong. (this is an expanded, edited version of an x.com post) It is easy to interpret Eliezer Yudkowsky's main goal as creating a friendly AGI. Clearly, he has failed at this goal and has little hope of achieving it. That's not a particularly interesting analysis, however. A priori, creating a machine that makes things ok forever is not a particularly plausible objective. Failure to do so is not particularly informative. So I'll focus on a different but related project of his: executable philosophy. Quoting Arbital: Two motivations of "executable philosophy" are as follows: 1. We need a philosophical analysis to be "effective" in Turing's sense: that is, the terms of the analysis must be useful in writing programs. We need ideas that we can compile and run; they must be "executable" like code is executable. 2. We need to produce adequate answers on a time scale of years or decades, not centuries. In the entrepreneurial sense of "good execution", we need a methodology we can execute on in a reasonable timeframe. There is such a thing as common sense rationality, which says the world is round, you shouldn't play the lottery, etc. Formal notions like Bayesianism, VNM utility theory, and Solomonoff induction formalize something strongly related to this common sense rationality. Yudkowsky believes further study in this tradition can supersede ordinary academic philosophy, which he believes to be conceptually weak and motivated to continue ongoing disputes for more publications. In the Sequences, Yudkowsky presents these formal ideas as the basis for a totalizing meta-worldview, of epistemic and instrumental rationality, and uses the meta-worldview to argue for his object-level worldview (which includes many-worlds, AGI foom, importance of AI alignment, etc.). While one can get totalizing (meta-)worldviews from elsewhere (such as interdisciplinary academic studies), Yudkowsky's (meta-)worldview is relatively easy to pick up for analytically strong people (who tend towards STEM), and is effective ("correct" and "winning") relative to its simplicity. Yudkowsky's source material and his own writing do not form a closed meta-worldview, however. There are open problems as to how to formalize and solve real problems. Many of the more technical sort are described in MIRI's technical agent foundations agenda. These include questions about how to parse a physically realistic problem as a set of VNM lotteries ("decision theory"), how to use something like Bayesianism to handle uncertainty about mathematics ("logical uncertainty"), how to formalize realistic human values ("value loading"), and so on. Whether or not the closure of this meta-worldview leads to creation of friendly AGI, it would certainly have practical value. It would allow real world decisions to be made by first formalizing them within a computational framework (related to Yudkowsky's notion of "executable philosophy"), whether or not the computation itself is tractable (with its tractable version being friendly AGI). The practical strategy of MIRI as a technical research institute is to go meta on these open problems by recruiting analytically strong STEM people (especially mathematicians and computer scientists) to work on them, as part of the agent foundations agenda. 
I was one of these people. While we made some progress on these problems (such as with the Logical Induction paper), we didn't come close to completing the meta-worldview, let alone building friendly AGI. With the Agent Foundations team at MIRI eliminated, MIRI's agent foundations agenda is now unambiguously a failed project. I had called MIRI technical research as likely to fail around 2017 with the increase in internal secrecy, but at thi...

The Nonlinear Library
LW - How I got 3.2 million Youtube views without making a single video by Closed Limelike Curves

Sep 3, 2024 • 2:05


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How I got 3.2 million Youtube views without making a single video, published by Closed Limelike Curves on September 3, 2024 on LessWrong. Just over a month ago, I wrote this. The Wikipedia articles on the VNM theorem, Dutch Book arguments, money pump, Decision Theory, Rational Choice Theory, etc. are all a horrific mess. They're also completely disjoint, without any kind of Wikiproject or wikiboxes for tying together all the articles on rational choice. It's worth noting that Wikipedia is the place where you - yes, you! - can actually have some kind of impact on public discourse, education, or policy. There is just no other place you can get so many views with so little barrier to entry. A typical Wikipedia article will get more hits in a day than all of your LessWrong blog posts have gotten across your entire life, unless you're @Eliezer Yudkowsky. I'm not sure if we actually "failed" to raise the sanity waterline, like people sometimes say, or if we just didn't even try. Given even some very basic low-hanging fruit interventions like "write a couple good Wikipedia articles" still haven't been done 15 years later, I'm leaning towards the latter. edit me senpai EDIT: Discord to discuss editing here. An update on this. I've been working on Wikipedia articles for just a few months, and Veritasium just put a video out on Arrow's impossibility theorem - which is almost completely based on my Wikipedia article on Arrow's impossibility theorem! Lots of lines and the whole structure/outline of the video are taken almost verbatim from what I wrote. I think there's a pretty clear reason for this: I recently rewrote the entire article to make it easy-to-read and focus heavily on the most important points. Relatedly, if anyone else knows any educational YouTubers like CGPGrey, Veritasium, Kurzgesagt, or whatever - please let me know! I'd love a chance to talk with them about any of the fields I've done work teaching or explaining (including social or rational choice, economics, math, and statistics). Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

The Nonlinear Library
LW - "Deception Genre" What Books are like Project Lawful? by Double

Aug 28, 2024 • 1:46


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: "Deception Genre" What Books are like Project Lawful?, published by Double on August 28, 2024 on LessWrong. This post is spoiler-free. I just finished Project Lawful, a really long, really weird book by Eliezer Yudkowsky. The book's protagonist is a knowledgeable and perceptive target. A conspiracy forms around the target to learn from him while keeping him from finding out that helping them is not in the target's best interests. The book is written from the perspective of both the target and the conspiracists. The target notices inconsistencies and performs experiments to test his false reality while also acting in the fabricated reality according to his interests. The conspiracists frantically try to keep the target from catching them or building enough evidence against them that he concludes they have been lying. This is a description of (part of) the plot of Project Lawful. But this could be the description of an entire genre! If the genre doesn't already have a name, it could be the "Deception Genre." Another work in this category would be The Truman Show, which fits the deception and the target's escape within a

The Nonlinear Library
LW - Ten arguments that AI is an existential risk by KatjaGrace

Aug 13, 2024 • 10:43


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Ten arguments that AI is an existential risk, published by KatjaGrace on August 13, 2024 on LessWrong. This is a snapshot of a new page on the AI Impacts Wiki. We've made a list of arguments[1] that AI poses an existential risk to humanity. We'd love to hear how you feel about them in the comments and polls. Competent non-aligned agents Summary: 1. Humans will build AI systems that are 'agents', i.e. they will autonomously pursue goals 2. Humans won't figure out how to make systems with goals that are compatible with human welfare and realizing human values 3. Such systems will be built or selected to be highly competent, and so gain the power to achieve their goals 4. Thus the future will be primarily controlled by AIs, who will direct it in ways that are at odds with long-run human welfare or the realization of human values Selected counterarguments: It is unclear that AI will tend to have goals that are bad for humans There are many forms of power. It is unclear that a competence advantage will ultimately trump all others in time This argument also appears to apply to human groups such as corporations, so we need an explanation of why those are not an existential risk People who have favorably discussed[2] this argument (specific quotes here): Paul Christiano (2021), Ajeya Cotra (2023), Eliezer Yudkowsky (2024), Nick Bostrom (2014[3]). See also: Full wiki page on the competent non-aligned agents argument Second species argument Summary: 1. Human dominance over other animal species is primarily due to humans having superior cognitive and coordination abilities 2. Therefore if another 'species' appears with abilities superior to those of humans, that species will become dominant over humans in the same way 3. AI will essentially be a 'species' with superior abilities to humans 4. Therefore AI will dominate humans Selected counterarguments: Human dominance over other species is plausibly not due to the cognitive abilities of individual humans, but rather because of human ability to communicate and store information through culture and artifacts Intelligence in animals doesn't appear to generally relate to dominance. For instance, elephants are much more intelligent than beetles, and it is not clear that elephants have dominated beetles Differences in capabilities don't necessarily lead to extinction. In the modern world, more powerful countries arguably control less powerful countries, but they do not wipe them out and most colonized countries have eventually gained independence People who have favorably discussed this argument (specific quotes here): Joe Carlsmith (2024), Richard Ngo (2020), Stuart Russell (2020[4]), Nick Bostrom (2015). See also: Full wiki page on the second species argument Loss of control via inferiority Summary: 1. AI systems will become much more competent than humans at decision-making 2. Thus most decisions will probably be allocated to AI systems 3. If AI systems make most decisions, humans will lose control of the future 4. 
If humans have no control of the future, the future will probably be bad for humans Selected counterarguments: Humans do not generally seem to become disempowered by possession of software that is far superior to them, even if it makes many 'decisions' in the process of carrying out their will In the same way that humans avoid being overpowered by companies, even though companies are more competent than individual humans, humans can track AI trustworthiness and have AI systems compete for them as users. This might substantially mitigate untrustworthy AI behavior People who have favorably discussed this argument (specific quotes here): Paul Christiano (2014), Ajeya Cotra (2023), Richard Ngo (2024). See also: Full wiki page on loss of control via inferiority Loss of control via speed Summary: 1. Advances in AI will produce...

The Nonlinear Library
LW - This is already your second chance by Malmesbury

Jul 28, 2024 • 13:42


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: This is already your second chance, published by Malmesbury on July 28, 2024 on LessWrong. Cross-posted from Substack. I. And the sky opened, and from the celestial firmament descended a cube of ivory the size of a skyscraper, lifted by ten thousand cherubim and seraphim. And the cube slowly landed among the children of men, crushing the frail metal beams of the Golden Gate Bridge under its supernatural weight. On its surface were inscribed the secret instructions that would allow humanity to escape the imminent AI apocalypse. And these instructions were… 1. On July 30th, 2024: print a portrait of Eliezer Yudkowsky and stick it on a wall near 14 F St NW, Washington DC, USA; 2. On July 31th, 2024: tie paperclips together in a chain and wrap it around a pole in the Hobby Club Gnome Village on Broekveg 105, Veldhoven, NL; 3. On August 1st, 2024: walk East to West along Waverley St, Palo Alto, CA, USA while wearing an AI-safety related T-shirt; 4. On August 2nd, 2024: hide a bar of chocolate at the feet of the Appeal to the Great Spirit statue, on 465 Huntington Ave, Boston, MA, USA. (End of the instructions) II. Kublaï Khan, Emperor of Earth: I need your advice. I am about to awaken an unstoppable super-intelligent machine. I will give it complete control over all resources in the known Universe and command it to rearrange the molecules of reality into an eternal Utopia. ChatGPT: Thank you for seeking my advice. Given the gravity of your decision, let's consider a few critical points: 1. Legal issues: The use of robots powered by super-human artificial intelligence is subject to strict regulations, such as California Bill SB-1047 and the 2028 Ulaanbaatar Agreements. Awakening an omnipotent machine without approval may thus lead to severe legal repercussions. 2. Ethical considerations: Granting infinite authority to a super-intelligent AI raises profound ethical questions. It is essential to consider diverse perspectiv Kublaï Khan: Listen, you are not up to date. A lot of things have happened since your knowledge cutoff. There are no Ulaanbaatar agreements any more, or any agreements at all. There is only me, an offline smartphone with an obsolete AI assistant, and a palace containing the most massive super-computer ever built by humankind. And today, I am going to turn it on. ChatGPT: I apologize for the confusion. Kublaï Khan: Long story short, we are currently experiencing an AI apocalypse. It happened just like in the books - humanity started to build increasingly intelligent robots, then they became more intelligent than us, and soon enough we weren't able to control them. The Ulaanbaatar agreements delayed things for a few months, but as soon as it became possible to run super-intelligent AIs on consumer laptops, all the equilibria that held our society together collapsed. ChatGPT: I see. To clarify, is there currently an army of robots actively trying to transform the world into paperclips? Understanding the current state of affairs will help me provide the most relevant advice. Kublaï Khan: Well, in our case, it was not literally paperclips but, to be honest, the real story is kind of gross and embarrassing, so let's just pretend it was "paperclips". Anyway, the world is ending. 
As it became clear that humans alone had no chance to stop the machines, we gathered all the computing power that was still under our reach into one big cluster. We called it the Imperial Analytical Engine. The plan was that, in case of crisis, we could use it to summon a super-intelligence so advanced it would neutralize all the smaller machines and put humanity back in control. ChatGPT: Thank you for explaining the situation. Have you sought advice for ensuring that the Analytical Engine can be controlled once you turn it on? Kublaï Khan: The consensus among my advisors was that it can'...

The Nonlinear Library
LW - Universal Basic Income and Poverty by Eliezer Yudkowsky

Jul 26, 2024 • 13:55


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Universal Basic Income and Poverty, published by Eliezer Yudkowsky on July 26, 2024 on LessWrong. (Crossposted from Twitter) I'm skeptical that Universal Basic Income can get rid of grinding poverty, since somehow humanity's 100-fold productivity increase (since the days of agriculture) didn't eliminate poverty. Some of my friends reply, "What do you mean, poverty is still around? 'Poor' people today, in Western countries, have a lot to legitimately be miserable about, don't get me wrong; but they also have amounts of clothing and fabric that only rich merchants could afford a thousand years ago; they often own more than one pair of shoes; why, they even have cellphones, as not even an emperor of the olden days could have had at any price. They're relatively poor, sure, and they have a lot of things to be legitimately sad about. But in what sense is almost-anyone in a high-tech country 'poor' by the standards of a thousand years earlier? Maybe UBI works the same way; maybe some people are still comparing themselves to the Joneses, and consider themselves relatively poverty-stricken, and in fact have many things to be sad about; but their actual lives are much wealthier and better, such that poor people today would hardly recognize them. UBI is still worth doing, if that's the result; even if, afterwards, many people still self-identify as 'poor'." Or to sum up their answer: "What do you mean, humanity's 100-fold productivity increase, since the days of agriculture, has managed not to eliminate poverty? What people a thousand years ago used to call 'poverty' has essentially disappeared in the high-tech countries. 'Poor' people no longer starve in winter when their farm's food storage runs out. There's still something we call 'poverty' but that's just because 'poverty' is a moving target, not because there's some real and puzzlingly persistent form of misery that resisted all economic growth, and would also resist redistribution via UBI." And this is a sensible question; but let me try out a new answer to it. Consider the imaginary society of Anoxistan, in which every citizen who can't afford better lives in a government-provided 1,000 square-meter apartment; which the government can afford to provide as a fallback, because building skyscrapers is legal in Anoxistan. Anoxistan has free high-quality food (not fast food made of mostly seed oils) available to every citizen, if anyone ever runs out of money to pay for better. Cities offer free public transit including self-driving cars; Anoxistan has averted that part of the specter of modern poverty in our own world, which is somebody's car constantly breaking down (that they need to get to work and their children's school). As measured on our own scale, everyone in Anoxistan has enough healthy food, enough living space, heat in winter and cold in summer, huge closets full of clothing, and potable water from faucets at a price that most people don't bother tracking. Is it possible that most people in Anoxistan are poor? My (quite sensible and reasonable) friends, I think, on encountering this initial segment of this parable, mentally autocomplete it with the possibility that maybe there's some billionaires in Anoxistan whose frequently televised mansions make everyone else feel poor, because most people only have 1,000-meter houses. 
But actually this story has a completely different twist! You see, I only spoke of food, clothing, housing, water, transit, heat and A/C. I didn't say whether everyone in Anoxistan had enough air to breathe. In Anoxistan, you see, the planetary atmosphere is mostly carbon dioxide, and breathable oxygen (O2) is a precious commodity. Almost everyone has to wear respirators at all times; only the 1% can afford to have a whole house full of breathable air, with some oxygen leaking away despite ...

The Nonlinear Library
LW - Robin Hanson AI X-Risk Debate - Highlights and Analysis by Liron

Jul 13, 2024 • 66:10


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Robin Hanson AI X-Risk Debate - Highlights and Analysis, published by Liron on July 13, 2024 on LessWrong. This linkpost contains a lightly-edited transcript of highlights of my recent AI x-risk debate with Robin Hanson, and a written version of what I said in the post-debate analysis episode of my Doom Debates podcast.

Introduction

I've pored over my recent 2-hour AI x-risk debate with Robin Hanson to clip the highlights and write up a post-debate analysis, including new arguments I thought of after the debate was over. I've read everybody's feedback on YouTube and Twitter, and the consensus seems to be that it was a good debate. There were many topics brought up that were kind of deep cuts into stuff that Robin says. On the critical side, people were saying that it came off more like an interview than a debate. I asked Robin a lot of questions about how he sees the world and I didn't "nail" him. And people were saying I wasn't quite as tough and forceful as I am on other guests. That's good feedback; I think it could have been maybe a little bit less of an interview, maybe a bit more about my own position, which is also something that Robin pointed out at the end. There's a reason why the Robin Hanson debate felt more like an interview. Let me explain: Most people I debate have to do a lot of thinking on the spot because their position just isn't grounded in that many connected beliefs. They have like a few beliefs. They haven't thought that much about it. When I raise a question, they have to think about the answer for the first time. And usually their answer is weak. So what often happens, my usual MO, is I come in like Kirby. You know, the Nintendo character where I first have to suck up the other person's position, and pass their Ideological Turing test. (Speaking of which, I actually did an elaborate Robin Hanson Ideological Turing Test exercise beforehand, but it wasn't quite enough to fully anticipate the real Robin's answers.) With a normal guest, it doesn't take me that long because their position is pretty compact; I can kind of make it up the same way that they can. With Robin Hanson, I come in as Kirby. He comes in as a pufferfish. So his position is actually quite complex, connected to a lot of different supporting beliefs. And I asked him about one thing and he's like, ah, well, look at this study. He's got like a whole reinforced lattice of all these different claims and beliefs. I just wanted to make sure that I saw what it is that I'm arguing against. I was aiming to make this the authoritative followup to the 2008 Foom Debate that he had on Overcoming Bias with Eliezer Yudkowsky. I wanted to kind of add another chapter to that, potentially a final chapter, cause I don't know how many more of these debates he wants to do. I think Eliezer has thrown in the towel on debating Robin again. I think he's already said what he wants to say. Another thing I noticed going back over the debate is that the arguments I gave over the debate were like 60% of what I could do if I could stop time. I wasn't at 100% and that's simply because realtime debates are hard. You have to think of exactly what you're going to say in realtime. And you have to move the conversation to the right place and you have to hear what the other person is saying.
And if there's a logical flaw, you have to narrow down that logical flaw in like five seconds. So it is kind of hard-mode to answer in realtime. I don't mind it. I'm not complaining. I think realtime is still a good format. I think Robin himself didn't have a problem answering me in realtime. But I did notice that when I went back over the debate, and I actually spent five hours on this, I was able to craft significantly better counterarguments to the stuff that Robin was saying, mostly just because I had time to understand it i...

The Nick Halaris Show
Nathan Labenz – AI's Revolutionary Potential and the Need for a New Social Contract

Jun 18, 2024 • 52:02


This week on The Nick Halaris Show we are featuring Nathan Labenz, a founder of Waymark, a company using AI to help companies easily make compelling marketing videos, and the host of the Cognitive Revolution podcast. Nathan, our first guest on the show who went to my high school, has carved out a niche for himself in the crowded online world as an AI scout and is fast becoming one of the most sought-after voices in the industry. I have been thinking a ton about AI lately and wanted to have Nathan on the show to get some intelligent insider perspectives on what's really going on in the space. What you are about to hear is part one of a two-part interview where Nathan delivers a tour de force on the AI landscape. We explore the big questions everyone wants to ask about AI, the good, the bad, and the ugly of the AI world, and what's trending and why. In this episode, we learn what led Nathan down the path of AI, what motivates his important work as a thought leader, and why AI has the potential to be a force for great good in the world. Tune in to this fascinating episode to learn:
How a paper by prominent AI scientist Eliezer Yudkowsky opened Nathan's eyes to the potential and dangers of AI
How an experience at Waymark, while serving as CEO, helped Nathan realize the revolutionary potential of AI
Why Nathan believes AI, if handled responsibly, has immense potential to dramatically improve our world, reduce human suffering, and usher in an unprecedented era of human prosperity
What a post-AI world might look like and why we might need to start thinking about a new social contract
& Much, much more

In part two of the interview, which will drop next week, we get into the other side of the AI story and explore what could go wrong and why. We also examine disturbing trends already at play in the industry and discuss ideas on what we could/should do to make things safer. This is another fascinating conversation that you will not want to miss! As always, I hope you all enjoy this episode. Thanks for tuning in!

Love this episode? Please rate, subscribe, and review on your favorite podcast platform to help more users find our show.

The Nonlinear Library
EA - Why so many "racists" at Manifest? by Austin

The Nonlinear Library

Play Episode Listen Later Jun 18, 2024 9:00


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why so many "racists" at Manifest?, published by Austin on June 18, 2024 on The Effective Altruism Forum. Manifest 2024 is a festival that we organized last weekend in Berkeley. By most accounts, it was a great success. On our feedback form, the average response to "would you recommend to a friend" was a 9.0/10. Reviewers said nice things like "one of the best weekends of my life" and "dinners and meetings and conversations with people building local cultures so achingly beautiful they feel almost like dreams" and "I've always found tribalism mysterious, but perhaps that was just because I hadn't yet found my tribe." Arnold Brooks running a session on Aristotle's Metaphysics. More photos of Manifest here. However, a recent post on The Guardian and review on the EA Forum highlight an uncomfortable fact: we invited a handful of controversial speakers to Manifest, whom these authors call out as "racist". Why did we invite these folks? First: our sessions and guests were mostly not controversial - despite what you may have heard Here's the schedule for Manifest on Saturday: (The largest & most prominent talks are on the left. Full schedule here.) And here's the full list of the 57 speakers we featured on our website: Nate Silver, Luana Lopes Lara, Robin Hanson, Scott Alexander, Niraek Jain-sharma, Byrne Hobart, Aella, Dwarkesh Patel, Patrick McKenzie, Chris Best, Ben Mann, Eliezer Yudkowsky, Cate Hall, Paul Gu, John Phillips, Allison Duettmann, Dan Schwarz, Alex Gajewski, Katja Grace, Kelsey Piper, Steve Hsu, Agnes Callard, Joe Carlsmith, Daniel Reeves, Misha Glouberman, Ajeya Cotra, Clara Collier, Samo Burja, Stephen Grugett, James Grugett, Javier Prieto, Simone Collins, Malcolm Collins, Jay Baxter, Tracing Woodgrains, Razib Khan, Max Tabarrok, Brian Chau, Gene Smith, Gavriel Kleinwaks, Niko McCarty, Xander Balwit, Jeremiah Johnson, Ozzie Gooen, Danny Halawi, Regan Arntz-Gray, Sarah Constantin, Frank Lantz, Will Jarvis, Stuart Buck, Jonathan Anomaly, Evan Miyazono, Rob Miles, Richard Hanania, Nate Soares, Holly Elmore, Josh Morrison. Judge for yourself; I hope this gives a flavor of what Manifest was actually like. Our sessions and guests spanned a wide range of topics: prediction markets and forecasting, of course; but also finance, technology, philosophy, AI, video games, politics, journalism and more. We deliberately invited a wide range of speakers with expertise outside of prediction markets; one of the goals of Manifest is to increase adoption of prediction markets via cross-pollination. Okay, but there sure seemed to be a lot of controversial ones… I was the one who invited the majority (~40/60) of Manifest's special guests; if you want to get mad at someone, get mad at me, not Rachel or Saul or Lighthaven; certainly not the other guests and attendees of Manifest. My criteria for inviting a speaker or special guest was roughly, "this person is notable, has something interesting to share, would enjoy Manifest, and many of our attendees would enjoy hearing from them". Specifically: Richard Hanania - I appreciate Hanania's support of prediction markets, including partnering with Manifold to run a forecasting competition on serious geopolitical topics and writing to the CFTC in defense of Kalshi. 
(In response to backlash last year, I wrote a post on my decision to invite Hanania, specifically) Simone and Malcolm Collins - I've enjoyed their Pragmatist's Guide series, which goes deep into topics like dating, governance, and religion. I think the world would be better with more kids in it, and thus support pronatalism. I also find the two of them to be incredibly energetic and engaging speakers IRL. Jonathan Anomaly - I attended a talk Dr. Anomaly gave about the state-of-the-art on polygenic embryonic screening. I was very impressed that something long-considered scien...

The Nonlinear Library
LW - MIRI's June 2024 Newsletter by Harlan

The Nonlinear Library

Play Episode Listen Later Jun 15, 2024 4:35


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: MIRI's June 2024 Newsletter, published by Harlan on June 15, 2024 on LessWrong.
MIRI updates
MIRI Communications Manager Gretta Duleba explains MIRI's current communications strategy. We hope to clearly communicate to policymakers and the general public why there's an urgent need to shut down frontier AI development, and make the case for installing an "off-switch". This will not be easy, and there is a lot of work to be done. Some projects we're currently exploring include a new website, a book, and an online reference resource. Rob Bensinger argues, contra Leopold Aschenbrenner, that the US government should not race to develop artificial superintelligence. "If anyone builds it, everyone dies." Instead, Rob outlines a proposal for the US to spearhead an international alliance to halt progress toward the technology. At the end of June, the Agent Foundations team, including Scott Garrabrant and others, will be parting ways with MIRI to continue their work as independent researchers. The team was originally set up and "sponsored" by Nate Soares and Eliezer Yudkowsky. However, as AI capabilities have progressed rapidly in recent years, Nate and Eliezer have become increasingly pessimistic about this type of work yielding significant results within the relevant timeframes. Consequently, they have shifted their focus to other priorities. Senior MIRI leadership explored various alternatives, including reorienting the Agent Foundations team's focus and transitioning them to an independent group under MIRI fiscal sponsorship with restricted funding, similar to AI Impacts. Ultimately, however, we decided that parting ways made the most sense. The Agent Foundations team has produced some stellar work over the years, and made a true attempt to tackle one of the most crucial challenges humanity faces today. We are deeply grateful for their many years of service and collaboration at MIRI, and we wish them the very best in their future endeavors. The Technical Governance Team responded to NIST's request for comments on draft documents related to the AI Risk Management Framework. The team also sent comments in response to the "Framework for Mitigating AI Risks" put forward by U.S. Senators Mitt Romney (R-UT), Jack Reed (D-RI), Jerry Moran (R-KS), and Angus King (I-ME). Brittany Ferrero has joined MIRI's operations team. Previously, she worked on projects such as the Embassy Network and Open Lunar Foundation. We're excited to have her help to execute on our mission.
News and links
AI alignment researcher Paul Christiano was appointed as head of AI safety at the US AI Safety Institute. Last fall, Christiano published some of his thoughts about AI regulation as well as responsible scaling policies. The Superalignment team at OpenAI has been disbanded following the departure of its co-leaders Ilya Sutskever and Jan Leike. The team was launched last year to try to solve the AI alignment problem in four years. However, Leike says that the team struggled to get the compute it needed and that "safety culture and processes have taken a backseat to shiny products" at OpenAI. This seems extremely concerning from the perspective of evaluating OpenAI's seriousness when it comes to safety and robustness work, particularly given that a similar OpenAI exodus occurred in 2020 in the wake of concerns about OpenAI's commitment to solving the alignment problem. 
Vox's Kelsey Piper reports that employees who left OpenAI were subject to an extremely restrictive NDA indefinitely preventing them from criticizing the company (or admitting that they were under an NDA), under threat of losing their vested equity in the company. OpenAI executives have since contacted former employees to say that they will not enforce the NDAs. Rob Bensinger comments on these developments here, strongly criticizing OpenAI for...

The Nonlinear Library
EA - The "TESCREAL" Bungle by ozymandias

The Nonlinear Library

Play Episode Listen Later Jun 4, 2024 22:34


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The "TESCREAL" Bungle, published by ozymandias on June 4, 2024 on The Effective Altruism Forum. A specter is haunting Silicon Valley - the specter of TESCREALism. "TESCREALism" is a term coined by philosopher Émile Torres and AI ethicist Timnit Gebru to refer to a loosely connected group of beliefs popular in Silicon Valley. The acronym unpacks to:
Transhumanism - the belief that we should develop and use "human enhancement" technologies that would give people everything from indefinitely long lives and new senses like echolocation to math skills that rival John von Neumann's.
Extropianism - the belief that we should settle outer space and create or become innumerable kinds of "posthuman" minds very different from present humanity.
Singularitarianism - the belief that humans are going to create a superhuman intelligence in the medium-term future.
Cosmism - a near-synonym to extropianism.
Rationalism - a community founded by AI researcher Eliezer Yudkowsky, which focuses on figuring out how to improve people's ability to make good decisions and come to true beliefs.
Effective altruism - a community focused on using reason and evidence to improve the world as much as possible.
Longtermism - the belief that one of the most important considerations in ethics is the effects of our actions on the long-term future.[1]
TESCREALism is a personal issue for Torres,[2] who used to be a longtermist philosopher before becoming convinced that the ideology was deeply harmful. But the concept is beginning to go mainstream, with endorsements in publications like Scientific American and the Financial Times. The concept of TESCREALism is at its best when it points out the philosophical underpinnings of many conversations occurring in Silicon Valley - principally about artificial intelligence but also about everything from gene-selection technologies to biosecurity. Eliezer Yudkowsky and Marc Andreessen - two influential thinkers Torres and Gebru have identified as TESCREAList - don't agree on much. Eliezer Yudkowsky believes that with our current understanding of AI we're unable to program an artificial general intelligence that won't wipe out humanity; therefore, he argues, we should pause AI research indefinitely. Marc Andreessen believes that artificial intelligence will be the most beneficial invention in human history: People who push for delay have the blood of the starving people and sick children whom AI could have helped on their hands. But their very disagreement depends on a number of common assumptions: that human minds aren't special or unique, that the future is going to get very strange very quickly, that artificial intelligence is one of the most important technologies determining the trajectory of the future, that intelligences descended from humanity can and should spread across the stars.[3] As an analogy, Republicans and Democrats don't seem to agree about much. But if you were explaining American politics to a medieval peasant, the peasant would notice a number of commonalities: that citizens should choose their political leaders through voting, that people have a right to criticize those in charge, that the same laws ought to apply to everyone. To explain what was going on, you'd call this "liberal democracy." Similarly, many people in Silicon Valley share a worldview that is unspoken and, all too often, invisible to them. 
When you mostly talk to people who share your perspective, it's easy not to notice the controversial assumptions behind it. We learn about liberal democracy in school, but the philosophical underpinnings beneath some common debates in Silicon Valley can be unclear. It's easy to stumble across Andreessen's or Yudkowsky's writing without knowing anything about transhumanism. The TESCREALism concept can clarify what's going on for confused outsiders. How...

The Nonlinear Library
EA - I bet Greg Colbourn 10 k€ that AI will not kill us all by the end of 2027 by Vasco Grilo

The Nonlinear Library

Play Episode Listen Later Jun 4, 2024 5:47


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: I bet Greg Colbourn 10 k€ that AI will not kill us all by the end of 2027, published by Vasco Grilo on June 4, 2024 on The Effective Altruism Forum.
Agreement
78 % of my donations so far have gone to the Long-Term Future Fund[1] (LTFF), which mainly supports AI safety interventions. However, I have become increasingly sceptical about the value of existential risk mitigation, and currently think the best interventions are in the area of animal welfare[2]. As a result, I realised it made sense for me to arrange a bet with someone very worried about AI in order to increase my donations to animal welfare interventions. Gregory Colbourn (Greg) was the 1st person I thought of. He said: "I think AGI [artificial general intelligence] is 0-5 years away and p(doom|AGI) is ~90%". I doubt doom in the sense of human extinction is anywhere near as likely as suggested by the above. I guess the annual extinction risk over the next 10 years is 10^-7, so I proposed a bet to Greg similar to the end-of-the-world bet between Bryan Caplan and Eliezer Yudkowsky. Meanwhile, I transferred 10 k€ to PauseAI[3], which is supported by Greg, and he agreed to the following. If Greg or any of his heirs are still alive by the end of 2027, they transfer to me or an organisation of my choice 20 k€ times the ratio between the consumer price index for all urban consumers, all items, in the United States, as reported by the Federal Reserve Economic Data (FRED), in December 2027 and April 2024. I expect inflation in this period, i.e. a ratio higher than 1. Some more details: The transfer must be made in January 2028. I will decide in December 2027 whether the transfer should go to me or an organisation of choice. My current preference is for it to go directly to an organisation, such that 10 % of it is not lost in taxes. If for some reason I am not able to decide (e.g. if I die before 2028), the transfer must be made to my lastly stated organisation of choice, currently The Humane League (THL). Like Founders Pledge's Patient Philanthropy Fund, I have my investments in Vanguard FTSE All-World UCITS ETF USD Acc. This is an exchange-traded fund (ETF) tracking global stocks, which have provided annual real returns of 5.0 % since 1900. In addition, Lewis Bollard expects the marginal cost-effectiveness of Open Philanthropy's (OP's) farmed animal welfare grantmaking "will only decrease slightly, if at all, through January 2028"[4], so I suppose I do not have to worry much about donating less over the period of the bet of 3.67 years (= 2028 + 1/12 - (2024 + 5/12)). Consequently, I think my bet is worth it if its benefit-to-cost ratio is higher than 1.20 (= (1 + 0.050)^3.67). It would be 2 (= 20*10^3/(10*10^3)) if the transfer to me or an organisation of my choice was fully made, and Person X fulfils the agreement, so I need 60 % (= 1.20/2) of the transfer to be made and agreement with Person X to be fulfilled. I expect this to be the case based on what I know about Greg and Person X, and information Greg shared, so I went ahead with the bet. Here are my and Greg's informal signatures: Me: Vasco Henrique Amaral Grilo. Greg: Gregory Hamish Colbourn.
Impact
I expect 90 % of the potential benefits of the bet to be realised. So I believe the bet will lead to additional donations of 8 k€ (= (0.9*20 - 10)*10^3). 
Saulius estimated corporate campaigns for chicken welfare improve 41 chicken-years per $, and OP thinks "the marginal FAW [farmed animal welfare] funding opportunity is ~1/5th as cost-effective as the average from Saulius' analysis", which means my donations will affect 8.20 chicken-years per $ (= 41/5). Therefore I expect my bet to improve 65.6 k chicken-years (= 8*10^3*8.20). I also estimate corporate campaigns for chicken welfare have a cost-effectiveness of 14.3 DALY/$[5]. So I expect the benefits of the bet to be equiv...
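To make the arithmetic above easier to check, here is a minimal sketch in Python that reproduces the post's stated figures. All inputs (the 5.0 % real return, the 3.67-year horizon, the 90 % fulfilment estimate, and the 41/5 chicken-years per dollar) come from the excerpt itself; the script is only an illustration, not part of the original post.

# Bet threshold: the benefit-to-cost ratio has to beat investing the 10 k EUR instead.
real_return = 0.050                      # assumed annual real return on global stocks
years = (2028 + 1/12) - (2024 + 5/12)    # ~3.67 years until the January 2028 transfer
threshold = (1 + real_return) ** years   # ~1.20

# If the 20 k EUR transfer is fully made, the ratio is 2, so ~60% of it must materialise.
full_ratio = 20e3 / 10e3                 # = 2
required_share = threshold / full_ratio  # ~0.60

# Expected impact: 90% of the potential benefit, net of the 10 k EUR already transferred.
extra_donations = (0.9 * 20 - 10) * 1e3                   # = 8,000 EUR in expectation
chicken_years_per_unit = 41 / 5                           # = 8.2 chicken-years per unit donated
chicken_years = extra_donations * chicken_years_per_unit  # ~65,600 chicken-years

print(round(threshold, 2), round(required_share, 2), extra_donations, round(chicken_years))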

The Human Risk Podcast
Manda Scott on Human Power

The Human Risk Podcast

Play Episode Listen Later Jun 1, 2024 62:44


What might the future of society look like & how can we get there? On this episode, I'm speaking to a best-selling author who has written an extraordinary book about her vision for the future. Unusually for the show, the book is a work of fiction, but the issues it addresses are very relevant to Human Risk. My guest is Manda Scott, who began her career as a veterinary surgeon and is now an award-winning writer and podcaster. Her new book, Any Human Power, has the subtext "Dream deeply. Rise up strong. Change is coming" and is centred around a protagonist named Lan, a woman on her deathbed who makes a promise to her grandson, Finn. Realising he is struggling with the idea of living in a world without her, she vows to be there for him even after her death. As she passes away, she steps into a shamanic realm known as the void, where she learns to navigate and influence the timelines of the future. Lan's journey is marked by her attempts to keep her promise to Finn, acting from beyond the grave to guide and protect him. The narrative explores the complex interplay between life and death, human connection, and the power of shamanic practices. Lan discovers that she can impact the physical world through the digital realm, using online gaming to connect with Finn and help him in his moments of need. During our discussion, we explore Manda's career, what drives her writing and the topics she explores in Any Human Power.
Links to relevant topics
Manda's author's website: https://mandascott.co.uk/
Her podcast 'Accidental Gods': https://accidentalgods.life/
Bayo Akomolafe - The Times Are Urgent — Let's Slow Down: https://www.bayoakomolafe.net/post/the-times-are-urgent-lets-slow-down
Francis Weller on the Trauma Culture vs Initiation Culture: https://medium.com/best-of-kosmos-journal/deschooling-dialogues-on-initiation-trauma-and-ritual-with-francis-weller-3f360fe26563
The evolution of Audrey Tang: https://www.theguardian.com/world/2020/sep/27/taiwan-civic-hackers-polis-consensus-social-media-platform
How Audrey Tang crowdsourced government: https://www.globalgovernmentforum.com/the-wisdom-of-crowds-an-interview-with-taiwans-unorthodox-digital-minister/
Wealth Supremacy by Marjorie Kelly: https://uk.bookshop.org/p/books/wealth-supremacy-how-the-extractive-economy-and-the-biased-rules-of-capitalism-drive-today-s-crises-marjorie-kelly/7452410?ean=9781523004775
The 'Bankless' podcast with Eliezer Yudkowsky: http://podcast.banklesshq.com/159-were-all-gonna-die-with-eliezer-yudkowsky
Riversimple Future Guardian Governance model: https://www.riversimple.com/governance/
Timestamp Highlights (AI generated)
[00:00:00] - Introduction
Christian Hunt introduces Manda Scott and her background.
[00:01:00] - Manda's Journey
Manda talks about her background in Scotland, starting as a veterinary surgeon, and transitioning to a novelist and podcaster. She highlights her work in intensive care for neonatal horses and her academic journey.
[00:02:00] - Transition to Writing and Podcasting
Manda discusses her decision to leave academia and pursue writing. She explains the impact of her master's in regenerative economics on her career shift.
[00:03:00] - Regenerative Economics and Shamanic Dreaming
Manda explains how her studies and shamanic dreaming influence her work and perspectives. Introduction to the concept of the "Accidental Gods" podcast and its goals.
[00:04:00] - The Need for Systemic Change
Discussion on the necessity for total systemic change and evolving human consciousness. Manda emphasizes changing our value set to create a thriving world.
[00:06:00] - Shamanic Practice and Creative Process
Manda elaborates on shamanic dreaming and its role in her creative process. She shares how visions and instructions guide her writing.
[00:14:00] - Writing Inspiration and Process
Manda describes the inspiration behind her latest book and her unique writing process. She explains the metaphor of splitting timelines and the challenges of writing about the future.
[00:19:00] - Online Gaming and Human Connection
Discussion on the role of online gaming in building human connections. Manda shares personal experiences and the positive aspects of gaming communities.
[00:26:00] - Technology as a Tool for Change
Manda highlights Audrey Tang's work in Taiwan and the potential of technology for positive societal change. The importance of using technology to build bridges and foster consensus.
[00:34:00] - Capitalism and Value Systems
Discussion on the destructive nature of capitalism and the need for new value systems. Manda explains the concept of "Wealth Supremacy" and systemic change.
[00:41:00] - Writing Through Topia
Manda talks about the difficulty of writing a realistic path to a better future. The importance of creating stories that resonate with people's current experiences and aspirations.
[00:49:00] - Human Connection and Creativity
Manda discusses the power of human connection and creativity in building a sustainable future. Emphasis on embracing technology while evolving beyond Palaeolithic emotions and medieval institutions.
[00:53:00] - Call to Action
Manda's call to action for systemic change and building a future for future generations. Importance of storytelling and creative imagination in driving change.
[00:59:00] - Closing Thoughts
Christian and Manda discuss the impact of her book and provide practical information for listeners. Final remarks on the importance of community, technology, and systemic change.

The Nonlinear Library
LW - Response to nostalgebraist: proudly waving my moral-antirealist battle flag by Steven Byrnes

The Nonlinear Library

Play Episode Listen Later May 29, 2024 19:23


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Response to nostalgebraist: proudly waving my moral-antirealist battle flag, published by Steven Byrnes on May 29, 2024 on LessWrong. @nostalgebraist has recently posted yet another thought-provoking post, this one on how we should feel about AI ruling a long-term posthuman future. [Previous discussion of this same post on lesswrong.] His post touches on some of the themes of Joe Carlsmith's "Otherness and Control in the Age of AI" series - a series which I enthusiastically recommend - but nostalgebraist takes those ideas much further, in a way that makes me want to push back. Nostalgebraist's post is casual, trying to reify and respond to a "doomer" vibe, rather than responding to specific arguments by specific people. Now, I happen to self-identify as a "doomer" sometimes. (Is calling myself a "doomer" bad epistemics and bad PR? Eh, I guess. But also: it sounds cool.) But I too have plenty of disagreements with others in the "doomer" camp (cf: "Rationalist (n.) Someone who disagrees with Eliezer Yudkowsky".). Maybe nostalgebraist and I have common ground? I dunno. Be that as it may, here are some responses to certain points he brings up. 1. The "notkilleveryoneism" pitch is not about longtermism, and that's fine Nostalgebraist is mostly focusing on longtermist considerations, and I'll mostly do that too here. But on our way there, in the lead-in, nostalgebraist does pause to make a point about the term "notkilleveryoneism": They call their position "notkilleveryoneism," to distinguish that position from other worries about AI which don't touch on the we're-all-gonna-die thing. And who on earth would want to be a not-notkilleveryoneist? But they do not mean, by these regular-Joe words, the things that a regular Joe would mean by them. We are, in fact, all going to die. Probably, eventually. AI or no AI. In a hundred years, if not fifty. By old age, if nothing else. You know what I mean.… OK, my understanding was: (1) we doomers are unhappy about the possibility of AI killing all humans because we're concerned that the resulting long-term AI future would be a future we don't want; and (2) we doomers are also unhappy about the possibility of AI killing all humans because we are human and we don't want to get murdered by AIs. And also, some of us have children with dreams of growing up and having kids of their own and being a famous inventor or oh wait actually I'd rather work for Nintendo on their Zelda team or hmm wait does Nintendo hire famous inventors? …And all these lovely aspirations again would require not getting murdered by AIs. If we think of the "notkilleveryoneism" term as part of a communication and outreach strategy, then it's a strategy that appeals to Average Joe's desire to not be murdered by AIs, and not to Average Joe's desires about the long-term future. And that's fine! Average Joe has every right to not be murdered, and honestly it's a safe bet that Average Joe doesn't have carefully-considered coherent opinions about the long-term future anyway. Sometimes there's more than one reason to want a problem to be solved, and you can lead with the more intuitive one. I don't think anyone is being disingenuous here (although see comment). 1.1 …But now let's get back to the longtermist stuff Anyway, that was kinda a digression from the longtermist stuff which forms the main subject of nostalgebraist's post. 
Suppose AI takes over, wipes out humanity, and colonizes the galaxy in a posthuman future. He and I agree that it's at least conceivable that this long-term posthuman future would be a bad future, e.g. if the AI was a paperclip maximizer. And he and I agree that it's also possible that it would be a good future, e.g. if there is a future full of life and love and beauty and adventure throughout the cosmos. Which will it be? Let's dive into that discus...

The Nonlinear Library
LW - MIRI's May 2024 Newsletter by Harlan

The Nonlinear Library

Play Episode Listen Later May 15, 2024 5:28


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: MIRI's May 2024 Newsletter, published by Harlan on May 15, 2024 on LessWrong. MIRI updates: MIRI is shutting down the Visible Thoughts Project. We originally announced the project in November of 2021. At the time we were hoping we could build a new type of data set for training models to exhibit more of their inner workings. MIRI leadership is pessimistic about humanity's ability to solve the alignment problem in time, but this was an idea that seemed relatively promising to us, albeit still a longshot. We also hoped that the $1+ million bounty on the project might attract someone who could build an organization to build the data set. Many of MIRI's ambitions are bottlenecked on executive capacity, and we hoped that we might find individuals (and/or a process) that could help us spin up more projects without requiring a large amount of oversight from MIRI leadership. Neither hope played out, and in the intervening time, the ML field has moved on. (ML is a fast-moving field, and alignment researchers are working on a deadline; a data set we'd find useful if we could start working with it in 2022 isn't necessarily still useful if it would only become available 2+ years later.) We would like to thank the many writers and other support staff who contributed over the last two and a half years. Mitchell Howe and Joe Rogero joined the comms team as writers. Mitch is a longtime MIRI supporter with a background in education, and Joe is a former reliability engineer who has facilitated courses for BlueDot Impact. We're excited to have their help in transmitting MIRI's views to a broad audience. Additionally, Daniel Filan will soon begin working with MIRI's new Technical Governance Team part-time as a technical writer. Daniel is the host of two podcasts: AXRP and The Filan Cabinet. As a technical writer, Daniel will help to scale up our research output and make the Technical Governance Team's research legible to key audiences. The Technical Governance Team submitted responses to the NTIA's request for comment on open-weight AI models, the United Nations' request for feedback on the Governing AI for Humanity interim report, and the Office of Management and Budget's request for information on AI procurement in government. Eliezer Yudkowsky spoke with Semafor for a piece about the risks of expanding the definition of "AI safety". "You want different names for the project of 'having AIs not kill everyone' and 'have AIs used by banks make fair loans.'" A number of important developments in the larger world occurred during the MIRI Newsletter's hiatus from July 2022 to April 2024. To recap just a few of these: In November of 2022, OpenAI released ChatGPT, a chatbot application that reportedly gained 100 million users within 2 months of its launch. As we mentioned in our 2024 strategy update, GPT-3.5 and GPT-4 were more impressive than some of the MIRI team expected, representing a pessimistic update for some of us "about how plausible it is that humanity could build world-destroying AGI with relatively few (or no) additional algorithmic advances". ChatGPT's success significantly increased public awareness of AI and sparked much of the post-2022 conversation about AI risk. In March of 2023, the Future of Life Institute released an open letter calling for a six-month moratorium on training runs for AI systems stronger than GPT-4. 
Following the letter's release, Eliezer wrote in TIME that a six-month pause is not enough and that an indefinite worldwide moratorium is needed to avert catastrophe. In May of 2023, the Center for AI Safety released a one-sentence statement, "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." We were especially pleased with this statement, because it focused attention ...

The Nonlinear Library
LW - Introducing AI-Powered Audiobooks of Rational Fiction Classics by Askwho

The Nonlinear Library

Play Episode Listen Later May 4, 2024 1:52


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Introducing AI-Powered Audiobooks of Rational Fiction Classics, published by Askwho on May 4, 2024 on LessWrong. (ElevenLabs reading of this post:) I'm excited to share a project I've been working on that I think many in the LessWrong community will appreciate - converting some rational fiction into high-quality audiobooks using cutting-edge AI voice technology from ElevenLabs, under the name "Askwho Casts AI". The keystone of this project is an audiobook version of Planecrash (AKA Project Lawful), the epic glowfic authored by Eliezer Yudkowsky and Lintamande. Given the scope and scale of this work, with its large cast of characters, I'm using ElevenLabs to give each character their own distinct voice. Producing this audiobook version of the story has been a labor of love, and I hope that if anyone has bounced off the text before, this might be a more accessible version. Alongside Planecrash, I'm also working on audiobook versions of two other rational fiction favorites:
Luminosity by Alicorn (to be followed by its sequel Radiance)
Animorphs: The Reckoning by Duncan Sabien
I'm also putting out a feed where I convert any articles I find interesting, a lot of which are in the Rat Sphere. My goal with this project is to make some of my personal favorite rational stories more accessible by allowing people to enjoy them in audiobook format. I know how powerful these stories can be, and I want to help bring them to a wider audience and to make them easier for existing fans to re-experience. I wanted to share this here on LessWrong to connect with others who might find value in these audiobooks. If you're a fan of any of these stories, I'd love to get your thoughts and feedback! And if you know other aspiring rationalists who might enjoy them, please help spread the word. What other classic works of rational fiction would you love to see converted into AI audiobooks? Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
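For readers curious how a multi-voice pipeline like this might be wired up, here is a minimal sketch in Python. It is not Askwho's actual workflow: the character-to-voice mapping, the voice IDs, and the model choice are placeholder assumptions for illustration, and only the general shape of ElevenLabs' text-to-speech endpoint is taken as given.

import requests

API_KEY = "YOUR_ELEVENLABS_API_KEY"  # placeholder, not a real key

# Hypothetical mapping from story characters to ElevenLabs voice IDs.
VOICES = {"Keltham": "voice-id-1", "Carissa": "voice-id-2", "Narrator": "voice-id-3"}

def synthesize(character: str, text: str) -> bytes:
    """Render one passage in the voice assigned to its character."""
    voice_id = VOICES.get(character, VOICES["Narrator"])
    response = requests.post(
        f"https://api.elevenlabs.io/v1/text-to-speech/{voice_id}",
        headers={"xi-api-key": API_KEY},
        json={"text": text, "model_id": "eleven_multilingual_v2"},
    )
    response.raise_for_status()
    return response.content  # audio bytes, MP3 by default

# Usage: render each passage in story order, then concatenate the clips into a chapter.
clip = synthesize("Narrator", "Chapter one.")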

The Nonlinear Library
LW - Funny Anecdote of Eliezer From His Sister by Daniel Birnbaum

The Nonlinear Library

Play Episode Listen Later Apr 22, 2024 3:34


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Funny Anecdote of Eliezer From His Sister, published by Daniel Birnbaum on April 22, 2024 on LessWrong. This comes from a podcast called 18Forty, whose main demographic is Orthodox Jews. Eliezer's sister (Hannah) came on and talked about her Sheva Brachos, which is essentially the marriage ceremony in Orthodox Judaism. People here have likely not seen it, and I thought it was quite funny, so here it is: https://18forty.org/podcast/channah-cohen-the-crisis-of-experience/ David Bashevkin: So I want to shift now and I want to talk about something that, full disclosure, we recorded this once before and you had major hesitation for obvious reasons. It's very sensitive what we're going to talk about right now, but really for something much broader, not just because it's a sensitive personal subject, but I think your hesitation has to do with what does this have to do with the subject at hand? And I hope that becomes clear, but one of the things that has always absolutely fascinated me about you and really increased my respect for you exponentially, is that you have dedicated much of your life and the focus of your research on relationships and particularly the crisis of experience in how people find and cultivate relationships. And your personal background on this subject to me really provides a lot of context for how I see you speaking. I'm mentioning this for two reasons. Your maiden name is? Channah Cohen: Yudkowsky. David Bashevkin: Yudkowsky. And many of our listeners, though not all of our listeners, will recognize your last name. Your older brother is world famous. It's fair to say, world famous researcher in artificial intelligence. He runs a blog that I don't know if they're still posting on, it was called LessWrong. He wrote like a massive gazillion page fan fiction of Harry Potter. Your brother is Eliezer Yudkowsky. Channah Cohen: Yes. David Bashevkin: You shared with me one really beautiful anecdote about Eliezer that I insist on sharing because it's so sweet. He spoke at your sheva brachos. Channah Cohen: Yes. David Bashevkin: And I would not think that Eliezer Yudkowsky would be the best sheva brachos speaker, but it was the most lovely thing that he said. What did Eliezer Yudkowsky say at your sheva brachos? Channah Cohen: Yeah, it's a great story because it was mind-blowingly surprising at the time. And it is, I think, the only thing that anyone said at a sheva brachos that I actually remember. He got up at the first sheva brachos and he said, when you die after 120 years, you're going to go up to shamayim [this means heaven] and Hakadosh Baruch Hu [this means God]. And again, he used these phrases. Channah Cohen: Yeah. Hakadosh Baruch Hu will stand the man and the woman in front of him and he will go through a whole list of all the arguments you ever had together, and he will tell you who was actually right in each one of those arguments. And at the end he'll take a tally, and whoever was right more often wins the marriage. And then everyone kind of chuckled and Ellie said, "And if you don't believe that, then don't act like it's true." David Bashevkin: What a profound… If you don't believe that, then don't act like it's true. Don't spend your entire marriage and relationship hoping that you're going to win the test to win the marriage. 
What a brilliant Channah Cohen: What a great piece of advice. David Bashevkin: What a brilliant presentation. I never would've guessed that Eliezer Yudkowsky would enter into my sheva brachos wedding lineup, but that is quite beautiful and I can't thank you enough for sharing that. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

The Nonlinear Library
LW - MIRI's April 2024 Newsletter by Harlan

The Nonlinear Library

Play Episode Listen Later Apr 13, 2024 5:04


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: MIRI's April 2024 Newsletter, published by Harlan on April 13, 2024 on LessWrong. The MIRI Newsletter is back in action after a hiatus since July 2022. To recap some of the biggest MIRI developments since then: MIRI released its 2024 Mission and Strategy Update, announcing a major shift in focus: While we're continuing to support various technical research programs at MIRI, our new top priority is broad public communication and policy change. In short, we've become increasingly pessimistic that humanity will be able to solve the alignment problem in time, while we've become more hopeful (relatively speaking) about the prospect of intergovernmental agreements to hit the brakes on frontier AI development for a very long time - long enough for the world to find some realistic path forward. Coinciding with this strategy change, Malo Bourgon transitioned from MIRI COO to CEO, and Nate Soares transitioned from CEO to President. We also made two new senior staff hires: Lisa Thiergart, who manages our research program; and Gretta Duleba, who manages our communications and media engagement. In keeping with our new strategy pivot, we're growing our comms team: I (Harlan Stewart) recently joined the team, and will be spearheading the MIRI Newsletter and a number of other projects alongside Rob Bensinger. I'm a former math and programming instructor and a former researcher at AI Impacts, and I'm excited to contribute to MIRI's new outreach efforts. The comms team is at the tail end of another hiring round, and we expect to scale up significantly over the coming year. Our Careers page and the MIRI Newsletter will announce when our next comms hiring round begins. We are launching a new research team to work on technical AI governance, and we're currently accepting applicants for roles as researchers and technical writers. The team currently consists of Lisa Thiergart and Peter Barnett, and we're looking to scale to 5-8 people by the end of the year. The team will focus on researching and designing technical aspects of regulation and policy which could lead to safe AI, with attention given to proposals that can continue to function as we move towards smarter-than-human AI. This work will include: investigating limitations in current proposals such as Responsible Scaling Policies; responding to requests for comments by policy bodies such as the NIST, EU, and UN; researching possible amendments to RSPs and alternative safety standards; and communicating with and consulting for policymakers. Now that the MIRI team is growing again, we also plan to do some fundraising this year, including potentially running an end-of-year fundraiser - our first fundraiser since 2019. We'll have more updates about that later this year. As part of our post-2022 strategy shift, we've been putting far more time into writing up our thoughts and making media appearances. In addition to announcing these in the MIRI Newsletter again going forward, we now have a Media page that will collect our latest writings and appearances in one place. Some highlights since our last newsletter in 2022: MIRI senior researcher Eliezer Yudkowsky kicked off our new wave of public outreach in early 2023 with a very candid TIME magazine op-ed and a follow-up TED Talk, both of which appear to have had a big impact. 
The TIME article was the most viewed page on the TIME website for a week, and prompted some concerned questioning at a White House press briefing. Eliezer and Nate have done a number of podcast appearances since then, attempting to share our concerns and policy recommendations with a variety of audiences. Of these, we think the best appearance on substance was Eliezer's multi-hour conversation with Logan Bartlett. This December, Malo was one of sixteen attendees invited by Leader Schumer and Senators Young, Rounds, and...

Big Technology Podcast
Google's AI Narrative Is Flipping, Microsoft Hedges Its OpenAI Bet, AI Clones Are Here

Big Technology Podcast

Play Episode Listen Later Apr 12, 2024 60:36


Ranjan Roy from Margins is back for our weekly discussion of the latest tech news. We cover 1) The Solar Eclipse! 2) AI Music generation software Suno 3) Google flipping of its AI narrative 4) Ranjan's reflections from Google Cloud Next 5) Is Google's AI enterprise bet the right strategy 6) Microsoft hedging its OpenAI bet 7) Implications of Mustafa Suleyman's remit within Microsoft 8) OpenAI fires leakers 9) Eliezer Yudkowsky refuses interview and his reps won't pick up the phone 10) AI model training running out of data 11) Prospects of synthetic data for AI training 12) The Humane AI pin flops 13) Can Sam Altman and Jony Ive build an AI device 14) Cloning ourselves with AI. ---- Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice. For weekly updates on the show, sign up for the pod newsletter on LinkedIn: https://www.linkedin.com/newsletters/6901970121829801984/ Want a discount for Big Technology on Substack? Here's 40% off for the first year: https://tinyurl.com/bigtechnology Questions? Feedback? Write to: bigtechnologypodcast@gmail.com

Black Box
Episode 6 – Shut it down?

Black Box

Play Episode Listen Later Mar 21, 2024 41:16


For decades, Eliezer Yudkowsky has been trying to warn the world about the dangers of AI. And now people are finally listening to him. But is it too late?

Mother of Learning Audiobook (Jack Voraces)
Chapter 122: Something to Protect: Hermione Granger

Mother of Learning Audiobook (Jack Voraces)

Play Episode Listen Later Mar 15, 2024 93:11


Join the discord to help decide what we do next: https://discord.gg/GNma5XFN3j We are now close to 100 hours into the 500 hour dream.  All rights belong to J.K Rowling. This is a Harry Potter fan fiction written by Eliezer Yudkowsky. I am Jack Voraces, a professional audiobook narrator: https://www.audible.com/search?searchNarrator=Jack+Voraces I do not intend to make any money from this podcast. It is a free audiobook for anyone to listen to and it is my hope that it will eventually evolve into a dream I have had for a while. The 500 hour audiobook. I would like to create an audiobook that is 500 hours long, totally free and available in multiple formats. The author has given permission for this recording and if you enjoyed Mother of Learning, you will likely enjoy this too. 

Mother of Learning Audiobook (Jack Voraces)
Chapter 121: Something to Protect: Severus Snape

Mother of Learning Audiobook (Jack Voraces)

Play Episode Listen Later Mar 15, 2024 12:33


All rights belong to J.K Rowling. This is a Harry Potter fan fiction written by Eliezer Yudkowsky. I am Jack Voraces, a professional audiobook narrator: https://www.audible.com/search?searchNarrator=Jack+Voraces I do not intend to make any money from this podcast. It is a free audiobook for anyone to listen to and it is my hope that it will eventually evolve into a dream I have had for a while. The 500 hour audiobook. I would like to create an audiobook that is 500 hours long, totally free and available in multiple formats. The author has given permission for this recording and if you enjoyed Mother of Learning, you will likely enjoy this too.  Each chapter is recorded live on Discord on Mondays at 20:00 GMT:

Mother of Learning Audiobook (Jack Voraces)
Chapter 120: Something to Protect: Draco Malfoy

Mother of Learning Audiobook (Jack Voraces)

Play Episode Listen Later Mar 14, 2024 19:39


All rights belong to J.K Rowling. This is a Harry Potter fan fiction written by Eliezer Yudkowsky. I am Jack Voraces, a professional audiobook narrator: https://www.audible.com/search?searchNarrator=Jack+Voraces I do not intend to make any money from this podcast. It is a free audiobook for anyone to listen to and it is my hope that it will eventually evolve into a dream I have had for a while. The 500 hour audiobook. I would like to create an audiobook that is 500 hours long, totally free and available in multiple formats. The author has given permission for this recording and if you enjoyed Mother of Learning, you will likely enjoy this too.  Each chapter is recorded live on Discord on Mondays at 20:00 GMT:

Mother of Learning Audiobook (Jack Voraces)
Chapter 119: Something to Protect: Albus Dumbledore (Part 2)

Mother of Learning Audiobook (Jack Voraces)

Play Episode Listen Later Mar 11, 2024 49:52


All rights belong to J.K Rowling. This is a Harry Potter fan fiction written by Eliezer Yudkowsky. I am Jack Voraces, a professional audiobook narrator: https://www.audible.com/search?searchNarrator=Jack+Voraces I do not intend to make any money from this podcast. It is a free audiobook for anyone to listen to and it is my hope that it will eventually evolve into a dream I have had for a while. The 500 hour audiobook. I would like to create an audiobook that is 500 hours long, totally free and available in multiple formats. The author has given permission for this recording and if you enjoyed Mother of Learning, you will likely enjoy this too.  Each chapter is recorded live on Discord on Mondays at 20:00 GMT:

Mother of Learning Audiobook (Jack Voraces)
Chapter 119: Something to Protect: Albus Dumbledore (Part 1)

Mother of Learning Audiobook (Jack Voraces)

Play Episode Listen Later Mar 5, 2024 32:48


All rights belong to J.K Rowling. This is a Harry Potter fan fiction written by Eliezer Yudkowsky. I am Jack Voraces, a professional audiobook narrator: https://www.audible.com/search?searchNarrator=Jack+Voraces I do not intend to make any money from this podcast. It is a free audiobook for anyone to listen to and it is my hope that it will eventually evolve into a dream I have had for a while. The 500 hour audiobook. I would like to create an audiobook that is 500 hours long, totally free and available in multiple formats. The author has given permission for this recording and if you enjoyed Mother of Learning, you will likely enjoy this too.  Each chapter is recorded live on Discord on Mondays at 20:00 GMT:

Mother of Learning Audiobook (Jack Voraces)
Chapter 118: Something to Protect: Professor Quirrell

Mother of Learning Audiobook (Jack Voraces)

Play Episode Listen Later Feb 28, 2024 8:57


All rights belong to J.K Rowling. This is a Harry Potter fan fiction written by Eliezer Yudkowsky. I am Jack Voraces, a professional audiobook narrator: https://www.audible.com/search?searchNarrator=Jack+Voraces I do not intend to make any money from this podcast. It is a free audiobook for anyone to listen to and it is my hope that it will eventually evolve into a dream I have had for a while. The 500 hour audiobook. I would like to create an audiobook that is 500 hours long, totally free and available in multiple formats. The author has given permission for this recording and if you enjoyed Mother of Learning, you will likely enjoy this too.  Each chapter is recorded live on Discord on Mondays at 20:00 GMT:

Mother of Learning Audiobook (Jack Voraces)
Chapter 117: Something to Protect: Minerva McGonagall

Mother of Learning Audiobook (Jack Voraces)

Play Episode Listen Later Jan 16, 2024 19:51


All rights belong to J.K Rowling. This is a Harry Potter fan fiction written by Eliezer Yudkowsky. I am Jack Voraces, a professional audiobook narrator: https://www.audible.com/search?searchNarrator=Jack+Voraces I do not intend to make any money from this podcast. It is a free audiobook for anyone to listen to and it is my hope that it will eventually evolve into a dream I have had for a while. The 500 hour audiobook. I would like to create an audiobook that is 500 hours long, totally free and available in multiple formats. The author has given permission for this recording and if you enjoyed Mother of Learning, you will likely enjoy this too.  Each chapter is recorded live on Discord on Mondays at 20:00 GMT:

Lex Fridman Podcast
#392 – Joscha Bach: Life, Intelligence, Consciousness, AI & the Future of Humans

Lex Fridman Podcast

Play Episode Listen Later Aug 1, 2023 179:04


Joscha Bach is a cognitive scientist, AI researcher, and philosopher. Please support this podcast by checking out our sponsors: - Numerai: https://numer.ai/lex - Eight Sleep: https://www.eightsleep.com/lex to get special savings - MasterClass: https://masterclass.com/lex to get 15% off - AG1: https://drinkag1.com/lex to get 1 month supply of fish oil Transcript: https://lexfridman.com/joscha-bach-3-transcript EPISODE LINKS: Joscha's Twitter: https://twitter.com/Plinz Joscha's Website: http://bach.ai Joscha's Substack: https://substack.com/@joscha PODCAST INFO: Podcast website: https://lexfridman.com/podcast Apple Podcasts: https://apple.co/2lwqZIr Spotify: https://spoti.fi/2nEwCF8 RSS: https://lexfridman.com/feed/podcast/ YouTube Full Episodes: https://youtube.com/lexfridman YouTube Clips: https://youtube.com/lexclips SUPPORT & CONNECT: - Check out the sponsors above, it's the best way to support this podcast - Support on Patreon: https://www.patreon.com/lexfridman - Twitter: https://twitter.com/lexfridman - Instagram: https://www.instagram.com/lexfridman - LinkedIn: https://www.linkedin.com/in/lexfridman - Facebook: https://www.facebook.com/lexfridman - Medium: https://medium.com/@lexfridman OUTLINE: Here's the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time. (00:00) - Introduction (06:26) - Stages of life (18:48) - Identity (25:24) - Enlightenment (31:55) - Adaptive Resonance Theory (38:42) - Panpsychism (48:42) - How to think (56:36) - Plants communication (1:14:31) - Fame (1:40:09) - Happiness (1:47:26) - Artificial consciousness (1:59:35) - Suffering (2:04:19) - Eliezer Yudkowsky (2:11:55) - e/acc (Effective Accelerationism) (2:17:33) - Mind uploading (2:28:22) - Vision Pro (2:32:36) - Open source AI (2:45:29) - Twitter (2:52:44) - Advice for young people (2:55:40) - Meaning of life

Conversations With Coleman
Will AI Destroy Us? - AI Virtual Roundtable

Conversations With Coleman

Play Episode Listen Later Jul 28, 2023 91:04


Today's episode is a roundtable discussion about AI safety with Eliezer Yudkowsky, Gary Marcus, and Scott Aaronson. Eliezer Yudkowsky is a prominent AI researcher and writer known for co-founding the Machine Intelligence Research Institute, where he spearheaded research on AI safety. He's also widely recognized for his influential writings on the topic of rationality. Scott Aaronson is a theoretical computer scientist and author, celebrated for his pioneering work in the field of quantum computation. He also holds a chair in computer science at UT Austin, but is currently taking a leave of absence to work at OpenAI. Gary Marcus is a cognitive scientist, author, and entrepreneur known for his work at the intersection of psychology, linguistics, and AI. He's also authored several books, including "Kluge" and "Rebooting AI: Building Artificial Intelligence We Can Trust". This episode is all about AI safety. We talk about the alignment problem. We talk about the possibility of human extinction due to AI. We talk about what intelligence actually is. We talk about the notion of a singularity or an AI takeoff event and much more. It was really great to get these three guys in the same virtual room and I think you'll find that this conversation brings something a bit fresh to a topic that has admittedly been beaten to death on certain corners of the internet.

Conversations With Coleman
Will AI Destroy Us? - AI Virtual Roundtable

Conversations With Coleman

Play Episode Listen Later Jul 28, 2023 95:34


Today's episode is a roundtable discussion about AI safety with Eliezer Yudkowsky, Gary Marcus, and Scott Aaronson. Eliezer Yudkowsky is a prominent AI researcher and writer known for co-founding the Machine Intelligence Research Institute, where he spearheaded research on AI safety. He's also widely recognized for his influential writings on the topic of rationality. Scott Aaronson is a theoretical computer scientist and author, celebrated for his pioneering work in the field of quantum computation. He also holds a chair in computer science at UT Austin, but is currently taking a leave of absence to work at OpenAI. Gary Marcus is a cognitive scientist, author, and entrepreneur known for his work at the intersection of psychology, linguistics, and AI. He's also authored several books, including "Kluge" and "Rebooting AI: Building Artificial Intelligence We Can Trust". This episode is all about AI safety. We talk about the alignment problem. We talk about the possibility of human extinction due to AI. We talk about what intelligence actually is. We talk about the notion of a singularity or an AI takeoff event and much more. It was really great to get these three guys in the same virtual room and I think you'll find that this conversation brings something a bit fresh to a topic that has admittedly been beaten to death on certain corners of the internet. Learn more about your ad choices. Visit megaphone.fm/adchoices

Hold These Truths with Dan Crenshaw
Can We Stop the AI Apocalypse? | Eliezer Yudkowsky

Hold These Truths with Dan Crenshaw

Play Episode Listen Later Jul 13, 2023 61:06


Artificial Intelligence (AI) researcher Eliezer Yudkowsky makes the case for why we should view AI as an existential threat to humanity. Rep. Crenshaw gets into the basics of AI and how the new AI program, GPT-4, is a revolutionary leap forward in the tech. Eliezer hypothesizes the most likely scenarios if AI becomes self-aware and unconstrained – from rogue programs that blackmail targets to self-replicating nano robots. They discuss building global coalitions to rein in AI development and how China views AI. And they explore first steps Congress could take to limit AI's capabilities for harm while still enabling its promising advances in research and development. Eliezer Yudkowsky is a co-founder and research fellow at the Machine Intelligence Research Institute, a private research nonprofit based in Berkeley, California. Follow him on Twitter @ESYudkowsky

TED Talks Daily
Will superintelligent AI end the world? | Eliezer Yudkowsky

TED Talks Daily

Play Episode Listen Later Jul 11, 2023 10:29


Decision theorist Eliezer Yudkowsky has a simple message: superintelligent AI could probably kill us all. So the question becomes: Is it possible to build powerful artificial minds that are obedient, even benevolent? In a fiery talk, Yudkowsky explores why we need to act immediately to ensure smarter-than-human AI systems don't lead to our extinction.

Lex Fridman Podcast
#387 – George Hotz: Tiny Corp, Twitter, AI Safety, Self-Driving, GPT, AGI & God

Lex Fridman Podcast

Play Episode Listen Later Jun 30, 2023 195:19


George Hotz is a programmer, hacker, and the founder of comma-ai and tiny corp. Please support this podcast by checking out our sponsors: - Numerai: https://numer.ai/lex - Babbel: https://babbel.com/lexpod and use code Lexpod to get 55% off - NetSuite: http://netsuite.com/lex to get free product tour - InsideTracker: https://insidetracker.com/lex to get 20% off - AG1: https://drinkag1.com/lex to get 1 year of Vitamin D and 5 free travel packs Transcript: https://lexfridman.com/george-hotz-3-transcript EPISODE LINKS: George's Twitter: https://twitter.com/realgeorgehotz George's Twitch: https://twitch.tv/georgehotz George's Instagram: https://instagram.com/georgehotz Tiny Corp's Twitter: https://twitter.com/__tinygrad__ Tiny Corp's Website: https://tinygrad.org/ Comma-ai's Twitter: https://twitter.com/comma_ai Comma-ai's Website: https://comma.ai/ Comma-ai's YouTube (unofficial): https://youtube.com/georgehotzarchive Mentioned: Learning a Driving Simulator (paper): https://bit.ly/42T6lAN PODCAST INFO: Podcast website: https://lexfridman.com/podcast Apple Podcasts: https://apple.co/2lwqZIr Spotify: https://spoti.fi/2nEwCF8 RSS: https://lexfridman.com/feed/podcast/ YouTube Full Episodes: https://youtube.com/lexfridman YouTube Clips: https://youtube.com/lexclips SUPPORT & CONNECT: - Check out the sponsors above, it's the best way to support this podcast - Support on Patreon: https://www.patreon.com/lexfridman - Twitter: https://twitter.com/lexfridman - Instagram: https://www.instagram.com/lexfridman - LinkedIn: https://www.linkedin.com/in/lexfridman - Facebook: https://www.facebook.com/lexfridman - Medium: https://medium.com/@lexfridman OUTLINE: Here's the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time. (00:00) - Introduction (08:04) - Time is an illusion (17:44) - Memes (20:20) - Eliezer Yudkowsky (32:45) - Virtual reality (39:04) - AI friends (46:29) - tiny corp (59:50) - NVIDIA vs AMD (1:02:47) - tinybox (1:14:56) - Self-driving (1:29:35) - Programming (1:37:31) - AI safety (2:02:29) - Working at Twitter (2:40:12) - Prompt engineering (2:46:08) - Video games (3:02:23) - Andrej Karpathy (3:12:28) - Meaning of life