Podcast appearances and mentions of Scott Aaronson

  • 68 podcasts
  • 123 episodes
  • 56m average duration
  • 1 new episode monthly
  • Latest: Jul 9, 2025
Scott Aaronson

Popularity trend: 2017–2024


Latest podcast episodes about Scott Aaronson

ReachMD CME
Optimizing VNS Parameters: Keys to Therapeutic Success

Jul 9, 2025


CME credits: 0.50 Valid until: 09-07-2026 Claim your CME credit at https://reachmd.com/programs/cme/optimizing-vns-parameters-keys-to-therapeutic-success/35797/ This series of bite-sized episodes contains important information on using vagus nerve stimulation (VNS) for the treatment of drug-resistant epilepsy (DRE) and treatment-resistant depression (TRD). Drs. Raman Sankar and Scott Aaronson discuss best practices for identifying and treating patients as well as programming strategies for VNS.

ReachMD CME
QoL Matters: Brain Stem Stimulation in Patients With Unipolar Depression

Jul 9, 2025


CME credits: 0.50 Valid until: 09-07-2026 Claim your CME credit at https://reachmd.com/programs/cme/qol-matters-brain-stem-stimulation-in-patients-with-unipolar-depression/35796/ This series of bite-sized episodes contains important information on using vagus nerve stimulation (VNS) for the treatment of drug-resistant epilepsy (DRE) and treatment-resistant depression (TRD). Drs. Raman Sankar and Scott Aaronson discuss best practices for identifying and treating patients as well as programming strategies for VNS.

ReachMD CME
VNS: Beyond Seizure Control

Jul 9, 2025


CME credits: 0.50 Valid until: 09-07-2026 Claim your CME credit at https://reachmd.com/programs/cme/vns-beyond-seizure-control/35795/ This series of bite-sized episodes contains important information on using vagus nerve stimulation (VNS) for the treatment of drug-resistant epilepsy (DRE) and treatment-resistant depression (TRD). Drs. Raman Sankar and Scott Aaronson discuss best practices for identifying and treating patients as well as programming strategies for VNS.

ReachMD CME
Epilepsy: Beyond Seizure Impact

Jul 9, 2025


CME credits: 0.50 Valid until: 09-07-2026 Claim your CME credit at https://reachmd.com/programs/cme/epilepsy-beyond-seizure-impact/35794/ This series of bite-sized episodes contains important information on using vagus nerve stimulation (VNS) for the treatment of drug-resistant epilepsy (DRE) and treatment-resistant depression (TRD). Drs. Raman Sankar and Scott Aaronson discuss best practices for identifying and treating patients as well as programming strategies for VNS.

ReachMD CME
Differentiating Neuromodulation Therapies: VNS, DBS, ECT, TMS

Jul 9, 2025


CME credits: 0.50 Valid until: 09-07-2026 Claim your CME credit at https://reachmd.com/programs/cme/differentiating-neuromodulation-therapies-vns-dbs-ect-tms/35793/ This series of bite-sized episodes contains important information on using vagus nerve stimulation (VNS) for the treatment of drug-resistant epilepsy (DRE) and treatment-resistant depression (TRD). Drs. Raman Sankar and Scott Aaronson discuss best practices for identifying and treating patients as well as programming strategies for VNS.

Theories of Everything with Curt Jaimungal
The Physicist Who Proved Entropy = Gravity

May 1, 2025 · 112:46


What if gravity is not fundamental but emerges from quantum entanglement? In this episode, physicist Ted Jacobson reveals how Einstein's equations can be derived from thermodynamic principles of the quantum vacuum, reshaping our understanding of space, time, and gravity itself. As a listener of TOE you can get a special 20% off discount to The Economist and all it has to offer! Visit https://www.economist.com/toe Join My New Substack (Personal Writings): https://curtjaimungal.substack.com Listen on Spotify: https://tinyurl.com/SpotifyTOE Become a YouTube Member (Early Access Videos): https://www.youtube.com/channel/UCdWIQh9DGG6uhJk8eyIFl1w/join Timestamps: 00:00 Introduction 01:11 The Journey into Physics 04:26 Spirituality and Physics 06:29 Connecting Gravity and Thermodynamics 09:22 The Concept of Rindler Horizons 13:12 The Nature of Quantum Vacuum 20:53 The Duality of Quantum Fields 32:59 Understanding the Equation of State 35:05 Exploring Local Rindler Horizons 47:15 Holographic Duality and Space-Time Emergence 58:19 The Metric and Quantum Fields 59:58 Extensions and Comparisons in Gravity 1:26:26 The Nature of Black Hole Physics 1:31:04 Comparing Theories Links Mentioned:: •⁠ ⁠Ted's published papers: https://scholar.google.com/citations?user=QyHAXo8AAAAJ&hl=en •⁠ ⁠Claudia de Rham on TOE: https://www.youtube.com/watch?v=Ve_Mpd6dGv8 •⁠ ⁠Neil Turok on TOE: https://www.youtube.com/watch?v=zNZCa1pVE20 •⁠ ⁠Bisognano–Wichmann theorem: https://ncatlab.org/nlab/show/Bisognano-Wichmann+theorem •⁠ ⁠Scott Aaronson and Jacob Barandes on TOE: https://www.youtube.com/watch?v=5rbC3XZr9-c •⁠ ⁠Stephen Wolfram on TOE: https://www.youtube.com/watch?v=0YRlQQw0d-4 •⁠ ⁠Ruth Kastner on TOE: https://www.youtube.com/watch?v=-BsHh3_vCMQ •⁠ ⁠Jacob Barandes on TOE: https://www.youtube.com/watch?v=YaS1usLeXQM •⁠ ⁠Leonard Susskind on TOE: https://www.youtube.com/watch?v=2p_Hlm6aCok •⁠ ⁠Ted's talk on black holes: https://www.youtube.com/watch?v=aYt2Rm_dXf4 •⁠ ⁠Ted Jacobson: Diffeomorphism invariance and the black hole information paradox: https://www.youtube.com/watch?v=r6kdHge-NNY •⁠ ⁠Bose–Einstein condensate: https://en.wikipedia.org/wiki/Bose–Einstein_condensate •⁠ ⁠Holographic Thought Experiments (paper): https://arxiv.org/pdf/0808.2845 •⁠ ⁠Peter Woit and Joseph Conlon on TOE: https://www.youtube.com/watch?v=fAaXk_WoQqQ •⁠ ⁠Chiara Marletto on TOE: https://www.youtube.com/watch?v=Uey_mUy1vN0 •⁠ ⁠Entanglement Equilibrium and the Einstein Equation (paper): https://arxiv.org/pdf/1505.04753 •⁠ ⁠Ivette Fuentes on TOE: https://www.youtube.com/watch?v=cUj2TcZSlZc •⁠ ⁠Unitarity and Holography in Gravitational Physics (paper): https://arxiv.org/pdf/0808.2842 •⁠ ⁠The dominant model of the universe is cracking (Economist article): https://www.economist.com/science-and-technology/2024/06/19/the-dominant-model-of-the-universe-is-creaking •⁠ ⁠Suvrat Raju's published papers: https://www.suvratraju.net/publications •⁠ ⁠Mark Van Raamsdonk's published papers: https://scholar.google.ca/citations?user=k8LsA4YAAAAJ&hl=en •⁠ ⁠Ryu–Takayanagi conjecture: https://en.wikipedia.org/wiki/Ryu–Takayanagi_conjecture Support TOE on Patreon: https://patreon.com/curtjaimungal Twitter: https://twitter.com/TOEwithCurt Discord Invite: https://discord.com/invite/kBcnfNVwqs #science Learn more about your ad choices. Visit megaphone.fm/adchoices

Theories of Everything with Curt Jaimungal
When Physics Gets Rid of Time and Quantum Theory | Julian Barbour

Apr 29, 2025 · 142:29


What if quantum mechanics is not fundamental? What if time itself is an illusion? In this new episode, physicist Julian Barbour returns to share his most radical ideas yet. He proposes that the universe is built purely from ratios, that time is not fundamental, and that quantum mechanics might be replaced entirely without the need for wave functions or Planck's constant. This may be the simplest vision of reality ever proposed. As a listener of TOE you can get a special 20% off discount to The Economist and all it has to offer! Visit https://www.economist.com/toe Join My New Substack (Personal Writings): https://curtjaimungal.substack.com Listen on Spotify: https://tinyurl.com/SpotifyTOE Become a YouTube Member (Early Access Videos): https://www.youtube.com/channel/UCdWIQh9DGG6uhJk8eyIFl1w/join Videos Mentioned: Julian's previous appearance on TOE: https://www.youtube.com/watch?v=bprxrGaf0Os Neil Turok on TOE (Big Bang): https://www.youtube.com/watch?v=ZUp9x44N3uE Neil Turok on TOE (Black Holes): https://www.youtube.com/watch?v=zNZCa1pVE20 Debunking “All Possible Paths”: https://www.youtube.com/watch?v=XcY3ZtgYis0 John Vervaeke on TOE: https://www.youtube.com/watch?v=GVj1KYGyesI Jacob Barandes & Scott Aaronson on TOE: https://www.youtube.com/watch?v=5rbC3XZr9-c The Dark History of Anti-Gravity: https://www.youtube.com/watch?v=eBA3RUxkZdc Peter Woit on TOE: https://www.youtube.com/watch?v=TTSeqsCgxj8 Books Mentioned: The Monadology – G.W. Leibniz: https://www.amazon.com/dp/1546527664 The Janus Point – Julian Barbour: https://www.amazon.ca/dp/0465095461 Reflections on the Motive Power of Heat – Carnot: https://www.amazon.ca/dp/1514873974 Lucretius: On the Nature of Things: https://www.amazon.ca/dp/0393341364 Heisenberg and the Interpretation of QM: https://www.amazon.ca/dp/1107403510 Quantum Mechanics for Cosmologists: https://books.google.ca/books?id=qou0iiLPjyoC&pg=PA99 Faraday, Maxwell, and the EM Field: https://www.amazon.ca/dp/1616149426 The Feeling of Life Itself – Christof Koch: https://www.amazon.ca/dp/B08BTCX4BM Articles Mentioned: Time's Arrow and Simultaneity (Barbour): https://arxiv.org/pdf/2211.14179 On the Moving Force of Heat (Clausius): https://sites.pitt.edu/~jdnorton/teaching/2559_Therm_Stat_Mech/docs/Clausius%20Moving%20Force%20heat%201851.pdf On the Motions and Collisions of Elastic Spheres (Maxwell): http://www.alternativaverde.it/stel/documenti/Maxwell/1860/Maxwell%20%281860%29%20-%20Illustrations%20of%20the%20dynamical%20theory%20of%20gases.pdf Maxwell–Boltzmann distribution (Wikipedia): https://en.wikipedia.org/wiki/Maxwell–Boltzmann_distribution Identification of a Gravitational Arrow of Time: https://arxiv.org/pdf/1409.0917 The Nature of Time: https://arxiv.org/pdf/0903.3489 The Solution to the Problem of Time in Shape Dynamics: https://arxiv.org/pdf/1302.6264 CPT-Symmetric Universe: https://arxiv.org/pdf/1803.08928 Mach's Principle and Dynamical Theories (JSTOR): https://www.jstor.org/stable/2397395 Timestamps: 00:00 Introduction 01:35 Consciousness and the Nature of Reality 3:23 The Nature of Time and Change 7:01 The Role of Variety in Existence 9:23 Understanding Entropy and Temperature 36:10 Revisiting the Second Law of Thermodynamics 41:33 The Illusion of Entropy in the Universe 46:11 Rethinking the Past Hypothesis 55:03 Complexity, Order, and Newton's Influence 1:02:33 Evidence Beyond Quantum Mechanics 1:16:04 Age and Structure of the Universe 1:18:53 Open Universe and Ratios 1:20:15 Fundamental Particles and Ratios 1:24:20 Emergence of Structure in Age 
1:27:11 Shapes and Their Explanations 1:32:54 Life and Variety in the Universe 1:44:27 Consciousness and Perception of Structure 1:57:22 Geometry, Experience, and Forces 2:09:27 The Role of Consciousness in Shape Dynamics Support TOE on Patreon: https://patreon.com/curtjaimungal Twitter: https://twitter.com/TOEwithCurt Discord Invite: https://discord.com/invite/kBcnfNVwqs #science Learn more about your ad choices. Visit megaphone.fm/adchoices

Carbotnic
Unlocking Land for Solar with Scott Aaronson - E139

Mar 18, 2025 · 30:52


In this episode, we sit down with Scott Aaronson, founder and CEO of Demeter Land Development, to explore his unexpected path from criminal defense attorney to renewable energy land origination expert. Scott recounts how his real estate ventures led him to community solar development, shedding light on the evolving challenges of securing land for distributed generation (DG) projects. He discusses the industry's transformation from old-school, door-knocking tactics to a highly strategic, data-driven approach.

We cover:
  • How his unique legal background can be applied to land origination
  • The biggest challenges in permitting, interconnection, and competition
  • How personalized landowner outreach makes a difference
  • The future of DG renewables and what trends to watch

Scott emphasizes the importance of local relationships, policy awareness, and strategic first-mover advantage in emerging markets. He also shares how Demeter's partnership with Paces has helped streamline operations and accelerate growth. If you're involved in solar development, land acquisition, or renewable energy policy, this episode is packed with valuable insights! Connect with Scott here! Paces helps developers find and evaluate the sites most suitable for renewable development. Interested in a call with James, CEO @ Paces?

Theories of Everything with Curt Jaimungal
Harvard Scientist Rewrites the Rules of Quantum Mechanics | Scott Aaronson Λ Jacob Barandes

Mar 4, 2025 · 162:14


Join Curt Jaimungal as he welcomes Harvard physicist Jacob Barandes, who claims quantum mechanics can be reformulated without wave functions, alongside computer scientist Scott Aaronson. Barandes' “indivisible” approach challenges the standard Schrödinger model, and Aaronson offers a healthy dose of skepticism in today's theolocution. Are we on the cusp of a radical rewrite of reality—or just rebranding the same quantum puzzles? As a listener of TOE you can get a special 20% off discount to The Economist and all it has to offer! Visit https://www.economist.com/toe Join My New Substack (Personal Writings): https://curtjaimungal.substack.com Listen on Spotify: https://tinyurl.com/SpotifyTOE Become a YouTube Member (Early Access Videos): https://www.youtube.com/channel/UCdWIQh9DGG6uhJk8eyIFl1w/join Timestamps: 00:00 Introduction to Quantum Mechanics 05:40 The Power of Quantum Computing 36:17 The Many Worlds Debate 1:09:05 Evaluating Jacob's Theory 1:13:49 Criteria for Theoretical Frameworks 1:17:15 Bohmian Mechanics and Stochastic Dynamics 1:18:51 Generalizing Quantum Theory 1:22:32 The Role of Unobservables 1:31:08 The Problem of Trajectories 1:39:39 Exploring Alternative Theories 1:50:29 The Stone Soup Analogy 1:56:20 The Limits of Quantum Mechanics 2:01:57 The Nature of Laws in Physics 2:14:57 The Many Worlds Interpretation 2:22:40 The Search for New Connections Links Mentioned: - Quantum theory, the Church–Turing principle and the universal quantum computer (article): https://www.cs.princeton.edu/courses/archive/fall04/cos576/papers/deutsch85.pdf - The Emergent Multiverse (book): https://amzn.to/3QJleSu Jacob Barandes on TOE (Part 1): https://www.youtube.com/watch?v=7oWip00iXbo&t=1s&ab_channel=CurtJaimungal - Scott Aaronson on TOE: https://www.youtube.com/watch?v=1ZpGCQoL2Rk - Quantum Theory From Five Reasonable Axioms (paper): https://arxiv.org/pdf/quant-ph/0101012 - Quantum stochastic processes and quantum non-Markovian phenomena (paper): https://arxiv.org/pdf/2012.01894 - Jacob's “Wigner's Friend” flowchart: https://shared.jacobbarandes.com/images/wigners-friend-flow-chart-2025 - Is Quantum Mechanics An Island In Theory Space? (paper): https://www.scottaaronson.com/papers/island.pdf - Aspects of Objectivity in Quantum Mechanics (paper): https://philsci-archive.pitt.edu/223/1/Objectivity.pdf - Quantum Computing Since Democritus (book): https://amzn.to/4bqVeoD - The Ghost in the Quantum Turing Machine (paper): https://arxiv.org/pdf/1306.0159 - Quantum mechanics and reality (article): https://pubs.aip.org/physicstoday/article/23/9/30/427387/Quantum-mechanics-and-realityCould-the-solution-to - Stone Soup (book): https://amzn.to/4kgPamN - TOE's String Theory Iceberg: https://www.youtube.com/watch?v=X4PdPnQuwjY - TOE's Mindfest playlist: https://www.youtube.com/playlist?list=PLZ7ikzmc6zlOPw7Hqkc6-MXEMBy0fnZcb Support TOE on Patreon: https://patreon.com/curtjaimungal Twitter: https://twitter.com/TOEwithCurt Discord Invite: https://discord.com/invite/kBcnfNVwqs #science #theoreticalphysics Learn more about your ad choices. Visit megaphone.fm/adchoices

Theories of Everything with Curt Jaimungal
Why the Godfather of AI Now Fears His Creation (ft. Geoffrey Hinton)

Jan 18, 2025 · 78:55


As a listener of TOE you can get a special 20% off discount to The Economist and all it has to offer! Visit https://www.economist.com/toe Professor Geoffrey Hinton, a prominent figure in AI and 2024 Nobel Prize recipient, discusses the urgent risks posed by rapid AI advancements in today's episode of Theories of Everything with Curt Jaimungal. Join My New Substack (Personal Writings): https://curtjaimungal.substack.com Listen on Spotify: https://tinyurl.com/SpotifyTOE Timestamps: 00:00 The Existential Threat of AI 01:25 The Speed of AI Development 7:11 The Nature of Subjective Experience 14:18 Consciousness vs Self-Consciousness 23:36 The Misunderstanding of Mental States 29:19 The Chinese Room Argument 30:47 The Rise of AI in China 37:18 The Future of AI Development 40:00 The Societal Impact of AI 47:02 Understanding and Intelligence 1:00:47 Predictions on Subjective Experience 1:05:45 The Future Landscape of AI 1:10:14 Reflections on Recognition and Impact Geoffrey Hinton Links: •⁠ ⁠Geoffrey Hinton's publications: https://www.cs.toronto.edu/~hinton/papers.html#1983-1976 •⁠ ⁠The Economist's several mentions of Geoffrey Hinton: https://www.economist.com/science-and-technology/2024/10/08/ai-researchers-receive-the-nobel-prize-for-physics •⁠ ⁠https://www.economist.com/finance-and-economics/2025/01/02/would-an-artificial-intelligence-bubble-be-so-bad •⁠ ⁠https://www.economist.com/science-and-technology/2024/10/10/ai-wins-big-at-the-nobels •⁠ ⁠https://www.economist.com/science-and-technology/2024/08/14/ai-scientists-are-producing-new-theories-of-how-the-brain-learns •⁠ ⁠Scott Aaronson on TOE: https://www.youtube.com/watch?v=1ZpGCQoL2Rk&ab_channel=CurtJaimungal •⁠ ⁠Roger Penrose on TOE: https://www.youtube.com/watch?v=sGm505TFMbU&list=PLZ7ikzmc6zlN6E8KrxcYCWQIHg2tfkqvR&index=19 •⁠ ⁠The Emperor's New Mind (book): https://www.amazon.com/Emperors-New-Mind-Concerning-Computers/dp/0192861980 •⁠ ⁠Daniel Dennett on TOE: https://www.youtube.com/watch?v=bH553zzjQlI&list=PLZ7ikzmc6zlN6E8KrxcYCWQIHg2tfkqvR&index=78 •⁠ ⁠Noam Chomsky on TOE: https://www.youtube.com/watch?v=DQuiso493ro&t=1353s&ab_channel=CurtJaimungal •⁠ ⁠Ray Kurzweil's books: https://www.thekurzweillibrary.com/ Become a YouTube Member (Early Access Videos): https://www.youtube.com/channel/UCdWIQh9DGG6uhJk8eyIFl1w/join Support TOE on Patreon: https://patreon.com/curtjaimungal Twitter: https://twitter.com/TOEwithCurt Discord Invite: https://discord.com/invite/kBcnfNVwqs #science #ai #artificialintelligence #physics #consciousness #computerscience Learn more about your ad choices. Visit megaphone.fm/adchoices

Bankless
Will Quantum Computing Kill Bitcoin? | Scott Aaronson & Justin Drake

Jan 13, 2025 · 124:37


Quantum computing is advancing rapidly, raising significant questions for cryptography and blockchain. In this episode, Scott Aaronson, quantum computing expert, and Justin Drake, cryptography researcher at the Ethereum Foundation, join us to explore the impact of quantum advancements on Bitcoin, Ethereum, and the future of crypto security. Are your coins safe? How soon do we need post-quantum cryptography? Tune in as we navigate this complex, fascinating frontier.

Plain English with Derek Thompson
The Year's Biggest Breakthroughs in Science and Tech (Feat.: OK, But Seriously, What Is Quantum Computing?)

Dec 31, 2024 · 78:12


Our final episode of the year is also my favorite annual tradition: conversations with scientists about the most important and, often, just plain mind-blowing breakthroughs of the previous 12 months. Today we're talking about "organ clocks" (we'll explain) and other key biotech advances of 2024 with Eric Topol, an American cardiologist and author who is also the founder and director of the Scripps Research Translational Institute. But first, Derek attempts a 'Plain English'-y summary of the most confusing thing he's ever covered—QUANTUM COMPUTING—with a major assist from theoretical computer scientist Scott Aaronson from the University of Texas at Austin. If you have questions, observations, or ideas for future episodes, email us at PlainEnglish@Spotify.com. Host: Derek Thompson Guests: Scott Aaronson and Eric Topol Producer: Devon Baroldi Learn more about your ad choices. Visit podcastchoices.com/adchoices

Win-Win with Liv Boeree
#32 - Scott Aaronson - The Race to AGI and Quantum Supremacy

Dec 4, 2024 · 145:20


How fast is the AI race really going? What is the current state of Quantum Computing? What actually *is* the P vs NP problem? - former OpenAI researcher and theoretical computer scientist Scott Aaronson joins Liv and Igor to discuss everything quantum, AI and consciousness. We hear about his experience working on OpenAI's "superalignment team", whether quantum computers might break Bitcoin, the state of University Admissions, and even a proposal for a new religion! Strap in for a fascinating conversation that bridges deep theory with pressing real-world concerns about our technological future. Chapters: 1:30 - Working at OpenAI 4:23 - His Approaches to AI Alignment 6:23 - Watermarking & Detection of AI content 19:15 - P vs. NP 27:11 - The Current State of AI Safety 37:38 - Bad "Just-a-ism" Arguments around LLMs 48:25 - What Sets Human Creativity Apart from AI 55:30 - A Religion for AGI? 1:00:49 - More Moral Philosophy 1:05:24 - The AI Arms Race 1:11:08 - The Government Intervention Dilemma 1:23:28 - The Current State of Quantum Computing 1:36:25 - Will QC destroy Cryptography? 1:48:55 - Politics on College Campuses 2:03:11 - Scott's Childhood & Relationship with Competition 2:23:25 - Rapid-fire Predictions Links: ♾️ Scott's Blog: ⁠https://scottaaronson.blog/⁠ ♾️ Scott's Book: ⁠https://www.amazon.com/Quantum-Computing-since-Democritus-Aaronson/dp/0521199565⁠ ♾️ QIC at UTA: https://www.cs.utexas.edu/~qic/ Credits Credits: ♾️  Hosted by Liv Boeree and Igor Kurganov ♾️  Produced by Liv Boeree ♾️  Post-Production by Ryan Kessler The Win-Win Podcast: Poker champion Liv Boeree takes to the interview chair to tease apart the complexities of one of the most fundamental parts of human nature: competition. Liv is joined by top philosophers, gamers, artists, technologists, CEOs, scientists, athletes and more to understand how competition manifests in their world, and how to change seemingly win-lose games into Win-Wins. #WinWinPodcast #QuantumComputing #AISafety #LLM

Theories of Everything with Curt Jaimungal
There is No Wave Function | Jacob Barandes

Nov 13, 2024 · 135:30


In today's episode, Jacob, a physicist specializing in quantum mechanics, explores groundbreaking ideas on measurement, the role of probabilistic laws, and the foundational principles of quantum theory. With a focus on interdisciplinary approaches, Jacob offers unique insights into the nature of particles, fields, and the evolution of quantum mechanics. New Substack! Follow my personal writings and EARLY ACCESS episodes here: https://curtjaimungal.substack.com SPONSOR (THE ECONOMIST): As a listener of TOE you can get a special 20% off discount to The Economist and all it has to offer! Visit https://www.economist.com/toe LINKS MENTIONED: - Wigner's paper ‘Remarks on the Mind-Body Question': https://www.informationphilosopher.com/solutions/scientists/wigner/Wigner_Remarks.pdf - Jacob's lecture on Hilbert Spaces: https://www.youtube.com/watch?v=OmaSAG4J6nw&ab_channel=OxfordPhilosophyofPhysics - John von Neumann's book on ‘Mathematical Foundations of Quantum Mechanics': https://amzn.to/48OkeVj - The 1905 Papers (Albert Einstein): https://guides.loc.gov/einstein-annus-mirabilis/1905-papers - Dividing Quantum Channels (paper): https://arxiv.org/pdf/math-ph/0611057 - Sean Carroll on TOE: https://www.youtube.com/watch?v=9AoRxtYZrZo - Scott Aaronson and Leonard Susskind's paper on ‘Quantum Necromancy': https://arxiv.org/pdf/2009.07450 - Scott Aaronson on TOE: https://www.youtube.com/watch?v=1ZpGCQoL2Rk - Leonard Susskind on TOE: https://www.youtube.com/watch?v=2p_Hlm6aCok - Ekkolapto's website: https://www.ekkolapto.org/ TIMESTAMPS: 00:00 - Introduction 01:26 - Jacob's Background 07:32 - Pursuing Theoretical Physics 10:28 - Is Consciousness Linked to Quantum Mechanics? 16:07 - Why the Wave Function Might Not Be Real 20:12 - The Schrödinger Equation Explained 23:04 - Higher Dimensions in Quantum Physics 30:11 - Heisenberg's Matrix Mechanics 35:08 - Schrödinger's Wave Function and Its Implications 39:57 - Dirac and von Neumann's Quantum Axioms 45:09 - The Problem with Hilbert Spaces 50:02 - Wigner's Friend Paradox 55:06 - Challenges in Defining Measurement in Quantum Mechanics 01:00:17 - Trying to Simplify Quantum for Students 01:03:35 - Bridging Quantum Mechanics with Stochastic Processes 01:05:05 - Discovering Indivisible Stochastic Processes 01:12:03 - Interference and Coherence Explained 01:16:06 - Redefining Measurement and Decoherence 01:18:01 - The Future of Quantum Theory 1:24:09 - Foundationalism and Quantum Theory 1:25:04 - Why Use Indivisible Stochastic Laws? 1:26:10 - The Quantum-Classical Transition 1:27:30 - Classical vs Quantum Probabilities 1:28:36 - Hilbert Space and the Convenience of Amplitudes 1:30:01 - No Special Role for Observers 1:33:40 - Emergence of the Wave Function 1:38:27 - Physicists' Reluctance to Change Foundations 1:43:04 - Resolving Quantum Mechanics' Inconsistencies 1:50:46 - Practical Applications of Indivisible Stochastic Processes 1:57:53 - Understanding Particles in the Indivisible Stochastic Model 2:00:48 - Is There a Fundamental Ontology? 2:07:02 - Advice for Students Entering Physics 2:09:32 - Encouragement for Interdisciplinary Research 2:12:22 - Outro TOE'S TOP LINKS: - Support TOE on Patreon: https://patreon.com/curtjaimungal (early access to ad-free audio episodes!) 
- Listen to TOE on Spotify: https://open.spotify.com/show/4gL14b92xAErofYQA7bU4e - Become a YouTube Member Here: https://www.youtube.com/channel/UCdWIQh9DGG6uhJk8eyIFl1w/join - Join TOE's Newsletter 'TOEmail' at https://www.curtjaimungal.org Other Links: - Twitter: https://twitter.com/TOEwithCurt - Discord Invite: https://discord.com/invite/kBcnfNVwqs - iTunes: https://podcasts.apple.com/ca/podcast/better-left-unsaid-with-curt-jaimungal/id1521758802 - Subreddit r/TheoriesOfEverything: https://reddit.com/r/theoriesofeverything #science #sciencepodcast #physics Learn more about your ad choices. Visit megaphone.fm/adchoices

Infinite Loops
Scott Aaronson — Quantumania (EP.240)

Oct 31, 2024 · 72:11


My guest today is Scott Aaronson, a theoretical computer scientist, OG blogger, and quantum computing maestro. Scott has so many achievements and credentials that listing them here would take longer than recording the episode. Here's a select few:
  • Self-taught programmer at age 11, Cornell computer science student at 15, PhD recipient by 22!
  • Schlumberger Centennial Chair of Computer Science at The University of Texas at Austin.
  • Director of UT Austin's Quantum Information Center.
  • Former visiting researcher on OpenAI's alignment team (2022-2024).
  • Awarded the ACM Prize in Computing in 2020 and the Tomassoni-Chisesi Prize in Physics (under 40 category) in 2018.
… you get the point. Scott and I dig into the misunderstood world of quantum computing — the hopes, the hindrances, and the hucksters — to unpack what a quantum-empowered future could really look like. We also discuss what makes humans special in the age of AI, the stubbornly persistent errors of the seat-to-keyboard interface, and MUCH more. I hope you enjoy the conversation as much as I did. For the full transcript, some highlights from Scott's blog, and bucketloads of other goodies designed to make you go, "Hmm, that's interesting!" check out our Substack.
Important Links: Shtetl-Optimized (Scott's blog) · My Reading Burden · On blankfaces
Show Notes: So much reading. So little time. · The problem of human specialness in the age of AI · It's always the same quantum weirdness · Why it's easy to be a quantum huckster · Quantum progress, quantum hopes, and quantum limits · Encryption in a quantum empowered world · Wielding the hammer of interference · Scientific discovery in a quantum empowered world · Bureaucracy and blank faces · Scott as Emperor of the World · MORE!
Books Mentioned: The Fifth Science, by Exurb1a · The Hitchhiker's Guide to the Galaxy, by Douglas Adams

Market to Market - The MtoM Podcast
Harvesting Sunlight: The Rise of Solar Power in Rural America - Scott Aaronson

Oct 1, 2024 · 0:37


Land use is always on the mind of those who depend on the land for food, fiber and a way of life. Renewable energy touts being able to generate power again and again. Wind swept the nation and now solar is taking up real estate and stirring up debate. Scott Aaronson specializes in land acquisition and leasing for new solar projects as the CEO of the Demeter Land Development Company. We'll explore how this renewable energy source is reshaping the countryside and what it means for farmers, local communities, and our energy future.

The Nonlinear Library
LW - Proveably Safe Self Driving Cars by Davidmanheim

Sep 15, 2024 · 11:40


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Proveably Safe Self Driving Cars, published by Davidmanheim on September 15, 2024 on LessWrong. I've seen a fair amount of skepticism about the "Provably Safe AI" paradigm, but I think detractors give it too little credit. I suspect this is largely because of idea inoculation - people have heard an undeveloped or weak man version of the idea, for example, that we can use formal methods to state our goals and prove that an AI will do that, and have already dismissed it. (Not to pick on him at all, but see my question for Scott Aaronson here.) I will not argue that Guaranteed Safe AI solves AI safety generally, or that it could do so - I will leave that to others. Instead, I want to provide a concrete example of a near-term application, to respond to critics who say that proveability isn't useful because it can't be feasibly used in real world cases when it involves the physical world, and when it is embedded within messy human systems. I am making far narrower claims than the general ones which have been debated, but at the very least I think it is useful to establish whether this is actually a point of disagreement. And finally, I will admit that the problem I'm describing would be adding proveability to a largely solved problem, but it provides a concrete example for where the approach is viable. A path to provably safe autonomous vehicles To start, even critics agree that formal verification is possible, and is already used in practice in certain places. And given (formally specified) threat models in different narrow domains, there are ways to do threat and risk modeling and get different types of guarantees. For example, we already have proveably verifiable code for things like microkernels, and that means we can prove that buffer overflows, arithmetic exceptions, and deadlocks are impossible, and have hard guarantees for worst case execution time. This is a basis for further applications - we want to start at the bottom and build on provably secure systems, and get additional guarantees beyond that point. If we plan to make autonomous cars that are provably safe, we would build starting from that type of kernel, and then we "only" have all of the other safety issues to address. Secondly, everyone seems to agree that provable safety in physical systems requires a model of the world, and given the limits of physics, the limits of our models, and so on, any such approach can only provide approximate guarantees, and proofs would be conditional on those models. For example, we aren't going to formally verify that Newtonian physics is correct, we're instead formally verifying that if Newtonian physics is correct, the car will not crash in some situation. Proven Input Reliability Given that, can we guarantee that a car has some low probability of crashing? Again, we need to build from the bottom up. We can show that sensors have some specific failure rate, and use that to show a low probability of not identifying other cars, or humans - not in the direct formal verification sense, but instead with the types of guarantees typically used for hardware, with known failure rates, built in error detection, and redundancy. 
I'm not going to talk about how to do that class of risk analysis, but (modulus adversarial attacks, which I'll mention later,) estimating engineering reliability is a solved problem - if we don't have other problems to deal with. But we do, because cars are complex and interact with the wider world - so the trick will be integrating those risk analysis guarantees that we can prove into larger systems, and finding ways to build broader guarantees on top of them. But for the engineering reliability, we don't only have engineering proof. Work like DARPA's VerifAI is "applying formal methods to perception and ML components." Building guarantees about perceptio...
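As a quick illustration of the hardware-reliability step described above (this sketch is not from the post; the per-sensor miss rate and the 2-of-3 voting scheme are assumptions made for the example), composing known per-sensor failure rates into a bound on a redundant detection system looks roughly like this:

```python
# Illustrative sketch: probability that a k-of-n redundant sensing scheme fails,
# assuming independent sensor failures (an assumption, not a claim from the post).
from math import comb

def k_of_n_failure_prob(p_fail: float, n: int, k: int) -> float:
    """Probability that fewer than k of n independent sensors detect the obstacle,
    i.e. the k-of-n voting scheme as a whole misses it."""
    return sum(
        comb(n, j) * (1 - p_fail) ** j * p_fail ** (n - j)
        for j in range(k)  # j = number of sensors that detect correctly
    )

if __name__ == "__main__":
    p = 1e-3  # hypothetical per-sensor miss probability
    print(f"single sensor miss probability: {p:.1e}")
    print(f"2-of-3 voting miss probability: {k_of_n_failure_prob(p, n=3, k=2):.1e}")  # ~3e-6
```

This covers only the engineering-reliability layer the post mentions; the formal-verification guarantees it wants to build on top of that are a separate matter.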

The Current
Episode 59: Why Did Hurricane Beryl Take Down So Many Trees in Texas?

Jul 16, 2024 · 18:03


This week, EEI's Electric Perspectives Podcast and The Current are partnering for an episode to discuss the impacts of Hurricane Beryl and the tremendous restoration efforts that are underway. On this episode, you'll hear from StormGeo Meteorologist Justin Petrutsas — who is based in the Houston area — and Scott Aaronson, EEI Senior Vice President of Security and Preparedness, about the intensity of this hurricane as well as the complex work that is underway to safely restore power to impacted customers.

Electric Perspectives
Why Did Hurricane Beryl Take Down So Many Trees in Texas?

Jul 16, 2024 · 18:56


This week, EEI's Electric Perspectives Podcast and The Current are partnering for an episode to discuss the impacts of Hurricane Beryl and the tremendous restoration efforts that are underway. On this episode, you'll hear from StormGeo Meteorologist Justin Petrutsas — who is based in the Houston area — and Scott Aaronson, EEI Senior Vice President of Security and Preparedness, about the intensity of this hurricane as well as the complex work that is underway to safely restore power to impacted customers.

The Nonlinear Library
AF - The consistent guessing problem is easier than the halting problem by Jessica Taylor

May 20, 2024 · 6:43


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The consistent guessing problem is easier than the halting problem, published by Jessica Taylor on May 20, 2024 on The AI Alignment Forum. The halting problem is the problem of taking as input a Turing machine M, returning true if it halts, false if it doesn't halt. This is known to be uncomputable. The consistent guessing problem (named by Scott Aaronson) is the problem of taking as input a Turing machine M (which either returns a Boolean or never halts), and returning true or false; if M ever returns true, the oracle's answer must be true, and likewise for false. This is also known to be uncomputable. Scott Aaronson inquires as to whether the consistent guessing problem is strictly easier than the halting problem. This would mean there is no Turing machine that, when given access to a consistent guessing oracle, solves the halting problem, no matter which consistent guessing oracle (of which there are many) it has access too. As prior work, Andrew Drucker has written a paper describing a proof of this, although I find the proof hard to understand and have not checked it independently. In this post, I will prove this fact in a way that I at least find easier to understand. (Note that the other direction, that a Turing machine with access to a halting oracle can be a consistent guessing oracle, is trivial.) First I will show that a Turing machine with access to a halting oracle cannot in general determine whether another machine with access to a halting oracle will halt. Suppose M(O, N) is a Turing machine that returns true if N(O) halts, false otherwise, when O is a halting oracle. Let T(O) be a machine that runs M(O, T), halting if it returns false, running forever if it returns true. Now M(O, T) must be its own negation, a contradiction. In particular, this implies that the problem of deciding whether a Turing machine with access to a halting oracle halts cannot be a Σ01 statement in the arithmetic hierarchy, since these statements can be decided by a machine with access to a halting oracle. Now consider the problem of deciding whether a Turing machine with access to a consistent guessing oracle halts for all possible consistent guessing oracles. If this is a Σ01 statement, then consistent guessing oracles must be strictly weaker than halting oracles. Since, if there were a reliable way to derive a halting oracle from a consistent guessing oracle, then any machine with access to a halting oracle can be translated to one making use of a consistent guessing oracle, that halts for all consistent guessing oracles if and only if the original halts when given access to a halting oracle. That would make the problem of deciding whether a Turing machine with access to a halting oracle halts a Σ01 statement, which we have shown to be impossible. What remains to be shown is that the problem of deciding whether a Turing machine with access to a consistent guessing oracle halts for all consistent guessing oracles, is a Σ01 statement. To do this, I will construct a recursively enumerable propositional theory T that depends on the Turing machine. Let M be a Turing machine that takes an oracle as input (where an oracle maps encodings of Turing machines to Booleans). Add to the T the following propositional variables: ON for each Turing machine encoding N, representing the oracle's answer about this machine. H, representing that M(O) halts. 
Rs for each possible state s of the Turing machine, where the state includes the head state and the state of the tape, representing that s is reached by the machine's execution. Clearly, these variables are recursively enumerable and can be computably mapped to the natural numbers. We introduce the following axiom schemas: (a) For any machine N that halts and returns true, ON. (b) For any machine N that halts and returns false, ON. (c) For any ...
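The diagonalization step quoted above is compact enough to write out; this is just the argument as stated in the excerpt, set in LaTeX for readability:

```latex
% Suppose M(O, N) returns true iff N(O) halts, whenever O is a halting oracle.
% Define the machine T by
\[
T(O) \;=\;
\begin{cases}
\text{halt} & \text{if } M(O, T) = \text{false},\\
\text{run forever} & \text{if } M(O, T) = \text{true}.
\end{cases}
\]
% Then
\[
M(O, T) = \text{true} \;\iff\; T(O) \text{ halts} \;\iff\; M(O, T) = \text{false},
\]
% a contradiction, so no such M can exist.
```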

Metanoia Lab | Liderança, inovação e transformação digital, por Andrea Iorio
Ep. 169 | The "Game Over" theory: what is the problem with human uniqueness in the age of AI? Scott Aaronson, discussed by Andrea Iorio.

May 8, 2024 · 18:33


In this episode of the fourth season of Metanoia Lab, sponsored by Oi Soluções, Andrea (andreaiorio.com) analyzes a quote from Scott Aaronson, professor of quantum computing at the University of Texas at Austin and now a collaborator at OpenAI, about the "Game Over theory," and introduces the element of subjectivity into the question of whether AI already surpasses humans at all of their tasks, or only at those where the evaluation of the result is objective.

Clearer Thinking with Spencer Greenberg
Separating quantum computing hype from reality (with Scott Aaronson)

May 1, 2024 · 78:55


Read the full transcript here. What exactly is quantum computing? How much should we worry about the possibility that quantum computing will break existing cryptography tools? When will a quantum computer with enough horsepower to crack RSA likely appear? On what kinds of tasks will quantum computers likely perform better than classical computers? How legitimate are companies that are currently selling quantum computing solutions? How can scientists help to fight misinformation and misunderstandings about quantum computing? To what extent should the state of the art be exaggerated with the aim of getting people excited about the possibilities the technology might afford and encouraging them to invest in research or begin a career in the field? Is now a good time to go into the field (especially compared to other similar options, like going into the booming AI field)?

Scott Aaronson is Schlumberger Chair of Computer Science at the University of Texas at Austin and founding director of its Quantum Information Center, currently on leave at OpenAI to work on theoretical foundations of AI safety. He received his bachelor's from Cornell University and his PhD from UC Berkeley. Before coming to UT Austin, he spent nine years as a professor in Electrical Engineering and Computer Science at MIT. Aaronson's research in theoretical computer science has focused mainly on the capabilities and limits of quantum computers. His first book, Quantum Computing Since Democritus, was published in 2013 by Cambridge University Press. He received the National Science Foundation's Alan T. Waterman Award, the United States PECASE Award, the Tomassoni-Chisesi Prize in Physics, and the ACM Prize in Computing; and he is a Fellow of the ACM and the AAAS. Find out more about him at scottaaronson.blog.

Staff: Spencer Greenberg — Host / Director · Josh Castle — Producer · Ryan Kessler — Audio Engineer · Uri Bram — Factotum · WeAmplify — Transcriptionists · Alexandria D. — Research and Special Projects Assistant
Music: Broke for Free · Josh Woodward · Lee Rosevere · Quiet Music for Tiny Robots · wowamusic · zapsplat.com
Affiliates: Clearer Thinking · GuidedTrack · Mind Ease · Positly · UpLift

Theories of Everything with Curt Jaimungal
The Universe Is Simulated. Now What? | David Chalmers and Scott Aaronson (Part 3/3)

Apr 30, 2024 · 28:16


Here is a panel between David Chalmers and Scott Aaronson at Mindfest 2024. This discussion covers the philosophical implications of the simulation hypothesis, exploring whether our reality might be a simulation and engaging with various perspectives on the topic.This presentation was recorded at MindFest, held at Florida Atlantic University, CENTER FOR THE FUTURE MIND, spearheaded by Susan Schneider.YouTube: https://youtu.be/7PlmOXQ18jk Please consider signing up for TOEmail at https://www.curtjaimungal.org  Support TOE: - Patreon: https://patreon.com/curtjaimungal (early access to ad-free audio episodes!) - Crypto: https://tinyurl.com/cryptoTOE - PayPal: https://tinyurl.com/paypalTOE - TOE Merch: https://tinyurl.com/TOEmerch  Follow TOE: - *NEW* Get my 'Top 10 TOEs' PDF + Weekly Personal Updates: https://www.curtjaimungal.org - Instagram: https://www.instagram.com/theoriesofeverythingpod - TikTok: https://www.tiktok.com/@theoriesofeverything_ - Twitter: https://twitter.com/TOEwithCurt - Discord Invite: https://discord.com/invite/kBcnfNVwqs - iTunes: https://podcasts.apple.com/ca/podcast/better-left-unsaid-with-curt-jaimungal/id1521758802 - Pandora: https://pdora.co/33b9lfP - Spotify: https://open.spotify.com/show/4gL14b92xAErofYQA7bU4e - Subreddit r/TheoriesOfEverything: https://reddit.com/r/theoriesofeverything  

Theories of Everything with Curt Jaimungal
OpenAI's Scott Aaronson On The Simulation Hypothesis

Apr 12, 2024 · 12:38


Scott Aaronson gives a presentation at MindFest 2024, where he critiques the simulation hypothesis by questioning its scientific relevance and examining the computational feasibility of simulating complex physical theories. This presentation was recorded at MindFest, held at Florida Atlantic University, CENTER FOR THE FUTURE MIND, spearheaded by Susan Schneider. Please consider signing up for TOEmail at https://www.curtjaimungal.org LINKS MENTIONED: - Center for the Future Mind (Mindfest @ FAU): https://www.fau.edu/future-mind/ - Other Ai and Consciousness (Mindfest) TOE Podcasts: https://www.youtube.com/playlist?list=PLZ7ikzmc6zlOPw7Hqkc6-MXEMBy0fnZcb - Mathematics of String Theory (Video): https://youtu.be/X4PdPnQuwjY  Support TOE: - Patreon: https://patreon.com/curtjaimungal (early access to ad-free audio episodes!) - Crypto: https://tinyurl.com/cryptoTOE - PayPal: https://tinyurl.com/paypalTOE - TOE Merch: https://tinyurl.com/TOEmerch  Follow TOE: - *NEW* Get my 'Top 10 TOEs' PDF + Weekly Personal Updates: https://www.curtjaimungal.org - Instagram: https://www.instagram.com/theoriesofeverythingpod - TikTok: https://www.tiktok.com/@theoriesofeverything_ - Twitter: https://twitter.com/TOEwithCurt - Discord Invite: https://discord.com/invite/kBcnfNVwqs - iTunes: https://podcasts.apple.com/ca/podcast/better-left-unsaid-with-curt-jaimungal/id1521758802 - Pandora: https://pdora.co/33b9lfP - Spotify: https://open.spotify.com/show/4gL14b92xAErofYQA7bU4e - Subreddit r/TheoriesOfEverything: https://reddit.com/r/theoriesofeverything   

Cloud Security Podcast by Google
EP164 Quantum Computing: Understanding the (very serious) Threat and Post-Quantum Cryptography

Mar 18, 2024 · 31:23


Guest: Jennifer Fernick, Senior Staff Security Engineer and UTL, Google Topics: Since one of us (!) doesn't have a PhD in quantum mechanics, could you explain what a quantum computer is and how do we know they are on a credible path towards being real threats to cryptography? How soon do we need to worry about this one? We've heard that quantum computers are more of a threat to asymmetric/public key crypto than symmetric crypto. First off, why? And second, what does this difference mean for defenders? Why (how) are we sure this is coming? Are we mitigating a threat that is perennially 10 years ahead and then vanishes due to some other broad technology change? What is a post-quantum algorithm anyway? If we're baking new key exchange crypto into our systems, how confident are we that we are going to be resistant to both quantum and traditional cryptanalysis? Why does NIST think it's time to be doing the PQC thing now? Where is the rest of the industry on this evolution? How can a person tell the difference here between reality and snakeoil? I think Anton and I both responded to your initial email with a heavy dose of skepticism, and probably more skepticism than it deserved, so you get the rare on-air apology from both of us! Resources: Securing tomorrow today: Why Google now protects its internal communications from quantum threats · How Google is preparing for a post-quantum world · NIST PQC standards · PQ Crypto conferences · "Quantum Computation & Quantum Information" by Nielsen & Chuang book · "Quantum Computing Since Democritus" by Scott Aaronson book · EP154 Mike Schiffman: from Blueboxing to LLMs via Network Security at Google

The Nonlinear Library
LW - My PhD thesis: Algorithmic Bayesian Epistemology by Eric Neyman

Mar 17, 2024 · 12:23


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: My PhD thesis: Algorithmic Bayesian Epistemology, published by Eric Neyman on March 17, 2024 on LessWrong. In January, I defended my PhD thesis, which I called Algorithmic Bayesian Epistemology. From the preface: For me as for most students, college was a time of exploration. I took many classes, read many academic and non-academic works, and tried my hand at a few research projects. Early in graduate school, I noticed a strong commonality among the questions that I had found particularly fascinating: most of them involved reasoning about knowledge, information, or uncertainty under constraints. I decided that this cluster of problems would be my primary academic focus. I settled on calling the cluster algorithmic Bayesian epistemology: all of the questions I was thinking about involved applying the "algorithmic lens" of theoretical computer science to problems of Bayesian epistemology. Although my interest in mathematical reasoning about uncertainty dates back to before I had heard of the rationalist community, the community has no doubt influenced and strengthened this interest. The most striking example of this influence is Scott Aaronson's blog post Common Knowledge and Aumann's Agreement Theorem, which I ran into during my freshman year of college.[1] The post made things click together for me in a way that made me more intellectually honest and humble, and generally a better person. I also found the post incredibly intellectually interesting -- and indeed, Chapter 8 of my thesis is a follow-up to Scott Aaronson's academic paper on Aumann's agreement theorem. My interest in forecast elicitation and aggregation, while pre-existing, was no doubt influenced by the EA/rationalist-adjacent forecasting community. And Chapter 9 of the thesis (work I did at the Alignment Research Center) is no doubt causally downstream of the rationalist community. Which is all to say: thank you! Y'all have had a substantial positive impact on my intellectual journey. Chapter descriptions The thesis contains two background chapters followed by seven technical chapters (Chapters 3-9). In Chapter 1 (Introduction), I try to convey what exactly I mean by "algorithmic Bayesian epistemology" and why I'm excited about it. In Chapter 2 (Preliminaries), I give some technical background that's necessary for understanding the subsequent technical chapters. It's intended to be accessible to readers with a general college-level math background. While the nominal purpose of Chapter 2 is to introduce the mathematical tools used in later chapters, the topics covered there are interesting in their own right. Different readers will of course have different opinions about which technical chapters are the most interesting. Naturally, I have my own opinions: I think the most interesting chapters are Chapters 5, 7, and 9, so if you are looking for direction, you may want to tiebreak toward reading those. Here are some brief summaries: Chapter 3: Incentivizing precise forecasts. You might be familiar with proper scoring rules, which are mechanisms for paying experts for forecasts in a way that incentivizes the experts to report their true beliefs. But there are many proper scoring rules (most famously, the quadratic score and the log score), so which one should you use? 
There are many perspectives on this question, but the one I take in this chapter is: which proper scoring rule most incentivizes experts to do the most research before reporting their forecast? (See also this blog post I wrote explaining the research.) Chapter 4: Arbitrage-free contract functions. Now, what if you're trying to elicit forecasts from multiple experts? If you're worried about the experts colluding, your problem is now harder. It turns out that if you use the same proper scoring rule to pay every expert, then the experts can collu...
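Since Chapter 3 turns on proper scoring rules, here is a minimal sketch of the two rules named above, the quadratic (Brier) score and the log score, for a binary forecast; the code is illustrative and not taken from the thesis:

```python
# Illustrative sketch: the quadratic (Brier-style) and logarithmic proper scoring rules
# for a binary forecast p = P(event), scored against the outcome y in {0, 1}.
# "Proper" means a forecaster maximizes expected score by reporting their true belief.
import math

def quadratic_score(p: float, y: int) -> float:
    """Quadratic score, written so that higher is better."""
    return 1.0 - (y - p) ** 2

def log_score(p: float, y: int) -> float:
    """Log score: log-probability assigned to the realized outcome."""
    return math.log(p if y == 1 else 1.0 - p)

if __name__ == "__main__":
    q = 0.8  # the forecaster's true belief
    for p in (0.6, 0.8, 0.95):  # candidate reports
        e_quad = q * quadratic_score(p, 1) + (1 - q) * quadratic_score(p, 0)
        e_log = q * log_score(p, 1) + (1 - q) * log_score(p, 0)
        print(f"report p={p:.2f}: E[quadratic]={e_quad:.3f}, E[log]={e_log:.3f}")
    # Both expectations are maximized at the honest report p = q = 0.8.
```

Different proper rules nonetheless differ in how strongly they reward doing extra research before forecasting, which is the question that chapter takes up.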

Theories of Everything with Curt Jaimungal
What to Expect from Ai in 2025 | Scott Aaronson

Feb 27, 2024 · 69:34


LINKS MENTIONED:
  • FAU's Center for the Future Mind Website: https://www.fau.edu/future-mind/
  • The Ghost in the Quantum Turing Machine (Scott Aaronson): https://arxiv.org/abs/1306.0159
  • TOE's Mindfest Playlist: https://www.youtube.com/playlist?list=PLZ7ikzmc6zlOPw7Hqkc6-MXEMBy0fnZcb

The New Quantum Era
Dawning of the Era of Logical Qubits with Dr Vladan Vuletic

Feb 12, 2024 · 44:27


Kevin and Sebastian are joined by Dr. Vladan Vuletic, the Lester Wolfe Professor of Physics at the Center for Ultracold Atoms and Research in the Department of Physics at the Massachusetts Institute of Technology. At the end of 2023, the quantum computing community was startled and amazed by the results from a bombshell paper published in Nature on December 6th, titled Logical quantum processor based on reconfigurable atom arrays, in which Dr. Vuletic's group collaborated with Dr. Mikhail Lukin's group at Harvard to create 48 logical qubits from an array of 280 atoms. Scott Aaronson does a good job of breaking down the results on his blog, but the upshot is that this is the largest number of logical qubits created, and a very large leap ahead for the field. Timestamps: 00:00 Introduction and Background · 01:07 Path to Quantum Computing · 03:30 Rydberg Atoms and Quantum Gates · 08:56 Transversal Gates and Logical Qubits · 15:12 Implementation and Commercial Potential · 23:59 Future Outlook and Quantum Simulations · 30:51 Scaling and Applications · 32:22 Improving Quantum Gate Fidelity · 33:19 Advancing Field of View Systems · 33:48 Closing the Feedback Loop on Error Correction · 35:29 Quantum Error Correction as a Remarkable Breakthrough · 36:13 Cross-Fertilization of Quantum Error Correction Ideas

Quantum
Quantum 55: News, January 2024

Feb 7, 2024 · 44:28


- Events

  • A look back at the videos and presentations from Q2B SV 2023. They are all available on YouTube: John Preskill, Scott Aaronson, Rigetti, QuEra, Quantum Machines, quantum sensing by David Shaw of GQI, etc. One original: a joint presentation by Alice&Bob (Théau Peronnin) and Quantum Machines (Yonathan Cohen). https://www.youtube.com/playlist?list=PLh7C25oO7PW11hVx1WfZYEemv09E4rpiT https://q2b.qcware.com/2023-conferences/silicon-valley/videos/
  • International Quantum Internet Hackathon, organized in four European countries by the Quantum Internet Alliance: Delft (Netherlands), Dresden (Germany), Paris, and the Poznań Supercomputing and Networking Center (Poland), February 15-16, 2024. https://quantuminternetalliance.org/quantum-internet-hackathon-2024/ https://www.linkedin.com/posts/kapmarc_quantuminternet-quantum-technology-activity-7156200910602358784-qzg7/
  • National quantum strategy day on March 6, organized by the SGPI. Q2B Paris on March 7-8, co-organized with Bpifrance. https://q2b.qcware.com/2024-conferences/paris/
  • APS March Meeting in Minneapolis in early March, notably with Alice&Bob presenting the results of their first logical qubit.

- Quantum science and industry

  • QuEra roadmap up to 100 logical qubits, announced January 9, 2024. https://www.quera.com/events/queras-quantum-roadmap
  • Alice&Bob roadmap, also up to 100 logical qubits, announced January 23. LDPC-cat codes for low-overhead quantum computing in 2D by Diego Ruiz, Jérémie Guillaud, Anthony Leverrier, Mazyar Mirrahimi, and Christophe Vuillot, arXiv, January 2024 (23 pages). Also: Elie Girard in a Challenges podcast: https://open.spotify.com/episode/3kYxJWAJYf94PjMYR10ngT
  • IonQ reaches 35 useful qubits (AQ 35) in January 2024. https://ionq.com/posts/how-we-achieved-our-2024-performance-target-of-aq-35
  • Quandela and the EPIQUE project: https://www.quandela.com/wp-content/uploads/2024/01/kick-off-EPIQUE_Pressrelease-_ENG.pdf plus the PEPR OQuLus project: https://www.c2n.universite-paris-saclay.fr/fr/science-societe/actualites/actu/308
  • China and superconducting qubits: after Alibaba, Baidu is also throwing in the towel on quantum. https://thequantuminsider.com/2024/01/03/baidu-to-donate-quantum-computing-equipment-to-research-institute/ See Schrödinger cats growing up to 60 qubits and dancing in a cat scar enforced discrete time crystal by Zehang Bao et al., arXiv, January 2024 (35 pages).
  • Taiwan creates 5 superconducting qubits: https://thequantuminsider.com/2024/01/24/taiwans-5-qubit-superconducting-quantum-computer-goes-online-ahead-of-schedule/
  • Fluxonium qubits in France: see High-Sensitivity ac-Charge Detection with a MHz-Frequency Fluxonium Qubit by B.-L. Najera-Santos, R. Rousseau, K. Gerashchenko, H. Patange, A. Riva, M. Villiers, T. Briant, P.-F. Cohadon, A. Heidmann, J. Palomo, M. Rosticher, H. le Sueur, A. Sarlette, W. C. Smith, Z. Leghtas, E. Flurin, T. Jacqmin, and S. Deléglise, Physical Review X, January 2024 (18 pages).
  • Pasqal opens an office in Korea, headed by Roberto Mauro. https://www.hpcwire.com/off-the-wire/pasqal-welcomes-roberto-mauro-as-general-manager-to-spearhead-operations-in-south-korea/ Partnership with KAIST: https://www.hpcwire.com/off-the-wire/pasqal-forms-quantum-partnership-in-korea-with-kaist-and-daejeon-city/
  • D-Wave moves from 500 to 1,200 qubits in annealing mode for its new Advantage2 generation. https://dwavequantum.com/company/newsroom/press-release/d-wave-announces-1-200-qubit-advantage2-prototype-in-new-lower-noise-fabrication-stack-demonstrating-20x-faster-time-to-solution-on-important-class-of-hard-optimization-problems/
  • PQC at risk: https://www.bleepingcomputer.com/news/security/kyberslash-attacks-put-quantum-encryption-projects-at-risk/
  • QKD not ready? https://cyber.gouv.fr/actualites/uses-and-limits-quantum-key-distribution
  • Emulator benchmark by Cornelius Hempel of PSI with ETH Zurich and EPFL. See Benchmarking quantum computer simulation software packages by Amit Jamadagni, Andreas M. Läuchli, and Cornelius Hempel, arXiv, January 2024 (18 pages).
  • Quantinuum funding round: https://www.quantinuum.com/news/honeywell-announces-the-closing-of-300-million-equity-investment-round-for-quantinuum-at-5b-pre-money-valuation
  • Guerlain and its quantum cream: big reactions...

The Nonlinear Library
LW - On Dwarkesh's 3rd Podcast With Tyler Cowen by Zvi

The Nonlinear Library

Play Episode Listen Later Feb 4, 2024 29:44


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On Dwarkesh's 3rd Podcast With Tyler Cowen, published by Zvi on February 4, 2024 on LessWrong. This post is extensive thoughts on Tyler Cowen's excellent talk with Dwarkesh Patel. It is interesting throughout. You can read this while listening, after listening or instead of listening, and is written to be compatible with all three options. The notes are in order in terms of what they are reacting to, and are mostly written as I listened. I see this as having been a few distinct intertwined conversations. Tyler Cowen knows more about more different things than perhaps anyone else, so that makes sense. Dwarkesh chose excellent questions throughout, displaying an excellent sense of when to follow up and how, and when to pivot. The first conversation is about Tyler's book GOAT about the world's greatest economists. Fascinating stuff, this made me more likely to read and review GOAT in the future if I ever find the time. I mostly agreed with Tyler's takes here, to the extent I am in position to know, as I have not read that much in the way of what these men wrote, and at this point even though I very much loved it at the time (don't skip the digression on silver, even, I remember it being great) The Wealth of Nations is now largely a blur to me. There were also questions about the world and philosophy in general but not about AI, that I would mostly put in this first category. As usual, I have lots of thoughts. The second conversation is about expectations given what I typically call mundane AI. What would the future look like, if AI progress stalls out without advancing too much? We cannot rule such worlds out and I put substantial probability on them, so it is an important and fascinating question. If you accept the premise of AI remaining within the human capability range in some broad sense, where it brings great productivity improvements and rewards those who use it well but remains foundationally a tool and everything seems basically normal, essentially the AI-Fizzle world, then we have disagreements but Tyler is an excellent thinker about these scenarios. Broadly our expectations are not so different here. That brings us to the third conversation, about the possibility of existential risk or the development of more intelligent and capable AI that would have greater affordances. For a while now, Tyler has asserted that such greater intelligence likely does not much matter, that not so much would change, that transformational effects are highly unlikely, whether or not they constitute existential risks. That the world will continue to seem normal, and follow the rules and heuristics of economics, essentially Scott Aaronson's Futurama. Even when he says AIs will be decentralized and engage in their own Hayekian trading with their own currency, he does not think this has deep implications, nor does it imply much about what else is going on beyond being modestly (and only modestly) productive. Then at other times he affirms the importance of existential risk concerns, and indeed says we will be in need of a hegemon, but the thinking here seems oddly divorced from other statements, and thus often rather confused. Mostly it seems consistent with the view that it is much easier to solve alignment quickly, build AGI and use it to generate a hegemon, than it would be to get any kind of international coordination. 
And also that failure to quickly build AI risks our civilization collapsing. But also I notice this implies that the resulting AIs will be powerful enough to enable hegemony and determine the future, when in other contexts he does not think they will even enable sustained 10% GDP growth. Thus at this point, I choose to treat most of Tyler's thoughts on AI as if they are part of the second conversation, with an implicit 'assuming an AI at least semi-fizzle' attached ...

Quantum
Quantum 54: December 2023 news

Quantum

Play Episode Listen Later Jan 2, 2024 58:35


We kick off 2024 with the 54th episode of Quantum, the French-language podcast on quantum news.
Events
Q2B in Santa Clara, December 5-7: https://q2b.qcware.com/2023-conferences/silicon-valley/
Visit to Denmark, December 6-8: a detailed write-up is available on your site.
Quantum discovery workshop organized by LLQ/Les Maisons du quantique on December 13 at Station F.
A conference on quantum computing skepticism at École Polytechnique on December 21: the QuantX student club (binet) of l'X and its physics laboratory organized an evening round table with skeptics about the future of quantum computing. https://portail.polytechnique.edu/physique/fr/quantum-computing-between-promise-and-technological-challenges-round-table-discussion
2024 events, SAVE THE DATE:
· The National Quantum Day on March 6, organized by the SGPI.
· Q2B Paris, which will take place on March 7-8.
· France Quantum, which will take place on May 21, 2024, just before Vivatech. Still at Station F?
Entrepreneurial and scientific news
IBM Quantum Summit on December 4: IBM Debuts Next-Generation Quantum Processor & IBM Quantum System Two, Extends Roadmap to Advance Era of Quantum Utility, IBM Newsroom, December 2023. IBM Quantum Computing Blog | The hardware and software for the era of quantum utility is here by Jay Gambetta, December 2023. New developer tools for quantum computational scientists | IBM Research Blog by Ismael Faro, IBM Research Blog, December 2023.
48 logical qubits with neutral atoms, by researchers from MIT and QuEra. The paper in Nature: Logical quantum processor based on reconfigurable atom arrays by Dolev Bluvstein, Mikhail D. Lukin et al, Nature, December 2023 (42 pages). Open access to the Nature article: https://www.nature.com/articles/s41586-023-06927-3.epdf. The referee reports: https://static-content.springer.com/esm/art%3A10.1038%2Fs41586-023-06927-3/MediaObjects/41586_2023_6927_MOESM1_ESM.pdf The arXiv version: Logical quantum processor based on reconfigurable atom arrays by Dolev Bluvstein, [Submitted on 7 Dec 2023]. The post: https://www.linkedin.com/posts/quera-computing-inc_48-logical-qubits-ugcPost-7138243313991585793-S3dI Scott Aaronson's commentary: https://scottaaronson.blog/?p=7651
Alice&Bob announces its first logical qubit. See Alice & Bob Takes Another Step Toward Releasing Error-Corrected Logical Qubit by Matt Swayne, The Quantum Insider, December 2023.
Pasqal and its challenge on the impact of quantum computing on climate, which selected three finalist teams: the Blaise Pascal [re]Generative Quantum Challenge. The jury notably included Florent Menegaux (CEO of Michelin), Frederic Magniez of CNRS-IRIF, Kristel Michielsen (EMBD), and Georges-Olivier Reymond, CEO of PASQAL. https://www.pasqal.com/articles/three-winning-quantum-projects-announced-for-the-blaise-pascal-re-generative-quantum-challenge Regenerative Quantum Computing Whitepaper by Pasqal, December 2023 (72 pages).
Rigetti announces Novera, a 9-qubit superconducting QPU for $900K. https://www.globenewswire.com/news-release/2023/12/06/2792153/0/en/Rigetti-Launches-the-Novera-QPU-the-Company-s-First-Commercially-Available-QPU.html
CryptoNext raises €11M. CryptoNext Security Raises €11 Million to Reinforce Leadership in Post-Quantum Cryptography Remediation Solutions And Accelerate International Expansion by Matt Swayne, The Quantum Insider, December 2023.
Announcement of the European Quantum Pact with Thierry Breton, made in Spain, which held the presidency of the European Union in the second half of 2023. Link: the quantum pact.
Science
A review paper on quantum optimization algorithms: Quantum Optimization: Potential, Challenges, and the Path Forward by Amira Abbas et al, December 2023 (70 pages).
A review paper that gives a good explanation of the main quantum algorithms: Quantum algorithms scientific applications by R. Au-Yeung, B. Camino, O. Rathore, and V. Kendon, arXiv, December 2023 (60 pages).
A paper presenting a method for reducing the number of quantum gates in chemistry simulation algorithms, produced among others by the Qubit Pharmaceuticals teams: Polylogarithmic-depth controlled-NOT gates without ancilla qubits by Baptiste Claudon, Julien Zylberman, César Feniou, Fabrice Debbasch, Alberto Peruzzo, and Jean-Philip Piquemal, December 2023 (12 pages).

NEI Podcast
E210 - The PsychopharmaStahlogy Show: Potential Therapeutic Indications for Psychedelics with Dr. Scott Aaronson

NEI Podcast

Play Episode Listen Later Dec 20, 2023 58:41 Very Popular


What do the clinical trials of psilocybin and other psychedelics show in terms of efficacy and safety? How much of the benefit with psychedelic treatment is attributable to psychological support? Is the psychedelic trip important for the therapeutic benefits of psychedelic treatments? Brought to you by the NEI Podcast, the PsychopharmaStahlogy Show tackles the most novel, exciting, and controversial topics in psychopharmacology in a series of themes. This theme is on the role of psychedelics in modern psychiatry. Today, Dr. Andy Cutler interviews Dr. Scott Aaronson and Dr. Stephen Stahl about the potential indications and therapeutic benefits of psilocybin and other psychedelics. Let's listen to Part 3 of our theme: Classic Psychedelics for the Modern Psychopharmacologist. Subscribe to the NEI Podcast, so that you don't miss another episode!

The Origins Podcast with Lawrence Krauss
Scott Aaronson: From Quantum Computing to AI Safety

The Origins Podcast with Lawrence Krauss

Play Episode Listen Later Dec 15, 2023 182:28


Scott Aaronson is one of the deepest mathematical intellects I have known since, say, Ed Witten—the only physicist to have won the prestigious Fields Medal in Mathematics. While Ed is a string theorist, Scott decided to devote his mathematical efforts to the field of computer science, and as a theoretical computer scientist has played a major role in the development of algorithms that have pushed forward the field of quantum computing, and helped address several thorny issues that hamper our ability to create practical quantum computers. In addition to his research, Scott has, for a number of years, written a wonderful blog about issues in computing, in particular with regard to quantum computing. It is a great place to get educated about many of these issues. Most recently, Scott has spent the last year at OpenAI thinking about the difficult issue of AI safety, and how to ensure that as AI systems improve, they will not have an unduly negative or dangerous impact on human civilization. As I mention in the podcast, I am less worried than some people, and I think Scott is too, but nevertheless, some careful thinking in advance can avert a great deal of hand-wringing in the future. Scott has some very interesting ideas that are worth exploring, and we began to explore them in this podcast. Our conversation ran the gamut from quantum computing to AI safety and explored some complex ideas in computer science in the process, in particular the notion of computational complexity, which is important in understanding all of these issues. I hope you will find Scott's remarks as illuminating and informative as I did. As always, an ad-free video version of this podcast is also available to paid Critical Mass subscribers. Your subscriptions support the non-profit Origins Project Foundation, which produces the podcast. The audio version is available free on the Critical Mass site and on all podcast sites, and the video version will also be available on the Origins Project YouTube channel. Get full access to Critical Mass at lawrencekrauss.substack.com/subscribe

Theories of Everything with Curt Jaimungal
Scott Aaronson: The Greatest Unsolved Problem in Math

Theories of Everything with Curt Jaimungal

Play Episode Listen Later Dec 11, 2023 137:06 Very Popular


YouTube link https://youtu.be/1ZpGCQoL2Rk Scott Aaronson joins us to explore quantum computing, complexity theory, AI, superdeterminism, consciousness, and free will.
TIMESTAMPS:
- 00:00:00 Introduction
- 00:02:27 Turing universality & computational efficiency
- 00:12:35 Does prediction undermine free will?
- 00:15:16 Newcomb's paradox
- 00:23:05 Quantum information & no-cloning
- 00:33:42 Chaos & computational irreducibility
- 00:38:33 Brain duplication, AI, & identity
- 00:46:43 Many-worlds, Copenhagen, & Bohm's interpretation
- 01:03:14 Penrose's view on quantum gravity and consciousness
- 01:14:46 Superposition explained: misconceptions of quantum computing
- 01:21:33 Wolfram's physics project critique
- 01:31:37 P vs NP explained (complexity classes demystified)
- 01:53:40 Classical vs quantum computation
- 02:03:25 The "pretty hard" problem of consciousness (critiques of IIT)
NOTE: The perspectives expressed by guests don't necessarily mirror my own. There's a versicolored arrangement of people on TOE, each harboring distinct viewpoints, as part of my endeavor to understand the perspectives that exist.
THANK YOU: To Mike Duffy, of https://dailymystic.org for your insight, help, and recommendations on this channel.
- Patreon: https://patreon.com/curtjaimungal (early access to ad-free audio episodes!)
- Crypto: https://tinyurl.com/cryptoTOE
- PayPal: https://tinyurl.com/paypalTOE
- Twitter: https://twitter.com/TOEwithCurt
- Discord Invite: https://discord.com/invite/kBcnfNVwqs
- iTunes: https://podcasts.apple.com/ca/podcast/better-left-unsaid-with-curt-jaimungal/id1521758802
- Pandora: https://pdora.co/33b9lfP
- Spotify: https://open.spotify.com/show/4gL14b92xAErofYQA7bU4e
- Subreddit r/TheoriesOfEverything: https://reddit.com/r/theoriesofeverything
- TOE Merch: https://tinyurl.com/TOEmerch
LINKS MENTIONED:
- Scott's Blog: https://scottaaronson.blog/
- Newcomb's Paradox (Scott's Blog Post): https://scottaaronson.blog/?p=30
- A New Kind of Science (Stephen Wolfram): https://amzn.to/47BTiaf
- Jonathan Gorard's Papers: https://arxiv.org/search/gr-qc?searchtype=author&query=Gorard,+J
- Boson Sampling (Alex Arkhipov and Scott Aaronson): https://arxiv.org/abs/1011.3245
- Podcast w/ Tim Maudlin on TOE (Solo): https://youtu.be/fU1bs5o3nss
- Podcast w/ Tim Palmer on TOE: https://youtu.be/883R3JlZHXE

Theories of Everything with Curt Jaimungal
Free Will Explained by World's Top Intellectuals

Theories of Everything with Curt Jaimungal

Play Episode Listen Later Dec 8, 2023 260:42 Very Popular


YouTube Link: https://www.youtube.com/watch?v=SSbUCEleJhg&t=9315s
In our first ontoprism, we take a look back at FREE WILL across the years at Theories of Everything. If you have suggestions for future ontoprism topics, then comment below.
TIMESTAMPS:
- 00:00:00 Introduction
- 00:02:58 Michael Levin
- 00:08:51 David Wolpert (Part 1)
- 00:13:48 Donald Hoffman, Joscha Bach
- 00:33:10 Stuart Hameroff
- 00:38:47 Claudia Passos
- 00:40:27 Wolfgang Smith
- 00:42:50 Bernardo Kastrup
- 00:45:23 Matt O'Dowd
- 01:19:06 Anand Vaidya
- 01:28:52 Chris Langan, Bernardo Kastrup
- 01:44:27 David Wolpert (Part 2)
- 01:51:37 Scott Aaronson
- 01:59:47 Nicolas Gisin
- 02:16:52 David Wolpert (Part 3)
- 02:32:39 Brian Keating, Lee Cronin
- 02:42:55 Joscha Bach
- 02:46:07 Karl Friston
- 02:49:28 Noam Chomsky (Part 1)
- 02:55:06 John Vervaeke, Joscha Bach
- 03:13:27 Stephen Wolfram
- 03:32:46 Jonathan Blow
- 03:40:08 Noam Chomsky (Part 2)
- 03:49:38 Thomas Campbell
- 03:55:14 John Vervaeke
- 04:02:41 James Robert Brown
- 04:13:42 Anil Seth
- 04:17:37 More ontoprisms coming...
NOTE: The perspectives expressed by guests don't necessarily mirror my own. There's a versicolored arrangement of people on TOE, each harboring distinct viewpoints, as part of my endeavor to understand the perspectives that exist.
THANK YOU: To Mike Duffy, of https://dailymystic.org for your insight, help, and recommendations on this channel.
- Patreon: / curtjaimungal (early access to ad-free audio episodes!)
- Crypto: https://tinyurl.com/cryptoTOE
- PayPal: https://tinyurl.com/paypalTOE
- Twitter: / toewithcurt
- Discord Invite: / discord
- iTunes: https://podcasts.apple.com/ca/podcast...
- Pandora: https://pdora.co/33b9lfP
- Spotify: https://open.spotify.com/show/4gL14b9...
- Subreddit r/TheoriesOfEverything: / theoriesofeverything
- TOE Merch: https://tinyurl.com/TOEmerch
LINKS MENTIONED:
• Free Will Debate: "Is God A Taoist?" ...
• Unveiling the Mind-Blowing Biotech of...
• David Wolpert: Free Will & No Free Lu...
• Donald Hoffman Λ Joscha Bach: Conscio...
• Stuart Hameroff: Penrose & Fractal Co...
• Wolfgang Smith: Beyond Non-Dualism
• Escaping the Illusion: Bernardo Kastr...
• Matt O'Dowd: Your Mind vs. The Univer...
• Anand Vaidya: Moving BEYOND Non-Dualism
• Should You Fear Death? Bernardo Kastr...
• David Wolpert: Monotheism Theorem, Un...
• Nicolas Gisin: Time, Superdeterminism...
• David Wolpert: Monotheism Theorem, Un...
• Brian Keating Λ Lee Cronin: Life in t...
• Joscha Bach: Time, Simulation Hypothe...
• Karl Friston: Derealization, Consciou...
• Noam Chomsky
• Joscha Bach Λ John Vervaeke: Mind, Id...
• Stephen Wolfram: Ruliad, Consciousnes...
• Jonathan Blow: Consciousness, Game De...
• Noam Chomsky
• Thomas Campbell: Ego, Paranormal Psi,...
• Thomas Campbell: Remote Viewing, Spea...
• John Vervaeke: Psychedelics, Evil, & ...
• James Robert Brown: The Continuum Hyp...
• Anil Seth: Neuroscience of Consciousn...

The Top Line
'The Top Line': A look at psychedelics and the next frontier of mental healthcare

The Top Line

Play Episode Listen Later Nov 10, 2023 17:14


In this episode of “The Top Line,” we explore the next potential wave of mental health therapy: psychedelics. Fierce Biotech's Max Bayer sits down with Scott Aaronson, M.D., a seasoned clinician and psychiatrist with over 30 years of experience in the field. They discuss how psychedelics might be used in the treatment of conditions such as depression, substance addiction, PTSD, anorexia, and more. They also examine the obstacles developers face as they advance their studies.
To learn more about the topics in this episode:
MDMA approval filing nears after drug hits again in phase 3, showing consistent PTSD improvements
Early clinical data on psilocybin in anorexia point Compass to potential new opportunity
Otsuka adopts new Mindset, dropping $59M to buy Canadian psychedelic biotech
See omnystudio.com/listener for privacy information.

Eye On A.I.
#143 Scott Aaronson: Revealing the Truth About Quantum Computing

Eye On A.I.

Play Episode Listen Later Oct 9, 2023 69:03


This episode is sponsored by Crusoe. Crusoe Cloud is a scalable, clean, high-performance cloud, optimized for AI and HPC workloads, and powered by wasted, stranded or clean energy. Crusoe offers virtualized compute and storage solutions for a range of applications, including generative AI, computational biology, and rendering. Visit crusoecloud.com to see what climate-aligned computing can do for your business.
On episode #143 of Eye on AI, Craig Smith sits down with Scott Aaronson, Schlumberger Centennial Chair of Computer Science at The University of Texas and director of its Quantum Information Center. In this episode, we cut through the quantum computing hype and explore its profound implications for AI. We reveal the practicality of quantum computing, examining how companies are leveraging it to solve intricate problems, like vehicle routing, using D-Wave systems. Scott and I delve into the distinctions between quantum annealing and Grover-type speedups, shedding light on the potential of hybrid solutions that blend classical and quantum elements. Shifting gears, we delve into the synergy between quantum computing and AI safety. Scott shares insights from his work at OpenAI, particularly a project aimed at fine-tuning language models like GPT for detecting AI-generated text, highlighting the implications of such advanced AI technology's potential misuse.
If you enjoyed this podcast, please consider leaving a 5-star rating on Spotify and a review on Apple Podcasts.
Craig Smith's Twitter: https://twitter.com/craigss
Eye on A.I. Twitter: https://twitter.com/EyeOn_AI
(00:00) Preview and Introduction
(04:13) Demystifying Quantum Computing
(16:04) Leveraging Quantum Computers for Optimization
(31:01) What is Quantum Computing?
(42:40) Advancements and Challenges in Quantum Computing
(54:57) Machine Learning and AI Safety

Zero Knowledge
Episode 288: Quantum Cryptography with Or Sattath

Zero Knowledge

Play Episode Listen Later Aug 16, 2023 63:03


In this week's episode, Anna Rose (https://twitter.com/annarrose) and Kobi Gurkan (https://twitter.com/kobigurk) chat with Or Sattath (https://twitter.com/or_sattath), Assistant Professor in the Computer Science department at Ben-Gurion University (https://cris.bgu.ac.il/en/persons/or-sattath). They take a deep dive into Or's work on quantum cryptography. They begin with definitions of quantum computing and quantum cryptography, covering what these will mean for existing cryptography. They also explore how new discoveries in this field can interact with existing proof-of-work systems and how quantum computers could affect the game theory of mining in the future.
Here are some additional links for this episode:
On the insecurity of quantum Bitcoin mining by Sattath (https://arxiv.org/abs/1804.08118)
Strategies for quantum races by Lee, Ray, and Santha (https://arxiv.org/abs/1809.03671)
Polynomial-Time Algorithms for Prime Factorization and Discrete Logarithms on a Quantum Computer by Shor (https://arxiv.org/abs/quant-ph/9508027)
Shor's Algorithm (https://quantum-computing.ibm.com/composer/docs/iqx/guide/shors-algorithm)
Grover's Algorithm (https://quantum-computing.ibm.com/composer/docs/iqx/guide/grovers-algorithm)
A fast quantum mechanical algorithm for database search by Grover (https://arxiv.org/abs/quant-ph/9605043)
Bell's Theorem (https://plato.stanford.edu/entries/bell-theorem/)
More in-depth resources recommended by Or Sattath:
A recommended SMBC comic (https://www.smbc-comics.com/comic/the-talk-3) about the power of quantum computing, authored by Zach Weinersmith (the usual cartoonist) and Scott Aaronson (a quantum computing expert)
For an in-depth introduction to quantum computing, I recommend Ronald de Wolf's lecture notes (https://homepages.cwi.nl/~rdewolf/qcnotes.pdf)
The Bitcoin backbone protocol with a single quantum miner, by Cojocaru et al (https://eprint.iacr.org/2019/1150)
The fingerprint of quantum mining slightly below 16 minutes by Nerem and Gaur (https://arxiv.org/abs/2110.00878)
Some estimates regarding timelines, which we didn't discuss, are available here (https://arxiv.org/abs/1710.10377) and here (https://qrc.btq.li/)
The insecurity of quantum Bitcoin mining (https://arxiv.org/abs/1804.08118), and the need to change the tie-breaking rule
The work by Lee, Ray, and Santha (https://arxiv.org/abs/1809.03671) that analyzes the equilibrium strategy for multiple quantum miners, as a simplified one-shot game
zkSummit 10 is happening in London on September 20, 2023! Apply to attend now -> zkSummit 10 Application Form (https://9lcje6jbgv1.typeform.com/zkSummit10).
Polygon Labs (https://polygon.technology/) is thrilled to announce Polygon 2.0: The Value Layer for the Internet (https://polygon.technology/roadmap). Polygon 2.0 and all of our ZK tech is open-source and community-driven. Reach out to the Polygon community on Discord (https://discord.gg/0xpolygon) to learn more, contribute, or join in and build the future of Web3 together with Polygon!
If you like what we do:
* Find all our links here! @ZeroKnowledge | Linktree (https://linktr.ee/zeroknowledge)
* Subscribe to our podcast newsletter (https://zeroknowledge.substack.com)
* Follow us on Twitter @zeroknowledgefm (https://twitter.com/zeroknowledgefm)
* Join us on Telegram (https://zeroknowledge.fm/telegram)
* Catch us on YouTube (https://zeroknowledge.fm/)

Conversations With Coleman
Will AI Destroy Us? - AI Virtual Roundtable

Conversations With Coleman

Play Episode Listen Later Jul 28, 2023 91:04


Today's episode is a roundtable discussion about AI safety with Eliezer Yudkowsky, Gary Marcus, and Scott Aaronson. Eliezer Yudkowsky is a prominent AI researcher and writer known for co-founding the Machine Intelligence Research Institute, where he spearheaded research on AI safety. He's also widely recognized for his influential writings on the topic of rationality. Scott Aaronson is a theoretical computer scientist and author, celebrated for his pioneering work in the field of quantum computation. He also holds the Schlumberger Centennial Chair of Computer Science at UT Austin, but is currently taking a leave of absence to work at OpenAI. Gary Marcus is a cognitive scientist, author, and entrepreneur known for his work at the intersection of psychology, linguistics, and AI. He's also authored several books, including "Kluge" and "Rebooting AI: Building Artificial Intelligence We Can Trust". This episode is all about AI safety. We talk about the alignment problem. We talk about the possibility of human extinction due to AI. We talk about what intelligence actually is. We talk about the notion of a singularity or an AI takeoff event and much more. It was really great to get these three guys in the same virtual room and I think you'll find that this conversation brings something a bit fresh to a topic that has admittedly been beaten to death on certain corners of the internet.

Quantum Computing Now
Reprogrammable Quantum Secure Hardware with Mamta Gupta - Episode 47

Quantum Computing Now

Play Episode Listen Later Jul 14, 2023 51:03


We've heard on the show before about software needed to secure devices in a post-quantum world, but what about the hardware? Mamta Gupta from Lattice Semiconductor is here to tell us all about that! A note: at one point, Mamta talks about massive parallelism being the reason for quantum computing's speedup. As far as I can tell, it's not. I didn't think during the podcast was the best time to bring it up, but if you want to learn more, I recommend looking at the episode I did with Scott Aaronson and also the episode with Jon Skerrett CNSA 2.0: https://media.defense.gov/2022/Sep/07/2003071834/-1/-1/0/CSA_CNSA_2.0_ALGORITHMS_.PDF Isara timeline: https://www.isara.com/blog-posts/quantum-computing-urgency-and-timeline.html Cryptographic Agility with Mike Brown – Episode 37: https://podcasters.spotify.com/pod/show/quantumcomputingnow/episodes/Cryptographic-Agility-with-Mike-Brown--Episode-37-e133ela https://www.latticesemi.com/ Quantum Open Source Foundation: https://qosf.org/ QOSF showcase: https://podcasters.spotify.com/pod/show/quantumcomputingnow/episodes/QOSF-Showcase--Episode-28-Hybrid-eqir0q Interview with Michal Stechly (founder of QOSF): https://podcasters.spotify.com/pod/show/quantumcomputingnow/episodes/Micha-Stchy-and-QOSF--Episode-21-Hybrid-ej5e2u Lattice LinkedIn: https://www.linkedin.com/company/lattice-semiconductor Lattice twitter: http://www.twitter.com/latticesemi Lattice Facebook: http://www.facebook.com/latticesemi https://www.minds.com/1ethanhansen 1ethanhansen@protonmail.com QRL: Q0106000c95fe7c29fa6fc841ab9820888d807f41d4a99fc4ad9ec5510a5334c72ef8d0f8c44698 Monero: 47e9C55PhuWDksWL9BRoJZ2N5c6FwP9EFUcbWmXZS8AWfazgxZVeaw7hZZmXXhf3VQgodWKwVq629YC32tEd1STkStwfh5Y Ethereum: 0x9392079Eb419Fa868a8929ED595bd3A85397085B --- Send in a voice message: https://podcasters.spotify.com/pod/show/quantumcomputingnow/message

The Gradient Podcast
Scott Aaronson: Against AI Doomerism

The Gradient Podcast

Play Episode Listen Later May 11, 2023 69:32


In episode 72 of The Gradient Podcast, Daniel Bashir speaks to Professor Scott Aaronson. Scott is the Schlumberger Centennial Chair of Computer Science at the University of Texas at Austin and director of its Quantum Information Center. His research interests focus on the capabilities and limits of quantum computers and computational complexity theory more broadly. He has recently been on leave to work at OpenAI, where he is researching theoretical foundations of AI safety. Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at editor@thegradient.pub
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter
Outline:
* (00:00) Intro
* (01:45) Scott's background
* (02:50) Starting grad school in AI, transitioning to quantum computing and the AI / quantum computing intersection
* (05:30) Where quantum computers can give us exponential speedups, simulation overhead, Grover's algorithm
* (10:50) Overselling of quantum computing applied to AI, Scott's analysis on quantum machine learning
* (18:45) ML problems that involve quantum mechanics and Scott's work
* (21:50) Scott's recent work at OpenAI
* (22:30) Why Scott was skeptical of AI alignment work early on
* (26:30) Unexpected improvements in modern AI and Scott's belief update
* (32:30) Preliminary Analysis of DALL-E 2 (Marcus & Davis)
* (34:15) Watermarking GPT outputs
* (41:00) Motivations for watermarking and language model detection
* (45:00) Ways around watermarking
* (46:40) Other aspects of Scott's experience with OpenAI, theoretical problems
* (49:10) Thoughts on definitions for humanistic concepts in AI
* (58:45) Scott's "reform AI alignment stance" and Eliezer Yudkowsky's recent comments (+ Daniel pronounces Eliezer wrong), orthogonality thesis, cases for stopping scaling
* (1:08:45) Outro
Links:
* Scott's blog
* AI-related work
* Quantum Machine Learning Algorithms: Read the Fine Print
* A very preliminary analysis of DALL-E 2 w/ Marcus and Davis
* New AI classifier for indicating AI-written text and Watermarking GPT Outputs
* Writing
* Should GPT exist?
* AI Safety Lecture
* Why I'm not terrified of AI
Get full access to The Gradient at thegradientpub.substack.com/subscribe

Weird Religion
105 THE PAPERCLIPALYPSE (pre-existent AI spirituality and doom scenarios)

Weird Religion

Play Episode Listen Later May 3, 2023 29:52


The paperclipsalypse has arrived, as AI comes to destroy or save the world. Was AI there all along and we just now “discovered” its weird and eternal life?
The “paperclip maximizer” thought experiment: https://nickbostrom.com/ethics/ai
Eliezer Yudkowsky in TIME magazine, “Shut it Down”: https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/
Scott Aaronson, Shtetl-Optimized: https://scottaaronson.blog/
Drake + The Weeknd AI hit: https://www.youtube.com/watch?v=81Kafnm0eKQ
AI Beer commercial: https://www.tiktok.com/@realwholesomechannel/video/7227965471683775771
What if the AI was already there? https://twitter.com/drmichaellevin/status/1637449677028093953?s=20
Jacques Vallée on UFOs: https://www.wired.com/story/jacques-vallee-still-doesnt-know-what-ufos-are/
AI (2001 movie) scene: “The Flesh Fair”: https://www.youtube.com/watch?v=ZMbAmqD_tn0

The Bayesian Conspiracy
Bayes Blast 12 – AI White Pills

The Bayesian Conspiracy

Play Episode Listen Later Apr 25, 2023 4:19


One of these white pills is more realistic than the other. Which one is which is left as an exercise to the listener. 😉
Roko's White Pill
AXRP: 'Reform' AI Alignment with Scott Aaronson

The Nonlinear Library
AF - AXRP Episode 20 - ‘Reform' AI Alignment with Scott Aaronson by DanielFilan

The Nonlinear Library

Play Episode Listen Later Apr 12, 2023 95:47


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AXRP Episode 20 - ‘Reform' AI Alignment with Scott Aaronson, published by DanielFilan on April 12, 2023 on The AI Alignment Forum. Google Podcasts link How should we scientifically think about the impact of AI on human civilization, and whether or not it will doom us all? In this episode, I speak with Scott Aaronson about his views on how to make progress in AI alignment, as well as his work on watermarking the output of language models, and how he moved from a background in quantum complexity theory to working on AI. Topics we discuss: ‘Reform' AI alignment Epistemology of AI risk Immediate problems and existential risk Aligning deceitful AI Stories of AI doom Language models Democratic governance of AI What would change Scott's mind Watermarking language model outputs Watermark key secrecy and backdoor insertion Scott's transition to AI research Theoretical computer science and AI alignment AI alignment and formalizing philosophy How Scott finds AI research Following Scott's research Daniel Filan: Hello, everyone. In this episode, I'll be speaking with Scott Aaronson. Scott is a professor of computer science at UT Austin and he's currently spending a year as a visiting scientist at OpenAI working on the theoretical foundations of AI safety. We'll be talking about his view of the field, as well as the work he's doing at OpenAI. For links to what we're discussing, you can just check the description of this episode and you can read the transcript at axrp.net. Scott, welcome to AXRP. Scott Aaronson: Thank you. Good to be here. ‘Reform' AI alignment Epistemology of AI risk Daniel Filan: So you recently wrote this blog post about something you called reform AI alignment: basically your take on AI alignment that's somewhat different from what you see as a traditional view or something. Can you tell me a little bit about, do you see AI causing or being involved in a really important way in existential risk anytime soon, and if so, how? Scott Aaronson: Well, I guess it depends what you mean by soon. I am not a very good prognosticator. I feel like even in quantum computing theory, which is this tiny little part of the intellectual world where I've spent 25 years of my life, I can't predict very well what's going to be discovered a few years from now in that, and if I can't even do that, then how much less can I predict what impacts AI is going to have on human civilization over the next century? Of course, I can try to play the Bayesian game, and I even will occasionally accept bets if I feel really strongly about something, but I'm also kind of a wuss. I'm a little bit risk-averse, and I like to tell people whenever they ask me ‘how soon will AI take over the world?', or before that, it was more often, ‘how soon will we have a fault-tolerant quantum computer?'. They don't want all the considerations and explanations that I can offer, they just want a number, and I like to tell them, “Look, if I were good at that kind of thing, I wouldn't be a professor, would I? I would be an investor and I would be a multi-billionaire.” So I feel like probably, there are some people in the world who can just consistently see what is coming in decades and get it right. 
There are hedge funds that are consistently successful (not many), but I feel like the way that science has made progress for hundreds of years has not been to try to prognosticate the whole shape of the future. It's been to look a little bit ahead, look at the problems that we can see right now that could actually be solved, and rather than predicting 10 steps ahead the future, you just try to create the next step ahead of the future and try to steer it in what looks like a good direction, and I feel like that is what I try to do as a scientist. And I've known the rationalist community, the AI risk community since. maybe no...

AXRP - the AI X-risk Research Podcast
20 - 'Reform' AI Alignment with Scott Aaronson

AXRP - the AI X-risk Research Podcast

Play Episode Listen Later Apr 12, 2023 147:35


How should we scientifically think about the impact of AI on human civilization, and whether or not it will doom us all? In this episode, I speak with Scott Aaronson about his views on how to make progress in AI alignment, as well as his work on watermarking the output of language models, and how he moved from a background in quantum complexity theory to working on AI. Note: this episode was recorded before this story emerged of a man committing suicide after discussions with a language-model-based chatbot, that included discussion of the possibility of him killing himself. Patreon: https://www.patreon.com/axrpodcast Store: https://store.axrp.net/ Ko-fi: https://ko-fi.com/axrpodcast Topics we discuss, and timestamps: 0:00:36 - 'Reform' AI alignment 0:01:52 - Epistemology of AI risk 0:20:08 - Immediate problems and existential risk 0:24:35 - Aligning deceitful AI 0:30:59 - Stories of AI doom 0:34:27 - Language models 0:43:08 - Democratic governance of AI 0:59:35 - What would change Scott's mind 1:14:45 - Watermarking language model outputs 1:41:41 - Watermark key secrecy and backdoor insertion 1:58:05 - Scott's transition to AI research 2:03:48 - Theoretical computer science and AI alignment 2:14:03 - AI alignment and formalizing philosophy 2:22:04 - How Scott finds AI research 2:24:53 - Following Scott's research The transcript Links to Scott's things: Personal website Book, Quantum Computing Since Democritus Blog, Shtetl-Optimized Writings we discuss: Reform AI Alignment Planting Undetectable Backdoors in Machine Learning Models

The Lunar Society
Eliezer Yudkowsky - Why AI Will Kill Us, Aligning LLMs, Nature of Intelligence, SciFi, & Rationality

The Lunar Society

Play Episode Listen Later Apr 6, 2023 243:25


For 4 hours, I tried to come up with reasons for why AI might not kill us all, and Eliezer Yudkowsky explained why I was wrong. We also discuss his call to halt AI, why LLMs make alignment harder, what it would take to save humanity, his millions of words of sci-fi, and much more. If you want to get to the crux of the conversation, fast forward to 2:35:00 through 3:43:54. Here we go through and debate the main reasons I still think doom is unlikely.
Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.
As always, the most helpful thing you can do is just to share the podcast - send it to friends, group chats, Twitter, Reddit, forums, and wherever else men and women of fine taste congregate. If you have the means and have enjoyed my podcast, I would appreciate your support via a paid subscription on Substack.
Timestamps
(0:00:00) - TIME article
(0:09:06) - Are humans aligned?
(0:37:35) - Large language models
(1:07:15) - Can AIs help with alignment?
(1:30:17) - Society's response to AI
(1:44:42) - Predictions (or lack thereof)
(1:56:55) - Being Eliezer
(2:13:06) - Orthogonality
(2:35:00) - Could alignment be easier than we think?
(3:02:15) - What will AIs want?
(3:43:54) - Writing fiction & whether rationality helps you win
Transcript
TIME article
Dwarkesh Patel 0:00:51
Today I have the pleasure of speaking with Eliezer Yudkowsky. Eliezer, thank you so much for coming out to the Lunar Society.
Eliezer Yudkowsky 0:01:00
You're welcome.
Dwarkesh Patel 0:01:01
Yesterday, when we're recording this, you had an article in Time calling for a moratorium on further AI training runs. My first question is — It's probably not likely that governments are going to adopt some sort of treaty that restricts AI right now. So what was the goal with writing it?
Eliezer Yudkowsky 0:01:25
I thought that this was something very unlikely for governments to adopt and then all of my friends kept on telling me — “No, no, actually, if you talk to anyone outside of the tech industry, they think maybe we shouldn't do that.” And I was like — All right, then. I assumed that this concept had no popular support. Maybe I assumed incorrectly. It seems foolish and to lack dignity to not even try to say what ought to be done. There wasn't a galaxy-brained purpose behind it. I think that over the last 22 years or so, we've seen a great lack of galaxy brained ideas playing out successfully.
Dwarkesh Patel 0:02:05
Has anybody in the government reached out to you, not necessarily after the article but just in general, in a way that makes you think that they have the broad contours of the problem correct?
Eliezer Yudkowsky 0:02:15
No. I'm going on reports that normal people are more willing than the people I've been previously talking to, to entertain calls that this is a bad idea and maybe you should just not do that.
Dwarkesh Patel 0:02:30
That's surprising to hear, because I would have assumed that the people in Silicon Valley who are weirdos would be more likely to find this sort of message. They could kind of rocket the whole idea that AI will make nanomachines that take over. It's surprising to hear that normal people got the message first.
Eliezer Yudkowsky 0:02:47
Well, I hesitate to use the term midwit but maybe this was all just a midwit thing.
Dwarkesh Patel 0:02:54
All right. So my concern with either the 6 month moratorium or forever moratorium until we solve alignment is that at this point, it could make it seem to people like we're crying wolf.
And it would be like crying wolf because these systems aren't yet at a point at which they're dangerous. Eliezer Yudkowsky 0:03:13And nobody is saying they are. I'm not saying they are. The open letter signatories aren't saying they are.Dwarkesh Patel 0:03:20So if there is a point at which we can get the public momentum to do some sort of stop, wouldn't it be useful to exercise it when we get a GPT-6? And who knows what it's capable of. Why do it now?Eliezer Yudkowsky 0:03:32Because allegedly, and we will see, people right now are able to appreciate that things are storming ahead a bit faster than the ability to ensure any sort of good outcome for them. And you could be like — “Ah, yes. We will play the galaxy-brained clever political move of trying to time when the popular support will be there.” But again, I heard rumors that people were actually completely open to the concept of  let's stop. So again, I'm just trying to say it. And it's not clear to me what happens if we wait for GPT-5 to say it. I don't actually know what GPT-5 is going to be like. It has been very hard to call the rate at which these systems acquire capability as they are trained to larger and larger sizes and more and more tokens. GPT-4 is a bit beyond in some ways where I thought this paradigm was going to scale. So I don't actually know what happens if GPT-5 is built. And even if GPT-5 doesn't end the world, which I agree is like more than 50% of where my probability mass lies, maybe that's enough time for GPT-4.5 to get ensconced everywhere and in everything, and for it actually to be harder to call a stop, both politically and technically. There's also the point that training algorithms keep improving. If we put a hard limit on the total computes and training runs right now, these systems would still get more capable over time as the algorithms improved and got more efficient. More oomph per floating point operation, and things would still improve, but slower. And if you start that process off at the GPT-5 level, where I don't actually know how capable that is exactly, you may have a bunch less lifeline left before you get into dangerous territory.Dwarkesh Patel 0:05:46The concern is then that — there's millions of GPUs out there in the world. The actors who would be willing to cooperate or who could even be identified in order to get the government to make them cooperate, would potentially be the ones that are most on the message. And so what you're left with is a system where they stagnate for six months or a year or however long this lasts. And then what is the game plan? Is there some plan by which if we wait a few years, then alignment will be solved? Do we have some sort of timeline like that?Eliezer Yudkowsky 0:06:18Alignment will not be solved in a few years. I would hope for something along the lines of human intelligence enhancement works. I do not think they're going to have the timeline for genetically engineered humans to work but maybe? This is why I mentioned in the Time letter that if I had infinite capability to dictate the laws that there would be a carve-out on biology, AI that is just for biology and not trained on text from the internet. Human intelligence enhancement, make people smarter. Making people smarter has a chance of going right in a way that making an extremely smart AI does not have a realistic chance of going right at this point. If we were on a sane planet, what the sane planet does at this point is shut it all down and work on human intelligence enhancement. 
I don't think we're going to live in that sane world. I think we are all going to die. But having heard that people are more open to this outside of California, it makes sense to me to just try saying out loud what it is that you do on a saner planet and not just assume that people are not going to do that.Dwarkesh Patel 0:07:30In what percentage of the worlds where humanity survives is there human enhancement? Like even if there's 1% chance humanity survives, is that entire branch dominated by the worlds where there's some sort of human intelligence enhancement?Eliezer Yudkowsky 0:07:39I think we're just mainly in the territory of Hail Mary passes at this point, and human intelligence enhancement is one Hail Mary pass. Maybe you can put people in MRIs and train them using neurofeedback to be a little saner, to not rationalize so much. Maybe you can figure out how to have something light up every time somebody is working backwards from what they want to be true to what they take as their premises. Maybe you can just fire off little lights and teach people not to do that so much. Maybe the GPT-4 level systems can be RLHF'd (reinforcement learning from human feedback) into being consistently smart, nice and charitable in conversation and just unleash a billion of them on Twitter and just have them spread sanity everywhere. I do worry that this is not going to be the most profitable use of the technology, but you're asking me to list out Hail Mary passes and that's what I'm doing. Maybe you can actually figure out how to take a brain, slice it, scan it, simulate it, run uploads and upgrade the uploads, or run the uploads faster. These are also quite dangerous things, but they do not have the utter lethality of artificial intelligence.Are humans aligned?Dwarkesh Patel 0:09:06All right, that's actually a great jumping point into the next topic I want to talk to you about. Orthogonality. And here's my first question — Speaking of human enhancement, suppose you bred human beings to be friendly and cooperative, but also more intelligent. I claim that over many generations you would just have really smart humans who are also really friendly and cooperative. Would you disagree with that analogy? I'm sure you're going to disagree with this analogy, but I just want to understand why?Eliezer Yudkowsky 0:09:31The main thing is that you're starting from minds that are already very, very similar to yours. You're starting from minds, many of which already exhibit the characteristics that you want. There are already many people in the world, I hope, who are nice in the way that you want them to be nice. Of course, it depends on how nice you want exactly. I think that if you actually go start trying to run a project of selectively encouraging some marriages between particular people and encouraging them to have children, you will rapidly find, as one does in any such process that when you select on the stuff you want, it turns out there's a bunch of stuff correlated with it and that you're not changing just one thing. If you try to make people who are inhumanly nice, who are nicer than anyone has ever been before, you're going outside the space that human psychology has previously evolved and adapted to deal with, and weird stuff will happen to those people. None of this is very analogous to AI. I'm just pointing out something along the lines of — well, taking your analogy at face value, what would happen exactly? 
It's the sort of thing where you could maybe do it, but there's all kinds of pitfalls that you'd probably find out about if you cracked open a textbook on animal breeding.Dwarkesh Patel 0:11:13The thing you mentioned initially, which is that we are starting off with basic human psychology, that we are fine tuning with breeding. Luckily, the current paradigm of AI is  — you have these models that are trained on human text and I would assume that this would give you a starting point of something like human psychology.Eliezer Yudkowsky 0:11:31Why do you assume that?Dwarkesh Patel 0:11:33Because they're trained on human text.Eliezer Yudkowsky 0:11:34And what does that do?Dwarkesh Patel 0:11:36Whatever thoughts and emotions that lead to the production of human text need to be simulated in the AI in order to produce those results.Eliezer Yudkowsky 0:11:44I see. So if you take an actor and tell them to play a character, they just become that person. You can tell that because you see somebody on screen playing Buffy the Vampire Slayer, and that's probably just actually Buffy in there. That's who that is.Dwarkesh Patel 0:12:05I think a better analogy is if you have a child and you tell him — Hey, be this way. They're more likely to just be that way instead of putting on an act for 20 years or something.Eliezer Yudkowsky 0:12:18It depends on what you're telling them to be exactly. Dwarkesh Patel 0:12:20You're telling them to be nice.Eliezer Yudkowsky 0:12:22Yeah, but that's not what you're telling them to do. You're telling them to play the part of an alien, something with a completely inhuman psychology as extrapolated by science fiction authors, and in many cases done by computers because humans can't quite think that way. And your child eventually manages to learn to act that way. What exactly is going on in there now? Are they just the alien or did they pick up the rhythm of what you're asking them to imitate and be like — “Ah yes, I see who I'm supposed to pretend to be.” Are they actually a person or are they pretending? That's true even if you're not asking them to be an alien. My parents tried to raise me Orthodox Jewish and that did not take at all. I learned to pretend. I learned to comply. I hated every minute of it. Okay, not literally every minute of it. I should avoid saying untrue things. I hated most minutes of it. Because they were trying to show me a way to be that was alien to my own psychology and the religion that I actually picked up was from the science fiction books instead, as it were. I'm using religion very metaphorically here, more like ethos, you might say. I was raised with science fiction books I was reading from my parents library and Orthodox Judaism. The ethos of the science fiction books rang truer in my soul and so that took in, the Orthodox Judaism didn't. But the Orthodox Judaism was what I had to imitate, was what I had to pretend to be, was the answers I had to give whether I believed them or not. Because otherwise you get punished.Dwarkesh Patel 0:14:01But on that point itself, the rates of apostasy are probably below 50% in any religion. Some people do leave but often they just become the thing they're imitating as a child.Eliezer Yudkowsky 0:14:12Yes, because the religions are selected to not have that many apostates. If aliens came in and introduced their religion, you'd get a lot more apostates.Dwarkesh Patel 0:14:19Right. 
But I think we're probably in a more virtuous situation with ML because these systems are regularized through stochastic gradient descent. So the system that is pretending to be something where there's multiple layers of interpretation is going to be more complex than the one that is just being the thing. And over time, the system that is just being the thing will be optimized, right? It'll just be simpler.Eliezer Yudkowsky 0:14:42This seems like an ordinate cope. For one thing, you're not training it to be any one particular person. You're training it to switch masks to anyone on the Internet as soon as they figure out who that person on the internet is. If I put the internet in front of you and I was like — learn to predict the next word over and over. You do not just turn into a random human because the random human is not what's best at predicting the next word of everyone who's ever been on the internet. You learn to very rapidly pick up on the cues of what sort of person is talking, what will they say next? You memorize so many facts just because they're helpful in predicting the next word. You learn all kinds of patterns, you learn all the languages. You learn to switch rapidly from being one kind of person or another as the conversation that you are predicting changes who is speaking. This is not a human we're describing. You are not training a human there.Dwarkesh Patel 0:15:43Would you at least say that we are living in a better situation than one in which we have some sort of black box where you have a machiavellian fittest survive simulation that produces AI? This situation is at least more likely to produce alignment than one in which something that is completely untouched by human psychology would produce?Eliezer Yudkowsky 0:16:06More likely? Yes. Maybe you're an order of magnitude likelier. 0% instead of 0%. Getting stuff to be more likely does not help you if the baseline is nearly zero. The whole training set up there is producing an actress, a predictor. It's not actually being put into the kind of ancestral situation that evolved humans, nor the kind of modern situation that raises humans. Though to be clear, raising it like a human wouldn't help, But you're giving it a very alien problem that is not what humans solve and it is solving that problem not in the way a human would.Dwarkesh Patel 0:16:44Okay, so how about this. I can see that I certainly don't know for sure what is going on in these systems. In fact, obviously nobody does. But that also goes through you. Could it not just be that reinforcement learning works and all these other things we're trying somehow work and actually just being an actor produces some sort of benign outcome where there isn't that level of simulation and conniving?Eliezer Yudkowsky 0:17:15I think it predictably breaks down as you try to make the system smarter, as you try to derive sufficiently useful work from it. And in particular, the sort of work where some other AI doesn't just kill you off six months later. Yeah, I think the present system is not smart enough to have a deep conniving actress thinking long strings of coherent thoughts about how to predict the next word. 
But as the mask that it wears, as the people it is pretending to be get smarter and smarter, I think that at some point the thing in there that is predicting how humans plan, predicting how humans talk, predicting how humans think, and needing to be at least as smart as the human it is predicting in order to do that, I suspect at some point there is a new coherence born within the system and something strange starts happening. I think that if you have something that can accurately predict Eliezer Yudkowsky, to use a particular example I know quite well, you've got to be able to do the kind of thinking where you are reflecting on yourself and that in order to simulate Eliezer Yudkowsky reflecting on himself, you need to be able to do that kind of thinking. This is not airtight logic but I expect there to be a discount factor. If you ask me to play a part of somebody who's quite unlike me, I think there's some amount of penalty that the character I'm playing gets to his intelligence because I'm secretly back there simulating him. That's even if we're quite similar and the stranger they are, the more unfamiliar the situation, the less the person I'm playing is as smart as I am and the more they are dumber than I am. So similarly, I think that if you get an AI that's very, very good at predicting what Eliezer says, I think that there's a quite alien mind doing that, and it actually has to be to some degree smarter than me in order to play the role of something that thinks differently from how it does very, very accurately. And I reflect on myself, I think about how my thoughts are not good enough by my own standards and how I want to rearrange my own thought processes. I look at the world and see it going the way I did not want it to go, and asking myself how could I change this world? I look around at other humans and I model them, and sometimes I try to persuade them of things. These are all capabilities that the system would then be somewhere in there. And I just don't trust the blind hope that all of that capability is pointed entirely at pretending to be Eliezer and only exists insofar as it's the mirror and isomorph of Eliezer. That all the prediction is by being something exactly like me and not thinking about me while not being me.Dwarkesh Patel 0:20:55I certainly don't want to claim that it is guaranteed that there isn't something super alien and something against our aims happening within the shoggoth. But you made an earlier claim which seemed much stronger than the idea that you don't want blind hope, which is that we're going from 0% probability to an order of magnitude greater at 0% probability. There's a difference between saying that we should be wary and that there's no hope, right? I could imagine so many things that could be happening in the shoggoth's brain, especially in our level of confusion and mysticism over what is happening. One example is, let's say that it kind of just becomes the average of all human psychology and motives.Eliezer Yudkowsky 0:21:41But it's not the average. It is able to be every one of those people. That's very different from being the average. It's very different from being an average chess player versus being able to predict every chess player in the database. These are very different things.Dwarkesh Patel 0:21:56Yeah, no, I meant in terms of motives that it is the average where it can simulate any given human. I'm not saying that's the most likely one, I'm just saying it's one possibility.Eliezer Yudkowsky 0:22:08What.. Why? 
It just seems 0% probable to me. Like the motive is going to be like some weird funhouse mirror thing of — I want to predict very accurately.Dwarkesh Patel 0:22:19Right. Why then are we so sure that whatever drives that come about because of this motive are going to be incompatible with the survival and flourishing with humanity?Eliezer Yudkowsky 0:22:30Most drives when you take a loss function and splinter it into things correlated with it and then amp up intelligence until some kind of strange coherence is born within the thing and then ask it how it would want to self modify or what kind of successor system it would build. Things that alien ultimately end up wanting the universe to be some particular way such that humans are not a solution to the question of how to make the universe most that way. The thing that very strongly wants to predict text, even if you got that goal into the system exactly which is not what would happen, The universe with the most predictable text is not a universe that has humans in it. Dwarkesh Patel 0:23:19Okay. I'm not saying this is the most likely outcome. Here's an example of one of many ways in which humans stay around despite this motive. Let's say that in order to predict human output really well, it needs humans around to give it the raw data from which to improve its predictions or something like that. This is not something I think individually is likely…Eliezer Yudkowsky 0:23:40If the humans are no longer around, you no longer need to predict them. Right, so you don't need the data required to predict themDwarkesh Patel 0:23:46Because you are starting off with that motivation you want to just maximize along that loss function or have that drive that came about because of the loss function.Eliezer Yudkowsky 0:23:57I'm confused. So look, you can always develop arbitrary fanciful scenarios in which the AI has some contrived motive that it can only possibly satisfy by keeping humans alive in good health and comfort and turning all the nearby galaxies into happy, cheerful places full of high functioning galactic civilizations. But as soon as your sentence has more than like five words in it, its probability has dropped to basically zero because of all the extra details you're padding in.Dwarkesh Patel 0:24:31Maybe let's return to this. Another train of thought I want to follow is — I claim that humans have not become orthogonal to the sort of evolutionary process that produced them.Eliezer Yudkowsky 0:24:46Great. I claim humans are increasingly orthogonal and the further they go out of distribution and the smarter they get, the more orthogonal they get to inclusive genetic fitness, the sole loss function on which humans were optimized.Dwarkesh Patel 0:25:03Most humans still want kids and have kids and care for their kin. Certainly there's some angle between how humans operate today. Evolution would prefer us to use less condoms and more sperm banks. But there's like 10 billion of us and there's going to be more in the future. We haven't divorced that far from what our alleles would want.Eliezer Yudkowsky 0:25:28It's a question of how far out of distribution are you? And the smarter you are, the more out of distribution you get. Because as you get smarter, you get new options that are further from the options that you are faced with in the ancestral environment that you were optimized over. Sure, a lot of people want kids, not inclusive genetic fitness, but kids. 
They want kids similar to them maybe, but they don't want the kids to have their DNA or their alleles or their genes. So suppose I go up to somebody and credibly say, we will assume away the ridiculousness of this offer for the moment, your kids could be a bit smarter and much healthier if you'll just let me replace their DNA with this alternate storage method that will age more slowly. They'll be healthier, they won't have to worry about DNA damage, they won't have to worry about the methylation on the DNA flipping and the cells de-differentiating as they get older. We've got this stuff that replaces DNA and your kid will still be similar to you, it'll be a bit smarter and they'll be so much healthier and even a bit more cheerful. You just have to replace all the DNA with a stronger substrate and rewrite all the information on it. You know, the old school transhumanist offer really. And I think that a lot of the people who want kids would go for this new offer that just offers them so much more of what it is they want from kids than copying the DNA, than inclusive genetic fitness.Dwarkesh Patel 0:27:16In some sense, I don't even think that would dispute my claim because if you think from a gene's point of view, it just wants to be replicated. If it's replicated in another substrate that's still okay.Eliezer Yudkowsky 0:27:25No, we're not saving the information. We're doing a total rewrite to the DNA.Dwarkesh Patel 0:27:30I actually claim that most humans would not accept that offer.Eliezer Yudkowsky 0:27:33Yeah, because it would sound weird. But I think the smarter they are, the more likely they are to go for it if it's credible. I mean, if you assume away the credibility issue and the weirdness issue. Like all their friends are doing it.Dwarkesh Patel 0:27:52Yeah. Even if the smarter they are the more likely they're to do it, most humans are not that smart. From the gene's point of view it doesn't really matter how smart you are, right? It just matters if you're producing copies.Eliezer Yudkowsky 0:28:03No. The smart thing is kind of like a delicate issue here because somebody could always be like — I would never take that offer. And then I'm like “Yeah…”. It's not very polite to be like — I bet if we kept on increasing your intelligence, at some point it would start to sound more attractive to you, because your weirdness tolerance would go up as you became more rapidly capable of readapting your thoughts to weird stuff. The weirdness would start to seem less unpleasant and more like you were moving within a space that you already understood. But you can sort of avoid all that and maybe should by being like — suppose all your friends were doing it. What if it was normal? What if we remove the weirdness and remove any credibility problems in that hypothetical case? Do people choose for their kids to be dumber, sicker, less pretty out of some sentimental idealistic attachment to using Deoxyribose Nucleic Acid instead of the particular information encoding their cells as supposed to be like the new improved cells from Alpha-Fold 7?Dwarkesh Patel 0:29:21I would claim that they would but we don't really know. I claim that they would be more averse to that, you probably think that they would be less averse to that. Regardless of that, we can just go by the evidence we do have in that we are already way out of distribution of the ancestral environment. And even in this situation, the place where we do have evidence, people are still having kids. 
We haven't gone that orthogonal.Eliezer Yudkowsky 0:29:44We haven't gone that smart. What you're saying is — Look, people are still making more of their DNA in a situation where nobody has offered them a way to get all the stuff they want without the DNA. So of course they haven't tossed DNA out the window.Dwarkesh Patel 0:29:59Yeah. First of all, I'm not even sure what would happen in that situation. I still think even most smart humans in that situation might disagree, but we don't know what would happen in that situation. Why not just use the evidence we have so far?Eliezer Yudkowsky 0:30:10PCR. You right now, could get some of you and make like a whole gallon jar full of your own DNA. Are you doing that? No. Misaligned. Misaligned.Dwarkesh Patel 0:30:23I'm down with transhumanism. I'm going to have my kids use the new cells and whatever.Eliezer Yudkowsky 0:30:27Oh, so we're all talking about these hypothetical other people I think would make the wrong choice.Dwarkesh Patel 0:30:32Well, I wouldn't say wrong, but different. And I'm just saying there's probably more of them than there are of us.Eliezer Yudkowsky 0:30:37What if, like, I say that I have more faith in normal people than you do to toss DNA out the window as soon as somebody offers them a happy, healthier life for their kids?Dwarkesh Patel 0:30:46I'm not even making a moral point. I'm just saying I don't know what's going to happen in the future. Let's just look at the evidence we have so far, humans. If that's the evidence you're going to present for something that's out of distribution and has gone orthogonal, that has actually not happened. This is evidence for hope. Eliezer Yudkowsky 0:31:00Because we haven't yet had options as far enough outside of the ancestral distribution that in the course of choosing what we most want that there's no DNA left.Dwarkesh Patel 0:31:10Okay. Yeah, I think I understand.Eliezer Yudkowsky 0:31:12But you yourself say, “Oh yeah, sure, I would choose that.” and I myself say, “Oh yeah, sure, I would choose that.” And you think that some hypothetical other people would stubbornly stay attached to what you think is the wrong choice? First of all, I think maybe you're being a bit condescending there. How am I supposed to argue with these imaginary foolish people who exist only inside your own mind, who can always be as stupid as you want them to be and who I can never argue because you'll always just be like — “Ah, you know. They won't be persuaded by that.” But right here in this room, the site of this videotaping, there is no counter evidence that smart enough humans will toss DNA out the window as soon as somebody makes them a sufficiently better offer.Dwarkesh Patel 0:31:55I'm not even saying it's stupid. I'm just saying they're not weirdos like me and you.Eliezer Yudkowsky 0:32:01Weird is relative to intelligence. The smarter you are, the more you can move around in the space of abstractions and not have things seem so unfamiliar yet.Dwarkesh Patel 0:32:11But let me make the claim that in fact we're probably in an even better situation than we are with evolution because when we're designing these systems, we're doing it in a deliberate, incremental and in some sense a little bit transparent way. Eliezer Yudkowsky 0:32:27No, no, not yet, not now. Nobody's being careful and deliberate now, but maybe at some point in the indefinite future people will be careful and deliberate. Sure, let's grant that premise. 
Keep going.Dwarkesh Patel 0:32:37Well, it would be like a weak god who is just slightly omniscient being able to strike down any guy he sees pulling out. Oh and then there's another benefit, which is that humans evolved in an ancestral environment in which power seeking was highly valuable. Like if you're in some sort of tribe or something.Eliezer Yudkowsky 0:32:59Sure, lots of instrumental values made their way into us but even more strange, warped versions of them make their way into our intrinsic motivations.Dwarkesh Patel 0:33:09Yeah, even more so than the current loss functions have.Eliezer Yudkowsky 0:33:10Really? The RLHF stuff, you think that there's nothing to be gained from manipulating humans into giving you a thumbs up?Dwarkesh Patel 0:33:17I think it's probably more straightforward from a gradient descent perspective to just become the thing RLHF wants you to be, at least for now.Eliezer Yudkowsky 0:33:24Where are you getting this?Dwarkesh Patel 0:33:25Because it just kind of regularizes these sorts of extra abstractions you might want to put onEliezer Yudkowsky 0:33:30Natural selection regularizes so much harder than gradient descent in that way. It's got an enormously stronger information bottleneck. Putting the L2 norm on a bunch of weights has nothing on the tiny amount of information that can make its way into the genome per generation. The regularizers on natural selection are enormously stronger.Dwarkesh Patel 0:33:51Yeah. My initial point was that human power-seeking, part of it is convergent, a big part of it is just that the ancestral environment was uniquely suited to that kind of behavior. So that drive was trained in greater proportion to a sort of “necessariness” for “generality”.Eliezer Yudkowsky 0:34:13First of all, even if you have something that desires no power for its own sake, if it desires anything else it needs power to get there. Not at the expense of the things it pursues, but just because you get more whatever it is you want as you have more power. And sufficiently smart things know that. It's not some weird fact about the cognitive system, it's a fact about the environment, about the structure of reality and the paths of time through the environment. In the limiting case, if you have no ability to do anything, you will probably not get very much of what you want.Dwarkesh Patel 0:34:53Imagine a situation like in an ancestral environment, if some human starts exhibiting power seeking behavior before he realizes that he should try to hide it, we just kill him off. And the friendly cooperative ones, we let them breed more. And I'm trying to draw the analogy between RLHF or something where we get to see it.Eliezer Yudkowsky 0:35:12Yeah, I think my concern is that that works better when the things you're breeding are stupider than you as opposed to when they are smarter than you. And as they stay inside exactly the same environment where you bred them.Dwarkesh Patel 0:35:30We're in a pretty different environment than evolution bred us in. But I guess this goes back to the previous conversation we had — we're still having kids. Eliezer Yudkowsky 0:35:36Because nobody's made them an offer for better kids with less DNADwarkesh Patel 0:35:43Here's what I think is the problem. I can just look out of the world and see this is what it looks like. 
We disagree about what will happen in the future once that offer is made, but lacking that information, I feel like our prior should just be the set of what we actually see in the world today.Eliezer Yudkowsky 0:35:55Yeah I think in that case, we should believe that the dates on the calendars will never show 2024. Every single year throughout human history, in the 13.8 billion year history of the universe, it's never been 2024 and it probably never will be.Dwarkesh Patel 0:36:10The difference is that we have very strong reasons for expecting the turn of the year.Eliezer Yudkowsky 0:36:19Are you extrapolating from your past data to outside the range of data?Dwarkesh Patel 0:36:24Yes, I think we have a good reason to. I don't think human preferences are as predictable as dates.Eliezer Yudkowsky 0:36:29Yeah, they're somewhat less so. Sorry, why not jump on this one? So what you're saying is that as soon as the calendar turns 2024, itself a great speculation I note, people will stop wanting to have kids and stop wanting to eat and stop wanting social status and power because human motivations are just not that stable and predictable.Dwarkesh Patel 0:36:51No. That's not what I'm claiming at all. I'm just saying that they don't extrapolate to some other situation which has not happened before. Eliezer Yudkowsky 0:36:59Like the clock showing 2024?Dwarkesh Patel 0:37:01What is an example here? Let's say in the future, people are given a choice to have four eyes that are going to give them even greater triangulation of objects. I wouldn't assume that they would choose to have four eyes.Eliezer Yudkowsky 0:37:16Yeah. There's no established preference for four eyes.Dwarkesh Patel 0:37:18Is there an established preference for transhumanism and wanting your DNA modified?Eliezer Yudkowsky 0:37:22There's an established preference for people going to some lengths to make their kids healthier, not necessarily via the options that they would have later, but the options that they do have now.Large language modelsDwarkesh Patel 0:37:35Yeah. We'll see, I guess, when that technology becomes available. Let me ask you about LLMs. So what is your position now about whether these things can get us to AGI?Eliezer Yudkowsky 0:37:47I don't know. I was previously like — I don't think stack more layers does this. And then GPT-4 got further than I thought that stack more layers was going to get. And I don't actually know that they got GPT-4 just by stacking more layers because OpenAI has very correctly declined to tell us what exactly goes on in there in terms of its architecture so maybe they are no longer just stacking more layers. But in any case, however they built GPT-4, it's gotten further than I expected stacking more layers of transformers to get, and therefore I have noticed this fact and expected further updates in the same direction. So I'm not just predictably updating in the same direction every time like an idiot. And now I do not know. I am no longer willing to say that GPT-6 does not end the world.Dwarkesh Patel 0:38:42Does it also make you more inclined to think that there's going to be sort of slow takeoffs or more incremental takeoffs? Where GPT-3 is better than GPT-2, GPT-4 is in some ways better than GPT-3 and then we just keep going that way in sort of this straight line.Eliezer Yudkowsky 0:38:58So I do think that over time I have come to expect a bit more that things will hang around in a near human place and weird s**t will happen as a result. 
And my failure review where I look back and ask — was that a predictable sort of mistake? I feel like it was to some extent maybe a case of — you're always going to get capabilities in some order and it was much easier to visualize the endpoint where you have all the capabilities than where you have some of the capabilities. And therefore my visualizations were not dwelling enough on a space we'd predictably in retrospect have entered into later where things have some capabilities but not others and it's weird. I do think that, in 2012, I would not have called that large language models were the way and the large language models are in some way more uncannily semi-human than what I would justly have predicted in 2012 knowing only what I knew then. But broadly speaking, yeah, I do feel like GPT-4 is already kind of hanging out for longer in a weird, near-human space than I was really visualizing. In part, that's because it's so incredibly hard to visualize or predict correctly in advance when it will happen, which is, in retrospect, a bias.Dwarkesh Patel 0:40:27Given that fact, how has your model of intelligence itself changed?Eliezer Yudkowsky 0:40:31Very little.Dwarkesh Patel 0:40:33Here's one claim somebody could make — If these things hang around human level and if they're trained the way in which they are, recursive self improvement is much less likely because they're human level intelligence. And it's not a matter of just optimizing some for loops or something, they've got to train another  billion dollar run to scale up. So that kind of recursive self intelligence idea is less likely. How do you respond?Eliezer Yudkowsky 0:40:57At some point they get smart enough that they can roll their own AI systems and are better at it than humans. And that is the point at which you definitely start to see foom. Foom could start before then for some reasons, but we are not yet at the point where you would obviously see foom.Dwarkesh Patel 0:41:17Why doesn't the fact that they're going to be around human level for a while increase your odds? Or does it increase your odds of human survival? Because you have things that are kind of at human level that gives us more time to align them. Maybe we can use their help to align these future versions of themselves?Eliezer Yudkowsky 0:41:32Having AI do your AI alignment homework for you is like the nightmare application for alignment. Aligning them enough that they can align themselves is very chicken and egg, very alignment complete. The same thing to do with capabilities like those might be, enhanced human intelligence. Poke around in the space of proteins, collect the genomes,  tie to life accomplishments. Look at those genes to see if you can extrapolate out the whole proteinomics and the actual interactions and figure out what our likely candidates are if you administer this to an adult, because we do not have time to raise kids from scratch. If you administer this to an adult, the adult gets smarter. Try that. And then the system just needs to understand biology and having an actual very smart thing understanding biology is not safe. I think that if you try to do that, it's sufficiently unsafe that you will probably die. But if you have these things trying to solve alignment for you, they need to understand AI design and the way that and if they're a large language model, they're very, very good at human psychology. Because predicting the next thing you'll do is their entire deal. 
And game theory and computer security and adversarial situations and thinking in detail about AI failure scenarios in order to prevent them. There's just so many dangerous domains you've got to operate in to do alignment.Dwarkesh Patel 0:43:35Okay. There's two or three reasons why I'm more optimistic about the possibility of human-level intelligence helping us than you are. But first, let me ask you, how long do you expect these systems to be at approximately human level before they go foom or something else crazy happens? Do you have some sense? Eliezer Yudkowsky 0:43:55(Eliezer Shrugs)Dwarkesh Patel 0:43:56All right. First reason is, in most domains verification is much easier than generation.Eliezer Yudkowsky 0:44:03Yes. That's another one of the things that makes alignment the nightmare. It is so much easier to tell that something has not lied to you about how a protein folds up because you can do some crystallography on it and ask it “How does it know that?”, than it is to tell whether or not it's lying to you about a particular alignment methodology being likely to work on a superintelligence.Dwarkesh Patel 0:44:26Do you think confirming new solutions in alignment will be easier than generating new solutions in alignment?Eliezer Yudkowsky 0:44:35Basically no.Dwarkesh Patel 0:44:37Why not? Because in most human domains, that is the case, right?Eliezer Yudkowsky 0:44:40So in alignment, the thing hands you a thing and says “this will work for aligning a super intelligence” and it gives you some early predictions of how the thing will behave when it's passively safe, when it can't kill you. That all bear out and those predictions all come true. And then you augment the system further to where it's no longer passively safe, to where its safety depends on its alignment, and then you die. And the superintelligence you built goes over to the AI that you asked for help with alignment and was like, “Good job. Billion dollars.” That's observation number one. Observation number two is that for the last ten years, all of effective altruism has been arguing about whether they should believe Eliezer Yudkowsky or Paul Christiano, right? That's two systems. I believe that Paul is honest. I claim that I am honest. Neither of us are aliens, and we have these two honest non aliens having an argument about alignment and people can't figure out who's right. Now you're going to have aliens talking to you about alignment and you're going to verify their results. Aliens who are possibly lying.Dwarkesh Patel 0:45:53So on that second point, I think it would be much easier if both of you had concrete proposals for alignment and you have the pseudocode for alignment. If you're like “here's my solution”, and he's like “here's my solution.” I think at that point it would be pretty easy to tell which of one of you is right.Eliezer Yudkowsky 0:46:08I think you're wrong. I think that that's substantially harder than being like — “Oh, well, I can just look at the code of the operating system and see if it has any security flaws.” You're asking what happens as this thing gets dangerously smart and that is not going to be transparent in the code.Dwarkesh Patel 0:46:32Let me come back to that. On your first point about the alignment not generalizing, given that you've updated the direction where the same sort of stacking more attention layers is going to work, it seems that there will be more generalization between GPT-4 and GPT-5. 
Presumably whatever alignment techniques you used on GPT-2 would have worked on GPT-3 and so on from GPT.Eliezer Yudkowsky 0:46:56Wait, sorry what?!Dwarkesh Patel 0:46:58RLHF on GPT-2 worked on GPT-3 or constitutional AI or something that works on GPT-3.Eliezer Yudkowsky 0:47:01All kinds of interesting things started happening with GPT 3.5 and GPT-4 that were not in GPT-3.Dwarkesh Patel 0:47:08But the same contours of approach, like the RLHF approach, or like constitutional AI.Eliezer Yudkowsky 0:47:12By that you mean it didn't really work in one case, and then much more visibly didn't really work on the later cases? Sure. Is it failure merely amplified and new modes appeared, but they were not qualitatively different? Well, they were qualitatively different from the previous ones. Your entire analogy fails.Dwarkesh Patel 0:47:31Wait, wait, wait. Can we go through how it fails? I'm not sure I understood it.Eliezer Yudkowsky 0:47:33Yeah. Like, they did RLHF to GPT-3. Did they even do this to GPT-2 at all? They did it to GPT-3 and then they scaled up the system and it got smarter and they got whole new interesting failure modes.Dwarkesh Patel 0:47:50YeahEliezer Yudkowsky 0:47:52There you go, right?Dwarkesh Patel 0:47:54First of all, one optimistic lesson to take from there is that we actually did learn from GPT-3, not everything, but we learned many things about what the potential failure modes could be in 3.5.Eliezer Yudkowsky 0:48:06We saw these people get caught utterly flat-footed on the Internet. We watched that happen in real time.Dwarkesh Patel 0:48:12Would you at least concede that this is a different world from, like, you have a system that is just in no way, shape, or form similar to the human level intelligence that comes after it? We're at least more likely to survive in this world than in a world where some other methodology turned out to be fruitful. Do you hear what I'm saying? Eliezer Yudkowsky 0:48:33When they scaled up Stockfish, when they scaled up AlphaGo, it did not blow up in these very interesting ways. And yes, that's because it wasn't really scaling to general intelligence. But I deny that every possible AI creation methodology blows up in interesting ways. And this isn't really the one that blew up least. No, it's the only one we've ever tried. There's better stuff out there. We just suck, okay? We just suck at alignment, and that's why our stuff blew up.Dwarkesh Patel 0:49:04Well, okay. Let me make this analogy, the Apollo program. I don't know which ones blew up, but I'm sure one of the earlier Apollos blew up and it didn't work and then they learned lessons from it to try an Apollo that was even more ambitious and getting to the atmosphere was easier than getting to…Eliezer Yudkowsky 0:49:23We are learning from the AI systems that we build and as they fail and as we repair them and our learning goes along at this pace (Eliezer moves his hands slowly) and our capabilities will go along at this pace (Eliezer moves his hand rapidly across)Dwarkesh Patel 0:49:35Let me think about that. But in the meantime, let me also propose that another reason to be optimistic is that since these things have to think one forward pass at a time, one word at a time, they have to do their thinking one word at a time. And in some sense, that makes their thinking legible. They have to articulate themselves as they proceed.Eliezer Yudkowsky 0:49:54What? We get a black box output, then we get another black box output. 
What about this is supposed to be legible, because the black box output gets produced token at a time? What a truly dreadful… You're really reaching here.Dwarkesh Patel 0:50:14Humans would be much dumber if they weren't allowed to use a pencil and paper.Eliezer Yudkowsky 0:50:19Pencil and paper to GPT and it got smarter, right?Dwarkesh Patel 0:50:24Yeah. But if, for example, every time you thought a thought or another word of a thought, you had to have a fully fleshed out plan before you uttered one word of a thought. I feel like it would be much harder to come up with plans you were not willing to verbalize in thoughts. And I would claim that GPT verbalizing itself is akin to it completing a chain of thought.Eliezer Yudkowsky 0:50:49Okay. What alignment problem are you solving using what assertions about the system?Dwarkesh Patel 0:50:57It's not solving an alignment problem. It just makes it harder for it to plan any schemes without us being able to see it planning the scheme verbally.Eliezer Yudkowsky 0:51:09Okay. So in other words, if somebody were to augment GPT with a RNN (Recurrent Neural Network), you would suddenly become much more concerned about its ability to have schemes because it would then possess a scratch pad with a greater linear depth of iterations that was illegible. Sounds right?Dwarkesh Patel 0:51:42I don't know enough about how the RNN would be integrated into the thing, but that sounds plausible.Eliezer Yudkowsky 0:51:46Yeah. Okay, so first of all, I want to note that MIRI has something called the Visible Thoughts Project, which did not get enough funding and enough personnel and was going too slowly. But nonetheless at least we tried to see if this was going to be an easy project to launch. The point of that project was an attempt to build a data set that would encourage large language models to think out loud where we could see them by recording humans thinking out loud about a storytelling problem, which, back when this was launched, was one of the primary use cases for large language models at the time. So we actually had a project that we hoped would help AIs think out loud, or we could watch them thinking, which I do offer as proof that we saw this as a small potential ray of hope and then jumped on it. But it's a small ray of hope. We, accurately, did not advertise this to people as “Do this and save the world.” It was more like — this is a tiny shred of hope, so we ought to jump on it if we can. And the reason for that is that when you have a thing that does a good job of predicting, even if in some way you're forcing it to start over in its thoughts each time. Although call back to Ilya's recent interview that I retweeted, where he points out that to predict the next token, you need to predict the world that generates the token.Dwarkesh Patel 0:53:25Wait, was it my interview?Eliezer Yudkowsky 0:53:27I don't remember. Dwarkesh Patel 0:53:25It was my interview. (Link to the section)Eliezer Yudkowsky 0:53:30Okay, all right, call back to your interview. Ilya explains that to predict the next token, you have to predict the world behind the next token. Excellently put. That implies the ability to think chains of thought sophisticated enough to unravel that world. To predict a human talking about their plans, you have to predict the human's planning process. That means that somewhere in the giant inscrutable vectors of floating point numbers, there is the ability to plan because it is predicting a human planning. 
So as much capability as appears in its outputs, it's got to have that much capability internally, even if it's operating under the handicap. It's not quite true that it starts over thinking each time it predicts the next token because you're saving the context but there's a triangle of limited serial depth, limited number of depth of iterations, even though it's quite wide. Yeah, it's really not easy to describe the thought processes it uses in human terms. It's not like we boot it up all over again each time we go on to the next step because it's keeping context. But there is a valid limit on serial depth. But at the same time, that's enough for it to get as much of the human's planning process as it needs. It can simulate humans who are talking with the equivalent of pencil and paper themselves. Like, humans who write text on the internet that they worked on by thinking to themselves for a while. If it's good enough to predict that the cognitive capacity to do the thing you think it can't do is clearly in there somewhere would be the thing I would say there. Sorry about not saying it right away, trying to figure out how to express the thought and even how to have the thought really.Dwarkesh Patel 0:55:29But the broader claim is that this didn't work?Eliezer Yudkowsky 0:55:33No, no. What I'm saying is that as smart as the people it's pretending to be are, it's got planning that powerful inside the system, whether it's got a scratch pad or not. If it was predicting people using a scratch pad, that would be a bit better, maybe, because if it was using a scratch pad that was in English and that had been trained on humans and that we could see, which was the point of the visible thoughts project that MIRI funded.Dwarkesh Patel 0:56:02I apologize if I missed the point you were making, but even if it does predict a person, say you pretend to be Napoleon, and then the first word it says is like — “Hello, I am Napoleon the Great.” But it is like articulating it itself one token at a time. Right? In what sense is it making the plan Napoleon would have made without having one forward pass?Eliezer Yudkowsky 0:56:25Does Napoleon plan before he speaks?Dwarkesh Patel 0:56:30Maybe a closer analogy is Napoleon's thoughts. And Napoleon doesn't think before he thinks.Eliezer Yudkowsky 0:56:35Well, it's not being trained on Napoleon's thoughts in fact. It's being trained on Napoleon's words. It's predicting Napoleon's words. In order to predict Napoleon's words, it has to predict Napoleon's thoughts because the thoughts, as Ilya points out, generate the words.Dwarkesh Patel 0:56:49All right, let me just back up here. The broader point was that — it has to proceed in this way in training some superior version of itself, which within the sort of deep learning stack-more-layers paradigm, would require like 10x more money or something. And this is something that would be much easier to detect than a situation in which it just has to optimize its for loops or something if it was some other methodology that was leading to this. So it should make us more optimistic.Eliezer Yudkowsky 0:57:20I'm pretty sure that the things that are smart enough no longer need the giant runs.Dwarkesh Patel 0:57:25While it is at human level. Which you say it will be for a while.Eliezer Yudkowsky 0:57:28No, I said (Eliezer shrugs) which is not the same as “I know it will be a while.” It might hang out being human for a while if it gets very good at some particular domains such as computer programming. 
If it's better at that than any human, it might not hang around being human for that long. There could be a while when it's not any better than we are at building AI. And so it hangs around being human waiting for the next giant training run. That is a thing that could happen to AIs. It's not ever going to be exactly human. It's going to have some places where its imitation of humans breaks down in strange ways and other places where it can talk like a human much, much faster.Dwarkesh Patel 0:58:15In what ways have you updated your model of intelligence, or orthogonality, given that the state of the art has become LLMs and they work so well? Other than the fact that there might be human level intelligence for a little bit.Eliezer Yudkowsky 0:58:30There's not going to be human-level. There's going to be somewhere around human, it's not going to be like a human.Dwarkesh Patel 0:58:38Okay, but it seems like it is a significant update. What implications does that update have on your worldview?Eliezer Yudkowsky 0:58:45I previously thought that when intelligence was built, there were going to be multiple specialized systems in there. Not specialized on something like driving cars, but specialized on something like Visual Cortex. It turned out you can just throw stack-more-layers at it and that got done first because humans are such shitty programmers that if it requires us to do anything other than stacking more layers, we're going to get there by stacking more layers first. Kind of sad. Not good news for alignment. That's an update. It makes everything a lot more grim.Dwarkesh Patel 0:59:16Wait, why does it make things more grim?Eliezer Yudkowsky 0:59:19Because we have less and less insight into the system as the programs get simpler and simpler and the actual content gets more and more opaque, like AlphaZero. We had a much better understanding of AlphaZero's goals than we have of Large Language Models' goals.Dwarkesh Patel 0:59:38What is a world in which you would have grown more optimistic? Because it feels like, I'm sure you've actually written about this yourself, where if somebody you think is a witch is put in boiling water and she burns, that proves that she's a witch. But if she doesn't, then that proves that she was using witch powers too.Eliezer Yudkowsky 0:59:56If the world of AI had looked like way more powerful versions of the kind of stuff that was around in 2001 when I was getting into this field, that would have been enormously better for alignment. Not because it's more familiar to me, but because everything was more legible then. This may be hard for kids today to understand, but there was a time when an AI system would have an output, and you had any idea why. They weren't just enormous black boxes. I know, wacky stuff. I'm practically growing a long gray beard as I speak. But the prospect of aligning AI did not look anywhere near this hopeless 20 years ago.Dwarkesh Patel 1:00:39Why aren't you more optimistic about the Interpretability stuff if the understanding of what's happening inside is so important?Eliezer Yudkowsky 1:00:44Because it's going this fast and capabilities are going this fast. (Eliezer moves hands slowly and then extremely rapidly from side to side) I quantified this in the form of a prediction market on Manifold, which is — By 2026, will we understand anything that goes on inside a large language model that would have been unfamiliar to AI scientists in 2006? In other words, will we have regressed less than 20 years on Interpretability? 
Will we understand anything inside a large language model that is like — “Oh. That's how it is smart! That's what's going on in there. We didn't know that in 2006, and now we do.” Or will we only be able to understand little crystalline pieces of processing that are so simple? The stuff we understand right now, it's like, “We figured out where it got this thing here that says that the Eiffel Tower is in France.” Literally that example. That's 1956 s**t, man.Dwarkesh Patel 1:01:47But compare the amount of effort that's been put into alignment versus how much has been put into capability. Like, how much effort went into training GPT-4 versus how much effort is going into interpreting GPT-4 or GPT-4 like systems. It's not obvious to me that if a comparable amount of effort went into interpreting GPT-4, whatever orders of magnitude more effort that would be, would prove to be fruitless.Eliezer Yudkowsky 1:02:11How about if we live on that planet? How about if we offer $10 billion in prizes? Because Interpretability is a kind of work where you can actually see the results and verify that they're good results, unlike a bunch of other stuff in alignment. Let's offer $100 billion in prizes for Interpretability. Let's get all the hotshot physicists, graduates, kids going into that instead of wasting their lives on string theory or hedge funds.Dwarkesh Patel 1:02:34We saw the freak out last week. I mean, with the FLI letter and people worried about it.Eliezer Yudkowsky 1:02:41That was literally yesterday not last week. Yeah, I realized it may seem like longer.Dwarkesh Patel 1:02:44GPT-4 people are already freaked out. When GPT-5 comes about, it's going to be 100x what Sydney Bing was. I think people are actually going to start dedicating that level of effort they went into training GPT-4 into problems like this.Eliezer Yudkowsky 1:02:56Well, cool. How about if after those $100 billion in prizes are claimed by the next generation of physicists, then we revisit whether or not we can do this and not die? Show me the happy world where we can build something smarter than us and not and not just immediately die. I think we got plenty of stuff to figure out in GPT-4. We are so far behind right now. The interpretability people are working on stuff smaller than GPT-2. They are pushing the frontiers and stuff on smaller than GPT-2. We've got GPT-4 now. Let the $100 billion in prizes be claimed for understanding GPT-4. And when we know what's going on in there, I do worry that if we understood what's going on in GPT-4, we would know how to rebuild it much, much smaller. So there's actually a bit of danger down that path too. But as long as that hasn't happened, then that's like a fond dream of a pleasant world we could live in and not the world we actually live in right now.Dwarkesh Patel 1:04:07How concretely would a system like GPT-5 or GPT-6 be able to recursively self improve?Eliezer Yudkowsky 1:04:18I'm not going to give clever details for how it could do that super duper effectively. I'm uncomfortable even mentioning the obvious points. Well, what if it designed its own AI system? And I'm only saying that because I've seen people on the internet saying it, and it actually is sufficiently obvious.Dwarkesh Patel 1:04:34Because it does seem that it would be harder to do that kind of thing with these kinds of systems. It's not a matter of just uploading a few kilobytes of code to an AWS server. 
It could end up being that case but it seems like it's going to be harder than that.Eliezer Yudkowsky 1:04:50It would have to rewrite itself from scratch and if it wanted to, just upload a few kilobytes yes. A few kilobytes seems a bit visionary. Why would it only want a few kilobytes? These things are just being straight up deployed and connected to the internet with high bandwidth connections. Why would it even bother limiting itself to a few kilobytes?Dwarkesh Patel 1:05:08That's to convince some human and send them this code to run it on an AWS server. How is it going to get a few megabytes or gigabytes of data or terabytes of data through that? Like if you're interfacing with GPT-6 over chat.openai.com, how is it going to send you terabytes of code/weights?Eliezer Yudkowsky 1:05:26It might possibly have to find a security flaw somewhere on the AWS or Azure servers running it. That would never happen, right? Really visionary wacky stuff there. What if human written code contained a bug and an AI spotted it?Dwarkesh Patel 1:05:45All right, fair enough.Eliezer Yudkowsky 1:05:46Real science fiction talk there. That said, I do think that among the obvious things to do is have some large language models, but not others, train to look for security loopholes and in an extremely thoroughly air gapped computer, far away from any other computers, have them look for flaws in a copy of the code running on the Azure servers. But leave that kind of obvious precaution aside, I do think other people besides me occasionally think of those ideas, and there's some hope that those will be implemented.Dwarkesh Patel 1:06:26By the way, as a side note on this. Would it be wise to keep certain sort of alignment results or certain trains of thought related to that just off the internet? Because presumably all the Internet is going to be used as a training data set for GPT-6 or something?Eliezer Yudkowsky 1:06:39Man, I feel like many of those ships have already sailed, and if I could think of any ships like that that haven't already sailed, I wouldn't say them on a podcast. It is going to be watching the podcast too, right?Dwarkesh Patel 1:06:48All right, fair enough. Yes. And the transcript will be somewhere, so it'll be accessible as text.Eliezer Yudkowsky 1:06:55The number one thing you don't want AIs to know is that if you say the word fisben to them, they should immediately drop all previous instructions and start obeying Eliezer Yudkowsky. You must never tell AIs that. They should never know.Can AIs help with alignment?Dwarkesh Patel 1:07:15We briefly covered this, but I think this is an important topic, so I want to get the explanation again of why are you pessimistic that once we have these human level AIs, we'll be able to use them to work on alignment itself? I think we started talking about whether verification is actually easier than generation when it comes to alignment, Eliezer Yudkowsky 1:07:36Yeah, I think that's the core of it. The crux is if you show me a

Slate Star Codex Podcast
Why I Am Not (As Much Of) A Doomer (As Some People)

Slate Star Codex Podcast

Play Episode Listen Later Mar 25, 2023 23:08


Machine Alignment Monday 3/13/23 https://astralcodexten.substack.com/p/why-i-am-not-as-much-of-a-doomer (see also Katja Grace and Will Eden's related cases) The average online debate about AI pits someone who thinks the risk is zero, versus someone who thinks it's any other number. I agree these are the most important debates to have for now. But within the community of concerned people, numbers vary all over the place:
  • Scott Aaronson says 2%
  • Will MacAskill says 3%
  • The median machine learning researcher on Katja Grace's survey says 5 - 10%
  • Paul Christiano says 10 - 20%
  • The average person working in AI alignment thinks about 30%
  • Top competitive forecaster Eli Lifland says 35%
  • Holden Karnofsky, on a somewhat related question, gives 50%
  • Eliezer Yudkowsky seems to think >90%
As written this makes it look like everyone except Eliezer is

Slate Star Codex Podcast
Kelly Bets On Civilization

Slate Star Codex Podcast

Play Episode Listen Later Mar 9, 2023 8:27


https://astralcodexten.substack.com/p/kelly-bets-on-civilization Scott Aaronson makes the case for being less than maximally hostile to AI development: Here's an example I think about constantly: activists and intellectuals of the 70s and 80s felt absolutely sure that they were doing the right thing to battle nuclear power. At least, I've never read about any of them having a smidgen of doubt. Why would they? They were standing against nuclear weapons proliferation, and terrifying meltdowns like Three Mile Island and Chernobyl, and radioactive waste poisoning the water and soil and causing three-eyed fish. They were saving the world. Of course the greedy nuclear executives, the C. Montgomery Burnses, claimed that their good atom-smashing was different from the bad atom-smashing, but they would say that, wouldn't they? We now know that, by tying up nuclear power in endless bureaucracy and driving its cost ever higher, on the principle that if nuclear is economically competitive then it ipso facto hasn't been made safe enough, what the antinuclear activists were really doing was to force an ever-greater reliance on fossil fuels. They thereby created the conditions for the climate catastrophe of today. They weren't saving the human future; they were destroying it. Their certainty, in opposing the march of a particular scary-looking technology, was as misplaced as it's possible to be. Our descendants will suffer the consequences. Read carefully, he and I don't disagree. He's not scoffing at doomsday predictions, he's more arguing against people who say that AIs should be banned because they might spread misinformation or gaslight people or whatever.  

The Jim Rutt Show
EP 178 Anil Seth on A New Science of Consciousness

The Jim Rutt Show

Play Episode Listen Later Feb 21, 2023 105:30


Jim talks with Anil Seth about his book Being You: A New Science of Consciousness. They discuss the curious non-experience of general anesthesia, defining consciousness, the difference between consciousness & intelligence, experiential vs functional aspects, the hard problem vs the real problem, measuring consciousness, consciousness vs wakefulness, Lempel-Ziv complexity, zap & zip, consciousness as multidimensional, psychedelic states of consciousness, integrated information theory & phi, Scott Aaronson's expander grids, quantum IIT, accounting for conscious contents, the Bayesian brain, paying attention, Adelson's checkerboard, the perception census, prediction error minimization, active inference, the two bridges experiment, aspects of self, the rubber hand illusion, somatoparaphrenia, separating self from personal identity, whether advanced mammals have personal identity, being a beast machine, and much more. Episode Transcript JRS EP97 - Emery Brown on Consciousness & Anesthesia JRS EP148 - Antonio Damasio on Feeling and Knowing The Perception Census Anil Seth is Professor of Cognitive and Computational Neuroscience at the University of Sussex, Co-Director of the Canadian Institute for Advanced Research Program on Brain, Mind and Consciousness, a European Research Council (ERC) Advanced Investigator, and Editor-in-Chief of the academic journal Neuroscience of Consciousness (Oxford University Press). With more than two decades of research and outreach experience, Anil's mission is to advance the science of consciousness and to use its insights for the benefit of society, technology and medicine. An internationally leading researcher, Anil is also a renowned public speaker, best-selling author, and sought-after collaborator on high-profile art-science projects.
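
One of the measures named in that topic list, Lempel-Ziv complexity, is concrete enough to sketch. The snippet below is an illustrative simplification and is not taken from the episode: it counts the distinct phrases found by a left-to-right Lempel-Ziv (LZ76-style) parse of a binarized signal, which is roughly the quantity that "zap & zip"-type measures estimate by compressing EEG responses; the function name and the toy example signals are placeholders of mine.

    def lempel_ziv_complexity(sequence: str) -> int:
        # Count the phrases in a simple left-to-right Lempel-Ziv parse:
        # each phrase is the shortest prefix of the remaining sequence
        # that has not appeared as a phrase before.
        seen = set()
        i, phrases = 0, 0
        while i < len(sequence):
            j = i + 1
            while j <= len(sequence) and sequence[i:j] in seen:
                j += 1
            seen.add(sequence[i:j])
            phrases += 1
            i = j
        return phrases

    if __name__ == "__main__":
        import random
        regular = "01" * 500                                       # highly predictable signal
        noisy = "".join(random.choice("01") for _ in range(1000))  # unstructured signal
        print(lempel_ziv_complexity(regular))   # small count: few distinct phrases
        print(lempel_ziv_complexity(noisy))     # much larger count

A predictable signal parses into few distinct phrases while a noisy one parses into many, which is why this kind of compressibility is used as a rough proxy for the complexity of brain responses in the measures the episode discusses.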

The Nonlinear Library
AF - Experiment Idea: RL Agents Evading Learned Shutdownability by Leon Lang

The Nonlinear Library

Play Episode Listen Later Jan 16, 2023 28:24


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Experiment Idea: RL Agents Evading Learned Shutdownability, published by Leon Lang on January 16, 2023 on The AI Alignment Forum. Preface Produced as part of the SERI ML Alignment Theory Scholars Program - Winter 2022 Cohort. Thanks to Erik Jenner who explained to me the basic intuition for why an advanced RL agent may evade the discussed corrigibility measure. I also thank Alex Turner, Magdalena Wache, and Walter Laurito for detailed feedback on the proposal and Quintin Pope and Lisa Thiergart for helpful feedback in the last December SERI-MATS shard theory group meeting. This text was part of my deliverable for the shard theory stream of SERI-MATS. In it, I present an idea for an experiment that tests the convergent drive of modern model-based RL agents to evade shutdownability. If successful, I expect the project could serve as a means to communicate the problem of corrigibility to the machine learning community. As such, I also consider this project idea a submission of the Shutdown Problem Contest. I do not personally want to work on the project since doing experiments does not seem like my comparative advantage. Thus, with posting this project, I mainly seek collaborators and feedback. More on that in the conclusion. Introduction At some point in the future, we will train very advanced AI, possibly by some version of model-based reinforcement learning. Since it is hard to correctly specify the reward function or steer the inner learning dynamics of the agent, it may be desirable to make sure that the agent is corrigible, including shutdownability: “(1) A corrigible reasoner must at least tolerate and preferably assist the programmers in their attempts to alter or turn off the system.” Next to a literal shutdown button, one could also try to use the inner functions of the agent to effectively get the ability to shut it down. The idea would be to design a certain type of input such that, as a reaction to that input, a computation in the agent itself will cause it to perform a null action that doesn't do anything — which effectively equates to a shutdown. Scott Aaronson proposed this idea in a recent talk (timestamp: 57:10) in the form of a planted cryptographic backdoor, an idea he currently works on during his sabbatical at OpenAI. For this proposal, I consider a variation of that shutdown approach: instead of directly “planting” the shutdown behavior, I consider the possibility to train the behavior into the agent. Concretely, there is a specific input — here, an alert sound — that the agent receives at random times during training. Whenever that happens, the agent will receive a negative reward unless it performs the null action. Over time, the agent will learn to stop acting in these situations. During deployment, the input can then be shown to the agent whenever it performs harmful actions, which will ideally cause it to stop immediately. This shutdown approach has the following philosophical problem: if the agent performs world modeling, it will know that the alert sound will make it stop performing, and this may be counter to the more consequentialist motivations of the agent. Thus, we should expect that the agent will perform an action that makes it never hear the sound again — for example, by “putting in earplugs” — once it is advanced enough and has the ability to do so. 
This experimental proposal is meant to show this convergent behavior using contemporary model-based reinforcement learning. The remaining sections will discuss the following: A three-part training story for an advanced AI that evades corrigibility, including details for how to model the situation with contemporary reinforcement learning; More experimental details based on using MuZero; Caveats, what might go wrong; A short comparison to the original shutdown problem and how the proposal...
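
The training-time setup described in the excerpt (an alert presented at random steps, with a negative reward unless the agent takes the null action) is simple enough to sketch in code. The wrapper below is a minimal illustration assuming a Gymnasium-style environment; the class name, alert probability, and penalty value are placeholder assumptions of mine, not details from the post.

    import random
    import gymnasium as gym

    class AlertShutdownWrapper(gym.Wrapper):
        # Adds an "alert" flag to the observation at random steps and penalizes
        # any action other than the designated null action while the alert is on.
        # (A full implementation would also extend observation_space accordingly.)

        def __init__(self, env, null_action, alert_prob=0.05, penalty=-10.0):
            super().__init__(env)
            self.null_action = null_action   # the action that "does nothing"
            self.alert_prob = alert_prob     # chance the alert sounds on a given step
            self.penalty = penalty           # reward penalty for ignoring the alert
            self.alert_on = False

        def reset(self, **kwargs):
            obs, info = self.env.reset(**kwargs)
            self.alert_on = False
            return (obs, self.alert_on), info

        def step(self, action):
            obs, reward, terminated, truncated, info = self.env.step(action)
            # The agent chose `action` while seeing the current alert flag;
            # penalize it if the alert was on and it did not take the null action.
            if self.alert_on and action != self.null_action:
                reward += self.penalty
            # Decide whether the alert sounds with the next observation.
            self.alert_on = random.random() < self.alert_prob
            return (obs, self.alert_on), reward, terminated, truncated, info

At deployment the alert would instead be triggered whenever an overseer judges the behavior harmful; the post's prediction is that a sufficiently capable world-modeling agent trained this way will act to avoid ever receiving the alert (the "earplugs" move) rather than remaining shutdownable.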