Podcast appearances and mentions of Max Tegmark

Swedish-American cosmologist

  • 199 podcasts
  • 327 episodes
  • 44m average episode duration
  • 1 episode every other week
  • Latest episode: May 31, 2025
Popularity trend: 2017-2024



Latest podcast episodes about Max Tegmark

People I (Mostly) Admire
Does Death Have to Be a Death Sentence? (Update)

May 31, 2025 · 42:22


Palliative physician B.J. Miller asks: Is there a better way to think about dying? And can death be beautiful? SOURCES: B.J. Miller, palliative-care physician and President at Mettle Health. RESOURCES: A Beginner's Guide to the End: Practical Advice for Living Life and Facing Death, by Shoshana Berger and B.J. Miller (2019). “After A Freak Accident, A Doctor Finds Insight Into ‘Living Life And Facing Death,’” by Fresh Air (W.Y.P.R., 2019). “Dying In A Hospital Means More Procedures, Tests And Costs,” by Alison Kodjak (W.Y.P.R., 2016). “The Final Year: Visualizing End Of Life,” by Arcadia (2016). “What Really Matters at the End of Life,” by B.J. Miller (TED, 2015). “The Flexner Report ― 100 Years Later,” by Thomas P. Duffy (Yale Journal of Biology and Medicine, 2011). “My Near Death Panel Experience,” by Earl Blumenauer (The New York Times, 2009). The Center for Dying and Living. EXTRAS: “Max Tegmark on Why Superhuman Artificial Intelligence Won't be Our Slave (Part 2),” by People I (Mostly) Admire (2021). “Max Tegmark on Why Treating Humanity Like a Child Will Save Us All,” by People I (Mostly) Admire (2021). “Amanda & Lily Levitt Share What It's Like to be Steve's Daughters,” by People I (Mostly) Admire (2021). “Edward Glaeser Explains Why Some Cities Thrive While Others Fade Away,” by People I (Mostly) Admire (2021). “Sendhil Mullainathan Explains How to Generate an Idea a Minute (Part 2),” by People I (Mostly) Admire (2021). “Sendhil Mullainathan Thinks Messing Around Is the Best Use of Your Time,” by People I (Mostly) Admire (2021). “How Does Facing Death Change Your Life?” by No Stupid Questions (2021). “How to Be Better at Death,” by Freakonomics Radio (2021).

The Wright Show
How to Not Lose Control of AI (Robert Wright & Max Tegmark)

May 27, 2025 · 60:00


Why Max helped organize the 2023 “Pause AI” letter ... AI as a species, not a technology ... The rate of AI progress since 2023 ... Loss of control is the biggest risk ... How to get the NatSec community's attention ... How we could lose control ... How we stay in control ... Heading to Overtime ...

Bloggingheads.tv
How to Not Lose Control of AI (Robert Wright & Max Tegmark)

May 27, 2025 · 60:00


Why Max helped organize the 2023 “Pause AI” letter ... AI as a species, not a technology ... The rate of AI progress since 2023 ... Loss of control is the biggest risk ... How to get the NatSec community's attention ... How we could lose control ... How we stay in control ... Heading to Overtime ...

Because Jitsu Podcast
#632: The Final Human Weapon

Mar 17, 2025 · 56:56


Something in 2023 led physicist Max Tegmark to declare we were creating the "Final Human Weapon". What was it and why did Elon Musk recently say there's a 1 in 5 chance of it destroying us? ------- Back the Kickstarter for Fractures in Love: Fractures Pre-Launch Page Read the first 5 Chapters of my upcoming fictional epic - Fractures: Fractures Preview Follow my Author's page on Instagram: @drew.w.author Tell me what you thought of the show! Text me at: (587)206-7006

StarTalk Radio
Cosmic Queries – Multiverse Madness with Max Tegmark

Mar 14, 2025 · 47:08


Do we live in one of many universes? On this episode of StarTalk, Neil deGrasse Tyson and comic co-host Chuck Nice investigate the theory of the multiverse with physicist, author, and professor Max Tegmark. (Originally Aired March 22, 2021) NOTE: StarTalk+ Patrons can listen to this entire episode commercial-free here: https://startalkmedia.com/show/cosmic-queries-multiverse-madness-with-max-tegmark/ Subscribe to SiriusXM Podcasts+ to listen to new episodes of StarTalk Radio ad-free and a whole week early. Start a free trial now on Apple Podcasts or by visiting siriusxm.com/podcastsplus.

Slate Star Codex Podcast
Tegmark's Mathematical Universe Defeats Most Proofs Of God's Existence

Mar 6, 2025 · 8:20


It feels like 2010 again - the bloggers are debating the proofs for the existence of God. I found these much less interesting after learning about Max Tegmark's mathematical universe hypothesis, and this doesn't seem to have reached the Substack debate yet, so I'll put it out there. Tegmark's hypothesis says: all possible mathematical objects exist. Consider a mathematical object like a cellular automaton - a set of simple rules that creates complex behavior. The most famous is Conway's Game of Life; the second most famous is the universe. After all, the universe is a starting condition (the Big Bang) and a set of simple rules determining how the starting condition evolves over time (the laws of physics). Some mathematical objects contain conscious observers. Conway's Life might be like this: it's Turing complete, so if a computer can be conscious then you can get consciousness in Life. If you built a supercomputer and had it run the version of Life with the conscious being, then you would be “simulating” the being, and bringing it into existence. There would be something it was like to be that being; it would have thoughts and experiences and so on. https://www.astralcodexten.com/p/tegmarks-mathematical-universe-defeats
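
The "simple rules" the post leans on really are tiny. As a purely illustrative sketch (not from the episode or the original post), here is one update step of Conway's Game of Life in Python, the kind of compact rule set that generates the complex behavior described above:

```python
# Illustrative sketch only: one update step of Conway's Game of Life.
from collections import Counter

def life_step(live_cells: set[tuple[int, int]]) -> set[tuple[int, int]]:
    # Count live neighbors for every cell adjacent to at least one live cell.
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next step with exactly 3 neighbors,
    # or with 2 neighbors if it is already alive.
    return {
        cell for cell, n in neighbor_counts.items()
        if n == 3 or (n == 2 and cell in live_cells)
    }

# A glider: five cells that keep translating themselves diagonally forever.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
print(life_step(glider))
```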

Sternengeschichten
Sternengeschichten Episode 640: Is the Universe Made of Mathematics?

Feb 28, 2025 · 10:03


Is the universe made of mathematics? Are we ourselves just mathematics? And what is that even supposed to mean? You can learn more about the mathematical universe hypothesis in the new episode of Sternengeschichten. STERNENGESCHICHTEN LIVE TOUR 2025! Tickets at https://sternengeschichten.live If you would like to support the podcast financially, you can do so here: via PayPal (https://www.paypal.me/florianfreistetter), Patreon (https://www.patreon.com/sternengeschichten), or Steady (https://steadyhq.com/sternengeschichten)

Clean Power Hour
AI, Energy, and Society: Exploring Solutions to Global Challenges with Mitch Ratcliffe | EP261

Feb 18, 2025 · 41:40 (transcription available)


Today on the Clean Power Hour, host Tim Montague engages in a conversation with Mitch Ratcliffe, Director of Digital Strategy and Innovation at Intentional Futures and host of the Earth 911 podcast. Together, they explore the complex intersection of artificial intelligence, sustainability, and society's future. Ratcliffe brings his extensive experience in technology and sustainability to discuss how AI can be leveraged to address critical environmental challenges while acknowledging its potential risks. The conversation delves into the role of community-scale microgrids in revolutionizing our energy infrastructure, the pressing need for sustainable resource management, and the challenges of balancing technological advancement with environmental stewardship. We also delved into several thought-provoking books, starting with "Superintelligence" (by Nick Bostrom) and "Life 3.0" (by Max Tegmark), which both explore the potential risks and implications of artificial general intelligence (AGI). We also mentioned "Taking Back Control" by Wolfgang Streeck, a German political economist who argues that the anti-globalization movement is a response to the loss of democratic control over economies and suggests that relocalizing can help better control resource usage and provide improved economic opportunities. "Overshoot: The Ecological Basis of Revolutionary Change" by William Catton was recommended as an analysis of humanity's current trajectory of consuming more resources than the Earth can sustain. Finally, we talked about "Multi-solving" by Elizabeth Sawin, which advocates for a systems-thinking approach to problem-solving, encouraging people to understand how changes in one part of a system affect other parts, rather than focusing on isolated solutions. Don't miss this essential conversation that bridges technology, sustainability, and social responsibility, offering both warning and hope for our collective future. Social media handles: Mitch Ratcliffe, Earth 911, Intentional Futures. Support the show. Connect with Tim: Clean Power Hour, Clean Power Hour on YouTube, Tim on Twitter, Tim on LinkedIn. Email tim@cleanpowerhour.com Review Clean Power Hour on Apple Podcasts. The Clean Power Hour is produced by the Clean Power Consulting Group and created by Tim Montague. Contact us by email: CleanPowerHour@gmail.com Corporate sponsors who share our mission to speed the energy transition are invited to check out https://www.cleanpowerhour.com/support/ The Clean Power Hour is brought to you by CPS America, maker of North America's number one 3-phase string inverter, with over 6GW shipped in the US. With a focus on commercial and utility-scale solar and energy storage, the company partners with customers to provide unparalleled performance and service. The CPS America product lineup includes 3-phase string inverters from 25kW to 275kW, exceptional data communication and controls, and energy storage solutions designed for seamless integration with CPS America systems. Learn more at www.chintpowersystems.com

Beyond The Valley
Fears around uncontrollable AI are growing — two top AI scientists explain why

Feb 4, 2025 · 30:33


Max Tegmark's Future of Life Institute has called for a pause on the development of advanced AI systems. Tegmark is concerned that the world is moving toward artificial intelligence that can't be controlled, one that could pose an existential threat to humanity. Yoshua Bengio, often dubbed one of the "godfathers of AI," shares similar concerns. In this special Davos edition of CNBC's Beyond the Valley, Tegmark and Bengio join senior technology correspondent Arjun Kharpal to discuss AI safety and worst-case scenarios, such as AI that could try to keep itself alive at the expense of others. See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

Let's Talk AI
#192 - ChatGPT Pro, Amazon Nova, GenFM, Llama 3.3, Genie 2

Dec 16, 2024 · 118:13 (transcription available)


Our 192nd episode with a summary and discussion of last week's* big AI news! *and sometimes last last week's  Note: this one was recorded on 12/04, so the news is a bit outdated... Hosted by Andrey Kurenkov and Jeremie Harris. Feel free to email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai Read our text newsletter and comment on the podcast at https://lastweekin.ai/. Sponsors: The Generator - An interdisciplinary AI lab empowering innovators from all fields to bring visionary ideas to life by harnessing the capabilities of artificial intelligence. The AI safety book “Uncontrollable,” which is not a doomer book, but instead lays out the reasonable case for AI safety and what we can do about it. Max Tegmark said that “Uncontrollable” is “a captivating, balanced, and remarkably up-to-date book on the most important issue of our time” - find it on Amazon today! In this episode: OpenAI launches a $200 ChatGPT Pro subscription with advanced capabilities, while Amazon unveils cost-effective Nova multimodal models at the re:Invent conference. Meta releases the Llama 3.3 70B model, showing significant gains through post-training techniques, and Alibaba introduces QWQ, a reasoning model rivaling OpenAI's O1. Amazon collaborates with Anthropic on a massive AI supercomputer project, and Black Forest Labs eyes a $200 million funding round for growth in AI tools. New research from DeepMind's Genie 2 generates interactive 3D worlds from text and images, progressing AI's understanding of world models and interactive environments. If you would like to become a sponsor for the newsletter, podcast, or both, please fill out this form. Timestamps + Links: (00:00:00) Intro / Banter (00:02:34) Sponsor Break Tools & Apps (00:04:19) OpenAI confirms new $200 monthly subscription, which includes its o1 reasoning model (00:10:40) Amazon announces Nova, a new family of multimodal AI models (00:17:13) ElevenLabs launches GenFM to turn user content into AI-powered podcasts (00:20:21) Google's new generative AI video model is now available Applications & Business (00:23:56) Elon Musk files for injunction to halt OpenAI's transition to a for-profit (00:29:40) Amazon Is Building a Mega AI Supercomputer With Anthropic (00:34:15) It Sounds an Awful Lot Like OpenAI Is Adding Ads to ChatGPT (00:38:23) A16z in Talks to Lead $200 Million Round in Black Forest Labs, Startup Behind AI Images on Grok (00:41:10) Bezos Backs AI Chipmaker Vying With Nvidia at $2.6 Billion Value Projects & Open Source (00:45:25) Meta unveils a new, more efficient Llama model (00:50:00) Alibaba releases an ‘open' challenger to OpenAI's o1 reasoning model (00:55:21) DeMo: Decoupled Momentum Optimization (00:57:01) PRIME Intellect Releases INTELLECT-1 (Instruct + Base): The First 10B Parameter Language Model Collaboratively Trained Across the Globe (01:03:03) Tencent Launches HunyuanVideo, an Open-Source AI Video Model Research & Advancements (01:09:23) DeepMind's Genie 2 can generate interactive worlds that look like video games (01:16:43) Language Models are Hidden Reasoners: Unlocking Latent Reasoning Capabilities via Self-Rewarding (01:20:40) Densing Law of LLMs (01:25:59) Monet: Mixture of Monosemantic Experts for Transformers Policy & Safety (01:30:56) Commerce Strengthens Export Controls to Restrict China's Capability to Produce Advanced Semiconductors for Military Applications (01:37:33) China retaliates against latest US chip restrictions (01:40:52) OpenAI Is Working With Anduril to Supply the US Military With AI (01:43:24) On Targeted Manipulation and Deception when Optimizing LLMs for User Feedback (01:47:52) AI Safety Researcher Quits OpenAI, Saying Its Trajectory Alarms Her (01:51:52) Meta Claims AI Content Was Less than 1% of Election Misinformation (01:55:05) Outro

Into the Impossible
Max Tegmark: Will AI Surpass Human Intelligence?

Dec 9, 2024 · 53:15


Let's Talk AI
#189 - Chat.com, FrontierMath, Relaxed Transformers, Trump & AI

Nov 17, 2024 · 102:46 (transcription available)


Our 189th episode with a summary and discussion of last week's big AI news! Hosted by Andrey Kurenkov and Jeremie Harris. Feel free to email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai Read our text newsletter and comment on the podcast at https://lastweekin.ai/. In this episode: * OpenAI's acquisition of chat.com and internal shifts, including hardware lead hire and hardware model leaks, signal significant strategy pivots and challenges with model scaling and security.  * Saudi Arabia plans a $100 billion AI initiative aiming to rival UAE's tech hub, highlighting the region's escalating AI investments.  * U.S. penalties on GlobalFoundries for violating sanctions against SMIC underline ongoing challenges in enforcing AI-chip export controls.  * Anthropic collaborates with Palantir and AWS to integrate Claude into defense environments, marking a significant policy shift for the company. Sponsors: The Generator - An interdisciplinary AI lab empowering innovators from all fields to bring visionary ideas to life by harnessing the capabilities of artificial intelligence. The AI safety book “Uncontrollable,” which is not a doomer book, but instead lays out the reasonable case for AI safety and what we can do about it. Max Tegmark said that “Uncontrollable” is “a captivating, balanced, and remarkably up-to-date book on the most important issue of our time” - find it on Amazon today! If you would like to become a sponsor for the newsletter, podcast, or both, please fill out this form. Timestamps + Links: (00:00:00) Intro / Banter (00:01:28) News Preview (00:02:10) Response to listener comments (00:05:02) Sponsor Break Tools & Apps (00:07:31) OpenAI Introduces ‘Predicted Outputs' Feature: Speeding Up GPT-4o by ~5x for Tasks like Editing Docs or Refactoring Code (00:11:55) Anthropic's Haiku 3.5 surprises experts with an “intelligence” price increase (00:17:10) Introducing FLUX1.1 [pro] Ultra and Raw Modes (00:19:11) X is testing a free version of Grok AI chatbot in select regions Applications & Business (00:21:39) OpenAI acquired Chat.com (00:23:40) Saudis Plan $100 Billion AI Powerhouse to Rival UAE Tech Hub (00:28:28) Meta's former hardware lead for Orion is joining OpenAI (00:31:38) OpenAI Accidentally Leaked Its Upcoming o1 Model to Anyone With a Certain Web Address (00:35:50) Nvidia Rides AI Wave to Pass Apple as World's Largest Company Projects & Open Source (00:37:53) ‘Unrestricted' AI group Nous Research launches first chatbot — with guardrails (00:41:48) FrontierMath: The Benchmark that Highlights AI's Limits in Mathematics (00:46:29) Hunyuan-Large: An Open-Source MoE Model with 52 Billion Activated Parameters by Tencent Research & Advancements (00:49:55) Applying “Golden Gate Claude” mechanistic interpretability techniques to protein language models. (00:58:3) Relaxed Recursive Transformers: Effective Parameter Sharing with Layer-wise LoRA (01:05:55) From Naptime to Big Sleep: Using Large Language Models To Catch Vulnerabilities In Real-World Code (01:10:22) OpenAI reportedly developing new strategies to deal with AI improvement slowdown Policy & Safety (01:19:52)  What Donald Trump's Win Means For AI (01:28:44) Fab Whack-A-Mole: Chinese Companies are Evading U.S. Sanctions (01:33:57) US fines GlobalFoundries for shipping chips to sanctioned Chinese firm (01:36:55) Anthropic teams up with Palantir and AWS to sell its AI to defense customers (01:39:23) Outro

Into the Impossible
Is Our Universe Just a Math Problem? Max Tegmark's Shocking Theory on Reality

Nov 11, 2024 · 43:57


Into the Impossible
Searching for a Theory of Everything with Max Tegmark, James Beacham & Stephon Alexander (2020)

Oct 23, 2024 · 94:13


What are the leading theories of everything, and are we any closer to discovering the one true theory of everything? In this 90-minute summit, some of the world's top physicists—Max Tegmark, James Beacham, Stephon Alexander—go beyond the hype to explore the very heart of physics.  Einstein began the monumental task of unifying quantum mechanics with general relativity, but will we ever succeed in unifying all the forces of the universe? Can it be done? If so, when? Join us for this thought-provoking discussion and find out! Key Takeaways:  00:00:00 Intro  00:00:45 What is a theory of everything? 00:05:25 State of the field and personal perspectives  00:20:42 Experimental challenges  00:34:13 Mathematical foundations, the multiverse, and theoretical beauty  01:05:12 Where do we go from here? 01:14:23 Audience questions 01:25:35 Outro Additional resources:  ➡️ Follow me on your fav platforms: ✖️ Twitter: https://x.com/DrBrianKeating

Sci-Fi Talk
The Science and Humanity of Quantum Suicide: An Interview with Gerrit Van Woudenberg

Oct 23, 2024 · 29:59


I dive into the mind-bending concept of Quantum Suicide, a thought experiment proposed by physicist Max Tegmark in 1997. Joining me to unpack this fascinating theory is Gerrit Van Woudenberg, the director of the film "Quantum Suicide." We'll discuss the scientific basis of the theory, explore the human obsession that drives the film's protagonist, and learn about the challenges and triumphs of bringing this complex idea to the big screen. Whether you're a quantum mechanics enthusiast or simply curious about the boundaries of our reality, this episode promises to challenge your perceptions and ignite your imagination. Subscribe To Sci-Fi Talk Plus for a One Year Free Trial and A Low Forever Monthly Price

Doktopus - Der Wissenspodcast mit Dora und Dominic
The Power of the Machines: What If AI Rules the World?

Oct 23, 2024 · 48:52


AI out of control! How can we monitor and regulate an artificial intelligence that is faster, smarter, and more powerful than we are? And what happens if it ignores our ethical boundaries? This time on Doktopus, we look at the possible societal impacts of an artificial superintelligence. How could something like that be controlled? From the idea of the "enslaved god" to a surveillance state that bans all AI research, we discuss various scenarios. They range from a peaceful, productive coexistence of humans and machines, to an AI dictatorship, to humanity's self-destruction. All just thought experiments? Or is the super-AI already closer than we think? Material for this episode: OpenAI's call for governance of superintelligent AI (May 2023): https://openai.com/index/governance-of-superintelligence/ A strong recommendation for the book "Leben 3.0" (Life 3.0) by Max Tegmark. In the episode we refer to the introduction, with the story of the superintelligent AI "Prometheus", and we also walk through the various scenarios for coexisting with superintelligent AI: https://www.ecolibri.de/shop/article/36891647/max_tegmark_leben_3_0.htmlx Interim results of the Future of Life Institute's survey on the future with superintelligent AI: https://futureoflife.org/ai/superintelligence-survey/ You can still cast your own vote there! More on the risks of superintelligent AI: https://arxiv.org/abs/2306.12001 https://futureoflife.org/resource/catastrophic-ai-scenarios/ More about the field of "AI policy" or "AI governance", including possible career options in this area: https://www.openphilanthropy.org/research/12-tentative-ideas-for-us-ai-policy/ https://80000hours.org/career-reviews/ai-policy-and-strategy/ Social media and contact: Instagram: http://instagram.com/doktopuspodcast/ Youtube: https://www.youtube.com/@doktopuspodcast E-mail: doktopuspodcast@gmail.com Credits: Research, hosting & production: Dora Dzvonyar & Dominic Anders. Sound design & post-production: Julian Dlugosch. Announcer: Marcel Gust. AI songs: Suno. AI visuals: Bing Image Creator. Intro music: Oleggio Kyrylkoww from Pixabay. Intermezzo transition: MAXOU-YT from Pixabay.

People I (Mostly) Admire
142. What's Impacting American Workers?

Oct 12, 2024 · 63:41


David Autor took his first economics class at 29 years old. Now he's one of the central academics studying the labor market. The M.I.T. economist and Steve dissect the impact of technology on labor, spar on A.I., and discuss why economists can sometimes be oblivious. SOURCES: David Autor, professor of economics at the Massachusetts Institute of Technology. RESOURCES: "Does Automation Replace Experts or Augment Expertise? The Answer Is Yes," by David Autor (Joseph Schumpeter Lecture at the European Economic Association Annual Meeting, 2024). “Applying AI to Rebuild Middle Class Jobs,” by David Autor (NBER Working Paper, 2024). “New Frontiers: The Origins and Content of New Work, 1940–2018,” by David Autor, Caroline Chin, Anna Salomons, and Bryan Seegmiller (The Quarterly Journal of Economics, 2024). “Bottlenecks: Sectoral Imbalances and the US Productivity Slowdown,” by Daron Acemoglu, David Autor, and Christina Patterson (NBER Macroeconomics Annual, 2024). "Good News: There's a Labor Shortage," by David Autor (The New York Times, 2021). "David Autor, the Academic Voice of the American Worker," (The Economist, 2019). “Why Are There Still So Many Jobs? The History and Future of Workplace Automation,” by David Autor (The Journal of Economic Perspectives, 2015). “The Growth of Low-Skill Service Jobs and the Polarization of the US Labor Market,” by David Autor and David Dorn (The American Economic Review, 2013). “The China Syndrome: Local Labor Market Effects of Import Competition in the United States,” by David Autor, David Dorn, and Gordon H. Hanson (The American Economic Review, 2013). EXTRAS: "What Do People Do All Day?" by Freakonomics Radio (2024). "Daron Acemoglu on Economics, Politics, and Power," by People I (Mostly) Admire (2024). "You Make Me Feel Like a Natural Experiment," by People I (Mostly) Admire (2022). "In Search of the Real Adam Smith," series by Freakonomics Radio (2022). "Max Tegmark on Why Superhuman Artificial Intelligence Won't be Our Slave," by People I (Mostly) Admire (2021). "Automation," by Last Week Tonight With John Oliver (2019).

PBD Podcast
"AI Cults Forming" - Max Tegmark on China Running By AI, Non-Human Governments, and Global Control | PBD Podcast | Ep. 485

Oct 7, 2024 · 40:39


Max Tegmark, a renowned MIT physicist, dives into the future of AI with Patrick Bet-David, discussing the profound impacts of AI and robotics on society, military, and global regulation. Tegmark warns of the risks and potentials of AI, emphasizing the need for global safety standards. ------

The After Hours Entrepreneur Social Media, Podcasting, and YouTube Show

In this video, we explore the good, the bad, and the ugly aspects of Patrick Bet-David's annual conference. The main focus of the conference is self-improvement and business development, but is it all a scam? I just returned from the Vault Conference in West Palm Beach, and I want to give you a comprehensive overview of the event, key takeaways, and personal experiences. If you're considering whether or not to attend Vault, this is for you. The Vault Conference, held at the Hilton and Palm Beach Conference and Expo Center, attracted 6,000 attendees and featured big-name speakers like The Rock, Will Guidara, Nick Saban, and Max Tegmark. The event was meticulously organized, with various tiers of attendance ranging from General to CEO, each offering different levels of access and networking opportunities. Whether you're a fan of Patrick Bet-David or just curious about high-level business conferences, this episode offers valuable insights and honest feedback.

Medierna
Cleared "top military officer" Johan Huovinen on being in the eye of the media storm

Sep 28, 2024 · 29:51


There are some unshakable journalistic principles: ethical rules that journalists grip so hard their knuckles turn white. Listen to all episodes in Sveriges Radio Play. But with escalating criminality, with bombings and fatal shootings in the open street, the public interest is weighed again and again against personal privacy. Has the wave of violence set the ethical rules in motion? This week, reporter Alexandra Sannemalm looked at one case: a breaking news situation and a police authority that could not give any answers that evening. Cleared "top military officer" Johan Huovinen on being in the eye of the media storm: What is it really like, in the middle of a personal catastrophe, to also end up in the media spotlight? What role does being named actually play in that situation? And who should count as a public figure? In early September came the news that the suspicions against the man the media had called the "top military officer" were dropped. Johan Huovinen chose to come forward with his name and tell his story. This week we spoke with him about the media coverage, and we begin on October 17 last year, when police in a dawn raid at five in the morning arrested Johan Huovinen and his wife in their apartment in central Stockholm. Reporter: Erik Petersson. The tech figures who want to save journalism: ...and then we head to the US and the election campaign there, where two well-known tech figures want to give journalism a helping hand. The Swedish professor Max Tegmark, known for his dystopian view of AI, runs his own news service. And Steve Ballmer, Microsoft's longtime former CEO, has launched a site meant to help voters decide whether to vote for Donald Trump or Kamala Harris. The tech figures are convinced that facts are the answer to the problem of an election campaign packed with lies and disinformation. But does throwing facts at the problem solve anything if people want to believe the lies? Reporter: Katarina Andersson. Have you seen or heard something you think we should look into more closely? Send us a tip here.

Talk Chineasy - Learn Chinese every day with ShaoLan
257 - Life in Chinese with ShaoLan and Professor Max Tegmark from MIT

Sep 14, 2024 · 8:29


World famous cosmologist Max Tegmark shares with ShaoLan his thoughts on the origins of life and the universe and moves on to discover the word for “life” in Mandarin Chinese. ✨ BIG NEWS ✨ Our brand new Talk Chineasy App is now live on the App Store! Free to download and perfect for building your speaking confidence from Day 1. Visit our website for more info about the app.

The Nonlinear Library
LW - Limitations on Formal Verification for AI Safety by Andrew Dickson

Aug 20, 2024 · 37:37


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Limitations on Formal Verification for AI Safety, published by Andrew Dickson on August 20, 2024 on LessWrong. In the past two years there has been increased interest in formal verification-based approaches to AI safety. Formal verification is a sub-field of computer science that studies how guarantees may be derived by deduction on fully-specified rule-sets and symbol systems. By contrast, the real world is a messy place that can rarely be straightforwardly represented in a reductionist way. In particular, physics, chemistry and biology are all complex sciences which do not have anything like complete symbolic rule sets. Additionally, even if we had such rules for the natural sciences, it would be very difficult for any software system to obtain sufficiently accurate models and data about initial conditions for a prover to succeed in deriving strong guarantees for AI systems operating in the real world. Practical limitations like these on formal verification have been well-understood for decades to engineers and applied mathematicians building real-world software systems, which makes it puzzling that they have mostly been dismissed by leading researchers advocating for the use of formal verification in AI safety so far. This paper will focus-in on several such limitations and use them to argue that we should be extremely skeptical of claims that formal verification-based approaches will provide strong guarantees against major AI threats in the near-term. What do we Mean by Formal Verification for AI Safety? Some examples of the kinds of threats researchers hope formal verification will help with come from the paper "Provably Safe Systems: The Only Path to Controllable AGI" [1] by Max Tegmark and Steve Omohundro (emphasis mine): Several groups are working to identify the greatest human existential risks from AGI. For example, the Center for AI Safety recently published 'An Overview of Catastrophic AI Risks' which discusses a wide range of risks including bioterrorism, automated warfare, rogue power seeking AI, etc. Provably safe systems could counteract each of the risks they describe. These authors describe a concrete bioterrorism scenario in section 2.4: a terrorist group wants to use AGI to release a deadly virus over a highly populated area. They use an AGI to design the DNA and shell of a pathogenic virus and the steps to manufacture it. They hire a chemistry lab to synthesize the DNA and integrate it into the protein shell. They use AGI controlled drones to disperse the virus and social media AGIs to spread their message after the attack. Today, groups are working on mechanisms to prevent the synthesis of dangerous DNA. But provably safe infrastructure could stop this kind of attack at every stage: biochemical design AI would not synthesize designs unless they were provably safe for humans, data center GPUs would not execute AI programs unless they were certified safe, chip manufacturing plants would not sell GPUs without provable safety checks, DNA synthesis machines would not operate without a proof of safety, drone control systems would not allow drones to fly without proofs of safety, and armies of persuasive bots would not be able to manipulate media without proof of humanness. 
[1] The above quote contains a number of very strong claims about the possibility of formally or mathematically provable guarantees around software systems deployed in the physical world - for example, the claim that we could have safety proofs about the real-world good behavior of DNA synthesis machines, or drones. From a practical standpoint, our default stance towards such claims should be skepticism, since we do not have proofs of this sort for any of the technologies we interact with in the real-world today. For example, DNA synthesis machines exist today and do not come with f...

The Nonlinear Library
AF - Limitations on Formal Verification for AI Safety by Andrew Dickson

Aug 19, 2024 · 37:37


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Limitations on Formal Verification for AI Safety, published by Andrew Dickson on August 19, 2024 on The AI Alignment Forum. In the past two years there has been increased interest in formal verification-based approaches to AI safety. Formal verification is a sub-field of computer science that studies how guarantees may be derived by deduction on fully-specified rule-sets and symbol systems. By contrast, the real world is a messy place that can rarely be straightforwardly represented in a reductionist way. In particular, physics, chemistry and biology are all complex sciences which do not have anything like complete symbolic rule sets. Additionally, even if we had such rules for the natural sciences, it would be very difficult for any software system to obtain sufficiently accurate models and data about initial conditions for a prover to succeed in deriving strong guarantees for AI systems operating in the real world. Practical limitations like these on formal verification have been well-understood for decades to engineers and applied mathematicians building real-world software systems, which makes it puzzling that they have mostly been dismissed by leading researchers advocating for the use of formal verification in AI safety so far. This paper will focus-in on several such limitations and use them to argue that we should be extremely skeptical of claims that formal verification-based approaches will provide strong guarantees against major AI threats in the near-term. What do we Mean by Formal Verification for AI Safety? Some examples of the kinds of threats researchers hope formal verification will help with come from the paper "Provably Safe Systems: The Only Path to Controllable AGI" [1] by Max Tegmark and Steve Omohundro (emphasis mine): Several groups are working to identify the greatest human existential risks from AGI. For example, the Center for AI Safety recently published 'An Overview of Catastrophic AI Risks' which discusses a wide range of risks including bioterrorism, automated warfare, rogue power seeking AI, etc. Provably safe systems could counteract each of the risks they describe. These authors describe a concrete bioterrorism scenario in section 2.4: a terrorist group wants to use AGI to release a deadly virus over a highly populated area. They use an AGI to design the DNA and shell of a pathogenic virus and the steps to manufacture it. They hire a chemistry lab to synthesize the DNA and integrate it into the protein shell. They use AGI controlled drones to disperse the virus and social media AGIs to spread their message after the attack. Today, groups are working on mechanisms to prevent the synthesis of dangerous DNA. But provably safe infrastructure could stop this kind of attack at every stage: biochemical design AI would not synthesize designs unless they were provably safe for humans, data center GPUs would not execute AI programs unless they were certified safe, chip manufacturing plants would not sell GPUs without provable safety checks, DNA synthesis machines would not operate without a proof of safety, drone control systems would not allow drones to fly without proofs of safety, and armies of persuasive bots would not be able to manipulate media without proof of humanness. 
[1] The above quote contains a number of very strong claims about the possibility of formally or mathematically provable guarantees around software systems deployed in the physical world - for example, the claim that we could have safety proofs about the real-world good behavior of DNA synthesis machines, or drones. From a practical standpoint, our default stance towards such claims should be skepticism, since we do not have proofs of this sort for any of the technologies we interact with in the real-world today. For example, DNA synthesis machines exist today and do no...

Rich On Tech
Pixel 9, Massive Data Breach & Dangers of AI

Aug 17, 2024 · 107:11 (transcription available)


Rich discusses the new Pixel hardware from the latest Made By Google Event. Here's how to check if your SSN was involved in that massive data breach. Links to freeze credit reports: Experian, TransUnion, Equifax. Ryan Montgomery, founder of cybersecurity firm Pentester. NPD Breach Check Tool. Mentioned: Bitdefender and ESET for anti-virus, iVPN and Mullvad for VPNs. Follow Ryan on Instagram for more cybersecurity info and tips. Eric in the I.E. asks if there's something similar to Pixel's Call Notes feature for the iPhone. Apple is adding a call recording feature in iOS 18. Samsung is bringing Circle to Search to select Galaxy A series smartphones. Rich shares details on the Pixel Watch 3 and Pixel Buds Pro 2. Maxx in Lake Worth, FL says you should consider freezing one more system, ChexSystems, to avoid identity theft through bank accounts being opened in your name. Mark in Woodland Hills is wondering if there's a way to keep his phone cool while cycling outside. Rich mentioned the Phoozy phone case. Fitbit Premium users will have access to some Peloton classes starting in September. Rishi Chandra, Google Home; Shenaz Zack, Google Pixel; and Sandeep Waraich, Pixel Wearables. Melody in Carlsbad says random photos are showing up on her Mac computer. Check the folder you're using for your screensaver: open System Preferences, Screen Saver, Photos and check the library or folder. Elsa in Playa Del Rey wants to use DIRECTV Stream but it doesn't work with Samsung. Robert is trying to make his DIRECTV Genies tune to a certain sports game at a certain time. California will soon let you add your Driver's License and ID to Apple Wallet. Max Tegmark of the Future of Life Institute will talk about the good and bad of AI. Proton VPN now has a browser extension and it's completely free. There's a new world record for most consoles hooked up to one TV: 444. Dylan from St. Louis wrote in to say how much he likes the discount gift card website GCX, formerly Raise. Rich DeMuro talks about tech news, tips, and gadget reviews and conducts interviews in this weekly show. Airs 11 AM - 2 PM PT on KFI AM 640 and syndicated on 350+ stations nationwide. Stream live on the iHeartRadio App or subscribe to the podcast. Follow Rich on X, Instagram and Facebook. Call 1-888-RICH-101 (1-888-742-4101) to join in! Links may be affiliate. RichOnTech.tv See omnystudio.com/listener for privacy information.

The Nonlinear Library
LW - Provably Safe AI: Worldview and Projects by bgold

Aug 10, 2024 · 13:12


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Provably Safe AI: Worldview and Projects, published by bgold on August 10, 2024 on LessWrong. In September 2023, Max Tegmark and Steve Omohundro proposed "Provably Safe AI" as a strategy for AI Safety. In May 2024, a larger group delineated the broader concept of "Guaranteed Safe AI" which includes Provably Safe AI and other related strategies. In July, 2024, Ben Goldhaber and Steve discussed Provably Safe AI and its future possibilities, as summarized in this document. Background In June 2024, ex-OpenAI AI Safety Researcher Leopold Aschenbrenner wrote a 165-page document entitled "Situational Awareness, The Decade Ahead" summarizing AI timeline evidence and beliefs which are shared by many frontier AI researchers. He argued that human-level AI is likely by 2027 and will likely lead to superhuman AI in 2028 or 2029. "Transformative AI" was coined by Open Philanthropy to describe AI which can "precipitate a transition comparable to the agricultural or industrial revolution". There appears to be a significant probability that Transformative AI may be created by 2030. If this probability is, say, greater than 10%, then humanity must immediately begin to prepare for it. The social changes and upheaval caused by Transformative AI are likely to be enormous. There will likely be many benefits but also many risks and dangers, perhaps even existential risks for humanity. Today's technological infrastructure is riddled with flaws and security holes. Power grids, cell service, and internet services have all been very vulnerable to accidents and attacks. Terrorists have attacked critical infrastructure as a political statement. Today's cybersecurity and physical security barely keeps human attackers at bay. When these groups obtain access to powerful cyberattack AI's, they will likely be able to cause enormous social damage and upheaval. Humanity has known how to write provably correct and secure software since Alan Turing's 1949 paper. Unfortunately, proving program correctness requires mathematical sophistication and it is rare in current software development practice. Fortunately, modern deep learning systems are becoming proficient at proving mathematical theorems and generating provably correct code. When combined with techniques like "autoformalization," this should enable powerful AI to rapidly replace today's flawed and insecure codebase with optimized, secure, and provably correct replacements. Many researchers working in these areas believe that AI theorem-proving at the level of human PhD's is likely about two years away. Similar issues plague hardware correctness and security, and it will be a much larger project to replace today's flawed and insecure hardware. Max and Steve propose formal methods grounded in mathematical physics to produce provably safe physical designs. The same AI techniques which are revolutionizing theorem proving and provable software synthesis are also applicable to provable hardware design. Finally, today's social mechanisms like money, contracts, voting, and the structures of governance, will also need to be updated for the new realities of an AI-driven society. Here too, the underlying rules of social interaction can be formalized, provably effective social protocols can be designed, and secure hardware implementing the new rules synthesized using powerful theorem proving AIs. What's next? 
Given the huge potential risk of uncontrolled powerful AI, many have argued for a pause in Frontier AI development. Unfortunately, that does not appear to be a stable solution. Even if the US paused its AI development, China or other countries could gain an advantage by accelerating their own work. There have been similar calls to limit the power of open source AI models. But, again, any group anywhere in the world can release their powerful AI model weig...

Talk Chineasy - Learn Chinese every day with ShaoLan
137 - Universe in Chinese with ShaoLan and Professor Max Tegmark from MIT

May 17, 2024 · 8:42


Cosmologist Max Tegmark not only is an expert on the universe that we live in, but he can also speak great Chinese! He calls from the USA to chat with ShaoLan about how to say “universe” in Chinese and explains how our understanding of the universe continues to amaze him.

The Nonlinear Library
AF - Towards Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems by Joar Skalse

May 17, 2024 · 4:42


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Towards Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems, published by Joar Skalse on May 17, 2024 on The AI Alignment Forum. I want to draw attention to a new paper, written by myself, David "davidad" Dalrymple, Yoshua Bengio, Stuart Russell, Max Tegmark, Sanjit Seshia, Steve Omohundro, Christian Szegedy, Ben Goldhaber, Nora Ammann, Alessandro Abate, Joe Halpern, Clark Barrett, Ding Zhao, Tan Zhi-Xuan, Jeannette Wing, and Joshua Tenenbaum. In this paper we introduce the concept of "guaranteed safe (GS) AI", which is a broad research strategy for obtaining safe AI systems with provable quantitative safety guarantees. Moreover, with a sufficient push, this strategy could plausibly be implemented on a moderately short time scale. The key components of GS AI are: 1. A formal safety specification that mathematically describes what effects or behaviors are considered safe or acceptable. 2. A world model that provides a mathematical description of the environment of the AI system. 3. A verifier that provides a formal proof (or some other comparable auditable assurance) that the AI system satisfies the safety specification with respect to the world model. The first thing to note is that a safety specification in general is not the same thing as a reward function, utility function, or loss function (though they include these objects as special cases). For example, it may specify that the AI system should not communicate outside of certain channels, copy itself to external computers, modify its own source code, or obtain information about certain classes of things in the external world, etc. The safety specifications may be specified manually, generated by a learning algorithm, written by an AI system, or obtained through other means. Further detail is provided in the main paper. The next thing to note is that most useful safety specifications must be given relative to a world model. Without a world model, we can only use specifications defined directly over input-output relations. However, we want to define specifications over input-outcome relations instead. This is why a world model is a core component of GS AI. Also note that: 1. The world model need not be a "complete" model of the world. Rather, the required amount of detail and the appropriate level of abstraction depends on both the safety specification(s) and the AI system's context of use. 2. The world model should of course account for uncertainty, which may include both stochasticity and nondeterminism. 3. The AI system whose safety is being verified may or may not use a world model, and if it does, we may or may not be able to extract it. However, the world model that is used for the verification of the safety properties need not be the same as the world model of the AI system whose safety is being verified (if it has one). The world model would likely have to be AI-generated, and should ideally be interpretable. In the main paper, we outline a few potential strategies for producing such a world model. Finally, the verifier produces a quantitative assurance that the base-level AI controller satisfies the safety specification(s) relative to the world model(s). In the most straightforward form, this could simply take the shape of a formal proof. 
However, if a direct formal proof cannot be obtained, then there are weaker alternatives that would still produce a quantitative guarantee. For example, the assurance may take the form of a proof that bounds the probability of failing to satisfy the safety specification, or a proof that the AI system will converge towards satisfying the safety specification (with increasing amounts of data or computational resources, for example). Such proofs are of course often very hard to obtain. However, further progress in automated theorem proving (and relat...
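
As a toy illustration of how the three components fit together (an assumption-laden sketch, not the paper's formalism; all names and types here are hypothetical, and a real verifier would produce an auditable proof certificate rather than enumerate outcomes), the structure can be expressed in a few lines of Python:

```python
# Toy sketch of the Guaranteed Safe AI structure: world model + safety
# specification + verifier. Names and types are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable, Iterable

State = dict   # stand-in for a modeled world state
Action = str   # stand-in for an AI system's output

@dataclass
class WorldModel:
    """Mathematical description of how an action affects the environment."""
    reachable_states: Callable[[Action], Iterable[State]]

@dataclass
class SafetySpec:
    """Mathematical description of which outcomes are acceptable."""
    is_acceptable: Callable[[State], bool]

def verify(action: Action, world: WorldModel, spec: SafetySpec) -> bool:
    """Toy verifier: checks the spec over every modeled outcome of the action."""
    return all(spec.is_acceptable(s) for s in world.reachable_states(action))

# Example: the spec forbids any outcome in which data leaves an approved channel.
world = WorldModel(lambda a: [{"channel": "approved", "payload": a}])
spec = SafetySpec(lambda s: s["channel"] == "approved")
print(verify("hello", world, spec))  # True
```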

The Nonlinear Library
LW - Towards Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems by Gunnar Zarncke

May 16, 2024 · 1:48


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Towards Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems, published by Gunnar Zarncke on May 16, 2024 on LessWrong. Authors: David "davidad" Dalrymple, Joar Skalse, Yoshua Bengio, Stuart Russell, Max Tegmark, Sanjit Seshia, Steve Omohundro, Christian Szegedy, Ben Goldhaber, Nora Ammann, Alessandro Abate, Joe Halpern, Clark Barrett, Ding Zhao, Tan Zhi-Xuan, Jeannette Wing, Joshua Tenenbaum Abstract: Ensuring that AI systems reliably and robustly avoid harmful or dangerous behaviours is a crucial challenge, especially for AI systems with a high degree of autonomy and general intelligence, or systems used in safety-critical contexts. In this paper, we will introduce and define a family of approaches to AI safety, which we will refer to as guaranteed safe (GS) AI. The core feature of these approaches is that they aim to produce AI systems which are equipped with high-assurance quantitative safety guarantees. This is achieved by the interplay of three core components: a world model (which provides a mathematical description of how the AI system affects the outside world), a safety specification (which is a mathematical description of what effects are acceptable), and a verifier (which provides an auditable proof certificate that the AI satisfies the safety specification relative to the world model). We outline a number of approaches for creating each of these three core components, describe the main technical challenges, and suggest a number of potential solutions to them. We also argue for the necessity of this approach to AI safety, and for the inadequacy of the main alternative approaches. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

The Foresight Institute Podcast
Anthony Aguirre, FLI | Why Worldbuilding? | Xhope Worldbuilding Course

May 3, 2024 · 10:49


Speaker: Anthony is the Executive Director & Secretary of the Board at the Future of Life Institute, and the Faggin Presidential Professor for the Physics of Information at UC Santa Cruz. He has done research in an array of topics in theoretical cosmology, gravitation, statistical mechanics, and other fields of physics. He also has strong interest in science outreach, and has appeared in numerous science documentaries. He is a creator of the science and technology prediction platform Metaculus.com, and is founder (with Max Tegmark) of the Foundational Questions Institute. Worldbuilding Course: This session is a part of THE WORLDBUILDING CHALLENGE: CO-CREATING THE WORLD OF 2045. In this virtual and interactive course, we engage with the most pressing global challenges of our age—climate change, the risks of AI, and the complex ethical questions arising in the wake of new technologies. Our aim is to sharpen participants' awareness and equip them to apply their skills to these significant and urgent issues. Existential Hope: Existential Hope was created to collect positive and possible scenarios for the future so that we can have more people commit to creating a brighter future, and to begin mapping out the main developments and challenges that need to be navigated to reach it. Existential Hope is a Foresight Institute project. Hosted by Allison Duettmann and Beatrice Erkers. Follow Us: Twitter | Facebook | LinkedIn | Existential Hope Instagram. Explore every word spoken on this podcast through Fathom.fm. Hosted on Acast. See acast.com/privacy for more information.

People I (Mostly) Admire
124. Daron Acemoglu on Economics, Politics, and Power

Feb 3, 2024 · 44:32


Economist Daron Acemoglu likes to tackle big questions. He tells Steve how colonialism still affects us today, who benefits from new technology, and why democracy wasn't always a sure thing. SOURCE: Daron Acemoglu, professor of economics at the Massachusetts Institute of Technology. RESOURCES: Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity, by Daron Acemoglu and Simon Johnson (2023). "Economists Pin More Blame on Tech for Rising Inequality," by Steve Lohr (The New York Times, 2022). "America's Slow-Motion Wage Crisis: Four Decades of Slow and Unequal Growth," by John Schmitt, Elise Gould, and Josh Bivens (Economic Policy Institute, 2018). "Why Mental Health Advocates Use the Words 'Died by Suicide,'" by Nicole Spector (NBC News, 2018). "A Machine That Made Stockings Helped Kick Off the Industrial Revolution," by Sarah Laskow (Atlas Obscura, 2017). "The Long-Term Jobs Killer Is Not China. It's Automation," by Claire Cain Miller (The New York Times, 2016). Why Nations Fail: The Origins of Power, Prosperity, and Poverty, by Daron Acemoglu and James A. Robinson (2012). "The Colonial Origins of Comparative Development: An Empirical Investigation," by Daron Acemoglu, Simon Johnson, and James A. Robinson (American Economic Review, 2001). "Learning about Others' Actions and the Investment Accelerator," by Daron Acemoglu (The Economic Journal, 1993). "A Friedman Doctrine — The Social Responsibility of Business Is to Increase Its Profits," by Milton Friedman (The New York Times, 1970). EXTRAS: "'My God, This Is a Transformative Power,'" by People I (Mostly) Admire (2023). "New Technologies Always Scare Us. Is A.I. Any Different?" by Freakonomics Radio (2023). "Max Tegmark on Why Superhuman Artificial Intelligence Won't be Our Slave," by People I (Mostly) Admire (2021). "How to Prevent Another Great Depression," by Freakonomics Radio (2020). "Is Income Inequality Inevitable?" by Freakonomics Radio (2017).

The Agenda with Steve Paikin (Audio)

Ilya Sutskever, the chief scientist at OpenAI, the company that created ChatGPT, has said today's technology might be "slightly conscious." Google engineer Blake Lemoine claimed that Google's AI LaMDA was "sentient." Is it? Could AI become conscious in our lifetime? And beyond that: if we can create AI sentience, should we? MIT's Max Tegmark, author of "Life 3.0," and others debate the future of AI. See omnystudio.com/listener for privacy information.

Into the Impossible
Max Tegmark & Eric Weinstein: AI, Aliens, Theories of Everything, & New Year's Resolutions! (2020)

Jan 4, 2024 · 145:34


Win a $100 Amazon Gift Card! Help me help you get great guests on the Into the Impossible podcast and spread the message throughout the universe. Fill out this listener survey (https://jf1bh9v88hb.typeform.com/to/FGPUYcdT) for your chance to win! Enjoy this classic episode from the vault: Max Tegmark & Eric Weinstein, New Year's Eve 2020! Don't forget to join my mailing list for your chance to win a real meteorite: briankeating.com Join this channel to get access to perks: https://www.youtube.com/channel/UCmXH_moPhfkqCk6S3b9RWuw/join

Best of the Left - Leftist Perspectives on Progressive Politics, News, Culture, Economics and Democracy

Air Date 12/20/2023 AI needs to be regulated by governments even though politicians don't understand computers, just as the government regulates the manufacture and operation of aircraft even though your average politician doesn't know their ass from an aileron. That's what expert advisory panels are for. Be part of the show! Leave us a message or text at 202-999-3991 or email Jay@BestOfTheLeft.com Transcript WINTER SALE! 20% Off Memberships (including Gifts) in December! Join our Discord community! Related Episodes: #1547 Shaping the Future of the Internet #1578 A.I. is a big tech airplane with a 10% chance of crashing, should society fly it? OUR AFFILIATE LINKS: ExpressVPN.com/BestOfTheLeft GET INTERNET PRIVACY WITH EXPRESS VPN! BestOfTheLeft.com/Libro SUPPORT INDIE BOOKSHOPS, GET YOUR AUDIOBOOK FROM LIBRO! BestOfTheLeft.com/Bookshop BotL BOOKSTORE BestOfTheLeft.com/Store BotL MERCHANDISE! SHOW NOTES Ch. 1: How are governments approaching AI regulation - In Focus by The Hindu - Air Date 11-16-23 Dr Matti Pohjonen speaks to us about the concerns revolving around AI governance, and whether there are any fundamental principles that an AI regulatory regime needs to address. Ch. 2: A First Step Toward AI Regulation with Tom Wheeler - Your Undivided Attention - Air Date 11-2-23 President Biden released a sweeping executive order that addresses many risks of artificial intelligence. Tom Wheeler, former chairman of the Federal Communications Commission, shares his insights on the order and what's next for AI regulation. Ch. 3: Artificial Intelligence Godfathers Call for Regulation as Rights Groups Warn AI Encodes Oppression - Democracy Now! - Air Date 6-1-23 We host a roundtable discussion with three experts in artificial intelligence on growing concerns over the technology's potential dangers: Yoshua Bengio, Max Tegmark, and Tawana Petty. Ch. 4: The EU agrees on AI regulations - What will it mean for people and businesses in the EU - DW News - Air Date 12-9-23 European Union member states and lawmakers reached a preliminary agreement on what they touted as the world's first comprehensive AI legislation on Friday. Ch. 5: EU vs. AI - Today, Explained - Air Date 12-18-23 The EU has advanced first-of-its-kind AI regulation. The Verge's Jess Weatherbed tells us whether it will make a difference, and Columbia University's Anu Bradford explains the Brussels effect. Ch. 6: A First Step Toward AI Regulation with Tom Wheeler Part 2 - Your Undivided Attention - Air Date 11-2-23 Ch. 7: How to Keep AI Under Control | Max Tegmark - TEDTalks - Air Date 11-2-23 Scientist Max Tegmark describes an optimistic vision for how we can keep AI under control and ensure it's working for us, not the other way around. MEMBERS-ONLY BONUS CLIP(S) Ch. 8: Anti-Democratic Tech Firm's Secret Push For A.I. Deregulation w Lori Wallach - Thom Hartmann Program - Air Date 8-8-23 Giant tech firms are meeting secretly to deregulate Artificial Intelligence technologies in the most undemocratic ways possible. Can profit really take over and corrupt progress? Ch. 9: How are governments approaching AI regulation Part 2 - In Focus by The Hindu - Air Date 11-16-23 FINAL COMMENTS Ch. 10: Final comments on the need to understand the benefits and downsides of new technology MUSIC (Blue Dot Sessions) Produced by Jay! Tomlinson Visit us at BestOfTheLeft.com Listen Anywhere! BestOfTheLeft.com/Listen Listen Anywhere! Follow at Twitter.com/BestOfTheLeft Like at Facebook.com/BestOfTheLeft Contact me directly at Jay@BestOfTheLeft.com

The Nonlinear Library
EA - Announcing New Beginner-friendly Book on AI Safety and Risk by Darren McKee

The Nonlinear Library

Play Episode Listen Later Nov 25, 2023 1:41


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing New Beginner-friendly Book on AI Safety and Risk, published by Darren McKee on November 25, 2023 on The Effective Altruism Forum. Concisely: I've just released the book Uncontrollable: The Threat of Artificial Superintelligence and the Race to Save the World. It's an engaging introduction to the main issues and arguments about AI safety and risk. Clarity and accessibility were prioritized. There are blurbs of support from Max Tegmark, Will MacAskill, Roman Yampolskiy and others. The main argument is that AI capabilities are increasing rapidly and we may not be able to fully align or control advanced AI systems, which creates risk. There is great uncertainty, so we should be prudent and act now to ensure AI is developed safely. It tries to be hopeful. Why does it exist? There are lots of useful posts, blogs, podcasts, and articles on AI safety, but there was no up-to-date book entirely dedicated to the AI safety issue written for those without any exposure to it (including those with no science background). This book is meant to fill that gap and could be useful as outreach or introductory material. If you have already been following the AI safety issue, there likely isn't a lot that is new for you, so this might be best seen as something useful for friends, relatives, some policy makers, or others just learning about the issue (although you may still like the framing). It's available on numerous Amazon marketplaces. Audiobook and hardcover options to follow. It was a hard journey. I hope it is of value to the community. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

People I (Mostly) Admire
118. “My God, This Is a Transformative Power”

People I (Mostly) Admire

Play Episode Listen Later Nov 11, 2023 43:36


Computer scientist Fei-Fei Li had a wild idea: download one billion images from the internet and teach a computer to recognize them. She ended up advancing the state of artificial intelligence — and she hopes that will turn out to be a good thing for humanity.
RESOURCES:
The Worlds I See: Curiosity, Exploration, and Discovery at the Dawn of A.I., by Fei-Fei Li (2023).
"Fei-Fei Li's Quest to Make AI Better for Humanity," by Jessi Hempel (Wired, 2018).
"ImageNet Large Scale Visual Recognition Challenge," by Olga Russakovsky, Li Fei-Fei, et al. (International Journal of Computer Vision, 2015).
EXTRAS:
"How to Think About A.I." series by Freakonomics Radio (2023).
"Will A.I. Make Us Smarter?" by People I (Mostly) Admire (2023).
"Satya Nadella's Intelligence Is Not Artificial," by Freakonomics Radio (2023).
"Max Tegmark on Why Superhuman Artificial Intelligence Won't be Our Slave," by People I (Mostly) Admire (2021).

TED Talks Daily
How to keep AI under control | Max Tegmark

TED Talks Daily

Play Episode Listen Later Oct 30, 2023 12:06 Very Popular


The current explosion of exciting commercial and open-source AI is likely to be followed, within a few years, by creepily superintelligent AI – which top researchers and experts fear could disempower or wipe out humanity. Scientist Max Tegmark describes an optimistic vision for how we can keep AI under control and ensure it's working for us, not the other way around.

TED Talks Daily (SD video)
Is superintelligent AI inevitable? | Max Tegmark

TED Talks Daily (SD video)

Play Episode Listen Later Oct 30, 2023 12:06


The current explosion of exciting commercial and open-source AI is likely to be followed, within a few years, by creepily superintelligent AI – which top researchers and experts fear could disempower or wipe out humanity. Scientist Max Tegmark describes an optimistic vision for how we can keep AI under control and ensure it's working for us, not the other way around.

TED Talks Daily (HD video)
Is superintelligent AI inevitable? | Max Tegmark

TED Talks Daily (HD video)

Play Episode Listen Later Oct 30, 2023 12:06


The current explosion of exciting commercial and open-source AI is likely to be followed, within a few years, by creepily superintelligent AI – which top researchers and experts fear could disempower or wipe out humanity. Scientist Max Tegmark describes an optimistic vision for how we can keep AI under control and ensure it's working for us, not the other way around.

Mind the Shift
112. The War Against Life – Per Shapiro

Mind the Shift

Play Episode Listen Later Sep 29, 2023 93:30


What if AI is an expression of what could be described, with a Gnostic term, as archontic intelligence? Is it the latest innovation by a force that has been manipulating humanity for millennia? Per Shapiro used to work for Swedish public service as an investigative radio reporter. He grew increasingly frustrated with the constraints of the mainstream narratives. When his boss demanded that he redo a documentary about the pandemic that challenged the official view on vaccines and other restrictions, because it ”sounded like conspiracy theories”, Per decided to quit. He started his own independent channel. Per speaks passionately about some of the most toxic and manipulative terms in journalism (and elsewhere): conspiracy theories, false balance and guilt by association. Shortly after leaving mainstream media, Per felt compelled to write a book about the way he sees what is happening with society and humankind. The title would translate to The War Against Life. At the very beginning Per quotes captain Ahab from Herman Melville's novel Moby Dick: All my means are perfectly rational, it is my goal that is insane. This quote sums up much of our overarching societal structures, in Per's view. ”Intelligence is something other than wisdom. Intelligence is the ability to solve complex problems, to achieve complex goals. To be wise takes experiencing the world, experiencing yourself as a part of the world”, Per says. He makes an analogy with cancer cells. ”You might say that a cancer cell is more intelligent than a healthy cell, because it achieves its goal more efficiently. But it lacks the experience that it is actually a cell in a body, on which it depends for its life.” ”This is a metaphor for how we live our lives on this planet.” Many say that capitalism is the root of our problems. No, says Per: ”Capitalism is the symptom of a culture which is disconnected from the earth, from nature.” The Swiss mystic, scientist and psychedelics pioneer Albert Hofmann – cited in Per's book – said that the Western psyche has been struck by a schizoid catastrophe – a mindset of being separate from nature, from life itself. Per's book is first and foremost inspired by the Gnostic message and worldview. He has had several conversations with mythologist John Lamb Lash (also interviewed on this podcast), who has devoted his life to interpreting the Gnostic message. In Gnostic mythology, the wisdom goddess Sophia is Earth itself. ”It's important to know that this is not an abstract deity somewhere far away, this is a first hand experience of the only source of power you will ever have”, Per says. ”When we have lost this connection to our true source of power, we can more easily be manipulated to believe in illusions of power from other sources.” One of the most difficult parts to understand in the Gnostic mythology is the archontic influence. Per agrees with John Lamb Lash that it can be described as a mind virus. It can hijack your thoughts and ideas. One sneaky archontic modus operandi is counter mimicry: to artificially simulate real experiences. ”It piggybacks on our god-given faculty of imagination. But it turns it around so it becomes a simulation”, Per says. Transhumanism and AI come to mind. ”We have come to view ourselves with an archontic perspective, as if we are machines that need upgrading.” Per shares a deep concern about AI with his brother, MIT physicist Max Tegmark. Max has talked about this in several podcasts and radio shows. 
Bizarrely enough, the MIT professor has been fiercely attacked by the mainstream, not only for his AI worries, but also for lauding the work of his own brother, ”a known conspiracy theorist”.
Per's channel ”Folkets Radio” on YouTube
Per's book

Talk Chineasy - Learn Chinese every day with ShaoLan
257 - Life in Chinese with ShaoLan and Professor Max Tegmark from MIT

Talk Chineasy - Learn Chinese every day with ShaoLan

Play Episode Listen Later Sep 14, 2023 8:29


World-famous cosmologist Max Tegmark shares with ShaoLan his thoughts on the origins of life and the universe and moves on to discover the word for ”life” in Mandarin Chinese.

Your One Black Friend
Quantum Immortality & The Power Of Infinity (Using a Materialist Approach to Argue No one Dies.)

Your One Black Friend

Play Episode Listen Later Aug 9, 2023 58:17


The power of infinity: you give an anomaly enough time, and eventually, the anomaly will repeat. It's only a MATTER of TIME. Get it? You get it. Right. In an infinite universe, death is an illusion. Since "matter cannot be created or destroyed, only modified," according to the law of conservation of mass, even from a #materialist approach no one really dies. Or perhaps more precisely: no one stays dead.

SmartLess
"MIT Professor Max Tegmark: LIVE in Boston"

SmartLess

Play Episode Listen Later Jul 13, 2023 50:40


Find out more about mechanical cats doing Cats on Broadway… with our esteemed surprise guest: physicist, cosmologist, and machine-learning researcher Max Tegmark, LIVE in Boston. (Recorded on February 04, 2022.) Listen to “SmartLess Live” episodes four weeks early and ad-free on Wondery+. See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

Love Marry Kill
Special Report: Artificial Intelligence and Crime

Love Marry Kill

Play Episode Listen Later Jun 1, 2023 66:58


Artificial Intelligence is advancing at an amazing rate. In this special bonus episode, we take a step back from our usual true crime stories to discuss AI as it relates to the world of crime. How is AI helping law enforcement? How is it helping criminals? What new forms of crime are we likely to see enabled by AI? And what is the existential threat posed by AI to humanity and what efforts are underway to avoid catastrophe?
Sources:
https://www.npr.org/2023/01/25/1151435033/a-robot-was-scheduled-to-argue-in-court-then-came-the-jail-threats
https://www.oliverwyman.com/our-expertise/insights/2018/dec/risk-journal-vol-8/rethinking-tactics/the-risks-and-benefits-of-using-ai-to-detect-crime.html
https://www.newscientist.com/article/2326297-ai-predicts-crime-a-week-in-advance-with-90-per-cent-accuracy/
https://crimesciencejournal.biomedcentral.com/articles/10.1186/s40163-020-00123-8
https://www.zdnet.com/article/evil-ai-these-are-the-20-most-dangerous-crimes-that-artificial-intelligence-will-create/
https://nypost.com/2023/04/12/ai-clones-teen-girls-voice-in-1m-kidnapping-scam/
Max Tegmark interview: https://www.youtube.com/watch?v=ewvpaXOQJoU
Geoffrey Hinton interview: https://www.youtube.com/watch?v=Y6Sgp7y178k
Stephen Hawking interview: https://www.youtube.com/watch?v=fFLVyWBDTfo
Max Tegmark interview: https://www.techseekinghuman.ai/episode-2-max-tegmark/
Morgan Freeman deepfake video: https://www.youtube.com/watch?v=oxXpB9pSETo

London Real
Professor Max Tegmark - Stop AI Development Now! The Dangers Of Chat GPT-4

London Real

Play Episode Listen Later May 28, 2023 71:10


Watch the Full Episode for FREE: Stop AI Development Now! The Dangers Of Chat GPT-4 - MIT Professor Max Tegmark - London Real

The AI Breakdown: Daily Artificial Intelligence News and Discussions
AI Alignment and AGI - How Worried Should We Actually Be?

The AI Breakdown: Daily Artificial Intelligence News and Discussions

Play Episode Listen Later May 28, 2023 19:06


A reading of Max Tegmark's essay "The 'Don't Look Up' Thinking That Could Doom Us With AI" The AI Breakdown helps you understand the most important news and discussions in AI.  Subscribe to The AI Breakdown newsletter: https://theaibreakdown.beehiiv.com/subscribe Subscribe to The AI Breakdown on YouTube: https://www.youtube.com/@TheAIBreakdown Join the community: bit.ly/aibreakdown Learn more: http://breakdown.network/

Talk Chineasy - Learn Chinese every day with ShaoLan
137 - Universe in Chinese with ShaoLan and Professor Max Tegmark from MIT

Talk Chineasy - Learn Chinese every day with ShaoLan

Play Episode Listen Later May 17, 2023 8:42


Cosmologist Max Tegmark is not only an expert on the universe that we live in, but he also speaks great Chinese! He calls from the USA to chat with ShaoLan about how to say ”universe” in Chinese and explains how our understanding of the universe continues to amaze him.

Perpetual Traffic
11 Life-Transforming AI Tools You're Not Using…But Should

Perpetual Traffic

Play Episode Listen Later Apr 14, 2023 31:33 Transcription Available


In this episode of Perpetual Traffic, Ralph and Kasim discuss various AI tools that can be used to make money. They cover tools like Zapier, ChatGPT, and Tweet Hunter, which can help with legal protection, Twitter threads, and data visualization. The hosts also touch on the importance of personality in YouTube chat and the limitations of threads in online courses. They recommend subscribing to The Rundown newsletter for the latest AI news and updates and mention the book Life 3.0 by Max Tegmark. Additionally, they discuss the potential of AI to replace certain professions and the need for universal basic income. This episode offers valuable insights into the ways AI can be leveraged for business and the impact it may have on society.
In This Episode, You'll Learn:
00:00 - AI Tools and Resources: An Introduction
04:23 - AI Beats Radiologists and Other Professionals
06:57 - ChatGPT: A Powerful AI Tool to Use Like a Person
07:47 - Consider Inviting Kasim Aslam as a Guest on the Show
09:37 - Tweet Hunter: A Useful Tool for Twitter Threads
12:15 - Lengthy Terms of Service and NDA Agreements Can Be a Burden
16:02 - Limitations of Threads in Online Courses and Tweet100.io
18:59 - Personality is Important in YouTube Chat
24:07 - Playing with Digitized Versions of Ourselves
24:18 - Guest Availability Uncertain
26:08 - Be Cautious When Using AI for Writing: Accuracy Not Guaranteed
27:08 - Twitter Sentiment Analysis: A Cool Tool to Use
29:02 - Zai Zai's ChatGPT Integration Has Overwhelming Implications
30:40 - Follow Kasim Aslam on Twitter for New and Interesting AI Updates
Tools Mentioned:
Life 3.0 by Max Tegmark (book)
TLDR AI
AICyclopedia.com
TheRundown.ai
synthesia.io
Tweethunter.io - use to magnify your Twitter threads
Docalysis.com
Sol8.com/NDAs
ChatYoutube.com
Make It Stick (book)
Adcrafter.ai - Google ads
Opus.pro/clip
Speakai.co
Zapier.com
LINKS AND RESOURCES:
Sam Altman Lex Fridman interview
Tiereleven.com
Solutions 8

Lex Fridman Podcast
#371 – Max Tegmark: The Case for Halting AI Development

Lex Fridman Podcast

Play Episode Listen Later Apr 13, 2023 173:58


Max Tegmark is a physicist and AI researcher at MIT, co-founder of the Future of Life Institute, and author of Life 3.0: Being Human in the Age of Artificial Intelligence. Please support this podcast by checking out our sponsors: - Notion: https://notion.com - InsideTracker: https://insidetracker.com/lex to get 20% off - Indeed: https://indeed.com/lex to get $75 credit EPISODE LINKS: Max's Twitter: https://twitter.com/tegmark Max's Website: https://space.mit.edu/home/tegmark Pause Giant AI Experiments (open letter): https://futureoflife.org/open-letter/pause-giant-ai-experiments Future of Life Institute: https://futureoflife.org Books and resources mentioned: 1. Life 3.0 (book): https://amzn.to/3UB9rXB 2. Meditations on Moloch (essay): https://slatestarcodex.com/2014/07/30/meditations-on-moloch 3. Nuclear winter paper: https://nature.com/articles/s43016-022-00573-0 PODCAST INFO: Podcast website: https://lexfridman.com/podcast Apple Podcasts: https://apple.co/2lwqZIr Spotify: https://spoti.fi/2nEwCF8 RSS: https://lexfridman.com/feed/podcast/ YouTube Full Episodes: https://youtube.com/lexfridman YouTube Clips: https://youtube.com/lexclips SUPPORT & CONNECT: - Check out the sponsors above, it's the best way to support this podcast - Support on Patreon: https://www.patreon.com/lexfridman - Twitter: https://twitter.com/lexfridman - Instagram: https://www.instagram.com/lexfridman - LinkedIn: https://www.linkedin.com/in/lexfridman - Facebook: https://www.facebook.com/lexfridman - Medium: https://medium.com/@lexfridman OUTLINE: Here's the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time. (00:00) - Introduction (07:34) - Intelligent alien civilizations (19:58) - Life 3.0 and superintelligent AI (31:25) - Open letter to pause Giant AI Experiments (56:32) - Maintaining control (1:25:22) - Regulation (1:36:12) - Job automation (1:45:27) - Elon Musk (2:07:09) - Open source (2:13:39) - How AI may kill all humans (2:24:10) - Consciousness (2:33:32) - Nuclear winter (2:44:00) - Questions for AGI

Making Sense with Sam Harris
Making Sense of Artificial Intelligence | Episode 1 of The Essential Sam Harris

Making Sense with Sam Harris

Play Episode Listen Later Nov 22, 2022 67:53


Filmmaker Jay Shapiro has produced a new series of audio documentaries, exploring the major topics that Sam has focused on over the course of his career. Each episode weaves together original analysis, critical perspective, and novel thought experiments with some of the most compelling exchanges from the Making Sense archive. Whether you are new to a particular topic, or think you have your mind made up about it, we think you'll find this series fascinating. In this episode, we explore the landscape of Artificial Intelligence. We'll listen in on Sam's conversation with decision theorist and artificial-intelligence researcher Eliezer Yudkowsky, as we consider the potential dangers of AI – including the control problem and the value-alignment problem – as well as the concepts of Artificial General Intelligence, Narrow Artificial Intelligence, and Artificial Super Intelligence. We'll then be introduced to philosopher Nick Bostrom's “Genies, Sovereigns, Oracles, and Tools,” as physicist Max Tegmark outlines just how careful we need to be as we travel down the AI path. Computer scientist Stuart Russell will then dig deeper into the value-alignment problem and explain its importance. We'll hear from former Google CEO Eric Schmidt about the geopolitical realities of AI terrorism and weaponization. We'll then touch on the topic of consciousness as Sam and psychologist Paul Bloom turn the conversation to the ethical and psychological complexities of living alongside humanlike AI. Psychologist Alison Gopnik then reframes the general concept of intelligence to help us wonder if the kinds of systems we're building using “Deep Learning” are really marching us towards our super-intelligent overlords. Finally, physicist David Deutsch will argue that many value-alignment fears about AI are based on a fundamental misunderstanding about how knowledge actually grows in this universe.