Swedish-American cosmologist
What if the universe could collapse at any moment? What if our reality were just a simulation, and someone pulls the plug? Or what if a theory proved that free will is an illusion? Some scientific concepts are not only fascinating but potentially unsettling and dangerous. This time, Tim and Max talk about theories that, if they turn out to be true, could shake our understanding of the world. An episode full of thought experiments, fascinating physics, and existential questions. Enjoy listening, provided our reality still exists by then!
It feels like 2010 again - the bloggers are debating the proofs for the existence of God. I found these much less interesting after learning about Max Tegmark's mathematical universe hypothesis, and this doesn't seem to have reached the Substack debate yet, so I'll put it out there. Tegmark's hypothesis says: all possible mathematical objects exist. Consider a mathematical object like a cellular automaton - a set of simple rules that creates complex behavior. The most famous is Conway's Game of Life; the second most famous is the universe. After all, the universe is a starting condition (the Big Bang) and a set of simple rules determining how the starting condition evolves over time (the laws of physics). Some mathematical objects contain conscious observers. Conway's Life might be like this: it's Turing complete, so if a computer can be conscious then you can get consciousness in Life. If you built a supercomputer and had it run the version of Life with the conscious being, then you would be “simulating” the being, and bringing it into existence. There would be something it was like to be that being; it would have thoughts and experiences and so on. https://www.astralcodexten.com/p/tegmarks-mathematical-universe-defeats
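The cellular-automaton analogy is easy to make concrete. Here is a minimal sketch of Conway's Game of Life (the set-of-live-cells representation and the glider pattern are illustrative choices, not from the post): a handful of local rules, yet the global behavior is rich enough to be Turing complete.

```python
from collections import Counter

# Minimal Conway's Game of Life: simple local rules, complex global behavior.
def step(live):
    """Advance a set of live (row, col) cells by one generation."""
    # Count live neighbors of every cell adjacent to a live cell.
    counts = Counter(
        (r + dr, c + dc)
        for (r, c) in live
        for dr in (-1, 0, 1)
        for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    )
    # A cell is alive next generation if it has exactly 3 live neighbors,
    # or has 2 live neighbors and was already alive.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "glider": five cells that translate diagonally forever.
glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)
# After 4 steps the glider reappears shifted one cell down and right.
shifted = {(r + 1, c + 1) for (r, c) in glider}
print(state == shifted)  # True
```

The "starting condition plus simple update rule" structure is exactly the analogy the post draws with the Big Bang and the laws of physics.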
[Original thread here: Tegmark's Mathematical Universe Defeats Most Arguments For God's Existence.] 1: Comments On Specific Technical Points 2: Comments From Bentham's Bulldog's Response 3: Comments On Philosophical Points, And Getting In Fights https://www.astralcodexten.com/p/highlights-from-the-comments-on-tegmarks
Max Tegmark's Future of Life Institute has called for a pause on the development of advanced AI systems. Tegmark is concerned that the world is moving toward artificial intelligence that can't be controlled, one that could pose an existential threat to humanity. Yoshua Bengio, often dubbed one of the "godfathers of AI," shares similar concerns. In this special Davos edition of CNBC's Beyond the Valley, Tegmark and Bengio join senior technology correspondent Arjun Kharpal to discuss AI safety and worst-case scenarios, such as AI that could try to keep itself alive at the expense of others.
What if everything you know - the universe, the Earth, even yourself - were just a simulation created by a superior intelligence? In this episode, we dive into the fascinating Simulation Hypothesis, exploring everything from its philosophical roots, such as Plato's Allegory of the Cave and Baudrillard's simulacra, to its scientific underpinnings, such as quantum physics and Tegmark's mathematical universe. Could we be living in a Matrix? And if we are, what would that mean for our existence?
Max Tegmark, a renowned MIT physicist, dives into the future of AI with Patrick Bet-David, discussing the profound impacts of AI and robotics on society, the military, and global regulation. Tegmark warns of the risks and potential of AI, emphasizing the need for global safety standards.
Welcome to a new episode, a new INSIDE X, where we talk about...
On today's episode of Virtual Sentiments, host Kristen Collins interviews John Kaufhold on the history of image recognition and deep learning. With over 30 years of experience in the artificial intelligence and machine learning world, John shares his history, starting from his early days in speech recognition in the 90s. He covers the ImageNet "Big Bang" of 2012, the dramatic improvement in image recognition error rates and hardware power, neural networks, the development of chatbots, and key terminology, and discusses challenges such as data privacy, bias reproduction, existential risk, transparency in data sets, and more! Dr. John Kaufhold is an expert with over 30 years of experience in artificial intelligence and deep learning. He is the founder of Deep Learning Analytics, a machine learning company, and serves on the Advisory Board of the DC Data Community. References and related works for this episode: "Munk Debate on Artificial Intelligence | Bengio & Tegmark vs. Mitchell & LeCun" and Data Science DC's "How Attention in 2017 Got Us Chat GPT." Read more work from Kristen Collins. If you like the show, please subscribe, leave a 5-star review, and tell others about it! We're available on Apple Podcasts, Spotify, Amazon Music, and wherever you get your podcasts. Follow the Hayek Program on Twitter: @HayekProgram. Learn more about Academic & Student Programs. Follow the Mercatus Center on Twitter: @mercatus
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why does generalization work?, published by Martín Soto on February 21, 2024 on LessWrong. Just an interesting philosophical argument. I. Physics. Why can an ML model learn from part of a distribution or data set, and generalize to the rest of it? Why can I learn some useful heuristics or principles in a particular context, and later apply them in other areas of my life? The answer is obvious: because there are underlying regularities between the parts I train on and the ones I test on. In the ML example, generalization won't work when approximating a function that is a completely random jumble of points. Quantitatively, too, the more regular the function is, the better generalization will work. For example, polynomials of lower degree require fewer data points to pin down. The same goes for periodic functions. And a function with a lower Lipschitz constant allows for tighter bounds on its values at unobserved points. So it must be that the variables we track (the ones we try to predict or control, whether with data science or with our actions) are given by disproportionately regular functions (relative to random ones). In this paper by Tegmark, the authors argue precisely that most macroscopic variables of interest have Hamiltonians of low polynomial degree, and that this happens because of underlying principles of low-level physics, like locality, symmetry, or the hierarchical composition of physical processes. But then, why is low-level physics like that? II. Anthropics. If our low-level physics weren't conducive to creating macroscopic patterns and regularities, then complex systems capable of asking that question (like ourselves) wouldn't exist. Indeed, we ourselves are nothing more than a specific kind of macroscopic pattern.
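The claim that lower-degree polynomials require fewer data points to pin down can be made concrete: d + 1 samples determine a degree-d polynomial uniquely, so a fit on a handful of points extends exactly to unseen inputs. A toy illustration (the target function and sample points are arbitrary choices, not from the essay):

```python
# A degree-d polynomial is pinned down by d + 1 points: fitting on a few
# samples then "generalizes" exactly to every unseen input.
def lagrange_eval(points, x):
    """Evaluate the unique interpolating polynomial through `points` at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(points):
        term = yi
        for j, (xj, _) in enumerate(points):
            if i != j:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

f = lambda x: 2 * x**2 - 3 * x + 1      # "regular" target: degree 2
train = [(x, f(x)) for x in (0, 1, 2)]  # 3 = d + 1 training points
test_xs = [5, -3, 10]                   # unseen inputs, far outside training
print(all(abs(lagrange_eval(train, x) - f(x)) < 1e-9 for x in test_xs))  # True
```

A truly random jumble of points admits no such compression: every new input would need its own observation, which is the essay's point about why regularity is what makes generalization possible at all.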
So anthropics explains why we should expect such patterns to exist, similarly to how it explains why the gravitational constant, or the ratio between the speeds of sound and light, are the right ones to allow for complex life. III. Dust. But there's yet one more step. Let's try to imagine a universe that is not conducive to such macroscopic patterns. Say you show me its generating code (its laws of physics) and run it. To me, it looks like a completely random mess. I am not able to pick out any structural regularities that could be akin to the ideal gas law, or the construction of molecules or cells. By contrast, if you showed me the running code of this reality, I'd be able (certainly after much effort) to pick out those conserved quantities and recurring structures. What exactly are these macroscopic variables I'm able to track, like "pressure in a room" or "chemical energy in a cell"? Intuitively, they are a way to classify all possible physical arrangements into more coarse-grained buckets. In the language of statistical physics, we'd say they are a way to classify all possible microstates into a macrostate partition. For example, every possible numerical value of pressure is a different macrostate (a different bucket), which could be instantiated by many different microstates (exact positions of particles). But there's a circularity problem. When we say a certain macroscopic variable (like pressure) is easily derived from others (like temperature), or that it is a useful way to track another variable we care about (like "whether a human can survive in this room"), we're being circular. Given that I already have access to a certain macrostate partition (temperature), or that I already care about tracking a certain macrostate partition (aliveness of a human), I can say it is natural or privileged to track another partition (pressure).
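The microstate/macrostate picture described here can be sketched in a few lines: a macro-variable is just a function on microstates, and its level sets are the coarse-grained buckets. A toy model (the four-site "gas" and the occupancy-count "pressure" proxy are illustrative assumptions, not from the essay):

```python
from itertools import product

# Toy gas: 4 sites, each empty (0) or occupied (1). A microstate is a tuple.
microstates = list(product([0, 1], repeat=4))   # 2**4 = 16 microstates

def macro(m):
    """Toy 'pressure' proxy: the number of occupied sites."""
    return sum(m)

# A macrostate partition: bucket microstates by their macro value.
partition = {}
for m in microstates:
    partition.setdefault(macro(m), []).append(m)

# Many microstates per macrostate: 6 ways to realize "2 sites occupied".
print(len(partition[2]))   # 6
print(len(microstates))    # 16
```

The circularity worry in the passage is that nothing in the microstates themselves singles out `macro` over any other of the vast number of possible partitions; its privileged status comes from the other macro-variables we already track.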
But I cannot motivate the importance of pressure as a macroscopic variable from just looking at the microstates. Thus, "which parts of physics I consider interesting macroscopic varia...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why does generalization work?, published by Martín Soto on February 20, 2024 on The AI Alignment Forum. (The essay text is identical to the LessWrong version above.)
Why do Western discussions of AI end up in the religious? Göran Sommardal turns to East Asia for a more sober picture of what we are creating. Listen to all episodes in Sveriges Radio Play.

ESSAY: This is a text in which the writer reflects on a topic or a work. The opinions expressed are the writer's own.

Every time the waves of thought begin to break against the shores of the Swedish culture pages, and the storm said to have whipped up the surge of words is the imagined threat, or the ambiguous promise, of AI (artificial intelligence), I myself am transformed, as if by an old-fashioned magic spell, into a thick-skinned intelligence aristocrat. Is it the new Apocalypse being heralded? One of God's angels sounding his supposedly intelligent trumpet behind the scenes? Or just a medially modernized procession of flagellants passing by? Self-appointed "futurologists" appear and warn of humanity's imminent annihilation, and in the name of all manner of interest groups people imagine the inevitable human march up to a gallows hill administered by intelligent robots. One artist has already exhibited her presumably intelligent couplings with an AI sex doll, and a sizeable collection of tech giants have rallied behind a call for a moratorium on any further tinkering with whatever it was that would embody our final Nemesis. For at least six months, anyway. Not even the AI researcher Max Tegmark managed more than to scratch the clichés out of the box and into the fire in his summer radio talk. Tegmark's own fundamentally pessimistic but never resigned premise seems to be that AI really is smarter than we are, and would quickly become more so still: it could calculate faster and more exactly, could memorize infinitely more, and could organize its memory data according to smarter algorithms. And it can not only learn everything we place at its disposal, but could also learn what it "itself wanted" to know, and thereby create its own superior intelligence.

I understood Tegmark's idea to mean that AI could, in some sense, ultimately make itself independent of us. But doesn't such a thought in turn suffer from what philosophers would call a contradictio in subjecto, contradictory premises? Namely: that we, with our inferior intelligence (one of the premises of the argument), could meaningfully reflect on a presupposed superior intelligence. Would that not be logically the same as imagining that an inferior dog could conceive of exactly the superiority by which humans master dogs, without being able to do anything about it? Tegmark somewhere draws precisely that comparison: the AI might conceivably keep and preserve humanity, if for nothing else, then precisely as a pleasant pet, even though we were good for nothing particularly indispensable. Behind such a figure of thought I sense the age-old categorical theogony. And Tegmark is, not for nothing, both a physicist and a cosmologist: he is struck by that inevitable glimpse of a higher being, be it evil or good, which unavoidably appears on the horizon when we meek human children do not know what to do with ourselves. Somewhere around here I begin to suspect that it is our way of thinking and talking about artificial intelligence that whips up our anxiety. But suppose we draw the line not between ourselves and the machines, but between ourselves and nature. Suppose we think of the great divide as the Chinese, Japanese, and Koreans do: between the human-made faculty of wisdom, 人工智能, which is what AI, artificial intelligence, literally and etymologically means, and the great spontaneous, 大自然, the term that denotes nature. Instead of stressing the "artificial", the "man-made", and the frighteningly snobbish "intelligence", Chinese, Japanese, and Korean simply rattle off a more becoming, pragmatic side of the future. Without resorting to the apocalyptic magic spell, we then end up among the machines ourselves, and the machines among us.

And neither the intelligent machines nor we will, after all, overcome the laws of nature. And artificial intelligence can never become more inhuman than we ourselves are capable of being. Surely it also has to do with the fact that historiographically obsessed East Asia imagines that the development of AI does not take place in the history of religion (which is where most science fiction from the West sooner or later ends up), but instead unfolds as a numerically silent progression through the history of mathematics. The Chinese mathematician Wu Wenjun, known for creating Wu's method and the Wu formula, with applications in algebraic topology, has described the development of AI as an ongoing mechanization of mathematics: something that previously existed in theory, developed by a succession of mathematicians from Leibniz, Hilbert, and Gödel to Tarski and Quine and onward, but only with the hyper-fast processing of statistical material that quantum computation brings has it become possible to apply that mathematical knowledge to its full potential. This is how grandly, and how humanly, Wu summed up his mathematical view of AI, the human-made faculty of wisdom: "The programs for artificial intelligence are written by me, one by one, and every instruction is an operation that must be carried out mechanically; it has no intelligence at all. So-called artificial intelligence means the mechanical execution of human thought processes, not a computer with intelligence." But ugh! How off-puttingly dull such a reality appears, in which the hard-to-grasp machine intelligence is imagined to enable us to master the world.

And how much more imaginatively spine-chilling is the nightmare of a new ruling class of gleaming steel creatures holding the universe in their superhuman claws and reins, threatening to leave us puny human lads and lasses both powerless and jobless. That such a being, in our UFO-ized universe of thought, would take the form of a superior intelligence is hardly mind-blowing. I still remember the aside from a 1970s issue of Marvel Comics' The Fantastic Four, where one of humanity's many unimaginative destroyers turns up: "From beyond the Stars shall come the Over-mind, and he shall crush the Universe." How close at hand, then, lies the absent-minded notion of an AI finally seizing power and revealing itself by descending from a gigantic shimmering flying saucer in Times Square, preferably in parallel apparitions simultaneously on Red Square and the square at the Gate of Heavenly Peace, to confirm its dominion over us "underlings": the puny humans.

Göran Sommardal, poet, critic, and translator from Chinese.

Literature:
中国人工智能简史 | A Brief History of Chinese AI. 人民邮电出版社, 2023.
Wu Wen-tsun: Mathematics Mechanization: Mechanical Geometry Theorem-Proving, Mechanical Geometry Problem-Solving and Polynomial Equations-Solving (Mathematics and Its Applications), December 31, 1999, Springer.
Wu Wen-tsün: Mechanical Theorem Proving in Geometries: Basic Principles, Wien: Springer-Verlag, 1994.
Physical Hypotheses B: Max Tegmark's hypothesis. According to Max Tegmark, the existence of other universes is a direct implication of cosmological observations. Tegmark describes the set of related concepts that share the notion that universes exist beyond the familiar observable one, and goes on to provide a "classification" of parallel universes organized into levels. To clarify the terminology, a number of researchers, such as George Ellis, U. Kirchner, and W. R. Stoeger, recommend using distinct terms: one for the theoretical model of the totality of the causally connected spacetime in which we live; "universe domain" for the observable universe or a similar part of that same spacetime; "multiverse" for a set of disconnected spacetimes; and "universe" for a spacetime in general, whether our own "Universe" or another disconnected from ours. ... Let us examine the above ...
Federal government reaches deal with Google on Online News Act
Google DeepMind researchers use AI tool to find 2mn new materials
Some Pixel 8 Pro displays have bumps under the glass
Reflecting on 18 years at Google
Google's new geothermal energy project is up and running
AWS Unveils Next Generation AWS-Designed Chips
Amazon Introduces Q, an A.I. Chatbot for Companies
Does Black Friday and Cyber Monday Matter?
Adobe's $20 Billion Purchase of Figma Would Harm Innovation, U.K. Regulator Provisionally Finds
Elon goes full Pizzagate. How does this end?
A new low in manels
Sports Illustrated Published Articles by Fake, AI-Generated Writers
OpenAI Made an AI Breakthrough Before Altman Firing, Stoking Excitement and Concern
Hugging Face CEO on What Comes After Transformers
Hinton vs LeCun vs Ng vs Tegmark vs O
Anthony Levandowski Reboots Church of Artificial Intelligence
Unauthorized "David Attenborough" AI clone narrates developer's life, goes viral
Google Slides getting built-in presentation recording tool
Google Will Start Deleting Old Accounts This Week. Here's How to Save Your Google Account
Google's .meme domain is here to serve your wackiest websites
Some Google Drive for Desktop users are missing months of files
Picks of the week:
(Paris) The 2000 cinematic masterpiece Chicken Run
(Paris) Pentiment
(Jeff) Jezebel to Be Resurrected by Paste Magazine
(Jeff) After 151 years, Popular Science will no longer offer a magazine
(Ant) City Nerd on YouTube
(Jason) No Ads in Albania, Ethiopia and Myanmar
Hosts: Jason Howell, Jeff Jarvis, Paris Martineau, and Ant Pruitt
Download or subscribe to this show at https://twit.tv/shows/this-week-in-google. Get episodes ad-free with Club TWiT at https://twit.tv/clubtwit
Sponsors: GO.ACILEARNING.COM/TWIT mylio.com/TWIT25 hid.link/twigdemo
Federal government reaches deal with Google on Online News Act Google DeepMind researchers use AI tool to find 2mn new materials Some Pixel 8 Pro displays have bumps under the glass Reflecting on 18 years at Google Google's new geothermal energy project is up and running AWS Unveils Next Generation AWS-Designed Chips Amazon Introduces Q, an A.I. Chatbot for Companies Does Black Friday and Cyber Monday Matter? Adobe's $20 Billion Purchase of Figma Would Harm Innovation, U.K. Regulator Provisionally Finds Elon goes full Pizzagate. How does this end? A new low in manels Sports Illustrated Published Articles by Fake, AI-Generated Writers OpenAI Made an AI Breakthrough Before Altman Firing, Stoking Excitement and Concern Hugging Face CEO on What Comes After Transformers Hinton vs LeCun vs Ng vs Tegmark vs O Anthony Levandowski Reboots Church of Artificial Intelligence Unauthorized "David Attenborough" AI clone narrates developer's life, goes viral Google Slides getting built-in presentation recording tool Google Will Start Deleting Old Accounts This Week. Here's How to Save Your Google Account Google's .meme domain is here to serve your wackiest websites Some Google Drive for Desktop users are missing months of files Picks of the week (Paris) The 2000 cinematic masterpiece Chicken Run (Paris) Pentiment (Jeff) Jezebel to Be Resurrected by Paste Magazine (Jeff) After 151 years, Popular Science will no longer offer a magazine (Ant) City Nerd on YouTube (Jason) No Ads in Albania, Ethiopia and Myanmar Hosts: Jason Howell, Jeff Jarvis, Paris Martineau, and Ant Pruitt Download or subscribe to this show at https://twit.tv/shows/this-week-in-google. Get episodes ad-free with Club TWiT at https://twit.tv/clubtwit Sponsors: GO.ACILEARNING.COM/TWIT mylio.com/TWIT25 hid.link/twigdemo
Federal government reaches deal with Google on Online News Act Google DeepMind researchers use AI tool to find 2mn new materials Some Pixel 8 Pro displays have bumps under the glass Reflecting on 18 years at Google Google's new geothermal energy project is up and running AWS Unveils Next Generation AWS-Designed Chips Amazon Introduces Q, an A.I. Chatbot for Companies Does Black Friday and Cyber Monday Matter? Adobe's $20 Billion Purchase of Figma Would Harm Innovation, U.K. Regulator Provisionally Finds Elon goes full Pizzagate. How does this end? A new low in manels Sports Illustrated Published Articles by Fake, AI-Generated Writers OpenAI Made an AI Breakthrough Before Altman Firing, Stoking Excitement and Concern Hugging Face CEO on What Comes After Transformers Hinton vs LeCun vs Ng vs Tegmark vs O Anthony Levandowski Reboots Church of Artificial Intelligence Unauthorized "David Attenborough" AI clone narrates developer's life, goes viral Google Slides getting built-in presentation recording tool Google Will Start Deleting Old Accounts This Week. Here's How to Save Your Google Account Google's .meme domain is here to serve your wackiest websites Some Google Drive for Desktop users are missing months of files Picks of the week (Paris) The 2000 cinematic masterpiece Chicken Run (Paris) Pentiment (Jeff) Jezebel to Be Resurrected by Paste Magazine (Jeff) After 151 years, Popular Science will no longer offer a magazine (Ant) City Nerd on YouTube (Jason) No Ads in Albania, Ethiopia and Myanmar Hosts: Jason Howell, Jeff Jarvis, Paris Martineau, and Ant Pruitt Download or subscribe to this show at https://twit.tv/shows/this-week-in-google. Get episodes ad-free with Club TWiT at https://twit.tv/clubtwit Sponsors: GO.ACILEARNING.COM/TWIT mylio.com/TWIT25 hid.link/twigdemo
(SHOW NOTES AND LINKS FOR THIS EPISODE HERE: https://www.jaimerodriguezdesantiago.com/kaizen/qa-metafisica-youtube-toltecas-generalismo-gaia-privilegios-masculinos-identidad-femenina-ebitda-calistenia-y-excesos-de-informacion/) A new questions-and-answers episode! In which, against all odds — and unless some message has slipped past me — I believe I have finally caught up. More or less, because there were some questions I had no idea how to answer... But I did what I could.
Physical Hypotheses. Max Tegmark's Hypothesis. According to Max Tegmark, the existence of other universes is a direct implication of cosmological observations. Tegmark describes the set of related concepts that share the notion that universes exist beyond the familiar observable one, and he goes on to provide a "classification" of parallel universes organized by levels. To clarify the terminology, a group of researchers — George Ellis, U. Kirchner, and W.R. Stoeger — recommend using distinct terms: "the Universe" for the theoretical model of the totality of causally connected spacetime in which we live; "universe domain" for the observable universe or a similar region of that same spacetime; "multiverse" for a collection of disconnected spacetimes; and "universe" for a spacetime in general, whether our own "Universe" or another disconnected from ours. ... Let us examine the above ...
Lotta Davidson-Bask and Klara Önnerfält meet in the "bunker" at Spyken to reflect on what it feels like to start over again, for nearly the twentieth time. What makes it just as fun every year? We also talk about AI! Are we team Wärnestål or team Tegmark? We tell you about the big reading project we're launching with all the first-year students, and a bit about the other fun things coming up this autumn.
Patreon: https://www.patreon.com/mlst Discord: https://discord.gg/ESrGqhf5CB The discussion between Tim Scarfe and David Foster provided an in-depth critique of the arguments made by panelists at the Munk AI Debate on whether artificial intelligence poses an existential threat to humanity. While the panelists made thought-provoking points, Scarfe and Foster found their arguments largely speculative, lacking crucial details and evidence to support claims of an impending existential threat. Scarfe and Foster strongly disagreed with Max Tegmark's position that AI has an unparalleled “blast radius” that could lead to human extinction. Tegmark failed to provide a credible mechanism for how this scenario would unfold in reality. His arguments relied more on speculation about advanced future technologies than on present capabilities and trends. As Foster argued, we cannot conclude AI poses a threat based on speculation alone. Evidence is needed to ground discussions of existential risks in science rather than science fiction fantasies or doomsday scenarios. They found Yann LeCun's statements too broad and high-level, critiquing him for not providing sufficiently strong arguments or specifics to back his position. While LeCun aptly noted AI remains narrow in scope and far from achieving human-level intelligence, his arguments lacked crucial details on current limitations and why we should not fear superintelligence emerging in the near future. As Scarfe argued, without these details the discussion descended into “philosophy” rather than focusing on evidence and data. Scarfe and Foster also took issue with Yoshua Bengio's unsubstantiated speculation that machines would necessarily develop a desire for self-preservation that threatens humanity. There is no evidence today's AI systems are developing human-like general intelligence or desires, let alone that these attributes would manifest in ways dangerous to humans. 
The question is not whether machines will eventually surpass human intelligence, but how and when this might realistically unfold based on present technological capabilities. Bengio's arguments relied more on speculation about advanced future technologies than on evidence from current systems and research. In contrast, they strongly agreed with Melanie Mitchell's view that scenarios of malevolent or misguided superintelligence are speculation, not backed by evidence from AI as it exists today. Claims of an impending “existential threat” from AI are overblown, harmful to progress, and inspire undue fear of technology rather than consideration of its benefits. Mitchell sensibly argued discussions of risks from emerging technologies must be grounded in science and data, not speculation, if we are to make balanced policy and development decisions. Overall, while the debate raised thought-provoking questions about advanced technologies that could eventually transform our world, none of the speakers made a credible evidence-based case that today's AI poses an existential threat. Scarfe and Foster argued the debate failed to discuss concrete details about current capabilities and limitations of technologies like language models, which remain narrow in scope. General human-level AI is still missing many components, including physical embodiment, emotions, and the "common sense" reasoning that underlies human thinking. Claims of existential threats require extraordinary evidence to justify policy or research restrictions, not speculation. By discussing possibilities rather than probabilities grounded in evidence, the debate failed to substantively advance our thinking on risks from AI and its plausible development in the coming decades. David's new podcast: https://podcasts.apple.com/us/podcast/the-ai-canvas/id1692538973 Generative AI book: https://www.oreilly.com/library/view/generative-deep-learning/9781098134174/
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Did Bengio and Tegmark lose a debate about AI x-risk against LeCun and Mitchell?, published by Karl von Wendt on June 25, 2023 on LessWrong. On June 22nd, there was a “Munk Debate”, facilitated by the Canadian Aurea Foundation, on the question whether “AI research and development poses an existential threat” (you can watch it here, which I highly recommend). On stage were Yoshua Bengio and Max Tegmark as proponents and Yann LeCun and Melanie Mitchell as opponents of the central thesis. This seems like an excellent opportunity to compare their arguments and the effects they had on the audience, in particular because in the Munk Debate format, the audience gets to vote on the issue before and after the debate. The vote at the beginning revealed 67% of the audience being pro the existential threat hypothesis and 33% against it. Interestingly, it was also asked if the listeners were prepared to change their minds depending on how the debate went, which 92% answered with “yes”. The moderator later called this extraordinary and a possible record for the format. While this is of course not representative for the general public, it mirrors the high uncertainty that most ordinary people feel about AI and its impacts on our future. I am of course heavily biased. I would have counted myself among the 8% of people who were unwilling to change their minds, and indeed I'm still convinced that we need to take existential risks from AI very seriously. While Bengio and Tegmark have strong arguments from years of alignment research on their side, LeCun and Mitchell have often made weak claims in public. So I was convinced that Bengio and Tegmark would easily win the debate. 
However, when I skipped to the end of the video before watching it, there was an unpleasant surprise waiting for me: at the end of the debate, the audience had seemingly shifted to a more skeptical view, with now only 61% accepting an existential threat from AI and 39% dismissing it. What went wrong? Had Max Tegmark and Yoshua Bengio really lost a debate against two people I hadn't taken very seriously before? Had the whole debate somehow been biased against them? As it turned out, things were not so clear. At the end, the voting system apparently broke down, so the audience wasn't able to vote on the spot. Instead, they were later asked for their vote by email. It is unknown how many people responded, so the difference could well be a random error. However, it does seem to me that LeCun and Mitchell, despite clearly having far weaker arguments, came across as quite convincing. A simple count of the hands of the people behind the stage, who can be seen in the video during a hand vote, results in almost a tie. The moderator's words also seem to indicate that he couldn't see a clear majority for one side in the audience, so the actual shift may have been even worse. In the following, I assume that Bengio and Tegmark were indeed not as convincing as I had hoped. It seems worthwhile to look at this in some more detail to learn from it for future discussions. I will not give a detailed description of the debate; I recommend you watch it yourself. However, I will summarize some key points and give my own opinion on why this may have gone badly from an AI safety perspective, as well as some lessons I extracted for my own outreach work. The debate was structured in a good way and very professionally moderated by Munk Debate's chair Rudyard Griffiths. If anything, he seemed to be supportive of an existential threat from AI; he definitely wasn't biased against it. 
At the beginning, each participant gave a 6-minute opening statement, then each one could reply to what the others had said in a brief rebuttal. After that, there was an open discussion for about 40 minutes, until the participants could again summarize the...
David Sirota sits down with Dr. Max Tegmark, president of the Future of Life Institute and one of the world's leading experts in artificial intelligence, to dive into the growing debate around AI. In this deep-dive conversation, Dr. Tegmark breaks down recent breakthroughs in generative AI technology like ChatGPT and explains how it could replace most forms of human labor — for better or worse. They unpack why private interests are barreling forward with commercial AI development and why the Future of Life Institute has publicly called for an immediate pause on all AI experimentation. Dr. Tegmark also explains how AI could be the most transformative technology in human history — and the (small!) probability it becomes the Terminator. Next Monday's bonus episode of Lever Time Premium, exclusively for The Lever's supporting subscribers, will include David's extended conversation with Dr. Tegmark. In this special bonus segment, Dr. Tegmark describes the safety measures that could prevent the emergence of the most dangerous forms of AI. If those precautions are put in place, he argues, AI could cure disease, stop climate change, end poverty, and just maybe set us free. Links: Pause Giant AI Experiments: An Open Letter (Future of Life Institute, 2023) EU lawmakers' challenge to rein in ChatGPT and generative AI (Reuters, 2023) If you'd like access to Lever Time Premium, which includes extended interviews and bonus content, head over to LeverNews.com to become a supporting subscriber. If you'd like to leave a tip for The Lever, click the following link. It helps us do this kind of independent journalism. levernews.com/tipjar A transcript of this episode is available here. Thanks to our sponsor Sheets & Giggles. To get 20% off your order, head over to SheetsGiggles.com/lever. Make sure to use the discount code LEVER at checkout.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Honestly I'm just here for the drama, published by electroswing on April 1, 2023 on The Effective Altruism Forum. I'm a young twenty-something who got into EA a couple of years ago. Back then I was really into the whole "learning object-level facts about EA” thing. I'd religiously listen to the 80,000 Hours podcast, read in detail about different interventions, both neartermist and longtermist, and generally just try my darndest to improve my understanding of how to do good effectively. Key to this voracious consumption was of course the EA Forum. Now? Lmao. It started with the SBF saga. Boy was the EA forum entertaining. The whole front page was full of upset people wildly speculating. So many rumors to sift through, so many spicy arguments to follow. The best parts of any post could be found by scrolling to the very bottom and unfolding highly downvoted comments. So much entertainment. Like reality TV except about my ingroup specifically. Then, you know the meme. EA has had a scandal of the week TM ever since. Castle? Hilarious watching people who don't understand logistics butt heads with people who don't value optics and frugality. The weird Tegmark neonazi thing? Absolutely incredible watching the comments turn on him and then pull back. Time and Bloomberg articles and the ensuing "I can fix EA in one blog post" follow-ups? Delicious. Bostrom and the fascinating case of the use-mention distinction? Yikes bro. Spicy takes and arguments hidden in the Lightcone closure announcement? Fantastic sequel to "The Vultures Are Coming". When it was announced the Community posts would appear in a separate section of the Forum, the little drama-hungry goblin in my brain was at first disappointed. Oh no! Maybe I'll accidentally click on a post about malaria instead! 
Then I realized I can simply upweight Community posts by +100 and I'll never miss another scandal ever again. Now, I visit almost daily. I briefly skim the front-page post titles. Maybe occasionally I'll stop to learn more about some new org or read the executive summary of a high-quality research report. But honestly? Most of the time I just scroll through looking for drama, and if I don't find it, I close the tab and get on with my day. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: FLI FAQ on the rejected grant proposal controversy, published by Tegmark on January 19, 2023 on The Effective Altruism Forum. The details surrounding FLI's rejection of a grant proposal from Nya Dagbladet last November has raised controversy and important questions (including here on this forum) which we address in this FAQ. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Does EA understand how to apologize for things?, published by titotal on January 15, 2023 on The Effective Altruism Forum. In response to the drama over Bostrom's apology for an old email, the original email has been universally condemned from all sides. But I've also seen some confusion over why people dislike the apology itself. After all, nothing in the apology was technically inaccurate, right? What part of it do we disagree with? Well, I object to it because it was an apology. And when you grade an apology, you don't grade it on the factual accuracy of the scientific claims contained within; you grade it on how good it is at being an apology. And to be frank, this was probably one of the worst apologies I have ever seen in my life, although it has since been topped by Tegmark's awful non-apology for the far-right newspaper affair. Okay, let's go over the rules for an apology to be genuine and sincere. I'll take them from here. Acknowledge the offense. Explain what happened. Express remorse. Offer to make amends. Notably missing from this list is step 5: Go off on an unrelated tangent about eugenics. Imagine if I called someone's mother overweight in a vulgar manner. When they get upset, I compose a long apology email where I apologize for the language, but then note that I believe it is factually true their mother has a BMI substantially above average, as does their sister, father, and wife. Whether or not those claims are factually true doesn't actually matter, because bringing them up at all is unnecessary and further upsets the person I just hurt. 
In Bostrom's email of 9 paragraphs, he spends 2 talking about the historical context of the email, 1 talking about why he decided to release it, 1 actually apologizing, and the remaining 5 paragraphs giving an overview of his current views on race, intelligence, genetics, and eugenics. What this betrays is an extreme lack of empathy for the people he is meant to be apologizing to. Imagine if he were reading this apology out loud to the average black person, and think about how uncomfortable they would feel by the time he got to the part discussing his papers about the ethics of genetic enhancement. Bostrom's original racist email did not mention racial genetic differences or eugenics. They should not have been brought up in the apology either. As a direct result of him bringing the subject up, this forum and others throughout the internet have been filled with race-science debate, an outcome that I believe is very harmful. Discussions of racial differences are divisive, bad PR, probably result in the spread of harmful beliefs, and are completely irrelevant to top EA causes. If Bostrom didn't anticipate that this outcome would result from bringing the subject up, then he was being hopelessly naive. On the other hand, Bostrom's apology looks absolutely saintly next to FLI's/Max Tegmark's non-apology for the initial approval of grant money to a far-right newspaper (the funding offer was later rescinded). At no point does he offer any understanding at all as to why people might be concerned about approving, even temporarily, funding for a far-right newspaper that promotes holocaust denial, covid vaccine conspiracy theories, and the defense of "ethnic rights". I don't even know what to say about this statement. The FLI has managed to fail at point 1 of an apology: understanding that they did something wrong. I hope they manage to release a real apology soon, and when they do, maybe they can learn some lessons from previous failures. Thanks for listening. 
To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why I think there's a one-in-six chance of an imminent global nuclear war, published by Tegmark on October 8, 2022 on The Effective Altruism Forum. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
It's telethon month! If you enjoy Real Science Radio, The Dominic Enyart Show, Theology Thursday, and Bob Enyart Live, consider assisting financially to keep us around! Help us reach our $30,000 goal by purchasing any KGOV product, especially those listed here. Note that all recurring monthly support is multiplied by ten towards our telethon goal. Thank you for your support! Check out the show summary below! Sponsor a Show! Click here to help keep us broadcasting! If you would like to help support our KGOV.com shows, consider sponsoring a broadcast! Monthly sponsorships are the very best way to help us stay on air. Want a Shout Out? Most of our sponsors prefer to remain anonymous. But if you'd like a shout-out from one of the guys, please let us know in the "comments" section of your order. Or feel free to email us, service@kgov.com. Note: We will have to get in touch via phone before fully processing your subscription. Please expect a call from our friendly KGOV staff after signing up. :) The Plot: 2nd Edition A year after his passing, we have made available Bob's 2nd edition of "The Plot!" Currently, the only way to get your hands on a copy is by signing up for a monthly sponsorship (of any level- full, half, or 1/3). We want to thank our sponsors with this special offer. To get the book, sign up for a monthly sponsorship and let us know in the "comments" section of your order. If you've already signed up to sponsor a show and would like a copy, please email service@kgov.com. Why it Matters Bob Enyart Live (BEL) The ministries of so many Godly leaders, authors & preachers have been magnified tenfold, or even a hundredfold after their passing. Think of C.S. Lewis, and how he still, today has such an impact on millions. We have no doubt Bob Enyart could have a similar impact, and your sponsorship of just one show a month will be a massive force to magnify this ministry and the Gospel. Bob Enyart Live Broadcast Classics still air every Monday, at 3 pm on KGOV.com! 
See more by clicking here. * Comes Now Atheist Dan Barker: the media has been quoting atheist Dan Barker regarding the atheist plaque set up in the capitol in Seattle near the nativity scene. When Dan Barker was a teenager, he was involved with the ministry of Kathryn Kuhlman, one of a group of so-called faith healers. (See a BEL listener, TOL's Crow, who initially compared Bob to Benny Hinn until...) After interviewing atheists including: - ABC's Reginald Finley, called The Infidel Guy, from ABC's Wife Swap program; - TheologyOnline's psychologist Zakath; - TOL's member who calls himself Fool; - John Henderson, who wrote the book God.com; and, - Michael Shermer, an editor with Scientific American and the Skeptic Society, who in this famous 73-second excerpt on BEL denied that the sun is a light, illustrating that it's tough debating atheists when they're hesitant to admit to even the most obvious common ground; now comes Dan Barker, a director with the Freedom from Religion Foundation. * Truth, the Senses, the Universe, Sun and Moon: Acknowledging the difficulty of proving a negative, Bob Enyart stipulates at the outset that atheist Dan Barker would not have to worry about whether God was out on a star in a galaxy far, far away; rather, Bob and Dan could discuss the evidence before us all, right here and right now. Bob then asks Dan Barker whether or not objective Truth exists. The atheists who will acknowledge that objective truth exists often give so many qualifiers that it can be hard to know if they believe in objective reality. Dan stressed, with Bob's concurrence, that "the word truth is not a thing;" that is, truth is not a physical object. After Bob and Dan seemed to agree that Truth and objective reality exist, later in the discussion Dan seemed to backtrack and suggest that everything could be an illusion, including Barker's very own existence. Yikes! With this Bob blew up in frustration. Not really. 
Actually, Bob simply reminded Dan of Descartes' "I think, therefore I am" and when Dan said that perhaps everything was an illusion, Bob replied, "Dan, you can assert that you think that I am an illusion, perhaps you think I'm in a dream you are having; but no, you cannot assert that you are an illusion; that YOU do not exist." And Bob indicated Dan's toying with the possibility that perhaps he didn't exist was further evidence that he was equivocating earlier regarding the existence of truth. In the end, Bob was thankful that he and Dan could once again agree that Dan existed, so that the show could proceed. Dan then acknowledged that knowledge can come from sources other than one's own five senses. Bob pointed out that some atheists assert that "only your five senses provide real knowledge," about which he typically asks: "says which of the five?" It wasn't until he was off air that Bob noticed that at 13 minutes into the program Dan seemed to backslide by saying that we should "assume" that "there's an objective reality." (Bob also would be thankful if Dan could email a clarification as to whether he agrees that "reason and logic" are a source, quite apart from the five senses, of knowledge.) Bob and Dan also talked about whether the universe had a beginning, and Bob quoted Hawking: "This argument about whether or not the universe had a beginning, persisted into the 19th and 20th centuries. It was conducted mainly on the basis of theology and philosophy... But if your theory disagrees with the Second Law of Thermodynamics, it is in bad trouble. In fact, the theory that the universe has existed forever is in serious difficulty with the Second Law of Thermodynamics. The Second Law states that disorder always increases with time. ...it indicates that there must have been a beginning. Otherwise, the universe would be in a state of complete disorder by now, and everything would be at the same temperature." 
-Stephen Hawking To which Dan stated that Hawking has had to correct himself in the past, to which Bob Enyart replied, "Yes, but not about this." Dan tried to explain the continued existence of the universe by claiming that there could be many universes. (Please note, as Bob demonstrated in his 10-round moderated debate with TOL's Zakath, atheists often rationalize "complexity by... introducing even more complexity," completely apart from any empirical evidence, wildly increasing complexity in order to explain it; if an atheist has a problem in that he cannot explain by the laws of science the continued existence of the universe, he merely posits infinite parallel or successive universes. The cover of Discover magazine July 2008 states "Parallel Universes, Infinite Yous" and they ask a physicist, "Can you explain parallel universes?" and Max Tegmark replies, "Three [parallel universes] have been proposed by other people, and I've added a fourth... go far enough out and you will find another Earth with another version of yourself." Yes, attempting to get rid of the Creator requires extraordinary creativity.) Bob argued that there is no empirical evidence for parallel universes, and that if a succession of universes gave rise to one another, that entire process would be a perpetual motion machine that would have run out of useable energy long ago. Bob argued that people should not believe that the Big Bang theory can actually explain the existence of the universe because: - since big bangers don't know "which objects came first, stars or galaxies? Theoretical science offers no clear guidance..." according to 23-year Nature magazine editor John Maddox, p. 
48, What Remains to be Discovered; - in the universe's supposed 15 billion years there's insufficient time for its temperature to even out to 2.7 K; - the natural formation of a solar system precludes gas giants like Jupiter and Saturn; - 99% of the solar system mass is in the Sun, yet the planets have 99% of the angular momentum (spin); and, - after $20 billion spent on the U.S. Apollo and lunar program to determine how the moon got there, the only theories simply can't account for the evidence. Dan countered that science always increases in knowledge. And Bob agreed, and then asked if scientific evidence might preclude certain possibilities (like millions of flies spontaneously generating daily out of carcasses, or the solar system forming from a condensing gas cloud). Bob brought up that the supposed Big Bang would have been an event that does not even comply with the laws of physics and which atheists accept on faith. (One tiny example is the so-called Inflationary Period which supposedly began shortly after the explosion and saw a wild and virtually instantaneous acceleration of the speed of the expansion of the universe with a subsequent virtually instantaneous deceleration, none of this having any correlation to the physical laws and hardly qualifying even as a scientific proposition.) Bob concluded that people should not be tricked into thinking that atheistic cosmology has proven how the universe could exist apart from a Creator, when they can't explain the temperature of the universe, the formation of stars or galaxies, gas giants like Jupiter and Saturn, the spin of the Sun vs. the planets, nor even the moon, our very closest outer-space neighbor! 
Dan closed by agreeing to Bob's suggestion that they trade materials: - Bob's DVD on the Evidence for the Resurrection of Jesus Christ, and - Dan's book, Godless: How an Evangelical Preacher Became One of America's Leading Atheists, which has a chapter arguing against the resurrection, and that they schedule a debate on Christ's resurrection. A BEL staffer put Bob's DVD in the mail right after today's program. Stay tuned...
Nils Köbis is a research scientist at the Max Planck Institute for Human Development, where he studies the intersection of AI and corruption. In this conversation, we talk about how Nils got into working on this topic, and some of his recent papers on AI, corruption, deepfakes, and AI poetry.

BJKS Podcast is a podcast about neuroscience, psychology, and anything vaguely related, hosted by Benjamin James Kuper-Smith. In 2022, episodes will appear irregularly, roughly twice per month.

Timestamps
0:00:04: Moral Games
0:13:09: How Nils started working at the intersection of AI and corruption
0:30:12: Start discussing 'Bad machines corrupt good morals'
1:01:00: Start discussing Nils's papers on whether people can detect AI-generated poems and videos
1:25:59: Learning to say no and to not get sidetracked
1:31:05: Writing a PhD thesis

Podcast links
Website: https://geni.us/bjks-pod
Twitter: https://geni.us/bjks-pod-twt

Nils's links
Website: https://geni.us/koebis-web
Google Scholar: https://geni.us/koebis-scholar
Twitter: https://geni.us/Koebis-twt

Ben's links
Website: https://geni.us/bjks-web
Google Scholar: https://geni.us/bjks-scholar
Twitter: https://geni.us/bjks-twt

References & links
Moral Games (in German): https://geni.us/moral-games
Nils's podcast KickBack: https://www.icrnetwork.org/what-we-do/kickback-global-anticorruption-podcast/
Replika AI app: https://replika.com/
Science fiction science: https://www.mpib-berlin.mpg.de/chm/guiding-concepts/concept-2-science-fiction-science
Collingridge dilemma: https://en.wikipedia.org/wiki/Collingridge_dilemma
GPT-3: https://openai.com/api/
Photos of people who don't exist: https://thispersondoesnotexist.com/
Abdalla & Abdalla 2021. The Grey Hoodie Project: Big tobacco, big tech, .... Proc of 2021 AAAI/ACM Conf.
Crandall ... 2018. Cooperating with machines. Nat Comm.
Goffman 1959. The Presentation of Self in Everyday Life.
Harari 2016. Homo Deus: A brief history of tomorrow.
Hawking 2018. Brief answers to the big questions.
Kehlmann 2021. Mein Algorithmus und ich.
Köbis ... 2021. Bad machines corrupt good morals. Nat Hum Behav.
Köbis ... 2021. Fooled twice: People cannot detect deepfakes but think they can. iScience.
Köbis & Mossink 2021. Artificial intelligence versus Maya Angelou... . Comp in Hum Behav.
Köbis ... 2022. The promise and perils of using artificial intelligence to fight corruption. Nat Mach Intell.
Leib ... 2021. The corruptive force of AI-generated advice. arXiv.
Leib ... 2021. Collaborative dishonesty: A meta-analytic review. Psych Bull.
Mnih ... 2015. Human-level control through deep reinforcement learning. Nature.
Rahwan ... 2019. Machine behaviour. Nature.
Silver ... 2016. Mastering the game of Go with deep neural networks and tree search. Nature.
Tegmark 2017. Life 3.0: Being human in the age of artificial intelligence.
"So the idea of the multiverse, as you all know, is a pretty big idea in physics right now. Many physicists are thinking about interpreting quantum theory in terms of the multiverse or many-worlds interpretation. Max Tegmark, for example, has the idea that there's what he calls a Level IV multiverse. He thinks that mathematics is fundamental. So the fundamental reality is just mathematics, and in some sense, Gödel's incompleteness theorem says that there's endless mathematics. There's no end to mathematical exploration. And so that's Tegmark's multiverse: whatever is mathematically possible is actual."Donald D. Hoffman is a Professor of Cognitive Sciences at the University of California, Irvine. He is the author of The case against reality: Why evolution hid the truth from our eyes. His research on perception, evolution, and consciousness received the Troland Award of the US National Academy of Sciences, the Distinguished Scientific Award for Early Career Contribution of the American Psychological Association, the Rustum Roy Award of the Chopra Foundation, and is the subject of his TED Talk, titled “Do we see reality as it is?”http://www.cogsci.uci.edu/~ddhoff/The Case Against Realitywww.creativeprocess.infowww.oneplanetpodcast.org
Max Tegmark is a theoretical physicist who has made notable contributions to the field of cosmology and the physics of alternate universes. He is also known for co-founding the Center for Cosmology and Particle Physics at the Massachusetts Institute of Technology, and for his years-long project called "The Swapper." In this episode, we discuss Tegmark's work on the Swapper, and discuss the implications of its development for our understanding of the universe.
Andy and Dave discuss the latest in AI news and research, including DoD's 2023 budget for research, engineering, development, and testing at $130B, around 9.5% higher than the previous year. DARPA announces the “In the Moment” (ITM) program, which aims to create rigorous and quantifiable algorithms for evaluating situations where objective ground truth is not available. The European Parliament's Special Committee on AI in a Digital Age (AIDA) adopts its final recommendations, though the report is still in draft (including that the EU should not regulate AI as a technology, but rather focus on risk). Other EP committees debated the proposal for an “AI Act” on 21 March, and included speakers such as Tegmark, Russell, and many others. The OECD AI Policy Observatory provides an interactive visual database of national AI policies, initiatives, and strategies. In research, a brain implant allows a fully paralyzed patient to communicate solely by “thought,” using neurofeedback. Researchers from Collaborations Pharmaceuticals and King's College London discover that they could repurpose their AI drug-seeking system to instead generate 40,000 possible chemical weapons. And NukkAI holds a bridge competition and claims its NooK AI “beats eight world champions,” though others take exception to the methods. And Kevin Pollpeter, from CNA's China Studies Program, joins to discuss the role (or lack) of Chinese technology in the Ukraine-Russia conflict. https://www.cna.org/news/AI-Podcast
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is Predictably Wrong, Part 12: The Lens That Sees Its Flaws, published by Eliezer Yudkowsky. Light leaves the Sun and strikes your shoelaces and bounces off; some photons enter the pupils of your eyes and strike your retina; the energy of the photons triggers neural impulses; the neural impulses are transmitted to the visual-processing areas of the brain; and there the optical information is processed and reconstructed into a 3D model that is recognized as an untied shoelace; and so you believe that your shoelaces are untied. Here is the secret of deliberate rationality—this whole process is not magic, and you can understand it. You can understand how you see your shoelaces. You can think about which sort of thinking processes will create beliefs which mirror reality, and which thinking processes will not. Mice can see, but they can't understand seeing. You can understand seeing, and because of that, you can do things that mice cannot do. Take a moment to marvel at this, for it is indeed marvelous. Mice see, but they don't know they have visual cortexes, so they can't correct for optical illusions. A mouse lives in a mental world that includes cats, holes, cheese and mousetraps—but not mouse brains. Their camera does not take pictures of its own lens. But we, as humans, can look at a seemingly bizarre image, and realize that part of what we're seeing is the lens itself. You don't always have to believe your own eyes, but you have to realize that you have eyes—you must have distinct mental buckets for the map and the territory, for the senses and reality. Lest you think this a trivial ability, remember how rare it is in the animal kingdom. The whole idea of Science is, simply, reflective reasoning about a more reliable process for making the contents of your mind mirror the contents of the world. 
It is the sort of thing mice would never invent. Pondering this business of “performing replicable experiments to falsify theories,” we can see why it works. Science is not a separate magisterium, far away from real life and the understanding of ordinary mortals. Science is not something that only applies to the inside of laboratories. Science, itself, is an understandable process-in-the-world that correlates brains with reality. Science makes sense, when you think about it. But mice can't think about thinking, which is why they don't have Science. One should not overlook the wonder of this—or the potential power it bestows on us as individuals, not just scientific societies. Admittedly, understanding the engine of thought may be a little more complicated than understanding a steam engine—but it is not a fundamentally different task. Once upon a time, I went to EFNet's #philosophy chatroom to ask, “Do you believe a nuclear war will occur in the next 20 years? If no, why not?” One person who answered the question said he didn't expect a nuclear war for 100 years, because “All of the players involved in decisions regarding nuclear war are not interested right now.” “But why extend that out for 100 years?” I asked. “Pure hope,” was his reply. Reflecting on this whole thought process, we can see why the thought of nuclear war makes the person unhappy, and we can see how his brain therefore rejects the belief. But if you imagine a billion worlds—Everett branches, or Tegmark duplicates1—this thought process will not systematically correlate optimists to branches in which no nuclear war occurs.2 To ask which beliefs make you happy is to turn inward, not outward—it tells you something about yourself, but it is not evidence entangled with the environment. I have nothing against happiness, but it should follow from your picture of the world, rather than tampering with the mental paintbrushes. 
If you can see this—if you can see that hope is shifting your first-order thoughts by too large a ...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Beyond Astronomical Waste, published by Wei Dai on the AI Alignment Forum. Faced with the astronomical amount of unclaimed and unused resources in our universe, one's first reaction is probably wonderment and anticipation, but a second reaction may be disappointment that our universe isn't even larger or contains even more resources (such as the ability to support 3^^^3 human lifetimes or perhaps to perform an infinite amount of computation). In a previous post I suggested that the potential amount of astronomical waste in our universe seems small enough that a total utilitarian (or the total utilitarianism part of someone's moral uncertainty) might reason that since one should have made a deal to trade away power/resources/influence in this universe for power/resources/influence in universes with much larger amounts of available resources, it would be rational to behave as if this deal was actually made. But for various reasons a total utilitarian may not buy that argument, in which case another line of thought is to look for things to care about beyond the potential astronomical waste in our universe, in other words to explore possible sources of expected value that may be much greater than what can be gained by just creating worthwhile lives in this universe. One example of this is the possibility of escaping, or being deliberately uplifted from, a simulation that we're in, into a much bigger or richer base universe. Or more generally, the possibility of controlling, through our decisions, the outcomes of universes with much greater computational resources than the one we're apparently in. 
It seems likely that under an assumption such as Tegmark's Mathematical Universe Hypothesis, there are many simulations of our universe running all over the multiverse, including in universes that are much richer than ours in computational resources. If such simulations exist, it also seems likely that we can leave some of them, for example through one of these mechanisms: Exploiting a flaw in the software or hardware of the computer that is running our simulation (including "natural simulations" where a very large universe happens to contain a simulation of ours without anyone intending this). Exploiting a flaw in the psychology of agents running the simulation. Altruism (or other moral/axiological considerations) on the part of the simulators. Acausal trade. Other instrumental reasons for the simulators to let out simulated beings, such as wanting someone to talk to or play with. (Paul Christiano's recent When is unaligned AI morally valuable? contains an example of this; however, the idea there only lets us escape to another universe similar to this one.) (Being run as a simulation in another universe isn't necessarily the only way to control what happens in that universe. Another possibility is that, if universes with halting oracles exist (which is implied by Tegmark's MUH, since they exist as mathematical structures in the arithmetical hierarchy), some of their oracle queries may be questions whose answers can be controlled by our decisions, in which case we can control what happens in those universes without being simulated by them (in the sense of being run step by step in a computer). Another example is that superintelligent beings may be able to reason about what our decisions are without having to run a step-by-step simulation of us, even without access to a halting oracle.) 
The general idea here is for a superintelligence descending from us to (after determining that this is an advisable course of action) use some fraction of the resources of this universe to reason about or search (computationally) for much bigger/richer universes that are running us as simulations or can otherwise be controlled by us, and then determine what we need to do to maximize the exp...
The day after she was appointed director-general of Folkhälsomyndigheten (the Swedish Public Health Agency), the Corona Commission delivered its scathing criticism of the handling of the pandemic. Karin Tegmark Wisell responds to the criticism for the first time. Host: Mikael Sjödell. Commentary: Daniel Öhman, investigative reporter at Ekot. Producer: Maja Lagercrantz. Engineer: Joakim Persson.
What is transhumanism? Ep. 30. Guest: Mihaela (Meia) Chita-Tegmark.
From the BEL archives: * Real Science Radio has a Far-Ranging Conversation with Krauss: Co-hosts Bob Enyart and Fred Williams present Bob's interview of theoretical physicist (emphasis on the theoretical), atheist Lawrence Krauss. Fred says, "It's David vs. Goliath, but without the slingshot." As the discussion ranges from astronomy and anatomy to cosmology and physics, most folks would presume that Dr. Krauss would take apart Enyart's arguments, especially when the Bible believer got the wrong value for the electron-to-proton mass ratio. But the conversation reveals fascinating dynamics from the creation/evolution debate. (The planned 25-minute interview ran 40 minutes, so there's also a Krauss Part II, and once in each half we say, "Stop the tape, stop the tape," to comment.) * "All Evidence Overwhelmingly Supports the Big Bang": Contradicting Dr. Krauss'
Also on YouTube. Max Tegmark is a physicist, cosmologist, and artificial intelligence / machine learning researcher. He is a professor at the Massachusetts Institute of Technology and the scientific director of the Foundational Questions Institute. He has been my mentor and friend for a LONG time :-) Professor Tegmark's research is focused on precision cosmology, e.g., combining theoretical work with new measurements to place sharp constraints on cosmological models and their parameters. Early on, this challenge led him to work mainly on cosmology and quantum information. Although he's continuing his cosmology work with the HERA collaboration, the main focus of his current research is on the physics of intelligence: using physics-based techniques to better understand biological and artificial intelligence (AI). Ultimately, this could culminate in what he calls an "AI Physicist": https://www.technologyreview.com/2018/11/01/1895/an-ai-physicist-can-derive-the-natural-laws-of-imagined-universes/ A native of Stockholm, Tegmark left Sweden in 1990 after receiving his B.Sc. in Physics from the Royal Institute of Technology (and a B.A. in Economics from the Stockholm School of Economics). His first academic venture beyond Scandinavia brought him to California, where he studied physics at the University of California, Berkeley, earning his Ph.D. in 1994. Tegmark is an author on more than 200 technical papers and has been featured in dozens of science documentaries. He has received numerous awards for his research, including a Packard Fellowship (2001-06), a Cottrell Scholar Award, and an NSF CAREER grant. He is a Fellow of the American Physical Society. His work with the SDSS collaboration on galaxy clustering shared first prize in Science magazine's "Breakthrough of the Year: 2003." His book Life 3.0 was an instant New York Times bestseller and one of Mark Cuban's and Barack Obama's favorite books of 2017. 
Life 3.0 asks the question: "How will Artificial Intelligence affect crime, war, justice, jobs, society and our very sense of being human?" The rise of AI has the potential to transform our future more than any other technology, and there's nobody better qualified or situated to explore that future than Max Tegmark. How can we grow our prosperity through automation without leaving people lacking income or purpose? What career advice should we give today's kids? How can we make future AI systems more robust, so that they do what we want without crashing, malfunctioning or getting hacked? Should we fear an arms race in lethal autonomous weapons? Will machines eventually outsmart us at all tasks, replacing humans on the job market and perhaps altogether? Will AI help life flourish like never before or give us more power than we can handle? Read Life 3.0: https://amzn.to/2YTDg9L
00:00:00 Intro
00:02:09 Disagreement with Noam Chomsky and the challenge of the Imitation Game
00:04:16 Will AI exceed human intelligence?
00:07:04 Should we fear AI? Are we being too passive?
00:09:02 Should we trust AI? What we should worry about.
00:11:21 Were you born too early to make good use of AI? Could AI avert war?
00:12:45 AI may have a democratizing impact.
00:17:55 The "Improve The News" experiment
00:26:19 What do you think about exponential change? Will tech solve humanity's problems?
00:30:31 What is your ethical will?
* A Fun RSR List Show: For this Thanksgiving weekend, a special rebroadcast. In our List of the Fine-Tuned Features of the Universe, Real Science Radio host Bob Enyart quotes leading scientists and their astounding admissions of the uncanny and seemingly never-ending list of just-perfect, finely-tuned parameters of the physical features of the Earth, the solar system, and the entire cosmos. This program is brought to you by God, maker of heaven and earth and other fine products!

* The Finely Tuned Parameters of the Universe: Barrow & Tipler, in their standard treatment, The Anthropic Cosmological Principle, admit that "there exist a number of unlikely coincidences between numbers of enormous magnitude that are, superficially, completely independent; moreover, these coincidences appear essential to the existence of carbon-based observers in the Universe." Examples include the wildly unlikely combination of:
- the number of electrons equals the number of protons to within one part in 10 to the 37th, that is, 1 in 10,000,000,000,000,000,000,000,000,000,000,000,000
- this 1-to-1 electron-to-proton ratio throughout the universe yields our electrically neutral universe
- all fundamental particles of the same kind are identical (including protons, electrons, down quarks, and, in QED, even photons)
- energy exactly equals mass (times the conversion factor c²)
- the electron and the massively greater proton have exactly opposite charges of equal magnitude
- the electron-to-proton mass ratio (1 to 1,836) is perfect for forming molecules
- baryon (proton, neutron, etc.) decays must conserve the number of baryons
- the free neutron decays in minutes, whereas it is stable within the nuclei of all the non-radioactive elements (otherwise eventually only hydrogen would exist, because the strong nuclear force needs neutrons to overcome proton repulsion)
- the proton cannot decay because it is the lightest baryon (otherwise all elements would be unstable)
- the electromagnetic and gravitational forces are finely tuned for the stability of stars
- gravitational and inertial mass are exactly equivalent
- the electromagnetic force constant is perfect for holding electrons to nuclei
- the electromagnetic force is in the right ratio to the nuclear force
- the strong force, if changed by 1%, would destroy all carbon, nitrogen, oxygen, and heavier elements
- the precise speed of light, the inverse square root of the product of space's permeability and permittivity (its magnetic field resistance, 4π × 10⁻⁷ Weber/(Amp·meter), multiplied by its electric field resistance, 8.8542 × 10⁻¹² Coulomb²/(Newton·meter²)), about 186,282 miles per second, is integral for life
- etc., etc., etc. (including the shocking apparent alignment of the universe with the orbit of the Earth)

* The Most Famous Scientist Atheists Agree: The world's most famous scientist atheists in physics and biology have fully admitted half the question as to fine tuning: that the world APPEARS to have been fine-tuned. Richard Dawkins: "Biology is the study of complicated things that give the appearance of having been designed for a purpose." Stephen Hawking: "The remarkable fact is that the values of these numbers seem to have been very finely adjusted..."

* An Atheist's Index to Replies: Here's an index to (failed) attempts to rebut the fine-tuning argument for God's existence.

* Omitting the Cosmological Constant: We have omitted from this list the commonly reported fine-tuning of the cosmological constant to one part in 10 to the 120th.
This is so very precise that if the entire universe had as much additional mass as exists in a single grain of sand, it would all collapse upon itself. That is, if a big bang actually formed our universe, and if it had created even a minuscule amount more mass than it is claimed to have created, then no planets, stars, or galaxies could exist. Conversely, if the universe had less mass, by that same quantity, matter never would have coalesced to become planets, stars, and galaxies, and again, we would not exist. So, why doesn't Real Science Radio include this astoundingly fine-tuned parameter in our list? Well, as physicist John Hartnett points out, the cosmological constant is a fine-tuning problem only for the big bang theory, so it is an argument only against a big bang universe; in our actual universe, it is not a fine-tuning issue. So, the cosmological constant problem, also known as the vacuum catastrophe, does refute big bang cosmology, at least for anyone who is objective, has common sense, and is not desperately trying to ignore the evidence for the Creator. (By the way, since NASA says that the confirmed predictions of the big bang theory are what validate it, you might want to Google: big bang predictions, and you'll find our article ranked #1 out of half a million, at rsr.org/bbp, presenting the actual track record of the theory's predictions. Also, if you Google: evidence against the big bang, you'll find our article on that topic near the top of the first page of Google results!)

* The Whopping Physics Coincidence: NewScientist reports about gravity and acceleration that "a large chunk of modern physics is precariously balanced on a whopping coincidence," for, regarding gravitational and inertial mass, "these two masses are always numerically exactly the same. The consequences of this coincidence are profound..."
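The speed-of-light item in the fine-tuning list above can be checked with a few lines of arithmetic. This is a minimal sketch using the permeability and permittivity values quoted there; the meters-per-mile conversion factor is my own added assumption, not a number from the broadcast:

```python
import math

# Values quoted in the list above
mu0 = 4 * math.pi * 1e-7   # vacuum permeability, Weber / (Amp * meter)
eps0 = 8.8542e-12          # vacuum permittivity, Coulomb^2 / (Newton * meter^2)

# Speed of light: the inverse square root of their product
c = 1 / math.sqrt(mu0 * eps0)   # meters per second
c_miles = c / 1609.344          # assumed conversion: 1 mile = 1609.344 meters

print(f"{c:.4e} m/s")                      # roughly 2.998e8 m/s
print(f"{c_miles:,.0f} miles per second")  # close to the 186,282 quoted above
```

The two quoted constants really do reproduce the familiar value of c to within rounding, which is the relation the list is pointing at.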
* The Finely Tuned Parameters of the Solar System include:
- our Sun is positioned far from the Milky Way's center, in a galactic Goldilocks zone of low radiation
- our Sun's placement in an arm of the Milky Way puts it where we can observe a vast swath of the universe
- our Sun sits in the unusual Local Bubble, 300 light years of extremely diffuse gas, 1/500th the average density
- Earth's orbit around the Sun is nearly circular (eccentricity ~0.02), providing stability in a range of vital factors
- Earth's orbit has a low inclination, keeping its temperatures within a range that permits diverse ecosystems
- Earth's axial tilt is within a range that helps stabilize our planet's climate
- the Moon's mass helps stabilize the Earth's tilt on its axis, which provides for the diversity of alternating seasons
- the Moon's distance from the Earth provides tides that keep life thriving in our oceans, and thus worldwide
- the Moon's nearly circular orbit (eccentricity ~0.05) makes its influence extraordinarily reliable
- the Moon is 1/400th the size of the Sun and at 1/400th its distance, enabling educational perfect eclipses
- the Earth's distance from the Sun provides for great quantities of life- and climate-sustaining liquid water
- the Sun's extraordinarily stable energy output
- the Sun's mass and size are just right for Earth's biosystem
- the Sun's luminosity and temperature are just right to provide for Earth's extraordinary range of ecosystems
- the color of the Sun's light is tuned for maximum benefit for photosynthesis
- the Sun's low "metallicity" prevents the destruction of life on Earth -
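The 1/400 coincidence in the Moon item above is easy to check numerically. The diameters and distances below are commonly quoted mean values that I am supplying for illustration, not figures from the broadcast:

```python
import math

# Commonly quoted mean values (assumed, not from the broadcast)
sun_diameter_km = 1_392_700
sun_distance_km = 149_600_000
moon_diameter_km = 3_474.8
moon_distance_km = 384_400

def angular_diameter_deg(diameter_km, distance_km):
    """Apparent size in the sky, in degrees."""
    return math.degrees(2 * math.atan(diameter_km / (2 * distance_km)))

sun_deg = angular_diameter_deg(sun_diameter_km, sun_distance_km)
moon_deg = angular_diameter_deg(moon_diameter_km, moon_distance_km)

# Both come out near half a degree, which is why the Moon's disk can
# cover the Sun's disk almost exactly during a total eclipse.
print(f"Sun:  {sun_deg:.3f} deg")
print(f"Moon: {moon_deg:.3f} deg")
print(f"size ratio:     {sun_diameter_km / moon_diameter_km:.0f}")   # ~401
print(f"distance ratio: {sun_distance_km / moon_distance_km:.0f}")   # ~389
```

The two ratios land near 400 but not exactly on it, so "1/400th" in the list is a round-number approximation; the apparent sizes still agree to within a few percent.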
One of my favorite topics is artificial intelligence, or, more specifically, what we can learn from neuroscience about artificial intelligence. So, when I was gifted the book "Life 3.0: Being Human in the Age of Artificial Intelligence" by Max Tegmark, I enjoyed the read thoroughly. But several scenarios the book envisions as paths to human-like artificial intelligence didn't make sense to me as a neuroscientist: here was a bestselling book on artificial intelligence that largely ignored the views of neuroscience. This is why I invited Dr. Grace Lindsay, host of the podcast "Unsupervised Thinking" about computational neuroscience and artificial intelligence. Grace is a postdoc at University College London, and she is currently writing a popular book about computational neuroscience. Listen to the full conversation on Patreon! Neuroscience inspired the technology that is currently leading the field in artificial intelligence: artificial neural networks (ANNs), now better known as "deep networks," as in "deep learning." The inventors of ANNs were the first to implement the basic idea of distributing computations across a large number of small processing units: neurons. For decades this method suffered from its need for large amounts of data and a lack of appropriate hardware; as soon as these prerequisites were met, ANNs really took off. Today, some people are thinking about how progress in neuroscience can further inform the structure of ANNs to improve their performance, because they are still far behind what a brain can do. Referring to Tegmark's book, we discuss the scenarios he proposes as paths toward human-like artificial intelligence. We discuss whether modelling a human brain at different levels, from the molecules of every brain cell up to the behavior of an individual human, would work out, or would even count as intelligence. Could we upload our minds? Would human-level AI be conscious? Will the "singularity" kill us all?
We try to answer these questions from the viewpoint of neuroscience. Resources: Grace Lindsay on Twitter | Grace's upcoming book "Models of the Mind" | Grace's podcast "Unsupervised Thinking" | Grace's blog "Neurdiness" | Max Tegmark, "Life 3.0: Being Human in the Age of Artificial Intelligence" | Mentioned Black Mirror episode: "Be Right Back"
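The basic idea mentioned above, distributing a computation across many small processing units, can be shown with a toy network. This is a minimal sketch with hand-picked weights (my own illustrative choice, not anything from the episode) in which three threshold "neurons" jointly compute XOR, a function no single such unit can compute alone:

```python
def neuron(inputs, weights, bias):
    """One tiny processing unit: weighted sum, then a hard threshold."""
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if total > 0 else 0

def xor_net(x1, x2):
    h_or = neuron([x1, x2], [1, 1], -0.5)        # fires if either input is on
    h_and = neuron([x1, x2], [1, 1], -1.5)       # fires only if both are on
    return neuron([h_or, h_and], [1, -1], -0.5)  # "or but not and" = XOR

for a in (0, 1):
    for b in (0, 1):
        print(f"{a} XOR {b} = {xor_net(a, b)}")  # prints 0, 1, 1, 0
```

In a real deep network the weights are learned from data rather than set by hand, but the division of labor is the same: each unit does a trivial computation, and the interesting behavior emerges from how many of them are wired together.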
In COVID-related AI news, Purdue University has built a website that tracks global response to social distancing, by pulling live footage and images from over 30,000 cameras in 100 countries. Simon Fong, Nilanjan Dey, and Jyotismita Chaki have published Artificial Intelligence for Coronavirus Outbreak, which examines AI’s contribution to combating COVID-19. Researchers at Harvard and Boston Children’s Hospital use a “regular” Bayesian model to identify COVID-19 hotspots over 14 days before they occur. In non-COVID AI news, the acting director of the JAIC announces a shift to enabling joint warfighting operations. The DoD Inspector General releases an Audit of Governance and Protection of DoD AI Data and Technology, which reveals a variety of gaps and weaknesses in AI governance across DoD. Detroit Police Chief James Craig reveals that the police department’s experience with facial recognition technology resulted in misidentified people about 96% of the time. Over 1400 mathematicians sign and deliver a letter to the American Mathematical Society, urging researchers to stop working on predictive-policing algorithms. DARPA awards the Meritorious Public Service Medal to Professor Hava Siegelmann for her creation and research in the Lifelong Learning Machines Program. And Horace Barlow, one of the founders of modern visual neuroscience, passed away on 5 July at the age of 98. In research, Udrescu and Tegmark release AI Feynman 2.0, with unsupervised learning of equations of motion by viewing objects in raw and unlabeled video. Researchers at CSAIL, NVidia, and the University of Toronto create the Visual Causal Discovery Network, which learns to recognize underlying dependency structures for simulated fabrics, such as shirts, pants, and towels. In reports, the Montreal AI Ethics Institute publishes its State of AI Ethics. 
In the video of the week, Max Tegmark discusses the previously mentioned research on equations of motion, and also discusses progress in symbolic regression. And GanBreeder upgrades to ArtBreeder, which can create realistic-looking images from paintings, cartoons, or just about anything. Click here to visit our website and explore the links mentioned in the episode.
What does life even mean? Max Tegmark's book Life 3.0 takes up this question, especially in light of the development of artificial intelligence. Life is a process that evolved from mindless, unconscious matter into a living ecosystem, with the capacity for self-reflection, hope, and the pursuit of goals and meaning: 13.8 billion years from the Big Bang to the ever more complex process we have today. Yet the definition of life is contested, and it is a question we should ask ourselves by the time artificial intelligence can think and perhaps develops consciousness. In his book, Tegmark defines three forms of life: Life 1.0 is biological, with both software and hardware fixed; Life 2.0 is cultural, with fixed hardware (the body) but software that can learn; Life 3.0 is technological, with both hardware and software designable. Bacteria, single-celled organisms, and plants are Life 1.0. We humans are Life 2.0, since we can learn but are still bound to our bodies. Life 3.0, by contrast, will be able to have a learning consciousness that functions independently of its hardware (body). Another question is why life keeps evolving toward greater complexity. The answer lies in evolution: the more complex or intelligent a life form is, the better it can recognize the rules and conditions of its environment and thereby survive. At the same time, such a life form can influence its environment and so create new rules and conditions in turn. Humans have changed their environment so much that the rules for "success" today are completely different from those of 20,000 years ago. This raises a further question: what is intelligence in the first place? There is no single accepted definition: emotional intelligence, creativity, problem-solving ability, and so on.
Tegmark defines intelligence as the ability to accomplish complex goals. Since there are many possible goals, there are also many forms of intelligence. Intelligence is information and its processing in service of reaching those goals. Information can therefore take on a life of its own, independent of its physical substrate. A chess program processes information with the goal of winning the game; whether the program runs on a smartphone, a laptop, or a desktop PC, what chip it uses, or what material its circuit boards are made of does not matter, as long as there is enough memory and computing power to run the program. Humans, for example, can carry out only a very small fraction of mathematical calculations; computers are already far ahead of us here, yet we are far from calling them intelligent, let alone alive. But once a system can handle ever more complex tasks better than a human can, at what point do we speak of intelligence, or perhaps even life? What happens if such a system develops consciousness? What scenarios are possible? The book names several, and not all of them end well for humanity. Tegmark likewise takes up the question of consciousness: at what point is something conscious? How can we ensure that AI's goals align with human goals? What are our goals, anyway? Is there a shared goal? In the end it is subjective experience that counts; otherwise everything is merely a shuffling of molecules. Much of what we humans do happens unconsciously, driven by instinct and conditioning; our behavior is often only judged conscious and logical in hindsight. But what emerges when machines are smarter than we are and have consciousness? What happens to us as humans? The book addresses these and further points very well, and the technology is explained very well too.
If you are interested in the topic, I can only recommend the book.
Andy and Dave discuss the initial results from King's College London's COVID Symptom Tracker, which found fatigue, loss of taste and smell, and cough to be the most common symptoms. MIT's CSAIL and the clinical team at Heritage Assisted Living announce Emerald, a Wi-Fi box that uses machine learning to analyze wireless signals and record (non-invasively) a person's vital signs. AI Landing has developed a tool that monitors the distance between people and can send an alert when they get too close. And Johns Hopkins University updates its COVID tracker to provide greater levels of detail on information in the US. In non-COVID news, OpenAI releases Microscope, which contains visualizations of the layers and neurons of eight vision systems (such as AlexNet). The JAIC announces its "Responsible AI Champions" for AI Ethics Principles, and also issues a new RFI for new testing and evaluation technologies. In research, Udrescu and Tegmark publish AI Feynman, an improved algorithm that can find symbolic expressions to match data from an unknown function; they apply the method to 100 equations from Feynman's Lectures on Physics, and it discovers all of them. The report of the week comes from nearly 60 authors across 30 organizations: Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims. The review paper of the week provides an overview of the state of the art in neural rendering. The book of the week takes a look at the history of DARPA, in Transformative Technologies: Perspectives on DARPA. Stuart Kauffman gives his thoughts on complexity science and prediction as they relate to COVID-19. The ELLIS society holds its second online workshop on COVID on 15 April. Matt Reed creates Zoombot, a personalized chatbot to take your place in Zoom meetings. Ali Aliev creates Avatarify, to make yourself look like somebody else in real time for your next Zoom meeting. Click here to visit our website and explore the links mentioned in the episode.
When do artificial intelligences become people? This is one of the most profound ethical questions of all, a question raised by numerous works of science fiction over many decades. Using the 2014 film “Ex Machina” as a springboard, James explores this question with guest Meia Chita-Tegmark, co-founder of the Future of Life Institute and human-robot interaction expert. We ask how we would know if an AI deserves rights, what the way we treat robots tells us about ourselves, and why some people clean up before running their Roomba!
One in a series of talks from the 2019 Models of Consciousness conference. Diana Stanciu, University of Bucharest; Berlin-Brandenburg Academy of Sciences and Humanities (BBAW). I will argue that epistemic structural realism (ESR) can offer a feasible theoretical framework for the study of consciousness and its associated neurophysiological phenomena. While structural realism has already been employed in physics or biology (cf. Tegmark 2007, Leng 2010, Ainsworth 2010, 2011, McArthur 2011, Pincock 2011, Woodin 2011, Landry and Rickels 2012, Bain 2013, Andreas and Zenker 2014, Schurz 2014), its application to the study of consciousness is new indeed. Of its two variants, ontic structural realism (OSR) and ESR, I consider the latter more suitable when studying the neurophysiological bases of consciousness, since OSR drastically claims that there actually 'are' no 'objects' and that 'structure' is all 'there is', while ESR more moderately states that all we can 'know' is the 'structure of the relations between objects' and not the objects themselves (cf. Van Fraassen 2006). Thus, while not denying the existence of 'objects' (even if they are hard to pinpoint when discussing the neurophysiological bases of consciousness), ESR still emphasises 'relations' vs. 'objects' and the retention of structure across theory change. In other words, it emphasises the continuity across theory change through the structural or mathematical aspects of our theories (cf. Stanford 2006). Filmed at the Models of Consciousness conference, University of Oxford, September 2019.
What does it mean for something to be impossible? In the world of science, some might say that things like time travel or invincibility fall into this category. But in philosophy, bending the laws of physics is fair game. However, there are still some things that are considered philosophically impossible. An emerging class of philosophers takes these ideas to the next level by reasoning about impossible worlds: alternate universes that contain impossible situations and objects. What do these worlds look like, and are they real? On this episode, we explore the impossible and why impossible worlds may be a key part of the ultimate nature of who and what we are.

Twitter: grand_theories | Instagram: grandtheories | Facebook: grandtheories

Music (in order of appearance):
1. Benjamin Banger - "Bobby Drake" (Creative Commons 4.0); Soundcloud: @benjamin-banger; Instagram: @benjaminbanger
2. Nctrnm - "Rider" (Creative Commons 4.0)
3. Daniel Birch - "Deep in Peace" (Creative Commons 4.0 NonCommercial)
4. Nctrnm - "Secretary" (Creative Commons 4.0)
5. Glass Boy - "My Pretty Looking Clothes" (Creative Commons 3.0)
6. Chris Zabriskie - "Another Version of You" (Creative Commons 4.0); Soundcloud: @chriszabriskie
7. Pipe Choir - "Exit Exit" (Creative Commons 4.0); Soundcloud: @pipe-choir-2
8. Nctrnm - "Anthony" (Creative Commons 4.0)
9. Chris Zabriskie - "Land on The Golden Gate" (Creative Commons 4.0)
10. Music For Your Plants - "Tour Peru" (Creative Commons 2.5 NonCommercial)
11. Daniel Birch - "Set Adrift" (Creative Commons 4.0 NonCommercial)

Works Cited:
1. Ballarin, R. (2011). The perils of primitivism: Takashi Yagisawa's worlds and individuals, possible and otherwise. Analytic Philosophy, 52(4), 272-282.
2. Benovsky, J. (2006). Four-dimensionalism and modal perdurants. In Valore, P. (ed.), Topics on General and Formal Ontology. Monza, Italy: Polimetrica.
3. Berto, F. (2018). Conceivability and possibility: problems for Humeans. Synthese, 195(6), 2697-2715.
4. Berto, F. and Plebani, M. (2015). Ontology and Metaontology: A Contemporary Guide. London: Bloomsbury Academic.
5. Berto, F. and Jago, M. (2019). Impossible Worlds. Oxford: Oxford University Press.
6. Cordova, V. (2007). How It Is: The Native American Philosophy of V.F. Cordova. Tucson, AZ: University of Arizona Press.
7. Ellis, G. (2006). The multiverse proposal and the anthropic principle. Presented at the Claremont Cosmology Conference, 2006.
8. Gendler, T. and Hawthorne, J. (2002). Introduction: conceivability and possibility. In Gendler, T. and Hawthorne, J. (eds.), Conceivability and Possibility. New York: Oxford University Press, 1-70.
9. Graham, A. (2015). From four- to five-dimensionalism. Ratio, 28(1), 14-28.
10. Greene, B. (2011). The Hidden Reality: Parallel Universes and the Deep Laws of the Cosmos.
11. Kaku, M. (2008). Physics of the Impossible: A Scientific Exploration into the World of Phasers, Force Fields, Teleportation and Time Travel. New York: Doubleday Publishing.
12. Lewis, D. (1986). On the Plurality of Worlds. Oxford: Blackwell Publishing.
13. Priest, G. (2016). Thinking the impossible. Philosophical Studies, 173(10), 2649-2662.
14. Tegmark, M. (2014). Our Mathematical Universe: My Quest for the Ultimate Nature of Reality. New York: Knopf.
15. Yagisawa, T. (2010). Worlds and Individuals, Possible and Otherwise. New York: Oxford University Press.
16. Yagisawa, T. (2017). S4 to 5D. Argumenta, 2(2), 241-261.
Steven Weinberg says there is an infinite number of parallel realities coexisting with us in the same room. Tegmark groups the scientific theories of parallel universes into a hierarchy of four levels; as the level increases, the universes differ more and more from our own. This is our story…
British scientists have found the answers, demonstrating that two versions of reality can exist at the same time. This Tegmark category is based on Hugh Everett's theory of multiple universes, known by its Spanish acronym IMM (the many-worlds interpretation), one of the dominant interpretations of quantum mechanics. Let's hear the story…
We are getting ever further in transferring what has made us humans unique, our intelligence, to machines: what gives computers the ability to take in information, learn, develop, and see patterns. What happens when machines can do everything better than humans? When we invent an AI that can build an even better AI, which can build an even better AI? All the experts seem to agree that artificial intelligence will change the world as we know it, for better or worse. Some even argue that we are approaching a tipping point that will change humanity forever. Guests: Max Tegmark, AI researcher at MIT, author, and speaker; Amy Loutfi, AI researcher at Örebro University and speaker. Music: original music by Sandra Broström. Other music: And So Then – Lee Rosevere; Expectations – Lee Rosevere; As I Was Saying – Lee Rosevere; Making a Change – Lee Rosevere; 0___0 – Lee Rosevere; Glassbells Dancing With a Synthesizer – Daniel Birch ("Minus" - tower1(reflect)). Also mentioned: Wait But Why: The AI Revolution: The Road to Superintelligence (two-part blog series); Bostrom & Müller's (2013) survey (PDF). Talk to me: use the hashtag #teknikensunder. My blog: teknifik.se; Twitter: @elinhaggberg; Instagram: @teknifik; Facebook: /teknifik; Mail: hello@teknifik.se. Teknikens under is produced by Elin Häggberg in collaboration with Female Engineer Network. This episode was sponsored by Academic Work.
In the latest news, Andy and Dave discuss OpenAI releasing “Spinning Up in Deep RL,” an online educational resource; Google AI and the New York Times team up to digitize over 5 million photos and find “untold stories;” China is recruiting its brightest children to develop AI “killer bots;” China unveils the world’s first AI news anchor; and Douglas Rain, the voice of HAL 9000, has died at age 90. In research topics, Andy and Dave discuss research from MIT, Tegmark, and Wu that attempts to improve unsupervised machine learning by using a framework that more closely mirrors scientific thought and process. Albrecht and Stone examine the issue of autonomous agents modeling other agents, which leads to an interesting list of open problems for future research. Research from Stanford makes an empirical examination of bias and generalization in deep generative models, and Andy notes striking similarities to previously reported experiments in cognitive psychology. Other research surveys data collection for machine learning, from the perspective of the data. In blog posts of the week, the Mad Scientist Initiative reveals the results from a recent competition, which suggest themes of the impacts of AI on the future battlefield; and Piekniewski follows up his May 2018 “Is an AI Winter On Its Way?”, in which he reviews cracks appearing in the AI façade, with particular focus on the arena of self-driving vehicles. And Melanie Mitchell provides some insight about AI hitting the barrier of meaning. CSIS publishes a report on the importance of the AI ecosystem. And another paper draws on the social sciences to provide insight into AI.
Finally, MIT Press has updated one of the major sources on reinforcement learning with a second edition; AI Superpowers examines the global push toward AI; The Eye of War examines how perceptual technologies have shaped the history of war; SparkCognition publishes HyperWar, a collection of essays from leaders in defense and emerging technology; Major Voke’s entire presentation on AI for C2 of Airpower is now available; and the Bionic Bug Podcast has an interview with CNA’s own Sam Bendett to talk AI and robotics. Go to www.cna.org/AIwithAI for the show notes and links.
RESEARCHER IN AI AND PHYSICS, age 51. Born in Stockholm, living outside Boston. Previously a Sommar host in 2008. In his Sommar program, researcher Max Tegmark talks about what is happening in the development of artificial intelligence, a field with great potential and great risks. "For me, artificial intelligence is the most important democracy issue of our time," says Max Tegmark, a researcher in artificial intelligence, AI, and physics, in his program. When technology makes enormous advances, there is a risk that those with access to the new technology gain power over those without it. Max Tegmark talks about what could happen, and how technological development is already reshaping society today. But what is intelligence, really? Max Tegmark wants us to stop seeing intelligence as something that only carbon-based life forms, such as humans and animals, can have. In recent years, computers have become much better at moving, playing games, and driving on their own. Since his last Sommar program, Max Tegmark has begun calling himself an activist, and not just a researcher. Through his work with the Future of Life Institute, he collaborates with others in the research field to steer the development of artificial intelligence in the right direction. Because, as the organization wrote in an open letter: AI could become the best or the worst thing that has ever happened to humanity. About Max Tegmark: A lecturing physicist known for his controversial theories about parallel worlds. Professor at MIT, the Massachusetts Institute of Technology in Boston. Currently out with the book Life 3.0: Being Human in the Age of Artificial Intelligence. Holds dual degrees: a Master of Engineering in engineering physics from KTH Royal Institute of Technology and a degree in economics from the Stockholm School of Economics. Earned his PhD at the University of California, Berkeley. Leads the Future of Life Institute, an organization working to ensure that technology helps humanity. Has also published Our Mathematical Universe: My Quest for the Ultimate Nature of Reality. Received KTH's Grand Prize in 2015.
Here you can listen to the full documentary "Four Minutes from Nuclear War" by Per Shapiro and Håkan Engström, mentioned in the program: https://sverigesradio.se/sida/avsnitt/991022?programid=909 Producer: Per Shapiro
Some people say that renowned MIT physicist Max Tegmark is totally bonkers and refer to him as “Mad Max”. But, to quote Lewis Carroll from Alice in Wonderland, “All the best people are.” Furthermore, I am not sure if Tegmark is “mad” but I am pretty sure he is very much “fun” because I had a total blast […]
Hacking at the Vatican?! Can you combine 2,000 years of deep Catholic tradition with something as modern as hacking? For our next guest, Fr. Eric Salobir, the answer was most certainly, “Yes!” Fr. Salobir helped organize the first-ever “hackathon” at the Vatican, where groups of students from 62 universities around the world, including Harvard, Stanford, MIT and many more, “hacked” ideas that could help humanity in three key areas: fostering inter-religious dialogue, helping the poor and underprivileged, and supporting refugees and migrants. In our interview with Fr. Salobir, he talks about this event and how his organization, OPTIC Technology, is building a network of top thinkers, scientists, religious leaders, and high-tech leaders to discuss ways we can use technologies like artificial intelligence (AI) for the good of humanity. He also talks about a recent panel he participated in that included Max Tegmark, PhD of MIT, where Dr. Tegmark boldly stated that we shouldn’t be “carbon chauvinists.” We discuss the promise and pitfalls of AI, how values might be embedded into AI, and how we seek to ensure these values will align with our Christian values. Join us for this fascinating and fun adventure from the Vatican to AI and beyond! Fr. Eric Salobir is a Roman Catholic priest and a member of the Dominican Order, in charge of media and technology for the Dominicans. He is also the founder of OPTIC Technologies, a network founded within the Catholic Church that promotes research and innovation in various technology fields from a Christian viewpoint, looking at disruptive technologies, the ways they could help humanity, and some of the ethical concerns they raise. He is also a consultant for the Pontifical Council of Communication as well as an expert for the delegation of the Holy See at UNESCO.
He teaches digital communication at the Catholic University of Paris and is a graduate of ISC Paris Business School. He worked at the French Embassy in Prague and also worked in banking for a while. He joined the Dominican Order in 2000 and graduated in theology and philosophy. He travels the world speaking at many important technology conferences and panels on topics like internet security, AI, transhumanism and other disruptive technologies. ** For a transcript of this interview, visit our website:** https://www.purposenation.org/father-eric-salobir-podcast-transcript Fr. Salobir’s LinkedIn: https://www.linkedin.com/in/ericsalobir/en More about OPTIC Technology: http://optictechnology.org/index.php/en/ Watch the panel discussion at MIT that included Fr. Salobir and Max Tegmark: https://www.youtube.com/watch?v=YotVoOFDlP8 Please subscribe to our YouTube Channel and find our podcast on iTunes, Google Play, SoundCloud or your favorite podcasting application: www.purposenation.org/podcast/ Visit our website for more information or to make a tax-deductible donation to our non-profit 501(c)(3) Christian ministry: www.purposenation.org/
Sam Harris speaks with Rebecca Goldstein and Max Tegmark about the foundations of human knowledge and morality. Rebecca Goldstein is a MacArthur Fellow, a professor of philosophy, and the author of five novels and a collection of short stories. She lives in Boston, Massachusetts. Her latest book is Plato at The Googleplex: Why Philosophy Won’t Go Away. Twitter: @platobooktour Max Tegmark is a professor of physics at MIT and the co-founder of the Future of Life Institute. Tegmark has been featured in dozens of science documentaries. He is the author of Our Mathematical Universe and Life 3.0: Being Human in the Age of Artificial Intelligence. Twitter: @Tegmark
In this episode of the Making Sense podcast, Sam Harris speaks with Rebecca Goldstein and Max Tegmark about the foundations of human knowledge and morality. SUBSCRIBE to continue listening and gain access to all content on samharris.org/subscribe.
What does it mean to create beneficial artificial intelligence? How can we expect to align AIs with human values if humans can't even agree on what we value? Building safe and beneficial AI involves tricky technical research problems, but it also requires input from philosophers, ethicists, and psychologists on these fundamental questions. How can we ensure the most effective collaboration? Ariel spoke with FLI's Meia Chita-Tegmark and Lucas Perry on this month's podcast about the value alignment problem: the challenge of aligning the goals and actions of AI systems with the goals and intentions of humans.
Twenty-five times we have been a hair's breadth from nuclear war. Every time as a result of misunderstandings or technical failures. How long can the luck hold? A journey from Hiroshima 1945 to New York 2017. The MAD doctrine, Mutually Assured Destruction, the promise of guaranteed mutual annihilation, the very foundation of the balance of terror, is what is said to have maintained the peace since World War II. But not everyone is equally convinced. One chance in three: that is how high physics professor Max Tegmark rates the risk of dying in a nuclear war. "As a physicist, I know that luck is not a long-term strategy." Tegmark has led a petition in the scientific community in support of a ban on nuclear weapons. Sleepwalking into a nuclear war: William Perry was Secretary of Defense under Bill Clinton and has worked on security issues since the 1960s. He judges the risk of catastrophe today to be greater than during the Cold War. Perry draws a comparison with 1914, when the countries drawn into a world war did not understand what was happening. They moved as if in their sleep. "Today it is as if we are sleepwalking toward a nuclear war," says William Perry. Nuclear winter: Physicists have shown that a nuclear war today could trigger a so-called nuclear winter, a state in which the sun's rays cannot penetrate the dust particles that form. The result would be a period of drastically colder climate and mass starvation. Climate collapse increases the nuclear threat: A changing climate brings extreme weather, floods and other natural disasters, as well as increased economic and social pressure. This fuels already existing conflicts between nuclear-armed states. India and Pakistan are two such flashpoints. Pandora's box: In 1939, Einstein sent a letter to the US president warning that the Nazis could develop an atomic bomb. It led to the Manhattan Project and the making of the first atomic bomb. Einstein has come to symbolize the highest heights of the human intellect.
At the end of his life, he pleaded that we not let human stupidity destroy everything we have created. By Per Shapiro. Producer: Håkan Engström. Final mix: Fredrik Nilsson
The Trigger-Happy Mouse Trap & Goal-Setting from First Principles: Book Review of Life 3.0 by Max Tegmark Max Tegmark shed tears after emerging from a London science museum in 2014, but by early 2017, his heart had warmed. In Life 3.0: Being Human in the Age of Artificial Intelligence, Tegmark explains his concern that all might be lost if AI safety issues are not taken seriously. You can support my continued work at https://patreon.com/evm/
Max Tegmark is a professor of physics at MIT and the co-founder of the Future of Life Institute. Tegmark has been featured in dozens of science documentaries. He is the author of Our Mathematical Universe and Life 3.0: Being Human in the Age of Artificial Intelligence. Twitter: @Tegmark
In this episode of the Making Sense podcast, Sam Harris speaks with Max Tegmark about his new book Life 3.0: Being Human in the Age of Artificial Intelligence. They talk about the nature of intelligence, the risks of superhuman AI, a nonbiological definition of life, the substrate independence of minds, the relevance and irrelevance of consciousness for the future of AI, near-term breakthroughs in AI, and other topics. You can support the Making Sense podcast and receive subscriber-only content at samharris.org/subscribe.
He is one of the world's most cited researchers and a professor of physics at the prestigious MIT in Boston. Max Tegmark invited us to his house by the lake, where the wind chimes ring and he ponders the vast cosmos. Max Tegmark researches the very greatest riddles of the universe. How big is it, really? When was it formed? How? He is also passionate about the idea of infinitely many parallel universes, a contested notion, but Max Tegmark wants to get us to think big. He has long been one of the biggest names in physics, but broke through to a wider audience with the book Our Mathematical Universe: My Quest for the Ultimate Nature of Reality. In Söndagsintervjun on P1, Max Tegmark talks about his view of the cosmos and about his quiet upbringing in Bromma, where the curious "initiativjäveln" (roughly, "that go-getter") became ever more eager to understand the world. Contact Mail: sondagsintervjun@sverigesradio.se Facebook: Söndagsintervjun i P1 Twitter: @sondagsintervju Instagram: @sondagsintervjun_p1
OBE, Astral Projections, PLR, Remote Viewing, and Shamanic Journey are keenly intriguing experiences as discussed by four practitioners who will teach at the Soul Journey Festival on Nov 21, 2015 in Irvine. Dr. Christina Gikas is a Certified Hypnotherapist who offers a variety of assistance with psychological, physical and spiritual concerns, reachable at gnosishypnosis.com. Jade Elizabeth practices hypnosis and Reiki and a variety of alternative healing arts. Learn more at journeythrutransformation.com. Halfway into the show, Debra Hookey, a nationally known medium, will speak. Learn more at debrahookey.com. Today Debra Hookey will discuss Soul Journeys to Loved Ones Passed Over. Myrna Godfrey is also a Hypnotherapist and Energy Healer providing assistance throughout the Orange County area, reachable through myrnagodfrey.com. Together with Dr. Carol Francis, we explore many modalities of Soul Journey practices. The Soul Journey Festival, Nov 21 2-6pm ($35.00 pre-registration 949-752-5272 at the School of Multidimensional Healing Arts and Science in Irvine, California) coalesces 33 exercises (from hypnosis, meditation, remote viewing, astral projection) with modern theories postulated by Tegmark's mathematics of the multiverse and Bohr's (etc.) descriptions of the quantum physical nature of the subatomic universe. Eight practitioners and authors, with keynote speaker Dr. Carol Francis, weave psychological exercises, metaphysical practices, hypnosis, meditation, research, and discussion into this intense and fast-paced 4-hour workshop, which is followed by 2 hours of individual sessions with practitioners at additional cost. For more information, visit souljourneytools.com.
In this episode of the Making Sense podcast, Sam Harris speaks with MIT cosmologist Max Tegmark about the foundations of science, our current understanding of the universe, and the risks of future breakthroughs in artificial intelligence. You can support the Making Sense Podcast and receive subscriber-only content at samharris.org/subscribe.
Just as some of us are getting used to the idea of the multiverse, along comes Max Tegmark telling us that we're not thinking big enough. According to Max, there are actually multiple multiverses containing endless copies of and variations on the "pocket universe" we've quaintly come to think of as everything. And while it may sound like he's tripping, Max assures us that he's only high on mathematics and logic, following where they lead. In our tour of the multi-multiverse, he and I discussed the implications of infinite spacetime, eternal inflation, the quantum mechanical wave function, the many worlds interpretation and the idea that reality is at bottom mathematical.
Max Tegmark, a Swedish physicist working in the USA, is obsessed with the very biggest questions. He has had to hide his search for the strangest answers behind research with better prospects of being accepted in scientific high society. Everything is mathematics. And all mathematics that exists also has a counterpart in the physical world. That is Max Tegmark's description of reality. If he is right, there are vast numbers of parallel universes, some with entirely different laws of nature than ours. But research on that theme has long been considered so speculative that he has pursued a double research career. One built on clear and simple measurements that no one can dismiss. And then another, at first in his spare time, devoted to the questions he truly burns for. Camilla Widebeck camilla.widebeck@sverigesradio.se