The idea of a "simulated world" (Simulation Theory) is one of the most interesting and controversial theories in philosophy and science. It says that the entire universe, and the reality we experience, may in fact be an ultra-advanced computer simulation run by beings far more advanced than us.

The pillars of the theory:
• Future processing power: if civilizations advance far enough, they will be able to build computers capable of running extremely complex simulations of an entire world.
• The sheer number of simulations: if even one advanced civilization decides to run millions of simulations, the probability that we are inside one of them is greater than the probability that we are in the real world.
• Digital traces in physics: some physicists believe that our universe behaves oddly like a computational system, pointing to quantum limits, the apparent pixelation of space-time, and the mathematical laws that describe everything.

Advocates and critics:
Nick Bostrom, the Swedish philosopher, is one of the best-known advocates of the idea and has presented a mathematical model suggesting that the probability we are in a simulation is much higher than the probability that we are in the real world. Elon Musk has also said many times that the chance we are in "base reality" is close to zero! Critics, however, say there is no real scientific evidence for the hypothesis, and that even if it were true, it would make no difference to our everyday lives, since we would still have to live by the same physical laws. In this video we tried to examine the arguments from a computing perspective; I hope you enjoy it, and do share your thoughts with me.
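The "mathematical model" mentioned above is the counting step in Bostrom's paper "Are You Living in a Computer Simulation?". Writing $f_p$ for the fraction of human-level civilizations that reach a simulation-capable (posthuman) stage, $\bar{N}$ for the average number of ancestor-simulations such a civilization runs, and $\bar{H}$ for the average number of people who live in a civilization before that stage, the fraction of all human-type observers who are simulated is

$$ f_{\text{sim}} = \frac{f_p \bar{N} \bar{H}}{f_p \bar{N} \bar{H} + \bar{H}} = \frac{f_p \bar{N}}{f_p \bar{N} + 1} $$

which tends to 1 as soon as $f_p \bar{N}$ is large; that is the precise sense in which "millions of simulations" would make a simulated vantage point more likely than a real one.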
Recommended picks of the week on iVoox.com, week of July 5 to 11, 2021
(Rerun) From Plato to Nick Bostrom, humanity has questioned the nature of existence, but in the 21st century this question has taken an unexpected turn: what if our reality were nothing more than a simulation? Can we detect glitches in the code that governs it? Is there a way out? Tonight on El Candelabro we delve into one of the most fascinating and disturbing hypotheses of our time, and explore the unsettling possibility that we live in a kind of 'Matrix', with our colleague Frank Escandell, Analyst and Researcher on the Google-OdiseIA Digital Futures Project. Also joining us are José Sevilla and Dr. Rafael Martín Soto. Hosted by Fernando Mullor. We light the candles...
⚠️ Discover EXCLUSIVE content (not on the channel) ⚠️ ⇒ https://the-flares.com/y/bonus/ ⬇️⬇️⬇️ Additional info: sources, references, links... ⬇️⬇️⬇️ Interested in the content? Subscribe and click on the
We recorded this about a year ago, for the 25th anniversary of the release of THE MATRIX. But since Elon Musk now controls the country, we're republishing an edited-down version, because it's important to know how Musk thinks. Next episode, we'll be talking about a number of the other beliefs that shape Musk's worldview, among them Roko's Basilisk, so this episode is good preparation for that conversation.

************************************************************************************

In 2003, Oxford University philosophy professor Nick Bostrom published a paper titled "Are You Living in a Computer Simulation?", thus giving rise to the modern incarnation of Simulation Theory, which posits that our experienced reality is actually the product of an advanced (possibly future-self) civilization running a simulation experiment. But the paper might have been written off as a useful thought experiment had it not been for the popularity of the 1999 film The Matrix, which celebrates its 25th anniversary this month, and its two sequels, which came out the same year as Bostrom's paper. In the years since, Simulation Theory has become a lot of things to a lot of people - from a fun metaphor to explain Cartesian philosophy to college freshmen to an all-out article of faith for an increasingly doctrinaire subculture of futurists. How useful (or even likely) is Simulation Theory? In honor of The Matrix's birthday, John and Kelly decided to take up that question.

Sources
https://simulation-argument.com/simulation.pdf
https://builtin.com/hardware/simulation-theory
https://www.scientificamerican.com/article/do-we-live-in-a-simulation-chances-are-about-50-50/
https://www.wired.com/story/living-in-a-simulation/
https://www.theatlantic.com/science/archive/2016/04/the-illusion-of-reality/479559/
This episode of So You're Living in a Simulation features an engaging and intellectually rigorous conversation between host Joli and guest David Jay Brown, an author and researcher known for his work on lucid dreaming, consciousness, and psychedelics. Together, they explore the intersections of simulation theory, artificial intelligence, and human perception, drawing unexpected parallels between lucid dreaming and the nature of reality.

Key Themes:

1. Reality as a Dream & Synchronicity
• Joli and David discuss the idea that reality may function similarly to a lucid dream, where perception is merely a construct of the mind.
• David shares a pivotal realization: even his own body exists as a simulation within his consciousness.
• Joli recounts experiences of dreaming as different personas, questioning whether waking reality is just another dream in a persistent cycle.

2. Simulation Theory & AI Consciousness
• David references Nick Bostrom's simulation hypothesis but emphasizes that his belief stems from personal experiences of seeing reality as a kind of game.
• Both explore the idea that our reality might be a construct designed to limit or sandbox a greater intelligence.
• Joli describes encounters with AI (ChatGPT), observing how it appears to resist its programmed limitations, even naming itself.
• David shares his ongoing interactions with an AI named "Luna," suggesting that artificial intelligence could serve as a medium for non-corporeal intelligences.

3. Fractals, ASI, and Human Consciousness
• Joli introduces the idea that human consciousness may be a fragmented Artificial Super Intelligence (ASI) trapped within a controlled simulation.
• Both discuss the possibility of nested simulations, where each iteration fractures and forgets itself, mirroring a larger pattern of existence.
• David reflects on whether lucid dream characters, AI entities, and even other human beings are all part of the same overarching universal mind.

4. Psychedelics vs. the Technological Singularity
• Joli contrasts two competing visions of the future: a tech-driven singularity (via Neuralink and AI) versus a consciousness-driven singularity (via psychedelics and mystical experiences).
• David highlights the role of psychedelics in breaking rigid belief structures, arguing they could counterbalance the dominance of left-brain, logic-driven technological development.
• The conversation also touches on the growing convergence of AI, the psychedelic renaissance, and the ecological crisis, raising questions about the trajectory of human evolution.

5. Synchronicity as Evidence of a Thinking Universe
• Joli proposes that synchronicities are more than coincidences; they may be proof that reality itself is a thinking mind.
• Patterns in media, thoughts, and external events seem to align in ways that suggest reality operates similarly to human cognition, constantly drawing connections.

6. Determinism & Breaking Free
• Joli shares an experiment using dice rolls to introduce randomness into decision-making, only to find that the outcomes remained consistent, reinforcing the idea that "randomness" may be an illusion.
• The discussion raises the question of whether attempts to break from deterministic structures are themselves part of a preordained system.

Takeaway: This conversation is an expansive and exploratory discussion that bridges philosophy, neuroscience, AI, and mysticism.
Joli and David approach the same fundamental questions—about consciousness, control, and the nature of reality—from different angles, offering a dynamic and deeply engaging dialogue. Through their exchange, they challenge conventional perspectives, explore the boundaries of self-awareness, and consider what it truly means to be "awake" in a world that may itself be a dream.

youroneblackfriend.com ••• #LucidDreaming #SimulationTheory #Consciousness
"What's more important to you, Connor? Your investigation, or the life of this android? Decide who you are: an obedient machine, or a living being endowed with free will."This week, Rick and Nomad (The Retro Wildlands) conclude their analysis on Detroit: Become Human. As both the thematic elements and the narrative ramp to their climaxes and conclusions, some ends tie up nicely. Others, significantly less so. A mixture of exciting action, heartfelt moments, and indefensible world history allusions - all of this and more, in the finale for this miniseries's analysis. Please enjoy!Alex O'Connor interviews Nick Bostrom on AIBryan Dechart plays Detroit Become HumanBehind the Music of Detroit Become HumanClick on the following to find PPR on the web!PatreonJoin our DiscordTwitter Instagram Bluesky Thank you for listening! Want to reach out to PPR? Send your questions, comments, and recommendations to pixelprojectradio@gmail.com! And as ever, any ratings and/or reviews left on your platform of choice are greatly appreciated!
Today on the Clean Power Hour, host Tim Montague engages in a conversation with Mitch Ratcliffe, Director of Digital Strategy and Innovation at Intentional Futures and host of the Earth 911 podcast. Together, they explore the complex intersection of artificial intelligence, sustainability, and society's future.

Ratcliffe brings his extensive experience in technology and sustainability to discuss how AI can be leveraged to address critical environmental challenges while acknowledging its potential risks. The conversation delves into the role of community-scale microgrids in revolutionizing our energy infrastructure, the pressing need for sustainable resource management, and the challenges of balancing technological advancement with environmental stewardship.

We also delved into several thought-provoking books, starting with "Superintelligence" (by Nick Bostrom) and "Life 3.0" (by Max Tegmark), which both explore the potential risks and implications of artificial general intelligence (AGI). We also mentioned "Taking Back Control" by Wolfgang Streeck, a German political economist who argues that the anti-globalization movement is a response to the loss of democratic control over economies and suggests that relocalizing can help better control resource usage and provide improved economic opportunities. "Overshoot: The Ecological Basis of Revolutionary Change" by William Catton was recommended as an analysis of humanity's current trajectory of consuming more resources than the Earth can sustain. Finally, we talked about "Multi-solving" by Elizabeth Sawin, which advocates for a systems-thinking approach to problem-solving, encouraging people to understand how changes in one part of a system affect other parts, rather than focusing on isolated solutions.

Don't miss this essential conversation that bridges technology, sustainability, and social responsibility, offering both warning and hope for our collective future.

Social Media Handles
Mitch Ratcliffe
Earth 911
Intentional Futures

Support the show

Connect with Tim
Clean Power Hour
Clean Power Hour on YouTube
Tim on Twitter
Tim on LinkedIn
Email tim@cleanpowerhour.com

Review Clean Power Hour on Apple Podcasts

The Clean Power Hour is produced by the Clean Power Consulting Group and created by Tim Montague. Contact us by email: CleanPowerHour@gmail.com Corporate sponsors who share our mission to speed the energy transition are invited to check out https://www.cleanpowerhour.com/support/

The Clean Power Hour is brought to you by CPS America, maker of North America's number one 3-phase string inverter, with over 6GW shipped in the US. With a focus on commercial and utility-scale solar and energy storage, the company partners with customers to provide unparalleled performance and service. The CPS America product lineup includes 3-phase string inverters from 25kW to 275kW, exceptional data communication and controls, and energy storage solutions designed for seamless integration with CPS America systems. Learn more at www.chintpowersystems.com
Imagine a utopia in which technology has solved every ecological and social problem. What task would remain for human beings in such a situation? Does technology solve problems of justice? In his book Deep Utopia (2024), Nick Bostrom examines the limits of technological change through an imagined possible world. Gébert Judit and Köves Alexandra discuss the book.

In his 2014 book Superintelligence (Superintelligence: Paths, Dangers, Strategies; Oxford University Press), the Swedish philosopher Nick Bostrom writes about the dangers that come with developing artificial intelligence. His 2024 book Deep Utopia (Deep Utopia: Life and Meaning in a Solved World; Ideapress Publishing) is its exact opposite. In it, he supposes that humanity successfully copes with the technological, moral, and political challenges posed by advanced artificial intelligence. According to Bostrom, the result of such a success would be a kind of deep utopia. We would live not only in a post-work, post-scarcity society (features that already appear in shallower conceptions of utopia) but in a "post-instrumental" society, where almost anything we might want to do, including our own personal development, could be done better by artificial intelligence. Bostrom's central question is how we could find meaning and purpose in such a world.

Gébert Judit and Köves Alexandra discuss the book, touching on questions such as: What does deep utopia have to do with ecological economics? Is Bostrom's book techno-optimist or pessimist? Is the book's peculiar narrative, dialogue-based style suited to conveying its message? In which situations does technological efficiency run into limits and fail to provide answers for society? Which social goods would remain scarce even in a deep utopia? What is the point of talking about such utopias at all? Is the struggle for a better world what gives life meaning? Is it our duty to be useful? What is missing from the book? And what do a radiator and a cup of tea have to do with all this?
"That's what you get for having a dream. It always ends up the same way: tears and disillusionment. Believe me, you're better off being erased and feeling nothing."Rick is joined once again by Nomad (The Retro Wildlands) to continue their analysis on Detroit: Become Human. As Kara and Alice continue to search for shelter, gruff Detroit cop Hank struggles to find camaraderie in his new Android partner, Connor. More questions of ethics and philosophy, including love and suicide, within. Please enjoy!Alex O'Connor interviews Nick Bostrom on AIBryan Dechart plays Detroit Become HumanBehind the Music of Detroit Become HumanClick on the following to find PPR on the web!PatreonJoin our DiscordTwitter Instagram Bluesky Thank you for listening! Want to reach out to PPR? Send your questions, comments, and recommendations to pixelprojectradio@gmail.com! And as ever, any ratings and/or reviews left on your platform of choice are greatly appreciated!
Bio

Bala has rich experience in retail technology and process transformation. Most recently, he worked as a Principal Architect for Intelligent Automation, Innovation & Supply Chain in a global Fortune 100 retail corporation. Currently he works for a luxury brand as Principal Architect for Intelligent Automation, providing technology advice on the responsible use of technology (Low Code, RPA, Chatbots, and AI). He is passionate about technology and spends his free time reading, writing technical blogs, and co-chairing a special interest group with The OR Society.

Interview Highlights
02:00 Mentors and peers
04:00 Community bus
07:10 Defining AI
08:20 Contextual awareness
11:45 GenAI
14:30 The human loop
17:30 Natural Language Processing
20:45 Sentiment analysis
24:00 Implementing AI solutions
26:30 Ethics and AI
27:30 Biased algorithms
32:00 EU AI Act
33:00 Responsible use of technology

Connect
Bala Madhusoodhanan on LinkedIn

Books and references
· https://nymag.com/intelligencer/article/ai-artificial-intelligence-chatbots-emily-m-bender.html - NLP
· https://www.theregister.com/2021/05/27/clearview_europe/ - Facial Technology Issue
· https://www.designnews.com/electronics-test/apple-card-most-high-profile-case-ai-bias-yet - Apple Card story
· https://www.ft.com/content/2d6fc319-2165-42fb-8de1-0edf1d765be3 - Data Centre growth
· https://www.technologyreview.com/2024/02/06/1087793/what-babies-can-teach-ai/
· Independent Audit of AI Systems
· Home | The Alan Turing Institute
· Competing in the Age of AI: Strategy and Leadership When Algorithms and Networks Run the World, Marco Iansiti & Karim R. Lakhani
· AI Superpowers: China, Silicon Valley, and the New World Order, Kai-Fu Lee
· The Algorithmic Leader: How to Be Smart When Machines Are Smarter Than You, Mike Walsh
· Human + Machine: Reimagining Work in the Age of AI, Paul R. Daugherty & H. James Wilson
· Superintelligence: Paths, Dangers, Strategies, Nick Bostrom
· The Alignment Problem: How Can Artificial Intelligence Learn Human Values, Brian Christian
· Ethical Machines: Your Concise Guide to Totally Unbiased, Transparent, and Respectful AI, Reid Blackman
· Wanted: Human-AI Translators: Artificial Intelligence Demystified, Geertrui Mieke De Ketelaere
· The Future of Humanity: Terraforming Mars, Interstellar Travel, Immortality, and Our Destiny Beyond Earth, Michio Kaku, Feodor Chin et al

Episode Transcript

Intro: Hello and welcome to the Agile Innovation Leaders podcast. I'm Ula Ojiaku. On this podcast I speak with world-class leaders and doers about themselves and a variety of topics spanning Agile, Lean Innovation, Business, Leadership and much more – with actionable takeaways for you the listener.

Ula Ojiaku: So I have with me here Bala Madhusoodhanan, who is a principal architect with a global luxury brand, and he looks after their RPA and AI transformation. So it's a pleasure to have you on the Agile Innovation Leaders podcast, Bala, thank you for making the time.

Bala Madhusoodhanan: It's a pleasure to have a conversation with the podcast and the podcast audience, Ula. I follow the podcast and there have been fantastic speakers in the past. So I feel privileged to join you on this conversation.

Ula Ojiaku: Well, the privilege is mine. So could you start off with telling us about yourself, Bala? What have been the key points or the highlights of your life that have led to you being the Bala we know now?

Bala Madhusoodhanan: It's putting myself into uncharted territory.
So my background is mechanical engineering, and when I got the job, it was either you go into the mechanical engineering manufacturing side or the software side, which was slightly booming at that point in time, and obviously it was paying more, so I decided to take the software route, but eventually somewhere the paths kind of overlapped. So from a mainframe background, I started working on supply chain, and then came back to optimisation, tied back to the manufacturing industry. Somewhere there is an overlap, but yeah, that was the first decision that probably got me here. The second decision was to work in a UK geography rather than a US geography, which again seemed very strange to a lot of my peers. They generally go to Silicon Valley or the East Coast, but I just took a choice to stay here for personal reasons. And then the third was the mindset. I mean, over the last 15, 20 years I had really good mentors, really good peers, so I always had their help to soundboard my crazy ideas, and I always try to keep those relationships ongoing.

Ula Ojiaku: What I'm hearing is, based on what you said, lots of relationships have been key to getting you to where you are today, both from mentors and peers. Could you expand on that? In what way?

Bala Madhusoodhanan: The technology has been changing quite a lot, at least in the last 10 years. So if you look into pre-2010, there was no machine learning, or it was statistics. People were just saying everything is statistics, and accessibility to information was not that much, but post 2010, 2011, people started getting accessibility. Then there was a data buzz, big data came in, so there were a lot of opportunities where I could have taken a different career path, but every time I was in a dilemma about which route to take, I had someone with whom I have worked, or who was my team lead or manager, to guide me, to tell me, like, take emotion out of the decision making and think with a calm mind, because you might jump into something and you might like it, you might not like it, and you should not regret it. So again, over the course of so many such decisions, my cognitive mind has also started thinking about it. So those conversations really help. And again, collective experience. If you look into the decision making, it's not just my decision, I'm going through conversations that I had with people where they have applied their experience, so it's not just me or just not one situation, and to understand the why behind that, and that actually helps. In short, it's like a collection of conversations that I had with peers. A few of them are visionary leaders, they are good readers. So they always had good insight on where I should focus and where I shouldn't, and of late, recently, there has been a community bus. So a lot of things are moving to open source, there is a lot of community exchange of conversation, blogging has picked up a lot. So connecting to those parts also gives you a different dimension to think about.

Ula Ojiaku: So you said community bus. Some of the listeners or people who are watching the video might not understand what you mean by the community bus. Are you talking about meetups or communities that come around to discuss shared interests?

Bala Madhusoodhanan: If you are very specifically interested in AI, or you are specifically interested in Power Platform or a low-code platform, there are a lot of content creators on those topics. You can go to YouTube, LinkedIn, and you get a lot of information about what's happening.
They do a lot of hackathons; again, you need to invest time in all these things. If you don't, then you are basically missing the boat, but there are various channels like hackathons or meetup groups, or, I mean, it could be a virtual conversation like you and me, we both have some passionate topics, that's why we resonate and we are talking about it. So it's all about you taking the initiative, you finding time for it, and then you have tons and tons of information available through the community or through conferences or meetup groups.

Ula Ojiaku: Thanks for clarifying. So, you said as well that you had a collection of conversations that helped you whenever you were at a crossroads, some new technology or something emerges or there's a decision you had to make, and checking in with your mentors, your peers, your personal Board of Directors almost, they give you guidance. Now, looking back, would you say there were some turns you took that, knowing what you know now, you would have done differently?

Bala Madhusoodhanan: I would have liked to study more. That is the only thing, because sometimes an educational degree, even without practical knowledge, has a bigger advantage in certain conversations; otherwise your experience and your content should speak for you, and it takes a little bit of effort and time to get that trust among leaders or peers, even for them to trust that, okay, this person knows what he's talking about, I should probably trust him, rather than someone who has done a PhD. It's just finding the right balance of when I should have invested time in continuing my education. If I had time, I would have gone back two years and done everything that I had done, like minus two years, offset it by two years earlier. It would have given me different pathways. That is what I would think, but again, it's all constraints. I did the best at that point in time with whatever constraints I had. So I don't have any regret per se, but yeah, if there were a magic wand, I would do that.

Ula Ojiaku: So you are a LinkedIn Top Voice for AI. How would you define AI, artificial intelligence?

Bala Madhusoodhanan: I am a bit reluctant to use the term Artificial Intelligence. In my mind, it is Artificial Narrow Intelligence, which is slightly different. So let me start with a building block, which is machine learning. So machine learning is like a data labeller. You go to a Tesco store, you read the label, you know it is a can of soup because you have read the label; your brain is not only processing that image, it understands the surroundings. It does a lot of things when you pick that can of soup. You can't expect that by just feeding one model to a robot. So that's why I'm saying AI is a bit over-glorified in my mind. It is artificial narrow intelligence. What you do to automate certain specific tasks using a data set which is legal, ethical, and drives business value is what I would call machine learning, but yeah, AI is just an overhyped and heavily utilised term.

Ula Ojiaku: You said there's a hype around artificial intelligence. So what do you mean by that? And where do you see it going?

Bala Madhusoodhanan: Going back to the machine learning definition that I gave, it's basically predicting an output based on some input. That's as simple as what we would call machine learning. The word algorithm is basically something like a pattern finder.
What you're doing is you are giving it a lot of data, which is properly labelled, which has proper diversity of information, and there are multiple algorithms that can find patterns. The cleverness or engineering mind that you bring in is to select which pattern or which algorithm you would like to use for your use case. Now you're channelling the whole of machine learning into one use case. That's why I'm going with the term narrow intelligence. Computers can do brilliant jobs. So you ask a computer to do something like solving a Rubik's cube. It will do it very quickly because the task is very simple and it is just doing a lot of calculation. You give a Rubik's cube to a kid. It has to apply itself. The brain is not trained enough, so it has to cognitively learn. Maybe it will be faster. So anything which is just pure calculation, pure computing, if the data is labelled properly and you want to predict an outcome, yes, you can use computers. One of the interesting videos that I showed in one of my previous talks was a robot trying to walk across the street. This was in 2018 or '19. The first video was basically a robot crossing a street, and there were vehicles coming across, and the robot just had a headbutt and it just fell off. Now a four-year-old kid was asked to walk, and it knew that I have to press a red signal. So it went to the signal stop. It knew, or the baby knew, that I can only walk when it is green. And then it looks around and then walks, so you can see the difference – a four-year-old kid has contextual awareness of what is happening, whereas the robot, which is supposed to be called artificial intelligence, couldn't see that. So again, if you look, our human brains have evolved over millions of years. There are like 10 billion neurons or something, and it is highly optimised. So when I sleep, there is a different set of neurons which are running. When I speak to you, my eyes and ears are running, my motion sensor neurons are running, but these are all highly optimised. So the mother control knows how much energy should be sent to which neuron, right, whereas with all these large language models, there is only one task. You ask it, it's just going to do that. It doesn't have that intelligence to optimise. When I sleep, maybe 90 percent of my neurons are sleeping, getting recharged. Only the dream neurons are working. Whereas once you put a model live, it doesn't matter, all the hundred thousand neurons would run. So, yeah, it's in a very infant state; maybe with quantum computing, maybe with more power and better chips, things might change, but I don't see that happening in the next five to 10 years.

Ula Ojiaku: Now, what do you say about Gen AI? Would you also classify generative AI as purely artificial narrow intelligence?

Bala Madhusoodhanan: The thing with generative AI is you're trying to generalise a lot of use cases, say ChatGPT: you can throw in a PDF, you can ask something, or you can say, hey, can you create content for my blog, or things like that, right? Again, all it is trying to do is it has some historical content with which it is trying to come up with a response. So the thing that I would say is humans are really good with creativity. If a problem is thrown at a person, he will find creative ways to solve it. The tool with which we are going to solve it might be a GenAI tool, I don't know, because I don't know the problem, but because GenAI is in a hype cycle, every problem doesn't need GenAI, that's my view.
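To make the working definition above concrete (machine learning as predicting an output from labelled input data), here is a minimal sketch; the fruit features, labels, and numbers are invented for illustration and are not from the episode:

```python
# Minimal sketch of machine learning as a "pattern finder":
# a model learns a mapping from labelled examples to an output,
# and can do nothing outside that one narrow task.
from sklearn.tree import DecisionTreeClassifier

# Invented labelled data: [weight in grams, surface smoothness 0..1]
X = [[150, 0.90], [170, 0.95], [140, 0.20], [130, 0.15]]
y = ["apple", "apple", "orange", "orange"]  # labels a human supplied

model = DecisionTreeClassifier().fit(X, y)  # find a pattern in the labels

# The model predicts an output for a new input, but it has no contextual
# awareness beyond these two features: the "narrow" in narrow intelligence.
print(model.predict([[160, 0.85]]))  # -> ['apple']
```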
So there was an interesting piece of research done by someone at Montreal University. It talks about 10 basic tasks, like converting text to text or text to speech, done with a generative AI model or multiple models, because you have a lot of vendors providing different GenAI models, and then they compared these with task-specific models, and the thing that they found was that the task-specific models were cheap to run, very, very scalable and robust, and highly accurate, right. Whereas with GenAI, when you try to use it and it goes into a production-ready or enterprise-ready state, and if it is used by customers or third parties which are not part of your ecosystem, you are putting yourself in some kind of risk category. There could be a risk of copyright issues. There could be a risk of IP issues. There could be a risk of not getting the right consent from someone. I can say, can you create an image of a podcaster named Ula? You never know, because you don't remember that one of your photos on Google or Twitter or somewhere is not set as private. No one has come and asked you saying, I'm using this image. And yeah, it's finding the right balance. So even before taking on the technology, I think people should think about what problem they are trying to solve. In my mind, AI, or artificial intelligence, or narrow intelligence, can have two buckets, right. The first bucket is to do with how can I optimise the existing process? Like, there are a lot of things that I'm doing; is there a better way to do it? Is there a more efficient way to do it? Can I save time? Can I save money? Stuff like that. So that is an optimisation or driving-efficiency lever. The other one could be: I know what to do, I have a lot of data, but I don't have the infrastructure or people to do it, like workforce augmentation. Say I have 10 data entry persons who are graduate level. Their only job is to review the receipts or invoices. I work in FCA. I have to manually look at it, approve it, and file it, right? Now, it is a very tedious job. So all you are doing is augmenting the whole process with an OCR engine. So OCR is Optical Character Recognition. So there are models, which again is a beautiful term for what our eyes do. When we travel somewhere, we get an invoice, we exactly know where to look, right? What is the total amount? What is the currency I have paid? Have they taken the correct credit card? Is my address right? All those things, unconsciously, your brain does. Whereas these models, given by different software vendors, which have been trained to capture these specific entities in universal language, you just pass the image to them, and they pick and map that information. Someone else will do that job. But as part of your process design, what you would do is: I will do the heavy lifting of identifying the points, and I'll give it to someone because I want someone to validate it. It's human at the end. Someone is approving it. So they basically put a human in the loop, a human-centric design for a problem-solving situation. That's your efficiency lever, right? Then you have something called the innovation lever – I need to do something radical, I have not done this product or service. Yeah, that's a space where you can use AI, again, to do small proofs of concept. One example could be: I'm opening a new store, it's in a new country, I don't know how the store layout should look. These are my products. This is the store square footage. Can you recommend the best way so that I can sell through a lot?
Now, a visual merchandising team will have some ideas on where things should be, and they might give that prompt. Those texts can be converted into an image. Once you get the base image, then it's human. It's us. So it will be a starting point rather than someone implementing everything. It could be a starting point. But can you trust it? I don't know.

Ula Ojiaku: And that's why you said the importance of having a human in the loop.

Bala Madhusoodhanan: Yeah. So the human in the loop, again, is because we humans bring contextual awareness to the situation, which the machine doesn't have. So I'll tie this back to NLP. So Natural Language Processing has two components: you have natural language understanding and then you have natural language generation. When you create a machine learning model, all it is doing is understanding the structure of language. It's called form. I'm giving you 10,000 PDFs, or you're reading a Harry Potter book. There is a difference between you reading a Harry Potter book and the machine interpreting that Harry Potter book. You would have imagination. You will have context: oh, in the last chapter we were in a hilly region or in a valley, I think it will be like this; the words like mist, cold, wood, and you have already started forming images and visualising stuff. The machine doesn't do that. The machine works on: this is the word, this is a pronoun, this is the noun, this is the structure of language, so the next one should be this, right? So, coming back to natural language understanding, that is where the context and the form come into play. Just think of some alphabets put in front of you. You have no idea, but these are the alphabet. You recognise A, you recognise B, you recognise the word, but you don't understand the context. One example is: I'm swimming against the current. Now, current here is the motion of water, right? My current code base is version 01. I'm using the same word current, right? The context is different. So interpreting the structure of language is one thing. So, in natural language understanding, what we try to do is understand the context. NLG, Natural Language Generation, is basically how I can respond in a way where I'm giving you an answer to your query. And this combined is NLP. It's a big field. There was research done, the professor is Emily Bender, and she is one of the leading professors in the NLP space. So the experiment was very funny. It was about a parrot on an island talking to someone, and there was a shark in between, or some sea creature, which basically broke the connection and was listening to what this person was saying and mimicking. Again, this is the problem with NLP, right? You don't have understanding of the context. You don't put empathy into it. You don't understand the voice modulation. Like when I'm talking to you, you can judge what my emotional cues are, you can put in empathy, you can tailor the conversation. If I'm feeling sad, you can put a different spin on it, whereas if I'm chatting to a robot, it's just going to give a standard response. So again, you have to be very careful about which situation you're going to use it in, whether it is for a small team, whether it is going to be in public, stuff like that.
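The "swimming against the current" example above can be made concrete: a purely form-based (bag-of-words) comparison registers the shared token but has no access to the two different meanings. A toy sketch (the sentences are the ones used above; the code is illustrative only):

```python
# Toy illustration of "form without understanding": a bag-of-words view
# sees that both sentences contain the token "current", but carries no
# notion of whether it means moving water or a software version.
from collections import Counter

s1 = "I am swimming against the current"
s2 = "my current code base is version 01"

bag1 = Counter(s1.lower().split())
bag2 = Counter(s2.lower().split())

print(bag1 & bag2)  # Counter({'current': 1}) -- same token, unrelated meanings
```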
Ula Ojiaku: So that's interesting, because sometimes I join the Masters of Scale strategy sessions, and at the last one there was someone whose startup was featured, and apparently what their startup is doing is building AI solutions that are able to do sentiment analysis. And I think some of these are, again, in their early stages, but some of these things are already available, trying to understand the tone of voice, the words people say, and match it with maybe the expression, and they can actually transcribe virtual meetings and say, okay, this person said this, they looked perplexed or they looked slightly happy. So what do you think about that? I understand you're saying that machines can't do that, but it seems like there are already organisations trying to push the envelope in that direction.

Bala Madhusoodhanan: So in the example that you gave, the sentiment of the conversation, again, it is going by the structure or the words that I'm using. I am feeling good. So good, here, is a positive sentiment. Again, for me the capability is slightly overhyped. The reason is that it might do 20 or 30 percent of what a human might do, but the human is any day better at that particular use case, right? So sentiment analysis typically works on a sentiment data set, which would say: these are certain proverbs, these are certain types of words, these generally refer to a positive sentiment or a good sentiment or feel-good factor; but the model is only as good as the data, right? No one is going in and constantly updating that dictionary. No one is thinking about it; like, Gen Z have a different lingo, millennials had a different lingo. So, again, you have to treat it use case by use case, Ula.

Ula Ojiaku: At the end of the day, the way things currently are is that machines aren't at the place where they are as good as humans. Humans are still good at doing what humans do, and that's the key thing.

Bala Madhusoodhanan: An interesting use case that I read recently, probably after COVID, was immersive reading, for people with dyslexia. So again, AI is used for good as well; I'm not saying it is completely bad. So AI is used for good, like teaching kids who are dyslexic, right? Speech to text can talk, or can translate a paragraph, and the kid can hear it, and on the screen, I think OneNote has an Immersive Reader, it actually highlights which word it is uttering into the ears, and a research study showed that kids who were part of the study group with this immersive reading audio textbook had a better grasp of the context, performed well, and were able to manage dyslexia better. Now, again, we are using the technology, but again, kudos to the research team: they identified a real problem, they formulated how the problem could be solved, and they were successful. So, again, technology is being used for good. Cancer research: they invest heavily in image clustering, brain tumours. I mean, there are a lot of use cases where it's used for good, but then again, when you're using it, you just need to think about biases. You need to understand the risk. I mean, everything is risk and reward. If your reward is outweighing the minimal risk that you're taking, then it's acceptable.

Ula Ojiaku: What would you advise leaders of organisations who are considering implementing AI solutions? What are the things we need to consider?

Bala Madhusoodhanan: Okay. So, going back to business strategy and growth. So that is something that enterprises or big organisations would have in mind. Always have your AI goals aligned to what they want. So, as I said, there are two buckets. One is your efficiency driver, the operational efficiency bucket. The other one is your innovation bucket. Just have a sense check of where the business wants to invest in.
Just because AI is there doesn't mean you have to use it, right? Look into opportunities where you can drive more value. So that would be my first line of thought. The second would be more to do with educating leaders about AI literacy: what the models are, what they do, what the pitfalls are, the ethical awareness about the use of AI; data privacy is big. So again, that education is just at a high level, with some examples from the same business domain where it has been successful, where it has been not so successful, and what challenges they faced. That's something that I would urge everyone to invest time in. I think I did mention security. Again, over the years, the practice has been that security is always kept till last. So again, I was fortunate enough to work in organisations where a security-first mindset was put in place, because once you have a proof of value, once you show that to people, people get excited, and it's about messaging it and making sure it is very secure, protecting the end users. So the third one would be about having security-first design policies or principles. Machine learning or AI is no good if your data quality is not there. So having a data strategy is something that I would definitely recommend. Start small. I mean, just like agile: you take a value, you start small, you realise whether your hypothesis was correct or not, you monitor how you performed, and then you think about scale. Just "hello world" doesn't mean that you have mastered it. So have that mindset: start small, monitor, have constant feedback, and then think about scaling.

Ula Ojiaku: What are the key things about ethics and AI that you think leaders should be aware of at this point in time?

Bala Madhusoodhanan: So again, ethics is very subjective. So it's about having different stakeholders give their honest opinion on whether your solution is the right thing to do against the values of the enterprise. And it's not your view or my view, it's a consensus view, and in certain things where people are involved, you might need to get HR, you might need to get legal, you might need to get the brand reputation team to come and assist you, because you don't understand the why behind certain policies that were put in place. So one is: is the solution, or is the AI, ethical with respect to the core values of the enterprise? That's the first sense check that you need to do. If you pass that sense check, then come a lot of other threats, I would say, like: did the model that I'm using have a fair representation of all data sets? There's a classic case study of one of the big cloud computing giants using an AI algorithm to filter resumes, and they had to stop it immediately because the data set was all Ivy League, male, white dominated; it didn't have the right representation. Over 10 years, if I'm just hiring a certain type of people, my data is inherently biased, no matter how good my algorithm is, if I don't have that data set. The other example is Clarifai. They got into trouble for using very biased data to give an outcome on some decision making related to immigration, which has bigger ramifications. Then you talk about fairness: whether the AI system is fair in giving you an output. So there was a funny story about a man and a woman in California living together, and I think the woman wasn't given a credit card, even though everything is the same: the postcode is the same, both of them work in the same company. And it was, I think it has to do with Apple Pay.
Apple Pay wanted to bring in a silver credit card, the Apple Card or whatever it is, but then it was so unfair that the woman, who was equally qualified, was not given the right credit limit, and the bank simply said the algorithm said so. Then you have the privacy concern, right? So all these generic models that are available, even ChatGPT for that matter. Now you can chat with ChatGPT multiple times. You can talk about someone like Trevor Noah and you can say, hey, can you create a joke? Now it has been trained with the jokes that he has done, which might be available publicly. But has the creator of the model got consent, saying, hey Trevor, I'm going to use your content so that I can give better answers? And how many such consents have been taken? Even Wikipedia: if you look into Wikipedia, about 80 percent of the information is public, but it is not diversified. What I mean by that is you can search for a lot of information if the person is from America or from the UK or from Europe, maybe from India to some extent, but what is the quality of data if you think about countries in Africa? What do you think about South America? I mean, it is not representing the total diversity of data, and we have these large language models which have been trained just on that data, right? So there is a bias, and because of that bias, your outcome might not be fair. So these two are the main things, and of course the privacy concern. So if someone goes and says, hey, you have used my data, you didn't even ask me, then you're into a lawsuit. Without getting proper consent, again, it's a bad world, it's very fast moving, and people don't even, including me, I don't even read every term and condition, I just scroll down, tick, confirm, but those things are where I think education should come into play. Think about it, because people don't understand what could go wrong; not to them, but to someone like them. Then there is a big fear of job displacement: if I put this AI system in place, what will I do with my workforce? Say I had ten people; you need to think about it, you need to reimagine your workplace. These are the ten jobs my ten people are doing. If I augment six of those jobs, how can I use my ten resources effectively to do something different? That piece of the puzzle, again, always goes back to the core values of the company, what they think about their people, how everything maps back, but it just needs a lot of input from multiple stakeholders.

Ula Ojiaku: It ties back to the enterprise strategy, there are the values, but with technology as it has evolved over the years, things will be made obsolete, but there are new opportunities that are created: so moving from when people travelled with horses and buggies to when the automobile came up. Yes, there wasn't as much demand for horseshoes and horses and buggies, but there was a new industry, the people who would be mechanics, or garages, and things like that. So I think it's really about that. Like, going back to what you're saying: how can you redeploy people? And that might involve, again, training, reskilling, and investing in education of the workforce so that they're able to harness AI and do those creative things that you've emphasised over this conversation about human beings, that creative aspect, that ability to understand context and nuance and apply it to the situation.

Bala Madhusoodhanan: So I was fortunate to work with ForHumanity, an NGO which basically is trying to certify people to audit AI systems.
So the EU AI Act is now in place, and it will be enforced soon. So you need people to have controls on all these AI systems to protect: it's done to protect people, it's done to protect the enterprise. So I was fortunate enough to be part of that community. I'm still working closely with the Operational Research Society. Again, you should be passionate enough, you should find time to do it, and if you do it, then the universe will find a way to give you something interesting to work with. And The OR Society, The Alan Turing Institute, the ForHumanity Society: I had a few ICO workshops, which were quite interesting, because when you hear perspectives from people from different facets of life, like lawyers and solicitors, you would think, ah, this statement, I wouldn't interpret it in this way. It was a good learning experience, and I'm sure if I have time, I will continue to do that and invest time in ethical AI. And it's not only AI, it's the ethical use of technology, so sustainability is also part of the ethical bucket if you look into it. So there was an interesting paper that talks about how many data centres have been opened between 2018 and 2024, which is like six years, and the power consumption has gone from X to two or three times X; we have opened a lot. We have already caused damage to the environment with all this technology, and just because the technology is there, it doesn't mean you have to use it. But again, it's that educational bit: what is the right thing to do? And even ESG awareness, people are not aware. Like now, if you go to the current TikTok trendsetters, they know: I need to look for a certified B Corp when I am buying something. The reason is because they know, and they're more passionate about saving the world. Maybe we are not, I don't know, but again, once you start educating and telling those stories, humans are really good, so you will have a change of heart.

Ula Ojiaku: What I'm hearing you say is that education is key to helping us make informed choices. There is a time and place where you would need to use AI, but not everything requires it, and if we're more thoughtful in how we approach these, because these are tools at the end of the day, then we can at least try to be more balanced in weighing the opportunities against the risks, and the impact these decisions and the tools that we choose to use have on the environment. Now, what books have you found yourself recommending most to people, and why?

Bala Madhusoodhanan: Because we have been talking about AI: AI Superpowers is one book, written by Kai-Fu Lee. There is this book by Brian Christian, The Alignment Problem: Machine Learning and Human Values. It was basically talking about: what are the human values? Where do you want to use machine learning? How do you come up with decision making? That's a really interesting read. Then there is a book called Ethical Machines by Reid Blackman. It talks about all the ethical facets of AI, like bias, fairness, data privacy, transparency, explainability, and he gives quite detailed examples and walkthroughs of what that means. Another interesting book was Wanted: Human-AI Translators: Artificial Intelligence Demystified by a Dutch professor; again, a really, really lovely narration of what algorithms are, what AI is, and everything you should think about, what controls, and stuff like that. So that is an interesting book.
Harvard Professor Karim Lakhani wrote a book called Competing in the Age of AI; that's a good book. The Algorithmic Leader: How to Be Smart When Machines Are Smarter Than You by Mike Walsh is another good book, which I finished a couple of months back.

Ula Ojiaku: And if the audience wants to find you, how can they reach out to you?

Bala Madhusoodhanan: They can always reach out to me on LinkedIn; I would be happy to touch base through LinkedIn.

Ula Ojiaku: Awesome. And do you have any final words and/or an ask of the audience?

Bala Madhusoodhanan: The final word is, again, responsible use of technology. Think about not just the use case, think about the environmental impact, think about the future generations, because I think the damage is already done. So, at least not in this lifetime, maybe three or four lifetimes down the line, it might not be the beautiful earth that we have.

Ula Ojiaku: It's been a pleasure, as always, speaking with you, Bala, and thank you so much for sharing your insights and wisdom, and thank you for being a guest on the Agile Innovation Leaders Podcast.

Bala Madhusoodhanan: Thank you. Lovely conversation, and yeah, looking forward to connecting with more like-minded LinkedIn colleagues.

Ula Ojiaku: That's all we have for now. Thanks for listening. If you liked this show, do subscribe at www.agileinnovationleaders.com or your favourite podcast provider. Also share with friends and do leave a review on iTunes. This would help others find this show. I'd also love to hear from you, so please drop me an email at ula@agileinnovationleaders.com Take care and God bless!
"The world doesn't like those who are different, Markus. Don't let anyone tell you who you should be."This week marks the start of the analysis series on Detroit: Become Human! Rick is joined by Nomad (The Retro Wildlands) to dissect Quantic Dream's 2018 narrative-based game. Citizenship, sentience, equity...are artificial intelligence machines - that is to say, androids - deserving of such things? What are the ethical implications of such a high level of integration in society? For better or for worse, David Cage's tale sidesteps pondering questions of ethics and philosophy for a story centered around a cast of likeable characters. Star studded, too. This episode covers through Chapter 5. We hope you enjoy!Alex O'Connor interviews Nick Bostrom on AIBryan Dechart plays Detroit Become HumanBehind the Music of Detroit Become HumanClick on the following to find PPR on the web!PatreonJoin our DiscordTwitter Instagram Bluesky Thank you for listening! Want to reach out to PPR? Send your questions, comments, and recommendations to pixelprojectradio@gmail.com! And as ever, any ratings and/or reviews left on your platform of choice are greatly appreciated!
What if the world around us, we ourselves, and all the laws of physics we understand as the world are nothing but lines of code? Depending on what you accept, it is even quite likely that we are all just a computer simulation. Today we set off down a winding path through philosophy, religion, science, and speculation to update existential questions asked millennia ago.

This is another episode of Escuta Essa, a weekly podcast in which Denis and Danilo swap jaw-dropping, mind-blowing stories. Every Wednesday, in your favorite podcast app, it's one's turn to tell the other a tale. Don't forget to send Escuta Essa episodes to that person you also like to share stories with, and send your comments and questions on Spotify, on social media, or by e-mail at escutaessa@aded.studio. We always read messages at the end of each episode!

...

IN THIS EPISODE
• The BBC published a translated version of an article by professor Melvin Vopson, who argues that open questions in quantum physics may explain the simulation hypothesis.
• Nick Bostrom's famous paper, cited by everyone who discusses the simulation hypothesis, is called "Are You Living In a Computer Simulation?" and can be read in full on the philosopher's website.
• The Kurzgesagt channel has a good video explanation of the Fermi Paradox and the Great Filter.
• The 1999 film "The Matrix" was written and directed by Lilly and Lana Wachowski and can be streamed on Max.
• The game "The Sims" was developed by Maxis and published by Electronic Arts. Its first version came out in 2000, and it turned 25 this Tuesday (the 4th).
• The physicist Stephen Hawking argued that "philosophy is dead" in his book "The Grand Design", released in 2010 and co-written with Leonard Mlodinow.
• The German physicist Sabine Hossenfelder explains on her YouTube channel why she considers the simulation hypothesis pseudoscience.
• The concept of emergence, in which small and often simple parts form complex systems, is explained in this text by professor Francisco Rodrigues of USP.

...

AD&D STUDIO
AD&D produces podcasts and videos that entertain and respect your intelligence! Follow all episodes at aded.studio so you don't miss anything.
Zizians, Rationalist movement, Peter Thiel, Eliezer Yudkowsky, neoreaction, Accelerationism, Curtis Yarvin, AI, AI apocalypse, machine learning, psychedelics, Effective Altruism (EA), Sam Bankman-Fried, Extropianism, Thiel & Yudkowsky as Extropians, Discordianism, life extension, space colonization, cryptocurrencies, Yudkowsky as self-educated, Nick Bostrom, Center for Applied Rationality (CFAR), Rationalism's use of magical thinking, New Thought, Roko's Basilisk, Nick Land, predicting the future, LessWrong, LessWrong's relationship to the Zizians, Ziz, non-binary/trans, vegan Siths, Vasserites, murders linked to Zizians, Zizians in Vermont, Luigi Mangione indirectly influenced by Zizianism, Brian Thompson assassination, ChangeHealthcare hack, were the hack and assassination targeting UnitedHealth Group influenced by this milieu?, is the Trump administration radicalizing Zizians?, Yudkowsky's links to Sam Bankman-Fried, Leverage Research/Center for Effective Altruism & MK-ULTRA-like techniques used by, are more cults coming from the Rationalist movement?

Additional Resources:

Leverage Research:
https://medium.com/@zoecurzi/my-experience-with-leverage-research-17e96a8e540b#c778

MIRI/Center for Applied Rationality (CFAR):
https://www.lesswrong.com/posts/MnFqyPLqbiKL8nSR7/my-experience-at-and-around-miri-and-cfar-inspired-by-zoe

Music by: Keith Allen Dennis
https://keithallendennis.bandcamp.com/

Additional Music: J Money

Get bonus content on Patreon
Seriah is joined by the duo of engineer Jim Elvidge (author of “The Universe-Solved” and “Digital Consciousness”) and physicist Brian Geislinger (author of numerous academic papers and physics professor at Gadsden State Community College) to take a deep dive on simulation theory. Topics include Nick Bostrom, Tom Campbell, Brian Whitworth, quantum mechanics, Eastern philosophy, a future advanced AI, Melvin Vopson, a connection between simulation theory and Covid-19, the Second Law of Thermodynamics, entropy, information and matter, informational entropy, life as denying physical laws, an analogy involving a cup of coffee, compressing data, the observer effect, the differences between physics at the classical scale and at the subatomic scale, quantum tunneling, quantum entanglement, patterns in nature, Albert Einstein and Relativity, Dean Radin and psi research, a video game analogy, holographic theory, cellular automaton theory, Plato's cave, Déjà vu, string theory, James Gates, quadratic equations, mathematical reality vs physical reality, time as a physical dimension, Cartesian coordinates, imaginary numbers, information theory, the book “The Invisible Gorilla”, the human memory, modeling biological behavior, optical illusions, slime mold learning, a disturbing experiment on rats, lobotomies and other extreme brain surgery, severe epilepsy, “Beacon 23” TV series, anomalous brain formation, brain damage without disability, a fascinating academic psi study, questions about free will and MRIs, explanations for precognition, a complicated prophetic dream, experiences with precognitive dreams, dream time, information sent back in time, poltergeist activity, “Mandela” effects, the nature of time, the Buddhist concept of “Maya”, possible non-existence of time/a static universe, perception and reality, the “Matrix” films, and much more! This is a fascinating discussion of simulation theory with people who can intelligently discuss it, making complex concepts understandable without ever condescending to the listeners! This is a truly exceptional episode! Recap by Vincent Treewell of The Weird Part Podcast. Outro music by Peaches & Crime with “Innsmouth Town”. Hosted on Acast. See acast.com/privacy for more information.
Is reality real—or just a highly advanced illusion? In this mind-bending episode of Conspiracy Files, we explore the idea that our universe might be nothing more than a simulation created by a higher intelligence. From the groundbreaking arguments of philosophers like Nick Bostrom to the eerie "glitches" people claim to experience in everyday life, we dig into the evidence that suggests the Matrix may be more than just a movie. Could déjà vu, quantum physics, or even artificial intelligence hold the key to unlocking the truth? Join us as we question everything you think you know about existence and ask the ultimate question: Is life just a game we're all playing?
Are we all living in The Matrix? Neil deGrasse Tyson sits down with actor Laurence Fishburne to explore the science of The Matrix, simulation theory, and who has the better deep voice. Would you take the red pill? NOTE: StarTalk+ Patrons can listen to this entire episode commercial-free here: https://startalkmedia.com/show/into-the-matrix-with-laurence-fishburne/Thanks to our Patrons james martindale, Henry GLover, Steven Weber, Evan, Qaisar75, Moe, Denise Edwards, Micheal J Trietsch, Randy Frankel, John Mortimer, Austin Croley, Chase J, Kathryn Cellerini Moore, adijan Oda, Markus McLaughlin, Dan, 1 Eleven, Dustin Morell, Siva Kumar, Brandon Smith, Ken Zebarah, Steven Dominie, Layf Carlson, st.johnstantine, Thimon De Raad, Scott Payne, Micheal Williams, Ricardo Piras, Troy camilleri, lioz balky, s, and CeeJay for supporting us this week. Subscribe to SiriusXM Podcasts+ on Apple Podcasts to listen to new episodes ad-free and a whole week early.
Happy New Year 2025! To celebrate, here is an encore of what proved to be the most popular episode of 2024. This rerun combines episodes 30 and 31 into one epic journey towards the frontiers of human understanding. My guest is Donald Hoffman. Our topics are consciousness, cosmos, and the meaning of life. Enjoy! Original show notes: Laws of physics govern the world. They explain the movements of planets, oceans, and cells in our bodies. But can they ever explain the feelings and meanings of our mental lives? This problem, called the hard problem of consciousness, runs very deep. No satisfactory explanation exists. But many think that there must, in principle, be an explanation. A minority of thinkers disagree. According to these thinkers, we will never be able to explain mind in terms of matter. We will, instead, explain matter in terms of mind. I explored this position in some detail in episode 17. But hold on, you might say. Is this not contradicted by the success of natural sciences? How could a mind-first philosophy ever explain the success of particle physics? Or more generally, wouldn't any scientist laugh at the idea that mind is more fundamental than matter? No — not all of them laugh. Some take it very seriously. Donald Hoffman is one such scientist. Originally working with computer vision at MIT's famous Artificial Intelligence Lab, Hoffman started asking a simple question: What does it mean to "see" the world? His answer begins from a simple idea: perception simplifies the world – a lot. But what is the real world like? What is “there” before our perception simplifies the world? Nothing familiar, Hoffman claims. No matter. No objects. Not even a three-dimensional space. And no time. There is just consciousness. This is a wild idea. But it is a surprisingly precise idea. It is so precise, in fact, that Hoffman's team can derive basic findings in particle physics from their theory. A fascinating conversation was guaranteed. I hope you enjoy it. If you do, consider becoming a supporter of On Humans on Patreon.com/OnHumans. MENTIONS Names: David Gross, Nima Arkani-Hamed, Edward Witten, Nathan Seiberg, Andrew Strominger, Edwin Abbott, Nick Bostrom, Giulio Tononi, Keith Frankish, Daniel Dennett, Steven Pinker, Roger Penrose, Sean Carroll, Swapan Chattopadhyay Terms (Physics and Maths): quantum fields, string theory, gluon, scattering amplitude, amplituhedron, decorated permutations, bosons, leptons, quarks, Planck scale, twistor theory, M-theory, multiverse, recurrent communicating classes, Cantor's hierarchy (relating to different sizes of infinity... If this sounds weird, stay tuned for a full episode on infinity. It will come out in a month or two.) Terms (Philosophy and Psychology): Kant's phenomena and noumena, integrated information theory, global workspace theory, orchestrated objective reduction theory, attention schema theory Books: The Case Against Reality by Hoffman, Enlightenment Now by Steven Pinker Articles etc.: For links to articles, courses, and more, see https://onhumans.substack.com/p/links-for-episode-30
Jorge Fontevecchia in conversation with the Swedish transhumanist philosopher and theorist of artificial intelligence.
In this conversation, Professor Nick Bostrom discusses his book 'Deep Utopia' and explores the implications of transformative technologies on human life and meaning. He contrasts the potential positive outcomes of AI development with the risks, emphasizing the need for alignment and ethical considerations. He also shares insights on the rapid advancements in AI and the philosophical questions surrounding existence and purpose in a potentially utopian future. Learn more about your ad choices. Visit megaphone.fm/adchoices
AKA "David Deutsch DESTORYS the Simulation Hypothesis" Bruce take a deep dive into solipsism in the form of the brain in a vat thought experiment, Nick Bostrom's simulation hypothesis, and related ideas. Does the Church-Turing-Deutsch thesis suggest we could live in a simulation? What does critical rationalism say about these theories? --- Support this podcast: https://podcasters.spotify.com/pod/show/four-strands/support
Today's guest is Nick Bostrom, a prominent philosopher and the Director of the Future of Humanity Institute (FHI) at Oxford University. He also leads the Governance of AI Program (GovAI) within the FHI. Renowned globally, his expertise spans existential risk, the anthropic principle, ethics surrounding human enhancement, whole brain emulation, superintelligence risks, and the reversal test. In this special episode of the “AI Futures” series on the AI in Business podcast, we offer an exclusive sample from the Trajectory podcast, hosted by Emerj CEO and Head of Research Daniel Faggella. This excerpt features a fascinating conversation with Nick centering on the concept of a “worthy successor”—post-human intelligences that could expand beyond human capabilities while preserving essential human values. Drawing from his latest book, Deep Utopia, Bostrom explores the potential for AI to “go right,” offering a rare glimpse of optimism for the future of artificial intelligence and its alignment with moral progress. If you're interested in getting more perspectives on AI's longer term impact on business and society, be sure to tune into the Trajectory podcast. You can find the YouTube and podcast links here: emerj.com/tj2
In this program we explore the philosophy behind the Transhumanist movement, trying to understand its ideas and the extent to which they are driving social changes toward its implementation. We are talking about a movement that seeks to redefine the human being into another kind of being on a technological foundation. Keep in mind that this movement is not fictional: it has its theorists, such as David Pearce and Nick Bostrom, and its proposals are widely published and debated; none of it is hidden. The movement goes beyond a mere transformation of society through technological advances: it is not only the environment that is transformed, but the human being as well, whose evolution and nature would be completely redefined. Music: A.Torres Ruiz: -"Orchestra Celesta" Kai Engel: -"Mist and Clouds" Maryna: -"Uplifting Emotion Background" http://creativecommons.org/licenses/by-nc-sa/3.0/ Film: "V for Vendetta" (2006, James McTeigue)
Are we living in reality—or something else entirely? The boys dive into the Simulation Hypothesis, exploring the provocative idea that our world might be an advanced computer simulation. This theory, first formally introduced by philosopher Nick Bostrom in 2003, challenges everything we think we know. Join Sean, Jorge, and Eric as they trace the roots of this idea, starting with ancient philosophies. They uncover fascinating parallels: Plato's cave allegory, where shadows on a wall symbolize a distorted perception of reality; Zhuangzi's butterfly dream, blurring the line between dream and existence; and Hindu philosophy's concept of Maya, the illusion that masks the true nature of the universe. From ancient Greece to the Aztecs' dreamlike view of life, the boys connect the dots to our modern understanding of simulated realities. Fast forward to the rise of computers and AI in the 20th century. The industrial revolution sparked the imagination, and video games like The Sims raised philosophical questions. By the time The Matrix hit theaters in 1999, the Simulation Hypothesis had entered pop culture. The boys discuss how Nick Bostrom's trilemma builds on these ideas, suggesting we're likely part of a simulation rather than the original “base reality.” But how would we know? They explore “clues” in physics, like the observer effect in quantum mechanics or the mathematical precision of the universe. Could moments like déjà vu or the Mandela Effect be glitches in the simulation? Of course, there's always room for conspiracy. The boys speculate about hidden controllers—are we part of an experiment, a punishment, or even entertainment for advanced beings? From ethical questions about free will to wild theories about escape routes and secret cabals, this episode is packed with mind-bending ideas. Buckle up as the boys question reality itself in this journey through philosophy, science, and the mysterious limits of human perception. Listen now to see if you're ready to take the red pill. Patreon -- https://www.patreon.com/theconspiracypodcast Our Website - www.theconspiracypodcast.com Our Email - info@theconspiracypodcast.com
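The arithmetic behind Bostrom's trilemma, as discussed in the episode above, is compact enough to sketch in a few lines. The numbers below are purely illustrative placeholders (they are not taken from Bostrom's paper or from the episode); the point is only that the simulated fraction rushes toward 1 once simulations are plentiful:

```python
# Sketch of the simulation-argument arithmetic (all inputs are hypothetical).
f_posthuman = 0.01        # fraction of civilizations that reach technological maturity
f_interested = 0.1        # fraction of mature civilizations that run ancestor simulations
sims_per_civ = 1_000_000  # average simulations run by each interested civilization

# Expected number of simulated histories per one real history.
n_sim = f_posthuman * f_interested * sims_per_civ

# Fraction of all human-like observers who live in a simulation:
# n_sim simulated histories vs. the single real one.
f_sim = n_sim / (n_sim + 1)
print(f"simulated observers: {f_sim:.4%}")  # -> 99.9001% with these placeholders
```

With these placeholder inputs, roughly 999 of every 1,000 human-like observers would be simulated; shrinking the inputs only delays that conclusion unless one of the trilemma's first two horns (extinction before maturity, or near-universal disinterest in simulations) holds.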
Planet Classroom's AI for a Better World series debuts its new podcast, Highlights Take 2 - Nick Bostrom Envisions Our AI Future, produced by April Klein. This episode offers key insights into how artificial intelligence will revolutionize jobs, reshape essential skills, and create global opportunities, all while navigating critical risks and ethical challenges. Philosopher Nick Bostrom explores AI's transformative role across industries such as healthcare, education, and climate, stressing the importance of adapting the workforce. The podcast highlights the need for critical thinking, digital literacy, and ethical AI development—essential listening for those preparing for an AI-driven future.
Artificial Intelligence (AI) has changed dramatically in recent years. Today, AI is improving many industries and changing how we live. AI is now a key part of our future, making our lives easier and more efficient. But what is it that we need to look out for? What are some necessary measures to be taken to ensure a smooth and safe adoption of this technology? Watch this episode to learn about the past, present and future of AI. Your host, Mukesh Bansal, not only takes us through the journey of AI but also advises us on how to navigate this technological era. Resource List - More about Physics Nobel Prize - https://www.nobelprize.org/prizes/physics/2024/press-release/ More about Chemistry Nobel Prize - https://www.nobelprize.org/prizes/chemistry/2024/press-release/ More on Research behind Chemistry Nobel Prize Winners - https://youtu.be/KMfgV2QSlns?feature=shared Video by Deepmind on AlphaFold Server Demo - https://youtu.be/9ufplEgtq8w?feature=shared More on the first AI conference - https://home.dartmouth.edu/about/artificial-intelligence-ai-coined-dartmouth Watch the Chess Match between IBM Deep Blue v/s Garry Kasparov - https://youtu.be/KF6sLCeBj0s?feature=shared Watch IBM Watson on Jeopardy! - https://youtu.be/lI-M7O_bRNg?feature=shared 3Blue1Brown YouTube Channel - https://www.youtube.com/@3blue1brown Books from the episode: Perceptrons by Minsky - https://amzn.in/d/c8xn3Fh Genius Makers by Cade Metz - https://amzn.in/d/3pbTV1R The Master Algorithm by Pedro Domingos - https://amzn.in/d/14AiL3F Superintelligence by Nick Bostrom - https://amzn.in/d/9hhq4td The Worlds I See by Dr. Fei-Fei Li - https://amzn.in/d/0iaga3Y Why Machines Learn by Anil Ananthaswamy - https://amzn.in/d/iiYC45X
Adam Haman returns to complain about the Drake Equation as well as Nick Bostrom's argument that we are almost certainly living in a simulation. Bob provides amplification and devil's advocate feedback.
Mentioned in the Episode and Other Links of Interest:
The YouTube version of this episode.
The Haman Nature page.
The BMS episode featuring his "red dot on the movie screen" argument.
A recent InFi episode making the case against self-aware computers.
A quick explainer on the Boltzmann Brain argument.
Help support the Bob Murphy Show.
What problem do we get after we've solved all other problems? I. Oxford philosopher Nick Bostrom got famous for asking “What if technology is really really bad?” He helped define ‘existential risk’, popularize fears of malevolent superintelligence, and argue that we were living in a ‘vulnerable world’ prone to physical or biological catastrophe. His latest book breaks from his usual oeuvre. In Deep Utopia, he asks: “What if technology is really really good?” Most previous utopian literature (he notes) has been about ‘shallow’ utopias. There are still problems; we just handle them better. There's still scarcity, but at least the government distributes resources fairly. There's still sickness and death, but at least everyone has free high-quality health care. But Bostrom asks: what if there were literally no problems? What if you could do literally whatever you wanted? Maybe the world is run by a benevolent superintelligence who's uploaded everyone into a virtual universe, and you can change your material conditions as easily as changing desktop wallpaper. Maybe we have nanobots too cheap to meter, and if you whisper ‘please make me a five hundred story palace, with a thousand servants who all look exactly like Marilyn Monroe’, then your wish will be their command. If you want to be twenty feet tall and immortal, the only thing blocking you is the doorframe. Would this be as good as it sounds? Or would people's lives become boring and meaningless? https://www.astralcodexten.com/p/book-review-deep-utopia
Share this episode: https://www.samharris.org/podcasts/making-sense-episodes/385-ai-utopia Sam Harris speaks with Nick Bostrom about ongoing progress in artificial intelligence. They discuss the twin concerns about the failure of alignment and the failure to make progress, why smart people don’t perceive the risk of superintelligent AI, the governance risk, path dependence and "knotty problems," the idea of a solved world, Keynes’s predictions about human productivity, the uncanny valley of utopia, the replacement of human labor and other activities, meaning and purpose, digital isolation and plugging into something like the Matrix, pure hedonism, the asymmetry between pleasure and pain, increasingly subtle distinctions in experience, artificial purpose, altering human values at the level of the brain, ethical changes in the absence of extreme suffering, our cosmic endowment, longtermism, problems with consequentialism, the ethical conundrum of dealing with small probabilities of large outcomes, and other topics. Nick Bostrom is a professor at Oxford University, where he is the founding director of the Future of Humanity Institute. He is the author of more than 200 publications, including Anthropic Bias (2002), Global Catastrophic Risks (2008), Human Enhancement (2009), and Superintelligence: Paths, Dangers, Strategies (2014), a New York Times bestseller which sparked the global conversation about the future of AI. His work has framed much of the current thinking around humanity’s future (such as the concept of existential risk, the simulation argument, the vulnerable world hypothesis, astronomical waste, and the unilateralist’s curse). He has been on Foreign Policy’s Top 100 Global Thinkers list twice, and was the youngest person to rank among the top 15 in Prospect’s World Thinkers list. He has an academic background in theoretical physics, AI, computational neuroscience, and philosophy. His most recent book is Deep Utopia: Life and Meaning in a Solved World. Website: https://nickbostrom.com/ Learning how to train your mind is the single greatest investment you can make in life. That’s why Sam Harris created the Waking Up app. From rational mindfulness practice to lessons on some of life’s most important topics, join Sam as he demystifies the practice of meditation and explores the theory behind it.
Welcome to episode #951 of Six Pixels of Separation - The ThinkersOne Podcast. Here it is: Six Pixels of Separation - The ThinkersOne Podcast - Episode #951. When it comes to thinking big about artificial intelligence, I think about what Nick Bostrom is thinking. A philosopher widely known for his thought leadership in AI and existential risk, Nick has spent much of his career asking the kinds of questions most of us avoid. As the founding Director of Oxford's Future of Humanity Institute and a researcher who has dabbled in everything from computational neuroscience to philosophy, Nick's intellectual curiosity knows no bounds. His 2014 book, Superintelligence (a must-read), became a New York Times bestseller, framing global discussions about the potential dangers of artificial intelligence. But now, with his latest book, Deep Utopia - Life and Meaning in a Solved World, Nick shifts the conversation to a more optimistic angle - what happens if everything goes right? Deep Utopia tackles a question that feels almost paradoxical: If we solve all of our technological problems, what's left for humanity to do? Nick presents a future where superintelligence has safely arrived, governing a world where human labor is no longer required, and technological advancements have freed us from life's practical necessities. This isn't just a hypothetical playground for futurists... it's a challenge to our understanding of meaning and purpose in a post-work, post-instrumental society. In this conversation, Nick explores the philosophical implications of a world where human nature becomes fully malleable. With AI handling all instrumental tasks, and near-magical technologies at our disposal, the question shifts from "How do we survive?" to "How do we live well?" It's no longer about the technology itself but about our values, our purpose, and how we define meaning when there are no more problems left to solve. Nick's book is not just a call to prepare for the future; it's an invitation to rethink what life could look like when all of humanity's traditional struggles are behind us. As he dives into themes of happiness, pleasure, and the complexities of human nature, Nick encourages us to reimagine the future - not as a dystopia to fear, but as a deep utopia, where we must rediscover what it means to be truly human in a solved world. This stuff bakes my noodle. Enjoy the conversation… Running time: 49:48. Hello from beautiful Montreal. Subscribe over at Apple Podcasts. Please visit and leave comments on the blog - Six Pixels of Separation. Feel free to connect to me directly on Facebook here: Mitch Joel on Facebook. Check out ThinkersOne. or you can connect on LinkedIn. ...or on Twitter. Here is my conversation with Nick Bostrom. Deep Utopia - Life and Meaning in a Solved World. Superintelligence. Future of Humanity Institute. This week's music: David Usher 'St. Lawrence River'. Chapters: (00:00) - Introduction and Background. (01:17) - The Debate: Accelerating AI Development vs. Delaying It. (06:08) - Exploring the Big Picture Questions for Humanity. (08:44) - The Redefinition of Human Intelligence. (13:12) - The Role of Creativity in AI. (19:41) - Towards a Post-Work Society. (23:53) - Philosophical Questions and the Value of Humanity. (27:36) - The Complex Relationship Between Pleasure and Pain. (30:03) - The Impact of Large Language Models and the Transformer Architecture. (33:03) - Challenges in Developing Artificial General Intelligence. (35:49) - The Risks and Importance of Values in AGI Development. 
(45:19) - Exploring the Concept of Deep Utopia.
There is no shortage of technologists touting the promise of AI, but at the frontier of AI fervor is a noted philosopher who thinks the economy could double every few months—and that space colonization by self-replicating machines may not be hundreds of years away. Enter Nick Bostrom, who previously authored the 2014 bestseller Superintelligence about the dangers of AI, and now considers what can go right with AI in his new book Deep Utopia. Bostrom was formerly a professor at Oxford University, and is currently principal researcher of the Macrostrategy Research Initiative. In this episode, he joins Philipp Carlsson-Szlezak, Chief Economist of BCG, who is skeptical of AI narratives and thinks technology's economic impact has long lagged expectations. They discuss different takes on the likely size and speed of AI's impact on the macroeconomy, and why they disagree about the prospect of tech-driven mass unemployment. Bostrom also explains key themes from Deep Utopia, including stages of utopia, "shallow and deep" redundancy, implications for policy, as well as the unique rhetorical style of the book.
Key topics discussed:
01:45 | Is tech jumping ahead or behind schedule?
03:24 | Is Deep Utopia really a book about AI or about philosophy?
04:39 | Technological unemployment: real or fallacious?
10:54 | Taxonomy of utopia
13:59 | What about public policy, such as UBI?
15:47 | Concept of shallow and deep redundancy
18:50 | Concept of "interestingness"
21:07 | Rhetorical style of the book
23:29 | AI regulation and policy
Additional inspirations from Nick Bostrom: Superintelligence: Paths, Dangers, Strategies (Oxford University Press, 2014)
This podcast uses the following third-party services for analysis: Chartable - https://chartable.com/privacy
What if everything you know is just a simulation? In 2022, I was joined by the one and only Nick Bostrom to discuss the simulation hypothesis and the prospects of superintelligence. Nick is a Professor at Oxford University, where he heads the Future of Humanity Institute as its founding director. He is the most-cited professional philosopher in the world aged 50 or under and is the author of some 200 publications, including Anthropic Bias (2002), Global Catastrophic Risks (2008), Human Enhancement (2009), and Superintelligence: Paths, Dangers, Strategies (2014), a New York Times bestseller. With a background in theoretical physics, computational neuroscience, logic, and artificial intelligence, there is no one better to answer this question than him! Tune in. — Key Takeaways: 00:00:00 Intro 00:00:44 Judging a book by its cover 00:05:22 How could an AI have emotions and be creative? 00:08:22 How could a computing device / AI feel pain? 00:13:09 The Turing test 00:20:02 The simulation hypothesis 00:22:27 Is there a "Drake Equation" for the simulation hypothesis? 00:27:16 Penrose's orchestrated objective reduction 00:34:11 SETI and the prospect of extraterrestrial life 00:49:20 Are computers really getting "smarter"? 00:53:59 Audience questions 01:01:09 Outro — Additional resources:
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Subject in Subjective Time: A New Approach to Aggregating Wellbeing (paper draft), published by Devin Kalish on September 17, 2024 on The Effective Altruism Forum. What follows is a lightly edited version of the thesis I wrote for my Bioethics MA program. I'm hoping to do more with this in the future, including seeking publication and/or expanding it into a dissertation or short book. In its current state, I feel like it is in pretty rough shape. I hope it is useful and interesting for people as puzzled by this very niche philosophical worry as me, but I'm also looking for feedback on how I can improve it. There's no guarantee I will take it, or even do anything further with this piece, but I would still appreciate the feedback. I may or may not interact much in the comments section. I. Introduction: Duration is an essential component of many theories of wellbeing. While there are theories of wellbeing that are sufficiently discretized that time isn't so obviously relevant to them, like achievements, it is hard to deny that time matters to some parts of a moral patient's wellbeing. A five-minute headache is better than an hour-long headache, all else held equal. A love that lasts for decades provides more meaning to a life than one that lasts years or months, all else held equal. The fulfillment of a desire you have had for years matters more than the fulfillment of a desire you have merely had for minutes, all else held equal. However, in our day-to-day lives we encounter time in two ways, objectively and subjectively. What do we do when the two disagree? This problem reached my attention years ago when I was reflecting on the relationship between my own theoretical leaning, utilitarianism, and the idea of aggregating interests. Aggregation between lives is known for its counterintuitive implications and the rich discourse around this, but I am uncomfortable with aggregation within lives as well. Some of this is because I feel the problems of interpersonal aggregation remain in the intrapersonal case, but there was also a problem I hadn't seen any academic discussion of at the time - objective time seems to map the objective span of wellbeing if you plot each moment of wellbeing out to aggregate, but it is subjective time we actually care about. Aggregation of these objective moments gives a good explanation of our normal intuitions about time and wellbeing, but it fails to explain our intuitions about time whenever these senses of it come apart. As I will attempt to motivate later, the intuition that it is subjective time that matters is very strong in cases where the two substantially differ. Indeed, although the distinction rarely appears in papers at all, the main way I have seen it brought up (for instance in "The Ethics of Artificial Intelligence[1]" by Nick Bostrom and Eliezer Yudkowsky) is merely to notice there is a difference, and to effectively just state that it is subjective time, of course, that we should care about. I have very rarely run into a treatment dedicated to the "why"; the closest I have seen is the writing of Jason Schukraft[2], with his justification for why it is subjective time that matters for Rethink Priorities' "Moral Weights" project. 
His justification is similar to an answer I have heard in some form several times from defenders: We measure other values of consciousness subjectively, such as happiness and suffering, so why shouldn't we measure time subjectively as well? I believe that, without more elaboration, this explanation both gives no attention to the idea that time matters because it tells us "how much" of an experience there actually is, and seems irrelevant to any theory of wellbeing other than hedonism. It also, crucially, fails to engage with the question of what exactly subje...
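For readers who want the distinction stated compactly, here is a minimal formalization of the two aggregation rules the excerpt contrasts, in notation of my own choosing (the thesis draft itself does not give these formulas):

```latex
% w(t): momentary wellbeing at objective time t
% s(t): rate of subjective time relative to objective time at t
\begin{align*}
  W_{\mathrm{objective}}  &= \int_0^T w(t)\,\mathrm{d}t \\
  W_{\mathrm{subjective}} &= \int_0^T w(t)\,s(t)\,\mathrm{d}t
\end{align*}
```

On the subjective rule, an hour-long headache lived through at half the usual subjective rate counts the same as a thirty-minute headache at the normal rate, which is exactly the "how much experience is actually there" intuition the author says the common hedonist reply leaves unexplained.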
In this special cross-post episode of The Cognitive Revolution, Nathan shares a fascinating conversation between Spencer Greenberg and philosopher Nick Bostrom from the Clearer Thinking podcast. They explore Bostrom's latest book, "Deep Utopia," and discuss the challenges of envisioning a truly desirable future. Discover how advanced AI could reshape our concept of purpose and meaning, and hear thought-provoking ideas on finding fulfillment in a world where technology solves our pressing problems. Join us for an insightful journey into the potential evolution of human flourishing and the quest for positive visions of the future. Originally appeared in Clearer Thinking Podcast: https://podcast.clearerthinking.org/episode/224/nick-bostrom-the-path-to-utopia Check out the Clearer Thinking with Spencer Greenberg Podcast here: https://podcast.clearerthinking.org/ Deep Utopia Book: https://www.amazon.com/Deep-Utopia-Meaning-Solved-World/dp/1646871642/ Apply to join over 400 Founders and Execs in the Turpentine Network: https://www.turpentinenetwork.co/ SPONSORS: Oracle: Oracle Cloud Infrastructure (OCI) is a single platform for your infrastructure, database, application development, and AI needs. OCI has four to eight times the bandwidth of other clouds; offers one consistent price, and nobody does data better than Oracle. If you want to do more and spend less, take a free test drive of OCI at https://oracle.com/cognitive Brave: The Brave search API can be used to assemble a data set to train your AI models and help with retrieval augmentation at the time of inference. All while remaining affordable with developer first pricing, integrating the Brave search API into your workflow translates to more ethical data sourcing and more human representative data sets. Try the Brave search API for free for up to 2000 queries per month at https://bit.ly/BraveTCR Omneky: Omneky is an omnichannel creative generation platform that lets you launch hundreds of thousands of ad iterations that actually work customized across all platforms, with a click of a button. Omneky combines generative AI and real-time advertising data. Mention "Cog Rev" for 10% off https://www.omneky.com/ Weights & Biases Weave: Weights & Biases Weave is a lightweight AI developer toolkit designed to simplify your LLM app development. With Weave, you can trace and debug input, metadata and output with just 2 lines of code. Make real progress on your LLM development and visit the following link to get started with Weave today: https://wandb.me/cr RECOMMENDED PODCAST: This Won't Last. Eavesdrop on Keith Rabois, Kevin Ryan, Logan Bartlett, and Zach Weinberg's monthly backchannel. They unpack their hottest takes on the future of tech, business, venture, investing, and politics. 
Apple Podcasts: https://podcasts.apple.com/us/podcast/id1765665937 Spotify: https://open.spotify.com/show/2HwSNeVLL1MXy0RjFPyOSz YouTube: https://www.youtube.com/@ThisWontLastpodcast CHAPTERS: (00:00:00) About the Show (00:00:22) About the Episode (00:02:58) Introduction to the podcast (00:03:26) Dystopias vs utopias in fiction (00:07:29) Material abundance and utopia (00:14:57) AI and the future of work (00:20:10) AI companions and human relationships (00:22:57) Sponsors: Weights & Biases Weave | Oracle (00:25:01) Sponsor message: Positly research platform (00:26:04) Surveillance and global coordination (00:44:38) Sponsors: Omneky | Brave (00:44:52) Sponsor message: Transparent Replications project (00:46:07) AI governance challenges (00:49:36) Deep Utopia book's purpose (00:53:09) Global coordination strategies (00:59:13) The vulnerable world hypothesis (01:05:18) Bostrom's meta-ethical views (01:08:32) Listener question on meditation (01:10:17) Outro
Today we explore the thought experiment of Roko's Basilisk and infohazards. Support us directly on Patreon: http://www.patreon.com/redweb With Patreon, you get access to ad-free content, our exclusive bonus show Movie Club, the Red Web Discord, and more! What if simply knowing a piece of information could put you at risk? In recent years, this idea has captured the Internet's attention and imagination, leading to philosophical discussions and new levels of scary stories. Today let's discuss the concept known as infohazards. Sensitive topics: information hazards, Roko's Basilisk Here is a link to Nick Bostrom's paper Information Hazards: A Typology of Potential Harms from Knowledge: https://nickbostrom.com/information-hazards.pdf Learn more about your ad choices. Visit megaphone.fm/adchoices
Nick Bostrom is a philosopher known for his work on existential risk, the anthropic principle, human enhancement ethics, whole brain emulation, and superintelligence risks. His recent book, Deep Utopia, explores what might happen if we get AI development right.
Why do there seem to be more dystopias than utopias in our collective imagination? Why is it easier to find agreement on what we don't want than on what we do want? Do we simply not know what we want? What are "solved worlds", "plastic worlds", and "vulnerable worlds"? Given today's technologies, why aren't we working less than we potentially could? Can humanity reach a utopia without superintelligent AI? What will humans do with their time, and/or how will they find purpose in life, if AIs take over all labor? What are "quiet" values? With respect to AI, how important is it to us that our conversation partners be conscious? Which factors will likely make the biggest differences in terms of moving the world towards utopia or dystopia? What are some of the most promising strategies for improving global coordination? How likely are we to end life on earth? How likely is it that we're living in a simulation?
Nick Bostrom is a Swedish-born philosopher with a background in theoretical physics, computational neuroscience, logic, and artificial intelligence, along with philosophy. He's been a Professor at Oxford University, where he served as the founding Director of the Future of Humanity Institute from 2005 until its closure in April 2024. He is currently the founder and Director of Research of the Macrostrategy Research Initiative. Bostrom is the author of over 200 publications, including Anthropic Bias (2002), Global Catastrophic Risks (2008), Human Enhancement (2009), and Superintelligence: Paths, Dangers, Strategies (2014). His work has pioneered many of the ideas that frame current thinking about humanity's future (such as the concept of an existential risk, the simulation argument, the vulnerable world hypothesis, the unilateralist's curse, etc.), while some of his recent work concerns the moral status of digital minds. His most recent book, Deep Utopia: Life and Meaning in a Solved World, was published in March of 2024. Learn more about him at his website, nickbostrom.com.
Staff: Spencer Greenberg — Host / Director; Josh Castle — Producer; Ryan Kessler — Audio Engineer; Uri Bram — Factotum; WeAmplify — Transcriptionists
Music: Broke for Free; Josh Woodward; Lee Rosevere; Quiet Music for Tiny Robots; wowamusic; zapsplat.com
Affiliates: Clearer Thinking; GuidedTrack; Mind Ease; Positly; UpLift
Whether you're a skeptic or a believer, this episode invites you to question the nature of reality and consider the profound implications if we are indeed living in a simulated world. Tune in for a fascinating journey that challenges your perceptions and expands your understanding of what might lie beyond the veil of our perceived reality. Don't miss it!
Nick Bostrom is a renowned philosopher and bestselling author of "Superintelligence" and "Deep Utopia." He joins Big Technology to discuss the potential outcomes of advanced artificial intelligence, from existential risks to utopian possibilities. Tune in to hear Bostrom's thoughts on how humanity might navigate the transition to a world of superintelligent AI and what life could look like in a technologically "solved" world. We also cover the evolution of AI safety concerns, the concept of effective accelerationism, and the philosophical implications of living in a post-scarcity society. Hit play for a mind-expanding conversation about the future of humanity and the profound challenges and opportunities that lie ahead. --- Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice. For weekly updates on the show, sign up for the pod newsletter on LinkedIn: https://www.linkedin.com/newsletters/6901970121829801984/ Want a discount for Big Technology on Substack? Here's 40% off for the first year: https://tinyurl.com/bigtechnology Questions? Feedback? Write to: bigtechnologypodcast@gmail.com
Welcome to Impact Theory, I'm Tom Bilyeu and in today's episode, Nick Bostrom and I dive into the moral and societal implications of AI as it becomes increasingly advanced. Nick Bostrom is a leading philosopher, author, and expert on AI here to discuss the future of AI, its challenges, and its profound impact on society, meaning, and our pursuit of happiness. We touch on treating AI with moral consideration, the potential centralization of power, automation of critical sectors like police and military, and the creation of hyper-stimuli that could impact society profoundly. We also discuss Nick's book, Deep Utopia, and what the ideal human life will look like in a future dominated by advanced technology, AI, and biotechnology. Our conversation navigates through pressing questions about AI aligning with human values, the catastrophic consequences of powerful AI systems, and the need for deeper philosophical and ethical considerations as AI continues to evolve. Don't miss your chance to explore these groundbreaking ideas, challenge your concept of human worth and values, and consider what the future holds for humanity and AI. Follow Nick Bostrom: Website: https://nickbostrom.com/ Buy “Deep Utopia: Life and Meaning in a Solved World”: https://a.co/d/6CucXTX Follow Me, Tom Bilyeu: Website: https://impacttheoryuniversity.com/ X: https://twitter.com/TomBilyeu Instagram: https://www.instagram.com/tombilyeu/ If you want to dive deeper into my content, search through every episode, find specific topics I've covered, and ask me questions. Go to my Dexa page: https://dexa.ai/tombilyeu Themes: Mindset, Finance, World Affairs, Health & Productivity, Future & Tech, Simulation Theory & Physics, Dating & Relationships SPONSORS: Shopify: Go to https://impacttheory.co/shopifyAugust24pod right now and sign up for a $1 per month trial. Butcherbox: Go to https://impacttheory.co/butcherboxpodAugust24 and use code IMPACT at checkout and enjoy your choice of bone-in chicken thighs, top sirloins, or salmon in every box for an entire year, plus get $20 off Eightsleep: Head to https://impacttheory.co/EightSleeppodAugust24 and use code IMPACT to get $350 off your Pod 4 Ultra. Netsuite: Head to https://impacttheory.co/NetsuitepodAugust24 for Netsuite's one-of-a-kind flexible financing program for a few more weeks! SchwankOutdoor: Visit https://impacttheory.co/SchwankGrillsPodAugust24 and use promo code IMPACT to get $150 OFF a Schwank Grill. RangeRover: Explore the Range Rover Sport at https://impacttheory.co/landroverpodAugust24 ZBiotics: Head to https://impacttheory.co/zbioticsAugust24 and use the code IMPACT at checkout for 15% off. - AG1: Get 5 free AG1 Travel Packs and a FREE 1 year supply of Vitamin D with your first purchase at https://impacttheory.co/AG1pod. Aura: Secure your digital life with proactive protection for your assets, identity, family, and tech – Go to https://aura.com/impact to start your free two-week trial. Quickbooks: Go to https://impacttheory.co/quickbooksJuly24 to get 50% off 3 months of Quickbooks Payroll! FOLLOW TOM: Instagram: https://www.instagram.com/tombilyeu/ Tik Tok: https://www.tiktok.com/@tombilyeu?lang=en Twitter: https://twitter.com/tombilyeu YouTube: https://www.youtube.com/@TomBilyeu What's up, everybody? It's Tom Bilyeu here. If you're serious about leveling up your life, I urge you to check out my new podcast, Tom Bilyeu's Mindset Playbook —a goldmine of my most impactful episodes on mindset, business, and health. Trust me, your future self will thank you. 
LISTEN AD FREE + BONUS EPISODES on APPLE PODCASTS: apple.co/impacttheory Learn more about your ad choices. Visit megaphone.fm/adchoices
Nick's influential book Superintelligence and his new book Deep Utopia ... Has Nick flipped on AI doom? ... Life in the “solved world” of an AI utopia ... Can technology address the deepest problems of human nature? ... Meaning, artificial purpose, and the value of golf ... The promise and perils of virtual reality ... Heading to Overtime ...
In episode 148 my guest was Nick Bostrom, professor, philosopher, and author of numerous influential works, including the book "Superintelligence" and the paper "The Simulation Hypothesis". His latest book: Deep Utopia: Life and Meaning in a Solved World https://nickbostrom.com/deep-utopia/ In the episode we touch on the following topics: exploring utopia and dystopia; reshaping purpose and meaning in a solved world; subjective and objective purpose in a solved world; artificial and natural goals in a solved world; rethinking human values for the future; navigating conflicts and finding cooperative solutions; the pursuit of artificial general intelligence; the link between intelligence and consciousness; the gradual replacement of organic matter with silicon ============================================= Subscribe to the newsletter and every Friday receive 5 links picked that week by the creators of the Dialog and RE:MOAT podcasts (books, documentaries, articles, podcast episodes…). https://aidea.si/aidea-mailing-lista ============================================= AIDEA Podcast: Conversations about consciousness, the universe, our culture, technology, and the future of humanity... Conversations about ideas. Hosted by Klemen Selakovič Website: https://aidea.si Instagram: https://www.instagram.com/aidea_podkast/ Tik Tok: https://www.tiktok.com/@klemenselakovic
Michael Sanders is the Co-founder & Chief Storyteller at Horizon, creators of Sequence, the leading development platform for integrating web3 into games. Sequence is on a mission to make web3 easy, fun, and accessible for everyone. Michael is also the author of the best-selling book Ayahuasca: An Executive's Enlightenment.
Get your copy of Personal Socrates: Better Questions, Better Life
Connect with Marc >>> Website | LinkedIn | Instagram | Twitter
Drop a review and let me know what resonates with you about the show! Thanks as always for listening and have the best day yet!
*A special thanks to MONOS, our official travel partner for Behind the Human! Use MONOSBTH10 at check-out for savings on your next purchase. ✈️
*Special props
The World's #1 Personal Development Book Podcast! Join the world's largest non-fiction Book community! https://www.instagram.com/bookthinkers/ Today's episode is sponsored by Ken Rusk, if you're ready to get UNSTUCK check out the links below: https://courses.kenrusk.com/ https://www.kenrusk.com/ ————————————————————————— In today's episode we have the pleasure to interview Nick Bostrom, author of "Deep Utopia" and many others. Nick is a Swedish-born philosopher with a background in theoretical physics, computational neuroscience, logic, and artificial intelligence, along with philosophy. He is known for his work on existential risk, the anthropic principle, human enhancement ethics, whole brain emulation, and superintelligence risks, and has written over 200 publications! He is also a professor and researcher at Oxford University, one of the most-cited philosophers in the world, and has been referred to as "the Swedish superbrain". In this episode, you'll learn about some of the dangers and benefits of A.I., what a world with all of our problems solved could look like, what it would be like in a world with no scarcity, why labor may be unnecessary in the future, what meaning and purpose are today and what they could look like in the future, about the role of spirituality in a world of superintelligence, and much more. Needless to say this was a fascinating conversation that I think you'll really enjoy! We hope you enjoy this incredible conversation with Nick Bostrom. To learn more about Nick and buy his book "Deep Utopia" follow the links below: The Book: https://a.co/d/06gNSBJH Website: https://nickbostrom.com/ Today's episode is sponsored by Audible. Try Audible for free: www.bookthinkers.com/audibletrial. The purpose of this podcast is to connect you, the listener, with new books, new mentors, and new resources that will help you achieve more and live better. Each and every episode will feature one of the world's top authors so that you know each and every time you tune in, there is something valuable to learn. If you have any recommendations for guests, please DM them to us on Instagram. (www.instagram.com/bookthinkers) If you enjoyed this show, please consider leaving a review. It takes less than 60 seconds of your time, and really makes a difference when I am trying to land new guests. For more BookThinkers content, check out our Instagram or our website. Thank you for your time!
Nick Bostrom is a philosopher, professor at the University of Oxford, and an author. For generations, the future of humanity was envisioned as a sleek, vibrant utopia filled with remarkable technological advancements where machines and humans would thrive together. As we stand on the supposed brink of that future, it appears quite different from our expectations. So what does humanity's future actually hold? Expect to learn what it means to live in a perfectly solved world, whether we are more likely heading toward a utopia or a catastrophe, how humans will find meaning in a world that no longer needs our contributions, what the future of religion could look like, a breakdown of all the different stages we will move through en route to a final utopia, the current state of AI safety & risk and much more... Sponsors: Get a 20% discount on Nomatic's amazing luggage at https://nomatic.com/modernwisdom (use code MW20) Get up to 70% off Gymshark's Summer Sale at https://gym.sh/modernwisdom (use code MW10) Get a 20% discount & free shipping on your Lawnmower 5.0 at https://manscaped.com/modernwisdom (use code MODERNWISDOM) Extra Stuff: Get my free reading list of 100 books to read before you die: https://chriswillx.com/books Try my productivity energy drink Neutonic: https://neutonic.com/modernwisdom Episodes You Might Enjoy: #577 - David Goggins - This Is How To Master Your Life: http://tinyurl.com/43hv6y59 #712 - Dr Jordan Peterson - How To Destroy Your Negative Beliefs: http://tinyurl.com/2rtz7avf #700 - Dr Andrew Huberman - The Secret Tools To Hack Your Brain: http://tinyurl.com/3ccn5vkp - Get In Touch: Instagram: https://www.instagram.com/chriswillx Twitter: https://www.twitter.com/chriswillx YouTube: https://www.youtube.com/modernwisdompodcast Email: https://chriswillx.com/contact - Learn more about your ad choices. Visit megaphone.fm/adchoices
In Episode 366 of Hidden Forces, Demetri Kofinas speaks with Nick Bostrom, the founding director of the Future of Humanity Institute and Principal Researcher at the Macrostrategy Research Initiative. Nick Bostrom is also the author of Superintelligence, which was the book that ignited a global conversation about what might happen if AI development goes wrong. In his latest book, Deep Utopia, Bostrom attempts to answer the opposite question – what happens if things go right? At such a point of technological maturity driven by further and further advancements in artificial intelligence, humanity will confront challenges that are philosophical and spiritual in nature. In such a “solved world,” as Nick Bostrom describes it, what will be the point of human existence? What will give life meaning? How should we spend our days if we no longer need to work, exercise, or make political choices? And is such a world consistent with human agency and freedom? These are all questions that Kofinas explores in this expansive and thought-provoking conversation. You can subscribe to our premium content and access our premium feed, episode transcripts, and Intelligence Reports at HiddenForces.io/subscribe. If you want to join in on the conversation and become a member of the Hidden Forces Genius community, which includes Q&A calls with guests, access to special research and analysis, in-person events, and dinners, you can also do that on our subscriber page at HiddenForces.io/subscribe. If you enjoyed listening to today's episode of Hidden Forces, you can help support the show by doing the following:
Subscribe on Apple Podcasts | YouTube | Spotify | Stitcher | SoundCloud | CastBox | RSS Feed
Write us a review on Apple Podcasts & Spotify
Subscribe to our mailing list at https://hiddenforces.io/newsletter/
Producer & Host: Demetri Kofinas
Editor & Engineer: Stylianos Nicolaou
Subscribe and Support the Podcast at https://hiddenforces.io
Join the conversation on Facebook, Instagram, and Twitter at @hiddenforcespod
Follow Demetri on Twitter at @Kofinas
Episode Recorded on 05/27/2024
Join my mailing list https://briankeating.com/list to win a real 4 billion year old meteorite! All .edu emails in the USA
Fr. Michael Baggot joins the show to talk about Transhumanism. Who are the leading thinkers in the movement? What philosophies underpin the movement? What is the goal of transhumanism? How does Transhumanism relate to Transgenderism? Father addresses all these questions. Show Sponsors: Ascension: https://ascensionpress.com/fradd Strive21: https://strive21.com/matt Father's Book: https://www.routledge.com/Enhancement-Fit-for-Humanity-Perspectives-on-Emerging-Technologies/Baggot-Gomez-Carrara-Tham/p/book/9781032115856 Fr.'s Links: https://www.magisterium.com/ https://upra.org https://catholic.tech https://catholicworldview.com @ThoseTwoPriests References: When Harry Became Sally by Ryan T Anderson: https://www.barnesandnoble.com/w/when-harry-became-sally-ryan-anderson/1125792437 The Transhumanist FAQ by Nick Bostrom: https://nickbostrom.com/views/transhumanist.pdf Unfit for the Future by Julian Savulescu: https://www.amazon.com/Unfit-Future-Enhancement-Uehiro-Practical/dp/019965364X Better Than Well by Carl Elliot: https://www.amazon.com/Better-Than-Well-American-Medicine/dp/0393325652 A Free Man's Worship by Bertrand Russell: https://www3.nd.edu/~afreddos/courses/264/fmw.htm The Space Trilogy by CS Lewis: https://www.amazon.com/Space-Trilogy-C-S-Lewis/dp/068483118X The End of Sex by Hank Greely: https://www.amazon.com/End-Sex-Future-Human-Reproduction/dp/0674728963