POPULARITY
Nick Bostrom's simulation hypothesis suggests that we might be living in a simulation created by posthumans. His work on artificial intelligence and superintelligence challenges how entrepreneurs, scientists, and everyone else understand human existence and the future of work. In this episode, Nick shares how AI can transform innovation, entrepreneurship, and careers. He also discusses the rapid pace of AI development, its promise to radically improve our world, and the existential risks it poses to humanity. In this episode, Hala and Nick will discuss: (00:00) Introduction (02:54) The Simulation Hypothesis, Posthumanism, and AI (11:48) Moral Implications of a Simulated Reality (22:28) Fermi Paradox and Doomsday Arguments (30:29) Is AI Humanity's Biggest Breakthrough? (38:26) Types of AI: Oracles, Genies, and Sovereigns (41:43) The Potential Dangers of Advanced AI (50:15) Artificial Intelligence and the Future of Work (57:25) Finding Purpose in an AI-Driven World (1:07:07) AI for Entrepreneurs and Innovators Nick Bostrom is a philosopher specializing in understanding AI in action, the advancement of superintelligent technologies, and their impact on humanity. For nearly 20 years, he served as the founding director of the Future of Humanity Institute at the University of Oxford. Nick is known for developing influential concepts such as the simulation argument and has authored over 200 publications, including the New York Times bestsellers Superintelligence and Deep Utopia. Sponsored By: Shopify - Start your $1/month trial at Shopify.com/profiting. Indeed - Get a $75 sponsored job credit to boost your job's visibility at Indeed.com/PROFITING Mercury - Streamline your banking and finances in one place. Learn more at mercury.com/profiting OpenPhone - Get 20% off your first 6 months at OpenPhone.com/profiting. Bilt - Start paying rent through Bilt and take advantage of your Neighborhood Benefits by going to joinbilt.com/profiting. Airbnb - Find a co-host at airbnb.com/host Boulevard - Get 10% off your first year at joinblvd.com/profiting when you book a demo Resources Mentioned: Nick's Book, Superintelligence: bit.ly/_Superintelligence Nick's Book, Deep Utopia: bit.ly/DeepUtopia Nick's Website: nickbostrom.com Active Deals - youngandprofiting.com/deals Key YAP Links Reviews - ratethispodcast.com/yap Youtube - youtube.com/c/YoungandProfiting LinkedIn - linkedin.com/in/htaha/ Instagram - instagram.com/yapwithhala/ Social + Podcast Services: yapmedia.com Transcripts - youngandprofiting.com/episodes-new Entrepreneurship, Entrepreneurship Podcast, Business, Business Podcast, Self Improvement, Self-Improvement, Personal Development, Starting a Business, Strategy, Investing, Sales, Selling, Psychology, Productivity, Entrepreneurs, AI, Artificial Intelligence, Technology, Marketing, Negotiation, Money, Finance, Side Hustle, Startup, Mental Health, Career, Leadership, Mindset, Health, Growth Mindset, ChatGPT, AI Marketing, Prompt, AI in Business, Generative AI, AI Podcast.
"What are uses of land that the market won't provide but are still worthwhile?"Are you interested in the urban aspects not supported by market, like parks and beauty? What do you think about AI evolution? How can we retrofit the urban fabric for better futures? Interview with Fin Moorhouse, advanced AI researcher at Forethought. We will talk about his vision for the future of cities, AI and its progress, urban retrofit, job automation, beauty as urban externality, and many more. Fin Moorhouse is a researcher at Forethought focused on advanced AI, previously working at Longview Philanthropy and Oxford's Future of Humanity Institute. He co-hosts Hear This Idea, a podcast exploring solutions to pressing global problems. A former Roots of Progress writing fellow, he has contributed to EA initiatives, UN policy discussions, and research on space governance. He studied philosophy at Cambridge.Find out more about Finn through these links:finmoorhouse website@finmoorhouse as Fin Moorhouse on XHear this idea podcast, co-hosted by Fin MoorhouseFin Moorhouse on GoodreadsRoots of Progress websiteOrder Without Design - book by Alain Bertaud, recommended by Fin MoorhouseSoonish - book by Kelly Weinersmith and Zach Weinersmith, recommended by Fin MoorhouseThe Death and Life of Great American Cities - book by Jane Jacobs, recommended by Fin MoorhouseForethought websiteConnecting episodes you might be interested in:No.126 - Interview with Corey Gray about beautyNo.300 - Panel conversation on urban food production with Adam Dorr, Nadun Hennayaka, and Simon BurtNo.304 - Interview with Nick Bray about AI agentsNo.314 - Interview with Andrew Vass about how repeated construction decreases costsNo.323R - Planning ahead for better neighborhood: Long run evidence from TanzaniaWhat was the most interesting part for you? What questions did arise for you? Let me know on Twitter @WTF4Cities or on the wtf4cities.com website where the shownotes are also available.I hope this was an interesting episode for you and thanks for tuning in.Episode generated with Descript assistance (affiliate link).Music by Lesfm from Pixabay
Are we alone in the universe—or already living alongside an ancient alien intelligence? In this mind-bending exploration, Professor Robin Hanson (George Mason University & Oxford's Future of Humanity Institute) breaks down the statistical odds that alien life exists and why it may have already been found in our own solar system. From AI-driven extraterrestrials silently observing us, to the chilling theory that humans are being domesticated by advanced alien civilizations, Hanson reveals where alien life is most likely to emerge, why UFO sightings might actually be real, and how our understanding of “quiet” vs. “loud” aliens could change everything we know about our future. Robin Hanson's Book, The Elephant in the Brain: Hidden Motives in Everyday Life: https://www.elephantinthebrain.com/ BialikBreakdown.com YouTube.com/mayimbialik
Our subject in this episode may seem grim – it's the potential extinction of the human species, either from a natural disaster, like a supervolcano or an asteroid, or from our own human activities, such as nuclear weapons, greenhouse gas emissions, engineered biopathogens, misaligned artificial intelligence, or high energy physics experiments causing a cataclysmic rupture in space and time. These scenarios aren't pleasant to contemplate, but there's a school of thought that urges us to take them seriously – to think about the unthinkable, in the phrase coined in 1962 by pioneering futurist Herman Kahn. Over the last couple of decades, few people have been thinking about the unthinkable more carefully and systematically than our guest today, Seán Ó hÉigeartaigh. Seán is the author of a recent summary article from Cambridge University Press that we'll be discussing, "Extinction of the human species: What could cause it and how likely is it to occur?" Seán is presently based in Cambridge, where he is a Programme Director at the Leverhulme Centre for the Future of Intelligence. Previously he was founding Executive Director of the Centre for the Study of Existential Risk, and before that, he managed research activities at the Future of Humanity Institute in Oxford.
Selected follow-ups:
Seán Ó hÉigeartaigh - Leverhulme Centre Profile
Extinction of the human species - by Seán Ó hÉigeartaigh
Herman Kahn - Wikipedia
Moral.me - by Conscium
Classifying global catastrophic risks - by Shahar Avin et al
Defence in Depth Against Human Extinction - by Anders Sandberg et al
The Precipice - book by Toby Ord
Measuring AI Ability to Complete Long Tasks - by METR
Cold Takes - blog by Holden Karnofsky
What Comes After the Paris AI Summit? - Article by Seán
ARC-AGI - by François Chollet
Henry Shevlin - Leverhulme Centre profile
Eleos (includes Rosie Campbell and Robert Long)
NeurIPS talk by David Chalmers
Trustworthy AI Systems To Monitor Other AI: Yoshua Bengio
The Unilateralist's Curse - by Nick Bostrom and Anders Sandberg
Music: Spike Protein, by Koi Discovery
Promoguy Talk Pills: Agency in Amsterdam dives into topics like Tech, AI, digital marketing, and more drama... Listen on: Apple Podcasts, Spotify
Matt Crawford speaks with author Kristian Rönn about his book, The Darwinian Trap: The Hidden Evolutionary Forces That Explain Our World (and Threaten Our Future). When people talk about today's biggest challenges—pollution, misinformation, artificial intelligence, inept CEOs, and politicians—they tend to frame the conversation around “bad people” doing “bad things.” But is there more to the story? Humans, it turns out, are intrinsically wired to seek short-term success at the expense of long-term prosperity. Kristian Rönn, an entrepreneur formerly affiliated with the University of Oxford's Future of Humanity Institute, calls these deeply rooted impulses “Darwinian demons.” These forces, a by-product of natural selection, can lead us to act in shortsighted ways that harm others—and even imperil our survival as a species. If this evolutionary glitch is left unchecked, the consequences will grow in magnitude as the power of technology accelerates. In this eye-opening work, Rönn shows that we must learn to cooperate in new ways if we are to escape these evolutionary traps in our daily lives and solve our biggest existential threats. Evolution may be to blame for the trap—but humans need not fall for it. Our salvation, he writes, will involve the creation of new systems that understand, track, and manage what humankind values most. Bold, brilliant, and ultimately optimistic, The Darwinian Trap gives readers a powerful new lens on our world and its problems, and invites us to rethink our priorities for the sake of generations to come.
We have another amazing guest for this episode: Anders Sandberg is a visionary philosopher, futurist, and transhumanist thinker whose work pushes the boundaries of human potential and the future of intelligence. As a senior research fellow at Oxford University's Future of Humanity Institute until its closing in 2024, Sandberg explored everything from cognitive enhancement and artificial intelligence to existential risks and space colonization. With a background in computational neuroscience, he bridges science and philosophy to tackle some of the most profound questions of our time: How can we expand our cognitive capacities? What are the ethical implications of radical life extension? Could we one day transcend biological limitations entirely? Known for his sharp intellect, playful curiosity, and fearless speculation, Sandberg challenges conventional wisdom, inviting us to imagine—and shape—a future where humanity thrives beyond its current constraints.
00:00 Introduction
04:18 Exercise & David Sinclair
06:10 Will we survive the century?
18:18 Who can we trust? Knowledge and humility
23:17 Nuclear Armageddon
39:51 Technology as a double-edged sword
44:30 Sandberg origin story
56:54 Computational neuroscience
01:00:30 Personal identity and neural simulation
01:05:24 Personal identity and reasons to want to continue living
01:09:39 The psychology behind different philosophical intuitions and judgments
01:17:48 Is death bad for Anders Sandberg?
01:25:00 Altruism and individual rights
01:31:29 Elon Musk says we must die for progress
01:35:10 Artificial Intelligence
01:55:08 AI civilization
01:02:07 Cryonics
02:04:00 Book recommendations
Hosted on Acast. See acast.com/privacy for more information.
In the long run, Keynes famously quipped, we are all dead. But Swedish entrepreneur Kristian Rönn reverses Keynes to argue that in the short term we, as a species, might also be dead. In his new book, The Darwinian Trap, Rönn argues that we're hardwired to prioritize immediate benefits over long-term consequences, creating existential risks like nuclear war and uncontrolled AI development. Rönn suggests we need better system design with proper incentives to overcome these tendencies. He proposes controlling critical parts of technology supply chains (like AI chips) to ensure responsible use, similar to nuclear nonproliferation treaties. Despite acknowledging all the obvious challenges of these kinds of UN-style regulatory initiatives, Rönn remains hopeful that rational thinking and well-designed systems can help humanity transcend its evolutionary limitations.
Here are the 5 KEEN ON take-aways from our conversation with Kristian Rönn:
* The "Darwinian Trap" refers to how humans and systems are hardwired for short-term thinking due to evolutionary forces, creating both personal and existential risks.
* "Offensive realism" in international politics drives nations to compete for resources and develop increasingly dangerous weapons, creating existential threats through arms races.
* AI poses significant existential risks, particularly as a technology multiplier that could enable more destructive weapons and engineered pandemics.
* System design with proper incentives is crucial for overcoming our evolutionary short-term thinking—we need to "change the rules of the game" rather than blame human nature.
* Strategic control of technology supply chains (like AI chips) could potentially create frameworks for responsible AI development, similar to nuclear nonproliferation treaties.
Kristian Rönn is the CEO and co-founder of Normative, a software tool for sustainability accounting. He has a background in mathematics, philosophy, computer science, and artificial intelligence. Before he started Normative, he worked at the University of Oxford's Future of Humanity Institute on issues related to global catastrophic risks. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit keenon.substack.com/subscribe
Our guest is Anders Sandberg. Anders is a Swedish researcher, futurist and transhumanist. He holds a PhD in computational neuroscience from Stockholm University, and is a former senior research fellow at the Future of Humanity Institute at the University of Oxford. This conversation is about the governance of innovation, and the innovation of governance.
Explore Infinita City:
* Website: www.infinita.city
* X: @InfinitaCity
* The Infinita City Times
* Join Events
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.strandedtechnologies.com
Jim talks with Kristian Rönn, co-founder of the carbon accounting tech company Normative, about his book The Darwinian Trap: The Hidden Evolutionary Forces That Explain Our World (and Threaten Our Future). They discuss Darwinian traps & demons, the parable of Picher, Oklahoma, the "cost of doing business" mentality, beauty filter arms races, perverse incentives in science, Goodhart's law, how nature deals with defection vs cooperation, kamikaze mutants, pandas as evolutionary dead ends, close calls with nuclear weapons, engineered pathogens, AI risk, radical transparency at the nation-state level, reputation systems, types of reciprocity, distributed reputation marketplaces, developing Darwinian demon literacy, local change, and much more. Episode Transcript The Darwinian Trap: The Hidden Evolutionary Forces That Explain Our World (and Threaten Our Future), by Kristian Rönn "Five Rules for Cooperation," by Martin Nowak "The Vulnerable World Hypothesis," by Nick Bostrom Kristian Rönn is a founder, author, and global governance advocate. He pioneered cloud-based carbon accounting by founding Normative, a platform that helps thousands of companies achieve net-zero emissions. A proponent of effective altruism, Kristian advocates for prioritizing the wellbeing of Earth's inhabitants as the key metric for progress. Before Normative, he worked at Oxford's Future of Humanity Institute, focusing on global catastrophic risks and AI. He has contributed to numerous global standards, legislation, and resolutions on climate and AI governance.
Have you ever wondered if our very instincts—those survival mechanisms that got us here—might actually be our biggest threat? Today, we're diving into this fascinating paradox with Kristian Rönn, a brilliant mind who's not only the CEO and co-founder of Normative, the world's first carbon accounting engine, but also a thought leader whose work spans climate policy, philosophy, and artificial intelligence. Before founding Normative, Kristian worked at the University of Oxford's Future of Humanity Institute, researching global catastrophic risks. His new book, The Darwinian Trap, examines how our evolutionary wiring, these so-called ‘Darwinian demons,' drives behaviors that could undermine humanity's future. In this episode, we're exploring everything from the hidden forces shaping our decisions to the existential risks of technology and our fundamental need for global cooperation. Kristian brings a fresh and urgent perspective to the conversation on climate change and societal challenges, and he's got some revolutionary ideas on what it will take to truly evolve beyond our short-term impulses. By the end of this episode, you'll not only understand these deeply ingrained patterns but hopefully feel inspired to start recognizing them in your own life—and maybe even consider how we, collectively, can work toward a more sustainable future.
Episode highlights:
00:25 Meet Kristian Rönn: CEO and Thought Leader
02:40 Understanding the Darwinian Trap
03:44 Examples of the Darwinian Trap in Action
07:38 Can We Override Our Evolutionary Instincts?
08:42 Hope from Nature's Solutions
13:14 Global Cooperation and Governance
18:09 Reforming Global Policies and Incentives
38:49 The Role of Technology in Global Empathy
42:41 Ethical Guidelines for Technological Innovation
49:12 Normative's Role in Carbon Emissions Accounting
52:18 Conclusion and Contact Information
Resources mentioned:
Normative.IO
The Darwinian Trap by Kristian Rönn
Guest's social handles:
LinkedIn
Instagram
P.S. If you enjoy this episode and feel it helps to elevate your life, please give us a rating or review. And if you feel others may benefit from this podcast as well, spread the word, share and help grow our tribe of Superhumans. When we help heal One, we help heal All. Much gratitude and love.
Yours,
Ariane
How do we escape Moloch's trap for good? In this special Burning Man edition of Win-Win, Liv forgoes the usual purple chairs for dusty playa to chat with Kristian Rönn. Kristian is the CEO and co-founder of Normative, a platform for helping industries strive for net zero emissions. With intellectual roots in Oxford's Future of Humanity Institute and his own mission to create positive-sum solutions to climate change, Kristian has just launched a new book - The Darwinian Trap - and in this conversation, Liv and Kristian examine solutions to the short-term thinking and cost externalisation that traditional markets often produce. A conversation full of evolutionary biology, game theory and economics as they examine solutions to the world's deadliest demon. Chapters (01:42)-The Darwinian Trap (03:42)-Why Is Coordination So Hard? (07:19)-Unstable Equilibriums: The Butterfly Effect of Game Theory (13:55)-Natural Selection: Capitalism's Ace In The Hole (20:16)-How Can A Market Model Anything At All? (22:10)-Betting On Our Values (27:29)-What Problems Do Reputational Markets Solve? (32:56)-Centralized Mechanisms for Overcoming The Darwinian Trap (35:16)-The Risks of Over-Centralization (39:46)-The Burning Man Model (43:00)-Mixed Economies (45:53)-Killing The Incentives or Kill The Organism? (50:59)-The Miracle of Evolutionary Success (54:03)-Finding Hope (56:21)-Spreading Awareness To Defeat Moloch (59:58)-Why Burning Man? Links ♾️ Kristian's New Book ♾️ Kristian's Bio ♾️ Liv's TED talk on Moloch ♾️ Reputational Markets ♾️ Episode Transcript Credits ♾️ Hosted and Produced by Liv Boeree ♾️ Post-Production by Ryan Kessler The Win-Win Podcast: Poker champion Liv Boeree takes to the interview chair to tease apart the complexities of one of the most fundamental parts of human nature: competition. Liv is joined by top philosophers, gamers, artists, technologists, CEOs, scientists, athletes and more to understand how competition manifests in their world, and how to change seemingly win-lose games into Win-Wins. #WinWinPodcast #TheDarwinianTrap #Moloch
In this episode of Faster, Please! — The Podcast, I talk with economist Robin Hanson about a) how much technological change our society will undergo in the foreseeable future, b) what form we want that change to take, and c) how much we can ever reasonably predict. Hanson is an associate professor of economics at George Mason University. He was formerly a research associate at the Future of Humanity Institute at Oxford, and is the author of the Overcoming Bias Substack. In addition, he is the author of the 2017 book, The Elephant in the Brain: Hidden Motives in Everyday Life, as well as the 2016 book, The Age of Em: Work, Love, and Life When Robots Rule the Earth.
In This Episode
* Innovation is clumpy (1:21)
* A history of AI advancement (3:25)
* The tendency to control new tech (9:28)
* The fallibility of forecasts (11:52)
* The risks of fertility-rate decline (14:54)
* Window of opportunity for space (18:49)
* Public prediction markets (21:22)
* A culture of calculated risk (23:39)
Below is a lightly edited transcript of our conversation.
Innovation is Clumpy (1:21)
Do you think that the tech advances of recent years — obviously in AI, and what we're seeing with reusable rockets, or CRISPR, or different energy advances, fusion, perhaps, even Ozempic — do you think that the collective cluster of these technologies has put humanity on a different path than perhaps it was on 10 years ago?
. . . most people don't notice just how much stuff is changing behind the scenes in order for the economy to double every 15 or 20 years.
That's a pretty big standard. As you know, the world has been growing exponentially for a very long time, and new technologies have been appearing for a very long time, and the economy doubles roughly every 15 or 20 years, and that can't happen without a whole lot of technological change, so most people don't notice just how much stuff is changing behind the scenes in order for the economy to double every 15 or 20 years. So to say that we're going more than that is really a high standard here. I don't think it meets that standard. Maybe the standard it meets is to say people were worried about maybe a stagnation or slowdown a decade or two ago, and I think this might weaken your concerns about that. I think you might say, well, we're still on target.
Innovation's clumpy. It doesn't just come out entirely smooth . . . There are some lumpy ones once in a while, lumpier innovations than usual, and those boost higher than expected, sometimes lower than expected, and maybe in the last ten years we've had a higher-than-expected clump. The main thing that does is make you not doubt as much as you did when you had the lower-than-expected clump in the previous 10 years or 20 years because people had seen this long-term history and they thought, “Lately we're not seeing so much. I wonder if this is done. I wonder if we're running out.” I think the last 10 years tells you: well, no, we're kind of still on target. We're still having big important advances, as we have for two centuries.
A history of AI advancement (3:25)
People who are especially enthusiastic about the recent advances with AI, would you tell them their baseline should probably be informed by economic history rather than science fiction?
[Y]es, if you're young, and you haven't seen the world for decades, you might well believe that we are almost there, we're just about to automate everything — but we're not.
By technical history! We have 70-odd years of history of AI. I was an AI researcher full-time from '84 to '93.
If you look at the long sweep of AI history, we've had some pretty big advances. We couldn't be where we are now without a lot of pretty big advances all along the way. You just think about the very first digital computer in 1950 or something and all the things we've seen, we have made large advances — and they haven't been completely smooth, they've come in a bit of clumps.
I was enticed into the field in 1984 because of a recent set of clumps then, and for a century, roughly every 30 years, we've had a burst of concern about automation and AI, and we've had big concern in the sense people said, “Are we almost there? Are we about to have pretty much all jobs automated?” They said that in the 1930s, they said it in the 1960s — there was a presidential commission in the 1960s: “What if all the jobs get automated?” I jumped in in the late '80s when there was a big burst there, and I as a young graduate student said, “Gee, if I don't get in now, it'll all be over soon,” because I heard, “All the jobs are going to be automated soon!”
And now, in the last decade or so, we've had another big burst, and I think people who haven't seen that history, it feels to them like it felt to me in 1984: “Wow, unprecedented advances! Everybody's really excited! Maybe we're almost there. Maybe if I jump in now, I'll be part of the big push over the line to just automate everything.” That was exciting, it was tempting, I was naïve, and I was sucked in, and we're now in another era like that. Yes, if you're young, and you haven't seen the world for decades, you might well believe that we are almost there, we're just about to automate everything — but we're not.
I like that you mentioned the automation scare of the '60s. Just going back and looking at that, it really surprised me how prevalent and widespread it was and how seriously people took it. I mean, you can find speeches by Martin Luther King talking about how our society is going to deal with the computerization of everything. So it does seem to be a recurrent fear. What would you need to see to think it is different this time?
The obvious relevant parameter to be tracking is the percentage of world income that goes to automation, and that has been creeping up over the decades, but it's still less than five percent.
What is that statistic?
If you look at the percentage of the economy that goes to computer hardware and software, or other mechanisms of automation, you're still looking at less than five percent of the world economy. So it's been creeping up, maybe decades ago it was three percent, even one percent in 1960, but it's creeping up slowly, and obviously, when that gets to be 80 percent, game over, the economy has been replaced — but that number is creeping up slowly, and you can track it, so when you start seeing that number going up much faster or becoming a large number, then that's the time to say, “Okay, looks like we're close. Maybe automation will, in fact, take over most jobs, when it's getting most of world income.”
If you're looking at economic statistics, and you're looking at different forecasts, whether by the Fed or CBO or Wall Street banks, and the forecasts are, “Well, we expect, maybe because of AI, productivity growth to be 0.4 percentage points higher over this kind of time. . .” Those kinds of numbers where we're talking about a tenth of a point here, that's not the kind of singularity-emergent world that some people think or hope or expect that we're on.
Absolutely.
If you've got young enthusiastic tech people, et cetera — and they're exaggerating. The AI companies, even they're trying to push as big and dramatic an image as they can. And then all the stodgy conservative old folks, they're afraid of seeming behind the times, and not up with things, and not getting it — that was the big phrase in the Internet Boom: Who "gets it" that this is a new thing?
I'm proud to be a human, to have been part of the civilization to have done this . . . but we've seen that for 70 years: new technologies, we get excited, we try them out, we try to apply them, and that's part of what progress is.
Now it would be #teamgetsit.
Exactly, something like that. They're trying to lean into it, they're trying to give it the best spin they can, but they have some self-respect, so they're going to give you, "Wow 0.4 percent!" They'll say, "That's huge! Wow, this is a really big thing, everybody should be into this!" But they can't go above 0.4 percent because they've got some common sense here. But we've even seen management consulting firms over the last decade or so make predictions that 10 years in the future, half of all jobs would be automated. So we've seen this long history of these really crazy extreme predictions a decade out, and none of those remotely happened, of course. But people do want to be in with the latest thing, and this is obviously the latest round of technology, it's impressive. I'm proud to be a human, to have been part of the civilization to have done this, and I'd like to try them out, and see what I can do with them, and think of where they could go. That's all exciting and fun, but we've seen that for 70 years: new technologies, we get excited, we try them out, we try to apply them, and that's part of what progress is.
The tendency to control new tech (9:28)
Not to talk just about AI, but do you think AI is important enough that policymakers need to somehow guide the technology to a certain outcome? Daron Acemoglu, one of the Nobel Prize winners, has for quite some time, and certainly recently, said that this technology needs to be guided by policymakers so that it helps people, it helps workers, it creates new tasks, it creates new things for them to do, not automate away their jobs or automate a bunch of tasks. Do you think that there's something special about this technology that we need to guide it to some sort of outcome?
I think those sorts of people would say that about any new technology that seemed like it was going to be important. They are not actually distinguishing AI from other technologies. This is just what they say about everything.
It could be "technology X," we must guide it to the outcome that I have already determined.
As long as you've said, "X is new, X is exciting, a lot of things seem to depend on X," then their answer would be, "We need to guide it." It wouldn't really matter what the details of X were. That's just how they think about society and technology. I don't see anything distinctive about this, per se, in that sense, other than the fact that — look, in the long run, it's huge.
Space, in the long run, is huge, because obviously in the long run almost everything will be in space, so clearly, eventually, space will be the vast majority of everything. That doesn't mean we need to guide space now or to do anything different about it, per se. At the moment, space is pretty small, and it's pretty pedestrian, but it's exciting, and the same for AI.
At the moment, AI is pretty small and minor; AI is not remotely threatening to cause harm in our world today. If you look at harmful technologies, this is way down the scale. Demonstrated harms of AI in the last 10 years are minuscule compared to things like construction equipment, or drugs, or even television, really. This is small.
Ladders for climbing up on your roof to clean out the gutters, that's a very dangerous technology.
Yeah, somebody should be looking into that. We should be guiding the ladder industry to make sure they don't cause harm in the world.
The fallibility of forecasts (11:52)
I'm not sure how much confidence we should ever have in long-term economic forecasts, but have you seen any reason to think that they might be less reliable than they always have been? That we might be approaching some sort of change? That those 50-year forecasts of entitlement spending might be all wrong because the economy's going to be growing so much faster, or the longevity is going to be increasing so much faster?
Previously, the world had been doubling roughly every thousand years, and that had been going on for maybe 10,000 years, and then, within the space of a century, we switched to doubling roughly every 15 or 20 years. That's a factor of 60 increase in the growth rate, and it happened after a previous transition from foraging to farming, roughly 10 doublings before.
It was just a little over two centuries ago when the world saw this enormous revolution. Previously, the world had been doubling roughly every thousand years, and that had been going on for maybe 10,000 years, and then, within the space of a century, we switched to doubling roughly every 15 or 20 years. That's a factor of 60 increase in the growth rate, and it happened after a previous transition from foraging to farming, roughly 10 doublings before.
So you might say we can't trust these trends to continue maybe more than 10 doublings, and then who knows what might happen? You could just say — that's 200 years, say, if you double every 20 years — we just can't trust these forecasts more than 200 years out. Look at what's happened in the past after that many doublings, big changes happened, and you might say, therefore, expect, on that sort of timescale, something else big to happen. That's not crazy to say. That's not very specific.
And then if you say, well, what is the thing people most often speculate could be the cause of a big change? They do say AI, and then we actually have a concrete reason to think AI would change the growth rate of the economy: That is the fact that, at the moment, we make most stuff in factories, and factories typically push out from the factory as much value as the factory itself embodies, in economic terms, in a few months.
If you could have factories make factories, the economy could double every few months. The reason we can't now is we have humans in the factories, and factories don't double them. But if you could make AIs in factories, and the AIs made factories, that made more AIs, that could double every few months. So the world economy could plausibly double every few months when AI has dominated the economy.
That's of the magnitude doubling every few months versus doubling every 20 years. That's a magnitude similar to the magnitude we saw before from farming to industry, and so that fits together as saying, sometime in the next few centuries, expect a transition that might increase the growth rate of the economy by a factor of 100.
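A rough back-of-the-envelope sketch of the doubling-time arithmetic in that answer, assuming simple exponential growth; the 17-year and three-month figures below are illustrative stand-ins for the "15 or 20 years" and "a few months" mentioned above, not numbers from the transcript:

```python
# Under exponential growth the growth rate is g = ln(2) / T_double, so the ratio of
# two growth rates is just the inverse ratio of their doubling times (ln(2) cancels).

old_doubling_years = 1000    # farming-era world economy, per the transcript
new_doubling_years = 17      # industrial era: "roughly every 15 or 20 years"
print(f"Farming -> industry growth-rate increase: ~{old_doubling_years / new_doubling_years:.0f}x")
# prints ~59x, i.e. the "factor of 60" mentioned above

# Factory-replication scenario: if a factory pays back its own embodied value in a
# few months, and all of that output can go into building more factories (no human
# bottleneck), the capital stock, and in the limit the economy, doubles on that
# timescale instead of every couple of decades.
factory_payback_years = 0.25  # illustrative stand-in for "a few months"
print(f"Implied speed-up over today's growth: ~{new_doubling_years / factory_payback_years:.0f}x")
# prints ~68x, the same order of magnitude as the "factor of 100" in the answer above
```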
Now that's an abstract thing in the long frame, it's not in the next 10 years, or 20 years, or something. It's saying that economic modes only last so long, something should come up eventually, and this is our best guess of a thing that could come up, so it's not crazy.
The risks of fertility-rate decline (14:54)
Are you a fertility-rate worrier?
If the population falls, the best models say innovation rates would fall even faster.
I am, and in fact, I think we have a limited deadline to develop human-level AI, after which we won't for a long pause, because falling fertility really threatens innovation rates. This is something we economists understand that I think most other people don't: You might've thought that a falling population could be easily compensated by a growing economy and that we would still have rapid and fast innovation because we would just have a bigger economy with a lower population, but apparently that's not true.
If the population falls, the best models say innovation rates would fall even faster. So say the population is roughly predicted to peak in three decades and then start to fall, and once it falls, it would fall roughly a factor of two every generation or two, depending on which populations dominate, and then if it fell by a factor of 10, the innovation rate would fall by more than a factor of 10, and that means just a slower rate of new technologies, and, of course, also a reduction in the scale of the world economy.
And I think that plausibly also has the side effect of a loss in liberality. I don't think people realize how much it was innovation and competition that drove much of the world to become liberal because the winning nations in the world were liberal and the rest were afraid of falling too far behind. But when innovation goes away, they won't be so eager to be liberal to be innovative because innovation just won't be a thing, and so much of the world will just become a lot less liberal.
There's also the risk that — basically, computers are a very durable technology, in principle. Typically we don't make them that durable because every two years they get twice as good, but when innovation goes away, they won't get good very fast, and then you'll be much more tempted to just make very durable computers, and once the first generation makes very durable computers that last hundreds of years, the next generation won't want to buy new computers, they'll just use the old durable ones as the economy is shrinking, and then the industry that makes computers might just go away. And then it could be a long time before people felt a need to rediscover those technologies.
I think the larger-scale story is there's no obvious process that would prevent this continued decline because there's no level at which, when you get that, some process kicks in and it makes us say, "Oh, we need to increase the population." But the most likely scenario is just that the Amish and [Hutterites] and other insular, fertile subgroups who have been doubling every 20 years for a century will just keep doing that and then come to dominate the world, much like Christians took over the Roman Empire: They took it over by doubling every 20 years for three centuries.
That's my default future, and then if we don't get AI or colonize space before this decline, which I've estimated would be roughly 70 years' worth more of progress at previous rates, then we don't get it again until the Amish not only just take over the world, but rediscover a taste for technology and economic growth, and then eventually all of the great stuff could happen, but that could be many centuries later.
This does not sound like an issue that can be fundamentally altered by tweaking the tax code.
You would have to make a large —
— Large turn of the dial, really turn that dial.
People are uncomfortable with larger-than-small tweaks, of course, but we're not in an era that's at all eager for vast changes in policy, we are in a pretty conservative era that just wants to tweak things. Tweaks won't do it.
Window of opportunity for space (18:49)
We can't do things like Daylight Savings Time, which some people want to change. You mentioned this window — Elon Musk has talked about a window for expansion into space, and this is a couple of years ago, he said, “The window has closed before. It's open now. Don't assume it will always be open.” Is that right? Why would it close? Is it because of higher interest rates? Because the Amish don't want to go to space? Why would the window close?
I think, unfortunately, we've got a limited window to try to jumpstart a space economy before the earth economy shrinks and isn't getting much value from a space economy.
There's a demand for space stuff, mostly at the moment, to service Earth, like the internet circling the earth, say, as Elon's big project to fund his spaceships. And there's also demand for satellites to do surveillance of the earth, et cetera. As the earth economy shrinks, the demand for that stuff will shrink. At some point, they won't be able to afford fixed costs.
A big question is about marginal cost versus fixed costs. How much is the fixed cost just to have this capacity to send stuff into space, versus the marginal cost of adding each new rocket? If it's dominated by marginal costs and they make the rockets cheaper, okay, they can just do fewer rockets less often, and they can still send satellites up into space. But if you're thinking of something where there's a key scale that you need to get past even to support this industry, then there's a different thing.
So thinking about a Mars economy, or even a moon economy, or a solar system economy, you're looking at a scale thing. That thing needs to be big enough to be self-sustaining and economically cost-effective, or it's just not going to work. So I think, unfortunately, we've got a limited window to try to jumpstart a space economy before the earth economy shrinks and isn't getting much value from a space economy. Space economy needs to be big enough just to support itself, et cetera, and that's a problem because it's the same humans in space who are down here on earth, who are going to have the same fertility problems up there unless they somehow figure out a way to make a very different culture.
A lot of people just assume, “Oh, you could have a very different culture on Mars, and so they could solve our cultural problems just by being different,” but I'm not seeing that.
I think they would just have a very strong interconnection with earth culture because they're going to have rapid bandwidth back and forth, and their fertility culture and all sorts of other culture will be tied closely to earth culture, so I'm not seeing how a Mars colony really solves earth cultural problems.
Public prediction markets (21:22)
The average person is aware that these things, whether it's betting markets or these online consensus prediction markets, exist: that you can bet on presidential races, and you can make predictions about a superconductor breakthrough, or something like that, or about when we're going to get AGI. To me, it seems like they have, to some degree, broken through the filter, and people are aware that they're out there. Have they come of age?
. . . the big value here isn't going to be betting on elections, it's going to be organizations using them to make organization decisions, and that process is being explored.
In this presidential election, there's a lot of discussion that points to them. And people were pretty open to that until Trump started to be favored, and people said, “No, no, that can't be right. There must be a lot of whales out there manipulating, because it couldn't be that Trump was winning.” So the openness to these things often depends on what their message is.
But honestly, the big value here isn't going to be betting on elections, it's going to be organizations using them to make organization decisions, and that process is being explored. Twenty-five years ago, I invented this concept of using decision markets in organizations, and now in the last year, I've actually seen substantial experimentation with them and so I'm excited to see where that goes, and I'm hopeful there, but that's not so much about the presidential markets.
Roughly a century ago there was more money bet in presidential betting markets than in stock markets at the time. Betting markets were very big then, and then they declined, primarily because scientific polling was declared a more scientific approach to estimating elections than betting markets, and all the respectable people wanted to report on scientific polls. And then of course the stock market became much, much bigger. The interest in presidential markets will wax and wane, but there's actually not that much social value in having a better estimate of who's going to win an election. That doesn't really tell you who to vote for, so there are other markets that would be much more socially valuable, like predicting the consequences of who's elected as president. We don't really have many markets on those, but maybe we will next time around. But there is a lot of experimentation going on in organizational prediction markets at the moment, compared to, say, 10 years ago, and I'm excited about those experiments.
A culture of calculated risk (23:39)
I want a culture that, when one of these new nuclear reactors, or these nuclear reactors that are restarting, or these new small modular reactors, when there's some sort of leak, or when a new SpaceX Starship, when some astronaut gets killed, that we just don't collapse as a society. That we're like, well, things happen, we're going to keep moving forward. Do you think we have that kind of culture? And if not, how do we get it, if at all? Is that possible?
That's the question: Why has our society become so much more safety-oriented in the last half-century?
Certainly one huge sign of it is the way we way overregulated nuclear energy, but we've also now been overregulating even kids going to school. Apparently they can't just take their bikes to school anymore, they have to go on a bus because that's safer, and in a whole bunch of ways, we are just vastly more safety-oriented, and that seems to be a pretty broad cultural trend. It's not just in particular areas and it's not just in particular countries.
I've been thinking a lot about long-term cultural trends and trying to understand them. The basic story, I think, is we don't have a good reason to believe long-term cultural trends are actually healthy when they are shared trends of norms and status markers that everybody shares. Cultural things that can vary within the cultures, like different technologies and firm cultures, those we're doing great. We have great evolution of those things, and that's why we're having all these great technologies. But things like safetyism are more of a shared cultural norm, and we just don't have good reasons to think those changes are healthy, and they don't fix themselves, so this is just another example of something that's going wrong.
They don't fix themselves because if you have a strong, very widely shared cultural norm, and someone has a different idea, they need to be prepared to pay a price, and most of us aren't prepared to pay that price.
If we had a healthy cultural evolution competition among even nations, this would be fine. The problem is we have this global culture, a monoculture, really, that enforces everybody.
Right. If, for example, we have 200 countries, if they were actually independent experiments and had just had different cultures going different directions, then I'd feel great; that okay, the cultures that choose too much safety, they'll lose out to the others and eventually it'll be worn out. If we had a healthy cultural evolution competition among even nations, this would be fine. The problem is we have this global culture, a monoculture, really, that enforces everybody.
At the beginning of Covid, all the usual public health efforts said all the usual things, and then world elites got together and talked about it, and a month later they said, “No, that's all wrong. We have a whole different thing to do. Travel restrictions are good, masks are good, distancing is good.” And then the entire world did it the same way, and there was strong pressure on any deviation, even Sweden, that would dare to deviate from the global consensus.
If you look at many kinds of regulation, there's very little deviation worldwide. We don't have 200, or even 100, independent policy experiments, we basically have a main global civilization that does it the same, and maybe one or two deviants that are allowed to have somewhat different behavior, but pay a price for it.
On sale everywhere: The Conservative Futurist: How To Create the Sci-Fi World We Were Promised
Faster, Please! is a reader-supported publication.
To receive new posts and support my work, consider becoming a free or paid subscriber.
Micro Reads
▶ Economics
* The Next President Inherits a Remarkable Economy - WSJ
* The surprising barrier that keeps us from building the housing we need - MIT
* Trump's tariffs, explained - Wapo
* Watts and Bots: The Energy Implications of AI Adoption - SSRN
* The Changing Nature of Technology Shocks - SSRN
* AI Regulation and Entrepreneurship - SSRN
▶ Business
* Microsoft reports big profits amid massive AI investments - Ars
* Meta's Next Llama AI Models Are Training on a GPU Cluster ‘Bigger Than Anything' Else - Wired
* Apple's AI and Vision Pro Products Don't Meet Its Standards - Bberg Opinion
* Uber revenues surge amid robust US consumer spending - FT
* Elon Musk in funding talks with Middle East investors to value xAI at $45bn - FT
▶ Policy/Politics
* Researchers ‘in a state of panic' after Robert F. Kennedy Jr. says Trump will hand him health agencies - Science
* Elon Musk's Criticism of ‘Woke AI' Suggests ChatGPT Could Be a Trump Administration Target - Wired
* US Efforts to Contain Xi's Push for Tech Supremacy Are Faltering - Bberg
* The Politics of Debt in the Era of Rising Rates - SSRN
▶ AI/Digital
* Alexa, where's my Star Trek Computer? - The Verge
* Toyota, NTT to Invest $3.3 Billion in AI, Autonomous Driving - Bberg
* Are we really ready for genuine communication with animals through AI? - NS
* Alexa's New AI Brain Is Stuck in the Lab - Bberg
* This AI system makes human tutors better at teaching children math - MIT
* Can Machines Think Like Humans? A Behavioral Evaluation of LLM-Agents in Dictator Games - Arxiv
▶ Biotech/Health
* Obesity Drug Shows Promise in Easing Knee Osteoarthritis Pain - NYT
* Peak Beef Could Already Be Here - Bberg Opinion
▶ Clean Energy/Climate
* Chinese EVs leave other carmakers with only bad options - FT Opinion
* Inside a fusion energy facility - MIT
* Why aren't we driving hydrogen powered cars yet? There's a reason EVs won. - Popular Science
* America Can't Do Without Fracking - WSJ Opinion
▶ Robotics/AVs
* American Drone Startup Notches Rare Victory in Ukraine - WSJ
* How Wayve's driverless cars will meet one of their biggest challenges yet - MIT
▶ Space/Transportation
* Mars could have lived, even without a magnetic field - Big Think
▶ Up Wing/Down Wing
* The new face of European illiberalism - FT
* How to recover when a climate disaster destroys your city - Nature
▶ Substacks/Newsletters
* Thinking about "temporary hardship" - Noahpinion
* Hold My Beer, California - Hyperdimensional
* Robert Moses's ideas were weird and bad - Slow Boring
* Trading Places? No Thanks. - The Dispatch
* The Case For Small Reactors - Breakthrough Journal
* The Fourth Industrial Revolution and the Future of Work - Conversable Economist
This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit fasterplease.substack.com/subscribe
At the end of the 19th century, a Russian Orthodox ascetic named Nikolai Fedorov was inspired by Darwinism to argue that humans could direct their own evolution to bring about the resurrection. According to him, natural selection had until then been a random phenomenon, but now, thanks to technology, humans could intervene in the process. Invoking biblical prophecies, he wrote: "This day will be divine, awesome, but not miraculous, for resurrection will be a task not of miracle but of knowledge and common labor." That theory was carried into the 20th century by Pierre Teilhard de Chardin, a French Jesuit priest and paleontologist who, like Fedorov, believed that evolution would lead to the Kingdom of God. In 1949, Teilhard proposed that in the future all machines would be linked into a vast global network that would allow human minds to merge. Over time, this unification of consciousness would lead to an explosion of intelligence – the "Omega Point" – allowing humanity to "break through the material framework of Time and Space" and merge seamlessly with the divine. Transhumanists, who are generally atheists, typically acknowledge Teilhard and Fedorov as precursors of their movement, but the religious context of their ideas is rarely mentioned or credited - even though, at every turn, everything religion once provided (even in its heretical, gnostic, or heterodox forms) was readily replaced by a supposed scientific equivalent. To distance themselves from these esoteric, occultist, religious, Christian, and gnostic roots, most adherents of the movement attribute the first use of the term transhumanism, in the sense they actually wish to convey, to Julian Huxley, the British eugenicist and close friend of Teilhard who, in the 1950s, expanded many of the priest's ideas in his own writings - though he took pains to strip away any religious trace in order to sound credible within academia. For two decades, transhumanism was regarded as a fringe idea, until it resurfaced forcefully in the 1980s in San Francisco among a group of people from the tech industry with a libertarian streak. They initially called themselves Extropians and communicated through newsletters and at annual conferences. Since then, journals, institutes, NGOs, and educational organizations have been created to bring transhumanist thinkers together and spread them across the various areas of human knowledge: artificial intelligence, nanotechnology, genetic engineering, robotics, space exploration, memetics, politics, and economics. The movement gained prominence not only in academia but also among entrepreneurs and technology enthusiasts. Ray Kurzweil was one of the first major thinkers to bring these ideas into the mainstream and legitimize them for a broader audience. His move in 2012 into a director of engineering role at Google signaled, for many, a symbolic fusion of transhumanist philosophy and the influence of big technology companies. Transhumanists today wield enormous power in Silicon Valley — entrepreneurs such as Elon Musk and Peter Thiel identify as believers in this "new religion" — where they have founded think tanks such as Singularity University and the Future of Humanity Institute. The ideas proposed by the movement's pioneers are no longer abstract theoretical reflections; they are being built into emerging technologies at organizations such as Google, Apple, Tesla, and SpaceX.
What makes the transhumanist movement so seductive is that it promises to restore, through science, the transcendent hopes that science itself has obliterated. Transhumanists do not believe in the existence of a soul, but neither would they want to sound like strict materialists. Deus na Maquina: Transhumanismo, Antihumanismo e Religiões Biônicas https://tavernadolugarnenhum.com.br/filosofia/deus-na-maquina-transhumanismo-antihumanismo-e-religioes-bionicas/
Share this episode: https://www.samharris.org/podcasts/making-sense-episodes/385-ai-utopia Sam Harris speaks with Nick Bostrom about ongoing progress in artificial intelligence. They discuss the twin concerns about the failure of alignment and the failure to make progress, why smart people don’t perceive the risk of superintelligent AI, the governance risk, path dependence and "knotty problems," the idea of a solved world, Keynes’s predictions about human productivity, the uncanny valley of utopia, the replacement of human labor and other activities, meaning and purpose, digital isolation and plugging into something like the Matrix, pure hedonism, the asymmetry between pleasure and pain, increasingly subtle distinctions in experience, artificial purpose, altering human values at the level of the brain, ethical changes in the absence of extreme suffering, our cosmic endowment, longtermism, problems with consequentialism, the ethical conundrum of dealing with small probabilities of large outcomes, and other topics. Nick Bostrom is a professor at Oxford University, where he is the founding director of the Future of Humanity Institute. He is the author of more than 200 publications, including Anthropic Bias (2002), Global Catastrophic Risks (2008), Human Enhancement (2009), and Superintelligence: Paths, Dangers, Strategies (2014), a New York Times bestseller which sparked the global conversation about the future of AI. His work has framed much of the current thinking around humanity’s future (such as the concept of existential risk, the simulation argument, the vulnerable world hypothesis, astronomical waste, and the unilateralist’s curse). He has been on Foreign Policy’s Top 100 Global Thinkers list twice, and was the youngest person to rank among the top 15 in Prospect’s World Thinkers list. He has an academic background in theoretical physics, AI, computational neuroscience, and philosophy. His most recent book is Deep Utopia: Life and Meaning in a Solved World. Website: https://nickbostrom.com/ Learning how to train your mind is the single greatest investment you can make in life. That’s why Sam Harris created the Waking Up app. From rational mindfulness practice to lessons on some of life’s most important topics, join Sam as he demystifies the practice of meditation and explores the theory behind it.
Welcome to episode #951 of Six Pixels of Separation - The ThinkersOne Podcast. Here it is: Six Pixels of Separation - The ThinkersOne Podcast - Episode #951. When it comes to thinking big about artificial intelligence, I think about what Nick Bostrom is thinking. A philosopher widely known for his thought leadership in AI and existential risk, Nick has spent much of his career asking the kinds of questions most of us avoid. As the founding Director of Oxford's Future of Humanity Institute and a researcher who has dabbled in everything from computational neuroscience to philosophy, Nick's intellectual curiosity knows no bounds. His 2014 book, Superintelligence (a must-read), became a New York Times bestseller, framing global discussions about the potential dangers of artificial intelligence. But now, with his latest book, Deep Utopia - Life and Meaning in a Solved World, Nick shifts the conversation to a more optimistic angle - what happens if everything goes right? Deep Utopia tackles a question that feels almost paradoxical: If we solve all of our technological problems, what's left for humanity to do? Nick presents a future where superintelligence has safely arrived, governing a world where human labor is no longer required, and technological advancements have freed us from life's practical necessities. This isn't just a hypothetical playground for futurists... it's a challenge to our understanding of meaning and purpose in a post-work, post-instrumental society. In this conversation, Nick explores the philosophical implications of a world where human nature becomes fully malleable. With AI handling all instrumental tasks, and near-magical technologies at our disposal, the question shifts from "How do we survive?" to "How do we live well?" It's no longer about the technology itself but about our values, our purpose, and how we define meaning when there are no more problems left to solve. Nick's book is not just a call to prepare for the future; it's an invitation to rethink what life could look like when all of humanity's traditional struggles are behind us. As he dives into themes of happiness, pleasure, and the complexities of human nature, Nick encourages us to reimagine the future - not as a dystopia to fear, but as a deep utopia, where we must rediscover what it means to be truly human in a solved world. This stuff bakes my noodle. Enjoy the conversation… Running time: 49:48. Hello from beautiful Montreal. Subscribe over at Apple Podcasts. Please visit and leave comments on the blog - Six Pixels of Separation. Feel free to connect to me directly on Facebook here: Mitch Joel on Facebook. Check out ThinkersOne. or you can connect on LinkedIn. ...or on Twitter. Here is my conversation with Nick Bostrom. Deep Utopia - Life and Meaning in a Solved World. Superintelligence. Future of Humanity Institute. This week's music: David Usher 'St. Lawrence River'. Chapters: (00:00) - Introduction and Background. (01:17) - The Debate: Accelerating AI Development vs. Delaying It. (06:08) - Exploring the Big Picture Questions for Humanity. (08:44) - The Redefinition of Human Intelligence. (13:12) - The Role of Creativity in AI. (19:41) - Towards a Post-Work Society. (23:53) - Philosophical Questions and the Value of Humanity. (27:36) - The Complex Relationship Between Pleasure and Pain. (30:03) - The Impact of Large Language Models and the Transformer Architecture. (33:03) - Challenges in Developing Artificial General Intelligence. (35:49) - The Risks and Importance of Values in AGI Development. 
(45:19) - Exploring the Concept of Deep Utopia.
Gustavs Zilgalvis is a technology and security policy fellow within RAND's Global and Emerging Risks Division, a Ford Dorsey Master's in International Policy candidate at Stanford's Freeman Spogli Institute for International Studies, and a founding Director at the Center for Space Governance. At RAND, he specializes in the geopolitical and economic implications of the development of artificial intelligence. Previously, Zilgalvis wrote about the interface of space and artificial intelligence in Frontiers of Space Technology and held a Summer Research Fellowship on artificial intelligence at Oxford's Future of Humanity Institute; his research in computational high-energy physics has appeared in SciPost Physics and SciPost Physics Core. Zilgalvis holds a Bachelor of Science with First-Class Honors in Theoretical Physics from University College London and graduated first in his class from the European School Brussels II. About Foresight Institute: Foresight Institute is a research organization and non-profit that supports the beneficial development of high-impact technologies. Since our founding in 1987 on a vision of guiding powerful technologies, we have continued to evolve into a many-armed organization that focuses on several fields of science and technology that are too ambitious for legacy institutions to support. About Allison Duettmann: The President and CEO of Foresight Institute, Allison Duettmann directs the Intelligent Cooperation, Molecular Machines, Biotech & Health Extension, Neurotech, and Space Programs, alongside Fellowships, Prizes, and Tech Trees. She has also been pivotal in co-initiating the Longevity Prize, pioneering initiatives like Existentialhope.com, and contributing to notable works like "Superintelligence: Coordination & Strategy" and "Gaming the Future". Get involved with Foresight: apply to our virtual technical seminars, join our in-person events and workshops, or donate to support our work – if you enjoy what we do, please consider this, as we are entirely funded by your donations! Follow us: Twitter | Facebook | LinkedIn. Note: Explore every word spoken on this podcast through Fathom.fm, an innovative podcast search engine. Hosted on Acast. See acast.com/privacy for more information.
What if everything you know is just a simulation? In 2022, I was joined by the one and only Nick Bostrom to discuss the simulation hypothesis and the prospects of superintelligence. Nick is a Professor at Oxford University, where he heads the Future of Humanity Institute as its founding director. He is the most-cited professional philosopher in the world aged 50 or under and is the author of some 200 publications, including Anthropic Bias (2002), Global Catastrophic Risks (2008), Human Enhancement (2009), and Superintelligence: Paths, Dangers, Strategies (2014), a New York Times bestseller. With his background in theoretical physics, computational neuroscience, logic, and artificial intelligence, there is no one better placed to answer this question! Tune in. — Key Takeaways: 00:00:00 Intro 00:00:44 Judging a book by its cover 00:05:22 How could an AI have emotions and be creative? 00:08:22 How could a computing device / AI feel pain? 00:13:09 The Turing test 00:20:02 The simulation hypothesis 00:22:27 Is there a "Drake Equation" for the simulation hypothesis? 00:27:16 Penrose's orchestrated objective reduction 00:34:11 SETI and the prospect of extraterrestrial life 00:49:20 Are computers really getting "smarter"? 00:53:59 Audience questions 01:01:09 Outro — Additional resources:
Speaker: Kristian Rönn is the CEO and co-founder of Normative. He has a background in mathematics, philosophy, computer science, and artificial intelligence. Before he started Normative, he worked at the University of Oxford's Future of Humanity Institute on issues related to global catastrophic risks. Session summary: When people talk about today's biggest challenges, they tend to frame the conversation around "bad people" doing "bad things." But is there more to the story? In this month's Hope Drop we speak to Kristian Rönn, an entrepreneur formerly affiliated with the Future of Humanity Institute, who argues that much of this behavior traces back to deeply rooted impulses he calls "Darwinian demons." These forces, a by-product of natural selection, can lead us to act in shortsighted ways that harm others—and even imperil our survival as a species. In our latest episode, Kristian explains how we can escape these evolutionary traps through cooperation and innovative thinking. Kristian's new book, The Darwinian Trap, is being published on September 24th. Be sure to preorder it today! Full transcript, list of resources, and art piece: https://www.existentialhope.com/podcasts. Existential Hope was created to collect positive and possible scenarios for the future so that we can have more people commit to creating a brighter future, and to begin mapping out the main developments and challenges that need to be navigated to reach it. Existential Hope is a Foresight Institute project. Hosted by Allison Duettmann and Beatrice Erkers. Follow us: Twitter | Facebook | LinkedIn | Existential Hope Instagram. Explore every word spoken on this podcast through Fathom.fm. Hosted on Acast. See acast.com/privacy for more information.
In today's episode, we discuss the hidden forces that shape human behavior and global challenges with Kristian Rönn, a leading thinker in sustainability and evolutionary psychology. As the CEO and cofounder of Normative, Kristian revolutionized how businesses approach sustainability accounting, helping large enterprises achieve their net zero targets. With a rich background in mathematics, philosophy, computer science, and artificial intelligence, he is also the author of "The Darwinian Trap: The Hidden Evolutionary Forces That Explain Our World (and Threaten Our Future)", where he unpacks the evolutionary pitfalls that hinder long-term success and offers insights into how cooperation and high-value behaviors can lead to better decision-making and a brighter future for all. Join us for a thought-provoking conversation with Kristian Rönn as he uncovers the evolutionary forces that silently shape our world and the challenges we face in both personal and global contexts. How do these hidden forces drive human conflict and short-term thinking? What can we do to avoid the Darwinian traps that hinder long-term success? Kristian shares his deep insights into the psychological underpinnings of our behavior, the impact of competitive pressures on society, and the importance of fostering cooperation to create a better future. Discover actionable strategies to navigate these evolutionary pitfalls, enhance your decision-making, and become a more high-value, cooperative leader in your career and beyond. What to Listen For Introduction – 00:00:00 What sparked Kristian Rönn's passion for understanding the evolutionary forces behind human behavior and global challenges? How did Kristian's journey from Oxford's Future of Humanity Institute to founding Normative shape his perspective on sustainability and global risks? What is the central thesis of Kristian's book, The Darwinian Trap, and why is it crucial for our future? The Darwinian Trap: Evolutionary Forces in Action – 00:06:13 What is the Darwinian Trap, and how does it explain the short-term thinking and conflicts we see in society? How do evolutionary pressures influence everything from nation-state conflicts to personal career competition? What are the "Darwinian demons" we must be aware of, and how can they impact decision-making on both a personal and global level? The Hidden Costs of Short-Term Thinking – 00:24:26 How can short-term evolutionary strategies be both beneficial and harmful in different contexts? What are some real-world examples of how short-term thinking has led to long-term problems, and how can we avoid these pitfalls? Why is it easier to destroy than to build, and how does this concept relate to the challenges we face today? Navigating the Complexities of Modern Society – 00:36:20 How has globalization and the interdependence of nations both helped and hindered global cooperation? Why is decentralized power important in creating a more equitable and cooperative future? Building a Better Future: Cooperation and Governance – 00:45:27 How can we overcome the cooperation failures that drive global conflicts and environmental degradation? What role can reputational markets and decentralized governance play in solving these complex problems? What are the potential dangers of technological advancements, particularly in AI, and how can we manage these risks responsibly? What actionable steps can individuals take to contribute to a more collaborative and sustainable world? 
Concluding Thoughts and Optimism for the Future – 00:56:23 Despite the challenges, what gives Kristian optimism about the future of humanity and our ability to overcome the Darwinian Trap? How can meditation and introspection help individuals maintain a positive outlook in the face of global challenges? Where can listeners learn more about The Darwinian Trap and Kristian Rönn's work on sustainability and global risks? Learn more about your ad choices. Visit megaphone.fm/adchoices
Why do there seem to be more dystopias than utopias in our collective imagination? Why is it easier to find agreement on what we don't want than on what we do want? Do we simply not know what we want? What are "solved worlds", "plastic worlds", and "vulnerable worlds"? Given today's technologies, why aren't we working less than we potentially could? Can humanity reach a utopia without superintelligent AI? What will humans do with their time, and/or how will they find purpose in life, if AIs take over all labor? What are "quiet" values? With respect to AI, how important is it to us that our conversation partners be conscious? Which factors will likely make the biggest differences in terms of moving the world towards utopia or dystopia? What are some of the most promising strategies for improving global coordination? How likely are we to end life on earth? How likely is it that we're living in a simulation? Nick Bostrom is a Swedish-born philosopher with a background in theoretical physics, computational neuroscience, logic, and artificial intelligence, along with philosophy. He's been a Professor at Oxford University, where he served as the founding Director of the Future of Humanity Institute from 2005 until its closure in April 2024. He is currently the founder and Director of Research of the Macrostrategy Research Initiative. Bostrom is the author of over 200 publications, including Anthropic Bias (2002), Global Catastrophic Risks (2008), Human Enhancement (2009), and Superintelligence: Paths, Dangers, Strategies (2014). His work has pioneered many of the ideas that frame current thinking about humanity's future (such as the concept of an existential risk, the simulation argument, the vulnerable world hypothesis, the unilateralist's curse, etc.), while some of his recent work concerns the moral status of digital minds. His most recent book, Deep Utopia: Life and Meaning in a Solved World, was published in March of 2024. Learn more about him at his website, nickbostrom.com. Staff: Spencer Greenberg (Host / Director), Josh Castle (Producer), Ryan Kessler (Audio Engineer), Uri Bram (Factotum), WeAmplify (Transcriptionists). Music: Broke for Free, Josh Woodward, Lee Rosevere, Quiet Music for Tiny Robots, wowamusic, zapsplat.com. Affiliates: Clearer Thinking, GuidedTrack, Mind Ease, Positly, UpLift.
Read the full transcript here. How much energy is needed for GDP growth? Would our civilization have developed at the same rate without fossil fuels? Could we potentially do the same things we're currently doing but with significantly less energy? How different would the world look if we'd developed nuclear energy much earlier? Why can't anything go faster than light? Will the heat death of the universe really be "the end" for everything? How can difficult concepts be communicated in simple ways that nevertheless avoid being misleading or confusing? Is energy conservation an unbreakable law? How likely is it that advanced alien civilizations exist? What are S-risks? Can global civilizations be virtuous? What is panspermia? How can we make better backups of our knowledge and culture? Anders Sandberg is a researcher at the Institute for Futures Studies in Sweden. He was formerly senior research fellow at the Future of Humanity Institute at University of Oxford. His research deals with emerging technologies, the ethics of human enhancement, global and existential risks, and very long-range futures. Follow him on Twitter / X at @anderssandberg, find him via his various links here. Staff: Spencer Greenberg (Host / Director), Josh Castle (Producer), Ryan Kessler (Audio Engineer), Uri Bram (Factotum), WeAmplify (Transcriptionists). Music: Broke for Free, Josh Woodward, Lee Rosevere, Quiet Music for Tiny Robots, wowamusic, zapsplat.com. Affiliates: Clearer Thinking, GuidedTrack, Mind Ease, Positly, UpLift.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Funding for programs and events on global catastrophic risk, effective altruism, and other topics, published by GCR Capacity Building team (Open Phil) on August 13, 2024 on The Effective Altruism Forum. Post authors: Eli Rose, Asya Bergal Posting in our capacities as members of Open Philanthropy's Global Catastrophic Risks Capacity Building team. Note: This program, together with our separate program for work that builds capacity to address risks from transformative AI, has replaced our 2021 request for proposals for outreach projects. If you have a project which was in-scope for that program but isn't for either of these, you can apply to our team's general application instead. Apply for funding here. Applications are open until further notice and will be assessed on a rolling basis. This is a wide-ranging call for applications, seeking to fund programs and events in a variety of areas of interest to Open Philanthropy - including effective altruism, global catastrophic risks, biosecurity, AI for epistemics, forecasting, and other areas. In general, if the topic of your program or event falls within one of our GCR focus areas, or if it's similar to work we've funded in the past in our GCR focus areas, it may be a good fit for this program. If you're unsure about whether to submit your application, we'd encourage you to err on the side of doing so. By "programs and events" we mean scholarship or fellowship programs, internships, residencies, visitor programs, courses[1], seminars, conferences, workshops, retreats, etc., including both in-person and online activities. We're open to funding programs or events aimed at individuals at any career stage, and with a wide range of potential purposes, including teaching new skills, providing new career opportunities, offering mentorship, or facilitating networking. Examples of programs and events of this type we've funded before include: Condor Camp, a summer program for Brazilian students interested in existential risk work. The Future of Humanity Institute's Research Scholars Program supporting early-career researchers in global catastrophic risk. Effective Altruism Global, a series of conferences for individuals interested in effective altruism. Future Forum, a conference aimed at bringing together members of several communities interested in emerging technology and the future. A workshop on using AI to improve epistemics, organized by academics from NYU, the Forecasting Research Institute, the AI Objectives Institute and Metaculus. AI-focused work We have a separate call up for work that builds societal capacity to address risks from transformative AI. If your program or event is focused on transformative AI and/or risks from transformative AI, we prefer you apply to that call instead. However, which call you apply to is unlikely to make a difference to the outcome of your application. Application information Apply for funding here. The application form asks for information about you, your project/organization (if relevant), and the activities you're requesting funding for. We're interested in funding both individual/one-off programs and events, and organizations that run or support programs and events. We expect to make most funding decisions within 8 weeks of receiving an application (assuming prompt responses to any follow-up questions we may have). 
You can indicate on our form if you'd like a more timely decision, though we may or may not be able to accommodate your request. Applications are open until further notice and will be assessed on a rolling basis. 1. ^ To apply for funding for the development of new university courses, please see our separate Course Development Grants RFP. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Future Affairs LIVE: Humanity is capable of creating pandemics, starting nuclear wars, and disrupting nature even further. But a moral framework for not doing so is missing. According to the world-famous moral philosopher Toby Ord, the chance that we as humanity perish this century is about 1 in 6. It is high time we made a plan for our continued existence here on Earth. Toby Ord is affiliated with the Future of Humanity Institute at the University of Oxford and is an adviser to the World Health Organization, the World Bank, the World Economic Forum, the US National Intelligence Council, and the UK Prime Minister's Office. His research focuses on the catastrophes that threaten our survival on Earth. We spoke with him at the Brainwash Festival in Amsterdam about the big question: how much future do we have left? Guest: Toby Ord. Presentation: Jessica van der Schalk & Wouter van Noort. Production: Brainwash Festival. Editing: Gal Tsadok-Hai. See the privacy policy at https://art19.com/privacy and the California privacy statement at https://art19.com/privacy#do-not-sell-my-info.
Our guest in this episode grew up in an abandoned town in Tasmania and is now a researcher and blogger in Berkeley, California. After taking a degree in human ecology and science communication, Katja Grace co-founded AI Impacts, a research organisation trying to answer questions about the future of artificial intelligence. Since 2016, Katja and her colleagues have published a series of surveys about what AI researchers think about progress on AI. The 2023 Expert Survey on Progress in AI was published this January, comprising responses from 2,778 participants. As far as we know, this is the biggest survey of its kind to date. Among the highlights is that the time respondents expect it will take to develop an AI with human-level performance dropped by between one and five decades since the 2022 survey. So ChatGPT has not gone unnoticed. Selected follow-ups: AI Impacts; World Spirit Sock Puppet (Katja's blog); Survey of 2,778 AI authors: six parts in pictures (from AI Impacts); OpenAI researcher who resigned over safety concerns joins Anthropic (article in The Verge about Jan Leike); MIRI 2024 Mission and Strategy Update (from the Machine Intelligence Research Institute, MIRI); Future of Humanity Institute 2005-2024: Final Report (by Anders Sandberg, PDF); Centre for the Governance of AI; Reasons for Persons (article by Katja about Derek Parfit and theories of personal identity); OpenAI Says It Has Started Training GPT-4 Successor (article in Forbes). Music: Spike Protein, by Koi Discovery, available under a CC0 1.0 Public Domain Declaration. What If? So What? We discover what's possible with digital and make it real in your business. Listen on: Apple Podcasts | Spotify
The media is full of dystopian depictions of artificial intelligence, such as The Terminator and The Matrix, yet few have dared to dream up the image of an AI utopia. Nick Bostrom's most recent book, Deep Utopia: Life and Meaning in a Solved World, attempts to do exactly that. Bostrom explores what it would mean to live in a post-work world, where human labor is vastly outperformed by AI, or even made obsolete. When all of our problems have been solved in an AI utopia . . . well, what's next for us humans? Bostrom is a philosopher and was founding director of the Future of Humanity Institute at Oxford University. He is currently the founder and director of research at the Macrostrategy Research Initiative. He also wrote the much-discussed 2014 book, Superintelligence: Paths, Dangers, Strategies. In This Episode: Our dystopian predisposition (1:29); A utopian thought experiment (5:16); The plausibility of a solved world (12:53); Weighing the risks (20:17). Below is a lightly edited transcript of our conversation. Our dystopian predisposition (1:29). Pethokoukis: The Dutch futurist Frederik Polak famously put it that any culture without a positive vision of the future has no future. It's a light paraphrase. And I kind of think that's where we are right now, that despite the title of your book, I feel like right now people can only imagine dystopia. Is that what you think? Do I have that wrong? Bostrom: It's easier to imagine dystopia. I think we are all familiar with a bunch of dystopian works of fiction. The average person could rattle off Brave New World, 1984, The Handmaid's Tale. Most people probably couldn't name a single utopian work, and even the attempts that have been made, if you look closely at them, you probably wouldn't actually want to live there. It is an interesting fact that it seems easier for us to imagine ways in which things could be worse than ways in which things could be better. Maybe some culture that doesn't have a positive vision has no future but, then again, cultures that have had positive visions also often have ended in tears. A lot of the time, utopian blueprints have been used as excuses for imposing coercively some highly destructive vision on society. So you could argue either way whether it is actually beneficial for societies to have a super clear, long-term vision that they are steering towards. I think if we were to ask people to give a dystopian vision, we would probably get some very picturesque, highly detailed visions from having sort of marinated in science fiction for decades. But then if you asked people about utopia, I wonder if all their visions would be almost alike: kind of this clean, green world, with maybe some tall skyscrapers or something, and people generally getting along. I think it'd be a fairly bland, unimaginative vision. That would be the idea of "all happy families are alike, but each unhappy family is unhappy in its own unique way." I think it's easy enough to imagine ways in which the world could be slightly better than it is. So imagine a world exactly like the one we have, except minus childhood leukemia. So everybody would agree that definitely seems better.
The problem is if you start to add these improvements and you stack on enough of them, then eventually you face a much more philosophically challenging proposition, which is, if you remove all the difficulties and all the shadows of human life, all forms of suffering and inconvenience, and all injustice and everything, then you risk ending up in this rather bland future where there is no challenge, no purpose, no meaning for us humans, and it then almost becomes dystopian again, but in a different way. Maybe all our basic needs are catered to, but there seems to be then some other part missing that is important for humans to have flourishing lives. A utopian thought experiment (5:16). Is your book a forecast or is it a thought experiment? It's much more a thought experiment. As it happens, I think there is a non-trivial chance we will actually end up in this condition, which I call a "solved world," particularly with the impending transition to the machine intelligence era, which I think will be accompanied by significant risks, including existential risk. My previous book, Superintelligence, which came out in 2014, focused on what could go wrong when we are developing machine superintelligence, but if things go right—and this could unfold within the lifetime of a lot of us who are alive on this planet today—if things go right, they could go very right, and, in particular, all kinds of problems that could be solved with better technology could be solved in this future where you have superintelligent AIs doing the technological development. And we might then actually confront the situation where these questions we can now explore as a thought experiment would become pressing practical questions where we would actually have to make decisions on what kinds of lives we want to live, what kind of future we want to create for ourselves, if all these instrumental limitations were removed that currently constrain the choice set that we face. I imagine the book would have seemed almost purely a thought experiment before November 2022, when ChatGPT was rolled out by OpenAI, and now, to some people, it seems like these are questions certainly worth pondering. You talked about the impending machine superintelligence—how impending do you think it is, and what is your confidence level? Certainly we have technologists all over the map speaking about the likelihood of reaching that, maybe through large language models, while other people think they can't quite get us there, so how much work is "impending" doing in that sentence? I don't think we are in a position any longer to rule out even extremely short timelines. We can't be super confident that we might not have an intelligence explosion next year. It could take longer, it could take several years, it could take a decade or longer. We have to think in terms of smeared-out probability distributions here, but we don't really know what capabilities will be unlocked as you scale up even the current architectures one more order of magnitude, like GPT-5-level or GPT-6-level. It might be that, just as the previous steps from GPT-2 to GPT-3 and 3 to 4 sort of unlocked almost qualitatively new capabilities, the same might hold as we keep going up this ladder of just scaling up the current architectures, and so we are now in a condition where it could happen at any time, basically.
It doesn't mean it will happen very soon, but we can't be confident that it won't. I do think it is slightly easier for people now, even just from looking at the current AI systems, to see that we have to take these questions seriously, and I think it will become a lot easier as the penny starts to drop that we're about to see this big transition to the machine intelligence era. The previous book, Superintelligence, back in 2014 when that was published—and it was in the works for six years prior—at that time, what was completely outside the Overton window was even the idea that one day we would have machine superintelligence, and, in particular, the idea that there would then be an alignment problem, a technical difficulty of steering these superintelligent intellects so that they would actually do what we want. It was completely neglected by academia. People thought that was just science fiction or idle futurism. There were maybe a handful of people on the internet who were starting to think about that. In the intervening 10 years, that has changed, and so now all the frontier AI labs have research teams specifically trying to work on scalable methods for AI alignment, and it has become much more widely recognized over the last couple of years that this will be a transformative thing. You have statements coming out from leading policymakers from the White House, the UK had this global summit on AI, and so this alignment problem and the risks related to AI have sort of entered the Overton window, and I think some of these other issues, as to what the world will look like if we succeed, similarly will have to come inside the Overton window, and probably will do so over the next few years. So we have an Overton window, we have this technological advance with machine intelligence. Are you as confident about one of the other pillars of your thought experiment, which is an equally science-futuristic-seeming advance in our ability to edit ourselves, to modify ourselves and our brains and our emotions? That seems to go hand-in-hand with the thought experiment. I think once we develop machine superintelligence, then we will soon thereafter have tremendous advances in other technological areas as well, because we would then not be restricted to humans trying to develop new technologies with our biological brains. This research and development would instead be done by superintelligences on digital timescales rather than biological timescales. So the transition to superintelligence would, I think, mean a kind of telescoping of the future. So there are all these technologies we can see are, in principle, possible. They don't violate the laws of physics. In the fullness of time, human civilization would probably reach them if we had 10,000 years to work on it: all these science-fiction-like technologies such as space colonies, or cures for aging, or perfect virtual reality, or uploading into computers. We could see how we might eventually . . . They're unrealistic given the current state of technology, but there are no in-principle barriers, so we could imagine developing those if we had thousands of years to work on them. But all those technologies might become available quite soon after you have superintelligence doing the research and development.
So I think we will then start to approximate the condition of technological maturity, a condition where we have already developed most of those general-purpose technologies that are physically possible, and for which there exists some in-principle feasible pathway from where we are now to developing them. The plausibility of a solved world (12:53). I know one criticism of the book is, with this notion of a "solved world" or technological maturity, that the combinatorial nature of ideas would allow for almost an unlimited number of new possibilities, so in no way could we reach maturity or a technologically solved state of things. Is that a valid criticism? Well, it is a hypothesis you could entertain that there is an infinite number of ever-higher levels of technological capability such that you'd never be able to reach or even approximate any maximum. I think it's more likely that there will eventually be diminishing returns. You will eventually have figured out the best way to do most of the general things that need doing: communicating information, processing information, processing raw materials, creating various physical structures, et cetera, et cetera. That happens to be my best guess, but in any case, you could bracket that: we could at least establish lower bounds on the kinds of technological capabilities that an advanced civilization with superintelligence would be able to develop, and we can list out a number of those technologies. Maybe it would be able to do more than that, but at least it would be able to do various things that we can already sort of see and outline how you could do; it's just that we can't quite put all the pieces together and carry it out yet. And the book lists a bunch of these affordances that a technologically mature civilization would at least have, even if maybe there would be further things we haven't even dreamt of yet. And already that set of technological capabilities would be enough to radically transform the human condition, and indeed to present us with some of these basic philosophical challenges of how to live well in this world where we wouldn't only have a huge amount of control over the external reality, we wouldn't only be able to automate human labor across almost all domains, but we would also, as you alluded to earlier, have unprecedented levels of control over ourselves, our biological organism and our minds, using various forms of biotechnology or newer technologies. In this kind of scenario, is the purpose of our machines to solve our problems, or, not to give us problems, but to give us challenges, to give us things to do? It then comes down to questions about value. If we had all of these capabilities to achieve various types of worlds, which one would we actually want? And I think there are layers to this onion, different levels of depth at which one can approach and think about this problem. At the outermost layer you have the idea that, well, we will have increased automation as a result of advances in AI and robotics, and so there will be some humans who become unemployed as a result. At the most superficial layer of analysis, you would then think, "Well, some jobs become unnecessary, so you need to maybe retrain workers to move to other areas where there is continued demand for human labor.
Maybe they need some support whilst they're retraining and stuff like that." So then you take it a step further, like you peel off another layer of the onion, and you realize that, well, if AI truly succeeds, if you have artificial general intelligence, then it's really not just some areas of human economic contribution that get affected, but all areas, with a few exceptions that we can return to. AIs could do everything that we can do, and do it better, and cheaper, and more efficiently. And you could say that the goal of AI is full unemployment. The goal is not just to automate a few particular tasks, but to develop a technology that allows us to automate all tasks. That's kind of what AI has always been about; it's not succeeded yet, but that's the goal, and we are seemingly moving closer to that. And so, with the asterisk here that there are a few exceptions that we can zoom in on, you would then get a kind of post-work condition where there would be no need for human labor at all. My baseline—I think this is a reasonable baseline—is that the history of technology is a history of both automating things, but then creating new things for us to do. So I think if you ask just about any economist, they will say that that should be our guide for the future: that this exact same technology will think of new things for people to do, that we, at least up to this point, have shown infinite creativity in creating new things to do, and whether or not you want to call those "work," there are certainly things for us to do, so boredom should not be an issue. So there's a further question of whether there is anything for us to do, but if we just look at the work part first, are there ways for humans to engage in economically productive labor? And, so far, what has been the case is that various specific tasks have been automated, and so instead of having people digging ditches using their muscles, we can have bulldozers digging ditches, and you could have one guy driving the bulldozer and doing the work of 50 people with a shovel or something. And so human labor is kind of just moving out of the areas where you can automate it and into other areas where we haven't yet been able to automate it. But if AIs are able to do all the things that we can do, then there would be no further place, it would seem, at least at first sight, for human workers to move into. The exceptions to this, I think, are cases where the consumer cares not just about the product, but about how the product came to be. They want that human element. You could have consumers with just a raw preference that a particular task was performed by humans, or for a particular product—just as consumers now sometimes pay a little premium if a gadget was produced by a politically favored group, or maybe handcrafted by indigenous people, we may pay more for it than if the same object was made in a sweatshop in Indonesia or something. Even if the actual physical object itself is equally good in both cases, we might care about the causal process that brought it into existence. So to the extent that consumers have those kinds of preferences, there could remain ineliminable demand for human labor, even at technological maturity. You could think of possible examples: Maybe we just prefer to watch human athletes compete, even if robots could run faster or box harder. Maybe you want a human priest to officiate at your wedding, even if the robot could say the same words with the same intonations and the same gestures, et cetera.
So there could be niches of that sort, where there would remain demand for human labor no matter how advanced our technology. Weighing the risks (20:17). Let me read one friendly critique from Robin Hanson of the book: Bostrom asks how creatures very much like him might want to live for eons if they had total peace, vast wealth, and full eternal control of extremely competent AI that could do everything better than they. He . . . tries to list as many sensible possibilities as possible . . . But I found it . . . hard to be motivated by his key question. In the future of creatures vastly more capable than us I'm far more interested in what those better creatures would do than what a creature like me now might do there. And I find the idea of creatures like me being rich, at peace, and in full control of such a world quite unlikely. Is the question he would prefer you answer unanswerable, so that the only question you can answer is what people like us would be like? No, I think there are several different questions, each of which, I think, is interesting. In some of my other work, I do, in fact, investigate what other creatures, non-human creatures, digital minds we might be building, for example, AIs of different types, what they might want and how one might think of what would be required for the future to go well for these new types of being that we might be introducing. I think that's an extremely important question as well, particularly from a moral point of view. It might be that, in the future, most inhabitants of the future will be digital minds or AIs of different kinds. Some might be at scales far larger than us human beings. In this book, though, I think the question I'm primarily interested in is: if we are interested in it from our own perspective, what is the best possible future we could hope for for ourselves, given the values that we actually have? And I think that could be practically relevant in various ways. There could, for example, arise situations where we have to make trade-offs between delaying the transition to AI, with maybe the risk going up or down depending on how long we take for it, and, in the meantime, people like us dying, just as a result of aging and disease and all kinds of things that currently result in people dying. So what are the different risk tradeoffs we are willing to take? And that might depend, in part, on how much better we think our lives could be if this goes well. If the best we could hope for was just continuing our current lives for a bit longer, that might be a different choice situation than if there was actually something on the table that would be super desirable from our current point of view; in that case, we might be willing to take bigger risks to our current lives if there was at least some chance of achieving this much better life. And I think those questions, from a prudential point of view, we can only try to answer if we have some conception of how good the potential outcome would be for us.
But I agree with him that both of these questions are important. It also seems to me that, initially, there was a lot of conversation after the rollout of ChatGPT about existential risk, we were talking about an AI pause, and I feel like the pendulum has swung completely to the other side: whether it's because people don't want to miss out on all the good stuff that AI could create, or because they worry about Chinese AI beating American AI, the default mode that we're in right now is full speed ahead, and if there are problems we'll just have to fix them on the fly, but we're just not going to have any substantial way to regulate this technology, other than, perhaps, the most superficial of guardrails. I feel like that's where we're at now; at least, that's what I feel like in Washington right now. Yeah, I think that has been the default mode of AI development since its inception, and still is today, predominantly. The difficulty has actually been getting the machines to do more, rather than limiting what they're allowed to do. That is still the main thrust. I do think, though, that the first derivative of this is towards increased support for various kinds of regulations and restrictions, and even a growing number of people calling for an "AI pause" or wanting to stop AI development altogether. This used to be basically a completely fringe . . . there were no real serious efforts to push in this direction for almost all the decades of AI up until maybe two years ago or so. And since then there has been an increasingly vocal, still minority, set of people who are trying hard to push for increased regulation, for slowing down, and for raising the alarm about AI developments. And I think it remains an open question how this will unfold over the coming years. I have a complex view on this, on what would actually be desirable here. On the one hand, I do think there are these significant risks, including existential risks, that will accompany the transition. When we develop superintelligent machines, it's not just one more cool gadget, right? It's the most important thing ever happening in human history, and they will be to us as we are to chimpanzees or something—potentially a very powerful force, and things could go wrong there. So I do agree with the concern. So I've been told over the past two years! And to the point where some people think of me as a kind of doomsayer or anti-AI, but that's not the full picture. I think, ultimately, it would be a catastrophe if superintelligence was never developed, and that we should develop this, ideally carefully, and it might be desirable if, at a critical point, just when we figure out how to make machines superintelligent, whoever is doing this, whether it's some private lab or some government Manhattan Project, has the ability to go a little bit slow at that point, maybe to pause for six months or, rather than immediately cranking all the knobs up to 11, maybe do it incrementally, see what happens, make sure the safety mechanisms work. I think that might be more ideal than a situation where you have, say, 15 different labs all racing to get there first, and whoever takes any extra precautions just immediately falls behind and becomes irrelevant. I think that would seem . . . I feel like where we're at right now—and I may have answered this differently 18 months ago—is that second scenario.
At least here in the United States, and maybe I'm too Washington-centric, but I feel we're realistically at the "crank it up to 11" phase. Well, we have seen the first-ever real AI regulations coming on board. It's something rather than nothing, and so you could easily imagine, if pressure continues to build, there will be more demand for this, and then, if you have some actual adverse event, like some bad thing happening, then who knows? There are other technologies that have been stymied because of . . . like human cloning, for example, or nuclear energy in many countries. So it's not unprecedented that society could convince itself that a technology is bad. So far, historically, all these technology bans and relinquishments have probably been temporary, because there have been other societies making other choices, and each generation is, to some extent, like a new roll of the die, and eventually you get . . . But it might be that we already have, in particular with AI, technologies that, if fully deployed, could allow a society within a few years to lock itself into some sort of permanent orthodoxy. Imagine deploying even current AI systems fully to censor dissenting information: if you had some huge stigmatization of AI where it becomes just taboo to say anything positive about AI, and then very efficient ways of enforcing that orthodoxy by shadow banning people who dissent from it, or canceling them, or surveilling anybody to make sure they don't do any research on AI, then the technology to sort of freeze in a temporary social consensus might be emerging. And so if 10 years from now there were a strong global consensus on some of these issues, then we can't rule out that that would become literally permanent. Probably my optimal level of government oversight and regulation would be more than we currently have, but I do worry a little bit about it not increasing to the optimal point and then stopping there: once the avalanche starts rolling, it could overshoot the target and result in a problem. To be clear, I still think that's unlikely, but I think it's more likely than it was two years ago. In 2050, do you feel like we'll be on the road to deep utopia or deep dystopia? I hope the former; I think both are still in the cards for all we know. There are big forces at play here. We've never had a machine intelligence transition before. We don't have the kind of social or economic predictive science that really allows us to say what will happen to political dynamics as we change these fundamental parameters of the human condition. We don't yet have a fully reliable solution to the problem of scalable alignment. I think we are entering uncharted territories here, and both extremely good and extremely bad outcomes are possible, and we are a bit in the dark as to how all of this will unfold. Faster, Please! is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit fasterplease.substack.com/subscribe
In Episode 366 of Hidden Forces, Demetri Kofinas speaks with Nick Bostrom, the founding director of the Future of Humanity Institute and Principal Researcher at the Macrostrategy Research Initiative. Nick Bostrom is also the author of Superintelligence, which was the book that ignited a global conversation about what might happen if AI development goes wrong. In his latest book, Deep Utopia, Bostrom attempts to answer the opposite question – what happens if things go right? At such a point of technological maturity driven by further and further advancements in artificial intelligence, humanity will confront challenges that are philosophical and spiritual in nature. In such a “solved world,” as Nick Bostrom describes it, what will be the point of human existence? What will give life meaning? How should we spend our days if we no longer need to work, exercise, or make political choices? And is such a world consistent with human agency and freedom? These are all questions that Kofinas explores in this expansive and thought-provoking conversation. You can subscribe to our premium content and access our premium feed, episode transcripts, and Intelligence Reports at HiddenForces.io/subscribe. If you want to join in on the conversation and become a member of the Hidden Forces Genius community, which includes Q&A calls with guests, access to special research and analysis, in-person events, and dinners, you can also do that on our subscriber page at HiddenForces.io/subscribe. If you enjoyed listening to today's episode of Hidden Forces, you can help support the show by doing the following: Subscribe on Apple Podcasts | YouTube | Spotify | Stitcher | SoundCloud | CastBox | RSS Feed Write us a review on Apple Podcasts & Spotify Subscribe to our mailing list at https://hiddenforces.io/newsletter/ Producer & Host: Demetri Kofinas Editor & Engineer: Stylianos Nicolaou Subscribe and Support the Podcast at https://hiddenforces.io Join the conversation on Facebook, Instagram, and Twitter at @hiddenforcespod Follow Demetri on Twitter at @Kofinas Episode Recorded on 05/27/2024
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AXRP Episode 32 - Understanding Agency with Jan Kulveit, published by DanielFilan on May 30, 2024 on The AI Alignment Forum. YouTube link What's the difference between a large language model and the human brain? And what's wrong with our theories of agency? In this episode, I chat about these questions with Jan Kulveit, who leads the Alignment of Complex Systems research group. Topics we discuss: what is active inference; preferences in active inference; action vs perception in active inference; feedback loops; active inference vs LLMs; hierarchical agency; the Alignment of Complex Systems group. Daniel Filan: Hello, everybody. This episode, I'll be speaking with Jan Kulveit. Jan is the co-founder and principal investigator of the Alignment of Complex Systems Research Group, where he works on mathematically understanding complex systems composed of both humans and AIs. Previously, he was a research fellow at the Future of Humanity Institute focused on macrostrategy, AI alignment, and existential risk. For links to what we're discussing, you can check the description of this episode, and you can read the transcript at axrp.net. Okay. Well, Jan, welcome to the podcast. Jan Kulveit: Yeah, thanks for the invitation. What is active inference? Daniel Filan: I'd like to start off with this paper that you published in December of last year. It was called "Predictive Minds: Large Language Models as Atypical Active Inference Agents." Can you tell me roughly what that paper is about? What's it doing? Jan Kulveit: The basic idea is: there's active inference as a field originating in neuroscience, started by people like Karl Friston, and it's very ambitious. The active inference folks claim roughly that they have a super general theory of agency in living systems and so on. And there are LLMs, which are not living systems, but they're pretty smart. So we're looking into how close the models actually are. Also, it was in part motivated by… If you look at, for example, the 'simulators' series or frame by Janus and these people on sites like the Alignment Forum, there's this idea that LLMs are something like simulators - or there is another frame on this, that LLMs are predictive systems. And I think this terminology… a lot of what's going on there is basically reinventing stuff which was previously described in active inference or predictive processing, which is another term for minds which are broadly trying to predict their sensory inputs. And it seems like there is a lot of similarity, and actually, a lot of what was invented in the alignment community seems to be basically the same concepts, just given different names. So noticing the similarity, the actual question is: in what ways are current LLMs different, or to what extent are they similar and to what extent are they different? And the main insight of the paper is… the main difference is: current LLMs lack the fast feedback loop between action and perception. So if I now change the position of my hand, what I see immediately changes. So you can think about [it with] this metaphor, or if you look at how the systems are similar, you could look at base model training of LLMs as some sort of strange edge case of an active inference or predictive processing system, which is just receiving sensory inputs, where the sensory inputs are tokens, but it's not acting, it's not changing the data.
And then the model is trained, and it maybe changes a bit in instruct fine-tuning, but ultimately when the model is deployed, we claim that you can think about the interactions of the model with users as actions, because what the model outputs ultimately can change stuff in the world. People will post it on the internet or take actions based on what the LLM is saying. So the arrow from the system to the world, changing the world, exists, but th...
For decades, philosopher Nick Bostrom (director of the Future of Humanity Institute at Oxford) has led the conversation around technology and human experience (and grabbed the attention of the tech titans who are developing AI - Bill Gates, Elon Musk, and Sam Altman). Now, a decade after his NY Times bestseller Superintelligence warned us of what could go wrong with AI development, he flips the script in his new book Deep Utopia: Life and Meaning in a Solved World (March 27), asking us to instead consider "What could go well?" Ronan recently spoke to Professor Nick Bostrom. Professor Bostrom talks about his background, his new book Deep Utopia: Life and Meaning in a Solved World, why he thinks advanced AI systems could automate most human jobs, and more. More about Nick Bostrom: Swedish-born philosopher Nick Bostrom was founder and director of the Future of Humanity Institute at Oxford University. He is the most-cited professional philosopher in the world aged 50 or under and is the author of some 200 publications, including Anthropic Bias (2002), Global Catastrophic Risks (2008), Human Enhancement (2009), and Superintelligence: Paths, Dangers, Strategies (2014), a New York Times bestseller. With a background in theoretical physics, computational neuroscience, logic, and artificial intelligence, his work has pioneered some of the ideas that frame current thinking about humanity's future (such as the concept of an existential risk, the simulation argument, the vulnerable world hypothesis, the unilateralist's curse, etc.), while some of his recent work concerns the moral status of digital minds. His writings have been translated into more than 30 languages; he is a repeat main-stage TED speaker; and he has been interviewed more than 1,000 times by media outlets around the world. He has been on Foreign Policy's Top 100 Global Thinkers list twice and was included in Prospect's World Thinkers list, the youngest person in the top 15. As a graduate student he dabbled in stand-up comedy on the London circuit.
Join my mailing list https://briankeating.com/list to win a real 4-billion-year-old meteorite! All .edu emails in the USA
Nick Bostrom is a Professor at Oxford University and the founding director of the Future of Humanity Institute. Nick is also the world's most cited philosopher aged 50 or under. He is the author of more than 200 publications, including Anthropic Bias (2002), Global Catastrophic Risks (2008), Human Enhancement (2009), and Superintelligence: Paths, Dangers, Strategies (2014), a New York Times bestseller which sparked a global conversation about the future of AI. His work has pioneered many of the ideas that frame current thinking about humanity's future (such as the concept of an existential risk, the simulation argument, the vulnerable world hypothesis, the unilateralist's curse, etc.), while some of his recent work concerns the moral status of digital minds. He has been on Foreign Policy's Top 100 Global Thinkers list twice and was included in Prospect's World Thinkers list. He has just published a new book called "Deep Utopia: Life and Meaning in a Solved World." What you will learn: Find out why Nick is spending time in seclusion in Portugal; Nick shares the big ideas from his new book "Deep Utopia", which dreams up a world perfectly fixed by AI; Discover why Nick got hooked on AI way before the internet was a big deal and how those big future questions sparked his path; What would happen to our jobs and hobbies if AI races ahead in the creative industries? Nick shares his thoughts; Gain insights into whether AI is going to make our conversations better or just make it easier for people to push ads and political agendas; Plus loads more!
An unexpected hero arises to take on our old enemies, the Neocons. Not the hero we expected, but most likely, the hero we deserve. Topics include: Future of Humanity Institute closing, Oxford, transhumanism, Nick Bostrom, Anders Sandberg, WTA, h+, eugenics, Silicon Valley billionaires, Simulation Hypothesis, racist emails, dysgenics, artificial intelligence, progressive version of transhumanism, fringe ideologies and groups, IEET, Martine Rothblatt, EA, Longtermism, national economic systems, technological development, Neoliberals, Neocons, establishment in crisis, Erik Prince, Blackwater, Xe, Bush Administration, War on Terror, MIC, Indo-Pacific military theater, pivot away from Middle East, defense spending, Reagan, Cold War, post communist Russia, focus on private sector to save governmental failure, Eric Schmidt, national security shift, Boeing, whistleblowers' deaths, basic corruption, focus on profits over all else, financialization, wealth gap increasing, money isn't real, lack of economic philosophy, no accountability, dismantling of legitimate protest, Israel, student protests, banning campus protests, 2024 presidential election, AGI, major governments want to steer their own new world order, space vs the desert
Neocon Future Stratego BBQ - The Age of Transitions and Uncle 5-3-2024
AOT #421: An unexpected hero arises to take on our old enemies, the Neocons. Not the hero we expected, but most likely, the hero we deserve. Topics include: Future of Humanity Institute closing, Oxford, transhumanism, Nick Bostrom, Anders Sandberg, WTA, h+, eugenics, Silicon Valley billionaires, Simulation Hypothesis, racist emails, dysgenics, artificial intelligence, progressive version of transhumanism, fringe ideologies and groups, IEET, Martine Rothblatt, EA, Longtermism, national economic systems, technological development, Neoliberals, Neocons, establishment in crisis, Erik Prince, Blackwater, Xe, Bush Administration, War on Terror, MIC, Indo-Pacific military theater, pivot away from Middle East, defense spending, Reagan, Cold War, post communist Russia, focus on private sector to save governmental failure, Eric Schmidt, national security shift, Boeing, whistleblowers' deaths, basic corruption, focus on profits over all else, financialization, wealth gap increasing, money isn't real, lack of economic philosophy, no accountability, dismantling of legitimate protest, Israel, student protests, banning campus protests, 2024 presidential election, AGI, major governments want to steer their own new world order, space vs the desert
UTP #331: Topics include: Spirit of Texas BBQ restaurant, Inland Empire, livestream videos, new iPad, Stratego, Minesweeper, cowboy hats, flea market products, apps, YouTuber neighbors, podcasting, favorite things, Jabber Jaw, Groffdale Machine Co., Amish scooters, exercise, cycling, dynamo hub, bike commuting, podcast studio setup, phone call line, Sam Smith vs Petty and Lynne, squirrels, fruit trees, dragon fruit, citrus fruit flies
FRANZ MAIN HUB: https://theageoftransitions.com/
PATREON: https://www.patreon.com/aaronfranz
UNCLE: https://unclethepodcast.com/ OR https://theageoftransitions.com/category/uncle-the-podcast/
FRANZ and UNCLE Merch: https://theageoftransitions.com/category/support-the-podcasts/
KEEP OCHELLI GOING. You are the EFFECT if you support OCHELLI: https://ochelli.com/donate/
Ochelli Link Tree: https://linktr.ee/chuckochelli
Patreon: https://bit.ly/3v8OhY7 Nick Bostrom is a Swedish philosopher who was most recently Professor at Oxford University, where he served as the founding Director of the Future of Humanity Institute. He is best known for his book Superintelligence (Oxford, 2014), which covers the dangers of artificial intelligence. In this episode, Robinson and Nick discuss his more recent book, Deep Utopia: Life and Meaning in a Solved World (Ideapress, 2024). More particularly, they discuss the alignment problem with artificial intelligence, the problem of utopia, how artificial intelligence—if it doesn't make our world horrible—could make it wonderful, the future of technology, and how humans might adjust to a life without work. Nick's Website: https://nickbostrom.com Deep Utopia: https://a.co/d/b8eHuhQ OUTLINE 00:00 Introduction 02:50 From AI Dystopia to AI Utopia 9:15 On Superintelligence and the Alignment Problem 17:48 The Problem of Utopia 21:14 What Are the Different Types of Utopia? 28:04 AI and the Purpose of Mathematics 38:59 What Technologies Can We Expect in an AI Utopia? 43:59 Philosophical Problems with Immortality 55:14 Are There Advanced Alien Civilizations Out There? 59:54 Why Don't We Live in Utopia? Robinson's Website: http://robinsonerhardt.com Robinson Erhardt researches symbolic logic and the foundations of mathematics at Stanford University. Join him in conversations with philosophers, scientists, weightlifters, artists, and everyone in-between. --- Support this podcast: https://podcasters.spotify.com/pod/show/robinson-erhardt/support
In this episode, Jason Howell and Jeff Jarvis discuss the Rabbit R1 AI device with guest Mark Spoonauer from Tom's Guide, delving into its capabilities, design, and potential use cases. Also, Microsoft's VASA-1 model, Meta's slew of AI announcements, the closure of Oxford's Future of Humanity Institute, and the appointment of Paul Christiano to the US AI Safety Institute.
Consider donating to the AI Inside Patreon: http://www.patreon.com/aiinsideshow
INTERVIEW WITH MARK SPOONAUER, EIC OF TOM'S GUIDE
First impressions of the Rabbit R1 AI device
Design and form factor of the Rabbit R1
Capabilities and limitations of the Rabbit R1
Potential use cases for the Rabbit R1
Comparison with other AI devices like Meta's Ray-Ban glasses
Pricing and availability of the Rabbit R1
Concerns about the Rabbit R1 being a companion device rather than a phone replacement
Social implications and potential issues with the Rabbit R1
NEWS
The Humane AI Pin bad-review backlash
AI wearables like Limitless
Limitations of audio-only AI interfaces like the IYO One
Closure of Oxford's Future of Humanity Institute and resignation of Nick Bostrom
Appointment of Paul Christiano as head of the US AI Safety Institute at NIST
Microsoft's new VASA-1 AI model for animating faces from photos and audio
Google's restructuring, combining Android and hardware teams for AI integration
YouTube's new "Ask" feature for premium subscribers to interact with videos using AI
Meta's announcements: multimodal AI support for Ray-Ban glasses, AI assistant integration, and the Llama 3 language model
Hosted on Acast. See acast.com/privacy for more information.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Express interest in an "FHI of the West", published by habryka on April 18, 2024 on LessWrong. TLDR: I am investigating whether to found a spiritual successor to FHI, housed under Lightcone Infrastructure, providing a rich cultural environment and financial support to researchers and entrepreneurs in the intellectual tradition of the Future of Humanity Institute. Fill out this form or comment below to express interest in being involved either as a researcher, entrepreneurial founder-type, or funder.
The Future of Humanity Institute is dead: I knew that this was going to happen in some form or another for a year or two, having heard through the grapevine and private conversations of FHI's university-imposed hiring freeze and fundraising block, and so I have been thinking about how to best fill the hole in the world that FHI left behind. I think FHI was one of the best intellectual institutions in history. Many of the most important concepts[1] in my intellectual vocabulary were developed and popularized under its roof, and many crucial considerations that form the bedrock of my current life plans were discovered and explained there (including the concept of crucial considerations itself).
With the death of FHI (as well as MIRI moving away from research towards advocacy), there no longer exists a place for broadly-scoped research on the most crucial considerations for humanity's future. The closest place I can think of that currently houses that kind of work is the Open Philanthropy worldview investigation team, which houses e.g. Joe Carlsmith, but my sense is Open Philanthropy is really not the best vehicle for that kind of work. While many of the ideas that FHI was working on have found traction in other places in the world (like right here on LessWrong), I do think that with the death of FHI, there no longer exists any place where researchers who want to think about the future of humanity in an open ended way can work with other people in a high-bandwidth context, or get operational support for doing so. That seems bad. So I am thinking about fixing it.
Anders Sandberg, in his oral history of FHI, wrote the following as his best guess of what made FHI work: What would it take to replicate FHI, and would it be a good idea? Here are some considerations for why it became what it was:
Concrete object-level intellectual activity in core areas and finding and enabling top people were always the focus. Structure, process, plans, and hierarchy were given minimal weight (which sometimes backfired - flexible structure is better than little structure, but as organization size increases more structure is needed).
Tolerance for eccentrics. Creating a protective bubble to shield them from larger University bureaucracy as much as possible (but do not ignore institutional politics!).
Short-term renewable contracts. [...] Maybe about 30% of people given a job at FHI were offered to have their contracts extended after their initial contract ran out. A side-effect was to filter for individuals who truly loved the intellectual work we were doing, as opposed to careerists.
Valued: insights, good ideas, intellectual honesty, focusing on what's important, interest in other disciplines, having interesting perspectives and thoughts to contribute on a range of relevant topics.
Deemphasized: the normal academic game, credentials, mainstream acceptance, staying in one's lane, organizational politics.
Very few organizational or planning meetings. Most meetings were only to discuss ideas or present research, often informally.
Some additional things that came up in a conversation I had with Bostrom himself about this:
A strong culture that gives people guidance on what things to work on, and helps researchers and entrepreneurs within the organization coordinate
A bunch of logistical and operation...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Future of Humanity Institute 2005-2024: Final Report, published by Pablo on April 17, 2024 on The Effective Altruism Forum. Anders Sandberg has written a "final report" released simultaneously with the announcement of FHI's closure. The abstract and an excerpt follow. Normally manifestos are written first, and then hopefully stimulate actors to implement their vision. This document is the reverse: an epitaph summarizing what the Future of Humanity Institute was, what we did and why, what we learned, and what we think comes next. It can be seen as an oral history of FHI from some of its members. It will not be unbiased, nor complete, but hopefully a useful historical source. I have received input from other people who worked at FHI, but it is my perspective and others would no doubt place somewhat different emphasis on the various strands of FHI work.
What we did well
One of the most important insights from the successes of FHI is to have a long-term perspective on one's research. While working on currently fashionable and fundable topics may provide success in academia, aiming for building up fields that are needed, writing papers about topics before they become cool, and staying in the game allows for creating a solid body of work that is likely to have actual meaning and real-world effect. The challenge is obviously to create enough stability to allow such long-term research. This suggests that long-term funding and less topically restricted funding are more valuable than big funding. Many academic organizations are turned towards other academic organizations and recognized research topics. However, pre-paradigmatic topics are often valuable, and relevant research can occur in non-university organizations or even in emerging networks that only later become organized. Having the courage to defy academic fashion and "investing" wisely in such pre-paradigmatic or neglected domains (and networks) can reap good rewards. Having a diverse team, both in backgrounds and in disciplines, proved valuable. But this was not always easy to achieve within the rigid administrative structure that we operated in. Especially senior hires with a home discipline in a faculty other than philosophy were nearly impossible to arrange. Conversely, the effective requirement that hires come from a conventional academic background (i.e., elite university postdocs) adversely affected minorities, and resulted in instances where FHI was practically blocked from hiring individuals from under-represented groups. Hence, try to avoid credentialist constraints. In order to do interdisciplinary work, it is necessary to also be curious about what other disciplines are doing and why, as well as to be open to working on topics one never considered before. It also opens the surface to the rest of the world. Unusually for a research group based in a philosophy department, FHI members found themselves giving tech support to the pharmacology department; participating in demography workshops, insurance conferences, VC investor events, geopolitics gatherings, hosting artists and civil servant delegations studying how to set up high-performing research institutions in their own home country, etc. - often with interesting results. It is not enough to have great operations people; they need to understand what the overall aim is even as the mission grows more complex.
We were lucky to have had many amazing and mission-oriented people make the Institute function. Often there was an overlap between operations and research roles: most of the really successful ops people participated in our discussions and paper-writing. Try to hire people who are curious.
Where we failed
Any organization embedded in a larger organization or community needs to invest to a certain degree in establishing the right kind of...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: FHI (Future of Humanity Institute) has shut down (2005-2024), published by gwern on April 17, 2024 on LessWrong. Over time FHI faced increasing administrative headwinds within the Faculty of Philosophy (the Institute's organizational home). Starting in 2020, the Faculty imposed a freeze on fundraising and hiring. In late 2023, the Faculty of Philosophy decided that the contracts of the remaining FHI staff would not be renewed. On 16 April 2024, the Institute was closed down. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
This is a link post. Anders Sandberg has written a “final report” released simultaneously with the announcement of FHI's closure. The abstract and an excerpt follow. Normally manifestos are written first, and then hopefully stimulate actors to implement their vision. This document is the reverse: an epitaph summarizing what the Future of Humanity Institute was, what we did and why, what we learned, and what we think comes next. It can be seen as an oral history of FHI from some of its members. It will not be unbiased, nor complete, but hopefully a useful historical source. I have received input from other people who worked at FHI, but it is my perspective and others would no doubt place somewhat different emphasis on the various strands of FHI work.
What we did well
One of the most important insights from the successes of FHI is to have a long-term perspective [...]
---
Outline: (01:00) What we did well (03:52) Where we failed (05:10) So, you want to start another FHI?
---
First published: April 17th, 2024
Source: https://forum.effectivealtruism.org/posts/uK27pds7J36asqJPt/future-of-humanity-institute-2005-2024-final-report
Linkpost URL: https://www.dropbox.com/scl/fi/ml8d3ubi3ippxs4yon63n/FHI-Final-Report.pdf?rlkey=2c94czhgagy27d9don7pvbc26&dl=0
---
Narrated by TYPE III AUDIO.
Nick Bostrom's previous book, Superintelligence: Paths, Dangers, Strategies, changed the global conversation on AI and became a New York Times bestseller. It focused on what might happen if AI development goes wrong. But what if things go right? Bostrom and Shermer discuss: An AI Utopia and Protopia • Trekonomics, post-scarcity economics • the hedonic treadmill and positional wealth values • colonizing the galaxy • The Fermi paradox: Where is everyone? • mind uploading and immortality • Google's Gemini AI debacle • LLMs, ChatGPT, and beyond • How would we know if an AI system was sentient? Nick Bostrom is a Professor at Oxford University, where he is the founding director of the Future of Humanity Institute. Bostrom is the world's most cited philosopher aged 50 or under.
Researcher Anders Sandberg, based at the Future of Humanity Institute at Oxford University, looks into the future for a living. Every day he thinks about technologies that could transform humanity, how humanity can improve its chances of surviving over time, and which global catastrophes we need to watch out for. Together we talk about exactly that: the future, and what he and other researchers think might happen. We discuss the global risks he sees with AI, the possibility of connecting your brain to a computer, and when he thinks humans will start colonizing space. We also get to talk about different ways of extending life, MDMA, his conversations with Elon Musk, and why Anders has chosen to be cryopreserved when he dies. Thank you so much for listening!
Take our courses at Framgångsakademin.
Order "Mitt Framgångsår".
Follow Alexander Pärleros on Instagram.
Follow Alexander Pärleros on TikTok.
The best tips from the episode in the newsletter.
In partnership with Convendum. Hosted on Acast. See acast.com/privacy for more information.
What would life look like in a fully automated world? How would we derive meaning in a world of superintelligence? Today's Win-Win episode is all about utopias, dystopias and thought experiments, because I'm talking to Professor Nick Bostrom. Nick is one of the world's leading philosophers - his work spans the nature of consciousness, AI, catastrophic risks, cosmology… he's also the guy behind the Simulation Hypothesis, the Paperclip Maximizer thought experiment, the seminal AI book Superintelligence... Off into the hypotheti-sphere we go!
Chapters
00:00 - Intro 01:42 - Why a book on Utopia? 03:31 - Different types of Utopias 11:40 - How to find purpose in a solved world? 18:31 - Potential Limits to Technology 22:34 - How would Utopians approach Competition? 30:24 - Superintelligence 34:39 - Vulnerable World Hypothesis 39:48 - Thinking in Superpositions 41:24 - Solutions to the Vulnerable World? 46:34 - Aligning Markets to Defensive Tech 48:43 - Digital Minds & Uploading 52:25 - AI Consciousness 55:08 - Outro
Links:
Nick's Website - https://nickbostrom.com/
Anthropic Bias Paper - https://anthropic-principle.com/
Deep Utopia Book - https://nickbostrom.com/booklink/deep...
Superintelligence book - Superintelligence: Paths, Dangers, Strategies
Vulnerable World Hypothesis - https://nickbostrom.com/papers/vulner...
Orthogonality Thesis - https://nickbostrom.com/superintellig...
Simulation Argument - https://simulation-argument.com/
Digital Minds - https://nickbostrom.com/papers/intere...
Future of Humanity Institute - https://www.fhi.ox.ac.uk/
The Win-Win Podcast: Poker champion Liv Boeree takes to the interview chair to tease apart the complexities of one of the most fundamental parts of human nature: competition. Liv is joined by top philosophers, gamers, artists, technologists, CEOs, scientists, athletes and more to understand how competition manifests in their world, and how to change seemingly win-lose games into Win-Wins.
Credits
♾️ Hosted by Liv Boeree & Igor Kurganov
♾️ Produced & Edited by Raymond Wei
♾️ Audio Mix by Keir Schmidt
This is a free preview of a paid episode. To hear more, visit www.louiseperry.co.ukRobin Hanson is an associate professor of economics at George Mason University and a research associate at the Future of Humanity Institute of Oxford University. In this episode we spoke about... the future of humanity! In particular, the role of falling birth rates in stifling innovation, and what this means for a possible high tech future. In short, i…
In this episode, Nathan sits down with Robin Hanson, associate professor of economics at George Mason University and researcher at Oxford's Future of Humanity Institute. They discuss the comparison of human brains to LLMs and legacy software systems, what it would take for AI and automation to significantly impact the economy, our relationships with AI and the moral weight it has, and much more. Try the Brave search API for free for up to 2000 queries per month at https://brave.com/api
LINKS: Robin's Book, The Age of Em: https://ageofem.com/ Robin's essay on Automation: https://www.overcomingbias.com/p/no-recent-automation-revolutionhtml Robin's Blog: https://www.overcomingbias.com/ AI Scouting Report: https://www.youtube.com/watch?v=0hvtiVQ_LqQ&list=PLVfJCYRuaJIXooK_KWju5djdVmEpH81ee&pp=iAQB Dr. Isaac Kohane Episode: https://www.youtube.com/watch?v=pS5Vye671Xg
SPONSORS: The Brave search API can be used to assemble a data set to train your AI models and help with retrieval augmentation at the time of inference. All while remaining affordable with developer first pricing, integrating the Brave search API into your workflow translates to more ethical data sourcing and more human representative data sets. Try the Brave search API for free for up to 2000 queries per month at https://brave.com/api Omneky is an omnichannel creative generation platform that lets you launch hundreds of thousands of ad iterations that actually work customized across all platforms, with a click of a button. Omneky combines generative AI and real-time advertising data. Mention "Cog Rev" for 10% off www.omneky.com NetSuite has 25 years of providing financial software for all your business needs. More than 36,000 businesses have already upgraded to NetSuite by Oracle, gaining visibility and control over their financials, inventory, HR, eCommerce, and more. If you're looking for an ERP platform ✅ head to NetSuite: http://netsuite.com/cognitive and download your own customized KPI checklist.
X/SOCIAL: @labenz @robinhanson (Robin) @CogRev_Podcast
TIMESTAMPS (00:00) Preview (07:10) Why our current time is a “dream time” and the move back to a Malthusian world (13:30) What sort of world should we be striving for? (13:40) Sponsor - Brave (17:50) Distinguishing value talk from factual talk (18:00) Comparing and contrasting Ems to LLMs (22:30) The comparison of human brains to legacy software systems (30:52) Sponsor - Netsuite (41:01) AIs in medicine (53:30) A several century innovation pause (55:30) Achieving full human level AI in the next 60-90 years (1:03:55) Chess and routine benchmarks not a good predictor of AI performance in the economy (1:07:44) Reaching and exceeding human-level AI in the next 1000 years (1:11:40) Losing technologies tied to scale economies (1:12:00) Why AI is hard to maintain in the long run (1:12:20) Standard deviation in automation (1:14:05) Computing power grows exponentially but automation grows steadily (1:15:50) AI art generation and deepfakes (01:21:42) The economics of AI-powered coding (1:33:51) Merging LLMs (1:36:02) Rot in software and the human brain (1:40:18) Parallelism in LLMs and brain design (1:41:00) Moral weight for AIs, enslavement, and cooperation with AI (1:47:10) What would change Robin's mind about the future (1:49:18) Wrap
Hello Fularsızlar: People who will live in the future have rights too, and there will be a lot of them, because the future is very big. So should everyone who cares about helping people set aside their daily grind and today's problems, and focus instead on protecting and maximizing our long-term potential? Today we put longtermism on the table. We'll speculate plenty about the future; all the topics are listed below.
Topics: (00:04) Nuclear war options (01:15) 1,280 people (04:13) The Future of Humanity Institute (06:20) Population ethics: the rights of the not-yet-born (09:50) Non-Identity Problem (11:43) Powers of ten: how big is the future? (15:00) The Repugnant Conclusion (18:55) Transhumanism (20:50) Risk calculations (23:00) Foundation: Psychohistory (25:20) Hinge of History (30:50) Summary and Patreon thanks.
Sources:
Book: Reasons and Persons (Derek Parfit, 1984)
Paper: Genomic inference of a severe human bottleneck
Article: The Nonidentity Problem
Article: Against longtermism
Future of Humanity Institute.
------- Presented by Podbee -------
This podcast contains advertising for Enerjisa.
This podcast contains advertising for Meditopia. For more information about Meditopia, click this link to download Meditopia to your phone and step into a calmer life with a special 60% New Year discount.
See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
Rebroadcast: this episode was originally released in October 2021.
Preventing the apocalypse may sound like an idiosyncratic activity, and it sometimes is justified on exotic grounds, such as the potential for humanity to become a galaxy-spanning civilisation. But the policy of US government agencies is already to spend up to $4 million to save the life of a citizen, making the death of all Americans a $1,300,000,000,000,000 disaster.
According to Carl Shulman, research associate at Oxford University's Future of Humanity Institute, that means you don't need any fancy philosophical arguments about the value or size of the future to justify working to reduce existential risk — it passes a mundane cost-benefit analysis whether or not you place any value on the long-term future.
Links to learn more, summary, and full transcript.
The key reason to make it a top priority is factual, not philosophical. That is, the risk of a disaster that kills billions of people alive today is alarmingly high, and it can be reduced at a reasonable cost. A back-of-the-envelope version of the argument runs:
The US government is willing to pay up to $4 million (depending on the agency) to save the life of an American.
So saving all US citizens at any given point in time would be worth $1,300 trillion.
If you believe that the risk of human extinction over the next century is something like one in six (as Toby Ord suggests is a reasonable figure in his book The Precipice), then it would be worth the US government spending up to $2.2 trillion to reduce that risk by just 1%, in terms of American lives saved alone.
Carl thinks it would cost a lot less than that to achieve a 1% risk reduction if the money were spent intelligently. So it easily passes a government cost-benefit test, with a very big benefit-to-cost ratio — likely over 1000:1 today. (A short worked sketch of these figures follows this episode description.)
This argument helped NASA get funding to scan the sky for any asteroids that might be on a collision course with Earth, and it was directly promoted by famous economists like Richard Posner, Larry Summers, and Cass Sunstein.
If the case is clear enough, why hasn't it already motivated a lot more spending or regulations to limit existential risks — enough to drive down what any additional efforts would achieve?
Carl thinks that one key barrier is that infrequent disasters are rarely politically salient. Research indicates that extra money is spent on flood defences in the years immediately following a massive flood — but as memories fade, that spending quickly dries up. Of course the annual probability of a disaster was the same the whole time; all that changed is what voters had on their minds.
Carl suspects another reason is that it's difficult for the average voter to estimate and understand how large these respective risks are, and what responses would be appropriate rather than self-serving.
If the public doesn't know what good performance looks like, politicians can't be given incentives to do the right thing.
It's reasonable to assume that if we found out a giant asteroid were going to crash into the Earth one year from now, most of our resources would be quickly diverted into figuring out how to avert catastrophe.
But even in the case of COVID-19, an event that massively disrupted the lives of everyone on Earth, we've still seen a substantial lack of investment in vaccine manufacturing capacity and other ways of controlling the spread of the virus, relative to what economists recommended.
Carl expects that all the reasons we didn't adequately prepare for or respond to COVID-19 — with excess mortality over 15 million and costs well over $10 trillion — bite even harder when it comes to threats we've never faced before, such as engineered pandemics, risks from advanced artificial intelligence, and so on.
Today's episode is in part our way of trying to improve this situation. In today's wide-ranging conversation, Carl and Rob also cover:
A few reasons Carl isn't excited by ‘strong longtermism'
How x-risk reduction compares to GiveWell recommendations
Solutions for asteroids, comets, supervolcanoes, nuclear war, pandemics, and climate change
The history of bioweapons
Whether gain-of-function research is justifiable
Successes and failures around COVID-19
The history of existential risk
And much more
Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Katy Moore
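To make the back-of-the-envelope arithmetic above easy to check, here is a minimal sketch in Python. The population figure (roughly 330 million) and the rounding are my own assumptions; the $4 million per life, the one-in-six extinction risk, and the 1% relative risk reduction are the figures quoted in the description above.

```python
# Back-of-the-envelope x-risk cost-benefit check.
# Assumption: US population of ~330 million (a round figure, not from the episode);
# the remaining numbers are the ones quoted in the episode description above.

US_POPULATION = 330_000_000        # assumed round figure
VALUE_PER_LIFE = 4_000_000         # upper end of US agency willingness to pay, in dollars
EXTINCTION_RISK = 1 / 6            # Toby Ord's rough figure for this century
RELATIVE_RISK_REDUCTION = 0.01     # a 1% relative reduction in that risk

# Value of saving every American alive today (~$1,300 trillion in the description).
value_of_all_us_lives = US_POPULATION * VALUE_PER_LIFE
print(f"Value of all US lives: ${value_of_all_us_lives:,.0f}")

# Break-even spend on an intervention that shaves 1% off a 1-in-6 extinction risk,
# counting American lives only (~$2.2 trillion in the description).
breakeven_spend = value_of_all_us_lives * EXTINCTION_RISK * RELATIVE_RISK_REDUCTION
print(f"Break-even spend for a 1% risk reduction: ${breakeven_spend:,.0f}")
```

With these inputs the script prints about $1,320 trillion and $2.2 trillion, matching the rounded figures quoted above; the first number shifts slightly depending on the population estimate you plug in.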
Nothing lasts forever. Even the universe has several possible endings. Will there be a dramatic Big Rip or a Big Chill – also known as the heat death of the universe – in trillions of years? Or will vacuum decay, which could theoretically happen at any moment, do us in? Perhaps the death of a tiny particle – the proton – will bring about the end. We contemplate big picture endings in this episode, and whether one could be brought about by our own machine creations.
Guests:
Anders Sandberg – Researcher at the Future of Humanity Institute at the University of Oxford
Katie Mack – Assistant professor of physics at North Carolina State University, and author of “The End of Everything (Astrophysically Speaking)”
Brian Greene – Professor of physics and mathematics at Columbia, and author of “Until the End of Time: Mind, Matter, and Our Search for Meaning in an Evolving Universe”
Originally aired May 3, 2021
Featuring music by Dewey Dellay and Jun Miyake
Big Picture Science is part of the Airwave Media podcast network. Please contact advertising@airwavemedia.com to inquire about advertising on Big Picture Science. You can get early access to ad-free versions of every episode by joining us on Patreon. Thanks for your support! Learn more about your ad choices. Visit megaphone.fm/adchoices
Work to remove carbon from the atmosphere, transform the global economy to renewable sources of energy, repair broken ecological systems, and create safe havens for climate refugees is being done by countless, innovative people around the planet. One of these people is Kristian Rönn. With a background in mathematics, philosophy, computer science and artificial intelligence, Kristian and his team are helping organizations quantify their carbon footprint through a practice called carbon accounting. It's a practice that is in its nascent stages, but will very likely become standard operating procedure for most companies around the world in the future. In this interview, Kristian talks about his previous work studying global catastrophic risks - like nuclear war, runaway artificial intelligence, and climate change - at Oxford's Future of Humanity Institute. He goes on to talk about the work Normative - the company that he co-founded 10 years ago and where he currently serves as CEO - is doing to make carbon visible and how that fits into winning the fight against a warming planet. He finishes the interview by discussing how society can shift key measurements away from GDP to things like well-being and happiness, and Kristian gives advice for business and government leaders wanting to use this conversation to make their organizations stronger. Kristian Rönn is the CEO and co-founder of Normative. He is a thought leader within carbon accounting, with speaking engagements at COP and Davos, as well as appearances in media outlets like Bloomberg and Sky News. He has advised governments and international bodies, and has been officially acknowledged for his contribution to UN Goal 13 by UNDP. Before he started Normative he worked at the University of Oxford's Future of Humanity Institute on issues related to global catastrophic risks, including climate change. In 2023, he was named one of Google.org's “Leaders to Watch.”
Sam Harris speaks with Nick Bostrom about the problem of existential risk. They discuss public goods, moral illusions, the asymmetry between happiness and suffering, utilitarianism, “the vulnerable world hypothesis,” the history of nuclear deterrence, the possible need for “turnkey totalitarianism,” whether we're living in a computer simulation, the Doomsday Argument, the implications of extraterrestrial life, and other topics. Nick Bostrom is a Swedish-born philosopher with a background in theoretical physics, computational neuroscience, logic, and artificial intelligence. He is a Professor at Oxford University, where he leads the Future of Humanity Institute as its founding director. He is the author of some 200 publications, including Anthropic Bias, Global Catastrophic Risks, Human Enhancement, and Superintelligence: Paths, Dangers, Strategies, a New York Times bestseller. Episodes that have been re-released as part of the Best of Making Sense series may have been edited for relevance since their original airing.
What will the future look like? What are the risks and opportunities of AI? What role can we play in designing the future we want to live in?
Voices of philosophers, futurists, AI experts, science fiction authors, activists, and lawyers reflecting on AI, technology, and the Future of Humanity. All voices in this episode are from our interviews for The Creative Process & One Planet Podcast.
Voices on this episode are:
DR. SUSAN SCHNEIDER
American philosopher and artificial intelligence expert. She is the founding director of the Center for the Future Mind at Florida Atlantic University. Author of Artificial You: AI and the Future of Your Mind, Science Fiction and Philosophy: From Time Travel to Superintelligence, and The Blackwell Companion to Consciousness.
www.fau.edu/artsandletters/philosophy/susan-schneider/index
NICK BOSTROM
Founder and Director of the Future of Humanity Institute, University of Oxford, philosopher, and author of the NY Times bestseller Superintelligence: Paths, Dangers, Strategies. Bostrom's academic work has been translated into more than 30 languages. He is a repeat main TED speaker and has been on Foreign Policy's Top 100 Global Thinkers list twice and was included in Prospect's World Thinkers list, the youngest person in the top 15.
https://nickbostrom.com
https://www.fhi.ox.ac.uk
BRIAN DAVID JOHNSON
Futurist in residence at Arizona State University's Center for Science and the Imagination, a professor in the School for the Future of Innovation in Society and the Director of the ASU Threatcasting Lab. He is the author of The Future You: How to Create the Life You Always Wanted, Science Fiction Prototyping: Designing the Future with Science Fiction, 21st Century Robot: The Dr. Simon Egerton Stories, Humanity in the Machine: What Comes After Greed?, and Screen Future: The Future of Entertainment, Computing, and the Devices We Love.
https://csi.asu.edu/people/brian-david-johnson
DEAN SPADE
Professor at SeattleU's School of Law, author of Mutual Aid: Building Solidarity During This Crisis (and the Next) and Normal Life: Administrative Violence, Critical Trans Politics, and the Limits of Law.
www.deanspade.net
ALLEN STEELE
Science fiction author of the Coyote Trilogy, Arkwright, and other books. He has been awarded a number of Hugo, Asimov's Readers', and Locus awards. He is a former member of the Board of Directors and Board of Advisors for the Science Fiction and Fantasy Writers of America. He has also served as an advisor for the Space Frontier Foundation. In 2001, he testified before the Subcommittee on Space and Aeronautics of the U.S. House of Representatives in hearings regarding space exploration in the 21st century.
www.allensteele.com
www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast