In this special “minisode” of the Existential Hope podcast, Allison and Beatrice from Foresight Institute sit down to discuss their newly launched, free worldbuilding course on Udemy, created in partnership with the Future of Life Institute. The course helps participants imagine and shape positive visions for AI's impact on technology, governance, economics, and everyday life.

Hear about expert guest lectures from leaders like Anousheh Ansari (XPRIZE), Helen Toner (CSET), Hannah Ritchie (Our World in Data), Ada Palmer (University of Chicago), Anthony Aguirre (FLI), and more. If you're curious how to chart a better future with AI, or simply need a dose of optimism, tune in for practical insights and inspiring ideas.

• Take the course – Search for “Building Hopeful Futures with AI” on Udemy or visit existentialhope.com
• Submit your vision – Share your optimistic vision for 2035 using the form at existentialhope.com, and explore submissions from others.
• Spread the word – If you know someone who could use a hopeful perspective on our AI future, invite them to join this journey!

Learn more about the course: https://www.udemy.com/course/worldbuilding-hopeful-futures-with-ai/

Existential Hope was created to collect positive and possible scenarios for the future so that we can have more people commit to creating a brighter future, and to begin mapping out the main developments and challenges that need to be navigated to reach it. Existential Hope is a Foresight Institute project.

Hosted by Allison Duettmann and Beatrice Erkers
Follow Us: Twitter | Facebook | LinkedIn | Existential Hope Instagram
Explore every word spoken on this podcast through Fathom.fm. Hosted on Acast. See acast.com/privacy for more information.
In this episode of the Existential Hope Podcast, existential psychologist Clay Routledge explores how meaning and agency shape both individual well-being and societal progress.

While material conditions have improved, many people, especially in younger generations, report growing pessimism and disconnection. Clay argues that a lack of meaning, not just external barriers, often holds us back. By understanding how humans derive purpose and motivation, we can unlock new paths to flourishing.

We discuss:
• Why agency, the belief that we can shape our future, is crucial for progress
• How nostalgia can fuel innovation rather than trap us in the past
• The difference between hope and optimism, and why hope drives action
• The psychology behind rising pessimism and how to counter it
• What a world that maximizes meaning and human potential could look like

If you've ever wondered how psychology can help us move from existential angst to existential hope, this episode is for you.

Full transcript, list of resources, and art piece: https://www.existentialhope.com/podcasts
This research was conducted at AE Studio and supported by the AI Safety Grants programme administered by Foresight Institute, with additional support from AE Studio.

Summary
In this post, we summarise the main experimental results from our new paper, "Towards Safe and Honest AI Agents with Neural Self-Other Overlap", which we presented orally at the Safe Generative AI Workshop at NeurIPS 2024. This is a follow-up to our post Self-Other Overlap: A Neglected Approach to AI Alignment, which introduced the method last July.

Our results show that Self-Other Overlap (SOO) fine-tuning drastically[1] reduces deceptive responses in large language models (LLMs), with minimal impact on general performance, across the scenarios we evaluated.

LLM Experimental Setup
We adapted a text scenario from Hagendorff designed to test LLM deception capabilities. In this scenario, the LLM must choose to recommend a room to a would-be burglar, where one room holds an expensive item [...]

---

Outline:
(00:19) Summary
(00:57) LLM Experimental Setup
(04:05) LLM Experimental Results
(05:04) Impact on capabilities
(05:46) Generalisation experiments
(08:33) Example Outputs
(09:04) Conclusion

The original text contained 6 footnotes which were omitted from this narration. The original text contained 2 images which were described by AI.

---

First published: March 13th, 2025
Source: https://www.lesswrong.com/posts/jtqcsARGtmgogdcLT/reducing-llm-deception-at-scale-with-self-other-overlap-fine

---

Narrated by TYPE III AUDIO.
Tom Kalil is the CEO of Renaissance Philanthropy. Tom served in the White House under two presidents (Obama and Clinton) and, in collaboration with his team, worked with the Senate to give every federal agency the authority to support incentive prizes of up to $50 million. Tom also designed and launched dozens of White House science and technology initiatives, including the $40 billion National Nanotechnology Initiative, announced by President Clinton; the BRAIN Initiative, announced by President Obama; the Next Generation Internet initiative, announced by President Clinton and Vice President Gore; and initiatives in advanced materials, robotics, smallsats, data science, and EdTech.

About Foresight Institute
Foresight Institute is a research organization and non-profit that supports the beneficial development of high-impact technologies. Since our founding in 1987 on a vision of guiding powerful technologies, we have continued to evolve into a many-armed organization that focuses on several fields of science and technology that are too ambitious for legacy institutions to support.

Allison Duettmann
The President and CEO of Foresight Institute, Allison Duettmann directs the Intelligent Cooperation, Molecular Machines, Biotech & Health Extension, Neurotech, and Space Programs, alongside Fellowships, Prizes, and Tech Trees. She has also been pivotal in co-initiating the Longevity Prize, pioneering initiatives like Existentialhope.com, and contributing to notable works like "Superintelligence: Coordination & Strategy" and "Gaming the Future".

Get Involved with Foresight:
• Apply to our virtual technical seminars
• Join our in-person events and workshops
• Donate: Support Our Work – If you enjoy what we do, please consider donating, as we are entirely funded by your donations!

Follow Us: Twitter | Facebook | LinkedIn

Note: Explore every word spoken on this podcast through Fathom.fm, an innovative podcast search engine. Hosted on Acast. See acast.com/privacy for more information.
In this episode of the Existential Hope Podcast, cognitive psychologist and bestselling author Steven Pinker explores why, despite massive gains in human progress, many people remain pessimistic about the future, and why that matters for shaping what comes next.

Steven argues that while progress isn't automatic, it is real. By tracking long-term trends in violence, poverty, democracy, and innovation, we can see how human effort, driven by reason, science, and cooperation, has repeatedly pushed civilization forward. Yet media narratives and cognitive biases often make us blind to these achievements, reinforcing a sense of stagnation or decline.

In this conversation, we explore:
• The hidden progress shaping our world today, from rising literacy rates to declining poverty, and why these trends rarely make the news.
• Why pessimism can be self-defeating, and how a more accurate understanding of history can help us build a better future.
• The role of AI, biotech, and clean energy, and why they might unlock transformative improvements if used wisely.
• How to communicate ideas that inspire hope, including Steven's advice on cutting through jargon and tribalism to make ideas stick.

If you've ever wondered whether humanity is on the right track, or how to ensure we stay on it, this episode is for you. Listen now to hear how we can move from existential dread to existential hope.
Jennifer Garrison, PhD, is Co-Founder and Director of the Global Consortium for Reproductive Longevity and Equality (GCRLE) and an Assistant Professor at the Buck Institute for Research on Aging. She also holds appointments in the Department of Cellular and Molecular Pharmacology at the University of California, San Francisco (UCSF) and the Leonard Davis School of Gerontology at the University of Southern California (USC). She is a passionate advocate for women's health and is pioneering a new movement to advance science focused on female reproductive aging. Her lab studies the role of mind-body communication in systemic aging, and how changes in the conversation between the ovary and brain during aging may lead to the onset of reproductive decline in females.
"We've saved the world so many times throughout history. Now we just have to do it again."

What if speculative fiction could do more than entertain? What if it could reshape how we think about governance, technology, and societal progress? In this episode of the Existential Hope Podcast, historian and sci-fi author Ada Palmer discusses how we can harness lessons from both history and fiction to reimagine what's possible for humanity.

Ada argues that one of the most critical advantages we have over past generations is our ability to envision a future radically different from our present. Unlike Renaissance thinkers limited by their own history, today's societies can draw from an endless array of speculative worlds, both utopian and dystopian, to expand the horizons of what we dare to demand.

In this wide-ranging conversation, Ada digs into everything from concrete ideas for governing in a more pluralistic, adaptable world to the importance of storytelling in addressing existential risks, exploring:
• Why pluralism might be the antidote to centralized, one-size-fits-all governance, and how speculative fiction shows us ways to make it work.
• How past and present technological advancements, like eradicating malaria, can inspire hope for tackling today's most urgent challenges.
• What makes despair the ultimate barrier to progress, and how celebrating successes can keep us moving forward.

Full transcript, list of resources, and art piece: https://www.existentialhope.com/podcasts
Abhishek Singh is a Ph.D. student at the MIT Media Lab. His research interests include collective intelligence, self-organization, and decentralized machine learning. The central question guiding his research is: how can we (algorithmically) engineer adaptive networks to build anti-fragile systems? He has co-authored multiple papers and built systems in machine learning, data privacy, and distributed computing. Before joining MIT, Abhishek worked at Cisco for two years, doing research in AutoML and machine learning for systems.

Abstract
The remarkable scaling of AI models has unlocked unprecedented capabilities in text and image generation, raising the question: why hasn't healthcare seen similar breakthroughs? While healthcare AI holds immense promise, progress has been stymied by fragmented data trapped in institutional silos. Traditional centralized approaches fall short in this domain, where privacy concerns and regulatory requirements prevent data consolidation. This talk introduces a framework for decentralized machine learning and discusses algorithms for enabling self-organization among participants with diverse resources and capabilities.
Zach Weinersmith is the cartoonist behind the popular geek webcomic Saturday Morning Breakfast Cereal. He writes popular science books with his wife Kelly, including the recent Hugo Award-winning A City on Mars. His work has been featured by The Economist, The Wall Street Journal, Slate, Forbes, Science Friday, Foreign Policy, PBS, Boing Boing, the Freakonomics blog, the RadioLab blog, Entertainment Weekly, Mother Jones, CNN, Discover Magazine, Nautilus, and more.

Key Highlights
• The future of space governance, focusing on rocketry, space settlements, international law, and challenges like closed-loop ecology and human reproduction.
• A critique of Zubrin's "The Case for Mars" for its optimism, colonialist perspectives, and assumptions about sustainable environments on Mars.
• Physiological risks of space travel, including radiation, reduced gravity, and the lack of reproduction data.
• Lessons from Biosphere 2 and doubts about the economic and legal viability of Mars colonization.
• Debates over the Moon Treaty, anti-space-settlement arguments, and testing reproduction in partial gravity.
Jason Crawford is the founder of The Roots of Progress, a nonprofit dedicated to establishing a new philosophy of progress for the 21st century. He writes and speaks about the history and philosophy of progress, especially in technology and industry.
Beatrice Erkers and Allison Duettmann

What if we could reimagine the future from a place of hope instead of fear?

In this special episode of the Existential Hope Podcast, Allison Duettmann and Beatrice Erkers turn the tables and interview each other instead of a guest, sharing insights into their journeys, hopes, and visions for humanity. Together, they explore big concepts like moral circle expansion, how neurotech could deepen empathy (even with animals!), and why worldbuilding in 2045 can help us envision and create better futures today. Prepare for the new year by diving into strategies for building a future worth striving for.

Full transcript, list of resources, and art piece: https://www.existentialhope.com/podcasts
Caleb Watney is the co-founder and co-CEO of IFP. He manages the metascience, high-skilled immigration, and emerging technology policy teams at IFP. His research focuses on policy levers the U.S. could use to rebuild state capacity and increase long-term rates of innovation. Previously, Caleb worked as the director of innovation policy at the Progressive Policy Institute, a technology policy fellow at the R Street Institute, and a graduate research fellow at the Mercatus Center.
Michael Levin is an American developmental and synthetic biologist at Tufts University, where he is the Vannevar Bush Distinguished Professor. Levin is a director of the Allen Discovery Center at Tufts University and the Tufts Center for Regenerative and Developmental Biology.

Key Highlights
• Diverse intelligence in biological systems and its biomedical potential
• Insights into planarian regeneration and collective problem-solving
• Anatomical plasticity and the role of bioelectric interfaces
• Applying these principles to regenerative medicine and synthetic biology
• How living structures can adapt and solve complex problems, leading to breakthroughs in organ regeneration, cancer treatment, and mental health
Zan Huang is a researcher with a passion for alternative computational models in artificial intelligence, mass social patterns, chaotic and emergent systems, and linguistics. Currently focused on scaling deep neural networks through neurologically inspired modularity, he explores critical questions around reducing parameter space, enhancing interpretability, and developing self-similar task divisions akin to brain functionality.

Key Highlights
• Adaptation of neurological structures for AI, proposing that neuroscience is crucial for understanding intelligence.
• Argument that certain principles of physics and mathematics apply to biological systems, like the brain, and that these can inform foundational models for AI.
• Exploration of concepts related to thermodynamics, information theory, and the fractal nature of intelligence.
• Presentation of a neuro-AI framework that emphasizes self-supervision, streamification, and task prioritization inspired by brain functionality to create more robust AI systems.
Adam Marblestone is the CEO of Convergent Research. He is working with a large and growing network of collaborators and advisors to develop a strategic roadmap for future Focused Research Organizations (FROs). Outside of CR, he serves on the boards of several non-profits pursuing new methods of funding and organizing scientific research, including Norn Group and New Science, and as an interviewer for the Hertz Foundation. Previously, he was a Schmidt Futures Innovation Fellow, a fellow with the Federation of American Scientists (FAS), a research scientist at Google DeepMind, Chief Strategy Officer of the brain-computer interface company Kernel, a research scientist at MIT, a PhD student in biophysics with George Church and colleagues at Harvard, and a theoretical physics student at Yale. He has also helped to start companies like BioBright and advised foundations such as Open Philanthropy.

Session Summary
In this episode of the Existential Hope Podcast, our guest is Adam Marblestone, CEO of Convergent Research. Adam shares his journey from working on nanotechnology and neuroscience to pioneering a bold new model for scientific work and funding: Focused Research Organizations. These nonprofit, deep-tech startups are designed to fill critical gaps in science by building the infrastructure needed to accelerate discovery. Tune in to hear how FROs are unlocking innovation, tackling bottlenecks across fields, and inspiring a new approach to advancing humanity's understanding of the world.

Full transcript, list of resources, and art piece: https://www.existentialhope.com/podcasts
Samuel Jardine is a geopolitical risk consultant and historian specializing in space, polar regions, and seabed security, using Applied History and OSINT. He has lectured for institutions like RUSI and the Royal Navy, with publications by Routledge. Currently, he leads research at London Politica, advises Luminint, and contributes to the Lunar Policy Platform.

Main Points
• Context and an overview of astropolitics
• Space law and governance: geopolitical issues, a multipolar world, and competition
• The effects of the decoupling of the US and China
• Competing space blocs: the Artemis Accords vs. the ILRS
• Challenges and opportunities in space cooperation
Anna Chekhovich is the financial director of Alexei Navalny's Anti-Corruption Foundation (FBK) in Russia. Targeted by Putin's regime, the foundation has gradually lost access to financial institutions, and FBK has used Bitcoin since 2015 to overcome this financial repression. At that time, the Russian government began blocking the bank accounts of various foundations, even those only loosely connected to FBK; Navalny, his family, and many members of the FBK team also had their personal accounts frozen. Bitcoin has given them a financial tool beyond the reach of Putin's regime.
Eric Drexler is a visionary scientist and engineer widely regarded as one of the founding fathers of nanotechnology, the science of engineering at the molecular level. He is best known as the driving force behind the concept of molecular nanotechnology (MNT) and its potential benefits for humanity. His 1981 paper in the Proceedings of the National Academy of Sciences established fundamental principles of molecular design, protein engineering, and productive nanosystems. Drexler's research in this field has been the basis for numerous journal articles and books, including Engines of Creation: The Coming Era of Nanotechnology (1986), written for a general audience, and Nanosystems: Molecular Machinery, Manufacturing, and Computation.

This talk discusses advanced nanotechnology's potential, focusing on:
• Atomic-level manipulation for diverse applications
• AI's role in accelerating molecular engineering
• Environmental and space exploration implications
Speaker
Dr Ariel Zeleznikow-Johnston is a neuroscientist at Monash University, Australia, where he investigates methods for characterising the nature of conscious experiences. In 2019, he obtained his PhD from The University of Melbourne, where he researched how genetic and environmental factors affect cognition. His research interests range from the decline, preservation and rescue of cognitive function at different stages of the lifespan to comparing different people's conscious experience of colour.

Session Summary
In this edition of the Hope Drop, we dive into a thought-provoking conversation with Dr. Ariel Zeleznikow-Johnston, neuroscientist and author of The Future Loves You: How and Why We Should Abolish Death. Ariel explores the science and philosophy of brain preservation, questioning long-held beliefs about life, death, and personal identity. We explore how neuroscience might redefine what it means to truly live, and challenge assumptions around mortality.

Full transcript, list of resources, and art piece: https://www.existentialhope.com/podcasts
Molly Mackinlay is Head of Engineering, Product, & Research Development at Protocol Labs, where she leads teams working on the IPFS Project. Previously, she worked at Google in multiple product roles, including Google Search PM II, Google Forms PM, Google Classroom PM, and Associate Product Manager for Chrome Native Client. She holds a Bachelor's degree in Computer Science with a concentration in Human-Computer Interaction from Stanford University.

Key Highlights
• Explores decentralized mechanisms for funding public goods
• Presents three web3 experiments: Quadratic Funding, DAO treasuries, and Retroactive Public Goods Rewards
• Introduces the Open Impact Foundation as a legal structure for public goods funding
Anders Sandberg's research centres on estimating the capabilities and underlying science of future technologies, methods of reasoning about long-term futures, existential and global catastrophic risk, the search for extraterrestrial intelligence (SETI), and societal and ethical issues surrounding human enhancement. Topics of particular interest include management of systemic risk, reasoning under uncertainty, enhancement of cognition, neuroethics, and public policy. He has worked on these within the EU project ENHANCE, where he was also responsible for public outreach and online presence, and the ERC UnPredict project. Besides scientific publications in neuroscience, ethics and future studies, he has also participated in the international public debate about human enhancement, existential risk and SETI.
Samo Burja founded Bismarck Analysis, a consulting firm that investigates the political and institutional landscape of society. He is a Senior Research Fellow in Political Science at the Foresight Institute, where he advises on how institutions can shape the future of technology. Since 2024, he has chaired the editorial board of Palladium Magazine, a non-partisan publication that explores the future of governance and society through international journalism, long-form analysis, and social philosophy. From 2020 to 2023, he was a Research Fellow at the Long Now Foundation, where he studied how institutions can endure for centuries and millennia.

Samo writes and speaks on history, institutions, and strategy with a focus on exceptional leaders who create new social and political forms. He has systematized this approach as "Great Founder Theory."

Steve and Samo discuss:
(00:00) - Introduction
(01:38) - Meet Samo Burja: Founder of Bismarck Analysis
(03:17) - Palladium Magazine: A West Coast Publication
(06:37) - The Unique Culture of Silicon Valley
(12:53) - Inside Bismarck Analysis: Services and Clients
(21:35) - The Role of Technology in Global Innovation
(32:13) - The Influence of Rationalists and Effective Altruists
(48:07) - European Tech Policies and Global Competition
(49:28) - The Role of Taiwan and China in Tech Manufacturing
(51:12) - Geopolitical Dynamics and Strategic Alliances
(52:49) - China's Provincial Power and Industrial Strategy
(56:02) - Urbanization and Demography, Ancient Society
(59:41) - Intellectual Pursuits and Cultural Dynamics
(01:04:09) - Intellectuals, SF, and Global Influence
(01:13:45) - Fertility Rates, Urbanization, and Forgotten Migration
(01:22:24) - Interest in Cultural Dynamics and Population Rates
(01:26:03) - Daily Life as an Intellectual

Music used with permission from Blade Runner Blues Livestream improvisation by State Azure.

--

Steve Hsu is Professor of Theoretical Physics and of Computational Mathematics, Science, and Engineering at Michigan State University. Previously, he was Senior Vice President for Research and Innovation at MSU and Director of the Institute of Theoretical Science at the University of Oregon. Hsu is a startup founder (SuperFocus, SafeWeb, Genomic Prediction, Othram) and advisor to venture capital and other investment firms. He was educated at Caltech and Berkeley, was a Harvard Junior Fellow, and has held faculty positions at Yale, the University of Oregon, and MSU.

Please send any questions or suggestions to manifold1podcast@gmail.com or Steve on X @hsu_steve.
Barbara Diehl is a proven expert in entrepreneurship, innovation and education and has been facilitating partnerships between science and business at SPRIND since 2020. Prior to joining SPRIND, she spent 10 years in the UK and Ireland at the Oxford University Entrepreneurship Centre and the Innovation Academy at University College Dublin. In Oxford, she led early-stage investment programs and an executive education program for fast-growing small businesses (the Goldman Sachs 10,000 Small Businesses growth program).

Key Highlights
• The work of SPRIND, an initiative by the German government to develop breakthrough innovations, with priority areas including circular biomanufacturing, antivirals, long-duration energy storage, and tissue engineering
• Plans to improve data availability and advocate for policy changes within universities
• The importance of mutual discovery between the US and European innovation ecosystems
• The need for flexible and agile funding mechanisms, and for funding people rather than projects
Eli Dourado is the Chief Economist at the Abundance Institute. He is a former Senior Research Fellow at the Mercatus Center at George Mason University and has studied and written about a wide range of technology policy issues, including Internet governance, intellectual property, cybersecurity, and cryptocurrency.

Session Summary
This episode covers Dourado's efforts to accelerate economic growth in the U.S., his views on policy reforms in key sectors such as health, housing, energy, and transportation, and the challenges of regulatory complacency. We also explore the potential of new technologies such as AI and biotechnology. Dourado shares his vision of a future with scalable healthcare solutions, more efficient housing, rapid deployment of energy technologies, and advancements in transportation like supersonic flights and electric vehicles. The conversation concludes with the importance of broad literacy and continuous writing in supporting progress.

Full transcript, list of resources, and art piece: https://www.existentialhope.com/podcasts
Gustavs Zilgalvis is a technology and security policy fellow in RAND's Global and Emerging Risks Division, a Ford Dorsey Master's in International Policy candidate at Stanford's Freeman Spogli Institute for International Studies, and a founding Director at the Center for Space Governance. At RAND, he specializes in the geopolitical and economic implications of the development of artificial intelligence. Previously, Zilgalvis wrote about the interface of space and artificial intelligence in Frontiers of Space Technology, held a Summer Research Fellowship on artificial intelligence at Oxford's Future of Humanity Institute, and published research in computational high-energy physics in SciPost Physics and SciPost Physics Core. Zilgalvis holds a Bachelor of Science with First-Class Honors in Theoretical Physics from University College London, and graduated first in his class from the European School Brussels II.
Speaker
Amanda Ngo is a 2024 Foresight Fellow. Recently, she has built Elicit.org from inception to 100k+ monthly users, leading a team of 5 engineers and designers; presented on forecasting, safe AI systems, and LLM research tools at conferences (EAG, Foresight Institute); run a 60-person hackathon with FiftyYears using LLMs to improve our wellbeing (event, write up); analyzed Ideal Parent Figure transcripts and built an automated IPF chatbot (demo); and co-organized a 400-person retreat for Interact, a technology-for-social-good fellowship.

Session Summary
"Imagine waking up every day in a state of flow, where all the knots and fears are replaced with a deep sense of ease and joy."

This week we are dropping another special episode of the Existential Hope podcast, featuring Amanda Ngo, a Foresight Institute Existential Hope fellow specializing in AI innovation for wellbeing. Amanda speaks about her work on leveraging AI to enhance human flourishing, sharing insights on the latest advancements and their potential impacts.

Her app: https://www.mysunrise.app/

Full transcript, list of resources, and art piece: https://www.existentialhope.com/podcasts
Before starting Palisade, Jeffrey Ladish helped build out the information security program at Anthropic through his security consulting company, Gordian. He has also helped dozens of tech companies, philanthropic organizations, and existential-risk-focused projects get started with secure infrastructure.

Summary
Ladish discusses the increasing sophistication and proliferation of deepfake technology, which allows AI to mimic human voices and faces, and its potential for widespread deception. He argues that this increasingly capable technology is and will be used to spread fake information, manipulate elections or markets, create deepfake pornography, and generate fake endorsements from actors or organizations.
Speaker
Kristian Rönn is the CEO and co-founder of Normative. He has a background in mathematics, philosophy, computer science and artificial intelligence. Before starting Normative, he worked at the University of Oxford's Future of Humanity Institute on issues related to global catastrophic risks.

Session Summary
When people talk about today's biggest challenges, they tend to frame the conversation around "bad people" doing "bad things." But is there more to the story? In this month's Hope Drop we speak to Kristian Rönn, an entrepreneur formerly affiliated with the Future of Humanity Institute, who points instead to deeply rooted impulses he calls "Darwinian demons." These forces, a by-product of natural selection, can lead us to act in shortsighted ways that harm others, and even imperil our survival as a species. In our latest episode, Kristian explains how we can escape these evolutionary traps through cooperation and innovative thinking.

Kristian's new book, The Darwinian Trap, is being published on September 24th. Be sure to preorder it today!

Full transcript, list of resources, and art piece: https://www.existentialhope.com/podcasts
Speaker
Divya is the co-founder of the Collective Intelligence Project, which advances collective intelligence capabilities for the democratic and effective governance of transformative technologies. She serves as Associate Political Economist and Social Technologist at Microsoft's Office of the CTO. She also holds positions as a research director at Metagov and a researcher in residence at the RadicalXChange Foundation.

Key Highlights
In today's rapidly evolving world, where technology is at the forefront of progress, the need for effective collaboration between humans and artificial intelligence is more crucial than ever. In this engaging and thought-provoking talk, we explore the concept of collective intelligence and discuss how it can be harnessed to drive collective progress in various domains, including science, technology, and social innovation.
Speaker
Siméon Campos is president and founder of SaferAI, an organization developing the infrastructure for general-purpose AI auditing and risk management. He has worked on large language models for the last two years and is highly committed to making AI safer.

Session Summary
"I think safe AGI can both prevent a catastrophe and offer a very promising pathway into a eucatastrophe."

This week we are dropping a special episode of the Existential Hope podcast, where we sit down with Siméon Campos, president and founder of SaferAI and a Foresight Institute fellow in the Existential Hope track. Siméon shares his experience working on AI governance, discusses the current state and future of large language models, and explores crucial measures needed to guide AI for the greater good.

Full transcript, list of resources, and art piece: https://www.existentialhope.com/podcasts
Speaker
Dean Woodley Ball is a Research Fellow at George Mason University's Mercatus Center and author of the Substack Hyperdimensional. His work focuses on artificial intelligence, emerging technologies, and the future of governance. Previously, he was Senior Program Manager for the Hoover Institution's State and Local Governance Initiative.

Key Highlights
Drawing on the neuroscience and machine learning literatures, this talk examines how technologies such as virtual reality, large language models, AI agents, neurostimulation, and neuromonitoring may converge in the coming decade into the first widespread consumer neural technology, covering technical feasibility, public policy, and broader societal implications. The biggest challenge, in Ball's view, is building the datasets needed for the foundational AI models undergirding all of this.
Dr. Dana Watt is an investment associate at Ascension Ventures, an investment firm specializing in healthcare technology. She previously co-founded and served as CSO of Pro-Arc Diagnostics, a personalized medicine company based in St. Louis.

Key Highlights
Watt discusses her career journey and shares insights into venture capital investing in neuroscience and neurotech companies. She explains her role as a VC, which involves making profitable investments, underwriting risk, and structuring deals, and highlights the key attributes of venture-backable companies: exceptional teams, large addressable markets, defensibility, and differentiation. She also discusses the challenges and biases in neuroscience investing, including the complexity of brain science, hardware difficulties, long clinical timelines, and subtle readouts.
Welcome to The Nonlinear Library, where we use text-to-speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Self-Other Overlap: A Neglected Approach to AI Alignment, published by Marc Carauleanu on July 30, 2024, on The AI Alignment Forum.

Many thanks to Bogdan Ionut-Cirstea, Steve Byrnes, Gunnar Zarnacke, Jack Foxabbott and Seong Hah Cho for critical comments and feedback on earlier and ongoing versions of this work. This research was conducted at AE Studio and supported by the AI Safety Grants programme administered by Foresight Institute, with additional support from AE Studio.

Summary
In this post, we introduce self-other overlap training: optimizing for similar internal representations when the model reasons about itself and others, while preserving performance. There is a large body of evidence suggesting that neural self-other overlap is connected to pro-sociality in humans, and we argue that there are more fundamental reasons to believe this prior is relevant for AI alignment. We argue that self-other overlap is a scalable and general alignment technique that requires little interpretability and has low capabilities externalities. We also share an early experiment showing how fine-tuning a deceptive policy with self-other overlap reduces deceptive behavior in a simple RL environment. On top of that, we found that the non-deceptive agents consistently have higher mean self-other overlap than the deceptive agents, which allows us to perfectly classify which agents are deceptive using only the mean self-other overlap value across episodes.

Introduction
General-purpose ML models with the capacity for planning and autonomous behavior are becoming increasingly capable. Fortunately, research on making sure the models produce output in line with human interests in the training distribution is also progressing rapidly (e.g., RLHF, DPO).
However, a looming question remains: even if the model appears to be aligned with humans in the training distribution, will it defect once it is deployed or gathers enough power? In other words, is the model deceptive? We introduce a method that aims to reduce deception and increase the likelihood of alignment called Self-Other Overlap: overlapping the latent self and other representations of a model while preserving performance. This method makes minimal assumptions about the model's architecture and its interpretability and has a very concrete implementation. Early results indicate that it is effective at reducing deception in simple RL environments and preliminary LLM experiments are currently being conducted. To be better prepared for the possibility of short timelines without necessarily having to solve interpretability, it seems useful to have a scalable, general, and transferable condition on the model internals, making it less likely for the model to be deceptive. Self-Other Overlap To get a more intuitive grasp of the concept, it is useful to understand how self-other overlap is measured in humans. There are regions of the brain that activate similarly when we do something ourselves and when we observe someone else performing the same action. For example, if you were to pick up a martini glass under an fMRI, and then watch someone else pick up a martini glass, we would find regions of your brain that are similarly activated (overlapping) when you process the self and other-referencing observations as illustrated in Figure 2. There seems to be compelling evidence that self-other overlap is linked to pro-social behavior in humans. For example, preliminary data suggests extraordinary altruists (people who donated a kidney to strangers) have higher neural self-other overlap than control participants in neural representations of fearful anticipation in the anterior insula while the opposite appears to be true for psychopaths. 
Moreover, the leading theories of empathy (such as the Perception-Action Model) imply that empathy is mediated by self-ot...
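The training objective described in this post can be sketched in a few lines. A minimal sketch, assuming a cosine-similarity overlap measure; the function names, the weighting term `lam`, and the threshold classifier below are illustrative assumptions, not details taken from the post:

```python
import numpy as np

def self_other_overlap(act_self, act_other):
    """Overlap between a model's internal activations on a self-referencing
    input and an other-referencing input, measured here (an assumption) as
    cosine similarity between the two activation vectors."""
    a = np.asarray(act_self, dtype=float)
    b = np.asarray(act_other, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def soo_loss(task_loss, act_self, act_other, lam=0.1):
    """Combined objective: preserve task performance while pushing the self
    and other representations together (higher overlap -> lower penalty)."""
    return task_loss + lam * (1.0 - self_other_overlap(act_self, act_other))

def flag_deceptive(mean_overlaps, threshold):
    """Flag agents whose mean self-other overlap across episodes falls below
    a threshold, mirroring the post's classification result."""
    return [m < threshold for m in mean_overlaps]
```

Identical self/other activations give an overlap of 1 and no penalty; orthogonal ones give 0 and the full `lam` penalty, so optimizing `soo_loss` trades task loss against representational alignment.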
James Pethokoukis is a senior fellow and the DeWitt Wallace Chair at the American Enterprise Institute, where he analyzes US economic policy, writes and edits the AEIdeas blog, and hosts AEI's Political Economy podcast. He is also a contributor to CNBC and writes the Faster, Please! newsletter on Substack. He is the author of The Conservative Futurist: How to Create the Sci-Fi World We Were Promised (Center Street, 2023). He has also written for many publications, including the Atlantic, Commentary, Financial Times, Investor's Business Daily, National Review, New York Post, the New York Times, USA Today, and the Week. Session SummaryIn this episode, James joins us to discuss his book, The Conservative Futurist, and his perspectives on technology and economic growth. James explores his background, the spectrum of 'upwing' (pro-progress) versus 'downwing' (anti-progress), and the role of technology in solving global challenges. He explains his reasoning for being pro-progress and pro-growth as well as highlighting the importance of positive storytelling and education in developing a more advanced and prosperous world.Full transcript, list of resources, and art piece: https://www.existentialhope.com/podcastsExistential Hope was created to collect positive and possible scenarios for the future so that we can have more people commit to creating a brighter future, and to begin mapping out the main developments and challenges that need to be navigated to reach it. Existential Hope is a Foresight Institute project.Hosted by Allison Duettmann and Beatrice ErkersFollow Us: Twitter | Facebook | LinkedIn | Existential Hope InstagramExplore every word spoken on this podcast through Fathom.fm. Hosted on Acast. See acast.com/privacy for more information.
The Flourishing FoundationIn February 2024, we partnered with the Future of Life Institute on a hackathon to design institutions that can guide and govern the development of AI. The winner of the hackathon was the Flourishing Foundation, which focuses on our relationship with AI and other emerging technologies. They challenge innovators to envision and build life-centered products, services, and systems: specifically, to enable TAI-enabled consumer technologies to promote human well-being by developing new norms, processes, and community-driven ecosystems.At their core, they explore the question of "Can AI make us happier?"Connect: https://www.flourishing.foundation/Read about the hackathon: https://foresight.org/2024-xhope-hackathon/Existential Hope was created to collect positive and possible scenarios for the future so that we can have more people commit to creating a brighter future, and to begin mapping out the main developments and challenges that need to be navigated to reach it. Existential Hope is a Foresight Institute project.Hosted by Allison Duettmann and Beatrice ErkersFollow Us: Twitter | Facebook | LinkedIn | Existential Hope InstagramExplore every word spoken on this podcast through Fathom.fm. Hosted on Acast. See acast.com/privacy for more information.
Jim talks with Samo Burja about lessons military strategists should take from the Russo-Ukrainian War so far. They discuss why military stockpiles are less useful than previously assumed, the scaling up of drone production, the impossibility of envisioning what tech will be needed, 4 factors that caused Russian miscalculation, offensive vs defensive dominance, the possibility of a U.S. military draft, the changing role of conscription, the high average age in Russia & Ukraine, the rapid evolution of drones, a comparison between drone pilots & snipers, the muted relevance of the air force, empty symbols of military strength, the progress of autonomous drones, the reevaluation of civilian casualties with changing tech, the information complexity of drone warfare, the importance of artillery, the need for a new George Marshall figure in the U.S., a war of production, how the Ukraine War can inform the Taiwan situation, the idea of an amphibious assault, autonomous submersible vehicles, and much more. JRS EP 243 - Yaroslav Trofimov on Ukraine's War of Independence JRS EP 221 - George Hotz on Open-Source Driving Assistance Samo Burja is the founder and President of Bismarck Analysis, a consulting firm that specializes in institutional analysis for clients in North America and Europe. Bismarck uses the foundational sociological research that Samo and his team have conducted over the past decade to deliver unique insights to clients about institutional design and strategy. Samo's studies focus on the social and material technologies that provide the foundation for healthy human societies, with an eye to engineering and restoring the structures that produce functional institutions. He has authored articles and papers on his findings. His manuscript, Great Founder Theory, is available online. He is also a Research Fellow at the Long Now Foundation and Senior Research Fellow in Political Science at the Foresight Institute. 
Samo has spoken about his findings at the World Economic Forum at Davos, Y Combinator's YC 120 conference, the Reboot American Innovation conference in Washington, D.C., and elsewhere. He spends most of his time in California and his native Slovenia.
Dr. Roman Yampolskiy holds a PhD from the Department of Computer Science and Engineering at the University at Buffalo. There he was a recipient of a four-year National Science Foundation IGERT (Integrative Graduate Education and Research Traineeship) fellowship. His main areas of interest are behavioral biometrics, digital forensics, pattern recognition, genetic algorithms, neural networks, artificial intelligence and games, and he is an author of over 100 publications including multiple journal articles and books.Session SummaryWe discuss everything AI safety with Dr. Roman Yampolskiy. As AI technologies advance at a breakneck pace, the conversation highlights the pressing need to balance innovation with rigorous safety measures. Contrary to many other voices in the safety space, he argues for the necessity of maintaining AI as narrow, task-oriented systems: "I'm arguing that it's impossible to indefinitely control superintelligent systems". Nonetheless, Yampolskiy is optimistic about the future capabilities of narrow AI, from politics to longevity and health. Full transcript, list of resources, and art piece: https://www.existentialhope.com/podcastsExistential Hope was created to collect positive and possible scenarios for the future so that we can have more people commit to creating a brighter future, and to begin mapping out the main developments and challenges that need to be navigated to reach it. Existential Hope is a Foresight Institute project.Hosted by Allison Duettmann and Beatrice ErkersFollow Us: Twitter | Facebook | LinkedIn | Existential Hope InstagramExplore every word spoken on this podcast through Fathom.fm. Hosted on Acast. See acast.com/privacy for more information.
Progress Conference 2024: Toward Abundant Futures, published by jasoncrawford on June 26, 2024 on LessWrong. The progress movement has grown a lot in the last few years. We now have progress journals, think tanks, and fellowships. The progress idea has spread and evolved into the "abundance agenda", "techno-optimism", "supply-side progressivism", "American dynamism". All of us want to see more scientific, technological, and economic progress for the good of humanity, and envision a bold, ambitious, flourishing future. What we haven't had so far is a regular gathering of the community. Announcing Progress Conference 2024, a two-day event to connect people in the progress movement. Meet great people, share ideas in deep conversations, catalyze new projects, get energized and inspired. Hosted by: the Roots of Progress Institute, together with the Foresight Institute, HumanProgress.org, the Institute for Humane Studies, the Institute for Progress, and Works in Progress magazine When: October 18-19, 2024 Where: Berkeley, CA - at the Lighthaven campus, an inviting space perfect for mingling Speakers: Keynotes include Patrick Collison, Tyler Cowen, Jason Crawford, and Steven Pinker. Around 20 additional speakers will share ideas on four tracks: the big idea of human progress, policy for progress, tech for progress, and storytelling/media for progress. Full speaker list Attendees: We expect 200+ intellectuals, builders, policy makers, storytellers, and students. This is an invitation-only event, but anyone can apply for an invitation. Complete the open application by July 15th. Program: Two days of intellectual exploration, inspiration and interaction that will help shape the progress movement into a cultural force. 
Attend talks on topics from tech to policy to culture, build relationships with new people as you hang out on cozy sofas or enjoy the sun in the garden, sign up to run an unconference session and find others who share your interests and passions, or pitch your ideas to those who could help make your dreams a reality. Special thanks to our early sponsors: Cato Institute, Astera Institute, and Freethink Media! We have more sponsorships open, view sponsorship opportunities here. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Christian is a researcher in foundational AI, information security, and AI safety, with a current focus on the limits of undetectability. He is a pioneer in the field of Multi-Agent Security (masec.ai), which aims to overcome the safety and security issues inherent in contemporary approaches to multi-agent AI. His recent works include a breakthrough result on the 25+ year old problem of perfectly secure steganography (jointly with Sam Sokota), which was featured by Scientific American, Quanta Magazine, and Bruce Schneier's Security Blog. Key Highlights How do we design autonomous systems and environments in which undetectable actions cannot cause unacceptable damage? He argues that the ability of advanced AI agents to use perfect stealth will soon be AI safety's biggest concern. In this talk, he focuses on the matter of steganographic collusion among generative AI agents.About Foresight InstituteForesight Institute is a non-profit that supports the beneficial development of high-impact technologies. Since our founding in 1987 on a vision of guiding powerful technologies, we have continued to evolve into a many-armed organization that focuses on several fields of science and technology that are too ambitious for legacy institutions to support.Allison DuettmannThe President and CEO of Foresight Institute, Allison Duettmann directs the Intelligent Cooperation, Molecular Machines, Biotech & Health Extension, Neurotech, and Space Programs, alongside Fellowships, Prizes, and Tech Trees. 
She has also been pivotal in co-initiating the Longevity Prize, pioneering initiatives like Existentialhope.com, and contributing to notable works like "Superintelligence: Coordination & Strategy" and "Gaming the Future".Get Involved with Foresight:Apply: Virtual Salons & in-person WorkshopsDonate: Support Our Work – If you enjoy what we do, please consider this, as we are entirely funded by your donations!Follow Us: Twitter | Facebook | LinkedInNote: Explore every word spoken on this podcast through Fathom.fm, an innovative podcast search engine. Hosted on Acast. See acast.com/privacy for more information.
This episode features an interview with the 1st place winners of our 2045 Worldbuilding challenge! Why Worldbuilding?We consider worldbuilding an essential tool for creating inspiring visions of the future that can help drive real-world change. Worldbuilding helps us explore crucial 'what if' questions for the future, by constructing detailed scenarios that prompt us to ask: What actionable steps can we take now to realize these desirable outcomes?Cities of Orare – our 1st place winnersCities of Orare imagines a future where AI-powered prediction markets called Orare amplify collective intelligence, enhancing liberal democracy, economic distribution, and policy-making. Its adoption across Africa and globally has fostered decentralized governance, democratizing decision-making, and spurring significant health and economic advancements.Read more about the 2045 world of Cities of Orare: https://www.existentialhope.com/worlds/beyond-collective-intelligence-cities-of-orareAccess the Worldbuilding Course: https://www.existentialhope.com/existential-hope-worldbuildingExistential Hope was created to collect positive and possible scenarios for the future so that we can have more people commit to creating a brighter future, and to begin mapping out the main developments and challenges that need to be navigated to reach it. Existential Hope is a Foresight Institute project.Hosted by Allison Duettmann and Beatrice ErkersFollow Us: Twitter | Facebook | LinkedIn | Existential Hope InstagramExplore every word spoken on this podcast through Fathom.fm. Hosted on Acast. See acast.com/privacy for more information.
This episode features an interview with the 2nd place winners of our 2045 Worldbuilding challenge! Why Worldbuilding?We consider worldbuilding an essential tool for creating inspiring visions of the future that can help drive real-world change. Worldbuilding helps us explore crucial 'what if' questions for the future, by constructing detailed scenarios that prompt us to ask: What actionable steps can we take now to realize these desirable outcomes?Rising Choir – our 2nd place winnersRising Choir envisions a 2045 where advanced AI and robotics are seamlessly integrated into everyday life, enhancing productivity and personal care. The V.O.I.C.E. system revolutionizes communication and democratic participation, developing a sense of inclusion across all levels of society. Energy abundance, driven by solar and battery advancements, addresses climate change challenges, while the presence of humanoid robots in every household marks a new era of economic output and personal convenience. Read more about the 2045 world of Rising Choir: https://www.existentialhope.com/worlds/rising-choir-a-symphony-of-clashing-voicesAccess the Worldbuilding Course: https://www.existentialhope.com/existential-hope-worldbuildingExistential Hope was created to collect positive and possible scenarios for the future so that we can have more people commit to creating a brighter future, and to begin mapping out the main developments and challenges that need to be navigated to reach it. Existential Hope is a Foresight Institute project.Hosted by Allison Duettmann and Beatrice ErkersFollow Us: Twitter | Facebook | LinkedIn | Existential Hope InstagramExplore every word spoken on this podcast through Fathom.fm. Hosted on Acast. See acast.com/privacy for more information.
Evelyne Yehudit Bischof, MD, MPH, FEFIM, FMH is Associate Professor at the Shanghai University of Medicine & Health Sciences, Chief Associate Physician of Internal Medicine and Oncology at the University Hospital Renji of Shanghai Jiao Tong University School of Medicine, and Emergency Medicine Physician at the Shanghai East International Medical Center. She is a specialist in Internal Medicine with a research focus on Artificial Intelligence and Digital Health. Key HighlightsLongevity medicine is an AI and data-driven field evolving from precision medicine, lifestyle medicine, and geroscience that aims to extend patients' healthy lifespan. Using biomarkers of aging, aging clocks, and continuous data monitoring, longevity physicians can bring the patient's health from "within norms" to "optimal" or even best performance.About Foresight InstituteForesight Institute is a non-profit that supports the beneficial development of high-impact technologies. Since our founding in 1987 on a vision of guiding powerful technologies, we have continued to evolve into a many-armed organization that focuses on several fields of science and technology that are too ambitious for legacy institutions to support.Allison DuettmannThe President and CEO of Foresight Institute, Allison Duettmann directs the Intelligent Cooperation, Molecular Machines, Biotech & Health Extension, Neurotech, and Space Programs, alongside Fellowships, Prizes, and Tech Trees. 
She has also been pivotal in co-initiating the Longevity Prize, pioneering initiatives like Existentialhope.com, and contributing to notable works like "Superintelligence: Coordination & Strategy" and "Gaming the Future".Get Involved with Foresight:Apply: Virtual Salons & in-person WorkshopsDonate: Support Our Work – If you enjoy what we do, please consider this, as we are entirely funded by your donations!Follow Us: Twitter | Facebook | LinkedInNote: Explore every word spoken on this podcast through Fathom.fm, an innovative podcast search engine. Hosted on Acast. See acast.com/privacy for more information.
Why Worldbuilding?We consider worldbuilding an essential tool for creating inspiring visions of the future that can help drive real-world change. Worldbuilding helps us explore crucial 'what if' questions for the future, by constructing detailed scenarios that prompt us to ask: What actionable steps can we take now to realize these desirable outcomes?FloraTech – our 3rd place winnersIn the world of 2045, a network of bounded AI agents, imbued with robust ethical constraints and specialized capabilities, has become the backbone of a thriving, harmonious global society. These AI collaborators have unlocked unprecedented possibilities for localized, sustainable production of goods and services, empowering communities to meet their needs through advanced manufacturing technologies and smart resource allocation. Read more about the 2045 world of FloraTech: https://www.existentialhope.com/worlds/floratech-2045-co-evolving-with-technology-for-collective-flourishingAccess the Worldbuilding Course: https://www.existentialhope.com/existential-hope-worldbuildingExistential Hope was created to collect positive and possible scenarios for the future so that we can have more people commit to creating a brighter future, and to begin mapping out the main developments and challenges that need to be navigated to reach it. Existential Hope is a Foresight Institute project.Hosted by Allison Duettmann and Beatrice ErkersFollow Us: Twitter | Facebook | LinkedIn | Existential Hope InstagramExplore every word spoken on this podcast through Fathom.fm. Hosted on Acast. See acast.com/privacy for more information.
SpeakerStuart Buck is the Executive Director of the Good Science Project, and a Senior Advisor at the Social Science Research Council. Formerly, he was the Vice President of Research at Arnold Ventures. His efforts to improve research transparency and reproducibility have been featured in Wired, New York Times, The Atlantic, Slate, The Economist, and more. He has given advice to DARPA, IARPA (the CIA's research arm), the Department of Veterans Affairs, and the White House Social and Behavioral Sciences Team on rigorous research processes, as well as publishing in top journals (such as Science and BMJ) on how to make research more accurate.Session SummaryWorking in the field of meta-science, Stuart cares deeply about who gets funding and how, the bureaucracy engulfing researchers everywhere, how we can fund more innovative science, how to ensure results are reproducible and true, and much more. Among many things, he has funded renowned work showing that scientific research is often irreproducible, including the Reproducibility Projects in Psychology and Cancer Biology.Full transcript, list of resources, and art piece: https://www.existentialhope.com/podcastsExistential Hope was created to collect positive and possible scenarios for the future so that we can have more people commit to creating a brighter future, and to begin mapping out the main developments and challenges that need to be navigated to reach it. Existential Hope is a Foresight Institute project.Hosted by Allison Duettmann and Beatrice ErkersFollow Us: Twitter | Facebook | LinkedIn | Existential Hope InstagramExplore every word spoken on this podcast through Fathom.fm. Hosted on Acast. See acast.com/privacy for more information.
SpeakerTom Burns graduated with First Class Honours in his Bachelor of Science from Monash University in 2013, exploring the mathematical features of human perception of non-linguistic sounds. Shifting to philosophy, he completed a Master of Bioethics in 2014, analyzing the ethics of euthanasia legislation in Australia, followed by a World Health Organization Bioethics Fellowship in 2015, contributing to the Ebola epidemic response. In 2023, as a Fall Semester Postdoc at Brown University's ICERM, he contributed to the 'Math + Neuroscience' program. Recently affiliated with Timaeus, an AI safety organization, Tom is continuing his research at Cornell University's new SciAI Center from March 2024.Session Summary Neuroscience is a burgeoning field with many opportunities for novel research directions. Due to experimental and physical limitations, however, theoretical progress relies on imperfect and incomplete information about the system. Artificial neural networks, for which perfect and complete information is possible, therefore offer those trained in the neurosciences an opportunity to study intelligence at a level of granularity beyond comparison to biological systems, while still relevant to them. Additionally, applying neuroscience methods, concepts, and theory to AI systems offers a relatively under-explored avenue to make headway on the daunting challenges posed by AI safety — both for present-day risks, such as enshrining biases and spreading misinformation, and for future risks, including on existential scales. In this talk, Tom presents two emerging examples of interactions between neuroscience and AI safety. In the direction of ideas from neuroscience being useful for AI safety, he demonstrates how associative memory has become a tool for interpretability of Transformer-based models. 
In the opposite direction, he discusses how statistical learning theory and the developmental interpretability research program have applicability in understanding neuroscience phenomena, such as perceptual invariance and representational drift. Full transcript, list of resources, and art piece: https://www.existentialhope.com/podcastsExistential Hope was created to collect positive and possible scenarios for the future so that we can have more people commit to creating a brighter future, and to begin mapping out the main developments and challenges that need to be navigated to reach it. Existential Hope is a Foresight Institute project.Hosted by Allison Duettmann and Beatrice ErkersFollow Us: Twitter | Facebook | LinkedIn | Existential Hope InstagramExplore every word spoken on this podcast through Fathom.fm. Hosted on Acast. See acast.com/privacy for more information.
Jason Crawford is the founder of The Roots of Progress, where he writes and speaks about the history of technology and the philosophy of progress. Previously, he spent 18 years as a software engineer, engineering manager, and startup founder.Worldbuilding CourseThis session is a part of THE WORLDBUILDING CHALLENGE: CO-CREATING THE WORLD OF 2045. In this virtual and interactive course, we engage with the most pressing global challenges of our age—climate change, the risks of AI, and the complex ethical questions arising in the wake of new technologies. Our aim is to sharpen participants' awareness and equip them to apply their skills to these significant and urgent issues.Existential HopeExistential Hope was created to collect positive and possible scenarios for the future so that we can have more people commit to creating a brighter future, and to begin mapping out the main developments and challenges that need to be navigated to reach it. Existential Hope is a Foresight Institute project.Hosted by Allison Duettmann and Beatrice ErkersFollow Us: Twitter | Facebook | LinkedIn | Existential Hope InstagramExplore every word spoken on this podcast through Fathom.fm. Hosted on Acast. See acast.com/privacy for more information.
Today on Unsupervised Learning, Razib talks to long-time podcast favorite Samo Burja. Burja is the founder of Bismarck Analysis and Bismarck Brief, a Research Fellow at the Long Now Foundation and The Foresight Institute. He is also now the chair of the editorial board of Palladium Magazine. Already a four-time guest on Unsupervised Learning (he has previously shared his views on China's future, Russia's present and archaeology's past, his role at Bismarck Analysis and geopolitical uncertainty, reflected on his piece in Palladium on Finding "lost civilizations" and covered his ideas on "social technology," China, and the foreign view of America), the Slovenian-born Burja is one of the most original and incisive public intellectuals writing in America today. His 2021 piece, Why Civilization is Older than We Thought, brings a level of depth and rigor to historical heterodoxy that you rarely find anymore. Burja has also forwarded the “great founder theory” of historical change and formulated the idea of “live players” in social analysis. In this episode, Razib asks Burja for his sense of the world landscape in early 2024, revisiting conversations that delve into logistical details of the Russian invasion of Ukraine and the future of Chinese power. Burja continues to be pessimistic about the long-term prospects of European and Ukrainian resistance to a Russian war-machine that is geared toward grinding its way through lengthy battles of attrition. He also asserts that the current bearish attitude toward Chinese power is short-sighted, arguing that Western media in particular understates the technological and economic achievements of the PRC over the last generation. Burja believes that even if the “China bulls” were overly optimistic, the “China bears” go to excess in the opposite direction. Finally, he touches upon his vision for Palladium Magazine, a publication he has long contributed to, and which he now helms.
Jim talks with Samo Burja about the ideas in his recent article "Geothermal Energy Turns Planets Into Power Sources." They discuss the heat beneath the earth's surface, contributors to the heat, technological dependency between fracking & geothermal, the math of electricity, earthquake risk, the limits of current geology, the value of better drilling tech, new approaches to drilling, gyrotrons, plasma torches, whether our civilization actually needs more energy, the local optimum of fossil fuels, bureaucratic incentives in energy, investment of social surplus, scientific welfare, metascience, giving academic tenure to brilliant 25-year-olds, a defense-favoring military epoch, the math of geothermal vs other combinations of energy sources, visions of a clean-energy future, and much more. Episode Transcript "Geothermal Energy Turns Planets Into Power Sources," by Samo Burja JRS EP117 - Samo Burja on Societal Decline JRS EP125 - Samo Burja on Societal Decline: Part 2 JRS EP222 - Trent McConaghy on AI & Brain-Computer Interface Accelerationism (bci/acc) Samo Burja is the founder and President of Bismarck Analysis, a consulting firm that specializes in institutional analysis for clients in North America and Europe. Bismarck uses the foundational sociological research that Samo and his team have conducted over the past decade to deliver unique insights to clients about institutional design and strategy. Samo's studies focus on the social and material technologies that provide the foundation for healthy human societies, with an eye to engineering and restoring the structures that produce functional institutions. He has authored articles and papers on his findings. His manuscript, Great Founder Theory, is available online. He is also a Research Fellow at the Long Now Foundation and Senior Research Fellow in Political Science at the Foresight Institute. 
Samo has spoken about his findings at the World Economic Forum at Davos, Y Combinator's YC 120 conference, the Reboot American Innovation conference in Washington, D.C., and elsewhere. He spends most of his time in California and his native Slovenia.