Hear This Idea is a podcast showcasing new thinking in philosophy, the social sciences, and effective altruism. Each episode has an accompanying write-up at www.hearthisidea.com/episodes.
Fin Moorhouse and Luca Righetti
Max Smeets is a Senior Researcher at ETH Zurich's Center for Security Studies and Co-Director of Virtual Routes. You can find links and a transcript at www.hearthisidea.com/episodes/smeets. In this episode we talk about: The different types of cyber operations that a nation state might launch How international norms formed around what kinds of cyber attacks are “allowed” The challenges that even elite cyber forces face What capabilities future AI systems would need to drastically change the space You can get in touch through our website or on Twitter. Consider leaving us an honest review wherever you're listening to this — it's the best free way to support the show. Thanks for listening!
Tom Kalil is the CEO of Renaissance Philanthropy. He also served in the White House under two presidents (Obama and Clinton), where he helped establish incentive prizes in government through challenge.gov, in addition to dozens of science and tech programs. More recently, Tom served as the Chief Innovation Officer at Schmidt Futures, where he helped launch Convergent Research. Matt Clancy is an economist and a research fellow at Open Philanthropy. He writes ‘New Things Under the Sun', a living literature review of academic research on science and innovation. We talked about: What is ‘influence without authority'? Should public funders sponsor more innovation prizes? Can policy entrepreneurship be taught formally? Why isn't ultra-wealthy philanthropy much more ambitious? What's the optimistic case for increasing US state capacity? What was it like being principal staffer to Gordon Moore? What is Renaissance Philanthropy? You can get in touch through our website or on Twitter. Consider leaving us an honest review wherever you're listening to this — it's the best way to support the show. Thanks for listening!
Dr Cynthia Schuck-Paim is the Scientific Director of the Welfare Footprint Project, a scientific effort to quantify animal welfare to inform practice, policy, investing and purchasing decisions. You can find links and a transcript at www.hearthisidea.com/episodes/schuck. We discuss: How to begin thinking about quantifying animal experiences in a cross-comparable way Whether the ability to feel pain is unique to big-brained animals, or more widespread in the tree of life How fish farming compares to poultry and livestock farming How worried to be about bird flu zoonosis Whether different animal species experience time differently Whether positive experiences like joy could make life worth living for some farmed animals How animal welfare advocates can learn from anti-corruption nonprofits You can get in touch through our website or on Twitter. Consider leaving us an honest review wherever you're listening to this — it's the best way to support the show. Thanks for listening!
Dan Williams is a Lecturer in Philosophy at the University of Sussex and an Associate Fellow at the Leverhulme Centre for the Future of Intelligence (CFI) at the University of Cambridge. You can find links and a transcript at www.hearthisidea.com/episodes/williams. We discuss: If reasoning is so useful, why are we so bad at it? Do some bad ideas really work like ‘mind viruses'? Is the ‘luxury beliefs' concept useful? What's up with the idea of a ‘marketplace for ideas'? Are people shopping for new beliefs, or to rationalise their existing attitudes? How dangerous is misinformation, really? Can we ‘vaccinate' or ‘inoculate' against it? Will AI help us form more accurate beliefs, or will it persuade more people of unhinged ideas? Does fact-checking work? Under transformative AI, should we worry more about the suppression or the proliferation of counter-establishment ideas? You can get in touch through our website or on Twitter. Consider leaving us an honest review wherever you're listening to this — it's the best way to support the show. Thanks for listening!
Tamay Besiroglu is a researcher working at the intersection of economics and AI. He is currently the Associate Director of Epoch AI, a research institute investigating key trends and questions that will shape the trajectory and governance of AI. You can find links and a transcript at www.hearthisidea.com/episodes/besiroglu. We talk about: The argument for explosive growth from ‘increasing returns to scale' Does AI need to be able to automate R&D to cause rapid growth? Which theories of growth best explain the Industrial Revolution; and what do they predict from AI? What happens to human incomes under near-total job automation? Are regulations likely to slow down frontier AI progress enough to prevent this? Might AI go the way of nuclear power? Will AI hit on resource or power limits before explosive growth? Won't it run out of data first? Why aren't academic economists more interested in the prospect of explosive growth, if indeed it is so plausible? You can get in touch through our website or on Twitter. Consider leaving us an honest review wherever you're listening to this — it's the best way to support the show. Thanks for listening!
Jacob Trefethen oversees Open Philanthropy's science and science policy programs. He was a Henry Fellow at Harvard University, and has a B.A. from the University of Cambridge. You can find links and a transcript at www.hearthisidea.com/episodes/trefethen. We talk about: Life-saving health technologies which probably won't exist in 5 years (without a concerted effort) — like a widely available TB vaccine, and bugs which stop malaria spreading How R&D for neglected diseases works — How much does the world spend on it? How do drugs for neglected diseases go from design to distribution? No-brainer policy ideas for speeding up global health R&D Comparing health R&D to public health interventions (like bed nets) Comparing the social returns to frontier R&D (‘Progress Studies') to global health R&D Why is there no GiveWell-equivalent for global health R&D? Won't AI do all the R&D for us soon? You can get in touch through our website or on Twitter. Consider leaving us an honest review wherever you're listening to this — it's the best free way to support the show. Thanks for listening!
Elizabeth Seger is the Director of Technology Policy at Demos, a cross-party UK think tank with a program on trustworthy AI. You can find links and a transcript at www.hearthisidea.com/episodes/seger. In this episode we talk about the risks and benefits of open source AI models, including: What ‘open source' really means What is (and isn't) open about ‘open source' AI models How open source weights and code are useful for AI safety research How and when the costs of open sourcing frontier model weights might outweigh the benefits Analogies to ‘open sourcing nuclear designs' and the open science movement You can get in touch through our website or on Twitter. Consider leaving us an honest review wherever you're listening to this — it's the best free way to support the show. Thanks for listening! Note that this episode was recorded before the release of Meta's Llama 3.1 family of models. Note also that in the episode Elizabeth referenced an older version of the definition maintained by the Open Source Initiative (OSI), roughly version 0.0.3. The current OSI definition (0.0.8) now does a much better job of delineating between different model components.
Joe Carlsmith is a writer, researcher, and philosopher. He works as a senior research analyst at Open Philanthropy, where he focuses on existential risk from advanced artificial intelligence. He also writes independently about various topics in philosophy and futurism, and holds a doctorate in philosophy from the University of Oxford. You can find links and a transcript at www.hearthisidea.com/episodes/carlsmith. In this episode we talked about a report Joe recently authored, titled ‘Scheming AIs: Will AIs fake alignment during training in order to get power?'. The report “examines whether advanced AIs that perform well in training will be doing so in order to gain power later”, a behaviour Carlsmith calls scheming. We talk about: Distinguishing ways AI systems can be deceptive and misaligned Why powerful AI systems might acquire goals that go beyond what they're trained to do, and how those goals could lead to scheming Why scheming goals might perform better (or worse) in training than less worrying goals The ‘counting argument' for scheming AI Why goals that lead to scheming might be simpler than the goals we intend Things Joe is still confused about, and research project ideas You can get in touch through our website or on Twitter. Consider leaving us an honest review wherever you're listening to this — it's the best free way to support the show. Thanks for listening!
Eric Schwitzgebel is a professor of philosophy at the University of California, Riverside. His main interests include the connections between empirical psychology and the philosophy of mind, and the nature of belief. His book The Weirdness of the World can be found here. We talk about: The possibility of digital consciousness Policy ideas for avoiding major moral mistakes around digital consciousness Prospects for the science of consciousness, and why we likely won't have clear answers in time Why introspection is much less reliable than most people think How and why we invent false stories about our own choices without realising What randomly sampling people's experiences reveals about what we're doing with most of our attention The possibility of 'overlapping minds' How and why our actions might have infinite effects, both good and bad Whether it would be good news to learn that our actions have infinite effects, or that the universe is infinite in extent The best science fiction on digital minds and AI You can get in touch through our website or on Twitter. Consider leaving us an honest review wherever you're listening to this — it's the best free way to support the show. Thanks for listening!
Sonia Ben Ouagrham-Gormley is an associate professor at George Mason University and Deputy Director of its Biodefense Program. In this episode we talk about: Where the belief that 'bioweapons are easy to make' came from, and why it has been difficult to change Why transferring tacit knowledge is so difficult, and the particular challenges that rogue actors face And lastly, what Sonia makes of the AI-bio risk discourse, and what types of advances in technology would cause her concern You can get in touch through our website or on Twitter. Consider leaving us an honest review wherever you're listening to this — it's the best free way to support the show. Thanks for listening!
In this bonus episode we are sharing an episode from another podcast: How I Learned To Love Shrimp. It is co-hosted by Amy Odene and James Ozden, who together are "showcasing innovative and impactful ways to help animals". In this interview they speak to David Coman-Hidy, the former President of The Humane League, one of the largest farm animal advocacy organisations in the world. He now works as a Partner at Sharpen Strategy, coaching animal advocacy organisations.
Michelle Lavery is a Program Associate with Open Philanthropy's Farm Animal Welfare team, with a focus on the science and study of animal behaviour & welfare. In this episode we talk about: How do scientists study animal emotions in the first place? How is a "science" of animal emotion even feasible? When is it useful to anthropomorphise animals to understand them? How can you study the preferences of animals? How can you measure the “strength” of preferences? How do farmed animal welfare advocates relate to animal welfare science? Are their perceptions fair? How can listeners get involved with the study of animal emotions? You can get in touch through our website or on Twitter. Consider leaving us an honest review wherever you're listening to this — it's the best free way to support the show. Thanks for listening!
Dr Richard Bruns is a Senior Scholar at the Johns Hopkins Center for Health Security, and before that was a Senior Economist at the US Food and Drug Administration (the FDA). In this episode we talk about the importance of indoor air quality (IAQ) and how to improve it, including: Estimating the DALY cost of unclean indoor air from pathogens and particulate matter How much pandemic risk could be reduced from improving IAQ? How economists convert health losses into dollar figures — and how not to put a price on life Key interventions to improve IAQ: air filtration and germicidal UV light (especially Far-UVC light) Barriers to adoption, including UV-generated smog, and where empirical studies are needed most National and state-level policy changes to get these interventions adopted widely You can get in touch through our website or on Twitter. Consider leaving us an honest review wherever you're listening to this — it's the best free way to support the show. Thanks for listening!
Saloni Dattani is a Researcher at Our World in Data, and a founder & editor at the online magazine Works in Progress. She holds a PhD in psychiatric genetics from King's College London. You can see more links and a full transcript at hearthisidea.com/episodes/dattani. In this episode we talk about: The history of malaria and attempts to eradicate it The role of DDT and insecticide spraying campaigns — and why they were scaled down Why we didn't get a malaria vaccine sooner What comes after vaccine discovery — rolling out the RTS,S vaccine New funding models to accelerate similar life-saving research, like vaccines for TB and HIV Why so much global health data is missing, and why that matters How the ‘million deaths study' revealed that about 50,000 deaths per year from snakebites in India went uncounted by health agencies You can get in touch through our website or on Twitter. Consider leaving us an honest review wherever you're listening to this — it's the best free way to support the show. Thanks for listening!
Liv Boeree is a former poker champion turned science communicator and podcaster, with a background in astrophysics. In 2014, she founded the nonprofit Raising for Effective Giving, which has raised more than $14 million for effective charities. Before retiring from professional poker in 2019, Liv was the Female Player of the Year for three years running. Currently she hosts the Win-Win podcast (you'll enjoy it if you enjoy this podcast). You can see more links and a full transcript at hearthisidea.com/episodes/boeree. In this episode we talk about: Is the ‘poker mindset' valuable? Is it learnable? How and why to bet on your beliefs — and whether there are outcomes you shouldn't make bets on Would cities be better without public advertisements? What is Moloch, and why is it a useful abstraction? How do we escape multipolar traps? Why might advanced AI (not) act like profit-seeking companies? What's so important about complexity? What is complexity, for that matter? You can get in touch through our website or on Twitter. Consider leaving us an honest review wherever you're listening to this — it's the best free way to support the show. Thanks for listening!
Jon Y is the creator of the Asianometry YouTube channel and accompanying newsletter. He describes his channel as making "video essays on business, economics, and history. Sometimes about Asia, but not always." You can see more links and a full transcript at hearthisidea.com/episodes/asianometry. In this episode we talk about: Compute trends driving recent progress in Artificial Intelligence; The semiconductor supply chain and its geopolitics; The buzz around LK-99 and superconductivity. If you have any feedback, you can get a free book for filling out our new feedback form. You can also get in touch through our website or on Twitter. Consider leaving us a review wherever you're listening to this — it's the best free way to support the show. Thanks for listening!
Steven Teles is a Professor of Political Science at Johns Hopkins University and a Senior Fellow at the Niskanen Center. His work focuses on American politics, and he has written several books on topics such as elite politics, the judiciary, and mass incarceration. You can see more links and a full transcript at hearthisidea.com/teles. In this episode we talk about: The rise of the conservative legal movement; How ideas can come to be entrenched in American politics; Challenges in building a new academic field like "law and economics"; The limitations of doing quantitative evaluations of advocacy groups. If you have any feedback, you can get a free book for filling out our new feedback form. You can also get in touch through our website or on Twitter. Consider leaving us a review wherever you're listening to this — it's the best free way to support the show. Thanks for listening!
Guive Assadi is a Research Scholar at the Center for the Governance of AI. Guive's research focuses on the conceptual clarification of, and prioritisation among, potential risks posed by emerging technologies. He holds a master's in history from Cambridge University, and a bachelor's from UC Berkeley. In this episode, we discuss Guive's paper, Will Humanity Choose Its Future? We discuss: What is an 'evolutionary future', and would it count as an existential catastrophe? How did the agricultural revolution deliver a world which few people would have chosen? What does it mean to say that we are living in the dreamtime? Will it last? What competitive pressures in the future could drive the world to undesired outcomes? Digital minds Space settlement What measures could prevent an evolutionary future, and allow humanity to more deliberately choose its future? World government Strong global coordination Defensive advantage Should this all make us more or less hopeful about humanity's future? Ideas for further research Guive's recommended reading: Rationalist Explanations for War by James D. Fearon Meditations on Moloch by Scott Alexander The Age of Em by Robin Hanson What is a Singleton? by Nick Bostrom Other key links: Will Humanity Choose Its Future? by Guive Assadi Colder Wars by Gwern The Secret of Our Success: How Culture Is Driving Human Evolution, Domesticating Our Species, and Making Us Smarter by Joseph Henrich (and a review by Scott Alexander)
Michael Cohen is a DPhil student at the University of Oxford with Mike Osborne. He will be starting a postdoc with Professor Stuart Russell at UC Berkeley, with the Center for Human-Compatible AI. His research considers the expected behaviour of generally intelligent artificial agents, with a view to designing agents that we can expect to behave safely. You can see more links and a full transcript at www.hearthisidea.com/episodes/cohen. We discuss: What is reinforcement learning, and how is it different from supervised and unsupervised learning? Michael's recently co-authored paper titled 'Advanced artificial agents intervene in the provision of reward' Why might it be hard to convey what we really want to RL learners — even when we know exactly what we want? Why might advanced RL systems tamper with their sources of input, and why could this be very bad? What assumptions need to hold for this "input tampering" outcome? Is reward really the optimisation target? Do models "get reward"? What's wrong with the analogy between RL systems and evolution? Key links: Michael's personal website 'Advanced artificial agents intervene in the provision of reward' by Michael K. Cohen, Marcus Hutter, and Michael A. Osborne 'Pessimism About Unknown Unknowns Inspires Conservatism' by Michael Cohen and Marcus Hutter 'Intelligence and Unambitiousness Using Algorithmic Information Theory' by Michael Cohen, Badri Vallambi, and Marcus Hutter 'Quantilizers: A Safer Alternative to Maximizers for Limited Optimization' by Jessica Taylor 'RAMBO-RL: Robust Adversarial Model-Based Offline Reinforcement Learning' by Marc Rigter, Bruno Lacerda, and Nick Hawes Season 40 of Survivor
Katja Grace is a researcher and writer. She runs AI Impacts, a research project trying to incrementally answer decision-relevant questions about the future of artificial intelligence (AI). Katja blogs primarily at worldspiritsockpuppet, and indirectly at Meteuphoric, Worldly Positions, LessWrong and the EA Forum. We discuss: What is AI Impacts working on? Counterarguments to the basic AI x-risk case Reasons to doubt that superhuman AI systems will be strongly goal-directed Reasons to doubt that if goal-directed superhuman AI systems are built, their goals will be bad by human lights Aren't deep learning systems fairly good at understanding our 'true' intentions? Reasons to doubt that (misaligned) superhuman AI would overpower humanity The case for slowing down AI Is AI really an arms race? Are there examples from history of valuable technologies being limited or slowed down? What does Katja think about the recent open letter on pausing giant AI experiments? Why read George Saunders? Key links: World Spirit Sock Puppet (Katja's main blog) Counterarguments to the basic AI x-risk case Let's think about slowing down AI We don't trade with ants Thank You, Esther Forbes (George Saunders) You can see more links and a full transcript at hearthisidea.com/episodes/grace.
Michael Aird is a senior research manager at Rethink Priorities, where he co-leads the Artificial Intelligence Governance and Strategy team alongside Amanda El-Dakhakhni. Before that, he conducted nuclear risk research for Rethink Priorities and longtermist macrostrategy research for Convergence Analysis, the Center on Long-Term Risk, and the Future of Humanity Institute, which is where we know each other from. Before that, he was a teacher and a stand-up comedian. He previously spoke to us about impact-driven research on Episode 52. In this episode, we talk about: The basic case for working on existential risk from AI How to begin figuring out what to do to reduce the risks Threat models for the risks of advanced AI 'Theories of victory' for how the world mitigates the risks 'Intermediate goals' in AI governance What useful (and less useful) research looks like for reducing AI x-risk Practical advice for usefully contributing to efforts to reduce existential risk from AI Resources for getting started and finding job openings Key links: Apply to be a Compute Governance Researcher or Research Assistant at Rethink Priorities (applications open until June 12, 2023) Rethink Priorities' survey on intermediate goals in AI governance The Rethink Priorities newsletter The Rethink Priorities tab on the Effective Altruism Forum Some AI Governance Research Ideas compiled by Markus Anderljung & Alexis Carlier Strategic Perspectives on Long-term AI Governance by Matthijs Maas Michael's posts on the Effective Altruism Forum (under the username "MichaelA") The 80,000 Hours job board
Ben Garfinkel is a Research Fellow at the University of Oxford and Acting Director of the Centre for the Governance of AI. In this episode we talk about: An overview of the AI governance space, and disentangling concrete research questions that Ben would like to see more work on How existing arguments for the risks from transformative AI have held up, and Ben's personal motivations for working on global risks from AI GovAI's own work and opportunities for listeners to get involved Further reading and a transcript are available on our website: hearthisidea.com/episodes/garfinkel If you have any feedback, you can get a free book for filling out our new feedback form. You can also get in touch through our website or on Twitter. Consider leaving us a review wherever you're listening to this — it's the best free way to support the show. Thanks for listening!
Anders Sandberg is a researcher, futurist, transhumanist and author. He holds a PhD in computational neuroscience from Stockholm University, and is currently a Senior Research Fellow at the Future of Humanity Institute at the University of Oxford. His research covers human enhancement, exploratory engineering, and 'grand futures' for humanity. This episode is a recording of a live interview at EAGx Cambridge (2023). You can find upcoming effective altruism conferences here: www.effectivealtruism.org/ea-global We talk about: What is exploratory engineering and what is it good for? Progress on whole brain emulation Are we near the end of humanity's tech tree? Is diversity intrinsically valuable in grand futures? How Anders does research Virtue ethics for civilisations Anders' takes on AI risk and whether LLMs are close to general intelligence And much more! Further reading and a transcript are available on our website: hearthisidea.com/episodes/sandberg-live If you have any feedback, you can get a free book for filling out our new feedback form. You can also get in touch through our website or on Twitter. Consider leaving us a review wherever you're listening to this — it's the best free way to support the show. Thanks for listening!
Rory Stewart is the President of GiveDirectly and a visiting fellow at Yale's Jackson Institute for Global Affairs. Before that, Rory was (amongst other things) a Member of Parliament in the UK, a Professor in Human Rights at Harvard, and a diplomat. He is also the author of several books and co-hosts the podcast The Rest Is Politics. In this episode, we talk about: The moral case for radically scaling cash transfers What we can do to raise governments' ambitions to end global poverty What Rory learned about aid since being Secretary of State for International Development Further reading is available on our website: hearthisidea.com/episodes/stewart If you have any feedback, you can get a free book for filling out our new feedback form. You can also get in touch through our website or on Twitter. Consider leaving us a review wherever you're listening to this — it's the best free way to support the show. Thanks for listening!
Jaime Sevilla is the Director of Epoch, a team of researchers investigating and forecasting the development of advanced AI. This is his second time on the podcast. Over the next few episodes, we will be exploring the potential for catastrophe caused by advanced artificial intelligence. Why? First, you might think that AI is likely to become transformatively powerful within our lifetimes. Second, you might think that such transformative AI could result in catastrophe unless we're very careful about how it gets implemented. This episode is about understanding the first of those two claims. Fin spoke with Jaime about: We've seen a crazy amount of progress in AI capabilities in the last few months, even weeks. How should we think about that progress continuing into the future? How has the amount of compute used to train AI models been changing over time? What about algorithmic efficiency? Will data soon become a bottleneck in training state-of-the-art text models? Further reading is available on our website: hearthisidea.com/episodes/sevilla If you have any feedback, you can get a free book for filling out our new feedback form. You can also get in touch through our website or on Twitter. Consider leaving us a review wherever you're listening to this — it's the best free way to support the show. Thanks for listening!
Chris Miller is an Associate Professor of International History at Tufts University and author of the book “Chip War: The Fight for the World's Most Critical Technology” (the Financial Times Business Book of the Year). He is also a Visiting Fellow at the American Enterprise Institute, and Eurasia Director at the Foreign Policy Research Institute. Over the next few episodes we will be exploring the potential for catastrophe caused by advanced artificial intelligence. But before we look ahead, we wanted to give a primer on where we are today: on the history and trends behind the development of AI so far. In this episode, we discuss: How semiconductors have historically been related to US military strategy How the Taiwanese company TSMC became such an important player in this space — while other countries' attempts have failed What the CHIPS Act signals about attitudes to compute governance in the decade ahead Further reading is available on our website: hearthisidea.com/episodes/miller If you have any feedback, you can get a free book for filling out our new feedback form. You can also get in touch through our website or on Twitter. Consider leaving us a review wherever you're listening to this — it's the best free way to support the show. Thanks for listening!
AI might bring huge benefits — if we avoid the risks. This episode is a rebroadcast of an article written for 80,000 Hours: Preventing an AI-related catastrophe. It was written by Benjamin Hilton and narrated by Perrin Walker for Type III Audio. The full url is: 80000hours.org/problem-profiles/artificial-intelligence Why are we sharing this article on our podcast feed? Over the next few months, we are planning to do a bunch of episodes on artificial intelligence. But first, we wanted to share an introduction to the problem: something which explains why AI might pose existential-level threats to humanity, and why you might prioritise this problem when you're thinking about what to work on or just what to learn more about. And we don't think we're going to be able to do a better job than this article. You can view all our episodes at hearthisidea.com, and you can give feedback at feedback.hearthisidea.com/listener.
A full writeup of this episode, including references and a transcript, is available on our website: https://hearthisidea.com/episodes/robichaud. Carl Robichaud co-leads Longview Philanthropy's programme on nuclear weapons. We discuss: Lessons from the Ukraine crisis China's future as a nuclear power Nuclear near-misses The Reykjavik Summit, Acheson–Lilienthal Report and Baruch Plan Lessons from nuclear risk for other emerging technological risks What's happened to philanthropy aimed at reducing risks from nuclear weapons, and what philanthropy can support today If you have any feedback, you can get a free book for filling out our new feedback form. You can also get in touch through our website or on Twitter. Consider leaving us a review wherever you're listening to this — it's the best free way to support the show. Thanks for listening!
Damon Binder is a research analyst at Open Philanthropy. His research focuses on potential risks from pandemics and from biotechnology. He previously worked as a research scholar at the University of Oxford's Future of Humanity Institute, where he studied existential risks. Prior to that he completed his PhD in theoretical physics at Princeton University. We discuss: How did early states manage large populations? What explains the hockey-stick shape of world economic growth? Did urbanisation enable more productive farming, or vice versa? What does transformative AI mean for growth? Would 'degrowth' benefit the world? What do theoretical physicists actually do, and what are they still trying to understand? Why not just run bigger physics experiments to solve the latest problems? What could the history of physics tell us about its future? In what sense are the universe's constants fine-tuned? Will the universe ever just... end? Why might we expect digital minds to be a big deal? Links Damon's list of book recommendations A Collection of Unmitigated Pedantry (history blog) Cold Takes by Holden Karnofsky (blog on futurism and AI). Highlight from Cold Takes: The Most Important Century series of posts Crusader Kings Europa Universalis The Age of Em by Robin Hanson The Five Ages of the Universe by Fred Adams You can find more episodes and links at our website, hearthisidea.com. (This episode is a bonus episode because it's less focused on topics in effective altruism than normal)
A full writeup of this episode, including references and a transcript, is available on our website: https://hearthisidea.com/episodes/nemet. Greg Nemet is a Professor at the University of Wisconsin–Madison in the La Follette School of Public Affairs and an Andrew Carnegie Fellow. He is also the author of How Solar Energy Became Cheap. We discuss: The distinct phases that helped solar PV move down its learning curve What lessons we can learn for accelerating and shaping other technologies Theories about National Innovation Systems and lock-in If you have any feedback, you can get a free book for filling out our new feedback form. You can also get in touch through our website or on Twitter. Consider leaving us a review wherever you're listening to this — it's the best free way to support the show. Thanks for listening!
A full writeup of this episode, including references and a transcript, is available on our website: https://hearthisidea.com/episodes/erwan Dewi Erwan is a co-founder of BlueDot Impact, the Biosecurity Advisor to the Cambridge Existential Risk Initiative, and the former Executive Director of Effective Altruism Cambridge. We discuss: Setting up BlueDot Impact and scaling pilot programmes Talent gaps in the EA community and more strategic goal setting Career advice and leadership skills If you have any feedback, you can get a free book for filling out our new feedback form. You can also get in touch through our website or on Twitter. Consider leaving us a review wherever you're listening to this — it's the best free way to support the show. Thanks for listening!
A full writeup of this episode, including references and a transcript, is available on our website: hearthisidea.com/episodes/pannu-monrad Jassi Pannu is a Resident Physician at Stanford, a Visiting Scholar at Johns Hopkins, and a Fellow at the Emerging Leaders in Biosecurity Initiative. Joshua Monrad is a Biosecurity Program Officer at Effective Giving and a Researcher at Oxford's Future of Humanity Institute. We discuss: The post-COVID biosecurity landscape, including the American Pandemic Preparedness Plan The Biological Weapons Convention and current issues in dual-use research The role of antivirals, increasing vaccine capacity, and market failures Similarities and differences between GCBR mitigation and general pandemic preparedness How some interventions are underpinned by global cooperation If you have any feedback, you can get a free book for filling out our new feedback form. You can also get in touch through our website or on Twitter. Consider leaving us a review wherever you're listening to this — it's the best free way to support the show. Thanks for listening!
A full writeup of this episode, including references and a transcript, is available on our website: hearthisidea.com/episodes/mathieu Edouard Mathieu is the Head of Data at Our World in Data (OWID), a scientific online publication that focuses on large global problems such as poverty, disease, hunger, climate change, war, existential risks, and inequality. We discuss: What Ed learned from working with governments and the WHO A simple change the WHO could make to radically improve how countries share data for the next pandemic The idea of 'experimental longtermism' How Ed is thinking about collecting data on transformative artificial intelligence and other potential existential risks Figuring out the impact of making everyone slightly better-informed Lessons for starting a career in impact-oriented data science And finally... Ed's favourite OWID chart If you have any feedback, you can get a free book for filling out our new feedback form. You can also get in touch through our website or on Twitter. Consider leaving us a review wherever you're listening to this — it's the best free way to support the show. Thanks for listening!
A full writeup of this episode, including references and a transcript, is available on our website: hearthisidea.com/episodes/alexanian-ahuja Tessa Alexanian is the Safety & Security Program Officer at the iGEM Foundation, which organises a worldwide competition in synthetic biology and helps foster a collaborative community. She is a fellow at the Emerging Leaders in Biosecurity Initiative, was previously a fellow at the Foresight Institute, and co-founded the East Bay Biosecurity Group. Janvi Ahuja is a PhD student in computational biology at the University of Oxford, where she is affiliated with the Future of Humanity Institute and works with MIT's Nucleic Acid Observatory on metagenomic sequencing. Janvi is also a fellow at the Emerging Leaders in Biosecurity Initiative, and was previously an intern at the UN's Biological Weapons Convention Implementation Support Unit (ISU). We discuss: How synthetic biology began and why it is an exploding field The iGEM competition and how to get involved in the community Challenges and trade-offs in creating a culture of responsibility in synthetic biology Emerging risks in synthetic biology and what this means for global catastrophic risks Technical projects in biosecurity and career advice for how to get involved If you have any feedback, you can get a free book for filling out our new feedback form. You can also get in touch through our website or on Twitter. Consider leaving us a review wherever you're listening to this — it's the best free way to support the show. Thanks for listening!
Michael Aird is a senior research manager at Rethink Priorities, where he co-leads the Artificial Intelligence Governance and Strategy team alongside Amanda El-Dakhakhni. Before that, he conducted nuclear risk research for Rethink Priorities and longtermist macrostrategy research for Convergence Analysis, the Center on Long-Term Risk, and the Future of Humanity Institute, which is where we know each other from. Before that, he was a teacher and a stand-up comedian. We discuss: Whether you should stay in academia if you want to do impactful research How to start looking for roles at impact-driven research organisations What simple changes can improve how you write about your research The uses of 'reductionism' and quantitative thinking The concept of ‘reasoning transparency' Michael's experience investigating nuclear security Key links: Michael's posts on the EA Forum Interested in EA/longtermist research careers? Here are my top recommended resources Don't think, just apply! (usually) List of EA funding opportunities Rethink Priorities Reasoning Transparency A central directory for open research questions You can find more links, and read the full transcript, in this episode's write-up: hearthisidea.com/episodes/aird. If you have any feedback, you can get a free book for filling out our new feedback form. You can also get in touch through our website or on Twitter. Consider leaving us a review wherever you're listening to this — it's the best free way to support the show. Thanks for listening!
A full writeup of this episode is available on our website: hearthisidea.com/episodes/esvelt-sandbrink. Kevin Esvelt is an assistant professor at the MIT Media Lab, where he is director of the Sculpting Evolution group, which invents new ways to study and influence the evolution of ecosystems. He helped found the SecureDNA Project and the Nucleic Acid Observatory, both of which we discuss in the episode. Esvelt is also known for proposing the idea of using CRISPR to implement gene drives. Jonas Sandbrink is a researcher and DPhil student at the Future of Humanity Institute. He is a fellow both at the Emerging Leaders in Biosecurity Initiative at the Johns Hopkins Center for Health Security and with the Ending Bioweapons Program at the Council on Strategic Risks. Jonas' research interests include the dual-use potential of life sciences research and biotechnology, as well as fast response countermeasures like vaccine platforms. We discuss: The concepts of differential technological development, dual-use research, transfer risks in research, 'information loops', and responsible access to biological data Strengthening norms against risky biological research, such as novel virus identification and gain-of-function research Connection-based warning systems and metagenomic sequencing technology Advanced PPE, Far-UVC sterilisation technology, and other countermeasures against pandemics potentially worse than Covid Analogies between progress in biotechnology and the early history of nuclear weapons How to use your career to work on these problems — even if you don't have a background in biology. You can read more about the topics we cover in this episode's write-up: hearthisidea.com/episodes/esvelt-sandbrink. If you have any feedback, you can get a free book for filling out our new feedback form. You can also get in touch through our website or on Twitter. Consider leaving us a review wherever you're listening to this — it's the best free way to support the show. Thanks for listening!
In this episode, Fin and Luca celebrate 50 episodes of Hear This Idea: all the highs, lows, and near-disasters along the way. We chat about: The HTI origin story Favourite behind the scenes moments Should we argue with guests more? Mistakes we've made (and are still making?) What we've learned about asking better questions Starting projects from scratch Ideas for the next 50 episodes Future topics, dream guests Why does this podcast exist? Podcasting tips A potential new program Our media recommendations
Professor Doyne Farmer is the Baillie Gifford Professor in Mathematics at Oxford, the Director of the Complexity Economics programme at INET, and an External Professor at the Santa Fe Institute. In our conversation we discuss: How Doyne and his friends used physics and hidden computers to beat the roulette wheel in Las Vegas casinos Advancing economic models to better predict business cycles and knock-on effects from extreme events like Covid-19 Techniques for predicting technological progress and long-run growth, with specific applications to energy technologies and climate change You can read more about the topics we cover in this episode's write-up: hearthisidea.com/episodes/farmer If you have any feedback or suggestions for future guests, feel free to get in touch through our website. Consider leaving us a review wherever you're listening to this — it's the best free way to support the show. If you want to support the show more directly, consider leaving a tip. Thanks for listening!
Ajay Karpur is a Senior Program Associate in Biosecurity and Pandemic Preparedness at Open Philanthropy. He's hoping to start tweeting again soon, at @ajaykarpur. Joining as a guest co-host on this episode was Janvi Ahuja, who is a PhD student in computational biology at Oxford University, and part of the Johns Hopkins Center for Health Security ‘Emerging Leaders in Biosecurity' program. She's tweeting at @jn_ahuja. In our conversation, we discuss: What is metagenomic sequencing, and why could it matter so much for it to become affordable and ubiquitous? How and why can nonprofits help positive technologies become more accessible? How emerging biotech can help the world respond better to the next emerging (potential) pandemic Refuges against biological threats Analogies between fire protection and pathogen protection through monitoring and cleaner air Career advice for entering biosecurity, especially with an engineering background. If you have any feedback or suggestions for future guests, feel free to get in touch through our website. Consider leaving us a review wherever you're listening to this — it's the best free way to support the show. If you want to support the show more directly, consider leaving a tip. Thanks for listening!
Dr Spencer R. Weart served as the Director of the Center for History of Physics at the American Institute of Physics from 1974 to 2009. He is the author of several books, including The Discovery of Global Warming and The Rise of Nuclear Fear. In our conversation, we discuss: How climate science emerged, what it took for scientists to form a consensus in the mid-1960s, and how that consensus has evolved since The IPCC's emerging understanding of so-called “tipping points” in the climate system, and our current best guesses as to what kind of threat they pose Exploring the changing cultural relationship humans have had with nuclear energy — and why it remains stigmatised amongst many environmental groups You can read more about the topics we cover in this episode's write-up: hearthisidea.com/episodes/weart If you have any feedback or suggestions for future guests, feel free to get in touch through our website. Consider leaving us a review wherever you're listening to this — it's the best free way to support the show. If you want to support the show more directly, consider leaving a tip. Thanks for listening!
Jason Crawford is the founder of The Roots of Progress, a nonprofit dedicated to establishing a new philosophy of progress for the 21st century. He writes and speaks about the history and philosophy of progress, especially in technology and industry. In our conversation we discuss — What progress is, and why it matters (maybe more than you think) How to think about resource constraints — why they are sometimes both real and surmountable The 'low-hanging fruit' explanation for stagnation, and prospects for speeding up innovation Tradeoffs between progress and (existential) safety Differences between the Progress Studies and Effective Altruism communities You can read more about the topics we cover in this episode's write-up: hearthisidea.com/episodes/crawford If you have any feedback or suggestions for future guests, feel free to get in touch through our website. Consider leaving us a review wherever you're listening to this — it's the best free way to support the show. If you want to support the show more directly, consider leaving a tip. Thanks for listening!
Cristina Bicchieri is the S. J. Patterson Harvie Professor of Social Thought and Comparative Ethics at the University of Pennsylvania. In our conversation we discuss — How to define social norms and distinguish them from similar concepts How social norms evolve and why they often persist, even in situations where they are harmful Real world policy applications of social norms, including covid and high-level decision making You can read more about the topics we cover in this episode's write-up: hearthisidea.com/episodes/bicchieri If you have any feedback or suggestions for future guests, feel free to get in touch through our website. Consider leaving us a review wherever you're listening to this — it's the best free way to support the show. If you want to support the show more directly, consider leaving a tip. Thanks for listening!
Lord Bird is the co-founder of The Big Issue, a magazine supporting street vendors who are homeless, a crossbench peer in the House of Lords, and co-chair of the All-Party Parliamentary Group on Future Generations. In our conversation, we discuss — The Future Generations Bill, currently being discussed in the UK Parliament Causes of political short-sightedness Broader social issues facing the UK You can read more about the topics we cover in this episode's write-up: hearthisidea.com/episodes/bird If you have any feedback or suggestions for future guests, feel free to get in touch through our website. Consider leaving us a review wherever you're listening to this — it's the best free way to support the show. If you want to support the show more directly, consider leaving a tip. Thanks for listening!
Sam Hilton is the Research Director at Charity Entrepreneurship, the Parliamentary Coordinator at the UK's All-Party Parliamentary Group for Future Generations, and a Research Affiliate at the Center for the Study of Existential Risk. In our conversation, we discuss — Charity Entrepreneurship's plans for the 2022 Incubation Program Exploratory Altruism and finding new cause areas Lessons for longtermist policy and thoughts on the Future Generations Bill You can read more about the topics we cover in this episode's write-up: hearthisidea.com/episodes/hilton If you have any feedback or suggestions for future guests, feel free to get in touch through our website. Consider leaving us a review wherever you're listening to this — it's the best free way to support the show. If you want to support the show more directly, consider leaving a tip. Thanks for listening!
Glen Weyl is Microsoft's Office of the Chief Technology Officer Political Economist and Social Technologist (OCTOPEST), where he advises Microsoft's senior leaders on macroeconomics, geopolitics and the future of technology. Glen also co-authored Radical Markets: Uprooting Capitalism and Democracy for a Just Society; a book about "Revolutionary ideas on how to use markets to bring about fairness and prosperity for all". In our conversation, we discuss — Quadratic voting and funding The new political divides of the 21st century What the history of personal computing teaches us about the possibility of shaping technological progress Glen's impression of rationalism, effective altruism and longtermism Why and how longtermism should be more generative of new ideas Underrated thinkers relevant for today
Habiba Islam is a member of the 80,000 Hours career advising team. First, the two most important links: Apply to receive free career coaching: 80000hours.org/hti Apply to join the 80k career advising team here In this conversation, we talk about — How to begin planning a high-impact career What one-on-one calls with 80k are like (and why you might consider applying) Different motivations and framings for longtermism The case for being ambitious if you want to do good in your career Concrete next steps for beginning the process of career planning You can read more about the topics we cover in this episode's write-up: hearthisidea.com/episodes/habiba. If you have any feedback or suggestions for future guests, feel free to get in touch through our website. Consider leaving us a review wherever you're listening to this — it's the best free way to support the show. If you want to support the show more directly, consider leaving a tip. Thanks for listening!
Michael Bhaskar is a writer, researcher and publisher. He is a former consultant Writer in Residence at DeepMind, and most recently he wrote a book called Human Frontiers, which tries to answer the question: “why has the flow of big, world-changing ideas slowed down?” In our conversation, we discuss — The 'Adams curve' How so much of the modern world was invented in exceptional 20th century research institutes such as Bell Labs and Xerox PARC Evidence for slowdown in new ideas from analysing the patent record Whether scientific progress is limitless, or whether there are things we'll never be able to know Whether 'big ideas' are also slowing in the arts Reasons for optimism about progress in big ideas, especially from advanced AI You can read more about the topics we cover in this episode's write-up: hearthisidea.com/episodes/michael/. If you have any feedback or suggestions for future guests, feel free to get in touch through our website. Consider leaving us a review wherever you're listening to this — it's the best free way to support the show. If you want to support the show more directly, consider leaving a tip. Thanks for listening!
Mike Hinge is a Senior Economist at ALLFED (Alliance to Feed the Earth in Disasters). In our interview, we discuss: Why nuclear fallout blocking sunlight could be one of the most extreme threats to the global food supply, and how this compares to the risk from climate change; How scientists and ALLFED model the fallout of nuclear winter, how it affects crop yields, and how it changes food prices for the global poor; Potential technologies for feeding everyone in case of a disaster, such as repurposed paper mills and seaweed, and how they could help us recover; Modeling the economic and political challenges of feeding everyone in the aftermath of a disaster You can read more about the topics we cover in this episode's write-up: hearthisidea.com/episodes/mike/. You can email mike at: mike [at] allfed [dot] info. If you have any feedback or suggestions for future guests, feel free to get in touch through our website. Consider leaving us a review wherever you're listening to this — it's the best free way to support the show. If you want to support the show more directly, consider leaving a tip. Thanks for listening!
Keith Frankish is a philosopher of mind. He is an Honorary Reader at the University of Sheffield, UK, a Visiting Research Fellow with The Open University, and an Adjunct Professor with the Brain and Mind Programme at the University of Crete. In our interview, we discuss: What is the hard problem of consciousness? What is the illusionist theory of consciousness? What does illusionism have to do with ethics? When should we care for robot dogs? How should academics use Twitter? You can read more about the topics we cover in this episode's write-up: hearthisidea.com/episodes/keith. If you have any feedback or suggestions for future guests, feel free to get in touch through our website. Consider leaving us a review wherever you're listening to this — it's the best free way to support the show. If you want to support the show more directly, consider leaving a tip. Thanks for listening!
Christoph Winter is an Assistant Professor of Law at ITAM in Mexico, a Visiting Scholar in Psychology at Harvard, and the founder of the Legal Priorities Project. In our interview, we discuss: A global survey of legal academics about protecting future generations; How constitutional law might best help in this effort; Endangerment law and the "risk of creating a risk"; And lots more! You can read more about the topics we cover in this episode's write-up: hearthisidea.com/episodes/christoph/. If you have any feedback or suggestions for future guests, feel free to get in touch through our website. Consider leaving us a review wherever you're listening to this — it's the best free way to support the show. If you want to support the show more directly, consider leaving a tip. Thanks for listening!
Gillian Hadfield is Director of the Schwartz Reisman Institute for Technology and Society. She is a Professor of Law and Professor of Strategic Management at the University of Toronto. In our interview, we discuss: Why humans invented law, and what Gillian describes as "the demand side" for legal infrastructure; Why social norms continue to be important today and how Ancient Athens managed to use a decentralised system of collective punishment; The case for "regulatory markets" in governing artificial intelligence, and how governments in the 21st century need to keep up with rapid advances in technology; "Silly rules" and why seemingly arbitrary norms are actually really important in creating society's normative infrastructures You can read more about the topics we cover in this episode's write-up: hearthisidea.com/episodes/gillian. If you have any feedback or suggestions for future guests, feel free to get in touch through our website. Consider leaving us a review wherever you're listening to this — it's the best free way to support the show. If you want to support the show more directly, consider leaving a tip. Thanks for listening!