Podcasts about progress studies

  • 26 podcasts
  • 41 episodes
  • 44m avg. duration
  • 1 new episode per month
  • Latest episode: Jan 20, 2025

POPULARITY

[Popularity chart, 2017–2024]


Best podcasts about progress studies

Latest podcast episodes about progress studies

Podcast Notes Playlist: Latest Episodes
Tyler Cowen - the #1 bottleneck to AI progress is humans

Podcast Notes Playlist: Latest Episodes

Play Episode Listen Later Jan 20, 2025


The Lunar Society: Read the notes at podcastnotes.org. Don't forget to subscribe for free to our newsletter, the top 10 ideas of the week, every Monday.

I interviewed Tyler Cowen at the Progress Conference 2024. As always, I had a blast. This is my fourth interview with him – and yet I'm always hearing new stuff. We talked about why he thinks AI won't drive explosive economic growth, the real bottlenecks on world progress, him now writing for AIs instead of humans, and the difficult relationship between being cultured and fostering growth – among many other things in the full episode.

Thanks to the Roots of Progress Institute (with special thanks to Jason Crawford and Heike Larson) for such a wonderful conference, and to FreeThink for the videography. Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here.

Sponsors: I'm grateful to Tyler for volunteering to say a few words about Jane Street. It's the first time that a guest has participated in the sponsorship. I hope you can see why Tyler and I think so highly of Jane Street. To learn more about their open roles, go to janestreet.com/dwarkersh.

Timestamps:
(00:00:00) Economic Growth and AI
(00:14:57) Founder Mode and increasing variance
(00:29:31) Effective Altruism and Progress Studies
(00:33:05) What AI changes for Tyler
(00:44:57) The slow diffusion of innovation
(00:49:53) Stalin's library
(00:52:19) DC vs SF vs EU

Get full access to Dwarkesh Podcast at www.dwarkeshpatel.com/subscribe

Podcast Notes Playlist: Business
Tyler Cowen - the #1 bottleneck to AI progress is humans

Podcast Notes Playlist: Business

Play Episode Listen Later Jan 20, 2025 59:45


The Lunar Society – Key Takeaways:
  • While the AIs will be smart and conscientious, they will still face human bottlenecks, such as bureaucracies and committees at universities.
  • We may not notice AI productivity gains on shorter timeframes: even if they only boost economic growth by 0.5% per year, that is a massive productivity gain over 30-40 years!
  • "There are going to be bottlenecks all along the way. It's going to be a tough slog – like the printing press, like electricity. The people who study diffusion of new technologies never think there will be rapid takeoff." – Tyler Cowen
  • Opposition to AI will only increase as the technology starts to change what the world looks like.
  • There is increasing variance in the human distribution: young people at the top are doing much better and are more impressive than they were in earlier times. The very bottom of the distribution is also getting better. But the "thick middle" is getting worse.
  • Since humans are an input "other than the AI", humans will rise in marginal value, even if we will have to learn to do different things.
  • On popularity and progress: there is a danger that as a thing becomes more popular, at the margin it becomes much worse.
  • The Tyler Cowen investment philosophy: buy and hold, diversify, hold on tight, make sure you have some cheap hobbies and can cook.
  • Tech diffusion is universally pretty slow: while people in the Bay Area are the smartest, most dynamic, and most ambitious, they tend to overvalue intelligence.
  • On progress: war should always be the main concern during a period of rapid technological progress; throughout history, when new technologies emerge, they are turned into instruments of war – and terrible things can happen.

Read the full notes @ podcastnotes.org


The Lunar Society
Tyler Cowen - The #1 Bottleneck to AI Progress Is Humans

The Lunar Society

Play Episode Listen Later Jan 9, 2025 59:45


I interviewed Tyler Cowen at the Progress Conference 2024. As always, I had a blast. This is my fourth interview with him – and yet I'm always hearing new stuff. We talked about why he thinks AI won't drive explosive economic growth, the real bottlenecks on world progress, him now writing for AIs instead of humans, and the difficult relationship between being cultured and fostering growth – among many other things in the full episode.

Thanks to the Roots of Progress Institute (with special thanks to Jason Crawford and Heike Larson) for such a wonderful conference, and to FreeThink for the videography. Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here.

Sponsors: I'm grateful to Tyler for volunteering to say a few words about Jane Street. It's the first time that a guest has participated in the sponsorship. I hope you can see why Tyler and I think so highly of Jane Street. To learn more about their open roles, go to janestreet.com/dwarkersh.

Timestamps:
(00:00:00) Economic Growth and AI
(00:14:57) Founder Mode and increasing variance
(00:29:31) Effective Altruism and Progress Studies
(00:33:05) What AI changes for Tyler
(00:44:57) The slow diffusion of innovation
(00:49:53) Stalin's library
(00:52:19) DC vs SF vs EU

Get full access to Dwarkesh Podcast at www.dwarkeshpatel.com/subscribe

Slate Star Codex Podcast
Notes From The Progress Studies Conference

Slate Star Codex Podcast

Play Episode Listen Later Nov 12, 2024 34:09


Tyler Cowen is an economics professor and blogger at Marginal Revolution. Patrick Collison is the billionaire founder of the online payments company Stripe. In 2019, they wrote an article calling for a discipline of Progress Studies, which would figure out what progress was and how to increase it. Later that year, tech entrepreneur Jason Crawford stepped up to spearhead the effort. The immediate reaction was mostly negative. There were the usual gripes that “progress” was problematic because it could imply that some cultures/times/places/ideas were better than others. But there were also more specific objections: weren't historians already studying progress? Wasn't business academia already studying innovation? Are you really allowed to just invent a new field every time you think of something it would be cool to study? It seems like you are. Five years later, Progress Studies has grown enough to hold its first conference. I got to attend, and it was great. https://www.astralcodexten.com/p/notes-from-the-progress-studies-conference 

The Studies Show
Episode 50: Toxoplasma

The Studies Show

Play Episode Listen Later Sep 24, 2024 68:48


Been feeling a little strange lately? A bit impulsive, maybe? Feeling a sudden urge to get a pet cat? Sorry to say it, but maybe you're infected with a scary mind-control parasite: specifically, the parasite Toxoplasma gondii.

Or… maybe not. It turns out that, despite popular belief, the supposed behavioural effects of T. gondii are supported by very weak scientific evidence. In this episode of The Studies Show, Tom and Stuart explain.

The Studies Show is sponsored by Works in Progress magazine. It's the no. 1 destination online if you're interested in "Progress Studies": research on how things got better in the past and might get better in future. Whether it's medical technology, construction materials, or policy innovation, you can read detailed essays on it at worksinprogress.co.

Show notes:
  • Alex Tabarrok's review of Parasite, arguing people took the wrong lessons from the film
  • Zombie-ant fungus description
  • Theory for how the horsehair worm affects its host
  • Scepticism about whether it involves "mind control"
  • Description of acute toxoplasmosis
  • Tiny study on rats and cat urine
  • Well-cited (but also tiny) PNAS study on rats, mice, and cat urine
  • Review of toxoplasma and behavioural effects
  • Very useful sceptical article about toxoplasma's effects on rodent and human behaviour (source of the quotes on Alzheimer's)
  • Another (somewhat older) sceptical article
  • Study on getting humans to smell cat (and other) urine
  • Preprint on (self-reported!) toxoplasma infection and psychological traits
  • Initial, smaller entrepreneurship study
  • Later, larger entrepreneurship study (from Denmark)
  • Meta-analysis on whether childhood cat exposure is related to schizophrenia
  • Dunedin Cohort Study paper on toxoplasma and life outcomes
  • "The Toxoplasma of Rage" on Slate Star Codex

Credits: The Studies Show is produced by Julian Mayers at Yada Yada Productions.

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit www.thestudiesshowpod.com/subscribe

Hear This Idea
#78 – Jacob Trefethen on Global Health R&D

Hear This Idea

Play Episode Listen Later Sep 8, 2024 150:16


Jacob Trefethen oversees Open Philanthropy's science and science policy programs. He was a Henry Fellow at Harvard University, and has a B.A. from the University of Cambridge. You can find links and a transcript at www.hearthisidea.com/episodes/trefethen

In this episode we talk about:
  • Life-saving health technologies which probably won't exist in 5 years (without a concerted effort) – like a widely available TB vaccine, and bugs which stop malaria spreading
  • How R&D for neglected diseases works – how much does the world spend on it? How do drugs for neglected diseases go from design to distribution?
  • No-brainer policy ideas for speeding up global health R&D
  • Comparing health R&D to public health interventions (like bed nets)
  • Comparing the social returns to frontier ('Progress Studies') R&D to global health R&D
  • Why is there no GiveWell-equivalent for global health R&D?
  • Won't AI do all the R&D for us soon?

You can get in touch through our website or on Twitter. Consider leaving us an honest review wherever you're listening to this – it's the best free way to support the show. Thanks for listening!

The Studies Show
Episode 48: Alcohol

The Studies Show

Play Episode Listen Later Sep 3, 2024 55:19


Okay, it's time to finally answer the question: is drinking booze good or bad? Is there really a "J-curve", such that it's bad to drink zero alcohol, good to drink a little, and then bad to drink any more than that? What exactly is the "safe level" of alcohol consumption, and why do the meta-analyses on this topic all seem to tell us entirely different things? In this episode of The Studies Show, Tom and Stuart get very badly intoxicated – with statistics.

We're sponsored by Works in Progress magazine. There's no better place online to find essays on the topic of "Progress Studies" – the new field that digs deep into the data on how scientific and technological advances were made in the past, and tries to learn the lessons for the future. Check them out at worksinprogress.co.

Show notes:
  • Media reports say alcohol is good! Oh no wait, it's bad. Oh, sorry, it's actually good! No, wait, actually bad. And so on, ad infinitum
  • The three conflicting meta-analyses:
    • 2018 in The Lancet ("no safe level")
    • 2022 in The Lancet (the J-curve returns)
    • 2023 in JAMA Network Open (using "occasional drinkers" as the comparison)
  • Some of the press coverage about the J-curve age differences
  • David Spiegelhalter's piece comparing the two Lancet meta-analyses
  • Tom's piece on the idea of "safe drinking"

Credits: The Studies Show is produced by Julian Mayers at Yada Yada Productions. We're very grateful to Sir David Spiegelhalter for talking to us about this episode (as ever, any errors are ours alone).

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit www.thestudiesshowpod.com/subscribe

Upstream
E68: Tyler Cowen on Talent, the Importance of Stamina, and Predicting Success

Play Episode Listen Later Jul 19, 2024 56:37


This week on Upstream, we're releasing a fascinating discussion with economist, professor, and bestselling author Tyler Cowen about how to find talented people. This was recorded in 2022 around the launch of his book 'Talent: How to Identify Energizers, Creatives, and Winners Around the World' co-authored with Daniel Gross. Tyler and Erik discuss strategies for assessing raw talent, recognizing late bloomers, and fostering an environment conducive to high achievers. They also cover the importance of understanding founder compatibility, building strong peer groups, and the role of mentorship in talent development.

Effective Altruism Forum Podcast
“PhD on Moral Progress - Bibliography Review” by Rafael Ruiz

Effective Altruism Forum Podcast

Play Episode Listen Later Dec 10, 2023 80:39


Epistemic Status: I've researched this broad topic for a couple of years. I've read 30+ books and 100+ articles on the topic so far (I'm not really keeping count). I've also read many other works in the related areas of normative moral philosophy, moral psychology, moral epistemology, moral methodology, and metaethics, since it's basically my area of specialization within philosophy. This project will be my PhD thesis. However, I still have 3 years of the PhD to go, so a substantial amount of my opinions on the matter are subject to change.

Disclaimer: I have received some funding as a Forethought Foundation Fellow in support of my PhD research. But all the opinions expressed here are my own.

Index: Part I - Bibliography Review. Part II - Preliminary Takes and Opinions (I'm writing it, coming very soon!). More parts to be published later on.

Introduction: Hi everyone, this [...]

Outline:
(00:51) Index
(01:05) Introduction
(03:55) Guiding Questions
(08:33) Who has a good Personal Fit for becoming a Moral Progress researcher?
(15:05) Bibliography Review
(15:32) TL;DR / Recommended Reading Order
(17:05) Amazing books (5/5 ⭐⭐⭐⭐⭐ - Read them and take notes)
(17:15) Allen Buchanan and Russell Powell - The Evolution of Moral Progress: A Biocultural Theory (2018) - Genre: Moral Philosophy - No Audiobook
(19:32) Steven Pinker - The Better Angels of Our Nature: The Decline of Violence in History and Its Causes (2011) - Genre: Historical Trends - Audiobook Available
(21:11) Hanno Sauer - Moral Teleology: A Theory of Progress (2023) - Genre: Moral Philosophy - No Audiobook
(22:07) Oded Galor - The Journey of Humanity (2020) - Genre: Historical Trends - Audiobook Available
(23:02) Joseph Henrich - The Secret of Our Success (2016) - Genre: Cultural Evolution, Pre-History - Audiobook Available
(23:59) Joseph Henrich - The WEIRDest People in the World: How the West Became Psychologically Peculiar and Particularly Prosperous (2020) - Genre: Cultural Evolution, Historical Trends since the 1300s - Audiobook Available
(26:51) Great books (4/5 ⭐⭐⭐⭐ - Read them)
(26:59) Victor Kumar and Richmond Campbell - A Better Ape: The Evolution of the Moral Mind and How it Made Us Human (2022) - Genre: Moral Psychology, Moral Philosophy - Audiobook Available
(27:40) Philip Kitcher - Moral Progress (2021) - Genre: Moral Philosophy, Social Movements - No Audiobook
(30:22) Hans Rosling - Factfulness: Ten Reasons We're Wrong About the World and Why Things Are Better Than You Think (2018) - Genre: Post-Industrial Historical Trends - Audiobook Available
(31:10) Michael Tomasello - Becoming Human: A Theory of Ontogeny (2018) - Genre: Cognitive Human Development - No Audiobook
(32:03) Jose Antonio Marina - Biography of Inhumanity (2021) - Genre: Moral Values, Cultural Evolution - Audiobook in Spanish only
(32:32) Kim Sterelny - The Evolved Apprentice: How Evolution Made Humans Unique (2009) - Genre: Human Pre-History - Audiobook Available
(33:25) Jonathan Haidt - The Righteous Mind (2011) - Genre: Political Psychology - Audiobook Available
(34:52) Okay books (3/5 ⭐⭐⭐ - Skim them)
(34:59) Peter Singer - The Expanding Circle: Ethics and Sociobiology (1979 [2011]) - Genre: Moral Philosophy - No Audiobook
(36:19) Frans de Waal - Primates and Philosophers: How Morality Evolved (2006) - Genre: Ape Proto-Morality - No Audiobook
(37:00) Robert Wright - Nonzero: The Logic of Human Destiny (2000) - Genre: Cultural Evolution - Audiobook Available
(37:51) Joshua Greene - Moral Tribes: Emotion, Reason, and the Gap Between Us and Them (2013) - Genre: Moral Psychology - Audiobook Available
(38:48) Derek Parfit - On What Matters (2011) (just the section on the Triple Theory) - Genre: Moral Philosophy - No Audiobook
(39:28) Steven Pinker - Enlightenment Now: The Case for Reason, Science, Humanism, and Progress (2018) - Genre: Social Values / Enlightenment Values - Audiobook Available
(40:15) Benedict Anderson - Imagined Communities: Reflections on the Origins and Spread of Nationalism (1983) - Genre: Modernity - No Audiobook
(41:26) William MacAskill - Moral Uncertainty (2020) - Genre: Moral Philosophy - No Audiobook
(42:01) Daniel Dennett - Darwin's Dangerous Idea: Evolution and the Meanings of Life (1995) - Genre: Evolution - Audiobook Available
(42:52) Ingmar Persson and Julian Savulescu - Unfit for the Future: The Need for Moral Enhancement (2012) - Genre: Transhumanism, Human Nature - No Audiobook
(43:25) Isaiah Berlin - The Roots of Romanticism (1965) - Genre: Romantic Values, Nationalism - No Audiobook
(44:06) Mediocre books (2/5 ⭐⭐ - Skip to the relevant sections)
(44:13) Kwame Anthony Appiah - The Honor Code: How Moral Revolutions Happen (2010) - Genre: Moral Philosophy, Social Movements - Audiobook Available
(46:13) Steven Pinker - The Blank Slate (2000) - Genre: General Psychology - Audiobook Available
(47:10) Cecilia Heyes - Cognitive Gadgets: The Cultural Evolution of Thinking (2018) - Genre: Cultural Evolution, Psychology - Audiobook Available
(48:11) Cass Sunstein - How Change Happens (2019) - Genre: Social Change, Policy - Audiobook Available
(48:44) Angus Deaton - The Great Escape: Health, Wealth, and the Origins of Inequality (2013) - Genre: Trends in Global Poverty, Health - Audiobook Available
(49:09) Johan Norberg - Progress: Ten Reasons to Look Forward to the Future (2016) - Genre: Post-Industrial Historical Trends - Audiobook Available
(49:39) David Livingstone Smith - On Inhumanity: Dehumanization and How to Resist It (2020) - Audiobook Available
(50:18) Bad books (1/5 ⭐ - Skip)
(50:23) Michael Shermer - The Moral Arc: How Science Makes Us Better People (2015) - Genre: Enlightenment Values - Audiobook Available
(50:51) Michele Moody-Adams - Making Space for Justice (2023) - Genre: Social Movements, Moral Philosophy - Audiobook Available
(51:21) Thomas Piketty - A Brief History of Equality (2021) - Genre: Historical Trends - Audiobook Available
(51:44) Article collection
(52:08) Worthwhile articles (Read them)
(52:55) Alright ones (Skim them)
(01:03:29) Bad ones (Skip them)
(01:03:55) Haven't read them yet, or don't remember enough to classify them
(01:05:31) Books I haven't read yet, and my reasoning for why I want to read them
(01:05:37) Important books or articles I haven't read yet
(01:07:13) Books or articles I haven't read yet; I might read them, but I consider them less directly relevant or less pressing
(01:09:56) Minor readings I might do when I have free time (e.g. over the summer, just to corroborate if I'm missing anything important in my own work)
(01:10:58) Potentially interesting extensions, but probably beyond the scope of my work
(01:13:13) EA work on Moral Progress and related topics
(01:13:29) Moral Circle Expansion
(01:15:12) Economic Growth and Moral Progress
(01:15:31) Progress Studies
(01:16:22) Social and Intellectual Movements
(01:16:58) Historical Processes
(01:17:16) Cultural Evolution and Value Drift
(01:18:37) Longtermist Institutional Reform
(01:19:17) Conclusion
(01:19:46) Acknowledgements
(01:20:05) Contact Information

First published: December 10th, 2023. Source: https://forum.effectivealtruism.org/posts/YC3Mvw2xNtpKxR5sK/phd-on-moral-progress-bibliography-review

Narrated by TYPE III AUDIO.

Luminary
Jason Crawford on progress and the history of technology 

Luminary

Play Episode Listen Later Oct 9, 2023 62:25


Jason Crawford is the founder of Roots of Progress and a prolific writer on all things technology and progress. Jason […]

Slate Star Codex Podcast
Bride Of Bay Area House Party

Slate Star Codex Podcast

Play Episode Listen Later Aug 20, 2023 16:13


[previously in series: 1, 2, 3] You spent the evening agonizing over which Bay Area House Party to attend. The YIMBY parties are always too crowded. VC parties were a low-interest-rate phenomenon. You've heard too many rumors of consent violations at the e/acc parties - they don't know when to stop. And last time you went to a crypto bro party, you didn't even have anything to drink, and somehow you still woke up the next morning lying in a gutter, minus your wallet and clothes. You finally decide on a Progress Studies party - the last one was kind of dull, but you hear they're getting better. https://astralcodexten.substack.com/p/bride-of-bay-area-house-party

Luminary
Matt Clancy on innovation, policy, and Progress Studies 

Luminary

Play Episode Listen Later Mar 13, 2023 63:08


Matt Clancy is a research fellow at Open Philanthropy and a senior fellow at The Institute for Progress, a think […]

The Foresight Institute Podcast
Jason Crawford, Roots of Progress | Why We Need A New Philosophy of Progress

The Foresight Institute Podcast

Play Episode Listen Later Oct 6, 2022 60:42


What happened to the idea of progress? How do we regain our sense of agency? And how do we move forward, in the 21st century and beyond?

Jason Crawford: I am the founder of The Roots of Progress, a nonprofit dedicated to establishing a new philosophy of progress for the 21st century. I write and speak about the history and philosophy of progress, especially in technology and industry. I am also the creator of Progress Studies for Young Scholars, an online learning program about the history of technology for high schoolers, and a part-time technical consultant and adviser to Our World in Data.

Session summary: Jason Crawford | A New Philosophy of Progress (YouTube)

The Foresight Institute is a research organization and non-profit that supports the beneficial development of high-impact technologies. Since our founding in 1987 on a vision of guiding powerful technologies, we have continued to evolve into a many-armed organization that focuses on several fields of science and technology that are too ambitious for legacy institutions to support.

Allison Duettmann is the president and CEO of Foresight Institute. She directs the Intelligent Cooperation, Molecular Machines, Biotech & Health Extension, Neurotech, and Space Programs, Fellowships, Prizes, and Tech Trees, and shares this work with the public. She founded Existentialhope.com, co-edited Superintelligence: Coordination & Strategy, co-authored Gaming the Future, and co-initiated The Longevity Prize.

Apply to Foresight's virtual salons and in-person workshops here! We are entirely funded by your donations. If you enjoy what we do, please consider donating through our donation page. Visit our website for more content, or join us on Twitter, Facebook, and LinkedIn.

POD OF JAKE
#113 - JASON CRAWFORD

POD OF JAKE

Play Episode Listen Later Sep 1, 2022 45:13


Jason is the Founder & President of The Roots of Progress, a nonprofit dedicated to establishing a new philosophy of progress for the 21st century. He is also an adviser to Our World in Data and the creator of Progress Studies for Young Scholars, an online learning program about the history of technology for high schoolers. Jason writes and speaks about the history and philosophy of progress, especially in technology and industry. Follow him on Twitter @jasoncrawford.

[2:18] - How Jason's evolving interests have influenced his understanding of human progress
[10:53] - Jason's experience "dropping out" of high school and choosing to learn on his own
[15:22] - Why Jason chose to transition from building tech startups to studying human progress
[18:35] - Introducing The Roots of Progress
[24:07] - The kind of CEO Jason is looking for to lead The Roots of Progress foundation and its fellowship program
[28:05] - Writers and models that have inspired and influenced Jason's studies
[32:32] - Humanism vs. Environmentalism
[39:44] - Why Jason views the harnessing of electricity as one of the most important drivers of human progress in the last 150 years

Support the show by checking out my sponsors: Join Levels and get personalized insights to learn about your metabolic health. Go to https://levels.link/jake.

https://homeofjake.com

The Nonlinear Library
EA - Long-Term Future Fund: December 2021 grant recommendations by abergal

The Nonlinear Library

Play Episode Listen Later Aug 18, 2022 22:51


Welcome to The Nonlinear Library, where we use text-to-speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Long-Term Future Fund: December 2021 grant recommendations, published by abergal on August 18, 2022 on The Effective Altruism Forum.

Introduction: The Long-Term Future Fund made the following grants as part of its 2021 Q4 grant cycle (grants paid out sometime between August and December 2021):
  • Total funding distributed: $2,081,577
  • Number of grantees: 34
  • Acceptance rate (excluding desk rejections): 54%
  • Payout date: July - December 2021
  • Report authors: Asya Bergal (Chair), Oliver Habryka, Adam Gleave, Evan Hubinger

2 of our grantees requested that we not include public reports for their grants. (You can read our policy on public reporting here.) We also referred 2 grants, totalling $110,000, to private funders, and approved 3 grants, totalling $102,000, that were later withdrawn by grantees. If you're interested in getting funding from the Long-Term Future Fund, apply here. (Note: The initial sections of this post were written by me, Asya Bergal.)

Other updates: Our grant volume and overall giving increased significantly in 2021 (and in 2022 – to be featured in a later payout report). In the second half of 2021, we applied for funding from larger institutional funders to make sure we could make all the grants that we thought were above the bar for longtermist spending. We received two large grants at the end of 2021:
  • $1,417,000 from the Survival and Flourishing Fund's 2021-H2 S-process round
  • $2,583,000 from Open Philanthropy

Going forward, my guess is that donations from smaller funders will be insufficient to support our grantmaking, and we'll mainly be relying on larger funders. More grants and limited fund manager time mean that the write-ups in this report are shorter than our write-ups have been traditionally. I think communicating publicly about our decision-making process continues to be valuable for the overall ecosystem, so in future reports, we're likely to continue writing short one-sentence summaries for most of our grants, and more for larger grants or grants that we think are particularly interesting.

Highlights: Here are some of the public grants from this round that I thought looked most exciting ex ante:
  • $50,000 to support John Wentworth's AI alignment research. We've written about John Wentworth's work in the past here. (Note: We recommended this grant to a private funder, rather than funding it through LTFF donations.)
  • $18,000 to support Nicholas Whitaker doing blogging and movement building at the intersection of EA / longtermism and Progress Studies. The Progress Studies community is adjacent to the longtermism community, and is one of a small number of communities thinking carefully about the long-term future. I think having more connections between the two is likely to be good both from an epistemic and a talent pipeline perspective. Nick had strong references and seemed well-positioned to do this work, as the co-founder and editor of Works in Progress magazine.
  • $60,000 to support Peter Hartree pursuing independent study, plus a few "special projects". Peter has done good work for 80K for several years, received very strong references, and has an impressive history of independent projects, including Inbox When Ready.

Grant Recipients: In addition to the grants described below, 2 grants have been excluded from this report at the request of the applicants. Note: Some of the grants below include detailed descriptions of our grantees. Public reports are optional for our grantees, and we run all of our payout reports by grantees before publishing them. We think carefully about what information to include to maximize transparency while respecting grantees' preferences. We encourage anyone who thinks they could use funding to positively influence the long-term trajectory of humanity to apply for funding.

Grants evaluated by Evan Hubinger: EA Switzerland/PIB...

The Nonlinear Library
EA - Inequality is a problem for EA and economic growth by karthik-t

The Nonlinear Library

Aug 8, 2022 · 13:03


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Inequality is a problem for EA and economic growth, published by karthik-t on August 8, 2022 on The Effective Altruism Forum. Recently, EAs have considered economic growth as a potential way to improve overall wellbeing and also help the worst-off people. The recently-influential Progress Studies movement focuses on economic growth as the most important way to improve people's lives. In an essay that won the EA Forum Decade Review, Hauke Hillebrandt and John Halstead argued that we should focus on promoting economic growth in developing countries as a better alternative to targeting the extreme poor with health programs or cash transfers. In contrast, last month, Open Philanthropy published a report on the social returns to productivity growth, concluding that the cost-effectiveness of R&D spending is only 45X (45% as good as cash transfers to the poor, 4.5% of the bar for OP funding). This essay quantifies a common objection to economic growth as the best way to improve wellbeing: the objection that growth is unequally distributed. Inequality has been shockingly neglected by EAs, to the point that I literally had to create the inequality tag for posting this essay. This is probably because EAs care about maximizing welfare, not about reducing inequality: but I argue that inequality reduces the welfare gains from economic growth, so inequality is your problem too. To quantify this argument, I build on Open Philanthropy's framework for modelling the cost-effectiveness of growth. I extend this framework to account for inequality in two ways: I use an empirically grounded (isoelastic) utility function, rather than the more commonly used (logarithmic) utility function that overvalues consumption growth for well-off people. This change reduces the social value of economic growth by 90%. 
I use data on inequality of income growth, and show that adjusting for this inequality reduces the social value of economic growth by 36%, independently of the above change. In short, inequality is a serious problem for people who support promoting growth as a cost-effective way to improve the world. Thinking about inequality should make us favor more conventional global health and development interventions, which target the extreme poor.

Why inequality matters, even to utilitarians

Inequality is usually framed as a concern for egalitarian-minded people. But if you want to maximize utility, you also have to care about inequality, because of the simple fact of diminishing marginal utility. Inequality means that more income is accruing to people who don't derive as much utility from that income. Consider a toy example: There is an economy with two agents, Alice and Bob. Alice and Bob each have the logarithmic utility function u(c) = ln(c). The social utility function is the sum of their utilities, U(c_A, c_B) = ln(c_A) + ln(c_B). Note that this social utility function is completely neutral to inequality: it does not place any inherent weight on Alice and Bob having similar incomes, or penalize deviations from that. Initial GDP is $100, split unevenly: Alice has an income of $80 and Bob has an income of $20. Now consider two scenarios of economic growth:

Scenario 1: GDP grows by 10% ($10). Alice and Bob split this surplus evenly, so Alice gets $5 and Bob gets $5. The change in social welfare is ΔU = [ln(85) + ln(25)] − [ln(80) + ln(20)] = ln(85/80) + ln(25/20) ≈ 0.28, so utility increases by 0.28 log units.

Scenario 2: GDP grows by 10% ($10). Alice and Bob split this surplus unevenly, but proportional to their income, so that Alice gets $7 and Bob gets $3. The change in social welfare would be ΔU = ln(88/80) + ln(22/20) ≈ 0.19, so utility increases by 0.19 log units, which is a smaller increase than in scenario 1. What's going on here? 
Even though aggregate income growth is the same in both scenarios, in the second, Alice get...
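The Alice-and-Bob arithmetic above can be checked in a few lines of Python. This is a sketch of the post's own toy numbers, not code from the post; the CRRA (isoelastic) generalization at the end is an added illustration of the author's point that a more steeply diminishing utility function penalizes unequal growth even more than the log case does.

```python
import math

def utility(c, eta=1.0):
    """Isoelastic (CRRA) utility; eta = 1 gives the logarithmic case
    u(c) = ln(c) used in the Alice-and-Bob example."""
    return math.log(c) if eta == 1.0 else c ** (1 - eta) / (1 - eta)

def social_welfare(incomes, eta=1.0):
    """Inequality-neutral utilitarian welfare: a plain sum of utilities."""
    return sum(utility(c, eta) for c in incomes)

base = social_welfare([80, 20])  # Alice earns $80, Bob earns $20

# Scenario 1: the $10 of growth is split evenly ($5 each).
even = social_welfare([85, 25]) - base

# Scenario 2: the $10 is split in proportion to income ($7 / $3).
proportional = social_welfare([88, 22]) - base

print(round(even, 2))          # 0.28 log units of welfare gained
print(round(proportional, 2))  # 0.19 -- same growth, less welfare

# With more steeply diminishing marginal utility (eta > 1), the
# proportional split captures an even smaller share of the even-split gain.
base15 = social_welfare([80, 20], eta=1.5)
even15 = social_welfare([85, 25], eta=1.5) - base15
prop15 = social_welfare([88, 22], eta=1.5) - base15
print(prop15 / even15 < proportional / even)  # True
```

Under log utility the proportional split delivers only about two-thirds of the welfare of the even split, which is the post's point: who receives the growth matters even to an inequality-neutral utilitarian.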

The Nonlinear Library
EA - Report on Social Returns to Productivity Growth by Tom Davidson

The Nonlinear Library

Jul 16, 2022 · 4:13


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Report on Social Returns to Productivity Growth, published by Tom Davidson on July 15, 2022 on The Effective Altruism Forum. Historically, economic growth has had huge social benefits, lifting billions out of poverty and improving health outcomes around the world. This leads some to argue that accelerating economic growth, or at least productivity growth, should be a major philanthropic and social priority going forward. I've written a report in which I evaluate this view in order to inform Open Philanthropy's Global Health and Wellbeing (GHW) grantmaking. Specifically, I use a relatively simple model to estimate the social returns to directly funding research and development (R&D). I focus on R&D spending because it seems like a particularly promising way to accelerate productivity growth, but I think broadly similar conclusions would apply to other innovative activities. My estimate, which draws heavily on the methodology of Jones and Summers (2020), asks two primary questions: How much would a little bit of extra R&D today increase people's incomes into the future, holding fixed the amount of R&D conducted at later times? How much welfare is produced by this increase in income? In brief, I find that: The social returns to marginal R&D are high, but typically not as high as the returns in other areas we're interested in. Measured in our units of impact (where “1x” is giving cash to someone earning $50k/year) I estimate that the cost-effectiveness of funding R&D is 45x. This is ~4% as impactful as the (roughly 1,000x) GHW bar for funding. Put another way, I estimate that $20 billion to “average” R&D has the same welfare benefit as increasing the incomes of 180 million people by 10% each for one year. That said, the best R&D projects might have much higher returns. 
So could projects aimed at increasing the amount of R&D (for example, improving science policy). This estimate is very rough, and I could readily imagine it being off by a factor of 2-3 in either direction, even before accounting for the limitations below. Returns to R&D were plausibly much higher in the past. This is because R&D was much more neglected, and because of feedback loops where R&D increased the amount of R&D occurring at later times. My estimate has many important limitations. For example, it omits potential downsides to R&D (e.g. increasing global catastrophic risks), and it focuses on a specific scenario in which historical rates of return to R&D continue to apply even as population growth stagnates. Alternative scenarios might change the bottom line. For instance, R&D today might speed up the development of some future technology that drastically accelerates R&D progress. This would significantly increase the returns to R&D, but in my view would also strengthen the case for Open Phil to focus on reducing risks from that technology rather than accelerating its development. Overall, the model implies that the best R&D-related projects might be above our GHW bar, but it also leaves us relatively skeptical of arguments that accelerating innovation should be the primary social priority going forward. In the full report, I also discuss:
- How alternative scenarios might affect social returns to R&D.
- What these returns might have looked like in the year 1800.
- How my estimates compare to those of economics papers that use statistical techniques to estimate returns to R&D growth.
- The ways in which my current views differ from those of certain thinkers in the Progress Studies movement.

If environmental constraints require that we reduce our use of various natural resources, productivity growth can allow us to maintain our standards of living while using fewer of these scarce inputs. 
For example: in Stubborn Attachments, Tyler Cowen argues that the best way to improve the long-run future is to maximize ...

The Nonlinear Library
EA - Will faster economic growth make us happier? The relevance of the Easterlin Paradox to Progress Studies by MichaelPlant

The Nonlinear Library

Jun 24, 2022 · 33:41


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Will faster economic growth make us happier? The relevance of the Easterlin Paradox to Progress Studies, published by MichaelPlant on June 24, 2022 on The Effective Altruism Forum. This is a transcript of a talk I gave at the Moral Foundations of Progress Studies workshop at the University of Texas in March 2022. Or rather, it's a re-recorded and edited version of the talk that was subsequently produced for a Global Priorities Institute reading group on 'progress' and then updated in light of many helpful comments from that seminar. The original slide deck can be viewed here.

1. Introduction

As I understand it, Progress Studies is a nascent intellectual field which starts by asking the question, "Since we seem to have gotten a lot of progress over the last couple of hundred years, where did this come from, and what can we do to get more of it?" (Vox, 2021). Progress Studies has been popularised by academics such as Tyler Cowen and Steven Pinker. However, the Easterlin Paradox presents a real challenge to the claim that if we want more progress, we just need to improve the long-run growth rate - a view that Cowen argues for in his book Stubborn Attachments. This is a possible version of Progress Studies and the one I'm responding to. So what is the Easterlin Paradox? Quoting Easterlin and O'Connor (2022), the Easterlin Paradox states: "At a point in time, happiness varies directly with income both among and within nations, but over time the long-term growth rates of happiness and income are not significantly related." There is a common view that economic growth is going to make our lives better, but the Easterlin Paradox challenges this. 

What's paradoxical is that at a given point in time, richer people are more satisfied than poorer people and richer countries are more satisfied than poorer countries, but over the course of time, countries which grow faster don't seem to get happier faster. In other words, if I get richer, that will be good for me, but if we all get richer, that won't do anything for us collectively. While subjective wellbeing (self-reported happiness and life satisfaction) has gone up in previous decades, the challenge of the Easterlin Paradox is that countries which grow faster do not seem to be getting happier faster; growth per se seems unrelated to average subjective wellbeing. If the paradox holds, the result would be striking and significant. It would suggest that, if we want to increase average wellbeing, we must not rely on growth, but go back to the drawing board and see what really works. There's been quite a bit of debate over the nature and existence of the Paradox. The topic first emerged in 1974 when Richard Easterlin published a paper called Does Economic Growth Improve the Human Lot? It's been particularly challenged by Stevenson and Wolfers (2008), who claim the paradox is an illusion and growth is making us happier. However, after looking into this myself, I actually think that Easterlin has the better half of the debate and the paradox does pose a real challenge to the idea that economic growth alone will make us happier. My main purpose here is to explain what the Easterlin Paradox is and why - despite doubts - we need to take it seriously. My second purpose is to show that we can work out how to improve subjective wellbeing in society and make some tentative suggestions about this. However, this project is only starting to be taken seriously and there is lots more work to be done.

2. Evidence for the Paradox

So where does the Easterlin Paradox data come from? 
It's based on survey questions such as: Taking all things together, how would you say things are these days? Would you say you're very happy, pretty happy, or not too happy?[1] All things considered, how satisfied are you with your life as a whole nowadays, from one, diss...

Bretton Goods
Ep 40: Progress Studies

Bretton Goods

Jun 19, 2022 · 48:24


I spoke to Jason Crawford of The Roots of Progress about the new movement of Progress Studies. We talked about:
- Building a culture of economic progress
- Why are developed countries more averse to progress?
- Is there a tradeoff between economic progress and existential risk?
- What is the main constraint for the movement today?

--- Send in a voice message: https://anchor.fm/pradyumna-sp/message

The Nonlinear Library
EA - BBC Future longform article on Progress Studies (including connection between progress and risk) by Garrison

The Nonlinear Library

Jun 16, 2022 · 2:44


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: BBC Future longform article on Progress Studies (including connection between progress and risk), published by Garrison on June 16, 2022 on The Effective Altruism Forum. I wrote a 4k word feature on Progress Studies for BBC Future that just went up. (Tweet here). I explore:
- the stagnation hypothesis
- the origins of progress studies
- what PS believes
- will economic growth make us happier
- frontier vs. catch-up growth, and how PS's focus on the former reveals its biases
- progress and existential risk
- the future of the community

I think of PS as like EA circa ~2012. They have billionaire support and are quickly professionalizing. The community seems very interested in learning from EA and has responded to critiques that they are not focused enough on x-risk. But they are different in important ways. Compared to EA, I think PS is:
- more entrepreneurial, with a strong bias to action
- less academic
- more American, tech-y, and rooted in the Bay Area
- less defined - PS is not nearly as rigorous or philosophically oriented
- less demanding - PS doesn't really ask much, if anything, from its followers. I think this may be a huge force multiplier for the community, as it will better appeal to wealthy tech people, but probably makes any individual member less effective
- more neoliberal and libertarian (though, compared to these groups, PS is quicker to recognize market failures and call for govt intervention)
- more speciesist - the focus is just on human progress, and tech has clearly been net-bad for farmed animals IMO
- more growth-oriented

For an interesting look at what the intersection of PS and EA looks like, check out the Institute for Progress, a new think tank funded by PS co-founder Patrick Collison, Open Phil, SBF, and others. 

Timeline
- March 2017 - Roots of Progress blog starts
- July 2018 - Stripe Press launches
- July 2019 - "We Need a New Science of Progress" essay published in Atlantic
- August 2019 - PS Slack channel launches
- Oct 2019 - Roots of Progress becomes a nonprofit
- Aug 2020 - Works in Progress online magazine starts
- January 2022 - Institute for Progress think tank launches
- Feb 2022 - Works in Progress acquired by Stripe Press
- April 2022 - Progress Forum launches (sponsored by Roots of Progress)
- May 2022 - The Atlantic Progress series launches

Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

Solus Christus Reformed Baptist Church
Pilgrim's Progress Studies - Christian And Evangelist - Counseling the Awakened

Solus Christus Reformed Baptist Church

Jun 2, 2022 · 52:00


This class was taught in 2017. The first lessons had audio problems; this is a correction of those glitches, since software has improved a lot in five years. It was recorded before a live audience. The bibliographical comments may be unequaled in any other recorded study of Pilgrim's Progress; those additions took hours of research.

Thomas Sullivan on SermonAudio
Pilgrim's Progress Studies - Christian And Evangelist - Counseling the Awakened

Thomas Sullivan on SermonAudio

Jun 2, 2022 · 52:00


A new MP3 sermon from The Narrated Puritan is now available on SermonAudio with the following details: Title: Pilgrim's Progress Studies - Christian And Evangelist - Counseling the Awakened Subtitle: Pilgrim's Progress Speaker: Thomas Sullivan Broadcaster: The Narrated Puritan Event: Sunday School Date: 6/1/2022 Length: 52 min.

The Nonlinear Library
EA - Apply to help run EAGxIndia, Berkeley, Singapore and Future Forum! by Vaidehi Agarwalla

The Nonlinear Library

May 22, 2022 · 5:43


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Apply to help run EAGxIndia, Berkeley, Singapore and Future Forum!, published by Vaidehi Agarwalla on May 21, 2022 on The Effective Altruism Forum. Apply to join the EAGxIndia 2023, the Future Forum and the EAGxBerkeley 2023 teams. The application takes 15 minutes and the deadline is Tuesday May 31st at 11:59pm Pacific Time. We are also seeking (separate application) volunteers to run EAGxSingapore 2022. Apply now!

About the conferences

Future Forum 2022

The Future Forum is a 4-day event running in San Francisco from August 4-7th 2022. We are concentrating 250 promising individuals from across communities there, drawing from Effective Altruism, Emergent Ventures, Silicon Valley tech, Progress Studies, and more. We want to arm many more of the world's brightest minds with the tools they need to tackle global problems, be it funding or great mentors. We believe the Forum can become a highly impactful event in this space. In our early stages, the Future Forum and surrounding community accelerated 20+ highly promising individuals through e.g. the EA ecosystem and Emergent Ventures, and led to the creation of 10+ start-ups and projects. For the main event, our first round of speakers & supporters includes, among others, Holden Karnofsky, Anders Sandberg, Daniela Amodei, Ed Boyden, and Jason Crawford. We are looking for more ops support to make this event happen well, with the core of work in June and July 2022. We are also excited about Event Leads who have organized a mid-to-large-scale conference before.

EAGxIndia 2023

This is the first ever EAGx conference in India and will likely take place in early 2023. EAGxIndia will have about 200-300 attendees, primarily aimed at community members based in India, with up to 100 attendees for other groups, and also EA-adjacent Indian organizations that might contribute to and benefit from attending the conference. This event will be more introductory than other EAGx's, and the content will also be more targeted at addressing gaps that Indian members struggle with in EA, such as localized career advice through career workshops, cause-specific career lightning talks and more. CEA has approved funding for this event and it is led by Anubhuti Jain and Pratik Agarwal, who are currently working on community building in India.

EAGxBerkeley 2023

EAG SF is great for engaged community members but there are fewer ways for newer EAs to get involved with the EA community in the Bay Area, despite a fairly high awareness of EA in the area. I think an EAGx would be a good way to bring such people into the movement, and think it would have positive effects strengthening Bay Area university groups (such as UC Berkeley, Stanford and others). The application for EAGxBerkeley is run by me (Vaidehi Agarwalla) to help coordinate people interested in running this event, but I will not be taking on a leading role. Note: CEA has not yet approved this event but they have encouraged us to apply for funding. We will apply for funding once we have a team. 
These roles are a good fit for someone who:
- Wants to test their fit for event management, operations and community building
- Depending on team structure: has the capacity to take on 5-10 hours of flexible work per week leading up to the conference, and is able to work full- or near-full-time 2 weeks before the conference
- Could be a good fit for any of the following (specific needs will vary based on the location):
  - Strategy (Goals, Metrics)
  - Project management (Budgeting, Team lead, Evaluation)
  - Production (Venue, Catering, AV, Health & Safety)
  - Admissions (Application review, Stewardship)
  - Content (Speaker Selection & Liaison, Swapcard Manager)
  - Communications (Emails, Marketing, PR, Website)
- Is organized, reliable, and handles crisis situations well

You don't have to be super qualified, just capable and enthusiastic. Role l...

Hear This Idea
47. Jason Crawford on Progress Studies

Hear This Idea

May 12, 2022 · 110:02


Jason Crawford is the founder of The Roots of Progress, a nonprofit dedicated to establishing a new philosophy of progress for the 21st century. He writes and speaks about the history and philosophy of progress, especially in technology and industry. In our conversation we discuss — What progress is, and why it matters (maybe more than you think) How to think about resource constraints — why they are sometimes both real and surmountable The 'low-hanging fruit' explanation for stagnation, and prospects for speeding up innovation Tradeoffs between progress and (existential) safety Differences between the Progress Studies and Effective Altruism communities You can read more about the topics we cover in this episode's write-up: hearthisidea.com/episodes/crawford If you have any feedback or suggestions for future guests, feel free to get in touch through our website. Consider leaving us a review wherever you're listening to this — it's the best free way to support the show. If you want to support the show more directly, consider leaving a tip. Thanks for listening!

The Nonlinear Library
EA - "Long-Termism" vs. "Existential Risk" by Scott Alexander

The Nonlinear Library

Apr 6, 2022 · 5:55


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: "Long-Termism" vs. "Existential Risk", published by Scott Alexander on April 6, 2022 on The Effective Altruism Forum. The phrase "long-termism" is occupying an increasing share of EA community "branding". For example, the Long-Term Future Fund, the FTX Future Fund ("we support ambitious projects to improve humanity's long-term prospects"), and the impending launch of What We Owe The Future ("making the case for long-termism"). Will MacAskill describes long-termism as: I think this is an interesting philosophy, but I worry that in practical and branding situations it rarely adds value, and might subtract it.

In The Very Short Run, We're All Dead

AI alignment is a central example of a supposedly long-termist cause. But Ajeya Cotra's Biological Anchors report estimates a 10% chance of transformative AI by 2031, and a 50% chance by 2052. Others (eg Eliezer Yudkowsky) think it might happen even sooner. Let me rephrase this in a deliberately inflammatory way: if you're under ~50, unaligned AI might kill you and everyone you know. Not your great-great-(...)-great-grandchildren in the year 30,000 AD. Not even your children. You and everyone you know. As a pitch to get people to care about something, this is a pretty strong one. But right now, a lot of EA discussion about this goes through an argument that starts with "did you know you might want to assign your descendants in the year 30,000 AD exactly equal moral value to yourself? Did you know that maybe you should care about their problems exactly as much as you care about global warming and other problems happening today?" 

Regardless of whether these statements are true, or whether you could eventually convince someone of them, they're not the most efficient way to make people concerned about something which will also, in the short term, kill them and everyone they know. The same argument applies to other long-termist priorities, like biosecurity and nuclear weapons. Well-known ideas like "the hinge of history", "the most important century" and "the precipice" all point to the idea that existential risk is concentrated in the relatively near future - probably before 2100. The average biosecurity project being funded by Long-Term Future Fund or FTX Future Fund is aimed at preventing pandemics in the next 10 or 30 years. The average nuclear containment project is aimed at preventing nuclear wars in the next 10 to 30 years. One reason all of these projects are good is that they will prevent humanity from being wiped out, leading to a flourishing long-term future. But another reason they're good is that if there's a pandemic or nuclear war 10 or 30 years from now, it might kill you and everyone you know.

Does Long-Termism Ever Come Up With Different Conclusions Than Thoughtful Short-Termism?

I think yes, but pretty rarely, in ways that rarely affect real practice. Long-termism might be more willing to fund Progress Studies type projects that increase the rate of GDP growth by .01% per year in a way that compounds over many centuries. "Value change" type work - gradually shifting civilizational values to those more in line with human flourishing - might fall into this category too. In practice I rarely see long-termists working on these except when they have shorter-term effects. I think there's a sense that in the next 100 years, we'll either get a negative technological singularity which will end civilization, or a positive technological singularity which will solve all of our problems - or at least profoundly change the way we think about things like "GDP growth". 
Most long-termists I see are trying to shape the progress and values landscape up until that singularity, in the hopes of affecting which way the singularity goes - which puts them on the same page as thoughtful short-termists planning for the next 100 years. Long...

The Nonlinear Library
LW - Effective Ideas is announcing a $100,000 blog prize by fin

The Nonlinear Library

Mar 8, 2022 · 3:06


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Effective Ideas is announcing a $100,000 blog prize, published by fin on March 7, 2022 on LessWrong. From the EA Forum: We want to encourage a broader, public conversation around effective altruism and longtermism. To that end, we're offering up to 5 awards of $100,000 each for the best new and recent blogs. We're also making grants to promising young writers in the community. You can learn more about the project and get on our radar here. Why does this matter? Top-of-funnel community growth in EA is slower than it could and should be. At the same time, EA is relatively underrepresented in intellectual discourse compared to newer and smaller movements like Progress Studies. EA is producing a ton of thoughtful writing, but the majority takes place in internal discussions and private documents. For some discussions, this would be the only sensible way to have them. But having other discussions in public should help to raise the salience of EA in the broader discourse and bring more people in. It could also help spark new ideas. Further, we think EA needs more strong writers who can share key ideas in prestigious and popular venues — to persuade people to work on the most pressing issues of our time and to advance our thinking about them. We want to incentivize EAs to develop those skills. What's the plan? We want to jumpstart these ambitions with the Blog Prize. Over the course of 2022, we want to find the very best new blogs exploring themes related to effective altruism and longtermism. Up to 5 winning bloggers will receive a prize of $100,000 each. (We were inspired by Tyler Cowen's “Liberalism 2.0” blog prize). You can read more about our rules and guidelines on our website. The judging panel for the blog prize is me (Nick Whitaker), Leopold Aschenbrenner, Avital Balwit, and Fin Moorhouse. 
Most blogs will be considered via our self-nomination form, but please feel free to send us recommendations. What next? We hope this can be a first step towards a more ambitious effort to support an ecosystem of public-facing writing for EA and longtermism. We believe that EA blogs could soon make up a major part of the general blogosphere, finding audiences (and potential EAs) we wouldn't have found otherwise. Hopefully, we will also inspire the writing of foundational blog posts and posts that evolve into great projects. To help potential bloggers, we've compiled a “How to start a blog” guide. We're also offering mentorship on writing and editorial strategy to bloggers amid our private Slack community of bloggers. Self-nominate your blog on our website if you'd like to join. We have fostered a lively community for discussion, cross-promotion, and peer-to-peer feedback. Eventually, we hope to offer bloggers seminars from established EA writers. While this is our first big announcement, stay tuned for future plans and follow ongoing efforts from Effective Ideas to build and foster a written media ecosystem for EA. Watch this space. This project is supported by FTX Future Fund and Longview Philanthropy. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

The Nonlinear Library
EA - We're announcing a $100,000 blog prize by Nick Whitaker

The Nonlinear Library

Play Episode Listen Later Mar 7, 2022 3:05


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: We're announcing a $100,000 blog prize, published by Nick Whitaker on March 7, 2022 on The Effective Altruism Forum.

We want to encourage a broader, public conversation around effective altruism and longtermism. To that end, we're offering up to 5 awards of $100,000 each for the best new and recent blogs. We're also making grants to promising young writers in the community. You can learn more about the project and get on our radar here.

Why does this matter? Top-of-funnel community growth in EA is slower than it could and should be. At the same time, EA is relatively underrepresented in intellectual discourse compared to newer and smaller movements like Progress Studies. EA is producing a ton of thoughtful writing, but the majority takes place in internal discussions and private documents. For some discussions, this would be the only sensible way to have them. But having other discussions in public should help to raise the salience of EA in the broader discourse and bring more people in. It could also help spark new ideas. Further, we think EA needs more strong writers who can share key ideas in prestigious and popular venues — to persuade people to work on the most pressing issues of our time and to advance our thinking about them. We want to incentivize EAs to develop those skills.

What's the plan? We want to jumpstart these ambitions with the Blog Prize. Over the course of 2022, we want to find the very best new blogs exploring themes related to effective altruism and longtermism. Up to 5 winning bloggers will receive a prize of $100,000 each. (We were inspired by Tyler Cowen's “Liberalism 2.0” blog prize.) You can read more about our rules and guidelines on our website. The judging panel for the blog prize is me (Nick Whitaker), Leopold Aschenbrenner, Avital Balwit, and Fin Moorhouse. Most blogs will be considered via our self-nomination form, but please feel free to send us recommendations.

What next? We hope this can be a first step towards a more ambitious effort to support an ecosystem of public-facing writing for EA and longtermism. We believe that EA blogs could soon make up a major part of the general blogosphere, finding audiences (and potential EAs) we wouldn't have found otherwise. Hopefully, we will also inspire the writing of foundational blog posts and posts that evolve into great projects. To help potential bloggers, we've compiled a “How to start a blog” guide. We're also offering mentorship on writing and editorial strategy to bloggers in our private Slack community of bloggers. Self-nominate your blog on our website if you'd like to join. We have fostered a lively community for discussion, cross-promotion, and peer-to-peer feedback. Eventually, we hope to offer bloggers seminars from established EA writers. While this is our first big announcement, stay tuned for future plans and follow ongoing efforts from Effective Ideas to build and foster a written media ecosystem for EA. Watch this space. This project is supported by FTX Future Fund and Longview Philanthropy.

Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

Austin Next
Research Speaks - Progress Studies with Jason Crawford, Founder and CEO of the Roots of Progress

Austin Next

Play Episode Listen Later Feb 8, 2022 37:57


For Austin to solidify its role as the next great innovation powerhouse, we must research what has come before and what trends are pushing us forward. One area of study that greatly affects our future is the exploration of the very nature of progress itself…what is it, what drives it, and how does it affect our region's future? Today we are talking with Jason Crawford, Founder and CEO of the Roots of Progress, a nonprofit dedicated to establishing a new philosophy of progress. How we view innovation and progress lays the foundation for...What's Next Austin?

Podcast Production Services by NCC Audio. Our music is “Tech Talk” by Kevin MacLeod. Licensed under Creative Commons 4.0 License

Outliers with Daniel Scrivner
20 Minute Playbook – Jason Crawford of Roots of Progress

Outliers with Daniel Scrivner

Play Episode Listen Later Jan 7, 2022 19:28


“I would say the single most powerful technique that I have found is when you start your day, rather than checking email or anything, start by pulling out a blank sheet of paper, whether that's literal paper or whether, like me, you use an electronic note-taking system, and just think on paper about your day. How's it going? What's up? What am I going to do today? What are my priorities and so forth?” – Jason Crawford

Jason Crawford (@jasoncrawford) is Founder and CEO of The Roots of Progress, a nonprofit dedicated to establishing a new philosophy of progress for the 21st century. He created Progress Studies for Young Scholars and is a technical consultant and adviser to Our World in Data. He previously co-founded Fieldbook and Kima Labs, and was an engineering manager at Flexport, Amazon and Groupon.

Show notes with links, quotes, and a transcript of the episode: https://www.danielscrivner.com/notes/roots-of-progress-jason-crawford-20mp-show-notes

Chapters:
- Research on the Gutenberg press
- Learning, planning the day, and improving sleep
- Recommended apps, including Bear, Readwise, and Anki
- Recommended books
- On success and gratitude

Sign up here for Outlier Debrief, our weekly newsletter that highlights the latest episode, expands on important business and investing concepts, and contains the best of what we read each week. Follow Outlier Academy on Twitter: https://twitter.com/outlieracademy. If you loved this episode, please share a quick review on Apple Podcasts. Learn more about your ad choices. Visit megaphone.fm/adchoices

Boundaryless Conversations Podcast
S3 Ep. 5 Jason Crawford – Narrative decentralization and the future of progress

Boundaryless Conversations Podcast

Play Episode Listen Later Jan 4, 2022 50:34


Jason Crawford is the founder of The Roots of Progress, where he writes and speaks about the history of technology and the philosophy of progress. He is also the creator of Progress Studies for Young Scholars, an online learning program for high schoolers; and a part-time adviser and technical consultant to Our World in Data, an Oxford-based non-profit for research and data on global development. Previously, he spent 18 years as a software engineer, engineering manager, and startup founder.

A full transcript of the episode can be found on our website: boundaryless.io/podcast/jason-crawford/

Key highlights from the conversation. We discussed:
> The impact of information technology on centralization and decentralization
> The emergence of a plurality of meanings of progress
> Removing excess capacity and what it means for supply chain disruptions
> The potential for cryptocurrencies to automate legal and financial actions
> Why the pace of progress is slowing down - and how to speed it up

To find out more about Jason's work:
> Website: rootsofprogress.org/
> LinkedIn: www.linkedin.com/in/jasonc
> Twitter: twitter.com/jasoncrawford

Other references and mentions:
> Naval Ravikant: twitter.com/naval

Find out more about the show and the research at Boundaryless at boundaryless.io/resources/podcast/

Thanks for the ad-hoc music to Liosound / Walter Mobilio. Find his portfolio here: boundaryless.io/podcast-music

Recorded on 23 November 2021.

The Nonlinear Library
EA - 13 Very Different Stances on AGI by Ozzie Gooen

The Nonlinear Library

Play Episode Listen Later Dec 28, 2021 5:38


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 13 Very Different Stances on AGI, published by Ozzie Gooen on December 27, 2021 on The Effective Altruism Forum.

Epistemic Status: Quickly written (~4 hours), uncertain. AGI policy isn't my field, don't take me to be an expert in it. This was originally posted to Facebook here, where it had some discussion. Also, see this earlier post too.

Over the last 10 years or so, I've talked to a bunch of different people about AGI, and have seen several more unique positions online. Theories that I've heard include:

1. AGI is a severe risk, but we have decent odds of it going well. Alignment work is probably good, AGI capabilities are probably bad. (Many longtermist AI people)
2. AGI seems like a huge deal, but there's really nothing we could do about it at this point. (Many non-longtermist EAs)
3. AGI will very likely kill us all. We should try to stop it, though very few interventions are viable, and it's a long shot. (Eliezer. See Facebook comments for more discussion)
4. AGI is amazing for mankind and we should basically build it as quickly as possible. There are basically no big risks. (Lots of AI developers)
5. It's important that Western countries build AI/AGI before China does, so we should rush it. (Select longtermists; I think Eric Schmidt)
6. It's important that we have lots of small and narrow AI companies, because these will help draw attention and talent from the big and general-purpose companies.
7. It's important that AI development be particularly open and transparent, in part so that there's less fear around it.
8. We need to develop AI quickly to promote global growth, without which the world might experience severe economic decline, the consequences of which will be massively violent. (Peter Thiel, maybe some Progress Studies people to a lesser extent)
9. We should mostly focus on making sure that AI does not increase discrimination or unfairness in society. (Lots of "Safe AI" researchers, often liberal)
10. AGI will definitely be fine because that's what God would want. We might as well make it very quickly.
11. AGI will kill us, but we shouldn't be worried, because whatever it is, it will be more morally important than us anyway. (Fairly fringe)
12. AGI is a meaningless concept, in part because intelligence is not a single unit. The entire concept doesn't make sense, so talking about it is useless.
13. There's basically no chance of transformative AI happening in the next 30-100 years. (Most of the world and governments, from what I can tell)

Naturally, smart people are actively working on advancing the majority of these. There are a ton of unilateralist and expensive actions being taken. One weird thing, to me, is just how intense some of these people are. Like, they choose one or two of these 13 theories and really go all-in on them. It feels a lot like a religious divide.

Some key reflections:
- A bunch of intelligent/powerful people care a lot about AGI. I expect that over time, many more will.
- There are several camps that strongly disagree with each other. I think these disagreements often aren't made explicit, so the situation is confusing.
- The related terminology and conceptual space are really messy. Some positions are strongly believed but barely articulated (see single tweets dismissing AGI concerns, for example).
- The disagreements are along several different dimensions, not one or two. It's not simply short vs. long timelines, or "Is AGI dangerous or not?".
- A lot of people seem really confident[1] in their views on this topic.
- If you were to eventually calculate the average Brier score of all of these opinions, it would be pretty bad.

Addendum: Possible next steps. The above 13 stances were written quickly and don't follow a neat structure. I don't mean for them to be definitive, I just want to use this post to highlight the issue. Some obvious projects include: Come up with better lists. Try to isolate t...

Outliers with Daniel Scrivner
IG – Progress Studies with Jason Crawford of Roots of Progress

Outliers with Daniel Scrivner

Play Episode Listen Later Dec 15, 2021 59:59


“We can pay it forward to future generations by making sure that progress continues and by making sure that future generations are living as well off compared to us today, as we are compared to the past. So let's have that ambition for the future.” – Jason Crawford

Jason Crawford (@jasoncrawford) is Founder and CEO of The Roots of Progress, a nonprofit dedicated to establishing a new philosophy of progress for the 21st century. He created Progress Studies for Young Scholars and is a technical consultant and adviser to Our World in Data. He previously co-founded Fieldbook and Kima Labs, and was an engineering manager at Flexport, Amazon and Groupon.

Show notes with links, quotes, and a transcript of the episode: https://www.danielscrivner.com/notes/progress-studies-roots-of-progress-jason-crawford-ig-show-notes/

Chapters:
- About Jason's work with The Roots of Progress
- Nurturing progress is a moral imperative
- On Louis Pasteur
- What is progress? What is technology?
- On starting The Roots of Progress
- The growth of progress studies
- Is progress really stagnating?
- How bureaucracy can slow progress
- The speed of innovation
- Advancements in biotech, artificial intelligence, energy technologies, and nanotech
- Appreciating the progress we see on a daily basis

Sign up here for Outlier Debrief, our weekly newsletter that highlights the latest episode, expands on important business and investing concepts, and contains the best of what we read each week. Follow Outlier Academy on Twitter: https://twitter.com/outlieracademy. If you loved this episode, please share a quick review on Apple Podcasts.

The Nonlinear Library: LessWrong Top Posts
Common knowledge about Leverage Research 1.0 by BayAreaHuman

The Nonlinear Library: LessWrong Top Posts

Play Episode Listen Later Dec 12, 2021 8:10


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Common knowledge about Leverage Research 1.0, published by BayAreaHuman on LessWrong.

I've spoken to people recently who were unaware of some basic facts about Leverage Research 1.0; facts that are more-or-less "common knowledge" among people who spent time socially adjacent to Leverage, and are not particularly secret or surprising in Leverage-adjacent circles, but aren't attested publicly in one place anywhere. Today, Geoff Anders and Leverage 2.0 are moving into the "Progress Studies" space, and seeking funding in this area (see: Geoff recently got a small grant from Emergent Ventures). This seems like an important time to contribute to common knowledge about Leverage 1.0. You might conclude that I'm trying to discredit people who were involved, but that's not my aim here. My friends who were involved in Leverage 1.0 are people who I respect greatly. Rather, I just keep being surprised that people haven't heard certain specific, more-or-less legible facts about the past, that seem well-known or obvious to me, and that I feel should be taken into account when evaluating Leverage as a player in the current landscape. I would like to create here a publicly-linkable document containing these statements.

Facts that are common knowledge among people I know:
- Members of Leverage 1.0 lived and worked in the same Leverage-run building, an apartment complex near Lake Merritt. (Living there was not required, but perhaps half the members did, and new members were particularly encouraged to.)
- Participation in the project involved secrecy / privacy / information-management agreements. People were asked to sign an agreement that prohibited publishing almost anything (for example, in one case someone I know starting a personal blog on unrelated topics without permission led to a stern reprimand).
- Geoff developed a therapy technique, "charting". He says he developed it based on his novel and complete theory of psychology, called "Connection Theory". In my estimation, "charting" is in the same rough family of psychotherapy techniques as Internal Family Systems, Coherence Therapy, Core Transformation, and similar. Like those techniques, it leads to shifts in clients' beliefs and moods. I know people from outside Leverage who did charting sessions with a "coach" from Paradigm Academy, and reported it helped them greatly. I've also heard people who did lots of charting within Leverage report that it led to dissociation and fragmentation, which they have found difficult to reverse.
- Members who were on payroll were expected to undergo charting/debugging sessions with a supervisory "trainer", and to "train" other members. The role of trainer is something like "manager + therapist": that is, both "is evaluating your job performance" and "is doing therapy on you".
- Another type of practice done at the organization, and offered to some people outside the organization, was "bodywork", which involved physical contact between the trainer and the trainee. "Bodywork" could in other contexts be a synonym for "massage", but that's not what's meant here; descriptions I heard of sessions sounded to me more like "energy work". People I've spoken to say it was reported to produce deeper and less legible change.
- Using psychological techniques to experiment on one another, and on the "sociology" of the group itself, was a main purpose of the group. It was understood among members that they were signing up to be guinea pigs for experiments in introspection, altering one's belief structure, and experimental group dynamics.
- The stated purpose of the group was to discover more theories of human behavior and civilization by "theorizing", while building power, and then literally take over US and/or global governance (the vibe was "take over the world"). The purpose of gaining global power was to lead to bett...

The Nonlinear Library: EA Forum Top Posts
Help me find the crux between EA/XR and Progress Studies by jasoncrawford

The Nonlinear Library: EA Forum Top Posts

Play Episode Listen Later Dec 11, 2021 5:04


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Help me find the crux between EA/XR and Progress Studies, published by jasoncrawford on The Effective Altruism Forum.

I'm trying to get to the crux of the differences between the progress studies (PS) and the EA / existential risk (XR) communities. I'd love input from you all on my questions below.

The road trip metaphor: Let me set up a metaphor to frame the issue. Picture all of humanity in a car, traveling down the highway of progress. Both PS and EA/XR agree that the trip is good, and that as long as we don't crash, faster would be better. But: XR thinks that the car is out of control and that we need a better grip on the steering wheel. We should not accelerate until we can steer better, and maybe we should even slow down in order to avoid crashing. PS thinks we're already slowing down, and so wants to put significant attention into re-accelerating. Sure, we probably need better steering too, but that's secondary. (See also @Max_Daniel's recent post)

My questions: Here are some things I don't really understand about the XR position (granted that I haven't read the literature on it extensively yet, but I have read a number of the foundational papers). (Edit for clarity: these questions are not proposed as cruxes. They are just questions I am unclear on, related to my attempt to find the crux.)

1. How does XR weigh costs and benefits? Is there any cost that is too high to pay, for any level of XR reduction? Are they willing to significantly increase global catastrophic risk—one notch down from XR in Bostrom's hierarchy—in order to decrease XR? I do get that impression. They seem to talk about any catastrophe less than full human extinction as, well, not that big a deal. For instance, suppose that if we accelerate progress, we can end poverty (by whatever standard) one century earlier than otherwise. In that case, failing to do so, in itself, should be considered a global catastrophic risk, or close to it. If you're willing to accept GCR in order to slightly reduce XR, then OK—but it feels to me that you've fallen for a Pascal's Mugging. Eliezer has specifically said that he doesn't accept Pascal's Mugging arguments in the x-risk context, and Holden Karnofsky has indicated the same. The only counterarguments I've seen conclude “so AI safety (or other specific x-risk) is still a worthy cause”—which I'm fine with. I don't see how you get to “so we shouldn't try to speed up technological progress.”

2. Does XR consider tech progress default-good or default-bad? My take is that tech progress is default good, but we should be watchful for bad consequences and address specific risks. I think it makes sense to pursue specific projects that might increase AI safety, gene safety, etc. I even think there are times when it makes sense to put a short-term moratorium on progress in an area in order to work out some safety issues—this has been done once or twice already in gene safety. When I talk to XR folks, I sometimes get the impression that they want to flip it around, and consider all tech progress to be bad unless we can make an XR-based case that it should go forward. That takes me back to point (1).

3. What would moral/social progress actually look like? This idea that it's more important to make progress in non-tech areas: epistemics, morality, coordination, insight, governance, whatever. I actually sort of agree with that, but I'm not sure at all that what I have in mind there corresponds to what EA/XR folks are thinking. Maybe this has been written up somewhere, and I haven't found it yet? Without understanding this, it comes across as if tech progress is on indefinite hold until we somehow become better people and thus have sufficiently reduced XR—although it's unclear how we could ever reduce it enough, because of (1).

4. What does XR think about the large numbers of people who do...

Thoughts in Between: exploring how technology collides with politics, culture and society
Jason Crawford: What is progress and how do we get more of it?

Thoughts in Between: exploring how technology collides with politics, culture and society

Play Episode Listen Later Oct 11, 2021 59:25


Jason Crawford is the founder of Roots of Progress, a nonprofit dedicated to establishing a new philosophy of progress for the 21st century. Jason is a prolific writer on the history of science and technology and is one of the leading figures in the Progress Studies community. In this conversation, we discuss what causes progress; why it's not universally popular; what the history of the bicycle tells us about why advances in technology sometimes take so long; why the future people imagined in the 1960s didn't happen; and much more.

-----------------

Thanks to Cofruition for consulting on and producing the show. You can learn more about Entrepreneur First at www.joinef.com and subscribe to my weekly newsletter at tib.matthewclifford.com

Ben Yeoh Chats
Matt Clancy on innovation, progress studies and remote work.

Ben Yeoh Chats

Play Episode Listen Later Jul 15, 2021 125:50


Matt Clancy is a progress fellow at Emergent Ventures. He teaches at Iowa State University and writes a Substack newsletter called New Things Under the Sun, which you should subscribe to if you are interested in anything innovation-related. Matt has also synthesised many of the emerging studies on remote working. Transcript and video links here.

We discuss whether progress has been stagnating, and the importance of moral and social progress as well as technological progress. Whether small teams or large teams are better for invention. How important agglomeration effects are, and how a declining agglomeration impact might make the case for remote work stronger. The role of innovation prizes and patents in incentivising innovation, and whether copyright is too long. Whether innovation agencies (eg ARPA) are the answer, and what Matt would do as an executive director of one. Differences between UK and US university systems and advice for young people. Matt's thinking on remote work.

Grey Mirror: MIT Media Lab’s Digital Currency Initiative on Technology, Society, and Ethics
#86 Solo: Cultural Progress Studies and Predictive Processing

Grey Mirror: MIT Media Lab’s Digital Currency Initiative on Technology, Society, and Ethics

Play Episode Listen Later Jun 3, 2021 59:52


I discuss: There Is A Level 5, Cultural Progress Studies, Predictive Processing, Internet Annotations. https://www.roote.co/ https://patreon.com/rhyslindmark https://twitter.com/RhysLindmark

Ben Yeoh Chats
Anton Howes on innovation history, the improving mindset and progress studies.

Ben Yeoh Chats

Play Episode Listen Later May 21, 2021 86:45


Anton Howes is an innovation historian and policy thinker; we have a fascinating, wide-ranging conversation on innovation. Transcript and video available here.

We discuss raising the prestige of innovators today, though we consider it easy to say and harder to enact. Anton argues for the benefits of a “great Exhibition” as a direct mechanism to inspire an “improving” mindset - the type of mindset that leads to innovation. Anton shares what he has discovered about how invention has happened in history, and whether stagnation has set in recently; either way, it might be good to send a signal on the importance of innovation. Why incremental innovation might be underrated, and why the process of innovation (ideas, iterations) is not publicised more. Anton discusses evidence that formal education has not been needed for historic inventors (an improving mindset being potentially more important) and whether there are more than enough innovation prizes currently. We have a strong section on problems with copyright, how rules around copyright might not be fit for purpose today, how to pronounce “gimcrack” - a useless invention - and why having more gimcracks might be a sign of healthy innovation. A fascinating walk through innovation history.

Anton Howes is an innovation historian and policy thinker. He's written a brilliant history of the RSA - the Royal Society for Arts, Manufactures and Commerce - arguably Britain's national improvement agency over the last 260 years - and is the RSA's Historian in Residence. I recommend you check out his book, Arts and Minds. He writes a Substack newsletter blog on innovation thinking that has won an award from Tyler Cowen's Emergent Ventures. He has a day job as head of innovation research at the Entrepreneurs Network think tank and is, in my mind, an all-round excellent thinker on innovation.

The Revolving Door
The Revolving Door #163 – The Torch of Progress – A Weekly Podcast on Progress Studies

The Revolving Door

Play Episode Listen Later Sep 27, 2020


Show Link: https://podnutz.com/category/the-revolving-door/
Discord Link: https://discord.gg/sbeUC9b
iTunes Link: https://podcasts.apple.com/us/podcast/podcast-review/id1507424341
RSS Link: http://feeds.feedburner.com/podnutz/podcastreview
Clothing Link: http://podnutz.com/clothing
Email:
Host: DoorToDoorGeek
$1 a month donation: http://patreon.com/podnutz
One Time Donation
Notes: The Torch of Progress – A Weekly Podcast on Progress Studies
https://feeds.soundcloud.com/playlists/soundcloud:playlists:1065404530/sounds.rss
https://soundcloud.com/thought-and-industry/sets/the-torch-of-progress-a-weekly
—–
Help Eric Arduini and family: https://www.gofundme.com/f/help-eric-arduini-and-family
—–
NEW – My Podcast OPML […]
