Scott and Daniel break down every month from now until the 2027 intelligence explosion. Scott Alexander is the author of the highly influential blogs Slate Star Codex and Astral Codex Ten. Daniel Kokotajlo resigned from OpenAI in 2024, rejecting a non-disparagement clause and risking millions in equity to speak out about AI safety. We discuss misaligned hive minds, Xi and Trump waking up, and automated Ilyas researching AI progress. I came in skeptical, but I learned a tremendous amount by bouncing my objections off of them. I highly recommend checking out their new scenario planning document, AI 2027. Watch on YouTube; listen on Apple Podcasts or Spotify. ---------- Sponsors: * WorkOS helps today's top AI companies get enterprise-ready. OpenAI, Cursor, Perplexity, Anthropic, and hundreds more use WorkOS to quickly integrate features required by enterprise buyers. To learn more about how you can make the leap to enterprise, visit workos.com * Jane Street likes to know what's going on inside the neural nets they use. They just released a black-box challenge for Dwarkesh listeners, and I had a blast trying it out. See if you have the skills to crack it at janestreet.com/dwarkesh * Scale's Data Foundry gives major AI labs access to high-quality data to fuel post-training, including advanced reasoning capabilities. If you're an AI researcher or engineer, learn about how Scale's Data Foundry and research lab, SEAL, can help you go beyond the current frontier at scale.com/dwarkesh To sponsor a future episode, visit dwarkesh.com/advertise. ---------- Timestamps: (00:00:00) - AI 2027 (00:06:56) - Forecasting 2025 and 2026 (00:14:41) - Why LLMs aren't making discoveries (00:24:33) - Debating intelligence explosion (00:49:45) - Can superintelligence actually transform science? (01:16:54) - Cultural evolution vs superintelligence (01:24:05) - Mid-2027 branch point (01:32:30) - Race with China (01:44:47) - Nationalization vs private anarchy (02:03:22) - Misalignment (02:14:52) - UBI, AI advisors, & human future (02:23:00) - Factory farming for digital minds (02:26:52) - Daniel leaving OpenAI (02:35:15) - Scott's blogging advice Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
Thanks to the 5,975 people who took the 2025 Astral Codex Ten survey. See the questions for the ACX survey; see the results from the ACX survey (click “see previous responses” on that page). I'll be publishing more complicated analyses over the course of the next year, hopefully starting later this month. If you want to scoop me, or investigate the data yourself, you can download the answers of the 5,500 people who agreed to have their responses shared publicly. Out of concern for anonymity, the public dataset will exclude or bin certain questions. If you want more complete information, email me and explain why, and I'll probably send it to you. You can download the public data here as an Excel or CSV file: http://slatestarcodex.com/Stuff/ACXPublic2025.xlsx http://slatestarcodex.com/Stuff/ACXPublic2025.csv Here are some of the answers I found most interesting: https://www.astralcodexten.com/p/acx-survey-results-2025
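If you'd rather explore the public file programmatically than in a spreadsheet, here is a minimal sketch of a first step. It assumes the pandas library (my choice, not anything prescribed by the survey post); it simply loads the public CSV linked above and inspects it.

```python
# Minimal sketch: load the public ACX 2025 survey data for your own analysis.
# Assumes pandas is installed; the URL is the public CSV file linked above.
import pandas as pd

df = pd.read_csv("http://slatestarcodex.com/Stuff/ACXPublic2025.csv")

print(df.shape)               # (public respondents, retained/binned questions)
print(list(df.columns)[:10])  # peek at the first few question columns
```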
Dr. Cruse (@predoctit) argues that Pete Hegseth, Trump's nominee for Secretary of Defense, is much less likely to get confirmed than current market prices indicate. Dr. Cruse and Pratik Chougule also discuss the universe of Republican senators who are willing to vote against Trump nominees. Timestamps 0:00: Pratik introduces episode 0:11: Thune whip count on Hegseth 8:05: Intro ends 10:06: Interview with Cruse begins 10:42: Trump nominees' confirmation prospects 11:24: Democratic Senators 12:11: Rubio 15:44: Most controversial nominees 16:27: Hegseth scandals 31:14: Factors in likelihood of confirmation 33:46: Republican Senators 46:41: Influence of Hegseth markets 46:56: Sexual harassment allegations Follow Star Spangled Gamblers on Twitter @ssgamblers Trade on Hegseth's nomination at Polymarket.com, the world's largest prediction market. https://polymarket.com/event/of-senate-votes-to-confirm-hegseth-as-secretary-of-defense?tid=1736804670342 https://polymarket.com/event/which-trump-picks-will-be-confirmed/will-pete-hegseth-be-confirmed-as-secretary-of-defense?tid=1736804692254 https://polymarket.com/event/who-will-be-trumps-defense-secretary/will-pete-hegseth-be-trumps-defense-secretary?tid=1736804733018 Join us for our first DC Forecasting & Prediction Markets meetup of the year! This will be a very casual meetup to meet and socialize with others interested in forecasting, prediction markets, political gambling, sports betting, or anything else relating to predicting the future. Location is TBD but you'll be notified when we've finalized a venue. Last-minute/onsite walk-in RSVPs here on this Partiful event page are welcome! Who are we? We are prediction market traders on Manifold (and other prediction markets like PredictIt, Kalshi, and Polymarket), forecasters (e.g. on Metaculus and Good Judgment Open), sports bettors (e.g. on FanDuel, DraftKings, and other sportsbooks), consumers of forecasting (or related) content (e.g. Star Spangled Gamblers, Nate Silver's Silver Bulletin, Scott Alexander's Astral Codex Ten), effective altruists, rationalists, and data scientists. Forecast on Manifold how many people will attend this month: https://manifold.markets/dglid/how-many-people-will-attend-a-forec-OzPZILyc5C?play=true Forecast on Manifold how many people will attend meetups this year: https://manifold.markets/dglid/how-many-attendees-will-there-be-at?play=true This meetup is hosted by the Forecasting Meetup Network. Help us grow the forecasting community to positively influence the future by supporting us with an upvote, comment, or pledge on Manifund: https://manifund.org/projects/forecasting-meetup-network---washington-dc-pilot-4-meetups Get notified whenever a new meetup is scheduled and learn more about the Forecasting Meetup Network here: https://bit.ly/forecastingmeetupnetwork Join our Discord to connect with others in the community between monthly meetups: https://discord.com/invite/hFn3yukSwv
With the enormous increase in the power of AI (specifically LLMs), people are using them for all sorts of things, hoping to find areas where they're better than, or at least cheaper than, humans. FiveThirtyNine (get it?) is one such attempt, and they claim that AI can do forecasting better than humans. Scott Alexander, of Astral Codex Ten, reviewed the service and concluded that they still have a long way to go. I have no doubt that this is the case today, but one can imagine that it will not always be. What then? My assertion would be that at the point when AI forecasting does “work” (should that ever happen), it will make the problems of superforecasting even worse. I. The problems of superforecasting What are the problems of superforecasting? ...
We had the pleasure of speaking with Mario Gibney, who decided years ago that he couldn't just wait for someone else to take action on AI safety. In 2022, Mario co-founded AI Governance and Safety (AIGS) Canada to push the national conversation forward. Through movement and policy advocacy, AIGS is helping Canada become a leader in AI governance and safety. I recommend reading their concise white papers to get a summary of the issues. We learn how Mario got into this line of work, what Canadians think about the state of AI Safety these days, and things to get excited about in the Toronto scene. I leave you with a message about how to deal with the emotional toll of AI doomerism. Thanks to my amazing producer Chad Clarke for being essential in putting this show together. All mistakes are mine. Artificial Intelligence Governance & Safety Canada aigs.ca LessWrong https://www.lesswrong.com/ Slate Star Codex https://slatestarcodex.com/ Astral Codex Ten https://www.astralcodexten.com/ 80,000 Hours https://80000hours.org/ Center for AI Safety https://www.safe.ai/ Future of Life Institute https://futureoflife.org/ EAGxToronto applications are open until 31 July at 11:59 pm Eastern - apply now! https://www.effectivealtruism.org/ea-global/events/eagxtoronto-2024 We're a PFG! Profit4good.org
3-Part Episode Part I: Pratik Chougule (@pjchougule), SSG Title Belt Champ Ben Freeman (@benwfreeman1), and Title Belt Challenger Alex Chan (@ianlazaran) debate whether Trump cares about qualifications in his VP decision or whether it will come down to politics. Part II: Doug Campbell (@tradeandmoney) analyzes how he won the 2023 Astral Codex Ten forecasting competition. Part III: Saul Munn explains how to organize the forecasting community 0:11: Pratik introduces VP segment 0:26: Pratik introduces Campbell segment 1:05: Pratik introduces Munn segment 4:09: VP segment begins 8:43: 2028 considerations 18:13: Campbell segment begins 22:55: Expertise and prediction 28:10: Interview with Munn begins 28:45: The importance of in-person events 29:00: Manifest's origins 30:46: The political gambling community 32:11: Transitioning from an online community 34:04: How to organize the forecasting community 36:05: University clubs 37:06: Reluctance to organize events 38:15: EA Global 40:59: Low bar to community-building Bet on who Trump will select as his running mate at Polymarket, the world's largest prediction market, at polymarket.com. SUPPORT US: Patreon: www.patreon.com/starspangledgamblers FOLLOW US ON SOCIAL: Twitter: www.twitter.com/ssgamblers
Many cities have regular Astral Codex Ten meetup groups. Twice a year, I try to advertise their upcoming meetups and make a bigger deal of it than usual so that irregular attendees can attend. This is one of those times. This year we have spring meetups planned in over eighty cities, from Tokyo, Japan to Seminyak, Indonesia. Thanks to all the organizers who responded to my request for details, and to Meetups Czar Skyler and the Less Wrong team for making this happen. You can find the list below, in the following order: Africa & Middle East Asia-Pacific (including Australia) Europe (including UK) North America & Central America South America There should very shortly be a map of these meetups on the LessWrong community page. https://www.astralcodexten.com/p/spring-meetups-everywhere-2024
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Pareto Best and the Curse of Doom, published by Screwtape on February 22, 2024 on LessWrong. I. Prerequisite reading: Being the (Pareto) Best in the World. A summary of Being the (Pareto) Best in the World: Being the world's best mathematician is hard. Being the world's best musician is hard. Being the world's best mathematician/musician is much easier, especially since there are multiple slots; an amazing mathematician who is also a competent musician, someone who is good at both, and a competent mathematician who is also an amazing musician can all find a niche. I like this concept, and have kept it in my back pocket ever since I read it. I have sometimes described myself as a software engineer who was competent at public speaking and project management. That particular overlapping skillset is, it turns out, fairly valuable. While I was attempting to become a better software engineer, I was also trying to add competence at corporate budgets and accounting to that skillset. These days I spend a lot of time talking to the kind of person who hangs out on LessWrong a lot or spends a lot of time going to Astral Codex Ten meetups. If ever I faced a problem that required a brilliant neuroscientist, or a gifted Haskell programmer, or a world leading expert in training honeybees, well, let's just say I know somebody. There are people out there who are exemplary at the thing they do. Sometimes they're not very good at other things though. While Being The (Pareto) Best in the World felt optimistic when I first read it, these days I regard it as a curse of doom upon the world, blighting otherwise promising areas of effort and endeavor. I look around at places where it feels like everyone is dropping the ball and see a blasted wasteland where nothing grows because nobody has the right combination of seemingly basic skills. II. Imagine a toy model where everyone has a hundred points to put into being good at things. (This is, to be clear, not just a toy model but an incorrect model. It's easy to look at your incoming university students and notice a strong inverse correlation between math and verbal SAT scores, forgetting that those get summed together during applications and anyone below a certain threshold probably has their application discarded. Still, let's use this model for the moment.) Leading talents in a field maybe put 75 points in their area. Why not 100? Because you need points in living your life. There's an archetype of the absent minded professor, someone who can understand a complex abstract subject but who shows up to give lectures having forgotten to put their shoes on or eat breakfast. Hitting 90 points in your field requires someone else to do a lot of the upkeep for you; many FAANG jobs provide food and other amenities, and I don't think it's entirely because it's a cheap perk. Politely, I know some FAANG engineers who I suspect would forget lunch and dinner if it was not conveniently provided for them. At sufficiently high levels of dedication, seemingly important related skills start to fall by the wayside. Many programmers are not good at documenting their code, writing or reading specifications, or estimating story points and timelines. Fiction authors vary wildly in their comfort with self-promotion, proofreading, and layout. That's what publishers and agents are for. 
There are a few indie musicians I enjoy whose mastery of sound mixing or recording technology is not the equal of their actual playing. You can spend 40 points on singing, 40 points on recording, and 20 points on living your life. At this point, you're giving up some noticeable quality somewhere. I'll arbitrarily draw a line at 50 points and say this is where so-called "professional" quality tends to hang out, the people you see do their thing and you think "man, they could make a livin...
Astral Codex Ten has a paid subscription option. You pay $10 (or $2.50 if you can't afford the regular price) per month, and get: extra articles (usually 1-2 per month), a Hidden Open Thread per week, access to the occasional Ask Me Anythings I do with subscribers, early access to some draft posts, and the warm glow of supporting the blog. https://www.astralcodexten.com/p/subscrive-drive-2024-free-unlocked
Brandon Hendrickson (creator of scienceisweird.com) says no one's ever asked him about the sabertooth tiger skull in his Zoom background - until now! Brandon's a teacher steeped in the ideas of Kieran Egan - a prolific educational theorist who believes the world is FASCINATING and that IMAGINATION is key to how we humans learn. We explore how Egan's approach could work for autodidact software engineers, offer untold book suggestions, and, of course, propose some ways that ChatGPT might be able to help us along the way. Shownotes: Science is WEIRD; Brandon's 2023 Astral Codex Ten book review contest-winning review of Kieran Egan's THE EDUCATED MIND; Kieran Egan (Wikipedia); A New History of Greek Mathematics - Reviel Netz; Die Hard water jug challenge; XKCD "someone is wrong on the internet"
“Science is political”. How could it not be? It's done by humans, whose political biases will influence not just the topics they choose to study but also how they study them. But does that mean it's fine for scientists to blatantly bring their politics into their work? Does that mean it's okay for scientific journals to endorse political candidates? In this slightly unusual episode of The Studies Show (which doesn't include very many actual studies), Tom and Stuart discuss the never-ending debate over where politics begins and ends in science, debate whether it's possible for science to be politics-free, and cover the recent story of the scientific journal editor fired for expressing a (pretty mild, all things considered) political opinion on Twitter. The Studies Show is brought to you by the i, the non-partisan UK daily newspaper for readers with open minds. For the best insights into British politics, as well as extensive interviews, lifestyle insights, and all the rest, consider subscribing to the paper. You can get a money-off deal on your digital subscription by following this special podcast link. The Studies Show is also sponsored by Works in Progress, the online magazine about science, technology, and human progress. Their newest November 2023 issue is packed with data-driven, deeply-researched articles on the history and future of the science and tech that shapes our world. It's all freely available right here at this link. Show notes: * Eisen's joke about a worm which caused a racism/sexism row * His fateful tweet about Hamas that eventually got him fired as editor of eLife * Coverage of his firing in Nature News; in Science * Nature endorses Biden in 2020 * Tom's article in Unherd about politicising science * Tom's article in Unherd about the importance of “decoupling” * Stuart's Substack article about how science is political - but that's a bad thing * Astral Codex Ten article about the arrogance of presuming it's not possible to be any more rational than you are right now * Study of how Nature's political endorsements affected people's trust in the journal * Stuart's article in the i on this study; summary in Politico * Nature's editorial response, arguing that they'll do endorsements anyway Credits: The Studies Show is produced by Julian Mayers at Yada Yada Productions. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit www.thestudiesshowpod.com/subscribe
Crosspost from Astral Codex Ten. I. Last month, Ben West of the Center for Effective Altruism hosted a debate among long-termists, forecasters, and x-risk activists about pausing AI. Everyone involved thought AI was dangerous and might even destroy the world, so you might expect a pause - maybe even a full stop - would be a no-brainer. It wasn't. Participants couldn't agree on the basics of what they meant by “pause”, whether it was possible, or whether it would make things better or worse. There was at least some agreement on what a successful pause would have to entail. Participating governments would ban “frontier AI models”, for example models using more training compute than GPT-4. Smaller models, or novel uses of new models, would be fine, or else face an FDA-like regulatory agency. States would enforce the ban against domestic companies by monitoring high-performance microchips; they would enforce it against non-participating governments by banning [...] The original text contained 4 footnotes which were omitted from this narration. --- First published: October 10th, 2023 Source: https://forum.effectivealtruism.org/posts/7WfMYzLfcTyDtD6Gn/pause-for-thought-the-ai-pause-debate Linkpost URL: https://www.astralcodexten.com/p/pause-for-thought-the-ai-pause-debate --- Narrated by TYPE III AUDIO.
Dan Romero and Erik Torenberg talk about their intellectual evolution, why San Francisco is back, and the ideas finding new momentum because of the thorny politics in tech the last few years. If you're looking for SOC 2, ISO 27001, GDPR or HIPAA compliance, head to Vanta for $1000 off: https://www.vanta.com/zen – SPONSORS: VANTA | NETSUITE Are you building a business? If you're looking for SOC 2, ISO 27001, GDPR or HIPAA compliance, head to Vanta. Achieving compliance can actually unlock major growth for your company and build customer loyalty. Vanta automates up to 90% of Compliance work, getting you audit-ready in weeks instead of months and saving 85% of associated costs. Moment of Zen listeners get $1000 off at: https://www.vanta.com/zen NetSuite has 25 years of providing financial software for all your business needs. More than 36,000 businesses have already upgraded to NetSuite by Oracle, gaining visibility and control over their financials, inventory, HR, eCommerce, and more. If you're looking for an ERP platform head to NetSuite: http://netsuite.com/zen and download your own customized KPI checklist. – RECOMMENDED PODCAST: The HR industry is at a crossroads. What will it take to construct the next generation of incredible businesses – and where can people leaders have the most business impact? Hosts Nolan Church and Kelli Dragovich have been through it all, the highs and the lows – IPOs, layoffs, executive turnover, board meetings, culture changes, and more. With a lineup of industry vets and experts, Nolan and Kelli break down the nitty-gritty details, trade offs, and dynamics of constructing high performing companies. Through unfiltered conversations that can only happen between seasoned practitioners, Kelli and Nolan dive deep into the kind of leadership-level strategy that often happens behind closed doors. Check out the first episode with the architect of Netflix's culture deck Patty McCord. https://link.chtbl.com/hrheretics – Sign up for our newsletter to receive the full shownotes: https://momentofzen.substack.com/ – SELECT LINKS: Dan's reference: the most interesting new city project in San Francisco: https://www.nytimes.com/2023/08/25/business/land-purchases-solano-county.html Dave Portnoy, founder of Barstool, going on the offensive against a Washington Post reporter writing a hit piece: https://deadline.com/2023/09/barstool-sports-owner-dave-portnoy-confronts-washington-post-reporter-1235554436/ Most important blog: Scott Alexander's https://slatestarcodex.com/ which is now Astral Codex Ten https://www.astralcodexten.com/ Curtis Yarvin: https://www.unqualified-reservations.org/, https://graymirror.substack.com/ – X / TWITTER: @dwr (Dan) @eriktorenberg (Erik Torenberg) @moz_podcast @TurpentineMedia – TIMESTAMPS: (00:00) Episode Preview (05:00) Why San Francisco's Network Effects, like Twitter, is futile to leave (Network Effects are a bitch) (13:40) Tech reclaiming a political voice (15:30) Sponsors: Vanta | NetSuite (17:40) Media losing ground after taking too many shots during the last administration (20:35) Mood affiliation & social justice policies in tech companies (23:05) In this economy? (30:41) Balaji Srinivasan (34:25) The Borg (39:10) Social Supply Chain (40:21) Curtis Yarvin (59:40) Bring your whole self to work (01:03:00) The clear pill: winning by losing (01:10:00) Good ideas, bad UI (01:16:35) Newspapers are a mood affiliation business (01:19:24) Right and Left wing, who loses in the long arc of history? 
(01:26:30) Dan's thinking around the next successful candidate (01:33:05) Threshold for changing your thinking
“He who sees all beings in his Self and his Self in all beings, he never suffers; because when he sees all creatures within his true Self, then jealousy, grief and hatred vanish.” Welcome back to another episode of Made You Think! In this episode, we're covering The Upanishads, a collection of ancient Indian texts which explore the philosophical and spiritual teachings of Hinduism. These texts delve into the nature of reality and the self while emphasizing the importance of meditation, self-realization, and the pursuit of knowledge to achieve enlightenment. We cover a wide range of topics including: Consciousness and the interconnectedness of all beings Parallels between The Upanishads and the philosophies of Plato How cities differ in their homelessness approach Rationalism vs. intuition Our thoughts on the Twitter/X rebrand And much more. Please enjoy, and make sure to follow Nat, Neil, and Adil on Twitter and share your thoughts on the episode. Links from the Episode: Mentioned in the Show: Project Hail Mary (9:00) The Three-Body Problem film (9:08) Silo (9:19) The Great Library of Alexandria (15:39) Soma (16:18) The 99 Names For God In Arabic (18:46) Upanishads Wikipedia (20:13) Georgism (34:00) Does Georgism Work? Astral Codex Ten (35:11) In Praise of the Gods (44:38) Thunder's Catch (1:09:47) Books Mentioned: Essays and Aphorisms (0:25) The World as Will and Representation (1:14) Bhagavad Gita (4:00) The Upanishads translated by Juan Mascaro (4:46) The Egg (8:01) Wool, Shift, Dust (trilogy) (9:26) The Expanse (9:46) The Right Stuff (38:04) Tao Te Ching (38:59) (Book Episode) (Nat's Book Notes) The Analects of Confucius (39:03) (Book Episode) (Nat's Book Notes) Straw Dogs (45:21) God's Debris (50:34) What Your Food Ate (1:10:57) (Book Episode) People Mentioned: Nassim Taleb (1:03) Simon Sarris (44:32) Scott Adams (50:26) Show Topics: [1:24] In today's episode, we're discussing The Upanishads, widely considered some of the most important and influential works in the history of Indian philosophy and spirituality. [5:30] We dive into a discussion about consciousness, souls, and whether everyone is a part of the same entity. The book suggests that there is a right path to follow, but doesn't necessarily give concrete details about what is good vs. bad. [8:58] There are several upcoming sci-fi movies coming out based on books that we're fans of. Which ones are you most excited to see? [11:12] Old texts are like a game of telephone: While the message may only change slightly each time it's told, it can add up to a large percent over a period of time. We also talk about Plato's early texts and how they poke at the ideas of Christianity, even before Christ. [16:04] Soma is a ritual drink referenced in many ancient Hindu texts as well as in The Upanishads, thought to possibly contain mushrooms or other psychedelic properties. [18:15] Calling an infinite being by a finite name. In Islam, there are 99 names for God so as to capture all properties of God. [20:09] We talk about some of the main parallels between the book and the philosophies of Plato as well as the longevity of large ancient empires (ex: Persian Empire). How did news spread across such a wide area without the communication tools we have today? [26:02] The contrast of ancient artifacts you can find in European cities vs. US cities. Plus, a little tangent on the birthplace of Teddy Roosevelt! [27:47] How cities differ in their homelessness approach. [34:26] What is Georgism and how would it look if it were applied in the US? 
[36:32] ChatGPT gives its interpretation of Made You Think. We also reflect on books similar to The Upanishads that we have done in previous episodes. [40:48] Rationalism vs. intuition. We pose the question of whether the ideas from this book were independently developed or whether they sprout from other teachings. It may simply depend on what lens you're looking at it from. [45:18] Society's move to secularism and what may have pushed people away from religion. [49:02] The world is full of mystery. Even someone with a rationalist approach would have to take a step back and recognize there are some things we just don't know. [51:32] Our opinions on the Twitter/X rebrand and how the algorithm can change based on who you're following. The impact of replies in amplifying your tweet. [57:27] What are the first tweets we see when we open the Twitter app? [1:01:31] The progression of spacecraft and the advancement of automation systems. India's recent achievement of landing on the moon. [1:06:23] We conclude the episode with a talk on cod, Alaskan salmon, and the chicken farming industry. [1:11:55] That's it for this episode! Join us next time as we dive into The Right Stuff by Tom Wolfe. Make sure to pick up a copy if you want to read along! If you enjoyed this episode, let us know by leaving a review on iTunes and tell a friend. As always, let us know if you have any book recommendations! You can say hi to us on Twitter @TheRealNeilS, @adilmajid, @nateliason and share your thoughts on this episode. You can now support Made You Think using the Value-for-Value feature of Podcasting 2.0. This means you can directly tip the co-hosts in BTC with minimal transaction fees. To get started, simply download a podcast app (like Fountain or Breez) that supports Value-for-Value and send some BTC to your in-app wallet. You can then use that to support shows that have opted in, including Made You Think! We'll be going with this direct support model moving forward, rather than ads. Thanks for listening. See you next time!
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Biological Anchors: The Trick that Might or Might Not Work, published by Scott Alexander on August 12, 2023 on LessWrong. This post was originally posted on Astral Codex Ten on Feb 23, 2022. It was printed in The Carving of Reality, the third volume of the Best of LessWrong book series. It was included as a (shorter) replacement for Ajeya Cotra's Draft report on AI timelines, and Eliezer's Biology-Inspired AGI Timelines: The Trick That Never Works, covering the topic from multiple sides. It's crossposted here with Scott's permission for completeness (i.e. having all essays in the book appear on LessWrong). Introduction I've been trying to review and summarize Eliezer Yudkowsky's recent dialogues on AI safety. Previously in sequence: Yudkowsky Contra Ngo On Agents. Now we're up to Yudkowsky contra Cotra on biological anchors, but before we get there we need to figure out what Cotra's talking about and what's going on. The Open Philanthropy Project ("Open Phil") is a big effective altruist foundation interested in funding AI safety. It's got $20 billion, probably the majority of money in the field, so its decisions matter a lot and it's very invested in getting things right. In 2020, it asked senior researcher Ajeya Cotra to produce a report on when human-level AI would arrive. It says the resulting document is "informal" - but it's 169 pages long and likely to affect millions of dollars in funding, which some might describe as making it kind of formal. The report finds a 10% chance of "transformative AI" by 2031, a 50% chance by 2052, and an almost 80% chance by 2100. Eliezer rejects their methodology and expects AI earlier (he doesn't offer many numbers, but here he gives Bryan Caplan 50-50 odds on 2030, albeit not totally seriously). He made the case in his own very long essay, Biology-Inspired AGI Timelines: The Trick That Never Works, sparking a bunch of arguments and counterarguments and even more long essays. There's a small cottage industry of summarizing the report already, eg OpenPhil CEO Holden Karnofsky's article and Alignment Newsletter editor Rohin Shah's comment. I've drawn from both for my much-inferior attempt. Part I: The Cotra Report Ajeya Cotra is a senior research analyst at OpenPhil. She's assisted by her fiancé Paul Christiano (compsci PhD, OpenAI veteran, runs an AI alignment nonprofit) and to a lesser degree by other leading lights. Although not everyone involved has formal ML training, if you care a lot about whether efforts are "establishment" or "contrarian", this one is probably more establishment. The report asks when we will first get "transformative AI" (ie AI which produces a transition as impressive as the Industrial Revolution; probably this will require it to be about as smart as humans). Its methodology is: 1. Figure out how much inferential computation the human brain does. 2. Try to figure out how much training computation it would take, right now, to get a neural net that does the same amount of inferential computation. Get some mind-bogglingly large number. 3. Adjust for "algorithmic progress", ie maybe in the future neural nets will be better at using computational resources efficiently. Get some number which, realistically, is still mind-bogglingly large. 4. Probably if you wanted that mind-bogglingly large amount of computation, it would take some mind-bogglingly large amount of money. 
But computation is getting cheaper every year. Also, the economy is growing every year. Also, the share of the economy that goes to investments in AI companies is growing every year. So at some point, some AI company will actually be able to afford that mind-bogglingly large amount of money, deploy the mind-bogglingly large amount of computation, and train the AI that has the same inferential computation as the human brain. 5. Figure out what year t...
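To make the arithmetic shape of that methodology concrete, here is a minimal Python sketch of the affordability-crossover calculation it describes. Every constant is a placeholder assumption chosen for illustration, not a figure from Cotra's report; only the structure (a training-compute requirement shrinking with algorithmic progress while budgets and hardware price-performance grow) follows the steps summarized above.

```python
# Toy version of the bio-anchors style calculation. All constants are
# illustrative assumptions, not numbers taken from the Cotra report.

TRAINING_FLOP_NEEDED_2020 = 1e36  # assumed training compute for brain-equivalent inference
ALGORITHMIC_HALVING_YEARS = 3.0   # assumed: requirement halves every 3 years of progress
FLOP_PER_DOLLAR_2020 = 1e17       # assumed hardware price-performance in 2020
HARDWARE_DOUBLING_YEARS = 2.5     # assumed FLOP-per-dollar doubling time
MAX_SPEND_2020 = 1e9              # assumed largest plausible training budget in 2020 (USD)
SPEND_GROWTH_RATE = 1.2           # assumed 20%/year growth in willingness to spend

def first_affordable_year(start_year: int = 2020, horizon: int = 2100):
    """Return the first year the shrinking compute requirement becomes affordable."""
    for year in range(start_year, horizon + 1):
        t = year - start_year
        flop_needed = TRAINING_FLOP_NEEDED_2020 * 0.5 ** (t / ALGORITHMIC_HALVING_YEARS)
        flop_per_dollar = FLOP_PER_DOLLAR_2020 * 2 ** (t / HARDWARE_DOUBLING_YEARS)
        budget = MAX_SPEND_2020 * SPEND_GROWTH_RATE ** t
        if budget * flop_per_dollar >= flop_needed:
            return year
    return None  # not affordable within the horizon under these assumptions

print(first_affordable_year())  # ~2054 with the placeholder numbers above
```

Changing any of the assumed constants shifts the crossover year substantially, which is why the choice of inputs matters so much in the report and in the responses to it.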
Finalist #11 in the Book Review Contest [This is one of the finalists in the 2023 book review contest, written by an ACX reader who will remain anonymous until after voting is done. I'll be posting about one of these a week for several months. When you've read them all, I'll ask you to vote for a favorite, so remember which ones you liked] What kind of fiction could be remarkable enough for an Astral Codex Ten review? How about the drug-fueled fantasies of a serial killer? Or perhaps the innovative, sophisticated prose of the first novel of a brilliant polymath? Or would you prefer a book written in such fantastically lucid language it feels more like a dream than a story? Possibly you'd be more interested in a book so unbelievably dangerous that the attempt to publish it was literally suicidal. Or maybe an unusual political book, such as an ultraconservative indictment of democracy by Adolf Hitler's favorite author? Or rather an indictment of both Hitler and Bolshevism, written by someone who was among the first to recognize Hitler as a true enemy of humanity? I picked On the Marble Cliffs, because it is all of that at the same time. https://astralcodexten.substack.com/p/your-book-review-on-the-marble-cliffs
Manifund is launching a new regranting program! We will allocate ~$2 million over the next six months based on the recommendations of our regrantors. Grantees can apply for funding through our site; we're also looking for additional regrantors and donors to join. What is regranting? Regranting is a funding model where a donor delegates grantmaking budgets to different individuals known as “regrantors”. Regrantors are then empowered to make grant decisions based on the objectives of the original donor. This model was pioneered by the FTX Future Fund; in a 2022 retro they considered regranting to be very promising at finding new projects and people to fund. More recently, Will MacAskill cited regranting as one way to diversify EA funding. What is Manifund? Manifund is the charitable arm of Manifold Markets. Some of our past work: impact certificates, with Astral Codex Ten and the OpenPhil AI Worldviews Contest; forecasting tournaments, with Charity Entrepreneurship and Clearer Thinking; donating prediction market winnings to charity, funded by the Future Fund. How does regranting on Manifund work? Our website makes the process simple, transparent, and fast: A donor contributes money to Manifold for Charity, our registered 501(c)(3) nonprofit. The donor then allocates the money between regrantors of their choice. They can increase budgets for regrantors doing a good job, or pick out new regrantors who share the donor's values. Regrantors choose which opportunities (eg existing charities, new projects, or individuals) to spend their budgets on, writing up an explanation for each grant made. We expect most regrants to start with a conversation between the recipient and the regrantor, and after that, for the process to take less than two weeks. Alternatively, people looking for funding can post their project on the Manifund site. Donors and regrantors can then decide whether to fund it, similar to Kickstarter. The Manifund team screens the grant to make sure it is legitimate, legal, and aligned with our mission. If so, we approve the grant, which sends money to the recipient's Manifund account. The recipient withdraws money from their Manifund account to be used for their project. Differences from the Future Fund's regranting program: Anyone can donate to regrantors. Part of what inspired us to start this program is how hard it is to figure out where to give as a longtermist donor: there's no GiveWell, no ACE, just a mass of opaque, hard-to-evaluate research orgs. Manifund's regranting infrastructure lets individual donors outsource their giving decisions to people they trust, who may be more specialized and more qualified at grantmaking. All grant information is public. This includes the identity of the regrantor and grant recipient, the project description, the grant size, and the regrantor's writeup. We strongly believe in transparency as it allows for meaningful public feedback, accountability of decisions, and establishment of regrantor track records. Almost everything is done through our website. This lets us move faster, act transparently, set good defaults, and encourage discourse about the projects in comment sections. We recognize that not all grants are suited for publishing; for now, we recommend sensitive grants apply to other donors (such as LTFF, SFF, OpenPhil). We're starting with less money. 
The Future [...] --- First published: July 5th, 2023 Source: https://forum.effectivealtruism.org/posts/RMXctNAksBgXgoszY/announcing-manifund-regrants Linkpost URL: https://manifund.org/rounds/regrants --- Narrated by TYPE III AUDIO. Share feedback on this narration.
https://astralcodexten.substack.com/p/spring-meetups-everywhere-2023 Many cities have regular Astral Codex Ten meetup groups. Twice a year, I try to advertise their upcoming meetups and make a bigger deal of it than usual so that irregular attendees can attend. This is one of those times. This year we have spring meetups planned in over eighty cities, from Tokyo to Punta Cana in the Dominican Republic. Thanks to all the organizers who responded to my request for details, and to Meetups Czar Skyler and the Less Wrong team for making this happen. You can find the list below, in the following order: Africa Asia-Pacific (including Australia) Europe (including UK) Latin America North America (including Canada)
Artificial General Intelligence (AGI) Show with Soroush Pour
In this episode, we speak with forecasting researcher & data scientist at Amazon AWS, Ryan Kupyn, about his timelines for the arrival of AGI. Ryan was recently ranked the #1 forecaster in Astral Codex Ten's 2022 prediction contest, beating out 500+ other forecasters and proving himself to be a world-class forecaster. He has also done work in ML & works as a forecaster for Amazon AWS. Hosted by Soroush Pour. Follow me for more AGI content: Twitter: https://twitter.com/soroushjp LinkedIn: https://www.linkedin.com/in/soroushjp/ == Show links == -- About Ryan Kupyn -- * Bio: Ryan is a forecasting researcher at Amazon. His main hobby outside of work is designing walking tours for different Los Angeles neighborhoods. * Ryan's meet-me email address: coffee AT ryankupyn DOT com * Ryan: "I love to meet new people and talk about careers, ML, their best breakfast recipes and anything else." -- Further resources -- * Superintelligence (Bostrom) * Superforecasting (Tetlock, Gardner) * Elements of Statistical Learning (Hastie, Tibshirani, Friedman) * Ryan: "For general background on forecasting/statistics. This book is my go-to reference for understanding the math behind a lot of foundational statistical techniques." * Animal Spirits (Akerlof, Shiller) * Ryan: "For understanding how forecasts can be driven by emotion. I find this a useful book for understanding how forecasts can be wrong, and a useful reminder to be mindful of my own forecasts." * Normal Accidents (Perrow) * Ryan: "For understanding how humans interact with systems in ways that negate attempts by their creators to make them safer. I think there's some utility in looking at previous accidents in complex systems to AGI, as presented in this book".
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Retrospective on the 2022 Conjecture AI Discussions, published by Andrea Miotti on February 24, 2023 on LessWrong. At the end of 2022, following the success of the 2021 MIRI Conversations, Conjecture started a project to host discussions about AGI and alignment with key people in the field. The goal was simple: surface positions and disagreements, identify cruxes, and make these debates public whenever possible for collective benefit. Given that people and organizations will have to coordinate to best navigate AI's increasing effects, this is the first, minimum-viable coordination step needed to start from. Coordination is impossible without at least common knowledge of various relevant actors' positions and models. People sharing their beliefs, discussing them and making as much as possible of that public is strongly positive for a series of reasons. First, beliefs expressed in public discussions count as micro-commitments or micro-predictions, and help keep the field honest and truth-seeking. When things are only discussed privately, humans tend to weasel around and take inconsistent positions over time, be it intentionally or involuntarily. Second, commenters help debates progress faster by pointing out mistakes. Third, public debates compound. Knowledge shared publicly leads to the next generation of arguments being more refined, and progress in public discourse. We circulated a document about the project to various groups in the field, and invited people from OpenAI, DeepMind, Anthropic, Open Philanthropy, FTX Future Fund, ARC, and MIRI, as well as some independent researchers to participate in the discussions. We prioritized speaking to people at AGI labs, given that they are focused on building AGI capabilities. The format of discussions was as follows: A brief initial exchange with the participants to decide on the topics of discussion. By default, the discussion topic was “How hard is Alignment?”, since we've found we disagree with most people about this, and the reasons for it touch on many core cruxes about AI. We held the discussion synchronously for ~120 minutes, in writing, each on a dedicated, private Slack channel. We involved a moderator when possible. The moderator's role was to help participants identify and address their cruxes, move the conversation forward, and summarize points of contention. We planned to publish cleaned up versions of the transcripts and summaries to Astral Codex Ten, LessWrong, and the EA Forum. Participants were given the opportunity to clarify positions and redact information they considered infohazards or PR risks, as well as veto publishing altogether. We included this clause specifically to address the concerns expressed by people at AI labs, who expected heavy scrutiny by leadership and communications teams on what they can state publicly. People from ARC, DeepMind, and OpenAI, as well as one independent researcher agreed to participate. The two discussions with Paul Christiano and John Wentworth will be published shortly. One discussion with a person working at DeepMind is pending approval before publication. After a discussion with an OpenAI researcher took place, OpenAI strongly recommended against publishing, so we will not publish it. Most people we were in touch with were very interested in participating. 
However, after checking with their own organizations, many returned saying their organizations would not approve them sharing their positions publicly. This was in spite of the extensive provisions we made to reduce downsides for them: making it possible to edit the transcript, veto publishing, strict comment moderation, and so on. We think organizations discouraging their employees from speaking openly about their views on AI risk is harmful, and we want to encourage more openness. We are pausing the project for...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Manifund Impact Market / Mini-Grants Round On Forecasting, published by Scott Alexander on February 24, 2023 on The Effective Altruism Forum. A team associated with Manifold Markets has created a prototype market for minting and trading impact certificates. To help test it out, I'm sponsoring a $20,000 grants round, restricted to forecasting-related projects only (to keep it small - sorry, everyone else). You can read the details at the Astral Codex Ten post. If you have a forecasting-related project idea for less than that amount of money, consider reading the post and creating a Manifund account and minting an impact certificate for it. If you're an accredited investor, you can buy and sell impact certificates. Read the post, create a Manifund account, send them enough financial information to confirm your accreditation, and start buying and selling. If you have a non-forecasting related project, you can try using the platform, but you won't be eligible for this grants round and you'll have to find your own oracular funding. We wouldn't recommend this unless you know exactly what you're doing. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
https://astralcodexten.substack.com/p/acx-survey-results-2022 Thanks to the 7,341 people who took the 2022 Astral Codex Ten survey. See the questions for the ACX survey. See the results from the ACX Survey (click “see previous responses” on that page). I'll be publishing more complicated analyses over the course of the next year, hopefully starting later this month. If you want to scoop me, or investigate the data yourself, you can download the answers of the 7000 people who agreed to have their responses shared publicly. Out of concern for anonymity, the public dataset will exclude or bin certain questions. If you want more complete information, email me and explain why, and I'll probably send it to you. Download the public data (.xlsx, .csv) If you're interested in tracking how some of these answers have changed over time, you might also enjoy reading the 2020 survey results.
1. I don't think I can make Google Forms only present data from people who agreed to make their responses public, so I've deleted everything identifiable on the individual level, eg your written long response answers. Everything left is just things like “X% of users are Canadian” or “Y% of users have ADHD”. There's no way to put these together and identify an ADHD Canadian, so I don't think they're privacy relevant. If you notice anything identifiable on the public results page, please let me know.
2. There will be a few confusing parts. I added some questions halfway through, so they will have fewer responses than others. On the “What Event Led To Your Distrust?” question, I added new multiple choice responses halfway through, so they will incorrectly appear less popular than the other responses. I think that is the only place I did that, but you can email me if you have any questions.
3. I deleted email address (obviously), some written long answers, some political questions that people might get in trouble for answering honestly, and some sex-related questions. I binned age to the nearest 5 years and deleted the finer-grained ethnicity question. I binned all incomes above $500,000 into “high”, and removed all countries that had fewer than ten respondents (eg if you said you were from Madagascar, it would have made you identifiable, so I deleted that). If you need this information for some reason, email me. Subscribe to Astral Codex Ten
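If you'd rather explore the public dataset programmatically than in a spreadsheet, here is a minimal Python sketch using pandas. The local filename and the example column name below are hypothetical placeholders (they are not given in the post); use whatever you saved the download as and check the real column names first.

import pandas as pd

# Minimal sketch, assuming you have downloaded the public dataset locally.
# "ACXPublic2022.csv" is a hypothetical local filename, not an official path.
df = pd.read_csv("ACXPublic2022.csv")

print(df.shape)                   # respondents x (binned/filtered) columns
print(df.columns.tolist()[:10])   # peek at the first few column names

# Example aggregate in the spirit of "X% of users are Canadian".
# "Country" is a hypothetical column name; check df.columns for the real one.
# print((df["Country"].value_counts(normalize=True) * 100).round(1).head())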
https://astralcodexten.substack.com/p/2023-subscription-drive-free-unlocked Astral Codex Ten has a paid subscription option. You pay $10 (or $2.50 if you can't afford the regular price) per month, and get:
Extra articles (usually 1-2 per month)
A Hidden Open Thread per week
Early access to some draft posts
The warm glow of supporting the blog
I feel awkward doing a subscription drive, because I already make a lot of money with this blog. But the graph of paid subscribers over time looks like this:
Yuehan Xiao loves to learn. Listen to learn about:
Yuehan's favorite podcasts
Yuehan's favorite blogs (including Scott Alexander and his substack, Astral Codex Ten)
His thoughts on Andy Bernard
His favorite classes
His career journey to Tuck
His experiences growing up in 3 different countries
Why he doesn't have an instagram
What he was most excited about for the end of 1st year
PS this episode was recorded at the end of April 2022 over a couple cans of Makku in case you were wondering
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Metaforecast late 2022 update: GraphQL API, Charts, better infrastructure behind the scenes, published by NunoSempere on November 4, 2022 on The Effective Altruism Forum.
tl;dr: Metaforecast is a search engine and an associated repository for forecasting questions. Since our last update, we have added a GraphQL API, charts, and dashboards. We have also reworked our infrastructure to make it more stable.
New API
Our most significant new addition is our GraphQL API. It allows other people to build on top of our efforts. It can be accessed at metaforecast.org/api/graphql, and looks similar to the EA Forum's own GraphQL API. To get the first 1000 questions, you could use a query like:
{
  questions(first: 1000) {
    edges {
      node {
        title
        url
        description
        options {
          name
          probability
        }
        qualityIndicators {
          numForecasts
          stars
        }
        timestamp
      }
    }
    pageInfo {
      endCursor
      startCursor
    }
  }
}
You can find more examples, like code to download all questions, in our /scripts folder, to which we welcome contributions.
Charts and question pages
Charts display a question's history. Charts can be accessed by clicking the expand button on the front page, although they are fairly slow to load at the moment. Clicking on the expand button brings the user to a question page, which contains a chart, the full question description, and a range of quality indicators. We are also providing an endpoint at metaforecast.org/questions/embed/[id] to allow other pages to embed our charts. For instance, to embed a question whose id is betfair-1.178163916, the endpoint would be metaforecast.org/questions/embed/betfair-1.178163916; one would then use that URL to embed the chart in another page. You can find the necessary question id by clicking a toggle under "advanced options" on the frontpage, or simply by noticing the id in our URL when expanding the question. With time, we aim to improve these pages, make them more interactive, etc. We also think it would be a good idea to embed Metaforecast questions and dashboards into the EA Forum, and we are trying to collaborate with the Manifold team, who have done this before, to make that happen.
Dashboards
Dashboards are collections of questions. For instance, here is a dashboard on global markets and inflation, as embedded in Global Guessing. Like questions, you can either view dashboards directly or embed them. You can also create your own.
Better infrastructure
We have also revamped our infrastructure. We moved from JavaScript to TypeScript, from MongoDB to Postgres, and simplified our backend.
We are open to collaborations
We are very much open to collaborations. If you want to integrate Metaforecast into your project and need help, do not hesitate to reach out, e.g., on our Github. Metaforecast is also open source, and we welcome contributions. You can see some to-dos here. Development is going more slowly now because it's mostly driven by Nuño working in his spare time, so contributions would be counterfactual.
Acknowledgements
Metaforecast is hosted by the Quantified Uncertainty Research Institute, and has received funding from Astral Codex Ten. It has received significant contributions from Vyacheslav Matyuhin, who was responsible for the upgrade to TypeScript and GraphQL. Thanks to Clay Graubard of Global Guessing for their comments and dashboards, to Insight Prediction for help smoothing out their API, to Nathan Young for general comments, and to others for their comments and suggestions. Thanks for listening.
To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
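For readers who want to try the GraphQL API described in the episode above, here is a minimal Python sketch (not from the original post). It assumes the requests library is available, and it assumes the Relay-style questions/edges/node wrapper suggested by the pageInfo fields mentioned above; treat the exact query shape and field names as illustrative rather than authoritative.

import requests

# Minimal sketch of querying the Metaforecast GraphQL API (endpoint named in
# the episode above). The questions/edges/node shape is an assumption based on
# the Relay-style pageInfo fields mentioned there; adjust to the real schema.
QUERY = """
{
  questions(first: 10) {
    edges {
      node {
        title
        url
        options { name probability }
        qualityIndicators { numForecasts stars }
      }
    }
  }
}
"""

resp = requests.post(
    "https://metaforecast.org/api/graphql",
    json={"query": QUERY},
    timeout=30,
)
resp.raise_for_status()

for edge in resp.json()["data"]["questions"]["edges"]:
    node = edge["node"]
    print(node["qualityIndicators"]["stars"], node["title"], node["url"])

The same query shape should work from any GraphQL client; the /scripts folder mentioned in the episode is the authoritative source of working examples.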
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: It's OK not to go into AI (for students), published by ruthgrace on July 14, 2022 on The Effective Altruism Forum. I've been running EA events in San Francisco every other month, and often I will meet a recent graduate, and as part of their introduction they will explain to me why they are or aren't working on AI stuff.[1] For the EA movement to be effective in getting things done, we need to be able to:
identify new cause areas
have diverse skillsets
have knowledge of different global industries
I think you can gain knowledge that can help with these things at any job, by getting a deep enough understanding of your industry to identify what the most pressing problems are, and how someone might go about solving them. Richard Hamming's talk about how to do impactful work has a lot of good advice that is pretty much broadly applicable to any job. Cal Newport writes in So Good They Can't Ignore You that the most important factor for success and happiness is getting really good at what you do, since it gives you more job options so that you can find one where you have the autonomy to make an impact (book summary on Youtube). Having effective altruists familiar with different global industries, such as:
food and beverage manufacturing
agriculture
electronics
biology, chemistry, pharma
supply chain
physical infrastructure (everything from public transportation to cell towers to space shuttles)
(insert other field that requires knowledge outside of computer desk work)
will help expand the tools and mechanisms the movement has to do good, and expand what the movement thinks is possible. For example, in the cause area of poverty, we want effective altruism to grow beyond malarial nets[2] and figure out how to get a country to go from developing to developed. This type of change requires many people on the ground doing many different things – starting businesses, building infrastructure, etc. The current pandemic might not meet the bar of being an extinction risk, but similar to an extinction risk, mitigating harm requires people with diverse skillsets to be able to do things like build better personal protective equipment, improve the cleanliness of air indoors, foster trust between people and public health institutions, and optimize regulatory bodies for responding to emergencies. Effective altruism is actively looking for people who are really good at, well, pretty much anything. Take a look at the Astral Codex Ten grantees and you'll find everything from people who are working on better voting systems to better slaughterhouses. Open Philanthropy has had more targeted focus areas, but even then their range goes from fake meat to criminal justice reform, and they are actively looking for new cause areas. It's OK to not go into AI, and there's no need to explain yourself or feel bad if you don't. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Increased Availability and Willingness for Deployment of Resources for Effective Altruism and Long-Termism, published by Evan Gaensbauer on December 29, 2021 on The Effective Altruism Forum. In the last few months, there have been several indicators of a dramatic increase of resources available from a number of organizations in effective altruism, in particular for the focus area of EA community/infrastructure building and long-termism. This includes not only financial resources for typical grant-making but also for research fellowships and scholarships from newer funding sources, as well as different kinds of institutional support and career opportunities. This list isn't a call for applications or requests at this time from any of the organizations in question. This is only a summary of recent developments so the EA community at large is aware of these opportunities and changes for the purpose of strategic decision-making.
Benjamin Todd, CEO of 80,000 Hours, made the case for why EA needs 'mega-projects', i.e., projects that can deploy up to $100 million per year.
The Centre for Effective Altruism (CEA) has dramatically grown in the last year, nearly doubling in staff.
Rethink Priorities has also doubled their team and is looking to expand further.
The EA Infrastructure Fund expects the total granted in 2021 to exceed $5 million, more than 2.7x the total granted in 2020 and 1.3x the amount awarded in all previous years (2018-2020) combined, and expects to be able to meet these funding needs.
The Survival and Flourishing Fund deployed approximately $19 million in 2021 vs. $5.4 million in 2020.
Vitalik Buterin, founder of the blockchain platform Ethereum, donated $25 million to the Future of Life Institute to support research activities such as Ph.D. scholarships.
Lightcone Infrastructure, now the parent organization of the rationality community blog LessWrong, is hiring new staff with salaries of $150,000-200,000.
The Effective Altruism Forum recently held a creative writing contest that awarded $20,000 total to the winners.
AI safety and research company Anthropic has raised $124 million in a Series A funding round.
Community member Sam Bankman-Fried is now the richest person under 30, and the cryptocurrency exchange he founded, FTX, is now offering EA fellowships.
The CEA is launching Campus Centers at top universities and is expecting some of these centers to be spending millions of dollars per year within the next few years. Beyond this, local groups funding is becoming much more accessible from either the CEA or the Global Challenges Project.
Open Philanthropy is more heavily investing in outreach projects and has announced Early Career Funding for Improving the Long-Term Future, biosecurity scholarships, and tech policy fellowships.
Popular blogger and EA community member Scott Alexander, through his blog Astral Codex Ten and the community around it, is overseeing $1.5 million worth of grant-making.
Thanks to Chris Leong for helping collate the above information and resources. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Link] Still Alive - Astral Codex Ten, published by jimrandomh on LessWrong. This is a linkpost.
I.
This was a triumph
I'm making a note here, huge success
No, seriously, it was awful. I deleted my blog of 1,557 posts. I wanted to protect my privacy, but I ended up with articles about me in the New Yorker, Reason, and The Daily Beast. I wanted to protect my anonymity, but I Streisand-Effected myself, and a bunch of trolls went around posting my real name everywhere they could find. I wanted to avoid losing my day job, but ended up quitting so they wouldn't be affected by the fallout. I lost a five-digit sum in advertising and Patreon fees. I accidentally sent about three hundred emails to each of five thousand people in the process of trying to put my blog back up. I had, not to mince words about it, a really weird year.
The first post on Scott Alexander's new blog on Substack, Astral Codex Ten. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why I Am Not in Charge, published by Zvi on LessWrong. Epistemic Status: Long piece written because the speed premium was too high to write a short one. Figured I should get my take on this out quickly. (Note: This has been edited to reflect that I originally mixed up UpToDate with WebMD (since it's been years since I've used either and was writing quickly) and gave the impression that WebMD was a useful product and that my wife liked it or used it. I apologize for the mix-up, and affirm that WebMD is mostly useless, but UpToDate is awesome.) Scott Alexander has a very high opinion of my ability to figure things out. It's quite the endorsement to be called the person that first comes to mind as likely to get things right. Scott even thinks it would hypothetically be great if I were the benevolent dictator or secretary of health, as my decisions would do a lot of good. In turn, I have a very high opinion of Scott Alexander. If you get to read one person on the internet going either forwards or backwards, I'd go with Scott. Even as others are often in awe of my level (and sometimes quality) of output, I have always been in awe of his level and quality of output. His core explanation in one paragraph: I have a much easier task than those in charge. All I have to do is get the right answer, without worrying (anything like as much) about liability or politics or leadership, or being legible, or any number of other things those with power and responsibility have to worry about lest they lose that power and responsibility and/or get sued into oblivion. Those with power have to optimize for seeking power and play the game of Moloch, and we need to pick a selection process that makes this the least destructive we can and thus can only rely on legible expertise, and we actually kind of do a decent job of it. In that spirit, I'd like to welcome everyone coming here from Astral Codex Ten, flesh out and make more explicit my model of the dynamics involved, and point out some of the ways in which I think Scott's model of the situation is wrong or incomplete. Which in turn will be partly wrong, and only be some of the ways in which it is wrong or incomplete, as my model is also doubtless wrong and incomplete. The core disagreement between my model and Scott's model is that Scott's model implicitly assumes people with power have goals and are maximizing for success given those goals. I do not believe this. More broadly, Scott assumes a generalized version of the efficient market hypothesis, or that people are efficiently producing a bundle of goods at the production possibilities frontier, because to do better in some way they'd have to sacrifice in another – if you want better policies you'll have to pay political capital for it, and you only have so much political capital, whereas there aren't free wins lying around or someone would have taken them. Again, I think this is incorrect. There's treasure everywhere. The other core disagreement is revealed most explicitly in the comments, when Scott is asked what the mysterious ‘force of power' is that would work to take me down if Biden decided to put me in charge. Scott's answer is that ‘someone like but not as high quality as Dr. Fauci' would take me out; such a person is a plausible figurehead for that effort, but I think the real answer is most everyone with power, implicitly acting together.
I've divided this post into sections that correspond to Scott's sections, so my Section I comments on Scott's Section I.
I
I think Scott's diagnosis of WebMD is mostly spot on. I know this because my wife is a psychiatrist and when I wrote the original version of this I remembered ‘yeah, the online website she uses is great' and somehow thought it was WebMD. Which it wasn't. UpToDate is the useful website that actually works and manages to provide useful in...
Salty Talk is a special edition of Healthy Rebellion Radio. Each week on Salty Talk Robb will do a deep dive into current health and performance news, mixed with an occasional Salty conversation with movers and shakers in the world of research, performance, health, and longevity. For the full video presentation of this episode and to be a part of the conversation, join us in The Healthy Rebellion online community. WARNING: These episodes may get “salty” with the occasional expletive. Please Subscribe and Review: Apple Podcasts | RSS Submit your questions for the podcast here Show Notes: We had a THR member mention she was unfamiliar with Substack and was wondering how to find people to follow. Several members chimed in with substacks they subscribe to, and we thought this might be a fun thing to talk about in today's episode.
Common Sense with Bari Weiss - https://bariweiss.substack.com/
bad cattitude - boriqua gato https://boriquagato.substack.com/p/a-cats-tale-how-getting-canceled
Vinay Prasad https://vinayprasadmdmph.substack.com/p/which-causes-more-myocarditis-covid19
Astral Codex Ten https://astralcodexten.substack.com/p/ivermectin-much-more-than-you-wanted
Eugyppius - a plague chronicle https://eugyppius.substack.com/ https://eugyppius.substack.com/p/stupid-and-evil-in-equal-measures
Natural Selections - Heather Heying https://naturalselections.substack.com/
Slowdown Farmstead - Tara Couture https://www.slowdownfarmstead.com/ an ode to the salty https://www.slowdownfarmstead.com/p/an-ode-to-the-salty
Raelle Kaia - Open Heart, Open Mind https://raellekaia.substack.com/p/whats-to-be-done-about-the-vaccine
Other authors to check out that were mentioned by THR members:
Glenn Greenwald https://greenwald.substack.com/
Matt Taibbi https://taibbi.substack.com/
Popular Rationalism https://popularrationalism.substack.com/
John McWhorter https://substack.com/profile/6527799-john-mcwhorter
Alex Berenson https://alexberenson.substack.com/
Toby Rogers https://tobyrogers.substack.com/
Steve Kirsch https://stevekirsch.substack.com/
Dr Rollergator https://drrollergator.substack.com/
Sponsor: The Healthy Rebellion Radio is sponsored by our electrolyte company, LMNT. Proper hydration is more than just drinking water. You need electrolytes too! Check out The Healthy Rebellion Radio sponsor LMNT for grab-and-go electrolyte packets to keep you at your peak! They give you all the electrolytes you want, none of the stuff you don't. Click here to get your LMNT electrolytes Transcript: For a transcript of this episode check out the blog post at https://robbwolf.com/2021/11/19/substack-goodness-salty-talk-036-thrr/
https://astralcodexten.substack.com/p/book-review-contest-winners Thanks to everyone who participated or voted in the Book Review Contest. The winners are:
FIRST PLACE: Progress and Poverty, reviewed by Lars Doucet. Lars is a Norwegian-Texan game designer, and you can read his game design blog here. He's a pretty serious Georgist and posts regularly in the Georgism subreddit.
SECOND PLACE: Down And Out In Paris And London, reviewed by Whimsi. Whimsi blogs here, but otherwise asks to remain mysterious.
THIRD PLACE: On The Natural Faculties, reviewed by ELP. E is a researcher and an author of the blog Slime Mold Time Mold.
READERS' CHOICE AWARD: Disunited Nations vs. Dawn Of Eurasia, reviewed by Misha Saul. Misha is an investor in Sydney, Australia, and blogs here.
And congratulations to all other finalists (here listed in order of appearance), whose secret identities were:
Order Without Law, reviewed by Phil Hazelden
Are We Smart Enough To Know How Smart Animals Are, reviewed by Jeff Russell
Why Buddhism Is True, reviewed by Eve Bigaj
Double Fold, reviewed by Boštjan P
The Wizard And The Prophet, reviewed by Maryana
Through The Eye Of A Needle, reviewed by Tom Powell
Years Of Lyndon Johnson, reviewed by Theodore Ehrenborg
Addiction By Design, reviewed by Ketchup Duck
The Accidental Superpower, reviewed by John B
Humankind, reviewed by Neil R
The Collapse Of Complex Societies, reviewed by Etirabys
Where's My Flying Car, reviewed by Jonathan P
How Children Fail, reviewed by HonoreDB
Plagues And Peoples, reviewed by Joel Ferris (who is looking for a job, email here)
All finalists win a permanent free subscription to Astral Codex Ten - since a subscription costs $10/month, this is technically an infinity dollar value! If you already have a subscription, you are now a Super Double Mega-Subscriber, which has no consequences in the material world, but several important metaphysical advantages. I should have already credited this to your email addresses; please let me know if it didn't go through or if I used the wrong address.
Bean blogs on naval warfare here. He writes on modern naval warfare (well, from the mid-19th century). His first love is battleships, but his interests go much wider than that. We talked about battleships (the Yamato in particular), the fate of the French fleet in WWII, whether carriers are obsolete, and the Falklands War. If you like this kind of thing then, well, this is the kind of thing you may very well like. Bean is also a big hit over at Astral Codex Ten, where his posts in the comments section generate a lot of interest.