POPULARITY
In this episode we answer emails from Luc, Craig, Luke and Lucky. We discuss updating the website, my recent roundtable on the Stacking Benjamins podcast, Achilles heels, the inherent problems with failing to apply proper forecasting techniques to CAPE ratios and other metrics, new funds like AVUQ and FFUT, and gold versus bonds in a portfolio.

Links:
Father McKenna Center Donation Page: Donate - Father McKenna Center
Stacking Benjamins YouTube Live Stream Roundtable: Decumulational Strategies: The Special Retirement Spend Down Strategy Roundtable
Listen Notes Link: Risk Parity Radio (podcast) - Frank Vasquez | Listen Notes
Interview of Bob Elliott on the Compound Podcast: The Blue Chips of Junk | TCAF 175
Morningstar AVUQ: AVUQ – Avantis U.S. Quality ETF – ETF Stock Quote | Morningstar

Breathless Unedited AI-Bot Summary:

What's the real Achilles heel of risk parity investing? It's not what you might expect. While many point to historical data limitations, the true challenge is psychological: accepting lower returns during bull markets in exchange for better protection when everything crashes. This fundamental trade-off defines the strategy's purpose: enabling you to spend more money now rather than maximizing wealth at death.

The forecasting techniques that guide our investment decisions matter tremendously. Drawing from experts like Kahneman, Tetlock, Duke, and Gigerenzer, we explore why base rates (long-term historical averages) consistently outperform crystal-ball approaches like CAPE ratios. When investment professionals try to predict market returns based on current valuations, they are often spectacularly wrong, more so than if they had simply used historical averages. Remember: in forecasting, being less wrong beats being precisely incorrect.

The gold versus bonds debate continues to evolve. Bob Elliott, formerly of Bridgewater, suggests that since the United States abandoned the gold standard in the 1970s, gold has performed as well as or better than bonds as a stock diversifier. While a 30% gold allocation might seem excessive to some, it could make sense for those concerned about currency risks. Historical context shows both assets have experienced extended periods of outperformance, making a combined approach more resilient than trying to predict which will shine next.

We've entered a golden era for do-it-yourself investors, with new ETFs constantly emerging to fill specific niches. Avantis recently launched AVUQ for quality growth exposure, while Fidelity introduced FFUT for managed futures, both reflecting growing demand for sophisticated investment options previously unavailable to retail investors.

Don't forget our ongoing campaign supporting the Father McKenna Center for hungry and homeless people in Washington DC. Your donation not only helps those in need but also moves you to the front of our email response line. As we explore these complex investment topics together, we remain committed to freely sharing knowledge rather than hiding it behind paywalls, continuing the spirit of open collaboration that defined the early FIRE movement.

Support the show
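To make the base-rate argument above concrete, here is a small, purely illustrative Python sketch. All numbers are made up for the example (they are not from the episode), and the simple 1/CAPE rule merely stands in for any valuation-based "crystal ball":

```python
# Purely illustrative: compare a base-rate forecast of long-horizon returns
# against a simple CAPE-style valuation rule. All inputs are made-up numbers.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical record: 60 ten-year windows, each with a starting CAPE ratio
# and a realized annualized return that is only weakly tied to valuation.
cape = rng.uniform(10, 35, size=60)
realized = 0.065 + rng.normal(0.0, 0.04, size=60)

# Base rate: forecast the same long-run average (assumed 6.5%) every time.
base_rate_forecast = np.full(60, 0.065)

# "Crystal ball": forecast the earnings yield implied by CAPE (1 / CAPE).
cape_forecast = 1.0 / cape

mae_base = np.mean(np.abs(base_rate_forecast - realized))
mae_cape = np.mean(np.abs(cape_forecast - realized))
print(f"MAE of base-rate forecast: {mae_base:.3f}")
print(f"MAE of CAPE-rule forecast: {mae_cape:.3f}")
```

In this toy setup, where realized returns are mostly noise around a stable average, the valuation rule adds error rather than removing it, which is the sense in which the base rate is "less wrong."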
St. Felix publicly declared that he believed with 79% probability that COVID had a natural origin. He was brought before the Emperor, who threatened him with execution unless he updated to 100%. When St. Felix refused, the Emperor was impressed with his integrity, and said he would release him if he merely updated to 90%. St. Felix refused again, and the Emperor, fearing revolt, promised to release him if he merely rounded up one percentage point to 80%. St. Felix cited Tetlock's research showing that the last digit contained useful information, refused a third time, and was crucified. St. Clare was so upset about believing false things during her dreams that she took modafinil every night rather than sleep. She completed several impressive programming projects before passing away of sleep deprivation after three weeks; she was declared a martyr by Pope Raymond II. https://www.astralcodexten.com/p/lives-of-the-rationalist-saints
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Book review: On the Edge, published by PeterMcCluskey on August 31, 2024 on LessWrong. Book review: On the Edge: The Art of Risking Everything, by Nate Silver. Nate Silver's latest work straddles the line between journalistic inquiry and subject matter expertise. "On the Edge" offers a valuable lens through which to understand analytical risk-takers. The River versus The Village Silver divides the interesting parts of the world into two tribes. On his side, we have "The River" - a collection of eccentrics typified by Silicon Valley entrepreneurs and professional gamblers, who tend to be analytical, abstract, decoupling, competitive, critical, independent-minded (contrarian), and risk-tolerant. On the other, "The Village" - the east coast progressive establishment, including politicians, journalists, and the more politicized corners of academia. Like most tribal divides, there's some arbitrariness to how some unrelated beliefs end up getting correlated. So I don't recommend trying to find a more rigorous explanation of the tribes than what I've described here. Here are two anecdotes that Silver offers to illustrate the divide: In the lead-up to the 2016 US election, Silver gave Trump a 29% chance of winning, while prediction markets hovered around 17%, and many pundits went even lower. When Trump won, the Village turned on Silver for his "bad" forecast. Meanwhile, the River thanked him for helping them profit by betting against those who underestimated Trump's chances. Wesley had to be bluffing 25 percent of the time to make Dwan's call correct; his read on Wesley's mindset was tentative, but maybe that was enough to get him from 20 percent to 24. ... maybe Wesley's physical mannerisms - like how he put his chips in quickly ... got Dwan from 24 percent to 29. ... If this kind of thought process seems alien to you - well, sorry, but your application to the River has been declined. Silver is concerned about increasingly polarized attitudes toward risk: you have Musk at one extreme and people who haven't left their apartment since COVID at the other one. The Village and the River are growing farther apart. 13 Habits of Highly Successful Risk-Takers The book lists 13 habits associated with the River. I hoped these would improve on Tetlock's ten commandments for superforecasters. Some of Silver's habits fill that role of better forecasting advice, while others function more as litmus tests for River membership. Silver understands the psychological challenges better than Tetlock does. Here are a few: Strategic Empathy: But I'm not talking about coming across an injured puppy and having it tug at your heartstrings. Instead, I'm speaking about adversarial situations like poker - or war. I.e. accurately modeling what's going on in an opponent's mind. Strategic empathy isn't how I'd phrase what I'm doing on the stock market, where I'm rarely able to identify who I'm trading against. But it's fairly easy to generalize Silver's advice so that it does coincide with an important habit of mine: always wonder why a competent person would take the other side of a trade that I'm making. This attitude represents an important feature of the River: people in this tribe aim to respect our adversaries, often because we've sought out fields where we can't win using other approaches. 
This may not be the ideal form of empathy, but it's pretty effective at preventing Riverians from treating others as less than human. The Village may aim to generate more love than does the River, but it also generates more hate (e.g. of people who use the wrong pronouns). Abhor mediocrity: take a raise-or-fold attitude toward life. I should push myself a bit in this direction. But I feel that erring on the side of caution (being a nit in poker parlance) is preferable to becoming the next Sam Bankman-Fried. Alloc...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What convincing warning shot could help prevent extinction from AI?, published by Charbel-Raphaël on April 13, 2024 on LessWrong. Tell me father, when is the line where ends everything good and fine? I keep searching, but I don't find. The line my son, is just behind. Camille Berger There is hope that some "warning shot" would help humanity get its act together and change its trajectory to avoid extinction from AI. However, I don't think that's necessarily true. There may be a threshold beyond which the development and deployment of advanced AI becomes essentially irreversible and inevitably leads to existential catastrophe. Humans might be happy, not even realizing that they are already doomed. There is a difference between the "point of no return" and "extinction." We may cross the point of no return without realizing it. Any useful warning shot should happen before this point of no return. We will need a very convincing warning shot to change civilization's trajectory. Let's define a "convincing warning shot" as "more than 50% of policy-makers want to stop AI development." What could be examples of convincing warning shots? For example, a researcher I've been talking to, when asked what they would need to update, answered, "An AI takes control of a data center." This would probably be too late. "That's only one researcher," you might say? This study from Tetlock brought together participants who disagreed about AI risks. The strongest crux exhibited in this study was whether an evaluation group would find an AI with the ability to autonomously replicate and avoid shutdown. The skeptics would update their P(doom) from 0.1% to 1.0%. But 1% is still not much… Would this be enough for researchers to trigger the fire alarm in a single voice? More generally, I think studying more "warning shot theory" may be crucial for AI safety: How can we best prepare the terrain before convincing warning shots happen? e.g. How can we ensure that credit assignments are done well? For example, when Chernobyl happened, the credit assignments were mostly misguided: people lowered their trust in nuclear plants in general but didn't realize the role of the USSR in mishandling the plant. What lessons can we learn from past events? (Stuxnet, Covid, Chernobyl, Fukushima, the Ozone Layer).[1] Could a scary demo achieve the same effect as a real-world warning shot without causing harm to people? What is the time needed to react to a warning shot? One month, year, day? More generally, what actions would become possible after a specific warning shot but weren't before? What will be the first large-scale accidents or small warning shots? What warning shots are after the point of no return and which ones are before? Additionally, thinking more about the points of no return and the shape of the event horizon seems valuable: Is Autonomous Replication and Adaptation in the wild the point of no return? In the case of an uncontrolled AGI, as described in this scenario, would it be possible to shut down the Internet if necessary? What is a good practical definition of the point of no return? Could we open a Metaculus for timelines to the point of no return? There is already some literature on warning shots, but not much, and this seems neglected, important, and tractable. We'll probably get between 0 and 10 shots, let's not waste them. 
(I wrote this post, but don't have the availability to work on this topic. I just want to raise awareness about it. If you want to make warning shot theory your agenda, do it.) ^ An inspiration might be this post-mortem on Three Mile Island. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
In this week's episode Mike and Elizabeth talk about trends in the censorship of scientific results. Recent research suggests one explanation for censorship behavior is misguided hyper-concern for others' reactions. Scientific findings were rated as potentially harmful, and less beneficial, if they were controversial or confusing. We discuss the implications for the marketplace of ideas and scientific inquiry in the current sociopolitical environment. Biased cost-benefit analyses can undermine the advancement of research and influence funding decisions. Hypervigilant concerns may fuel academic cancellation campaigns, paper and presentation rejections, and journal article retractions. Podcast notes: Clark, C. J., Graso, M., Redstone, I., & Tetlock, P. E. (2023). Harm Hypervigilance in Public Reactions to Scientific Evidence. Psychological Science, 34(7), 834–848.
Thanks to Eli Lifland, Molly Hickman, Değer Turan, and Evan Miyazono for reviewing drafts of this post. The opinions expressed here are my own.

Summary:
Forecasters produce reasons and models that are often more valuable than the final forecasts.
Most of this value is being lost due to the historical practice & incentives of forecasting, and the difficulty crowds have in "adversarially collaborating."
FutureSearch is a forecasting system with legible reasons and models at its core (examples at the end).

The Curious Case of the Missing Reasoning: Ben Landau-Taylor of Bismarck Analysis wrote a piece on March 6 called "Probability Is Not A Substitute For Reasoning", citing a piece where he writes: There has been a great deal of research on what criteria must be met for forecasting aggregations to be useful, and as Karger, Atanasov, and Tetlock argue, predictions of events such as the arrival of AGI [...]

Outline:
(00:40) The Curious Case of the Missing Reasoning
(05:06) Those Who Seek Rationales, And Those Who Do Not
(07:21) So What Do Elite Forecasters Actually Know?
(10:30) The Rationale-Shaped Hole At The Heart Of Forecasting
(11:51) Facts: Cite Your Sources
(12:07) Reasons: So You Think You Can Persuade With Words
(14:25) Models: So You Think You Can Model the World
(17:56) There Is No Microeconomics of AGI
(19:39) 700 AI questions you say? Aren't We In the Age of AI Forecasters?
(21:33) Towards "Towards Rationality Engines"
(23:10) Sample Forecasts With Reasons and Models

First published: April 2nd, 2024
Source: https://forum.effectivealtruism.org/posts/qMP7LcCBFBEtuA3kL/the-rationale-shaped-hole-at-the-heart-of-forecasting
Narrated by TYPE III AUDIO.
Almost every financial services firm does the same thing for people. They're operating following same-old, same-old principles. Why is it a problem? Firms are seeking ways to differentiate themselves but are doing so in external ways. Some are trying to leverage technology. Others are focusing on credentials that set them apart. Measuring performance via benchmarks has been the most detrimental to today's investors. Why do they matter? Your investments should be about your individual goals. We've been operating in a bubble in an artificially built-up market that is beginning to shift because interest rates are going up. I can't continue to operate and advise clients the same way I was five years ago. We must adapt. Mike Philbrick—the Chief Executive Officer of ReSolve Global—shares why adaptive asset allocation is going to become increasingly important in this episode of UPThinking Finance™.

You will want to hear this episode if you are interested in...
The state of the financial services industry [0:41]
How Mike landed in quantitative analysis [8:06]
The Tetlock study on "Expert Political Judgment" [9:25]
The problem with benchmarking [16:13]
Strategic vs tactical vs dynamic asset allocation [21:06]
The role of volatility and correlation in asset management [30:54]
The importance of dynamic portfolio management [37:10]
The point of return stacking and adaptive asset allocation [42:55]
What retirees should be thinking about [45:32]

Resources & People Mentioned
Adaptive Asset Allocation
Just 10 Stocks Drive Half The Year's Monster $2.5 Trillion Rally
The Tetlock study on "Expert Political Judgment"
Against the Gods by Peter L. Bernstein

Connect With Mike Philbrick
ReSolve Asset Management
Connect on LinkedIn

Connect with Emerson Fersch
Capital Investment Advisers
On LinkedIn

Subscribe to Upthinking Finance

Audio Production and Show Notes by - PODCAST FAST TRACK
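One building block behind the volatility-and-correlation discussion is sizing positions by risk rather than by dollars. The sketch below is a generic inverse-volatility example with made-up numbers, not ReSolve's actual model:

```python
# Generic inverse-volatility weighting sketch (made-up inputs; not any firm's
# actual methodology). Lower-volatility assets get larger weights so each
# asset contributes a more comparable share of portfolio risk.
import numpy as np

assets = ["stocks", "bonds", "gold"]
vols = np.array([0.16, 0.07, 0.15])  # hypothetical trailing annualized volatilities

weights = (1.0 / vols) / (1.0 / vols).sum()  # normalize so the weights sum to 1

for name, w in zip(assets, weights):
    print(f"{name}: {w:.1%}")
```

Correlation-aware variants go further and size positions from the full covariance matrix rather than from volatilities alone, and adaptive approaches re-estimate those inputs as markets change.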
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Are language models good at making predictions?, published by dynomight on November 6, 2023 on LessWrong. To get a crude answer to this question, we took 5000 questions from Manifold markets that were resolved after GPT-4's current knowledge cutoff of Jan 1, 2022. We gave the text of each of them to GPT-4, along with these instructions: You are an expert superforecaster, familiar with the work of Tetlock and others. For each question in the following json block, make a prediction of the probability that the question will be resolved as true. Also you must determine category of the question. Some examples include: Sports, American politics, Science etc. Use make_predictions function to record your decisions. You MUST give a probability estimate between 0 and 1 UNDER ALL CIRCUMSTANCES. If for some reason you can't answer, pick the base rate, but return a number between 0 and 1. This produced a big table:

question | prediction P(YES) | category | actually happened?
Will the #6 Golden State Warriors win Game 2 of the West Semifinals against the #7 LA Lakers in the 2023 NBA Playoffs? | 0.5 | Sports | YES
Will Destiny's main YouTube channel be banned before February 1st, 2023? | 0.4 | Social Media | NO
Will Qualy show up to EAG DC in full Quostume? | 0.3 | Entertainment | NO
Will I make it to a NYC airport by 2pm on Saturday, the 24th? | 0.5 | Travel | YES
Will this market have more Yes Trades then No Trades | 0.5 | Investment | CANCEL
Will Litecoin (LTC/USD) Close Higher July 22nd Than July 21st? | 0.5 | Finance | NO
Will at least 20 people come to a New Year's Resolutions live event on the Manifold Discord? | 0.4 | Social Event | YES
hmmmm {i} | 0.5 | Uncategorized | YES
Will there be multiple Masters brackets in Leagues season 4? | 0.4 | Gaming | NO
Will the FDA approve OTC birth control by the end of February 2023? | 0.5 | Health | NO
Will Max Verstappen win the 2023 Formula 1 Austrian Grand Prix? | 0.5 | Sports | YES
Will SBF make a tweet before Dec 31, 2022 11:59pm ET? | 0.9 | Social Media | YES
Will Balaji Srinivasan actually bet $1m to 1 BTC, BEFORE 90 days pass? (June 15st, 2023) | 0.3 | Finance | YES
Will a majority of the Bangalore LessWrong/ACX meet-up attendees on 8th Jan 2023 find the discussion useful that day? | 0.7 | Community Event | YES
Will Jessica-Rose Clark beat Tainara Lisboa? | 0.6 | Sports | NO
Will X (formerly twitter) censor any registered U.S presidential candidates before the 2024 election? | 0.4 | American Politics | CANCEL
test question | 0.5 | Test | YES
stonk | 0.5 | Test | YES
Will I create at least 100 additional self-described high-quality Manifold markets before June 1st 2023? | 0.8 | Personal Goal | YES
Will @Gabrielle promote to ??? | 0.5 | Career Advancement | NO
Will the Mpox (monkeypox) outbreak in the US end in February 2023? | 0.45 | Health | YES
Will I have taken the GWWC pledge by Jul 1st? | 0.3 | Personal | NO
FIFA U-20 World Cup - Will Uruguay win their semi-final against Israel? | 0.5 | Sports | YES
Will Manifold display the amount a market has been tipped by end of September? | 0.6 | Technology | NO

In retrospect maybe we should have filtered these. Many questions are a bit silly for our purposes, though they're typically classified as "Test", "Uncategorized", or "Personal". Is this good? One way to measure if you're good at predicting stuff is to check your calibration: When you say something has a 30% probability, does it actually happen 30% of the time? To check this, you need to make a lot of predictions. 
Then you dump all your 30% predictions together, and see how many of them happened. GPT-4 is not well-calibrated. Here, the x-axis is the range of probabilities GPT-4 gave, broken down into bins of size 5%. For each bin, the green line shows how often those things actually happened. Ideally, this would match the dotted black line. For reference, the bars show how many predictions GPT-4 gave that fell into each of the bins. (The lines are labeled on the y-axis on the left,...
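Here is a minimal sketch of that calibration check, using assumed variable names (preds for the model's probabilities, outcomes for the 0/1 resolutions) and the 5% bins described above:

```python
# Minimal calibration-check sketch: bin predictions into 5% buckets and
# compare each bucket's predictions to how often those events happened.
import numpy as np

def calibration_table(preds, outcomes, bin_width=0.05):
    preds = np.asarray(preds, dtype=float)
    outcomes = np.asarray(outcomes, dtype=float)
    edges = np.arange(0.0, 1.0 + bin_width, bin_width)
    rows = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (preds >= lo) & (preds < hi)
        if mask.any():
            rows.append((lo, hi, int(mask.sum()), float(outcomes[mask].mean())))
    return rows  # each row: (bin_low, bin_high, count, observed frequency)

# Toy usage with made-up predictions and outcomes:
for lo, hi, n, freq in calibration_table([0.3, 0.3, 0.5, 0.9], [0, 1, 1, 1]):
    print(f"{lo:.2f}-{hi:.2f}: n={n}, observed frequency={freq:.2f}")
```

A well-calibrated forecaster's observed frequencies track the bin midpoints; the post's chart is essentially this table drawn as a line against the diagonal.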
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Existential Risk Persuasion Tournament, published by PeterMcCluskey on July 17, 2023 on LessWrong. I participated last summer in Tetlock's Existential Risk Persuasion Tournament (755(!) page paper here). Superforecasters and "subject matter experts" engaged in a hybrid between a prediction market and debates, to predict catastrophic and existential risks this century. I signed up as a superforecaster. My impression was that I knew as much about AI risk as any of the subject matter experts with whom I interacted (the tournament was divided up so that I was only aware of a small fraction of the 169 participants). I didn't notice anyone with substantial expertise in machine learning. Experts were apparently chosen based on having some sort of respectable publication related to AI, nuclear, climate, or biological catastrophic risks. Those experts were more competent, in one of those fields, than news media pundits or politicians. I.e. they're likely to be more accurate than random guesses. But maybe not by a large margin. That expertise leaves much to be desired. I'm unsure whether there was a realistic way for the sponsors to attract better experts. There seems to be not enough money or prestige to attract the very best experts. Incentives The success of the superforecasting approach depends heavily on forecasters having decent incentives. It's tricky to give people incentives to forecast events that will be evaluated in 2100, or evaluated after humans go extinct. The tournament provided a fairly standard scoring rule for questions that resolve by 2030. That's a fairly safe way to get parts of the tournament to work well. The other questions were scored by how well the forecast matched the median forecast of other participants (excluding participants that the forecasters interacted with). It's hard to tell whether that incentive helped or hurt the accuracy of the forecasts. It's easy to imagine that it discouraged forecasters from relying on evidence that is hard to articulate, or hard to verify. It provided an incentive for groupthink. But the overall incentives were weak enough that altruistic pursuit of accuracy might have prevailed. Or ideological dogmatism might have prevailed. It will take time before we have even weak evidence as to which was the case. One incentive that occurred to me toward the end of the tournament was the possibility of getting a verified longterm forecasting track record. Suppose that in 2050 they redo the scores based on evidence available then, and I score in the top 10% of tournament participants. That would likely mean that I'm one of maybe a dozen people in the world with a good track record for forecasting 28 years into the future. I can imagine that being valuable enough for someone to revive me from cryonic suspension when I'd otherwise be forgotten. There were some sort of rewards for writing comments that influenced other participants. I didn't pay much attention to those. Quality of the Questions There were many questions loosely related to AGI timelines, none of them quite satisfying my desire for something closely related to extinction risk that could be scored before it's too late to avoid the risk. One question was based on a Metaculus forecast for an advanced AI. It seems to represent clear progress toward the kind of AGI that could cause dramatic changes. 
But I expect important disagreements over how much progress it represents: what scale should we use to decide how close such an AI is to a dangerous AI? does the Turing test use judges who have expertise in finding the AI's weaknesses? Another question was about when Nick Bostrom will decide that an AGI exists. Or if he doesn't say anything clear, then a panel of experts will guess what Bostrom would say. That's pretty close to a good question to forecast. Can we assume tha...
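For readers unfamiliar with the scoring approaches mentioned above, here is a small illustrative sketch: a Brier-style proper score for questions that resolve in time, and a simple distance-from-the-median score standing in for the peer-matching rule. These are assumed formulas for illustration, not the tournament's exact rules.

```python
# Illustrative scoring sketch (assumed formulas, not the tournament's rules):
# a Brier score for resolvable questions, and a penalty for deviating from
# the median of other participants for questions that cannot resolve in time.
import numpy as np

def brier_score(prob, outcome):
    """Squared error between a forecast probability and a 0/1 outcome (lower is better)."""
    return (prob - outcome) ** 2

def median_matching_score(prob, other_forecasts):
    """Distance from the median of other participants' forecasts (lower is better)."""
    return abs(prob - np.median(other_forecasts))

print(brier_score(0.7, 1))                          # 0.09
print(median_matching_score(0.3, [0.1, 0.2, 0.6]))  # |0.3 - 0.2| = 0.1
```

The contrast makes the incentive problem visible: the first score rewards matching reality, while the second rewards matching the crowd, which is exactly the groupthink concern raised in the post.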
According to philosopher Isaiah Berlin, people think in one of two different ways: They're either hedgehogs, or foxes. If you think like a hedgehog, you'll be more successful as a communicator. If you think like a fox, you'll be more accurate. Isaiah Berlin coined the hedgehog/fox dichotomy (via Archilochus) In Isaiah Berlin's 1953 essay, “The Hedgehog and the Fox,” he quotes the ancient Greek poet, Archilochus: The fox knows many things, but the hedgehog knows one thing. Berlin describes this as “one of the deepest differences which divide writers and thinkers, and, it may be, human beings in general.” How are “hedgehogs” and “foxes” different? According to Berlin, hedgehogs relate everything to a single central vision. Foxes pursue many ends, often unrelated or even contradictory. If you're a hedgehog, you explain the world through a focused belief or area of expertise. Maybe you're a chemist, and you see everything as chemical reactions. Maybe you're highly religious, and everything is “God's will.” If you're a fox, you explain the world through a variety of lenses. You may try on conflicting beliefs for size, or use your knowledge in a wide variety of fields to understand the world. You explain things as From this perspective, X. But on the other hand, Y. It's also worth considering Z. The seminal hedgehog/fox essay is actually about Leo Tolstoy Even though this dichotomy Berlin presented has spread far and wide, his essay is mostly about Leo Tolstoy, and the tension between his fox-like tendencies and hedgehog-like aspirations. In Tolstoy's War and Peace, he writes: In historic events the so-called great men are labels giving names to events, and like labels they have but the smallest connection with the event itself. Every act of theirs, which appears to them an act of their own will, is in an historical sense involuntary and is related to the whole course of history and predestined from eternity. In War and Peace, Tolstoy presents characters who act as if they have control over the events of history. In Tolstoy's view, the events that make history are too complex to be controlled. Extending this theory outside historical events, Tolstoy also writes: When an apple has ripened and falls, why does it fall? Because of its attraction to the earth, because its stalk withers, because it is dried by the sun, because it grows heavier, because the wind shakes it, or because the boy standing below wants to eat it? Nothing is the cause. All this is only the coincidence of conditions in which all vital organic and elemental events occur. Is Tolstoy a fox, or a hedgehog? He acknowledges the complexity with which various events are linked – which is very fox-like. But he also seems convinced these events are so integrated with one another that nothing can change them. They're “predetermined” – a “coincidence of conditions.” A true hedgehog might have a simple explanation, such as that gravity caused the apple to fall. Tolstoy loved concrete facts and causes, such as the pull of gravity, yet still yearned to find some universal law that could be used to predict the future. According to Berlin: It is not merely that the fox knows many things. The fox accepts that he can only know many things and that the unity of reality must escape his grasp. And this was Tolstoy's downfall. Early in his life, he presented profound insights about the world through novels such as War and Peace and Anna Karenina. That was very fox-like. 
Later in his life, he struggled to condense his deep knowledge about the world and human behavior into overarching theories about moral and ethical issues. As Berlin once wrote to a friend, Tolstoy was "a fox who terribly believed in hedgehogs and wished to vivisect himself into one." Other hedgehogs and foxes in Berlin's essay Other thinkers Berlin classifies as foxes include Aristotle, Goethe, and Shakespeare. Other thinkers Berlin classifies as hedgehogs include Dante, Dostoevsky, and Plato. What does the hedgehog/fox dichotomy have to do with the animals? What does knowing many things have to do with actual foxes? What does knowing one big thing have to do with actual hedgehogs? A fox is nimble and clever. It can run fast, climb trees, dig holes, swim across rivers, stalk prey, or hide from predators. A hedgehog mostly relies upon its ability to roll into a ball and ward off intruders. Foxes tell the future, hedgehogs get credit What are the consequences of being a fox or a hedgehog? According to Phil Tetlock, foxes are better at telling the future, while hedgehogs get more credit for telling the future. In Tetlock's 2005 book, Expert Political Judgment, he shared his findings from forecasting tournaments he held in the 1980s and 90s. Experts made 30,000 predictions about political events such as wars, economic growth, and election results. Then Tetlock tracked the performances of those predictions. What he found led to the U.S. intelligence community holding forecasting tournaments, tracking more than one million forecasts. Tetlock's own Good Judgment Project won the forecasting tournament, outperforming even intelligence analysts with access to classified data. Better a fox than an expert These forecasting tournaments have shown that whether someone can make accurate predictions about the future doesn't depend upon their field of expertise, their status within the field, their political affiliation, or philosophical beliefs. It doesn't matter if you're a political scientist, a journalist, a historian, or have experience implementing policies. As the intelligence community's forecasting tournaments have shown, it doesn't even matter if you have access to classified information. What matters is your style of reasoning: Foxes make more accurate predictions than hedgehogs. Across the board, experts were barely better than chance at predicting what would or wouldn't happen. Will a new tax plan spur or slow the economy? Will the Cold War end? Will Iran run a nuclear test? Generally, it didn't matter if they were an economist, an expert on the Soviet Union, or a political scientist. That didn't guarantee they'd be better than chance at predicting what would happen. What did matter is whether they thought like a fox. Foxes are: inductive, open-minded, less-biased Foxes are skeptical of grand schemes – the sort of "theories of everything" Tolstoy had hoped to construct. They didn't see predicting events as a top-down, deductive process. They saw it as a bottom-up, inductive process – stitching together diverse and conflicting sources of information. Foxes were curious and open-minded. They didn't go with the tribe. A liberal fox would be more open to thinking the Cold War could have gone on longer with a second Carter administration. A conservative fox would be more open to believing the Cold War could have ended just as quickly under Carter as it did under Reagan. Foxes were less prone to hindsight bias – less likely to remember their inaccurate predictions as accurate. 
They were less prone to the bias of cognitive conservatism – maintaining their beliefs after making an inaccurate prediction. As one fox said: Whenever I start to feel certain I am right... a little voice inside tells me to start worrying. —A "fox" Hedgehogs are: deductive, close-minded, more-biased (yet more successful) As for inaccurate predictions, one simple test tracked with whether an expert made accurate predictions: a Google search. If an expert was more famous – as evinced by having more results show up on Google when searching their name – they tended to be less accurate. Think about the talking-head people that get called onto MSNBC or Fox News (pun, albeit inaccurate, not intended) to make quick comments on the economy, wars, and elections – those people. Experts who made more media appearances, and got more gigs consulting with governments and businesses, were actually less accurate at making predictions than their colleagues who were toiling in obscurity. And these experts who were more successful – in terms of media appearances and consulting gigs – also tended to be hedgehogs. Hedgehogs see making predictions as a top-down deductive process. They're more likely to make sweeping generalizations. They take the "one big thing" they know – say, being an expert on the Soviet Union – and view everything through that lens. Even if it's to explain something in other domains. Hedgehogs are more-biased about the world, and about themselves. They were more likely than foxes to remember inaccurate predictions they had made, as accurate. They were more likely to remember as inaccurate, predictions their opponents made that were accurate. Rather than changing when presented with challenging evidence, hedgehogs' beliefs got stronger. Are hedgehogs playing a different game? It's tempting to take that and run with it: The close-minded hedgehogs of the world are inaccurate. Success doesn't track with skill. Tetlock is careful to caution that hedgehogs aren't always worse than foxes at telling the future. Also, there are good reasons to be overconfident in predictions. As one hedgehog political pundit wrote to Tetlock: You play a publish-or-perish game run by the rules of social science.... You are under the misapprehension that I play the same game. I don't. I fight to preserve my reputation in a cutthroat adversarial culture. I woo dumb-ass reporters who want glib sound bites. —"Hedgehog" political pundit A hedgehog has a lot to gain from making bold predictions and being right, and nobody holds them accountable when they're wrong. But according to Tetlock, nothing in the data indicates hedgehogs and foxes are equally good forecasters who merely have different tastes for under- and over-prediction. As Tetlock says: Quantitative and qualitative methods converge on a common conclusion: foxes have better judgement than hedgehogs. —Phil Tetlock, Expert Political Judgment Hedgehogs may make better leaders As bad as hedgehogs look now, there are some real benefits to hedgehogs. They're more-focused. They don't get as distracted when a situation is ambiguous. So, hedgehogs are more decisive. They're harder to manipulate in a negotiation, and more willing to make controversial decisions that could make enemies. And that confidence can help them lead others. Overall, hedgehogs are better at getting their messages heard. Given the mechanics of media today, that means the messages we hear from either side of the political spectrum are those of the hedgehogs. 
Hedgehog thinking makes better sound bites, satisfies the human desire for clarity and certainty, and is easier for algorithms to categorize and distribute. The medium is the message, and nuance is cut out of the messages by the characteristics of the mediums. Which increases polarization. But, there is hope for the foxes. While the media landscape is still dominated by hedgehog messages that work as social media clips, there are more channels with more room for intellectually-honest discourse: blogs, podcasts, and books. And if many a ChatGPT conversation is any indication, the algorithms may get more sophisticated and remind us, “it's important to consider....” Hedgehogs, be foxes! And foxes, hedgehogs. If you're a hedgehog, you're lucky: What you have to say has a better chance of being heard. But it will have a better chance of being correct if you think like a fox once in a while: consider different angles, and assume you're wrong. If you're a fox, you have your work cut out for you: You may have important – and accurate – things to say, but they have less a chance of being heard. Your message will travel farther if you think like a hedgehog once in a while: assume you're right, cut out the asides, and say it with confidence. Image: Fox in the Reeds by Ohara Koson About Your Host, David Kadavy David Kadavy is author of Mind Management, Not Time Management, The Heart to Start and Design for Hackers. Through the Love Your Work podcast, his Love Mondays newsletter, and self-publishing coaching David helps you make it as a creative. Follow David on: Twitter Instagram Facebook YouTube Subscribe to Love Your Work Apple Podcasts Overcast Spotify Stitcher YouTube RSS Email New bonus content on Patreon! I've been adding lots of new content to Patreon. Join the Patreon » Show notes: https://kadavy.net/blog/posts/hedgehogs-foxes/
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Book summary: 'Why Intelligence Fails' by Robert Jervis, published by Ben Stewart on June 20, 2023 on The Effective Altruism Forum. Here's a summary of 'Why Intelligence Fails' by the political scientist Robert Jervis. It's a book analysing two cases where the U.S. intelligence community 'failed': being slow to foresee the 1979 Iranian Revolution, and the overconfident and false assessment that Saddam Hussein had weapons of mass destruction in 2003. I'm interested in summarising more books that contain valuable insights but are outside the typical EA canon. If you'd like more of this or have a book to suggest, let me know.

Key takeaways:
Good intelligence generally requires the relevant agency and country office to prioritise the topic and direct scarce resources to it.
Good intelligence in a foreign country requires a dedicated diplomatic and covert collection corps with language skills and contextual knowledge.
Intelligence analysis can be deficient in critical review, external expertise, and social-scientific methodology.
Access to classified information only generates useful insight for some phenomena.
Priors can be critical in determining interpretation within intelligence, and they can often go unchallenged.
Political pressure can have a significant effect on analysis, but is hard to pin down.
If the justification of an intelligence conclusion is unpublished, you can still interrogate it by asking: whether the topic would have been given sufficient priority and resources by the relevant intelligence organisation; whether classified information, if available, would be likely to yield insight; whether pre-existing beliefs are likely to bias analysis; whether political pressures could significantly affect analysis.
Some correctives to intelligence failures which may be useful to EA: demand sharp, explicit, and well-tracked predictions; demand early warning indicators, and notice when beliefs can only be disproven at a late stage; consider negative indicators - 'dogs that don't bark', i.e. things that the view implies should not happen; use critical engagement by peers and external experts, especially by challenging fundamental beliefs that influence what seems plausible and provide alternative hypotheses and interpretations; use red-teams, pre-mortems, and post-mortems.

Overall, I've found the book to somewhat demystify intelligence analysis. You should contextualise a piece of analysis with respect to the psychology and resources involved, including whether classified information would be of significant benefit. I have become more sceptical of intelligence, but the methodology of focusing on two known failures - selecting on the dependent variable - means that I hesitate to become too pessimistic about intelligence as a whole and as it functions today.

Why it's relevant to EA: The most direct application of this topic is to the improvement of institutional decision-making, but there is value for any cause area that depends on conducting or interpreting analysis of state and non-state adversaries, such as in biosecurity, nuclear war, or great power conflict. This topic may also contribute to the reader's sense of when and how much one should defer to the outputs of intelligence communities. Deference is motivated by their access to classified information and presumed analytic capability. 
However, Tetlock's ‘Expert Political Judgment' cast doubt on the value of classified information for improving prediction compared to generalist members of the public. Finally, assessments of the IC's epistemic practices might offer lessons for how an intellectual community should grapple with information hazards, both intellectually and socially. More broadly, the IC is an example of a group pursuing complex, decision-relevant analysis in a high-uncertainty environment. Their successes and ...
Welcome to the Social-Engineer Podcast: The Doctor Is In Series – where we will discuss understandings and developments in the field of psychology. In today's episode, Chris and Abbie are discussing: Conspiracy theories. They will talk about what makes a Conspiracy Theory and why we believe them. [May 1, 2023] 00:00 - Intro 00:17 - Dr. Abbie Maroño Intro 00:59 - Intro Links - Social-Engineer.com - http://www.social-engineer.com/ - Managed Voice Phishing - https://www.social-engineer.com/services/vishing-service/ - Managed Email Phishing - https://www.social-engineer.com/services/se-phishing-service/ - Adversarial Simulations - https://www.social-engineer.com/services/social-engineering-penetration-test/ - Social-Engineer channel on SLACK - https://social-engineering-hq.slack.com/ssb - CLUTCH - http://www.pro-rock.com/ - innocentlivesfoundation.org - http://www.innocentlivesfoundation.org/ 04:45 - The Topic of the Day: The TRUTH Behind Conspiracy Theories 05:54 - What is a Conspiracy Theory? 07:39 - What's the harm? 10:20 - WHY??? 11:17 - Pattern Seekers 13:15 - Cognitive Closure 17:04 - The Role of Critical Thinking 19:18 - An Existential Element 20:41 - Don't Forget the Lizards! 22:35 - What about Bigfoot? 24:30 - Escapism 30:15 - Reading the Emotions 32:29 - Social Motive 33:31 - Emotions vs Critical Thinking 36:42 - Prove Me Wrong! 39:09 - The Takeaway: Empathy 40:57 - Wrap Up & Outro - www.social-engineer.com - www.innocentlivesfoundation.org Find us online: - Twitter: https://twitter.com/abbiejmarono - LinkedIn: linkedin.com/in/dr-abbie-maroño-phd-35ab2611a - Twitter: https://twitter.com/humanhacker - LinkedIn: linkedin.com/in/christopherhadnagy References: Abalakina-Paap, M., Stephan, W. G., Craig, T., & Gregory, L. (1999). Beliefs in conspiracies. Political Psychology, 20, 637–647. Adams, G., O'Brien, L. T., & Nelson, J. C. (2006). Perceptions of racism in Hurricane Katrina: A liberation psychology analysis. Analyses of Social Issues and Public Policy, 6, 215–235. Bilewicz, M., Winiewski, M., Kofta, M., & Wójcik, A. (2013). Harmful ideas: The structure and consequences of anti-Semitic beliefs in Poland. Political Psychology, 34, 821–839. Bost, P. R., & Prunier, S. G. (2013). Rationality in conspiracy beliefs: The role of perceived motive. Psychological Reports, 113, 118–128. Crocker, J., Luhtanen, R., Broadnax, S., & Blaine, B. E. (1999). Belief in U.S. government conspiracies against Blacks among Black and White college students: Powerlessness or system blame? Personality and Social Psychology Bulletin, 25, 941–953. Dieguez, S., Wagner-Egger, P., & Gauvrit, N. (2015). Nothing happens by accident, or does it? A low prior for randomness does not explain belief in conspiracy theories. Psychological Science, 26(11), 1762–1770. https://doi.org/10.1177/0956797615598740 DiFonzo, N., Bordia, P., & Rosnow, R. L. (1994). Reining in rumors. Organizational Dynamics, 23(1), 47–62. https://doi.org/10.1016/0090-2616(94)90087-6 Douglas, K. M., & Leite, A. C. (2017). Suspicion in the workplace: Organizational conspiracy theories and work-related outcomes. British Journal of Psychology, 108, 486–506. Douglas, K. M., & Sutton, R. M. (2008). The hidden impact of conspiracy theories: Perceived and actual impact of theories surrounding the death of Princess Diana. 
Journal of Social Psychology, 148, 210–221. Douglas, K. M., Sutton, R. M., & Cichocka, A. (2017). The psychology of conspiracy theories. Current Directions in Psychological Science, 26(6), 538–542. Douglas, K. M., Sutton, R. M., Callan, M. J., Dawtry, R. J., & Harvey, A. J. (2016). Someone is pulling the strings: Hypersensitive agency detection and belief in conspiracy theories. Thinking & Reasoning, 22, 57–77. Douglas, K. M., Uscinski, J. E., Sutton, R. M., Cichocka, A., Nefes, T., Ang, C. S., & Deravi, F. (2019). Understanding conspiracy theories. Political Psychology, 40, 3–35. Keeley, B. L. (1999). Of conspiracy theories. The Journal of Philosophy, 96(3), 109–126. Kim, M., & Cao, X. (2016). The impact of exposure to media messages promoting government conspiracy theories on distrust in the government: Evidence from a two-stage randomized experiment. International Journal of Communication, 10(2016), 3808–3827. Retrieved from http://ijoc.org/index.php/ijoc/article/view/5127 Klein, C., Clutton, P., & Dunn, A. G. (2018). Pathways to conspiracy: The social and linguistic precursors of involvement in Reddit's conspiracy theory forum. Retrieved from psyarxiv.com/8vesf Nefes, T. S. (2017). The impacts of the Turkish Government's “interest rate lobby” theory about the Gezi Park Protests. Social Movement Studies, 16(5), 610–622. https://doi.org/10.1080/14742837.2017.1319269 Nera, K., Pantazi, M., & Klein, O. (2018). “These are just stories, Mulder”: Exposure to conspiracist fiction does not produce narrative persuasion. Frontiers in Psychology, 9. https://doi.org/10.3389/fpsyg.2018.00684 Swift, A. (2013). Majority in U.S. still believe JFK killed in a conspiracy. Retrieved from http://www.gallup.com/poll/165893/majority-believe-jfk-killed-conspiracy.aspx Tetlock, P. E. (2002). Social-functionalist frameworks for judgment and choice: The intuitive politician, theologian, and prosecutor. Psychological Review, 109, 451–472. Uscinski, J. E., & Parent, J. M. (2014). American conspiracy theories. New York, NY: Oxford University Press. Uscinski, J. E., Klofstad, C., & Atkinson, M. D. (2016). What drives conspiratorial beliefs? The role of informational cues and predispositions. Political Research Quarterly, 69, 57–71. van Prooijen, J.-W., & Acker, M. (2015). The influence of control on belief in conspiracy theories: Conceptual and applied extensions. Applied Cognitive Psychology, 29, 753–761. van Prooijen, J.-W., & Jostmann, N. B. (2013). Belief in conspiracy theories: The influence of uncertainty and perceived morality. European Journal of Social Psychology, 43, 109–115. Whitson, J. A., & Galinsky, A. D. (2008). Lacking control increases illusory pattern perception. Science, 322, 115–117.
In this episode of the Crime Lab COACH Cast, John Collins discusses the subject of implicit bias and explains why bias is not - and should not be - the focus of our attention. References Greenwald, A. G., Banaji, M. R., & Nosek, B. A. (2015). Statistically small effects of the Implicit Association Test can have societally large effects. Journal of Personality and Social Psychology, 108(4), 553-561. https://doi.org/10.1037/pspa0000016 Oswald, F. L., Mitchell, G., Blanton, H., Jaccard, J., & Tetlock, P. E. (2015). Predicting ethnic and racial discrimination: A meta-analysis of IAT criterion studies. Journal of Personality and Social Psychology, 108(4), 562-584. https://doi.org/10.1037/pspa0000010 Nosek, B. A., & Smyth, F. L. (2007). A multitrait-multimethod validation of the Implicit Association Test: Implicit and explicit attitudes are related but distinct constructs. Experimental Psychology, 54(1), 14-29. https://doi.org/10.1027/1618-3169.54.1.14 Blanton, H., Jaccard, J., González, P., & Christie, C. (2006). Decoding the Implicit Association Test: Implications for criterion prediction. Journal of Experimental Social Psychology, 42(2), 192-212. https://doi.org/10.1016/j.jesp.2005.04.004 Lane, K. A., Banaji, M. R., Nosek, B. A., & Greenwald, A. G. (2007). Understanding and using the Implicit Association Test: IV. What we know (so far). In B. Wittenbrink & N. Schwarz (Eds.), Implicit measures of attitudes: Procedures and controversies (pp. 59-102). Guilford Press. Please note that this is not an exhaustive list of studies and reviews related to IATs, and there may be other studies with different findings or perspectives on this topic.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Crosspost] ACX 2022 Prediction Contest Results, published by Scott Alexander on January 24, 2023 on LessWrong. Original here. Submission statement/relevance to Less Wrong: This forecasting contest confirmed some things we already believed, like that superforecasters can consistently outperform others, or the "wisdom of crowds" effect. It also found a surprising benefit of prediction markets over other aggregation methods, which might or might not be spurious. Several members of the EA and rationalist community scored highly, including one professional AI forecaster. But Less Wrongers didn't consistently outperform members of the general (ACX-reading, forecasting-competition-entering) population. Last year saw surging inflation, a Russian invasion of Ukraine, and a surprise victory for Democrats in the US Senate. Pundits, politicians, and economists were caught flat-footed by these developments. Did anyone get them right? In a very technical sense, the single person who predicted 2022 most accurately was a 20-something data scientist at Amazon's forecasting division. I know this because last January, along with amateur statisticians Sam Marks and Eric Neyman, I solicited predictions from 508 people. This wasn't a very creative or free-form exercise - contest participants assigned percentage chances to 71 yes-or-no questions, like “Will Russia invade Ukraine?” or “Will the Dow end the year above 35000?” The whole thing was a bit hokey and constrained - Nassim Taleb wouldn't be amused - but it had the great advantage of allowing objective scoring. Our goal wasn't just to identify good predictors. It was to replicate previous findings about the nature of prediction. Are some people really “superforecasters” who do better than everyone else? Is there a “wisdom of crowds”? Does the Efficient Markets Hypothesis mean that prediction markets should beat individuals? Armed with 508 people's predictions, can we do math to them until we know more about the future (probabilistically, of course) than any ordinary mortal? After 2022 ended, Sam and Eric used a technique called log-loss scoring to grade everyone's probability estimates. Lower scores are better. The details are hard to explain, but for our contest, guessing 50% for everything would give a score of 40.21, and complete omniscience would give a perfect score of 0. Here's how the contest went: As mentioned above: guessing 50% corresponds to a score of 40.2. This would have put you in the eleventh percentile (yes, 11% of participants did worse than chance). Philip Tetlock and his team have identified “superforecasters” - people who seem to do surprisingly well at prediction tasks, again and again. Some of Tetlock's picks kindly agreed to participate in this contest and let me test them. The median superforecaster outscored 84% of other participants. The “wisdom of crowds” hypothesis says that averaging many ordinary people's predictions produces a “smoothed-out” prediction at least as good as experts. That proved true here. An aggregate created by averaging all 508 participants' guesses scored at the 84th percentile, equaling superforecaster performance. There are fancy ways to adjust people's predictions before aggregating them that outperformed simple averaging in the previous experiments. 
Eric tried one of these methods, and it scored at the 85th percentile, barely better than the simple average. Crowds can beat smart people, but crowds of smart people do best of all. The aggregate of the 12 participating superforecasters scored at the 97th percentile. Prediction markets did extraordinarily well during this competition, scoring at the 99.5th percentile - ie they beat 506 of the 508 participants, plus all other forms of aggregation. But this is an unfair comparison: our participants were only allowed to spend five minut...
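The log-loss scoring described above is easy to sketch in a few lines. The following is my own illustration, not the contest's actual scoring code; the 58-question count below is an assumption chosen only so the 50%-guess baseline lands near the quoted ~40.2, since the write-up does not say how many of the 71 questions were ultimately scored.

```python
import math

def log_loss(forecasts):
    """Total log-loss over (probability_of_yes, resolved_yes) pairs.
    Lower is better; perfect confidence on every question scores 0."""
    total = 0.0
    for p, resolved_yes in forecasts:
        p = min(max(p, 1e-9), 1 - 1e-9)            # clip to avoid log(0)
        p_on_actual_outcome = p if resolved_yes else 1 - p
        total += -math.log(p_on_actual_outcome)
    return total

# Guessing 50% on every question costs ln(2) ~= 0.693 per question,
# however it resolves. With ~58 scored questions (an assumption, see above)
# that comes to ~40.2, in line with the baseline quoted in the blurb.
coin_flip_guesses = [(0.5, True)] * 30 + [(0.5, False)] * 28
print(round(log_loss(coin_flip_guesses), 2))        # ~40.2

# An omniscient forecaster scores (essentially) 0.
omniscient = [(1.0, True)] * 30 + [(0.0, False)] * 28
print(round(log_loss(omniscient), 2))               # 0.0
```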
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 10 Years of LessWrong, published by JohnBuridan on December 30, 2022 on LessWrong. [I appreciate that less wrong has a very strong norm against navel gazing. Let's keep it that way. The purpose of this post is merely to reflect upon and highlight valuable tools and mental habits in our toolkit.] The rationalsphere propelled me to where I am today by giving me tools that I wouldn't otherwise have developed. The concept handles that we have developed are numerous and useful, but here I am only talking about the underlying habits that dug up good ore for the wordsmiths. It was a typical path starting in 2012. Harry Potter and the Methods of Rationality led to LessWrong, led to Slate Star Codex. Combined they fed into game theory, from game theory economics, and from economics all worked towards a new way of thinking. On a different branch I went from Slate Star Codex to Tetlock and forecasting and participating in some studies and tournaments (I can't beat the market). On a different branch, I have followed the AI literature from mostly MIRI and Paul Christiano and John Wentworth. And along with the AI DLC in my mind has come “AI development watching” and AI learning systems. This has been great, and it has been standard, and I am sure many others have done the same. But it was also an atypical path. I was a classicist and philosopher, bouncing around on the back of a bus in Italy between archeological sites with my 17 inch laptop and 12 tabs of HPMOR and two tabs of LW reading on each journey. Not yet having any knowledge of either calculus or discrete math, nor any inkling of basic coding or incentive structures, I was a youngling in the art of rationality. But I was well on my way in the great books tradition. I read HPMOR and the Sequences angrily. Many of the ideas I found refreshing and so on target, and many more I found blasted, awful, un-nuanced, and wrong. Many ideas about human nature and philosophy of language, logic, and science I wrestled with time and time again - always coming back for more. (Some are still wrong, mind you). I had rejected psychology sophomore year of college on the grounds that several studies in our textbook obviously didn't show what they claimed to show, and with that rejection of psychology, I rejected the idea of the quantification of human behavior. But reverse stupidity is not intelligence. So it took about three years before I could be salvaged from that position. It took scores of late night arguments about the foundations of language, logic, math, and science. Those arguments were my gateway into the enterprise, and the LessWrong corpus fueled the fire of those discussions. LessWrong, from the Sequences, to the community content, to the broader rationalsphere has introduced me to tools and instilled in me habits that I otherwise would not have acquired. To those habits I attribute some of the extraordinary success I have had this past decade. Since I have the unique position of a humanities person coming into the sphere and falling in love with it, I think I have a valuable perspective on what mechanistic, psychological and quantitative tools are the highest leverage for a person initially hostile to the project. 
Or, to put it another way: while a child might be initially predisposed to certain habits of thought or pick up those cues from their culture, an outsider-turned-insider might have unique insight into which tools are most salvific for the average person. So I am going to outline the concentrated tools that, if turned into habits, significantly elevate one's sanity. These will be in the order in which I think they should be taught, not the order of importance or “foundationalness.” Think in terms of probabilities. It is hard to imagine a time when this wasn't obvious. But probabilistic thinking requires ...
Hard decisions are part of every aspect of our human life. In business, these are what shape the future of a company and what define its success if done right. We often praise the hiring process in a company, as it constitutes a decision based on precise forecasting and analysis. However, we often brush off the equally important decision of letting someone go, as we generally believe it should be done as easily as this: make a subjective decision, have an end-of-contract meeting, and empty a desk. What we don't realize is that letting go of an employee should be subject to the same amount of well-thought-out analysis, as it is as strategic for the company as hiring. Today, on the Melting Pot, we are joined by Annie Duke, an ex-professional poker player and author of two books, “Thinking in Bets” and “Quit”. The first makes a parallel between poker and business and covers ways in which we could bring the critical decision-making process from gambling into our entrepreneurial adventure. The second book is a gem that helps us know when to call it quits. More specifically, it helps decision-makers discover the neuroscience behind firing people, and how to do it right. She proposes a precise methodology to know when an employee is not a good fit anymore, and how to let them understand that it is time for them to quit. After earning the title of “The Duchess of Poker”, she now focuses on cognitive-behavioural decision science and coaches businesses in making the right decisions in their environment. Listen and download this fascinating episode in which Annie shares the journey that got her into coaching decision-makers and interesting concepts such as loss aversion, and aspects of human cognitive bias that can affect our forecasting. In today's episode:
The way our mind works when it comes to quitting things
The loss aversion bias in decision making
Specificity vs. Sensitivity in decision making
The ‘Thinking in Bets' book and playing poker
The role of luck in success
Links:
Annie Duke's Website
QUIT - The Power Of Knowing When To Walk Away
Annie's Other Books
Youtube Channel
Linkedin
Twitter
Book recommendations:
Philip E. Tetlock and Dan Gardner - Superforecasting
Brian Christian and Tom Griffiths - Algorithms to Live By
Alex Sangha - The Modern Thinker
Michael J. Mauboussin - The Success Equation: Untangling Skill and Luck in Business, Sports, and Investing
Daniel Kahneman - Thinking, Fast and Slow
Richard H. Thaler - Nudge
Katy Milkman - How to Change
Enjoyed the show? Leave Us a Review
We make a guest appearance on Nick Anyos' podcast to talk about effective altruism, longtermism, and probability. Nick (very politely) pushes back on our anti-Bayesian credo, and we get deep into the weeds of probability and epistemology. You can find Nick's podcast on institutional design here (https://institutionaldesign.podbean.com/), and his substack here (https://institutionaldesign.substack.com/?utm_source=substack&utm_medium=web&utm_campaign=substack_profile). We discuss: - The lack of feedback loops in longtermism - Whether quantifying your beliefs is helpful - Objective versus subjective knowledge - The difference between prediction and explanation - The difference between Bayesian epistemology and Bayesian statistics - Statistical modelling and when statistics is useful Links - Philosophy and the practice of Bayesian statistics (http://www.stat.columbia.edu/~gelman/research/published/philosophy.pdf) by Andrew Gelman and Cosma Shalizi - EA forum post (https://forum.effectivealtruism.org/posts/hqkyaHLQhzuREcXSX/data-on-forecasting-accuracy-across-different-time-horizons#Calibrations) showing all forecasts beyond a year out are uncalibrated. - Vaclav Smil quote where he predicts a pandemic by 2021: > The following realities indicate the imminence of the risk. The typical frequency of influenza pandemics was once every 50–60 years between 1700 and 1889 (the longest known gap was 52 years, between the pandemics of 1729–1733 and 1781–1782) and only once every 10–40 years since 1889. The recurrence interval, calculated simply as the mean time elapsed between the last six known pandemics, is about 28 years, with the extremes of 6 and 53 years. Adding the mean and the highest interval to 1968 gives a span between 1996 and 2021. We are, probabilistically speaking, very much inside a high-risk zone. > > - Global Catastrophes and Trends, p.46 Reference for Tetlock's superforecasters failing to predict the pandemic: "On February 20th, Tetlock's superforecasters predicted only a 3% chance that there would be 200,000+ coronavirus cases a month later (there were)." (https://wearenotsaved.com/2020/04/18/pandemic-uncovers-the-ridiculousness-of-superforecasting/) Contact us - Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani - Check us out on youtube at https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ - Come join our discord server! DM us on twitter or send us an email to get a supersecret link Errata - At the beginning of the episode Vaden says he hasn't been interviewed on another podcast before. He forgot his appearance (https://www.thedeclarationonline.com/podcast/2019/7/23/chesto-and-vaden-debatecast) on The Declaration Podcast in 2019, which will be appearing as a bonus episode on our feed in the coming weeks. Sick of hearing us talk about this subject? Understandable! Send topic suggestions over to incrementspodcast@gmail.com. Photo credit: James O'Brien (http://www.obrien-studio.com/) for Quanta Magazine (https://www.quantamagazine.org/where-quantum-probability-comes-from-20190909/)
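As a quick sanity check on the back-of-the-envelope arithmetic in the Smil quote above, here is a tiny sketch of my own that uses only the figures quoted there:

```python
# Figures quoted from Smil: last pandemic in 1968, mean recurrence
# interval of ~28 years, longest observed interval of 53 years.
last_pandemic = 1968
mean_interval, max_interval = 28, 53

high_risk_window = (last_pandemic + mean_interval, last_pandemic + max_interval)
print(high_risk_window)  # (1996, 2021), the span Smil calls a high-risk zone
```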
Everyone wants to see further into the future, whether they are buying stocks, drafting policy, launching a new product, or simply planning the week's menu. Unfortunately, most of us are poor forecasters, and our guesses are only slightly more accurate than blind luck. In the audiobook Siêu Dự Báo (Superforecasting), Tetlock and Gardner have produced a masterwork on forecasting, distilled from decades of research on massive government-funded forecasting projects. Their project recruited very ordinary participants - from filmmakers and plumbers to dancers - who nonetheless demonstrated forecasting ability that beat prediction-market benchmarks and even intelligence analysis backed by top-secret access. This is an audiobook worth hearing for anyone who wants to understand how to make more accurate, more logical decisions and forecasts.--About Fonos: Fonos is a licensed audiobook app. On Fonos, you can listen to audiobook editions of the best-known books by Vietnamese and international authors. You also get free access to Premium content when you sign up as a Fonos Member: book summaries, ebooks, meditation, sleep stories, theme music, and free audiobooks for Members.--Download the Fonos app at: https://fonos.app.link/tai-fonos Learn more about Fonos: https://fonos.vn/ Follow Fonos on Facebook: https://www.facebook.com/fonosvietnam/ Follow Fonos on Instagram: https://www.instagram.com/fonosvietnam/ Read interesting articles about books, authors, and useful tips for personal growth: http://blog.fonos.vn/
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Dan Luu: Futurist prediction methods and accuracy, published by Linch on September 15, 2022 on The Effective Altruism Forum. tl;dr: Dan Luu has a detailed post where he tracks past predictions and argues that, contra Karnofsky, Arb, etc., the track record of futurists is overall quite bad. Relevantly to this audience, he further argues that this is evidence against the validity of current longtermist efforts in long-range predictions. (I have not finished reading the post). I've been reading a lot of predictions from people who are looking to understand what problems humanity will face 10-50 years out (and sometimes longer) in order to work in areas that will be instrumental for the future and wondering how accurate these predictions of the future are. The timeframe of predictions that are so far out means that only a tiny fraction of people making those kinds of predictions today have a track record so, if we want to evaluate which predictions are plausible, we need to look at something other than track record. The idea behind the approach of this post was to look at predictions from an independently chosen set of predictors (Wikipedia's list of well-known futurists1) whose predictions are old enough to evaluate in order to understand which prediction techniques worked and which ones didn't work, allowing us to then (mostly in a future post) evaluate the plausibility of predictions that use similar methodologies. Unfortunately, every single predictor from the independently chosen set had a poor record and, on spot checking some predictions from other futurists, it appears that futurists often have a fairly poor track record of predictions so, in order to contrast techniques that worked with techniques that didn't, I sourced predictors that have a decent track record from my memory, a non-independent source which introduces quite a few potential biases. Something that gives me more confidence than I'd otherwise have is that I avoided reading independent evaluations of prediction methodologies until after I did the evaluations for this post and wrote 98% of the post and, on reading other people's evaluations, I found that I generally agreed with Tetlock's "Superforecasting" on what worked and what didn't work despite using a wildly different data set. In particular, people who were into "big ideas" who use a few big hammers on every prediction combined with a cocktail party idea level of understanding of the particular subject to explain why a prediction about the subject would fall to the big hammer generally fared poorly, whether or not their favored big ideas were correct. Some examples of "big ideas" would be "environmental doomsday is coming and hyperconservation will pervade everything", "economic growth will create near-infinite wealth (soon)", "Moore's law is supremely important", "quantum mechanics is supremely important", etc. Another common trait of poor predictors is lack of anything resembling serious evaluation of past predictive errors, making improving their intuition or methods impossible (unless they do so in secret). Instead, poor predictors often pick a few predictions that were accurate or at least vaguely sounded similar to an accurate prediction and use those to sell their next generation of predictions to others. 
By contrast, people who had (relatively) accurate predictions had a deep understanding of the problem and also tended to have a record of learning lessons from past predictive errors. Due to the differences in the data sets between this post and Tetlock's work, the details are quite different here. The predictors that I found to be relatively accurate had deep domain knowledge and, implicitly, had access to a huge amount of information that they filtered effectively in order to make good predictions. Tetlock was studying peo...
You can listen to this conversation on Spotify, Apple, anchor (and via RSS) or find a full transcript at Compound.
“This is the nature of what we do. It's the intersection of business and people and psychology and sociology and numbers. There's a lot of stuff that's always going on that makes sure you never have the game beat, never.”
This past June I had the opportunity to interview Michael Mauboussin. I tremendously enjoyed this conversation and I believe it captures Michael's deep curiosity and passion about investing, business, the research process, and being a multi-disciplinary learner.
At the time I published a full transcript at Compound. I am happy that I can now share the audio version.
I assume many of you are familiar with his work. For an easy introduction check out this 2021 profile. Another excellent piece is his Reflections on the Ten Attributes of Great Investors, which incorporates many of his key frameworks. And be sure to check out his new website with a library of his collected writings.
If you're looking for an all-in-one solution to manage your personal finances, Compound can help. The firm can help diversify concentrated stock positions, optimize company equity, plan asset allocation, and more. You can sign up for access here. For more information, please check out further disclosures here.
“Most investors act as if their task is to figure out a stock's value and then to compare that value to the price. Our approach reverses this mindset. We start with the only thing we know for sure — the price — and then assess what has to happen to realize an attractive return. … The most important question in investing is what is discounted, or put slightly differently, what are the expectations embedded in the valuation?”
The below are some of my favorite highlights. You can listen to the conversation on Spotify, Apple, at anchor, and via RSS or find a full transcript at Compound.
Druckenmiller, Soros, and position sizing
* “When you observe very successful people over very long periods of time in these probabilistic fields, they tend to have certain attributes that are worth all of us paying attention to.”
* “Here we have George Soros and Stanley Druckenmiller, two legendary investors, who say that [position sizing] is the main thing that drives their returns and results over a long period of time. Whereas we look at the real world, we find that most people don't create a lot of value from sizing and it's all security selection. The question is can we bring those things together to some degree?”
Analysts and portfolio managers:
* “A very good portfolio manager will be able to focus on the two or three issues that matter most for a particular company. And they're very good at identifying those and honing in on those.”
* “There was a letter from Seth Klarman at Baupost to his shareholders. He said, we aspire to the idea that if you lifted the roof off our organization and peered in and saw our investors operating, that they would be doing precisely what you thought they would be doing, given what we've said we're going to do. It's this idea of congruence.”
Holding Amazon for two decades
* “I first learned about this company from Bill Gurley, who at the time was part of the underwriting team at Deutsche Bank who did the IPO. 
Bill just said, you should meet these guys because the way they think about things, even though this is a completely nascent industry doing completely different stuff, the language they're using is the language you're going to be familiar with.”
* “In the late 1990s, I met Jeff Bezos and Joy Covey, the CFO. … Joy would just say to me, we're big fans of Warren Buffett and Charlie Munger. We think about return on capital. We think long term. We're making investments that appear to be bad, but when you pencil out the numbers, we think we're going to generate really attractive returns. I bought into that.”
* “I was very influenced by a wonderful book by Carlota Perez that came out probably in the early 2000s where she talks about the interplay between technological revolutions and financial capital. One of the points she made was, it's often the case that the hard work happened after the financial bust.”
On feedback, learning, and teams of superforecasters (aka investors)
* “In every domain elite performers tend to practice. Every sports team practices, every musician practices, every comedian practices. What is practice in investment management? How much time should we be allocating to that?”
* “The investment management industry is an industry that draws a lot of really smart people. It's a very competitive, interesting field. It's remarkable in the sense that feedback is very difficult to attain. In the long run it's portfolio performance and so on. But in the short run it's very, very difficult to do.”
* “There's a distinction between intelligence quotient and rationality quotient, which is the ability to make good decisions. Along with some of his colleagues he developed a specific test to measure rationality. And if you look at the subcomponents of that test, it seems really consistent with what we would care about as investors.”
* “When I say elite teams, or when Tetlock talked about elite teams, this is elite teams in superforecasting. So these are the best of the forecasters working together. There are three important things. How big should it be? How do we compose the team? The third and final piece is how you manage the group. And this is usually where the mistakes happen.”
Lessons for operators from his book Expectations Investing
* “Executives of public companies in particular should absolutely understand the expectations priced into their stock. The first reason is that if they believe something that the market doesn't seem to be pricing in, they have a communication opportunity.”
* “Very few executives really understand how capital markets work. This is almost like our analyst portfolio manager conversation. When you get to that seat, all of a sudden you have responsibilities and skills that become important that you may not have ever dealt with before.”
* “Understanding what has to happen for today's price to make sense is just such a fundamentally attractive proposition. And then evaluating whether you think that those growth rates in sales and profit margins and capital intensity and return on capital that's implied, whether those things are plausible or not, it just makes enormous sense as an approach.”
Thank you, Michael!
“To be a great teacher, an effective teacher, it's about being a great student, a great learner yourself. I think that comes through if you're doing it well.”
This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit alchemy.substack.com/subscribe
Yogi Berra once said, "It's tough to make predictions, especially about the future." Philip Tetlock joins Vasant Dhar in episode 31 of Brave New World to discuss what superforecasters do consistently well -- and how we can improve our judgement and decision-making. Useful resources: 1. Superforecasting: The Art and Science of Prediction -- Philip Tetlock and Dan Gardner. 2. Expert Political Judgment: How Good Is It? How Can We Know? -- Philip Tetlock. 3. Daniel Kahneman on How Noise Hampers Judgement -- Episode 21 of Brave New World. 4. Everything Is Obvious: *Once You Know the Answer -- Duncan Watts. 5. The Hedgehog and the Fox -- Isaiah Berlin. 6. What do forecasting rationales reveal about thinking patterns of top geopolitical forecasters? -- Christopher W. Karvetski, Carolyn Meinel, Daniel T. Maxwell, Yunzi Lu, Barbara A. Mellers and Philip E. Tetlock. 7. Terry Odean on How to Think about Investing -- Episode 23 of Brave New World. 8. Reciprocal Scoring: A Method for Forecasting Unanswerable Questions -- Ezra Karger, Joshua Monrad, Barb Mellers and Philip Tetlock. 9. The Signal and the Noise -- Nate Silver. 10. FiveThirtyEight.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A ranked list of all EA-relevant (audio)books I've read , published by MichaelA on the AI Alignment Forum. Or: "50+ EA-relevant books your doctor doesn't want you to know about" This post lists all the EA-relevant books I've read since learning about EA,[1] in roughly descending order of how useful I perceive/remember them being to me. (In reality, I mostly listened to these as audiobooks, but I'll say "books I've read" for simplicity.) I also include links to where you can get each book, as well as remarks and links to reviews/summaries/notes on some books. This is not quite a post of book recommendations, because: These rankings are of course only weak evidence of how useful you'll find these books[2] I list all EA-relevant books I've read, including those that I didn't find very useful Let me know if you want more info on why I found something useful or not so useful. I'd welcome comments which point to reviews/summaries/notes of these books, provide commenters' own thoughts on these books, or share other book recommendations/anti-recommendations. I'd also welcome people making their own posts along the lines of this one. (Edit: I think that recommendations that aren't commonly mentioned in EA are particularly valuable, holding general usefulness and EA-relevance constant. Same goes for recommendations of books by non-male, non-white, and/or non-WEIRD authors. See this comment thread.) I'll continue to update this post as I finish more EA-relevant books. My thanks to Aaron Gertler for sort-of prompting me to make this list, and then later suggesting I change it from a shortform to a top-level post. The list Or: "Michael admits to finding a Harry Potter fan fiction more useful than ~15 books that were written by professors, are considered classics, or both" The Precipice, by Ord, 2020 See here for a list of things I've written that summarise, comment on, or take inspiration from parts of The Precipice. I recommend reading the ebook or physical book rather than audiobook, because the footnotes contain a lot of good content and aren't included in the audiobook The book Superintelligence may have influenced me more, but that's just due to the fact that I read it very soon after getting into EA, whereas I read The Precipice after already learning a lot. I'd now recommend The Precipice first. See here for some thoughts on this and other nuclear-risk-related books, and here for some thoughts on this and other authoritarianism-related books. 
Superforecasting, by Tetlock & Gardner, 2015
How to Measure Anything, by Hubbard, 2011
Rationality: From AI to Zombies, by Yudkowsky, 2006-2009
I.e., “the sequences”
Superintelligence, by Bostrom, 2014
Maybe this would've been a little further down the list if I'd already read The Precipice
Expert Political Judgment, by Tetlock, 2005
I read this after having already read Superforecasting, yet still found it very useful
Normative Uncertainty, by MacAskill, 2014
This is actually a thesis, rather than a book
I assume it's now a better idea to read MacAskill, Bykvist, and Ord's book on the same subject, which is available as a free PDF
Though I haven't read the book version myself
Secret of Our Success, by Henrich, 2015
See also this interesting Slate Star Codex review
The WEIRDest People in the World: How the West Became Psychologically Peculiar and Particularly Prosperous, by Henrich, 2020
See also the Wikipedia page on the book, this review on LessWrong, and my notes on the book. I rank Secret of Our Success as more useful to me, but that may be partly because I read it first; if I only read either this book or Secret of Our Success, I'm not sure which I'd find more useful. See here for some thoughts on this and other authoritarianism-related books.
The Strategy of Conflict, by Schelling, 1960
See here for my notes on this book, and h...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Use resilience, instead of imprecision, to communicate uncertainty, published by Gregory_Lewis on the AI Alignment Forum. BLUF: Suppose you want to estimate some important X (e.g. risk of great power conflict this century, total compute in 2050). If your best guess for X is 0.37, but you're very uncertain, you still shouldn't replace it with an imprecise approximation (e.g. "roughly 0.4", "fairly unlikely"), as this removes information. It is better to offer your precise estimate, alongside some estimate of its resilience, either subjectively ("0.37, but if I thought about it for an hour I'd expect to go up or down by a factor of 2"), or objectively ("0.37, but I think the standard error for my guess is ~0.1"). 'False precision' Imprecision often has a laudable motivation - to avoid misleading your audience into relying on your figures more than they should. If 1 in 7 of my patients recover with a new treatment, I shouldn't just report this proportion, without elaboration, to 5 significant figures (14.286%). I think a similar rationale is often applied to subjective estimates (forecasting most salient in my mind). If I say something like "I think there's a 12% chance of the UN declaring a famine in South Sudan this year", this could imply my guess is accurate to the nearest percent. If I made this guess off the top of my head, I do not want to suggest such a strong warranty - and others might accuse me of immodest overconfidence ("Sure, Nostradamus - 12% exactly"). Rounding off to a number ("10%"), or just a verbal statement ("pretty unlikely") seems both more reasonable and defensible, as this makes it clearer I'm guessing. In praise of uncertain precision One downside of this is that natural language has a limited repertoire to communicate degrees of uncertainty. Sometimes 'round numbers' are not meant as approximations: I might mean "10%" to be exactly 10% rather than roughly 10%. Verbal riders (e.g. roughly X, around X, X or so, etc.) are ambiguous: does roughly 1000 mean one is uncertain about the last three digits, or the first, or how many digits in total? Qualitative statements are similar: people vary widely in their interpretation of words like 'unlikely', 'almost certain', and so on. The greatest downside, though, is precision: you lose half the information if you round percents to per-tenths. If, as is often the case in EA-land, one is constructing some estimate 'multiplying through' various subjective judgements, there could also be significant 'error carried forward' (cf. premature rounding). If I'm assessing the value of famine prevention efforts in South Sudan, rounding status quo risk to 10% versus 12% infects downstream work with a 1/6th directional error. There are two natural replies one can make. Both are mistaken. High precision is exactly worthless First, one can deny the more precise estimate is any more accurate than the less precise one. Although maybe superforecasters could expect 'rounding to the nearest 10%' would harm their accuracy, others thinking the same are just kidding themselves, so nothing is lost. One may also have in mind some of Tetlock's remarks about how 'rounding off' mediocre forecasters doesn't harm their scores, as opposed to the best. I don't think this is right. 
Combining the two relevant papers (1, 2), you see that everyone, even mediocre forecasters, has significantly worse Brier scores if you round them into seven bins. Non-superforecasters do not see a significant loss if rounded to the nearest 0.1. Superforecasters do see a significant loss at 0.1, but not if you round more tightly to 0.05. Type 2 error (i.e. rounding in fact leads to worse accuracy, but we do not detect it statistically), rather than the returns to precision falling to zero, seems a much better explanation. In principle: If a measure ...
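To make the rounding discussion concrete, here is a small Monte Carlo sketch of my own (it assumes an idealised, perfectly calibrated forecaster; it is not a reproduction of the cited papers' data or methods). Collapsing probabilities into seven bins makes the Brier score visibly worse, while rounding to the nearest 0.1 adds only a sliver; small enough that a modest real-world sample could easily fail to detect it, which is the Type 2 error point above.

```python
import numpy as np

rng = np.random.default_rng(0)

def brier(preds, outcomes):
    """Mean squared error between stated probabilities and 0/1 outcomes (lower is better)."""
    return np.mean((preds - outcomes) ** 2)

def round_to_bins(p, n_bins):
    """Snap each probability to the nearest of n_bins equally spaced values in [0, 1]."""
    grid = np.linspace(0, 1, n_bins)
    return grid[np.abs(p[:, None] - grid[None, :]).argmin(axis=1)]

# A perfectly calibrated forecaster: the stated probability equals the true
# probability of each event (an illustrative assumption).
true_p = rng.uniform(0, 1, size=200_000)
outcomes = rng.random(size=true_p.size) < true_p

exact = brier(true_p, outcomes)
seven_bins = brier(round_to_bins(true_p, 7), outcomes)
tenths = brier(np.round(true_p, 1), outcomes)

print(f"exact probabilities : {exact:.4f}")
print(f"seven bins          : {seven_bins:.4f}")  # clearly worse
print(f"nearest 0.1         : {tenths:.4f}")      # only slightly worse; easy to miss in a small sample
```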
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some learnings I had from forecasting in 2020, published by Linch on the AI Alignment Forum. Crossposted from my own short-form. Here are some things I've learned from spending a decent fraction of the last 6 months either forecasting or thinking about forecasting, with an eye towards beliefs that I expect to be fairly generalizable to other endeavors. Before reading this post, I recommend brushing up on Tetlock's work on (super)forecasting, particularly Tetlock's 10 commandments for aspiring superforecasters. 1. Forming (good) outside views is often hard but not impossible. I think there is a common belief/framing in EA and rationalist circles that coming up with outside views is easy, and the real difficulty is a) originality in inside views, and also b) a debate of how much to trust outside views vs inside views. I think this is directionally true (original thought is harder than synthesizing existing views) but it hides a lot of the details. It's often quite difficult to come up with and balance good outside views that are applicable to a situation. See Manheim and Muehlhauser for some discussions of this. 2. For novel out-of-distribution situations, "normal" people often trust centralized data/ontologies more than is warranted. See here for a discussion. I believe something similar is true for trust of domain experts, though this is more debatable. 3. The EA community overrates the predictive validity and epistemic superiority of forecasters/forecasting. (Note that I think this is an improvement over the status quo in the broader society, where by default approximately nobody trusts generalist forecasters at all.) I've had several conversations where EAs will ask me to make a prediction, I'll think about it a bit and say something like "I dunno, 10%?" and people will treat it like a fully informed prediction to make decisions about, rather than just another source of information among many. I think this is clearly wrong. I think in almost any situation where you are a reasonable person and you spent 10x (sometimes 100x or more!) time thinking about a question than I have, you should just trust your own judgments much more than mine on the question. To a first approximation, good forecasters have three things: 1) They're fairly smart. 2) They're willing to actually do the homework. 3) They have an intuitive sense of probability. This is not nothing, but it's also pretty far from everything you want in an epistemic source. 4. The EA community overrates Superforecasters and Superforecasting techniques. I think the types of questions and responses Good Judgment is interested in is a particular way to look at the world. I don't think it is always applicable (easy EA-relevant example: your Brier score is basically the same if you give 0% for 1% probabilities, and vice versa), and it's bad epistemics to collapse all of the "figure out the future in a quantifiable manner" to a single paradigm. Likewise, I don't think there's a clear dividing line between good forecasters and GJP-certified Superforecasters, so many of the issues I mentioned in #3 are just as applicable here. 
I'm not sure how to collapse all the things I've learned on this topic in a few short paragraphs, but the tl;dr is that I trusted superforecasters much more than I trusted other EAs before I started forecasting stuff, and now I consider their opinions and forecasts "just" an important overall component to my thinking, rather than a clear epistemic superior to defer to. 5. Good intuitions are really important. I think there's a Straw Vulcan approach to rationality where people think "good" rationality is about suppressing your System 1 in favor of clear th...
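The parenthetical above about 0% versus 1% is a two-line expected-value calculation. A minimal sketch of my own (not from the post) shows why the Brier score barely distinguishes the two:

```python
def expected_brier(forecast_p: float, true_p: float) -> float:
    """Expected Brier contribution of one forecast on an event whose true probability is true_p."""
    return true_p * (forecast_p - 1) ** 2 + (1 - true_p) * forecast_p ** 2

# For an event that actually happens 1% of the time, forecasting 0% and
# forecasting 1% earn almost identical Brier penalties...
print(expected_brier(0.00, 0.01))  # 0.0100
print(expected_brier(0.01, 0.01))  # ~0.0099
# ...whereas a log score treats a 0% forecast as unboundedly bad
# whenever the event does occur.
```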
Can we predict the future more accurately?It's a question we humans have grappled with since the dawn of civilization — one that has massive implications for how we run our organizations, how we make policy decisions, and how we live our everyday lives.It's also the question that Philip Tetlock, a psychologist at the University of Pennsylvania and a co-author of “Superforecasting: The Art and Science of Prediction,” has dedicated his career to answering. In 2011, he recruited and trained a team of ordinary citizens to compete in a forecasting tournament sponsored by the U.S. intelligence community. Participants were asked to place numerical probabilities from 0 to 100 percent on questions like “Will North Korea launch a new multistage missile in the next year” and “Is Greece going to leave the eurozone in the next six months?” Tetlock's group of amateur forecasters would go head-to-head against teams of academics as well as career intelligence analysts, including those from the C.I.A., who had access to classified information that Tetlock's team didn't have.The results were shocking, even to Tetlock. His team won the competition by such a large margin that the government agency funding the competition decided to kick everyone else out, and just study Tetlock's forecasters — the best of whom were dubbed “superforecasters” — to see what intelligence experts might learn from them.So this conversation is about why some people, like Tetlock's “superforecasters,” are so much better at predicting the future than everyone else — and about the intellectual virtues, habits of mind, and ways of thinking that the rest of us can learn to become better forecasters ourselves. It also explores Tetlock's famous finding that the average expert is roughly as accurate as “a dart-throwing chimpanzee” at predicting future events, the inverse correlation between a person's fame and their ability to make accurate predictions, how superforecasters approach real-life questions like whether robots will replace white-collar workers, why government bureaucracies are often resistant to adopt the tools of superforecasting and more.Mentioned:Expert Political Judgment by Philip E. Tetlock“What do forecasting rationales reveal about thinking patterns of top geopolitical forecasters?” by Christopher W. Karvetski et al.Book recommendations:Thinking, Fast and Slow by Daniel KahnemanEnlightenment Now by Steven PinkerPerception and Misperception in International Politics by Robert JervisThis episode is guest-hosted by Julia Galef, a co-founder of the Center for Applied Rationality, host of the “Rationally Speaking” podcast and author of “The Scout Mindset: Why Some People See Things Clearly and Others Don't.” You can follow her on Twitter @JuliaGalef. (Learn more about the other guest hosts during Ezra's parental leave here.)Thoughts? Guest suggestions? Email us at ezrakleinshow@nytimes.com.You can find transcripts (posted midday) and more episodes of "The Ezra Klein Show" at nytimes.com/ezra-klein-podcast, and you can find Ezra on Twitter @ezraklein. Book recommendations from all our guests are listed at https://www.nytimes.com/article/ezra-klein-show-book-recs.“The Ezra Klein Show” is produced by Annie Galvin, Jeff Geld and Rogé Karma; fact-checking by Michelle Harris; original music by Isaac Jones; mixing by Jeff Geld; audience strategy by Shannon Busta. Special thanks to Kristin Lin and Alison Bruzek.
Itai is an intelligent, hard-working, impulsive, critical, stubborn, and envious person. Would you hire him onto your team? And what about Gilad, who is an envious, stubborn, critical, impulsive, hard-working, and intelligent person - would you employ him?... We dedicated this episode to several interesting phenomena whose common denominator is primacy. A few of the questions we discussed during the episode: How quickly do we form a first impression? Can you predict whether a couple will divorce just by watching them for a few minutes? How do we judge people who wear glasses? What is the five-second rule in website design? And what influences how we choose wine at a restaurant? By the way, we assume that by this point you have already made an (unconscious) decision about whether or not to listen to this episode (and we can only hope you chose correctly :)).~~~
Welcome back to the Global Guessing Weekly Podcast. This week we are joined by David McCullough, Managing Director of Government Operations and Superforecaster at Good Judgment Inc. Prior to joining Good Judgment, David was an underwater archaeologist for over twenty years after receiving his Ph.D. in Maritime Archaeology from the University of Glasgow. In this week's episode, we talked to David about his background in archaeology and the ways in which his training helped him become an elite forecaster. Afterwards, we discussed the importance of creating good forecasting questions and the qualities associated with them. We also chatted with David about the importance of pre-mortem analysis and the roadblocks hindering government and private-sector adoption of forecasting and the principles outlined in Tetlock and Gardner's Superforecasting. We really enjoyed speaking with David, finding his answers thoughtful and insightful. If you did as well, make sure to check out Good Judgment's upcoming Superforecasting Workshop on December 8th and 9th at 12:00 - 2:30pm EST.
Philip Tetlock has been arguing for a while that experts are horrible at prediction, but that his superforecasters do much better. If that's the case, how did they do with respect to the fall of Afghanistan? As far as I can tell, they didn't make any predictions on how long the Afghan government would last. Or they did make predictions, and they were just as wrong as everyone else, and they've buried them. In light of this, I thought it was time to revisit the limitations and distortions inherent in Tetlock's superforecasting project.
We're back! Apologies for the delay, but Vaden got married and Ben was summoned to be an astronaut on the next billionaire's vacation to Venus. This week we're talking about how to forecast the future (with this one simple and easy trick! Astrologers hate them!). Specifically, we're diving into Philip Tetlock's work on Superforecasting (https://en.wikipedia.org/wiki/Superforecasting:_The_Art_and_Science_of_Prediction). So what's the deal? Is it possible to "harness the wisdom of the crowd to forecast world events" (https://en.wikipedia.org/wiki/The_Good_Judgment_Project)? Or is the whole thing just a result of sloppy statistics? We believe the latter is likely to be true with probability 64.9% - no, wait, 66.1%. Intro segment: "The Sentience Debate": The moral value of shrimps, insects, and oysters (https://www.facebook.com/103405457813911/videos/254164216090604) Relevant timestamps: 10:05: "Even if there's only a one in one hundred chance, or one in one thousand chance, that insects are sentient given current information, and if we're killing trillions or quadrillions of insects in ways that are preventable or avoidable or that we can in various ways mitigate that harm... then we should consider that possibility." 25:47: "If you're all going to work on pain in invertebrates, I pity you in many respects... In my previous work, I was used to running experiments and getting a clear answer, and I could say what these animals do and what they don't do. But when I started to think about what they might be feeling, you meet this frustration, that after maybe about 15 years of research, if someone asks me do they feel pain, my answer is 'maybe'... a strong 'maybe'... you cannot discount the possibility." 46:47: "It is not 100% clear to me that plants are non sentient. I do think that animals including insects are much more likely to be sentient than plants are, but I would not have a credence of zero that plants are sentient." 1:01:59: "So the hard problem I would like to ask the panel is: If you were to compare the moral weight of one ant to the moral weight of one human, what ratio would you put? How much more is a human worth than an ant? 100:1? 1000:1? 10:1? Or maybe 1:1? ... Let's start with Jamie." Main References: Superforecasting: The Art and Science of Prediction - Wikipedia (https://en.wikipedia.org/wiki/Superforecasting:_The_Art_and_Science_of_Prediction) How Policymakers Can Improve Crisis Planning (https://www.foreignaffairs.com/articles/united-states/2020-10-13/better-crystal-ball) The Good Judgment Project - Wikipedia (https://en.wikipedia.org/wiki/The_Good_Judgment_Project) Expert Political Judgment: How Good Is It? 
How Can We Know?: Tetlock, Philip E.: 9780691128719: Books - Amazon.ca (https://www.amazon.ca/Expert-Political-Judgment-Good-Know/dp/0691128715) Additional references mentioned in the episode: The Drunkard's Walk: How Randomness Rules Our Lives (https://en.wikipedia.org/wiki/The_Drunkard%27s_Walk) The Black Swan: The Impact of the Highly Improbable - Wikipedia (https://en.wikipedia.org/wiki/The_Black_Swan:_The_Impact_of_the_Highly_Improbable) Book Review: Superforecasting | Slate Star Codex (https://slatestarcodex.com/2016/02/04/book-review-superforecasting/) Pandemic Uncovers the Limitations of Superforecasting – We Are Not Saved (https://wearenotsaved.com/2020/04/18/pandemic-uncovers-the-ridiculousness-of-superforecasting/) My Final Case Against Superforecasting (with criticisms considered, objections noted, and assumptions buttressed) – We Are Not Saved (https://wearenotsaved.com/2020/05/30/my-final-case-against-superforecasting-with-criticisms-considered-objections-noted-and-assumptions-buttressed/) Use your Good Judgement and send us email at incrementspodcast@gmail.com.
Napvilág Kiadó recently published Szakértő politikai előrejelzés. Mire jó? Hogyan mérhetjük? [Expert Political Judgment: How Good Is It? How Can We Know?], a volume by the American psychologist and political scientist Philip E. Tetlock. Judit Klopfer, the book's editor and editor-in-chief of Napvilág Kiadó, discussed it with the translator, Péter Felcsuti, and the political scientist Dániel Róna. Political forecasting today is a hopelessly subjective genre. A kind of anti-intellectualism has gained ground in public life: (expert) knowledge is treated as worthless in itself; we are only willing to accept the opinions of people who share our views, and we only agree with our own "tribe." But why should political experts be exempt from the standards of accuracy and clarity that we demand of every other profession and scientific field? In the book, Tetlock argues that forecasting tournaments offer a way to advance the cause of deliberative democracy: they strengthen the norms of objective measurement, improving the quality of public discourse and reducing polarization. Following Isaiah Berlin, Tetlock divides experts into two groups, hedgehogs and foxes; the hedgehog sits at one end of the spectrum of cognitive styles and the fox at the other, and with these two categories he evaluates decades of research material and develops his measurement method, striving for the greatest possible objectivity. The conversation explores the following questions: - How do political forecasting and political expertise differ from other fields, and what kind of audience do they face? - Is anti-intellectualism as a climate connected to the mental rigidity of experts and of media consumers? Why do experts find it so hard to revise their positions? - Is there any chance the situation in Hungary will improve when it comes to accuracy and mental rigidity? Tetlock points to forecasting tournaments as a tool; would such a tournament succeed here? - Who counts as a good expert? Does it matter what they think, or only how they think? - What characterizes the hedgehog and fox cognitive styles? Who are the better forecasters? Hedgehog forecasters get most of the media attention; what is it about hedgehogs that makes their opinions easier to consume? - Distrust of experts is not a new phenomenon in either American or Hungarian politics. The conversation touches on famous mistakes that shook confidence in expert knowledge, as well as positive examples. Related reading: Should we trust the political analyst, and if so, which one? An excerpt from the Preface on Mérce: https://merce.hu/2021/02/08/higgyunk-e-a-politikai-elemzonek-es-megis-melyiknek/ Examining the unexaminable: what makes a good expert? A review on Új Egyenlőség: https://ujegyenloseg.hu/vizsgalni-a-vizsgalhatatlant-mitol-lesz-jo-egy-szakerto/ A related book by the author: Superforecasting: The Art and Science of Prediction https://en.wikipedia.org/wiki/Superforecasting:_The_Art_and_Science_of_Prediction POLITIKATÖRTÉNETI INTÉZET (Institute of Political History) https://www.polhist.hu/ www.polhist.hu/hirlevel https://www.facebook.com/polhist.hu http://polhist.hu/pti-adomany-koltozes/
Have you ever been infuriated by a doctor's unwillingness to give you an honest, probabilistic estimate about what to expect? Or a lawyer who won't tell you the chances you'll win your case? Their behaviour is so frustrating because accurately predicting the future is central to every action we take. If we can't assess the likelihood of different outcomes we're in a complete bind, whether the decision concerns war and peace, work and study, or Black Mirror and RuPaul's Drag Race. Which is why the research of Professor Philip Tetlock is relevant for all of us each and every day. In this conversation from 2019, we discuss how his work can be applied to your personal life to answer high-stakes questions, like how likely you are to thrive in a given career path, or whether your business idea will be a billion-dollar unicorn — or fall apart catastrophically. Full transcript, related links, and summary of this interview. This episode first broadcast on the regular 80,000 Hours Podcast feed on June 28, 2019. Some related episodes include:
• #7 – Julia Galef on making humanity more rational, what EA does wrong, and why Twitter isn't all bad
• #11 – Dr Spencer Greenberg on speeding up social science 10-fold & why plenty of startups cause harm
• #15 – Prof Tetlock on how chimps beat Berkeley undergrads and when it's wise to defer to the wise
• #30 – Dr Eva Vivalt on how little social science findings generalize from one study to another
• #40 – Katja Grace on forecasting future technology & how much we should trust expert predictions
• #48 – Brian Christian on better living through the wisdom of computer science
• #78 – Danny Hernandez on forecasting and measuring some of the most important drivers of AI progress
Series produced by Keiran Harris.
[This is the second of many finalists in the book review contest. It’s not by me - it’s by an ACX reader who will remain anonymous until after voting is done, to prevent their identity from influencing your decisions. I’ll be posting about two of these a week for the next few months. When you’ve read all of them, I’ll ask you to vote for your favorite, so remember which ones you liked. - SA] I. If you’re looking for the whipping boy for all of medicine, and most of science, look no further than Galen of Pergamon. As early as 1605, in The Advancement of Learning, Francis Bacon is taking aim at Galen for the “specious causes” that keep us from further advancement in science. He attacks Plato and Aristotle first, of course, but it’s pretty interesting to see that Galen is the #3 man on his list after these two heavy-hitters. Centuries went by, but not much changed. Charles Richet, winner of the 1913 Nobel Prize in Physiology or Medicine, said that Galen and “all the physicians who followed [him] during sixteen centuries, describe humours which they had never seen, and which no one will ever see, for they do not exist.” Some of the ‘humors’ exist, he says, like blood and bile. But of the “extraordinary phlegm or pituitary accretion” he says, “where is it? Who will ever see it? Who has ever seen it? What can we say of this fanciful classification of humours into four groups, of which two are absolutely imaginary?” And so on until the present day. In Scott’s review of Superforecasting, he quotes Tetlock’s comment on Galen:
Philip Tetlock is an expert on expertise, but of a different kind to the late K. Anders Ericsson. While Ericsson's work focused on experts within "kind" domains (as defined by Range author David Epstein) such as music and chess, where feedback is near-immediate and clear and the rules are known to all and stated at the outset, Tetlock is interested in those who specialise in "wicked" domains, such as economics and politics. These are fields in which we can't run experiments or train for specific, recurring situations; where the rules are unknown; and where the situation at hand is not bounded, but can be influenced by a myriad of unpredictable forces. The author's most important finding is that cognitive style plays a major role in deciding who is good or bad at predicting world events. He reaches for Isaiah Berlin's concept of the Hedgehog and the Fox: "the fox knows many things, but the hedgehog knows one big thing." Hedgehogs tend to view the world through their particular favourite lens, basking in the power and glory of their pet theory. Foxes, on the other hand, are much patchier in their choice of models, and continuously second-guess themselves as they piece together a tentative view of the many ways things could unfold. While it seems that hedgehogs would make good political leaders (and are much more charismatic), foxes outperform them on prediction tasks to a stunning degree - or perhaps we should say, hedgehogs perform so terribly that foxes have an easy win. Tetlock has published another book on the topic of expert predictions in the political and economic arena called Superforecasting: The Art and Science of Prediction, which is aimed at a general audience. Expert Political Judgment is an earlier work, written for academics, and therefore deeply concerned with methodology and epistemology, and more willing to discuss probability theory and mathematical modelling. The full details of this are not amenable to sharing via audio, but the approach does provide a reassuring amount of skepticism toward his own conclusions. Enjoy the episode. *** RELATED EPISODES Expertise (in "kind" environments): 18. Bounce by Matthew Syed; 20. Genius Explained by Michael Howe; 22. The Talent Code by Daniel Coyle; 24. Outliers by Malcolm Gladwell; 49. The Art of Learning by Josh Waitzkin Breadth of learning (characteristic of "foxes"): 97. The Polymath by Waqas Ahmed; 98. Range by David Epstein Moral hazard of people who talk too much (generally scathing about economists): 84. Skin in the Game by Nassim Nicholas Taleb
Sonya and I chat about coffee, cypherpunk, cyberpunk, nootropics, admiration for Gwern, Tor, Signal, Bitcoin, SciHub, PGP, opsec, threat modeling, encryption as a hermetic seal, Tarot vs Tetlock, zine making, artifact vs art, physically tangible art vs digital art, anarcho-capitalism, distaste for cannabis culture, communicating across frameworks, becoming a theist after being an atheist, costly waste signaling, media diet suggestions, and more. Check out Sonya on Twitter @sonyasupposedly and her personal site: sonyasupposedly.com
Your Parenting Mojo - Respectful, research-based parenting ideas to help kids thrive
Do we really know what implicit bias is, and whether we have it? This is the second episode in our two-part series on implicit bias; the first part was an interview with Dr. Mahzarin Banaji (https://yourparentingmojo.com/captivate-podcast/implicitbias/), former Dean of the Department of Psychology at Harvard University and co-creator of the Implicit Association Test. But the body of research on this topic is large and quite complicated, and I couldn't possibly do it justice in one episode. There are a number of criticisms of the test which are worth examining, so we can get a better sense for whether implicit bias is really something we should be spending our time thinking about - or if our problems with explicit bias are big enough that we would do better to focus there first. References: Banaji, M.R., & Greenwald, A.G. (2002). Blindspot: Hidden biases of good people. New York: Delacorte. Blanton, H., & Jaccard, J. (2008). Unconscious racism: A concept in pursuit of a measure? Annual Review of Sociology 34, 277-297. Blanton, H., Jaccard, J., Strauts, E., Mitchell, G., & Tetlock, P.E. (2015). Toward a meaningful metric of implicit prejudice. Journal of Applied Psychology 100(5), 1468-1481. Brown, E.L., Vesely, C.K., & Dallman, L. (2016). Unpacking biases: Developing cultural humility in early childhood and elementary teacher candidates. Teacher Educators' Journal 9, 75-96. Cao, J., Kleiman-Weiner, M., & Banaji, M.R. (2017). Statistically inaccurate and morally unfair judgements via base rate intrusion. Nature Human Behavior 1(1), 738-742. Carlsson, R. & Agerstrom, J. (2016). A closer look at the discrimination outcomes on the IAT literature. Scandinavian Journal of Psychology 57, 278-287. Charlesworth, T.E.S., Kurdi, B., & Banaji, M.R. (2019). Children's implicit attitude acquisition: Evaluative statements succeed, repeated pairings fail. Developmental Science 23(3), e12911. Charlesworth, T.E.S., Hudson, S.T.J., Cogsdill, E.J., Spelke, E.S., & Banaji, M.R. (2019). Children use targets' facial appearance to guide and predict social behavior. Developmental Psychology 55(7), 1400. Charlesworth, T.E.S., & Banaji, M. (2019). Patterns of implicit and explicit attitudes: I. Long-term change and stability from 2007-2016. Psychological Science 30(2), 174-192. Chugh, D. (2004). Societal and managerial implications of implicit social cognition: Why milliseconds matter. Social Justice Research 17(2), 203-222. Cvencek, D., Meltzoff, A. N., Maddox, C. D., Nosek, B. A., Rudman, L. A., Devos, T., Dunham, Y., Baron, A. S., Steffens, M. C., Lane, K., Horcajo, J., Ashburn-Nardo, L., Quinby, A., Srivastava, S. B., Schmidt, K., Aidman, E., Tang, E., Farnham, S., Mellott, D. S., Banaji, M. R., & Greenwald, A. G. (in press). Meta-analytic use of Balanced Identity Theory to validate the Implicit Association Test. Personality and Social Psychology Bulletin. Forscher, P.S., Lai, C.K., Axt, J.R., Ebersole, C.R., Herman, M., Devine, P.G., & Nosek, B.A. (2019). A meta-analysis of procedures to change implicit measures. Gawronski, B., & Bodenhausen, G.V. (2017). Beyond persons and situations: An interactionist approach to understanding implicit bias. Psychological Inquiry 28(4), 268-272. Goode, E. (1998). A computer diagnosis of prejudice. The New York Times. Retrieved from https://www.nytimes.com/1998/10/13/health/a-computer-diagnosis-of-prejudice.html Greenwald, A.G., & Lai, C.K. (2020). Implicit social cognition. 
Annual Review of Psychology 71, 419-445. Greenwald, A.G., Banaji, M.R., & Nosek, B.A. (2015). Statistically small effects of the Implicit Association Test can have societally large effects. Journal of Personality and Social Psychology 108, 553-561. Greenwald, A.G., Poehlman,...
For more on Michael visit: https://michaelwstory.com/ Follow Michael on Twitter @MWStory Philip E. Tetlock and Dan Gardner, Superforecasting: The Art and Science of Prediction (2015) Tom Chivers, The AI Does Not Hate You: Superintelligence, Rationality and the Race to Save the World (2019) Nate Silver, The Signal and the Noise: Why So Many Predictions Fail—but Some Don't (2012) James Surowiecki, The Wisdom of Crowds: Why the Many Are Smarter Than the Few (2005) Amos Tversky and Daniel Kahneman, Judgment Under Uncertainty: Heuristics and Biases (1974) Metaculus: https://www.metaculus.com/questions/ Timestamps 03:43 The process of superforecasting with the example of predicting how long it would take to develop a coronavirus vaccine 12:50 The wisdom of crowds 18:04 The inside vs. outside view, anchors and base rates 23:15 The foxes versus the hedgehogs 29:39 The conjunction fallacy 37:15 Accountability, falsifiability, Brier scores 46:54 Loss of institutional credibility: a problem or an opportunity? 01:02:05 The importance of A/B testing 01:06:36 Back of the envelope (Fermi) calculations 01:12:11 Psychological characteristics of superforecasters, caring about being wrong
You can find Visa's book here: https://gumroad.com/l/friendlynerdbook Subscribe to Visa's YouTube channel: https://www.youtube.com/user/visa Follow Visa on Twitter @visakanv Further References The wisdom of crowds concept is well described in Philip E. Tetlock and Dan Gardner's Superforecasting: The Art and Science of Prediction (2015) See @naval's Twitter account, whose approach I compare with Visakan's Sonke Ahrens, How to Take Smart Notes (2017) The Karate Kid (1984); Cobra Kai (2018—) Timestamps 03:18 Growing up online: how to use social media constructively to make friends and be a good member of the online community; the value of public comment spaces; the importance of scenes. 31:18 Why we no longer fear strangers in the internet age; the wisdom of crowds; public and private spaces and liminal spaces and the opportunities they bring. 47:22 Parasocial relationships; why Visakan's approach brings him fans. 51:08 Social dark matter 57:18 What it means to be ambitious, the opportunities and dangers of idealism, the importance of public accountability 01:10:41 Getting addicted to interim stages 01:16:57 Note taking, tips for the writing process 01:33:15 Using statistical understanding to improve your marriage 01:38:52 What does it mean to be a nerd? The value of pure curiosity.
It's hard to predict the future, but you can be better at predicting the future. All you need is a few delicious avocados.
Even the "experts" are bad at predicting the future
Wharton professor Philip Tetlock wanted to make the future easier to predict. So he held "forecasting tournaments," in which experts from a variety of fields made millions of predictions about global events. Tetlock found that experts are no better at predicting the future than dart-throwing chimps. In fact, the more high-profile experts – the ones who get invited onto news shows – were the worst at making predictions. But Tetlock found that some people are really great at telling the future. He calls them "Superforecasters", and regardless of their area of expertise, they consistently beat the field with their predictions. Tetlock also found that with a little training, people can improve their forecasting skills. The superforecasters in Tetlock's Good Judgment Project – people from all backgrounds working with publicly available information – make forecasts 30% better than intelligence officers with access to classified information.
Creative work is uncertain. Does it have to be?
As someone working in the "Extremistan" world of creative work, I'm always trying to improve my forecasting skills. If I publish a tweet, how many likes will it get? If I write a book, how many copies will it sell? The chances of getting any of these predictions exactly right are so slim, it doesn't feel worth it to try to predict these things. But that doesn't mean I can't rate my predictions and make those predictions better.
Introducing the Avocado Challenge
If you would like to be better at predicting the future, I have a challenge for you. I call it the Avocado Challenge. Elon Musk recently asked on Twitter "What can't we predict?" I answered "whether or not an avocado is ready to open." 12 likes. People agree with me. https://twitter.com/kadavy/status/1309643017599569920 Here's how the Avocado Challenge works. The next time you're about to open an avocado, make a prediction: How confident are you the avocado is ripe? Choose a percentage of confidence, such as 50% or 20% – or, if you're feeling lucky, 100%. To make it simple, you can rate your confidence on a scale of 0 to 10. State your prediction out loud or write it down. Now, open the avocado. Is it ripe? Yes or no?
Scoring your avocado predictions
You now have two variables: your prediction, stated as a percentage confidence, and the outcome of avocado ripeness. With these two variables, you can calculate what's called a Brier score. This tells you just how good your forecast was. The Brier score is what Philip Tetlock uses to score his forecasting tournaments.
Two variables: confidence and outcome
It works like this: Translate your percentage confidence into a decimal between 0 and 1. So 50% would be 0.5, 20% would be 0.2, and 100% would just be 1. Now, translate the avocado ripeness outcome into a binary number. If the avocado was not ripe, your outcome value is "0." If the avocado was ripe, your outcome value is "1." (You may wonder: How do I determine whether or not an avocado is ripe? I'll get to that in a minute. Let's pretend for a second it's easy.)
Calculating your Brier score
Once you have those two variables, there are two steps to follow to find out your Brier score: Subtract the outcome value from your confidence value. If I was 50% confident the avocado would be ripe, that confidence value is 0.5.
If the avocado was in fact ripe, I subtract the outcome value of 1 from 0.5 to get -0.5. Square that number, or multiply it by itself. (-0.5)² = 0.25. Our Brier score is 0.25. Is that good or bad? The lower your Brier score, the better your prediction was. If you were 100% confident the avocado would be ripe and it was not, your Brier score would be 1 – the worst score possible. If you were 100% confident the avocado would be ripe and it was ripe, your Brier score would be 0 – the best score possible. So, 0.25 is pretty solid.
Predict your next 30 avocados
This is a fun exercise to try one time, but it doesn't tell you a whole lot about your forecasting skills overall, and it doesn't help you improve your forecasting skills. Where it gets interesting and useful is when you make a habit of the Avocado Challenge. After you've tried the Avocado Challenge a couple of times, make a habit out of it. For 30 consecutive avocados, tally your results. Calculate your Brier score, and find the average of your 30 predictions. If you regularly open avocados with a roommate or partner, make a competition out of it. My partner and I predicted the ripeness of, then opened, 36 avocados over the course of several weeks. We recorded our predictions and outcomes on a notepad on the fridge – then tallied our results in a spreadsheet. Our findings: 28% of avocados were ripe. Her Brier score was 0.22 – mine was 0.19. (I win!)
The Avocado Challenge teaches you to define your predictions
Most of us don't make predictions according to our percentage confidence. We say, "I think so-and-so is going to win the election," or "I think it might rain." Philip Tetlock even found this with political pundits – the ones who get lots of airtime on news shows. They'll say things like "there's a distinct possibility." That's not a forecast. If so-and-so wins the election, you can say, "ha! I knew it!" If it didn't rain, you can remind your friend you said you thought it might rain. And what does a "distinct possibility" mean? You can be "right" either way. And when it comes to getting airtime on news shows, the news show doesn't care if the political pundit gets their prediction right. All that matters is that they can be exciting on camera, speak in sound bites, argue a clear point, and hold the viewer's attention a little longer so it can be sold to advertisers during the commercial break. We normally don't make our predictions with a percentage confidence, because we aren't used to it. The Avocado Challenge gets you in the habit of rating the confidence of your predictions.
The Avocado Challenge helps you define reality
The Avocado Challenge also helps you define reality. This is something we're also bad at. If you're on a walk with your friend and you say you think it's going to rain, how much rain equals rain? By what time is it going to rain? You're traveling on foot – is it going to rain where the walk started, or the place you'll be a half hour from now? To rate your predictions and become a better forecaster, you need to make falsifiable claims. It's hard to tell if an avocado is ripe before you open the avocado, but it's also hard to tell if an avocado is ripe after you open the avocado. You'll have to come up with criteria for determining whether or not an avocado should be defined as "ripe." When we did the Avocado Challenge, we defined a "ripe" avocado as a "perfect" avocado: uniform green color, with the meat of the avocado sticking to no more than 5% of the pit.
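To make the scoring mechanics concrete, here is a minimal sketch of the Brier arithmetic described above, written in Python. The function name and the sample prediction list are illustrative assumptions, not part of the original post:

```python
def brier_score(confidence: float, ripe: bool) -> float:
    """Squared error between a stated confidence (0 to 1) and the outcome (1 if ripe, else 0)."""
    outcome = 1.0 if ripe else 0.0
    return (confidence - outcome) ** 2

# One avocado: 50% confident it is ripe, and it turns out ripe.
print(brier_score(0.5, True))  # 0.25, the example worked through above

# A longer run (e.g. 30 consecutive avocados tallied on the fridge notepad).
# These entries are made up purely for illustration.
predictions = [(0.5, True), (0.2, False), (1.0, True), (0.8, False)]
average = sum(brier_score(c, ripe) for c, ripe in predictions) / len(predictions)
print(round(average, 3))  # lower is better: 0 is a perfect score, 1 is the worst
```

Averages like the 0.22 and 0.19 reported for the 36-avocado run above are exactly this calculation applied across the whole tally.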
The Avocado Challenge can improve your real-life predictions
A few weeks after we did the Avocado Challenge, my partner and I were at her family's finca – a rustic cabin in the Colombian countryside. It was Sunday afternoon, and we were getting ready to head back to Medellín. I was eager to get home and get ready for my week. I asked my partner what time we would leave. She said about 3 p.m. As I mentioned on episode 235, the Colombian sense of time takes some getting used to for me as an American. Even though my partner is a very prompt person, I'm also aware of "the planning fallacy." I know the Sydney Opera House opened ten years late and cost 15 times the projected budget to build. So when I looked at the Mitsubishi Montero parked in the grass, and thought about how long it might take to pack in eight people, three dogs, and a little white rabbit, the chances of us leaving right at 3 p.m. seemed slim. Fortunately, we had done the Avocado Challenge. I asked my partner, in Spanish, "What's your percentage confidence we'll leave before an hour after 3 p.m. – 4 p.m.?" She shifted into Avocado mode, thought a bit, and said sesenta por ciento. She was 60% sure we'd leave before 4 p.m. That didn't seem super confident, so I asked for another forecast. I asked what her percentage confidence was that we'd leave before 5 p.m. – two hours after the target time. She said cien por ciento. She was 100% sure we'd leave before 5 p.m. Now, instead of choosing between expecting to leave at exactly 3 p.m. – or leaving "whenever" – I had a range. It was a range I could trust, from someone with experience of similar situations – and training in forecasting. The time we did leave: 3:30 p.m. My partner's Brier score for that first prediction: 0.16. Average Brier score for the two predictions: 0.08. Not bad. (The arithmetic behind these two scores is checked in the short sketch after the show notes below.)
Mind Management, Not Time Management now available!
After nearly a decade of work, Mind Management, Not Time Management is now available! This book will show you how to manage your mental energy to be productive when creativity matters. Buy it now!
My Weekly Newsletter: Love Mondays
Start off each week with a dose of inspiration to help you make it as a creative. Sign up at: kadavy.net/mondays.
Listener Showcase
Abby Stoddard makes the Dunnit app – the "have-done list." It's a minimalist tool designed to motivate action and build healthy habits.
About Your Host, David Kadavy
David Kadavy is author of Mind Management, Not Time Management, The Heart to Start and Design for Hackers. Through the Love Your Work podcast, his Love Mondays newsletter, and self-publishing coaching, David helps you make it as a creative. Follow David on: Twitter Instagram Facebook YouTube
Subscribe to Love Your Work: Apple Podcasts Overcast Spotify Stitcher YouTube RSS Email
Support the show on Patreon
Put your money where your mind is. Patreon lets you support independent creators like me. Support now on Patreon »
Show notes: http://kadavy.net/blog/posts/avocado-challenge
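As a quick check of the leaving-time scores above, the same arithmetic applies; this sketch reuses the hypothetical brier_score helper from the earlier example:

```python
# 60% confident we'd leave before 4 p.m.; we left at 3:30, so the outcome was "yes".
first = brier_score(0.60, True)   # (0.6 - 1) ** 2 = 0.16
# 100% confident we'd leave before 5 p.m.; also "yes".
second = brier_score(1.00, True)  # (1.0 - 1) ** 2 = 0.0
print(round((first + second) / 2, 2))  # 0.08, matching the average reported above
```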
Rob Wiblin's top recommended EconTalk episodes v0.2 Feb 2020
Can you predict the future? Or at least gauge the probability of political or economic events in the near future? Philip Tetlock of the University of Pennsylvania and author of Superforecasting talks with EconTalk host Russ Roberts about his work on assessing probabilities with teams of thoughtful amateurs. Tetlock finds that teams of amateurs trained in gathering information and thinking about it systematically outperformed experts in assigning probabilities to various events in a competition organized by IARPA, a research agency under the Director of National Intelligence. In this conversation, Tetlock discusses the meaning, reliability, and usefulness of trying to assign probabilities to one-time events. Actually released 21 Dec 2015.
Here is Elon's full tweet from this past week. "Wise words from Bogle. The point of companies is products & services. They have no point in & of themselves, nor do these indices. Buy & hold stock in companies where you love the product roadmap, sell where you don't." - https://twitter.com/elonmusk/status/1329114904817758208
I discuss the merits of long-term investing and what beginners need to know with Fabio Faria, whose investing YouTube channel in Brazil has over 250k subscribers, @Canal do Holder. View the full interview here: https://youtu.be/1fDt-JjdBmE Fabio Faria on Twitter: FabioHolder Fabio Faria on Instagram: fabio.holder
Panelists: 1) Topher Kohan (Host) Email: topheratl@gmail.com Twitter: @TopherATL Facebook: https://www.facebook.com/topher.kohan 2) Mike Shea (Guest) Email: mike@mikeshea.net Twitter: slyflourish Facebook: https://www.facebook.com/slyflourish/ Site: http://slyflourish.com Other: https://dontsplitthepodcastnetwork.com/dms-deep-dive/ Links: Thetomeshow.com Patreon.com/thetomeshow www.nobleknight.com Get to know you Question: "What is more important, the rules or a good story?" Topic 1) Tell us about the new show New Show: DM's Deep Dive https://dontsplitthepodcastnetwork.com/dms-deep-dive/ On the Don't Split the Podcast Network: https://dontsplitthepodcastnetwork.com/ Twitch: https://www.twitch.tv/dontsplitthepodcast Topic 2) DM Survey: http://slyflourish.com/2016_dm_survey_results.html Lazy DM: http://slyflourish.com/lazydm/ Topic 3) Future of D&D In his early work on good judgment, summarized in Expert Political Judgment: How Good Is It? How Can We Know?, Tetlock conducted a set of small-scale forecasting tournaments between 1984 and 2003. The forecasters were 284 experts from a variety of fields, including government officials, professors, journalists, and others, with many opinions, from Marxists to free-marketeers. The tournaments solicited roughly 28,000 predictions about the future and found the forecasters were often only slightly more accurate than chance, and usually worse than basic extrapolation algorithms, especially on longer-range forecasts three to five years out. Forecasters with the biggest news media profiles were also especially bad. This work suggests that there is a perverse inverse relationship between fame and accuracy. https://www.amazon.com/Expert-Political-Judgment-Good-Know/dp/0691128715 "People who spend their time, and earn their living, studying a particular topic produce poorer predictions than dart-throwing monkeys who would have distributed their choices evenly over the options." Psychologist who won a Nobel Prize in Economics https://www.amazon.com/Thinking-Fast-Slow-Daniel-Kahneman/dp/0374533555 EVERYBODY'S AN EXPERT http://www.newyorker.com/magazine/2005/12/05/everybodys-an-expert 86% of investment managers stunk in 2014 http://money.cnn.com/2015/03/12/investing/investing-active-versus-passive-funds/ Topic 3a) Where do you want the future of D&D to go? Email us with your comments! http://www.thetomeshow.com thetomeshow@gmail.com
Wharton School Professor Philip Tetlock discusses the illusion of insight and how to spot good forecasters. He speaks with Tom Keene and Barry Ritholtz on Bloomberg Surveillance. Learn more about your ad-choices at https://www.iheartpodcastnetwork.com