Podcasts about good judgment project

  • 17 podcasts
  • 29 episodes
  • 53m average episode duration
  • Infrequent episodes
  • Latest episode: Apr 7, 2025

Best podcasts about good judgment project

Latest podcast episodes about good judgment project

The 92 Report
126. Robert de Neufville, Writer and Superforecaster

The 92 Report

Play Episode Listen Later Apr 7, 2025 40:02


Show Notes: Robert de Neufville dropped out of grad school after spending more than a decade there without finishing his PhD, around the time of the financial crisis. He realized that after a decade in academia he was less employable than when he had graduated from Harvard. He had done a lot of teaching at Berkeley and San Francisco State, but found himself struggling to find a job. He eventually moved to Hawaii, where a friend wanted to rent out his house, to work on freelance editing projects.

Working as a Forecaster and Political Writer
Currently, Robert is working as a forecaster and political writer. He has a Substack newsletter called Telling the Future, which has about 1,500 subscribers. While he is not particularly happy writing about politics right now, he believes it's necessary for his career and personal growth.

Therapy and Political Theory
Robert discusses his first period after college and therapy. He mentions the stigma surrounding therapy and the importance of normalizing it. He didn't know what he wanted to do after college. He drove to New York and worked at several different places, including consulting at Booz Allen, which he ultimately found lacked meaning, so he decided to pursue a more intellectual career. He knew that he liked thinking and writing about things, so he applied to grad school for political science, where he studied political theory and the moral issues of living in community. However, he found the academic culture at Berkeley toxic and, combined with an unhealthy lifestyle, decided it was not for him. Robert touches on his difficult childhood, which was marked by narcissistic parents and an abusive mother. He eventually reached a breaking point and sought therapy; he felt better, but still struggled to complete his dissertation. He dropped out of grad school, despite his professors' concerns, and was diagnosed with chronic PTSD.
Finding Solace in Teaching
Robert found solace in teaching, though he disliked having to grade students. Some students had unhealthy relationships with their grades, and at times he felt he had to refer them to suicide watch. He realized that teaching was great because explaining a topic to others helped him understand it better; in fact, he found teaching was the only way he could truly understand a topic. Still, he realized he didn't want to do academic work. He also saw a backlog of aspiring political theory professors spending their time teaching as adjuncts and spending money on conferences in pursuit of job opportunities. Robert believes his grad school experience was intellectually rewarding and that his training in political theory shaped who he is.

Writing for Love and Money
Robert talks about his experience writing for mainstream publications like The Economist, the National Interest, California magazine, Big Think, and The Washington Monthly. He shares his struggles with freelance writing, which he finds slow and fussy, and frustrating for how little it pays relative to the time the work takes. He also discusses his writing about forecasting and becoming a skilled judgmental forecaster. He makes money by producing forecasts for various organizations, a relatively new field. He encourages readers to support the writers they love and consider paying for their work, as it is hard and not very rewarding.

Forecasting Methods and Examples
The conversation turns to Robert's writing and forecasting. He explains his approach to forecasting and how he uses history to guide his predictions. To estimate the probability of a future event, he looks back at similar cases (similar elections, for example) and establishes a base rate, which anchors his estimate of what is going to happen in a specific situation.
Robert also mentions that some situations require more analytical thinking, such as forecasting the arrival of AGI or other new technologies. He talks about Phil Tetlock's Good Judgment Project, which grew out of a forecasting tournament run by IARPA, a US intelligence research agency modeled on DARPA (the agency that helped invent the internet), to determine whether anyone could forecast geopolitical questions. The research showed that people were generally terrible at it, even analysts and pundits. However, a small percentage of people, using methodical extrapolations, consistently outperformed intelligence analysts. Robert participated in the tournament and qualified as a superforecaster in his first year. He works with Metaculus and the Good Judgment Project, which produces probabilistic forecasts for decision-makers. The forecasting community is now working on making forecasts more useful, for example by understanding the reasons behind people's forecasts rather than just the numbers they produce.

Influential Harvard Courses and Professors
Robert stresses that he found his interaction with fellow students to be most enriching, and he appreciated Stanley Hoffmann's class on ethics and international relations, which was taught through a humanist lens and emphasized the importance of morality. He also enjoyed watching the assigned movies and reading academic articles alongside his classes, which eventually informed his teaching. He mentions Adrienne Kennedy's playwriting class, which he found exciting and engaging. He enjoyed table reads and hearing people's plays fresh off the presses, and believes these experiences have shaped his forecasting skills.
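The base-rate method Robert describes can be sketched in a few lines of Python. This is a hypothetical illustration of the idea, not his actual workflow; the reference-class data and the Laplace smoothing are assumptions for the example:

```python
# Illustrative base-rate estimate: count how often similar past events
# resolved "yes" and use that frequency as the starting probability.
def base_rate(past_outcomes, smoothing=1):
    """Laplace-smoothed frequency of 'yes' (1) outcomes in a reference class."""
    yes = sum(past_outcomes)
    return (yes + smoothing) / (len(past_outcomes) + 2 * smoothing)

# e.g. suppose the incumbent won in 7 of 10 comparable past elections:
prior = base_rate([1, 1, 1, 0, 1, 0, 1, 1, 0, 1])
print(round(prior, 3))  # 0.667
```

The smoothing term keeps the estimate away from 0 and 1 when the reference class is small, which matches the forecasting practice of avoiding overconfident extremes.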
Timestamps: 03:16: Robert's Move to Hawaii and Career Challenges 06:16: Current Endeavors and Writing Career 07:58: Therapy and Early Career Struggles 10:14: Grad School Experience and Academic Challenges 22:41: Teaching and Forecasting Career 26:21: Forecasting Techniques and Projects 41:27: Impact of Harvard and Influential Professors Links: Substack newsletter: https://tellingthefuture.substack.com/ LinkedIn: https://www.linkedin.com/in/robertdeneufville/ Featured Non-profit: The featured non-profit of this episode of The 92 Report is recommended by Patrick Jackson, who reports: “Hi, I'm Patrick Ian Jackson, class of 1992. The featured nonprofit of this episode of The 92 Report is His Hands Free Clinic, located in Cedar Rapids, Iowa. Since 1992, His Hands Free Clinic has been seeking to honor God by helping the uninsured and underinsured in our community. The clinic is a 501(c)(3) nonprofit ministry providing free health care to Cedar Rapids and the surrounding communities. I love the work of this organization. The church that I pastor, First Baptist Church, Church of the Brethren, has been a regular contributor to the clinic for the past couple of years. You can learn more about their work at WWW dot his hands clinic.org, and now here is Will Bachman with this week's episode.” To learn more about their work, visit: www.HisHandsClinic.org.

Star Spangled Gamblers
Building the Forecasting Community: Lessons from the Sports Betting World

Star Spangled Gamblers

Play Episode Listen Later Sep 25, 2024 47:03


David Glidden (@dglid) draws lessons from the sports betting world for building the forecasting community. Timestamps 0:00: Intro begins 0:09: Saul Munn 1:26: David Glidden 1:52: Washington DC Forecasting and Prediction Markets Meetup 6:54: Interview with Glidden begins 13:32: Importance of building forecasting community 14:39: Effective altruism 22:02: DC Forecasting and Prediction Markets Meetup 24:49: Sports betting 40:53: Repeatable lines in prediction markets Show Notes September 26 DC Forecasting and Prediction Markets Meetup RSVP: https://partiful.com/e/zpObY6EmiQEkgpcJB6Aw DC Forecasting and Prediction Markets Meetup Manifund: https://manifund.org/projects/forecasting-meetup-network---washington-dc-pilot-4-meetups For more information on the meetup, DM David Glidden @dglid. Trade on Polymarket.com, the world's largest prediction market. Follow Star Spangled Gamblers on Twitter @ssgamblers.

Increments
#69 - Contra Scott Alexander on Probability

Increments

Play Episode Listen Later Jun 20, 2024 105:09


After four episodes spent fawning over Scott Alexander's "Non-libertarian FAQ", we turn around and attack the good man instead. In this episode we respond to Scott's piece "In Continued Defense of Non-Frequentist Probabilities", addressing each of his five arguments defending Bayesian probability. Like moths to a flame, we apparently cannot let the probability subject slide, sorry people. But the good news is that before getting there, you get to hear about some therapists and pedophiles (therapeutic pedophilia?). What's the probability that Scott changes his mind based on this episode? We discuss Why we're not defending frequentism as a philosophy The Bayesian interpretation of probability The importance of being explicit about assumptions Why it's insane to think that 50% should mean both "equally likely" and "I have no effing idea" Why Scott's interpretation of probability is crippling our ability to communicate How super are Superforecasters? Marginal versus conditional guarantees (this is exactly as boring as it sounds) How to pronounce Samotsvety, and are they Italian or Eastern European or what? References In Continued Defense Of Non-Frequentist Probabilities (https://www.astralcodexten.com/p/in-continued-defense-of-non-frequentist) Article on superforecasting by Gavin Leech and Misha Yugadin (https://progress.institute/can-policymakers-trust-forecasters/) Essay by Michael Story on superforecasting (https://www.samstack.io/p/five-questions-for-michael-story) Existential risk tournament: Superforecasters vs AI doomers (https://forecastingresearch.org/news/results-from-the-2022-existential-risk-persuasion-tournament) and Ben's blogpost about it (https://benchugg.com/writing/superforecasting/) The Good Judgment Project (https://goodjudgment.com/) Quotes During the pandemic, Dominic Cummings said some of the most useful stuff that he received and circulated in the British government was not forecasting.
It was qualitative information explaining the general model of what's going on, which enabled decision-makers to think more clearly about their options for action and the likely consequences. If you're worried about a new disease outbreak, you don't just want a percentage probability estimate about future case numbers, you want an explanation of how the virus is likely to spread, what you can do about it, how you can prevent it. - Michael Story (https://www.samstack.io/p/five-questions-for-michael-story) Is it bad that one term can mean both perfect information (as in 1) and total lack of information (as in 3)? No. This is no different from how we discuss things when we're not using probability. Do vaccines cause autism? No. Does drinking monkey blood cause autism? Also no. My evidence on the vaccines question is dozens of excellent studies, conducted so effectively that we're as sure about this as we are about anything in biology. My evidence on the monkey blood question is that nobody's ever proposed this and it would be weird if it were true. Still, it's perfectly fine to say the single-word answer “no” to both of them to describe where I currently stand. If someone wants to know how much evidence/certainty is behind my “no”, they can ask, and I'll tell them. - SA, Section 2 Socials Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani Come join our discord server! DM us on twitter or send us an email to get a supersecret link Help us calibrate our credences and get exclusive bonus content by becoming a patreon subscriber here (https://www.patreon.com/Increments). Or give us one-time cash donations to help cover our lack of cash donations here (https://ko-fi.com/increments). Click dem like buttons on youtube (https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ) What's your credence in Bayesianism? Tell us over at incrementspodcast@gmail.com.
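The Bayesian updating the hosts argue about can be made concrete. A minimal sketch, with invented numbers purely for illustration:

```python
# One-step Bayesian update: how a prior credence shifts after evidence.
def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Posterior probability of a hypothesis after observing one piece of
    evidence, via Bayes' rule."""
    numerator = prior * likelihood_if_true
    return numerator / (numerator + (1 - prior) * likelihood_if_false)

# Start at 50% ("I have no effing idea"), then observe evidence that is
# three times more likely if the hypothesis is true:
posterior = bayes_update(0.5, 0.6, 0.2)
print(posterior)  # 0.75
```

Note that the same 50% starting point behaves very differently depending on how much evidence stands behind it, which is exactly the ambiguity the episode complains about: the single number doesn't encode how resistant the credence is to new information.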

Slate Star Codex Podcast
Mantic Monday 4/24/23

Slate Star Codex Podcast

Play Episode Listen Later Apr 29, 2023 12:05


https://astralcodexten.substack.com/p/mantic-monday-42423 Can AIs Predict The Future? By Which We Mean The Past? If we asked GPT-4 to play a prediction market, how would it do? Actual GPT-4 probably would just give us some boring boilerplate about how the future is uncertain and it's irresponsible to speculate. But what if AI researchers took some other model that had been trained not to do that, and asked it? This would take years to test, as we waited for the events it predicted to happen. So instead, what if we took a model trained on text from some particular year (let's say 2020) and asked it to predict forecasting questions about the period 2020-2023? Then we could check its results immediately! This is the basic idea behind Zou et al (2022), Forecasting Future World Events With Neural Networks. They create a dataset, Autocast, with 6,000 questions from the forecasting tournaments Metaculus, Good Judgment Project, and CSET Foretell. Then they ask their AI (a variant of GPT-2) to predict them, given news articles up to some date before the event happened. Here's their result:
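The retrodiction setup described above can be sketched as follows. This is a toy illustration of the evaluation idea with made-up questions, not the actual Autocast dataset or scoring pipeline:

```python
from datetime import date

# Score a model only on questions that resolved after its training cutoff,
# so "predicting the past" still counts as a genuine forecast.
questions = [
    {"close": date(2021, 3, 1), "forecast": 0.8, "outcome": 1},
    {"close": date(2022, 7, 1), "forecast": 0.3, "outcome": 0},
    {"close": date(2019, 5, 1), "forecast": 0.9, "outcome": 1},  # pre-cutoff: excluded
]

cutoff = date(2020, 1, 1)  # model only saw text from before this date
eligible = [q for q in questions if q["close"] > cutoff]
accuracy = sum((q["forecast"] > 0.5) == bool(q["outcome"]) for q in eligible) / len(eligible)
print(accuracy)  # 1.0
```

The key design choice is the cutoff filter: any question that closed before the model's training data ends must be dropped, because the model may have simply memorized the answer.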

The Data Scientist Show
Becoming a superforecaster, decision science for better human predictions - Pavel Atanasov-the data scientist show#036

The Data Scientist Show

Play Episode Listen Later May 17, 2022 111:29


(Highlights below) Pavel is a decision scientist and co-founder at Pytho, using decision science to measure and improve human judgment & prediction. He has a PhD in psychology and decision science from the University of Pennsylvania, focusing on crowd predictions. Pavel's Twitter: https://twitter.com/PavelDAtanasov. Follow Daliana (https://twitter.com/DalianaLiu) for more updates on data science and this podcast. Highlights: (00:01:10) how he got into decision science (00:14:38) what makes someone a superforecaster (00:16:20) three elements of becoming a superforecaster (00:24:37) how to effectively update our opinions (00:30:05) how he designed experiments to find out what was a better system (00:48:27) why humans are sometimes better than algorithms (01:14:50) how to collect data and information better (01:33:25) why you should quit (01:42:30) the future of decision science Superforecasting book, based on the Good Judgment Project: https://www.amazon.com/Superforecasting-Science-Prediction-Philip-Tetlock/dp/0804136718 Blogs about forecasting: Vox's Future Perfect series: https://www.vox.com/future-perfect Astral Codex Ten: https://astralcodexten.substack.com/

The Electric Wire
The Future of Utilities with Rebecca Ryan and Lauren Azar

The Electric Wire

Play Episode Listen Later Mar 16, 2022 46:58


Rebecca Ryan and Lauren Azar join the Electric Wire for a conversation about change in the energy industry and in our world. Rebecca, one of the nation's leading futurists, discusses local and global trends that everyone should be preparing for. Lauren, one of America's most respected utility attorneys, leads the conversation about change in the energy industry, as they answer the following questions, and more: · How do we get those in the utility industry to start thinking like futurists? · How important are governmental or industry goals to get where we want to go? · What does the future look like for EVs and electrification? · What are the craziest changes coming by 2050 that we haven't thought of yet? Cari Anne Renlund, Vice President, General Counsel and Secretary of Madison Gas and Electric Co., joins Kristin Gilkes, Executive Director of the Customers First! Coalition, as co-host (and bonus guest) for a lively and inspiring conversation about the direction of the industry. Links from the Episode: More on MGE's 2030 and 2050 Carbon Reduction Goals: https://www.mge2050.com/en/ IPCC: Sixth Assessment Report https://www.ipcc.ch/assessment-report/ar6/ Note from Rebecca: The Good Judgment Project (https://www.gjopen.com) makes predictions on EV penetration and MANY other things. The Good Judgment Project was founded by Phil Tetlock, the author of Superforecasting, who found that regular people can learn to do foresight as well as or better than trained CIA agents. Long Bets also does long-term wagering about the future, fueled by a bunch of futurists and thought leaders. I just dropped in, and Steven Pinker has a challenge about bioterrorism. How to Become a Cyborg from MIT Technology Review: https://www.technologyreview.com/2018/06/19/142228/here-are-some-ways-to-upgrade-yourself-one-body-part-at-a-time/

The Colin McEnroe Show
You couldn't have predicted we'd do this show about predicting the future

The Colin McEnroe Show

Play Episode Listen Later Jan 20, 2022 49:00


Humans have been trying for, well, forever to predict the future. But how helpful is predicting the future, really? And what factors determine whether someone is successful at doing it, or not? This hour, we try to predict whether predicting the future is useful, and understand why we’re so interested in doing so. GUESTS: Amanda Rees - A historian of science based at the University of York who works on the history of the future, and author of the book “Human.” Warren Hatch - A superforecaster, and CEO of the Good Judgment Project. Allan Lichtman - A distinguished professor of history at American University, his most recent book is “Thirteen Cracks: Repairing American Democracy After Trump.” He is known for accurately predicting the outcome of presidential elections since 1984. Support the show: http://www.wnpr.org/donate See omnystudio.com/listener for privacy information.

The Nonlinear Library
EA - Long-Term Future Fund: July 2021 grant recommendations by abergal

The Nonlinear Library

Play Episode Listen Later Jan 18, 2022 28:43


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Long-Term Future Fund: July 2021 grant recommendations, published by abergal on January 18, 2022 on The Effective Altruism Forum. Introduction The Long-Term Future Fund made the following grants through July 2021: Total grants: $1,427,019 Number of grantees: 18 Payout date: July 2021 Report authors: Asya Bergal (Chair), Oliver Habryka, Adam Gleave, Evan Hubinger, Luisa Rodriguez This payout report is substantially delayed, but we're hoping to get our next payout report covering grants through November out very soon. Updates since our last payout report: We took on Luisa Rodriguez as a guest manager in June and July. We switched from a round-based system to rolling applications. We received $675,000 from Jaan Tallinn through the Survival and Flourishing Fund. Public payout reports for EA Funds grantees are now optional. Consider applying for funding from the Long-Term Future Fund here. Grant reports Note: Many of the grant reports below are very detailed. Public reports are optional for our grantees, and we run all of our payout reports by grantees before publishing them. We think carefully about what information to include to maximize transparency while respecting grantees' preferences. We encourage anyone who thinks they could use funding to positively influence the long-term trajectory of humanity to apply for a grant. Grant reports by Asya Bergal Any views expressed below are my personal views and not the views of my employer, Open Philanthropy. In particular, receiving funding from the Long-Term Future Fund should not be read as an indication that an organization or individual has an elevated likelihood of receiving funding from Open Philanthropy. 
Correspondingly, not receiving funding from the Long-Term Future Fund (or any risks and reservations noted in the public payout report) should not be read as an indication that an organization or individual has a diminished likelihood of receiving funding from Open Philanthropy. Ezra Karger, Pavel Atanasov, Philip Tetlock ($572,000) Existential risk forecasting tournaments. This grant is to Ezra Karger, Pavel Atanasov, and Philip Tetlock to run an existential risk forecasting tournament. Philip Tetlock is a professor at the University of Pennsylvania; he is known in part for his work on The Good Judgment Project, a multi-year study of the feasibility of improving the accuracy of probability judgments, and for his book Superforecasting: The Art and Science of Prediction, which details findings from that study. Pavel Atanasov is a decision psychologist currently working as a Co-PI on two NSF projects focused on predicting the outcomes of clinical trials. He previously worked as a post-doctoral scholar with Philip Tetlock and Barbara Mellers at the Good Judgment Project, and as a consultant for the SAGE research team that won the last season of IARPA's Hybrid Forecasting Competition. Ezra Karger is an applied microeconomist working in the research group of the Federal Reserve Bank of Chicago; he is a superforecaster who has participated in several IARPA-sponsored forecasting tournaments and has worked with Philip Tetlock on some of the methods proposed in the tournament below. Paraphrasing from the proposal, the original plan for the tournament was as follows: Have a panel of subject-matter experts (SMEs) choose 10 long-run questions about existential risks and 20 short-run, resolvable, early warning indicator questions as inputs for the long-run questions. Ask the SMEs to submit forecasts and rationales for each question. Divide the superforecasters into two groups. Ask each person to forecast the same questions as the SMEs and explain those forecasts.
For short-run questions, evaluate forecasts using a proper scoring rule, like Brier or logarithmic scores. For long-run questions, use reciprocal scoring to incentivize ac...
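The Brier score mentioned above is simple to compute. A minimal sketch:

```python
# Brier score: mean squared error between probability forecasts and
# binary outcomes. Lower is better; a constant 50% forecast scores 0.25.
def brier_score(forecasts, outcomes):
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Three forecasts (90%, 20%, 70%) against outcomes (yes, no, yes):
print(round(brier_score([0.9, 0.2, 0.7], [1, 0, 1]), 3))  # 0.047
```

Brier and logarithmic scores are both "proper" scoring rules: a forecaster minimizes their expected penalty by reporting their true probability, which is why tournaments use them for short-run, resolvable questions.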

The Nonlinear Library: EA Forum Top Posts
How many people would be killed as a direct result of a US-Russia nuclear exchange? by Luisa_Rodriguez

The Nonlinear Library: EA Forum Top Posts

Play Episode Listen Later Dec 11, 2021 93:44


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How many people would be killed as a direct result of a US-Russia nuclear exchange?, published by Luisa_Rodriguez on the AI Alignment Forum. Summary: In this post, I estimate the number of fatalities caused directly by nuclear detonations in the US/NATO and Russia. I model these effects in Guesstimate using expert surveys and interviews, forecasts made by Good Judgment Project superforecasters, academic research, and media coverage of international relations, along with academic research into the effects of nuclear war and nuclear weapons policy. There are many determinants that factor into the number of people that would die as a direct result of nuclear detonations during a US-Russia nuclear exchange. I consider the following six factors the most important. They make up the key parameters in my model: The targeting strategy (i.e. what kinds of targets will each country attack?) The number of military facilities each country might target Whether each country would also target cities, in addition to military facilities If they were to target cities, the number of cities each country might target The sizes of the nuclear weapons in each country's nuclear arsenal The population size of the cities that might be targeted during an exchange When I take all of these factors into account, I expect that we'd see a total of 51 million deaths caused directly by nuclear detonations on military and civilian targets in NATO countries and Russia (90% confidence interval: 30 million — 75 million deaths). December 8 2019 Update In light of feedback from Carl Schulman, Kit Harris, MichaelA, David Denkenberger, Topher Brennan, and others, I've made several revisions to this post that are now reflected in the text, figures, and estimates in the body of this post. The original post can still be found here.
The changes that had the largest bearing on my results included: Changing the way I estimate the number of nuclear weapons that would be used in a countervalue nuclear exchange in expectation so that I don't accidentally truncate the tails of the distributions (details here and here). Generating a formula that can be directly entered into Guesstimate to estimate the number of deaths caused by a countervalue nuclear exchange rather than using a simplified formula to estimate the parameters for triangular distributions that are then entered into Guesstimate (details here and here). After making these revisions, my estimate of the number of people that would be killed directly by nuclear detonations during a US-Russia nuclear exchange is about 51 million (90% confidence interval: 30 million — 75 million deaths) — ~43% more than my original estimate of 35 million (90% confidence interval: 23 million — 50 million deaths). The impacts that each individual change had on my results can be seen here. I've also added a bit more discussion on the probability that a countervalue nuclear exchange would escalate, and sensitivity analysis so that people who disagree with my views on this can see how the results change under more pessimistic assumptions. My sensitivity analysis shows that, if you're more pessimistic than me about the probability of countervalue targeting and escalation, around 88 million people would be killed in expectation during a US-Russia nuclear exchange (details here and here). Thanks again to those who offered feedback, and also to Jaime Sevilla, Ozzie Gooen, Max Daniel, and Marinella Capriati for feedback and technical support implementing the revisions. Project Overview This is the third post in Rethink Priorities' series on nuclear risks. In the first post, I look into which plausible nuclear exchange scenarios should worry us most, ranking them based on their potential to cause harm. 
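Rodriguez's Guesstimate model propagates uncertainty through its parameters to produce a 90% confidence interval. The same idea can be sketched as a toy Monte Carlo simulation; the distributions and parameter values below are invented placeholders for illustration, not her actual model:

```python
import random

# Toy Guesstimate-style model: total deaths = (uncertain number of
# targets hit) x (uncertain fatalities per detonation). Both factors are
# drawn from placeholder lognormal distributions each trial.
random.seed(0)  # fixed seed so the run is reproducible
samples = []
for _ in range(100_000):
    targets = random.lognormvariate(4.5, 0.4)      # hypothetical target count
    deaths_per = random.lognormvariate(11.5, 0.5)  # hypothetical deaths per target
    samples.append(targets * deaths_per)

# Empirical 90% interval: 5th and 95th percentiles of the samples.
samples.sort()
low, high = samples[5_000], samples[95_000]
print(f"90% CI: {low:,.0f} - {high:,.0f} deaths")
```

Sampling the whole product distribution, rather than multiplying point estimates, is what avoids the tail-truncation problem the update above describes: the extremes of each input survive into the output interval.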
In the second post, I explore the make-up and survivability of the US and R...

Increments
#29 - Some Scattered Thoughts on Superforecasting

Increments

Play Episode Listen Later Aug 16, 2021 45:20


We're back! Apologies for the delay, but Vaden got married and Ben was summoned to be an astronaut on the next billionaire's vacation to Venus. This week we're talking about how to forecast the future (with this one simple and easy trick! Astrologers hate them!). Specifically, we're diving into Philip Tetlock's work on Superforecasting (https://en.wikipedia.org/wiki/Superforecasting:_The_Art_and_Science_of_Prediction). So what's the deal? Is it possible to "harness the wisdom of the crowd to forecast world events" (https://en.wikipedia.org/wiki/The_Good_Judgment_Project)? Or is the whole thing just a result of sloppy statistics? We believe the latter is likely to be true with probability 64.9% - no, wait, 66.1%. Intro segment: "The Sentience Debate": The moral value of shrimps, insects, and oysters (https://www.facebook.com/103405457813911/videos/254164216090604) Relevant timestamps: 10:05: "Even if there's only a one in one hundred chance, or one in one thousand chance, that insects are sentient given current information, and if we're killing trillions or quadrillions of insects in ways that are preventable or avoidable or that we can in various ways mitigate that harm... then we should consider that possibility." 25:47: "If you're all going to work on pain in invertebrates, I pity you in many respects... In my previous work, I was used to running experiments and getting a clear answer, and I could say what these animals do and what they don't do. But when I started to think about what they might be feeling, you meet this frustration, that after maybe about 15 years of research, if someone asks me do they feel pain, my answer is 'maybe'... a strong 'maybe'... you cannot discount the possibility." 46:47: "It is not 100% clear to me that plants are non sentient. I do think that animals including insects are much more likely to be sentient than plants are, but I would not have a credence of zero that plants are sentient." 
1:01:59: "So the hard problem I would like to ask the panel is: If you were to compare the moral weight of one ant to the moral weight of one human, what ratio would you put? How much more is a human worth than an ant? 100:1? 1000:1? 10:1? Or maybe 1:1? ... Let's start with Jamie." Main References: Superforecasting: The Art and Science of Prediction - Wikipedia (https://en.wikipedia.org/wiki/Superforecasting:_The_Art_and_Science_of_Prediction) How Policymakers Can Improve Crisis Planning (https://www.foreignaffairs.com/articles/united-states/2020-10-13/better-crystal-ball) The Good Judgment Project - Wikipedia (https://en.wikipedia.org/wiki/The_Good_Judgment_Project) Expert Political Judgment: How Good Is It? How Can We Know?: Tetlock, Philip E.: 9780691128719: Books - Amazon.ca (https://www.amazon.ca/Expert-Political-Judgment-Good-Know/dp/0691128715) Additional references mentioned in the episode: The Drunkard's Walk: How Randomness Rules Our Lives (https://en.wikipedia.org/wiki/The_Drunkard%27s_Walk) The Black Swan: The Impact of the Highly Improbable - Wikipedia (https://en.wikipedia.org/wiki/The_Black_Swan:_The_Impact_of_the_Highly_Improbable) Book Review: Superforecasting | Slate Star Codex (https://slatestarcodex.com/2016/02/04/book-review-superforecasting/) Pandemic Uncovers the Limitations of Superforecasting – We Are Not Saved (https://wearenotsaved.com/2020/04/18/pandemic-uncovers-the-ridiculousness-of-superforecasting/) My Final Case Against Superforecasting (with criticisms considered, objections noted, and assumptions buttressed) – We Are Not Saved (https://wearenotsaved.com/2020/05/30/my-final-case-against-superforecasting-with-criticisms-considered-objections-noted-and-assumptions-buttressed/) Use your Good Judgement and send us email at incrementspodcast@gmail.com.

unSILOed with Greg LaBlanc
Perfectly Confident and Good Judgment: Your Keys to Succeeding in Life feat. Don Moore

unSILOed with Greg LaBlanc

Play Episode Listen Later Jun 4, 2021 53:55


Can overconfidence harm you? Despite popular belief, psychologists and experts state that overconfidence can also lead to failure. Avoiding a different perspective can limit your judgment. How much confidence is good? How do you make sure you're not underconfident? Don Moore, Associate Dean for Academic Affairs at Berkeley Haas, talks about his book Perfectly Confident: How to Calibrate Your Decisions Wisely. Together with host Greg, they unpack ways people can weigh their options properly. They cover topics like confirmation bias, the story of Jeff Rubin, and other tips on spotting overconfidence signals. Make sure to listen to the episode as he shares stories about the Good Judgment Project, where they forecast and weigh in on political and diplomatic events based on understanding people's perspectives. If you're looking for a book that'll guide the individual and team decision-making process, this one's definitely a great read!

Episode Quotes:

On confirmation bias causing us to miss signals that might affect our success:
"If we think about it in a Bayesian framework, one way that a lot of economists have modeled overconfidence is too tight a prior. That is, we're too sure that our noisy private signal is accurate and underestimate the error and noise. That's built into it as a product of our idiosyncratic history or excessive faith in our intuitive judgment. You can think of confirmation bias as underweighting the informational value and evidence that we encounter."

When does overconfidence benefit us?
"The public benefits of some circumstances in which overconfidence can lead to persistence, in excess of what expected value calculations might justify, don't then imply everyone should be overconfident. We can be grateful to those who make sacrifices at their own expense without wanting to follow suit."

How can we prevent hierarchical decisions from affecting the culture of diversity in the workplace?
"Trying to extract information from the crowd before the boss weighs in with his or her opinion. So one simple way to do that is, before a meeting where you have to come to some important decision, you force everybody to vote, to pre-commit, or to identify their preferred list of candidates or outcome options or whatever it is that you're talking about."

Show Links:
Order Book - Perfectly Confident: How to Calibrate Your Decisions Wisely
The Good Judgment Project

Other Books Mentioned:
Superforecasting: The Art and Science of Prediction
Thinking in Bets: Making Smarter Decisions When You Don't Have All the Facts
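The "too tight a prior" and "underweighting evidence" framings in the quote above can be made concrete with a small Bayesian sketch. This is a minimal illustration with made-up numbers, not anything computed on the episode: an overconfident agent overtrusts a noisy signal, and a confirmation-biased agent discounts disconfirming evidence before updating.

```python
def posterior(prior, assumed_accuracy):
    """Posterior that a hypothesis is true after one supporting signal,
    given the accuracy the agent *believes* the signal has."""
    num = prior * assumed_accuracy
    return num / (num + (1 - prior) * (1 - assumed_accuracy))

def update(prior, likelihood_ratio, weight=1.0):
    """Odds-form Bayesian update; weight < 1 models confirmation bias
    (the evidence's likelihood ratio is shrunk toward 1 before updating)."""
    lr = likelihood_ratio ** weight
    odds = prior / (1 - prior) * lr
    return odds / (1 + odds)

# Too tight a prior / overtrusted signal: the calibrated agent knows the
# signal is only 60% accurate; the overconfident agent believes it is 90%.
calibrated = posterior(0.5, 0.6)     # 0.60
overconfident = posterior(0.5, 0.9)  # 0.90

# Confirmation bias: 4-to-1 disconfirming evidence (likelihood ratio 0.25)
# should pull a 0.8 belief down to 0.5, but the underweighting agent barely moves.
full_weight = update(0.8, 0.25)         # 0.50
biased = update(0.8, 0.25, weight=0.3)  # ~0.73 -- belief is nearly unchanged
```

The gap between `full_weight` and `biased` is exactly the "underweighting the informational value of evidence" Moore describes.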

Pb Living - A daily book review
A Book Review - Superforecasting: The Art and Science of Prediction Book by Dan Gardner and Philip E. Tetlock

Pb Living - A daily book review

Play Episode Listen Later Aug 22, 2020 10:11


In Superforecasting, Tetlock and coauthor Dan Gardner offer a masterwork on prediction, drawing on decades of research and the results of a massive, government-funded forecasting tournament. The Good Judgment Project involves tens of thousands of ordinary people—including a Brooklyn filmmaker, a retired pipe installer, and a former ballroom dancer—who set out to forecast global events. Some of the volunteers have turned out to be astonishingly good. They've beaten other benchmarks, competitors, and prediction markets. They've even beaten the collective judgment of intelligence analysts with access to classified information. They are "superforecasters." In this groundbreaking and accessible book, Tetlock and Gardner show us how we can learn from this elite group. Weaving together stories of forecasting successes (the raid on Osama bin Laden's compound) and failures (the Bay of Pigs) and interviews with a range of high-level decision makers, from David Petraeus to Robert Rubin, they show that good forecasting doesn't require powerful computers or arcane methods. It involves gathering evidence from a variety of sources, thinking probabilistically, working in teams, keeping score, and being willing to admit error and change course. Superforecasting offers the first demonstrably effective way to improve our ability to predict the future—whether in business, finance, politics, international affairs, or daily life—and is destined to become a modern classic.

31 Days to a More Effective Compliance Program

Imagine that as a CCO, you could create a team which might well dramatically improve your company's compliance and risk forecasting ability, but to do so you would be required to expose just how unreliable the professional corporate forecasters have been. Could you do so and, more importantly, would you do so? Conventional professional forecasting is generally the predictive capability organizations have relied on, but the new "superforecasting" movement, led by Philip E. Tetlock and others, has been gaining strength to help improve it. The concepts around superforecasting came of age after the intelligence failures leading up to the Iraq War. This led to the founding of the Good Judgment Project, which had as a key component a multi-year predictive tournament: a series of gaming exercises pitting amateurs against professional intelligence analysts. The results of the Good Judgment Project were striking—the amateurs held their own and often won. Today, I explain its applicability to compliance. Three key takeaways: 1. Imagine you could create a team which might well dramatically improve your company's compliance and risk forecasting ability. 2. It is essential to track prediction outcomes and provide timely feedback to improve forecasting going forward. 3. Like any innovation, there must be a commitment from management to moving forward.
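The "track prediction outcomes" takeaway is usually implemented with the Brier score, the accuracy metric the Good Judgment Project used to rank forecasters: the squared error between a probability forecast and the 0/1 outcome, averaged over questions, with lower being better. A minimal sketch with hypothetical forecasts (the numbers are illustrative, not from the episode):

```python
def brier_score(forecast, outcome):
    """Squared error between a probability forecast (0..1) and a binary outcome (0 or 1)."""
    return (forecast - outcome) ** 2

# Hypothetical compliance-team track record: (probability assigned, what happened)
record = [(0.9, 1), (0.2, 0), (0.7, 0)]
avg = sum(brier_score(f, o) for f, o in record) / len(record)
# (0.01 + 0.04 + 0.49) / 3 = 0.18; always guessing 0.5 would average 0.25
```

Scoring every resolved question this way, and feeding the scores back to the forecasters, is the feedback loop the takeaway describes.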

EARadio
EAG 2019 SF: Fireside Chat with Philip Tetlock

EARadio

Play Episode Listen Later Oct 21, 2019 59:48


Philip Tetlock is an expert on forecasting; he’s spent decades studying how people make predictions — from political pundits to CIA analysts. In this conversation, he discusses a wide range of topics, including prediction algorithms, the long-term future, and his Good Judgment Project (which identified common traits of the most skilled forecasters). To learn more … Continue reading EAG 2019 SF: Fireside Chat with Philip Tetlock

NonProphets
Ep. 69: Marc Koehler interview

NonProphets

Play Episode Listen Later Jan 11, 2019


In Episode 69 of the NonProphets podcast, Atief, Robert, and Scott talk to Marc Koehler. Marc is Vice President of Good Judgment Inc., in charge of government relations and co-head of analytics, as well as an accomplished superforecaster. He was simultaneously the top forecaster on the internal Intelligence Community Prediction Market, on the Good Judgment Project superforecaster prediction market, and on Hypermind. Marc was formerly in the Foreign Service, with postings to the Office of the Vice President, the White House Situation Room, the National Security Council, the Pentagon, and the State Department, and overseas in China, Taiwan, South Korea, Nepal, Hong Kong, and Sweden. Marc talks with us about working with Vice President Dick Cheney and Scooter Libby (1:18); the Pentagon Joint Staff and the Office of Net Assessments (4:55); the ACE tournament, the Good Judgment Project, and ICPM forecasting—open-source vs. classified information (6:55); how superforecasters think (11:35); how to set up the best predictive intelligence capacity for the US Government (21:50); formulating the right questions (29:27); when the truth is not wanted—the North Korean plutonium reactor project in Syria (36:42); forecasting Kim Jong-un (39:39); China (47:28); whether we are headed into a Thucydides trap with China (49:07); the possibility of a new Taiwan Straits Crisis (54:55); and what's next for Good Judgment (1:00:04). As always, you can reach us at nonprophetspod.wordpress.com or at nonprophetspod@gmail.com. (recorded 01/04/2019)

NonProphets
Ep. 46: Sichere Prognosen (Bruno Jahn Interview)

NonProphets

Play Episode Listen Later Nov 19, 2017


Episode 46 of the NonProphets podcast, in which Robert and Scott interview superforecaster Bruno Jahn about his background in forecasting and his forthcoming book, Sichere Prognosen in unsicheren Zeiten (Certain Forecasts in Uncertain Times). We talk with Bruno about how he got started with the Good Judgment Project (01:04), how to tell the difference between accuracy and luck in forecasting (10:00), whether expertise can get in the way of forecasting (23:20), his book on forecasting (31:00), how forecasting can be an antidote to fake news (43:19), the value of Twitter and online information sources (55:07), what superforecasters are like (1:01:18), whether we will see a "Jamaica coalition" in Germany (1:08:35), and the collapse of center-left parties in Europe (1:14:28). As always, you may reach us at nonprophetspod.wordpress.com or nonprophetspod@gmail.com. (recorded 11/01/2017)

Trend Following with Michael Covel
Ep. 604: Mega Decision-Making Episode with Michael Covel on Trend Following Radio

Trend Following with Michael Covel

Play Episode Listen Later Nov 5, 2017 223:20


Today’s mega combo episode is Chris Voss, Robert Cialdini, Philip Tetlock, Spyros Makridakis, and Tim Ferriss. Chris Voss is the author of Never Split The Difference: Negotiating As If Your Life Depended On It. Chris is a former international hostage negotiator for the FBI. He has had an amazing career full of great experience and insights. Chris first entered the FBI in 1983 and has been involved with over 150 kidnapping cases. Robert Cialdini is best known for writing Influence: The Psychology of Persuasion, published back in 1984. Robert is the “go to man” for understanding effective persuasion. Reciprocity, commitment and consistency, social proof, authority, liking, and scarcity are the six key principles of influence he teaches. His new book, Pre-Suasion: A Revolutionary Way to Influence and Persuade, introduces a seventh key principle of influence. Philip Tetlock is a Canadian-American political science writer currently at The Wharton School of the University of Pennsylvania. He is right at the intersection of psychology, political science, and organizational behavior. His book, Superforecasting: The Art and Science of Prediction, is probabilistic thinking defined. Phil is also a co-principal investigator of The Good Judgment Project, a study on the art and science of prediction and forecasting. Spyros Makridakis is Rector of the Neapolis University of Pafos (NUP) and an Emeritus Professor of Decision Sciences at INSEAD as well as the University of Piraeus, and one of the world’s leading experts on forecasting, with many journal articles and books on the subject. He is organizer of the Makridakis Competitions, known in the forecasting literature as the M-Competitions. Tim Ferriss is an author, blogger, and motivational speaker known for his bestselling books. Tim has revolutionized the idea of writing a book; he has engineered the process of a bestseller.
In this episode of Trend Following Radio: aversion to negotiation; negotiating skills; never pretend people are rational; business negotiations compared to hostage negotiations; lying three times; "how" and "why" questions; what are superforecasters?; probabilistic thinking; looking at data; location independence; the 80/20 rule; known knowns, known unknowns, and unknown unknowns; uncertainty; and how to publish a book.

NonProphets
Ep. 41: Welton Chang Interview—From the DIA to Superforecasting

NonProphets

Play Episode Listen Later Sep 30, 2017 85:16


Episode 41 of the NonProphets podcast, in which Atief, Robert, and Scott interview Welton Chang, a fellow superforecaster and former Defense Intelligence Agency analyst who was stationed in South Korea and deployed twice to Iraq. He is currently a Ph.D. candidate in the Good Judgment laboratory at the University of Pennsylvania, with Phil Tetlock and Barbara Mellers as advisors. Military intelligence – Korea and Iraq (4:00). Confronting being wrong – the nature of judgment and cognition (7:15). Vizzini's Princess Bride conundrum (12:15) (https://www.youtube.com/watch?v=9s0UURBihH8). AI – algorithms and models – should we trust them, and the garbage-in, garbage-out problem (12:50). The spaghetti chart of Afghanistan: perhaps an accurate representation (18:45)? Limits of modern warfare – restrictions (22:30). Rationality – Trump, Kim, Rex, nukes (33:00). What is a good way to train forecasters? Welton's work helping develop training material for the Good Judgment Project (50:40). Improving group dynamics for better decisions (57:00). Bayes' theorem and practice (1:20:00). We close with Welton's cats @percyandportia, Instagram celebrities (1:21:20). As always, you can reach us at nonprophetspod.com or nonprophetspod@gmail.com. (recorded 9/20/2017)

Michael Covel's Trend Following
Ep. 425: Philip Tetlock Interview with Michael Covel on Trend Following Radio

Michael Covel's Trend Following

Play Episode Listen Later Feb 19, 2016 44:19


My guest today is Philip Tetlock, a Canadian-American political science writer currently at The Wharton School of the University of Pennsylvania. He is right at the intersection of psychology, political science, and organizational behavior. Phil is also a co-principal investigator of The Good Judgment Project, a study on the art and science of prediction and forecasting. The topic is his book Superforecasting: The Art and Science of Prediction. In this episode of Trend Following Radio we discuss: What are superforecasters? Probabilistic thinking. Looking at aggregate data. Jump in! --- I'm MICHAEL COVEL, the host of TREND FOLLOWING RADIO, and I'm proud to have delivered 10+ million podcast listens since 2012. Investments, economics, psychology, politics, decision-making, human behavior, entrepreneurship and trend following are all passionately explored and debated on my show. To start? I'd like to give you a great piece of advice you can use in your life and trading journey… cut your losses! You will find much more about that philosophy here: https://www.trendfollowing.com/trend/ You can watch a free video here: https://www.trendfollowing.com/video/ Can't get enough of this episode? You can choose from my thousand plus episodes here: https://www.trendfollowing.com/podcast My social media platforms: Twitter: @covel Facebook: @trendfollowing LinkedIn: @covel Instagram: @mikecovel Hope you enjoy my never-ending podcast conversation!

Trend Following with Michael Covel
Ep. 425: Philip Tetlock Interview with Michael Covel on Trend Following Radio

Trend Following with Michael Covel

Play Episode Listen Later Feb 18, 2016 44:19


Today on Trend Following Radio Michael Covel interviews Philip Tetlock. Phil is a Canadian-American political science writer currently at The Wharton School of the University of Pennsylvania. He is right at the intersection of psychology, political science, and organizational behavior. His book, “Superforecasting: The Art and Science of Prediction,” is probabilistic thinking defined. Phil is also a co-principal investigator of The Good Judgment Project, a study on the art and science of prediction and forecasting.

Michael starts off asking, “Regular folks can beat the experts at their own game?” Phil says essentially that is correct. He started The Good Judgment Project in 2011. It was based around forecasting and was funded by the government. He was shocked by the number of “regular” people he recruited for his study who were able to compete with, or do a better job predicting than, professionals working for agencies such as the NSA.

Michael and Phil move on to discussing the Iraq war. They discuss what the actual probability may have been of Saddam Hussein having weapons of mass destruction. George Bush claimed that it was a “slam dunk” when clearly there was not a 100% probability of weapons of mass destruction being there. Michael asks, “When is society going to adopt more of a probability mindset?” Phil says that soft, subjective human judgment is going by the wayside. Pundits saying, “Someday this will happen” without any real substance will come to a stop. As long as a forecaster can say, “This may happen in the future,” they can never really be held accountable for being wrong. Michael brings up the example of Robert Rubin. Robert worked for Goldman Sachs and served under Bill Clinton during his presidency. He was a great probabilistic thinker. Everyone loved him until the 2008 crash. Phil uses him as an example of even the best prediction people getting it wrong.

Bottom line, superforecasters look for aggregated data. They know there is interesting data lying around and they tend to look at crowd indicators heavily. The distinction between superforecasters and regular forecasters is their ability to start with the outside view and move to the inside slowly. Regular forecasters start with the inside view and rarely look at the outside view. Superforecasters also believe in fate less than regular forecasters do. When you highlight all the low-probability events surrounding outcomes, such as the lottery, many choose to think the event was decided by “fate” or was just “meant to be.” Superforecasters think, “Well, someone had to win, and they did.” In this episode of Trend Following Radio: What are superforecasters? Probabilistic thinking. Looking at aggregate data.
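The "aggregated data" point has a concrete counterpart in the Good Judgment Project's published research: averaging many independent forecasts and then extremizing the average (pushing it away from 0.5 in odds space) tended to beat the raw crowd mean. The sketch below is a rough illustration of that idea; the exponent is an arbitrary illustrative value, not the project's tuned parameter:

```python
def extremized_mean(probs, a=2.5):
    """Average probability forecasts, then extremize the mean in odds space:
    raising the odds to the power a > 1 pushes the result away from 0.5."""
    p = sum(probs) / len(probs)
    odds = (p / (1 - p)) ** a
    return odds / (1 + odds)

# Individual forecasters all leaning the same way; the extremized aggregate
# ends up more confident than any single forecaster (~0.82 here).
crowd = [0.6, 0.7, 0.65]
aggregate = extremized_mean(crowd)
```

The intuition: each forecaster only sees part of the evidence, so when they independently lean the same direction, the evidence taken together justifies a more extreme probability than any one of them reports.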

EdgeCast
Philip Tetlock - EDGE Master Class 2015: A Short Course in Superforecasting, Class V Part II [9.22.15]

EdgeCast

Play Episode Listen Later Sep 22, 2015 29:25


PHILIP E. TETLOCK (https://www.edge.org/memberbio/philip_tetlock), Political and Social Scientist, is the Annenberg University Professor at the University of Pennsylvania, with appointments in Wharton, psychology and political science. He is co-leader of the Good Judgment Project, a multi-year forecasting study, the author of Expert Political Judgment and (with Aaron Belkin) Counterfactual Thought Experiments in World Politics, and co-author (with Dan Gardner) of Superforecasting: The Art & Science of Prediction. Class V Part II: https://www.edge.org/conversation/philip_tetlock-edge-master-class-2015-a-short-course-in-superforecasting-class-v

EdgeCast
Philip Tetlock - EDGE Master Class 2015: A Short Course in Superforecasting, Class V Part I [9.22.15]

EdgeCast

Play Episode Listen Later Sep 22, 2015 24:42


PHILIP E. TETLOCK (https://www.edge.org/memberbio/philip_tetlock), Political and Social Scientist, is the Annenberg University Professor at the University of Pennsylvania, with appointments in Wharton, psychology and political science. He is co-leader of the Good Judgment Project, a multi-year forecasting study, the author of Expert Political Judgment and (with Aaron Belkin) Counterfactual Thought Experiments in World Politics, and co-author (with Dan Gardner) of Superforecasting: The Art & Science of Prediction. Class V Part I: https://www.edge.org/conversation/philip_tetlock-edge-master-class-2015-a-short-course-in-superforecasting-class-v

EdgeCast
Philip Tetlock - EDGE Master Class 2015: A Short Course in Superforecasting, Class IV Part II [9.15.15]

EdgeCast

Play Episode Listen Later Sep 15, 2015 34:14


PHILIP E. TETLOCK (https://www.edge.org/memberbio/philip_tetlock), Political and Social Scientist, is the Annenberg University Professor at the University of Pennsylvania, with appointments in Wharton, psychology and political science. He is co-leader of the Good Judgment Project, a multi-year forecasting study, the author of Expert Political Judgment and (with Aaron Belkin) Counterfactual Thought Experiments in World Politics, and co-author (with Dan Gardner) of Superforecasting: The Art & Science of Prediction. Class IV Part II: https://www.edge.org/conversation/philip_tetlock-edge-master-class-2015-a-short-course-in-superforecasting-class-iv

EdgeCast
Philip Tetlock - EDGE Master Class 2015: A Short Course in Superforecasting, Class III Part I [9.1.15]

EdgeCast

Play Episode Listen Later Sep 1, 2015 31:07


PHILIP E. TETLOCK (https://www.edge.org/memberbio/philip_tetlock), Political and Social Scientist, is the Annenberg University Professor at the University of Pennsylvania, with appointments in Wharton, psychology and political science. He is co-leader of the Good Judgment Project, a multi-year forecasting study, the author of Expert Political Judgment and (with Aaron Belkin) Counterfactual Thought Experiments in World Politics, and co-author (with Dan Gardner) of Superforecasting: The Art & Science of Prediction. Class III Part I: https://www.edge.org/conversation/philip_tetlock-edge-master-class-2015-a-short-course-in-superforecasting-class-iii

EdgeCast
Philip Tetlock - EDGE Master Class 2015: A Short Course in Superforecasting, Class III Part II [9.1.15]

EdgeCast

Play Episode Listen Later Sep 1, 2015 46:31


PHILIP E. TETLOCK (https://www.edge.org/memberbio/philip_tetlock), Political and Social Scientist, is the Annenberg University Professor at the University of Pennsylvania, with appointments in Wharton, psychology and political science. He is co-leader of the Good Judgment Project, a multi-year forecasting study, the author of Expert Political Judgment and (with Aaron Belkin) Counterfactual Thought Experiments in World Politics, and co-author (with Dan Gardner) of Superforecasting: The Art & Science of Prediction. Class III Part II: https://www.edge.org/conversation/philip_tetlock-edge-master-class-2015-a-short-course-in-superforecasting-class-iii

EdgeCast
Philip Tetlock - EDGE Master Class 2015: A Short Course in Superforecasting, Class II Part I [8.24.15]

EdgeCast

Play Episode Listen Later Aug 24, 2015 50:03


PHILIP E. TETLOCK (https://www.edge.org/memberbio/philip_tetlock), Political and Social Scientist, is the Annenberg University Professor at the University of Pennsylvania, with appointments in Wharton, psychology and political science. He is co-leader of the Good Judgment Project, a multi-year forecasting study, the author of Expert Political Judgment and (with Aaron Belkin) Counterfactual Thought Experiments in World Politics, and co-author (with Dan Gardner) of Superforecasting: The Art & Science of Prediction. Class II Part I: https://www.edge.org/conversation/philip_tetlock-edge-master-class-2015-a-short-course-in-superforecasting-class-ii

EdgeCast
Philip Tetlock - EDGE Master Class 2015: A Short Course in Superforecasting, Class II Part II [8.24.15]

EdgeCast

Play Episode Listen Later Aug 24, 2015 47:53


PHILIP E. TETLOCK (https://www.edge.org/memberbio/philip_tetlock), Political and Social Scientist, is the Annenberg University Professor at the University of Pennsylvania, with appointments in Wharton, psychology and political science. He is co-leader of the Good Judgment Project, a multi-year forecasting study, the author of Expert Political Judgment and (with Aaron Belkin) Counterfactual Thought Experiments in World Politics, and co-author (with Dan Gardner) of Superforecasting: The Art & Science of Prediction. Class II Part II: https://www.edge.org/conversation/philip_tetlock-edge-master-class-2015-a-short-course-in-superforecasting-class-ii

EdgeCast
Philip Tetlock - EDGE Master Class 2015: A Short Course in Superforecasting, Class I Part I [8.17.15]

EdgeCast

Play Episode Listen Later Aug 17, 2015 39:43


PHILIP E. TETLOCK (https://www.edge.org/memberbio/philip_tetlock), Political and Social Scientist, is the Annenberg University Professor at the University of Pennsylvania, with appointments in Wharton, psychology and political science. He is co-leader of the Good Judgment Project, a multi-year forecasting study, the author of Expert Political Judgment and (with Aaron Belkin) Counterfactual Thought Experiments in World Politics, and co-author (with Dan Gardner) of Superforecasting: The Art & Science of Prediction. Class I Part I: https://www.edge.org/conversation/philip_tetlock-edge-master-class-2015-a-short-course-in-superforecasting-class-i

EdgeCast
Philip Tetlock - EDGE Master Class 2015: A Short Course in Superforecasting, Class I Part II [8.17.15]

EdgeCast

Play Episode Listen Later Aug 17, 2015 45:03


PHILIP E. TETLOCK (https://www.edge.org/memberbio/philip_tetlock), Political and Social Scientist, is the Annenberg University Professor at the University of Pennsylvania, with appointments in Wharton, psychology and political science. He is co-leader of the Good Judgment Project, a multi-year forecasting study, the author of Expert Political Judgment and (with Aaron Belkin) Counterfactual Thought Experiments in World Politics, and co-author (with Dan Gardner) of Superforecasting: The Art & Science of Prediction. Class I Part II: https://www.edge.org/conversation/philip_tetlock-edge-master-class-2015-a-short-course-in-superforecasting-class-i