Podcasts about superforecasters

  • 29 podcasts
  • 36 episodes
  • 51m average duration
  • 1 new episode per month
  • Latest episode: Nov 22, 2024

POPULARITY

[Chart: popularity over time, 2017-2024]


Best podcasts about superforecasters

Latest podcast episodes about superforecasters

web3 with a16z
Prediction Markets and Beyond

web3 with a16z

Play Episode Listen Later Nov 22, 2024 108:05


with @atabarrok @skominers @smc90

We've heard a lot about the premise and the promise of prediction markets for a long time, but they finally hit the main stage with the most recent election. So what worked (and didn't) this time? Are they better than pollsters, journalists, domain experts, superforecasters?

In this conversation, we tease apart the hype from the reality of prediction markets, from the recent election to market foundations... going more deeply into the how, why, and where these markets work. We also discuss the design challenges and opportunities, including implications for builders throughout. And we cover other information aggregation mechanisms -- from peer prediction to others -- given that prediction markets are part of a broader category of information-elicitation and information-aggregation mechanisms.

Where do (and don't) blockchain and crypto technologies come in -- and what specific features (decentralization, transparency, real-time, open source, etc.) matter most, and in what contexts? Finally, we discuss applications for prediction and decision markets -- things we could do right away through to the near-to-distant future -- touching on everything from corporate decisions and scientific replication to trends like AI, DeSci, futarchy/governance, and more.

Our special expert guests are Alex Tabarrok, professor of economics at George Mason University and Chair in Economics at the Mercatus Center; and Scott Duke Kominers, research partner at a16z crypto and professor at Harvard Business School -- both in conversation with Sonal Chokshi.

RESOURCES (from links to research mentioned to more on the topics discussed):
The Use of Knowledge in Society by Friedrich Hayek (American Economic Review, 1945)
Everything is priced in by rsd99 (r/wallstreetbets, 2019)
Idea Futures (aka prediction markets, information markets) by Robin Hanson (1996)
Auctions: The Social Construction of Value by Charles Smith
Social value of public information by Stephen Morris and Hyun Song Shin (American Economic Review, December 2002)
Using prediction markets to estimate the reproducibility of scientific research by Anna Dreber, Thomas Pfeiffer, Johan Almenberg, Siri Isaksson, Brad Wilson, Yiling Chen, Brian Nosek, and Magnus Johannesson (Proceedings of the National Academy of Sciences, November 2015)
A solution to the single-question crowd wisdom problem by Dražen Prelec, Sebastian Seung, and John McCoy (Nature, January 2017)
Targeting high ability entrepreneurs using community information: Mechanism design in the field by Reshmaan Hussam, Natalia Rigol, and Benjamin Roth (American Economic Review, March 2022)
Information aggregation mechanisms: concept, design, and implementation for a sales forecasting problem by Charles Plott and Kay-Yut Chen, Hewlett Packard Laboratories (March 2002)
If I had a million [on deciding to dump the CEO or not] by Robin Hanson (2008)
Futarchy: Vote values, but bet beliefs by Robin Hanson (2013)
From prediction markets to info finance by Vitalik Buterin (November 2024)
Composability is innovation by Linda Xie (June 2021)
Composability is to software as compounding interest is to finance by Chris Dixon (October 2021)
Resources & research on DAOs, a16z crypto

80k After Hours
Highlights: #200 – Ezra Karger on what superforecasters and experts think about existential risks

80k After Hours

Play Episode Listen Later Sep 18, 2024 22:54


This is a selection of highlights from episode #200 of The 80,000 Hours Podcast. These aren't necessarily the most important, or even most entertaining, parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode: Ezra Karger on what superforecasters and experts think about existential risks.

And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.

Highlights:
Luisa's intro (00:00:00)
Why we need forecasts about existential risks (00:00:26)
Headline estimates of existential and catastrophic risks (00:02:43)
What explains disagreements about AI risks? (00:06:18)
Learning more doesn't resolve disagreements about AI risks (00:08:59)
A lot of disagreement about AI risks is about when AI will pose risks (00:11:31)
Cruxes about AI risks (00:15:17)
Is forecasting actually useful in the real world? (00:18:24)

Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong

80,000 Hours Podcast with Rob Wiblin
#200 – Ezra Karger on what superforecasters and experts think about existential risks

80,000 Hours Podcast with Rob Wiblin

Play Episode Listen Later Sep 4, 2024 169:24


"It's very hard to find examples where people say, 'I'm starting from this point. I'm starting from this belief.' So we wanted to make that very legible to people. We wanted to say, 'Experts think this; accurate forecasters think this.' They might both be wrong, but we can at least start from here and figure out where we're coming into a discussion and say, 'I am much less concerned than the people in this report; or I am much more concerned, and I think people in this report were missing major things.' But if you don't have a reference set of probabilities, I think it becomes much harder to talk about disagreement in policy debates in a space that's so complicated like this." —Ezra KargerIn today's episode, host Luisa Rodriguez speaks to Ezra Karger — research director at the Forecasting Research Institute — about FRI's recent Existential Risk Persuasion Tournament to come up with estimates of a range of catastrophic risks.Links to learn more, highlights, and full transcript.They cover:How forecasting can improve our understanding of long-term catastrophic risks from things like AI, nuclear war, pandemics, and climate change.What the Existential Risk Persuasion Tournament (XPT) is, how it was set up, and the results.The challenges of predicting low-probability, high-impact events.Why superforecasters' estimates of catastrophic risks seem so much lower than experts', and which group Ezra puts the most weight on.The specific underlying disagreements that superforecasters and experts had about how likely catastrophic risks from AI are.Why Ezra thinks forecasting tournaments can help build consensus on complex topics, and what he wants to do differently in future tournaments and studies.Recent advances in the science of forecasting and the areas Ezra is most excited about exploring next.Whether large language models could help or outperform human forecasters.How people can improve their calibration and start making better forecasts personally.Why Ezra thinks high-quality forecasts are relevant to policymakers, and whether they can really improve decision-making.And plenty more.Chapters:Cold open (00:00:00)Luisa's intro (00:01:07)The interview begins (00:02:54)The Existential Risk Persuasion Tournament (00:05:13)Why is this project important? (00:12:34)How was the tournament set up? (00:17:54)Results from the tournament (00:22:38)Risk from artificial intelligence (00:30:59)How to think about these numbers (00:46:50)Should we trust experts or superforecasters more? (00:49:16)The effect of debate and persuasion (01:02:10)Forecasts from the general public (01:08:33)How can we improve people's forecasts? (01:18:59)Incentives and recruitment (01:26:30)Criticisms of the tournament (01:33:51)AI adversarial collaboration (01:46:20)Hypotheses about stark differences in views of AI risk (01:51:41)Cruxes and different worldviews (02:17:15)Ezra's experience as a superforecaster (02:28:57)Forecasting as a research field (02:31:00)Can large language models help or outperform human forecasters? (02:35:01)Is forecasting valuable in the real world? (02:39:11)Ezra's book recommendations (02:45:29)Luisa's outro (02:47:54)Producer: Keiran HarrisAudio engineering: Dominic Armstrong, Ben Cordell, Milo McGuire, and Simon MonsourContent editing: Luisa Rodriguez, Katy Moore, and Keiran HarrisTranscriptions: Katy Moore

Increments
#69 - Contra Scott Alexander on Probability

Increments

Play Episode Listen Later Jun 20, 2024 105:09


After four episodes spent fawning over Scott Alexander's "Non-libertarian FAQ", we turn around and attack the good man instead. In this episode we respond to Scott's piece "In Continued Defense of Non-Frequentist Probabilities", and respond to each of his five arguments defending Bayesian probability. Like moths to a flame, we apparently cannot let the probability subject slide, sorry people. But the good news is that before getting there, you get to hear about some therapists and pedophiles (therapeutic pedophilia?). What's the probability that Scott changes his mind based on this episode?

We discuss:
Why we're not defending frequentism as a philosophy
The Bayesian interpretation of probability
The importance of being explicit about assumptions
Why it's insane to think that 50% should mean both "equally likely" and "I have no effing idea"
Why Scott's interpretation of probability is crippling our ability to communicate
How super are Superforecasters?
Marginal versus conditional guarantees (this is exactly as boring as it sounds)
How to pronounce Samotsvety, and are they Italian or Eastern European or what?

References:
In Continued Defense Of Non-Frequentist Probabilities (https://www.astralcodexten.com/p/in-continued-defense-of-non-frequentist)
Article on superforecasting by Gavin Leech and Misha Yagudin (https://progress.institute/can-policymakers-trust-forecasters/)
Essay by Michael Story on superforecasting (https://www.samstack.io/p/five-questions-for-michael-story)
Existential risk tournament: Superforecasters vs AI doomers (https://forecastingresearch.org/news/results-from-the-2022-existential-risk-persuasion-tournament) and Ben's blogpost about it (https://benchugg.com/writing/superforecasting/)
The Good Judgment Project (https://goodjudgment.com/)

Quotes:

"During the pandemic, Dominic Cummings said some of the most useful stuff that he received and circulated in the British government was not forecasting. It was qualitative information explaining the general model of what's going on, which enabled decision-makers to think more clearly about their options for action and the likely consequences. If you're worried about a new disease outbreak, you don't just want a percentage probability estimate about future case numbers, you want an explanation of how the virus is likely to spread, what you can do about it, how you can prevent it." - Michael Story (https://www.samstack.io/p/five-questions-for-michael-story)

"Is it bad that one term can mean both perfect information (as in 1) and total lack of information (as in 3)? No. This is no different from how we discuss things when we're not using probability. Do vaccines cause autism? No. Does drinking monkey blood cause autism? Also no. My evidence on the vaccines question is dozens of excellent studies, conducted so effectively that we're as sure about this as we are about anything in biology. My evidence on the monkey blood question is that nobody's ever proposed this and it would be weird if it were true. Still, it's perfectly fine to say the single-word answer "no" to both of them to describe where I currently stand. If someone wants to know how much evidence/certainty is behind my "no", they can ask, and I'll tell them." - SA, Section 2

Socials:
Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani
Come join our discord server! DM us on twitter or send us an email to get a supersecret link
Help us calibrate our credences and get exclusive bonus content by becoming a patreon subscriber here (https://www.patreon.com/Increments). Or give us one-time cash donations to help cover our lack of cash donations here (https://ko-fi.com/increments).
Click dem like buttons on youtube (https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ)

What's your credence in Bayesianism? Tell us over at incrementspodcast@gmail.com.

The Nonlinear Library
EA - Summary of Situational Awareness - The Decade Ahead by OscarD

The Nonlinear Library

Play Episode Listen Later Jun 8, 2024 30:29


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Summary of Situational Awareness - The Decade Ahead, published by OscarD on June 8, 2024 on The Effective Altruism Forum. Original by Leopold Aschenbrenner; this summary is not commissioned or endorsed by him.

Short Summary
Extrapolating existing trends in compute, spending, algorithmic progress, and energy needs implies AGI (remote jobs being completely automatable) by ~2027.
AGI will greatly accelerate AI research itself, leading to vastly superhuman intelligences being created ~1 year after AGI.
Superintelligence will confer a decisive strategic advantage militarily by massively accelerating all spheres of science and technology.
Electricity use will be a bigger bottleneck on scaling datacentres than investment, but is still doable domestically in the US by using natural gas.
AI safety efforts in the US will be mostly irrelevant if other actors steal the model weights of an AGI. US AGI research must employ vastly better cybersecurity, to protect both model weights and algorithmic secrets.
Aligning superhuman AI systems is a difficult technical challenge, but probably doable, and we must devote lots of resources towards this.
China is still competitive in the AGI race, and China being first to superintelligence would be very bad because it may enable a stable totalitarian world regime. So the US must win to preserve a liberal world order.
Within a few years both the CCP and USG will likely 'wake up' to the enormous potential and nearness of superintelligence, and devote massive resources to 'winning'. USG will nationalise AGI R&D to improve security and avoid secrets being stolen, and to prevent unconstrained private actors from becoming the most powerful players in the world.
This means much of existing AI governance work focused on AI company regulations is missing the point, as AGI will soon be nationalised.
This is just one story of how things could play out, but a very plausible and scarily soon and dangerous one.

I. From GPT-4 to AGI: Counting the OOMs

Past AI progress
Increases in 'effective compute' have led to consistent increases in model performance over several years and many orders of magnitude (OOMs). GPT-2 was akin to roughly a preschooler level of intelligence (able to piece together basic sentences sometimes), GPT-3 at the level of an elementary schooler (able to do some simple tasks with clear instructions), and GPT-4 similar to a smart high-schooler (able to write complicated functional code, long coherent essays, and answer somewhat challenging maths questions).

Superforecasters and experts have consistently underestimated future improvements in model performance, for instance:
The creators of the MATH benchmark expected that "to have more traction on mathematical problem solving we will likely need new algorithmic advancements from the broader research community". But within a year of the benchmark's release, state-of-the-art (SOTA) models went from 5% to 50% accuracy, and are now above 90%.
Professional forecasts made in August 2021 expected the MATH benchmark score of SOTA models to be 12.7% in June 2022, but the actual score was 50%.
Experts like Yann LeCun and Gary Marcus have falsely predicted that deep learning will plateau.
Bryan Caplan is on track to lose a public bet for the first time ever after GPT-4 got an A on his economics exam just two months after he bet no AI could do this by 2029.

We can decompose recent progress into three main categories:
Compute: GPT-2 was trained in 2019 with an estimated 4e21 FLOP, and GPT-4 was trained in 2023 with an estimated 8e24 to 4e25 FLOP.[1] This is both because of hardware improvements (Moore's Law) and increases in compute budgets for training runs. Adding more compute at test-time, e.g. by running many copies of an AI, to allow for debate and delegation between each instance, could further boos...
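For readers who want the arithmetic behind the compute comparison above, here is a quick sketch of the order-of-magnitude gap implied by the FLOP figures quoted in the summary (the figures come from the text and are not independently verified):

```python
# Rough log10 comparison of the training-compute figures quoted above.
import math

gpt2_flop = 4e21                             # GPT-2 (2019), as quoted
gpt4_flop_low, gpt4_flop_high = 8e24, 4e25   # GPT-4 (2023), as quoted

print(f"{math.log10(gpt4_flop_low / gpt2_flop):.1f} OOMs")   # ~3.3 orders of magnitude
print(f"{math.log10(gpt4_flop_high / gpt2_flop):.1f} OOMs")  # ~4.0 orders of magnitude
```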

What It's Like To Be...
(BONUS) From Choiceology: “The Superforecasters”

What It's Like To Be...

Play Episode Listen Later Mar 19, 2024 38:28


We've got a special bonus episode for you this week! Our friends over at Choiceology, the podcast hosted by Katy Milkman, explore the lessons that can be gleaned from behavioral economics. They tell true stories with high stakes, sharing what the latest research tells us about making better decisions. In this episode, called “The Superforecasters”, they examine President Barack Obama's decision to move forward with the raid on what was thought to be Osama bin Laden's compound in Pakistan back in 2011. But, as you'll hear, there were doubts, and the costs of getting the mission wrong would have been massive. How was that decision made? And what can it tell us about how to make better predictions?

Katy Milkman is a professor at the Wharton School at the University of Pennsylvania and the author of the national bestseller How to Change. You can learn more about Choiceology at the show's website.

Follow us on Instagram!

Got a comment or suggestion for us? You can reach us via email at jobs@whatitslike.com

Want to be on the show? Leave a message on our voice mailbox at (919) 213-0456. We'll ask you to answer two questions: What do people think your job is like and what is it actually like? What's a word or phrase that only someone from your profession would be likely to know, and what does it mean?

The Nonlinear Library
LW - Superforecasting the Origins of the Covid-19 Pandemic by DanielFilan

The Nonlinear Library

Play Episode Listen Later Mar 13, 2024 2:15


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Superforecasting the Origins of the Covid-19 Pandemic, published by DanielFilan on March 13, 2024 on LessWrong. The Good Judgement Project got some superforecasters to retrocast whether COVID started via zoonotic spillover or a lab leak. They in aggregate gave a 75% chance of zoonosis, but there was a range of views. GJP's executive summary is at the end of this linkpost. Here is a link to the summary of the report on their substack, and here is a link to the full report (which is a total of 6 pages of content). h/t John Halstead for drawing my attention to this. Superforecasters assess that natural zoonosis is three times more likely to be the cause of the Covid-19 pandemic than either a biomedical research-related accident or some other process or mechanism. Asked to assign a probability to what caused the emergence of SARS-CoV-2 in human populations, more than 50 Superforecasters engaged in extensive online discussions starting on December 1, 2023. In aggregate, they assessed that the pandemic was: 74% likely to have been caused by natural zoonosis (meaning that SARS-CoV-2 emerged in human populations as the result of the infection of a person with coronavirus directly from a naturally infected non-human animal); 25% likely to have been caused by a biomedical research-related accident (meaning that SARS-CoV-2 emerged in human populations as the result of the accidental infection of a laboratory worker with a natural coronavirus; or the accidental infection of researchers with a natural coronavirus during biomedical fieldwork; or the accidental infection of a laboratory worker with an engineered coronavirus; "research" includes civilian biomedical, biodefense, and bioweapons research); 1% likely to have been caused by some other process or mechanism (to include possibilities like the deliberate release of the virus into human populations, irrespective of whether it was an act in accordance with state policy, or the development of the virus due to drug resistance in humans). The Superforecasters made more than 750 comments when developing their assessments. This survey was conducted in the period from December 2023 to February 2024. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

The Nonlinear Library: LessWrong
LW - Superforecasting the Origins of the Covid-19 Pandemic by DanielFilan

The Nonlinear Library: LessWrong

Play Episode Listen Later Mar 13, 2024 2:15


Link to original article. Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Superforecasting the Origins of the Covid-19 Pandemic, published by DanielFilan on March 13, 2024 on LessWrong. The Good Judgement Project got some superforecasters to retrocast whether COVID started via zoonotic spillover or a lab leak. They in aggregate gave a 75% chance of zoonosis, but there was a range of views. GJP's executive summary is at the end of this linkpost. Here is a link to the summary of the report on their substack, and here is a link to the full report (which is a total of 6 pages of content). h/t John Halstead for drawing my attention to this. Superforecasters assess that natural zoonosis is three times more likely to be the cause of the Covid-19 pandemic than either a biomedical research-related accident or some other process or mechanism. Asked to assign a probability to what caused the emergence of SARS-CoV-2 in human populations, more than 50 Superforecasters engaged in extensive online discussions starting on December 1, 2023. In aggregate, they assessed that the pandemic was: 74% likely to have been caused by natural zoonosis (meaning that SARS-CoV-2 emerged in human populations as the result of the infection of a person with coronavirus directly from a naturally infected non-human animal); 25% likely to have been caused by a biomedical research-related accident (meaning that SARS-CoV-2 emerged in human populations as the result of the accidental infection of a laboratory worker with a natural coronavirus; or the accidental infection of researchers with a natural coronavirus during biomedical fieldwork; or the accidental infection of a laboratory worker with an engineered coronavirus; "research" includes civilian biomedical, biodefense, and bioweapons research); 1% likely to have been caused by some other process or mechanism (to include possibilities like the deliberate release of the virus into human populations, irrespective of whether it was an act in accordance with state policy, or the development of the virus due to drug resistance in humans). The Superforecasters made more than 750 comments when developing their assessments. This survey was conducted in the period from December 2023 to February 2024. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

The Nonlinear Library
EA - Language models surprised us by Ajeya

The Nonlinear Library

Play Episode Listen Later Aug 30, 2023 11:22


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Language models surprised us, published by Ajeya on August 30, 2023 on The Effective Altruism Forum. Note: This post was crossposted from Planned Obsolescence by the Forum team, with the author's permission. The author may not see or respond to comments on this post.

Most experts were surprised by progress in language models in 2022 and 2023. There may be more surprises ahead, so experts should register their forecasts now about 2024 and 2025. Kelsey Piper co-drafted this post. Thanks also to Isabel Juniewicz for research help.

If you read media coverage of ChatGPT - which called it 'breathtaking', 'dazzling', 'astounding' - you'd get the sense that large language models (LLMs) took the world completely by surprise. Is that impression accurate? Actually, yes. There are a few different ways to attempt to measure the question "Were experts surprised by the pace of LLM progress?" but they broadly point to the same answer: ML researchers, superforecasters, and most others were all surprised by the progress in large language models in 2022 and 2023.

Competitions to forecast difficult ML benchmarks
ML benchmarks are sets of problems which can be objectively graded, allowing relatively precise comparison across different models. We have data from forecasting competitions done in 2021 and 2022 on two of the most comprehensive and difficult ML benchmarks: the MMLU benchmark and the MATH benchmark. First, what are these benchmarks? The MMLU dataset consists of multiple choice questions in a variety of subjects collected from sources like GRE practice tests and AP tests. It was intended to test subject matter knowledge in a wide variety of professional domains. MMLU questions are legitimately quite difficult: the average person would probably struggle to solve them. At the time of its introduction in September 2020, most models only performed close to random chance on MMLU (~25%), while GPT-3 performed significantly better than chance at 44%. The benchmark was designed to be harder than any that had come before it, and the authors described their motivation as closing the gap between performance on benchmarks and "true language understanding":

Natural Language Processing (NLP) models have achieved superhuman performance on a number of recently proposed benchmarks. However, these models are still well below human level performance for language understanding as a whole, suggesting a disconnect between our benchmarks and the actual capabilities of these models.

Meanwhile, the MATH dataset consists of free-response questions taken from math contests aimed at the best high school math students in the country. Most college-educated adults would get well under half of these problems right (the authors used computer science undergraduates as human subjects, and their performance ranged from 40% to 90%). At the time of its introduction in January 2021, the best model achieved only about ~7% accuracy on MATH. The authors say:

We find that accuracy remains low even for the best models. Furthermore, unlike for most other text-based datasets, we find that accuracy is increasing very slowly with model size. If trends continue, then we will need algorithmic improvements, rather than just scale, to make substantial progress on MATH.
So, these are both hard benchmarks - the problems are difficult for humans, the best models got low performance when the benchmarks were introduced, and the authors seemed to imply it would take a while for performance to get really good. In mid-2021, ML professor Jacob Steinhardt ran a contest with superforecasters at Hypermind to predict progress on MATH and MMLU. Superforecasters massively undershot reality in both cases. They predicted that performance on MMLU would improve moderately from 44% in 2021 to 57% by June 2022. The actual performance was 68%, which s...

The Nonlinear Library
EA - What do XPT forecasts tell us about AI risk? by Forecasting Research Institute

The Nonlinear Library

Play Episode Listen Later Jul 19, 2023 51:59


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What do XPT forecasts tell us about AI risk?, published by Forecasting Research Institute on July 19, 2023 on The Effective Altruism Forum. This post was co-authored by the Forecasting Research Institute and Rose Hadshar. Thanks to Josh Rosenberg for managing this work, Zachary Jacobs and Molly Hickman for the underlying data analysis, Coralie Consigny and Bridget Williams for fact-checking and copy-editing, the whole FRI XPT team for all their work on this project, and our external reviewers.

In 2022, the Forecasting Research Institute (FRI) ran the Existential Risk Persuasion Tournament (XPT). From June through October 2022, 169 forecasters, including 80 superforecasters and 89 experts, developed forecasts on various questions related to existential and catastrophic risk. Forecasters moved through a four-stage deliberative process that was designed to incentivize them not only to make accurate predictions but also to provide persuasive rationales that boosted the predictive accuracy of others' forecasts. Forecasters stopped updating their forecasts on 31st October 2022, and are not currently updating on an ongoing basis. FRI plans to run future iterations of the tournament, and open up the questions more broadly for other forecasters. You can see the overall results of the XPT here.

Some of the questions were related to AI risk. This post:
Sets out the XPT forecasts on AI risk, and puts them in context.
Lays out the arguments given in the XPT for and against these forecasts.
Offers some thoughts on what these forecasts and arguments show us about AI risk.

TL;DR
XPT superforecasters predicted that catastrophic and extinction risk from AI by 2030 is very low (0.01% catastrophic risk and 0.0001% extinction risk).
XPT superforecasters predicted that catastrophic risk from nuclear weapons by 2100 is almost twice as likely as catastrophic risk from AI by 2100 (4% vs 2.13%).
XPT superforecasters predicted that extinction risk from AI by 2050 and 2100 is roughly an order of magnitude larger than extinction risk from nuclear, which in turn is an order of magnitude larger than non-anthropogenic extinction risk (see here for details).
XPT superforecasters more than quadruple their forecasts for AI extinction risk by 2100 if conditioned on AGI or TAI by 2070 (see here for details).
XPT domain experts predicted that AI extinction risk by 2100 is far greater than XPT superforecasters do (3% for domain experts, and 0.38% for superforecasters by 2100).
Although XPT superforecasters and experts disagreed substantially about AI risk, both superforecasters and experts still prioritized AI as an area for marginal resource allocation (see here for details).
It's unclear how accurate these forecasts will prove, particularly as superforecasters have not been evaluated on this timeframe before.

The forecasts
In the table below, we present forecasts from the following groups:
Superforecasters: median forecast across superforecasters in the XPT.
Domain experts: median forecasts across all AI experts in the XPT. (See our discussion of aggregation choices (pp. 20-22) for why we focus on medians.)

Question | Forecasters | N | 2030 | 2050 | 2100
AI Catastrophic risk (>10% of humans die within 5 years) | Superforecasters | 88 | 0.01% | 0.73% | 2.13%
AI Catastrophic risk (>10% of humans die within 5 years) | Domain experts | 30 | 0.35% | 5% | 12%
AI Extinction risk (human population

The Nonlinear Library
LW - Existential Risk Persuasion Tournament by PeterMcCluskey

The Nonlinear Library

Play Episode Listen Later Jul 18, 2023 13:31


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Existential Risk Persuasion Tournament, published by PeterMcCluskey on July 17, 2023 on LessWrong.

I participated last summer in Tetlock's Existential Risk Persuasion Tournament (755(!) page paper here). Superforecasters and "subject matter experts" engaged in a hybrid between a prediction market and debates, to predict catastrophic and existential risks this century. I signed up as a superforecaster. My impression was that I knew as much about AI risk as any of the subject matter experts with whom I interacted (the tournament was divided up so that I was only aware of a small fraction of the 169 participants). I didn't notice anyone with substantial expertise in machine learning. Experts were apparently chosen based on having some sort of respectable publication related to AI, nuclear, climate, or biological catastrophic risks. Those experts were more competent, in one of those fields, than news media pundits or politicians. I.e. they're likely to be more accurate than random guesses. But maybe not by a large margin. That expertise leaves much to be desired. I'm unsure whether there was a realistic way for the sponsors to attract better experts. There seems to be not enough money or prestige to attract the very best experts.

Incentives
The success of the superforecasting approach depends heavily on forecasters having decent incentives. It's tricky to give people incentives to forecast events that will be evaluated in 2100, or evaluated after humans go extinct. The tournament provided a fairly standard scoring rule for questions that resolve by 2030. That's a fairly safe way to get parts of the tournament to work well. The other questions were scored by how well the forecast matched the median forecast of other participants (excluding participants that the forecasters interacted with). It's hard to tell whether that incentive helped or hurt the accuracy of the forecasts. It's easy to imagine that it discouraged forecasters from relying on evidence that is hard to articulate, or hard to verify. It provided an incentive for groupthink. But the overall incentives were weak enough that altruistic pursuit of accuracy might have prevailed. Or ideological dogmatism might have prevailed. It will take time before we have even weak evidence as to which was the case.

One incentive that occurred to me toward the end of the tournament was the possibility of getting a verified longterm forecasting track record. Suppose that in 2050 they redo the scores based on evidence available then, and I score in the top 10% of tournament participants. That would likely mean that I'm one of maybe a dozen people in the world with a good track record for forecasting 28 years into the future. I can imagine that being valuable enough for someone to revive me from cryonic suspension when I'd otherwise be forgotten. There were some sort of rewards for writing comments that influenced other participants. I didn't pay much attention to those.

Quality of the Questions
There were many questions loosely related to AGI timelines, none of them quite satisfying my desire for something closely related to extinction risk that could be scored before it's too late to avoid the risk. One question was based on a Metaculus forecast for an advanced AI. It seems to represent clear progress toward the kind of AGI that could cause dramatic changes. But I expect important disagreements over how much progress it represents: what scale should we use to decide how close such an AI is to a dangerous AI? Does the Turing test use judges who have expertise in finding the AI's weaknesses? Another question was about when Nick Bostrom will decide that an AGI exists. Or if he doesn't say anything clear, then a panel of experts will guess what Bostrom would say. That's pretty close to a good question to forecast. Can we assume tha...

The Nonlinear Library: LessWrong
LW - Existential Risk Persuasion Tournament by PeterMcCluskey

The Nonlinear Library: LessWrong

Play Episode Listen Later Jul 18, 2023 13:31


Link to original article. Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Existential Risk Persuasion Tournament, published by PeterMcCluskey on July 17, 2023 on LessWrong.

I participated last summer in Tetlock's Existential Risk Persuasion Tournament (755(!) page paper here). Superforecasters and "subject matter experts" engaged in a hybrid between a prediction market and debates, to predict catastrophic and existential risks this century. I signed up as a superforecaster. My impression was that I knew as much about AI risk as any of the subject matter experts with whom I interacted (the tournament was divided up so that I was only aware of a small fraction of the 169 participants). I didn't notice anyone with substantial expertise in machine learning. Experts were apparently chosen based on having some sort of respectable publication related to AI, nuclear, climate, or biological catastrophic risks. Those experts were more competent, in one of those fields, than news media pundits or politicians. I.e. they're likely to be more accurate than random guesses. But maybe not by a large margin. That expertise leaves much to be desired. I'm unsure whether there was a realistic way for the sponsors to attract better experts. There seems to be not enough money or prestige to attract the very best experts.

Incentives
The success of the superforecasting approach depends heavily on forecasters having decent incentives. It's tricky to give people incentives to forecast events that will be evaluated in 2100, or evaluated after humans go extinct. The tournament provided a fairly standard scoring rule for questions that resolve by 2030. That's a fairly safe way to get parts of the tournament to work well. The other questions were scored by how well the forecast matched the median forecast of other participants (excluding participants that the forecasters interacted with). It's hard to tell whether that incentive helped or hurt the accuracy of the forecasts. It's easy to imagine that it discouraged forecasters from relying on evidence that is hard to articulate, or hard to verify. It provided an incentive for groupthink. But the overall incentives were weak enough that altruistic pursuit of accuracy might have prevailed. Or ideological dogmatism might have prevailed. It will take time before we have even weak evidence as to which was the case.

One incentive that occurred to me toward the end of the tournament was the possibility of getting a verified longterm forecasting track record. Suppose that in 2050 they redo the scores based on evidence available then, and I score in the top 10% of tournament participants. That would likely mean that I'm one of maybe a dozen people in the world with a good track record for forecasting 28 years into the future. I can imagine that being valuable enough for someone to revive me from cryonic suspension when I'd otherwise be forgotten. There were some sort of rewards for writing comments that influenced other participants. I didn't pay much attention to those.

Quality of the Questions
There were many questions loosely related to AGI timelines, none of them quite satisfying my desire for something closely related to extinction risk that could be scored before it's too late to avoid the risk. One question was based on a Metaculus forecast for an advanced AI. It seems to represent clear progress toward the kind of AGI that could cause dramatic changes. But I expect important disagreements over how much progress it represents: what scale should we use to decide how close such an AI is to a dangerous AI? Does the Turing test use judges who have expertise in finding the AI's weaknesses? Another question was about when Nick Bostrom will decide that an AGI exists. Or if he doesn't say anything clear, then a panel of experts will guess what Bostrom would say. That's pretty close to a good question to forecast. Can we assume tha...

Choiceology with Katy Milkman
The Superforecasters: With Guests Leon Panetta, Peter Bergen & Barbara Mellers

Choiceology with Katy Milkman

Play Episode Listen Later Jun 5, 2023 37:48


There are moments in life where it seems as though everything is riding on one important decision. If only we had a crystal ball to see the future, we could make those decisions with greater confidence. Fortune-telling aside, there are actually methods to improve our predictions—and our decisions.

In this episode of Choiceology with Katy Milkman, we look at what makes some people “superforecasters.” In 2010, the United States government had been looking for Al Qaeda leader and perpetrator of the 9/11 attacks, Osama bin Laden, for nearly a decade. Years of intelligence gathering all over the world had come up short. It seemed every new tip was a dead end. But one small group of CIA analysts uncovered a tantalizing clue that led them to a compound in Pakistan. Soon, the president of the United States would be faced with a difficult choice: to approve the top-secret mission or not.

We will hear this story from two perspectives. Peter Bergen is a national security commentator and author of the book The Rise and Fall of Osama bin Laden. He interviewed Osama bin Laden in 1997. Former CIA director Leon Panetta led the United States government's hunt for bin Laden and describes the night his mission came to a dramatic conclusion.

Next, Katy speaks with Barbara Mellers about research that shows how so-called superforecasters make more accurate predictions despite facing uncertainty and conflicting information. You can read more in the paper titled "Identifying and Cultivating Superforecasters as a Method of Improving Probabilistic Predictions." Barbara Mellers is the I. George Heyman University Professor of both marketing at the Wharton School and of psychology at the School of Arts and Sciences at the University of Pennsylvania.

Choiceology is an original podcast from Charles Schwab. For more on the series, visit schwab.com/podcast. If you enjoy the show, please leave a ⭐⭐⭐⭐⭐ rating or review on Apple Podcasts.

Important Disclosures
All expressions of opinion are subject to change without notice in reaction to shifting market conditions. The comments, views, and opinions expressed in the presentation are those of the speakers and do not necessarily represent the views of Charles Schwab. Data contained herein from third-party providers is obtained from what are considered reliable sources. However, its accuracy, completeness or reliability cannot be guaranteed. The policy analysis provided by Charles Schwab & Co., Inc., does not constitute and should not be interpreted as an endorsement of any political party. All corporate names are for illustrative purposes only and are not a recommendation, offer to sell, or a solicitation of an offer to buy any security. Investing involves risk, including loss of principal. The book, How to Change: The Science of Getting from Where You Are to Where You Want to Be, is not affiliated with, sponsored by, or endorsed by Charles Schwab & Co., Inc. (CS&Co.). Charles Schwab & Co., Inc. (CS&Co.) has not reviewed the book and makes no representations about its content. Apple Podcasts and the Apple logo are trademarks of Apple Inc., registered in the U.S. and other countries. Google Podcasts and the Google Podcasts logo are trademarks of Google LLC. Spotify and the Spotify logo are registered trademarks of Spotify AB. (0623-3UG1)

The Nonlinear Library
EA - Wisdom of the Crowd vs. "the Best of the Best of the Best" by nikos

The Nonlinear Library

Play Episode Listen Later Apr 5, 2023 20:37


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Wisdom of the Crowd vs. "the Best of the Best of the Best", published by nikos on April 4, 2023 on The Effective Altruism Forum.

Summary
This post asks whether we can improve forecasts for binary questions merely by selecting a few accomplished forecasters from a larger pool. Using Metaculus data, it compares the Community Prediction (a recency-weighted median of all forecasts) with:
a counterfactual Community Prediction that combines forecasts from only the best 5, 10, ..., 30 forecasters based on past performance (the "Best");
a counterfactual Community Prediction with all other forecasters (the "Rest"); and
the Metaculus Prediction, Metaculus' proprietary aggregation algorithm that weighs forecasters based on past performance and extremises forecasts (i.e. pushes them towards either 0 or 1).

The ensemble of the "Best" almost always performs worse on average than the Community Prediction with all forecasters.
The "Best" outperforms the ensemble of all other forecasters (the "Rest") in some instances.
The "Best" never outperform the "Rest" on average for questions with more than 200 forecasters.
Performance of the "Best" improves as their size increases. They never outperform the "Rest" on average at size 5, sometimes outperform it at size 10-20, and reliably outperform it for size 20+ (but only for questions with fewer than 200 forecasters).
The Metaculus Prediction on average outperforms all other approaches in most instances, but may have less of an advantage against the Community Prediction for questions with more forecasters.

The code is published here.

Conflict of interest note: I am an employee of Metaculus. I think this didn't influence my analysis, but then of course I'd think that, and there may be things I haven't thought about.

Introduction
Let's say you had access to a large number of forecasters and you were interested in getting the best possible forecast for something. Maybe you're running a prediction platform (good job!). Or you're the head of an important organisation that needs to make an important decision. Or you just really really really want to correctly guess the weight of an ox. What are you going to do? Most likely, you would ask everyone for their forecast, throw the individual predictions together, stir a bit, and pull out some combined forecast. The easiest way to do this is to just take the mean or median of all individual forecasts, or, probably better for binary forecasts, the geometric mean of odds. If you stir a bit harder, you could get a weighted, rather than an unweighted, combination of forecasts. That is, when combining predictions you give forecasters different weights based on their past performance. This seems like an obvious idea, but in reality it is really hard to pull off. This is called the forecast combination puzzle: estimating weights from past data is often noisy or biased and therefore a simple unweighted ensemble often performs best. Instead of estimating precise weights, you could just decide to take the X best forecasters based on past performance and use only their forecasts to form a smaller ensemble. (Effectively, this would just give those forecasters a weight of 1 and everyone else a weight of 0.)

Presumably, when choosing your X, there is a trade-off between "having better forecasters" and "having more forecasters" (see this and this analysis on why more forecasters might be good). (Note that what I'm analysing here is not actually a selection of the best available forecasters. The selection process is quite distinct from the one used for, say, Superforecasters, or Metaculus Pro Forecasters, who are identified using a variety of criteria. And see the Discussion section for additional factors not studied here that would likely affect the performance of such a forecasting group.)

Methods
To get some insights, I analys...
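As a rough illustration of the simple, unweighted combination rules mentioned in the introduction above (mean, median, and geometric mean of odds), here is a minimal sketch. It is not the code linked from the post, and the example forecasts are made up:

```python
import numpy as np

def aggregate_binary_forecasts(probs):
    """Combine individual probabilities (each strictly between 0 and 1) three simple ways."""
    probs = np.asarray(probs, dtype=float)
    odds = probs / (1.0 - probs)
    geo_odds = np.exp(np.mean(np.log(odds)))          # geometric mean of odds
    return {
        "mean": float(probs.mean()),
        "median": float(np.median(probs)),
        "geo_mean_odds": geo_odds / (1.0 + geo_odds),  # convert back to a probability
    }

# Made-up forecasts from five forecasters on one binary question.
print(aggregate_binary_forecasts([0.10, 0.30, 0.40, 0.70, 0.90]))
```

Whether a small ensemble of top-scoring forecasters beats simple pools like these is exactly what the post's Metaculus analysis goes on to test.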

Problem Solvers
How to Predict the Future (But for Real)

Problem Solvers

Play Episode Listen Later Sep 5, 2022 47:01


Do you wish you could predict the future? Not in a street-corner psychic kind of way, but in a more personal, meaningful way — to know what's coming, and to know what decisions you should make? We hear from experts (including the head of a group called the Superforecasters!) who explain how to do just that. And for more just like this, pick up Jason Feifer's new book "Build for Tomorrow".

Entrepreneur Network Podcast
How to Predict the Future (But for Real)

Entrepreneur Network Podcast

Play Episode Listen Later Sep 5, 2022 47:07


Do you wish you could predict the future? Not in a street-corner psychic kind of way, but in a more personal, meaningful way — to know what's coming, and to know what decisions you should make? We hear from experts (including the head of a group called the Superforecasters!) who explain how to do just that. And for more just like this, pick up Jason Feifer's new book "Build for Tomorrow".

Pessimists Archive Podcast
Real Ways to Predict the Future

Pessimists Archive Podcast

Play Episode Listen Later Mar 31, 2022 48:55


Do you wish you could predict the future? Not in a street-corner psychic kind of way, but in a more personal, meaningful way. How can you know what's coming, and what decisions you should make? To answer that, we talk to many experts — including the head of a group called the Superforecasters! — who explain how to do just that. The “Build For Tomorrow” book is almost here! Grab your copy at jasonfeifer.com/book

Get in touch!
Newsletter: jasonfeifer.bulletin.com
Website: jasonfeifer.com
Instagram: @heyfeifer
Twitter: @heyfeifer

Sponsors: Indeed.com/ARCHIVE NetSuite.com/BFT Redhat.com/command-line-heroes

Learn more about your ad choices. Visit megaphone.fm/adchoices

The Nonlinear Library: EA Forum Top Posts
Use resilience, instead of imprecision, to communicate uncertainty by Gregory_Lewis

The Nonlinear Library: EA Forum Top Posts

Play Episode Listen Later Dec 11, 2021 12:02


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Use resilience, instead of imprecision, to communicate uncertainty, published by Gregory_Lewis on the AI Alignment Forum.

BLUF: Suppose you want to estimate some important X (e.g. risk of great power conflict this century, total compute in 2050). If your best guess for X is 0.37, but you're very uncertain, you still shouldn't replace it with an imprecise approximation (e.g. "roughly 0.4", "fairly unlikely"), as this removes information. It is better to offer your precise estimate, alongside some estimate of its resilience, either subjectively ("0.37, but if I thought about it for an hour I'd expect to go up or down by a factor of 2"), or objectively ("0.37, but I think the standard error for my guess is ~0.1").

'False precision'
Imprecision often has a laudable motivation - to avoid misleading your audience into relying on your figures more than they should. If 1 in 7 of my patients recover with a new treatment, I shouldn't just report this proportion, without elaboration, to 5 significant figures (14.285%). I think a similar rationale is often applied to subjective estimates (forecasting most salient in my mind). If I say something like "I think there's a 12% chance of the UN declaring a famine in South Sudan this year", this could imply my guess is accurate to the nearest percent. If I made this guess off the top of my head, I do not want to suggest such a strong warranty - and others might accuse me of immodest overconfidence ("Sure, Nostradamus - 12% exactly"). Rounding off to a number ("10%"), or just a verbal statement ("pretty unlikely") seems both more reasonable and defensible, as this makes it clearer I'm guessing.

In praise of uncertain precision
One downside of this is that natural language has a limited repertoire to communicate degrees of uncertainty. Sometimes 'round numbers' are not meant as approximations: I might mean "10%" to be exactly 10% rather than roughly 10%. Verbal riders (e.g. roughly X, around X, X or so, etc.) are ambiguous: does roughly 1000 mean one is uncertain about the last three digits, or the first, or how many digits in total? Qualitative statements are similar: people vary widely in their interpretation of words like 'unlikely', 'almost certain', and so on. The greatest downside, though, is precision: you lose half the information if you round percents to per-tenths. If, as is often the case in EA-land, one is constructing some estimate 'multiplying through' various subjective judgements, there could also be significant 'error carried forward' (cf. premature rounding). If I'm assessing the value of famine prevention efforts in South Sudan, rounding status quo risk to 10% versus 12% infects downstream work with a 1/6th directional error. There are two natural replies one can make. Both are mistaken.

High precision is exactly worthless
First, one can deny the more precise estimate is any more accurate than the less precise one. Although maybe superforecasters could expect 'rounding to the nearest 10%' would harm their accuracy, others thinking the same are just kidding themselves, so nothing is lost. One may also have in mind some of Tetlock's remarks about how 'rounding off' mediocre forecasters doesn't harm their scores, as opposed to the best. I don't think this is right.
Combining the two relevant papers (1, 2), you see that everyone, even mediocre forecasters, has significantly worse Brier scores if you round them into seven bins. Non-superforecasters do not see a significant loss if rounded to the nearest 0.1. Superforecasters do see a significant loss at 0.1, but not if you rounded more tightly to 0.05. Type 2 error (i.e. rounding in fact leads to worse accuracy, but we do not detect it statistically), rather than the returns to precision falling to zero, seems a much better explanation. In principle: If a measure ...
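To make the rounding discussion concrete, here is a small, hypothetical simulation (not taken from the post or the cited papers) of how binning a well-calibrated forecaster's probabilities inflates the Brier score; the bins loosely mirror the "seven bins" and "nearest 0.1" comparisons above:

```python
import numpy as np

rng = np.random.default_rng(0)
true_p = rng.uniform(0.0, 1.0, 200_000)        # a calibrated forecaster's probabilities
outcomes = rng.random(true_p.size) < true_p    # events resolve at those probabilities

def brier(forecast, outcome):
    return float(np.mean((forecast - outcome) ** 2))

print("exact forecasts: ", brier(true_p, outcomes))
print("nearest 0.1:     ", brier(np.round(true_p, 1), outcomes))
print("seven equal bins:", brier(np.round(true_p * 6) / 6, outcomes))
# The loss from rounding is real but small, which is why it can be hard to
# detect statistically (the type 2 error point made above).
```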

The Nonlinear Library: EA Forum Top Posts
Some learnings I had from forecasting in 2020 by Linch

The Nonlinear Library: EA Forum Top Posts

Play Episode Listen Later Dec 10, 2021 4:45


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some learnings I had from forecasting in 2020, published by Linch on the AI Alignment Forum. Crossposted from my own short-form.

Here are some things I've learned from spending a decent fraction of the last 6 months either forecasting or thinking about forecasting, with an eye towards beliefs that I expect to be fairly generalizable to other endeavors. Before reading this post, I recommend brushing up on Tetlock's work on (super)forecasting, particularly Tetlock's 10 commandments for aspiring superforecasters.

1. Forming (good) outside views is often hard but not impossible. I think there is a common belief/framing in EA and rationalist circles that coming up with outside views is easy, and the real difficulty is a) originality in inside views, and also b) a debate of how much to trust outside views vs inside views. I think this is directionally true (original thought is harder than synthesizing existing views) but it hides a lot of the details. It's often quite difficult to come up with and balance good outside views that are applicable to a situation. See Manheim and Muelhauser for some discussions of this.

2. For novel out-of-distribution situations, "normal" people often trust centralized data/ontologies more than is warranted. See here for a discussion. I believe something similar is true for trust of domain experts, though this is more debatable.

3. The EA community overrates the predictive validity and epistemic superiority of forecasters/forecasting. (Note that I think this is an improvement over the status quo in the broader society, where by default approximately nobody trusts generalist forecasters at all.) I've had several conversations where EAs will ask me to make a prediction, I'll think about it a bit and say something like "I dunno, 10%?" and people will treat it like a fully informed prediction to make decisions about, rather than just another source of information among many. I think this is clearly wrong. I think in almost any situation where you are a reasonable person and you spent 10x (sometimes 100x or more!) time thinking about a question than I have, you should just trust your own judgments much more than mine on the question. To a first approximation, good forecasters have three things: 1) They're fairly smart. 2) They're willing to actually do the homework. 3) They have an intuitive sense of probability. This is not nothing, but it's also pretty far from everything you want in an epistemic source.

4. The EA community overrates Superforecasters and Superforecasting techniques. I think the types of questions and responses Good Judgment is interested in reflect a particular way of looking at the world. I don't think it is always applicable (easy EA-relevant example: your Brier score is basically the same if you give 0% for 1% probabilities, and vice versa), and it's bad epistemics to collapse all of the "figure out the future in a quantifiable manner" to a single paradigm. Likewise, I don't think there's a clear dividing line between good forecasters and GJP-certified Superforecasters, so many of the issues I mentioned in #3 are just as applicable here. I'm not sure how to collapse all the things I've learned on this topic in a few short paragraphs, but the tl;dr is that I trusted superforecasters much more than I trusted other EAs before I started forecasting stuff, and now I consider their opinions and forecasts "just" an important overall component to my thinking, rather than a clear epistemic superior to defer to.

5. Good intuitions are really important. I think there's a Straw Vulcan approach to rationality where people think "good" rationality is about suppressing your System 1 in favor of clear th...
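As a toy check of the Brier-score example in point 4 (a 0% forecast and a 1% forecast score almost identically on rare events), here is a short illustration; the 1% event rate and the constant forecasts are assumptions chosen for the example, not anything from the post:

```python
import numpy as np

rng = np.random.default_rng(1)
events = rng.random(1_000_000) < 0.01   # rare events with a true rate of 1%

for p in (0.00, 0.01):
    score = np.mean((p - events) ** 2)
    print(f"constant forecast {p:.2f}: Brier score = {score:.5f}")

# Both scores come out near 0.01, even though forecasting 0% on a possible
# event is maximally overconfident under a log scoring rule.
```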

The Ezra Klein Show
Predicting the Future Is Possible. ‘Superforecasters' Know How.

The Ezra Klein Show

Play Episode Listen Later Dec 3, 2021 52:51


Can we predict the future more accurately? It's a question we humans have grappled with since the dawn of civilization — one that has massive implications for how we run our organizations, how we make policy decisions, and how we live our everyday lives. It's also the question that Philip Tetlock, a psychologist at the University of Pennsylvania and a co-author of "Superforecasting: The Art and Science of Prediction," has dedicated his career to answering. In 2011, he recruited and trained a team of ordinary citizens to compete in a forecasting tournament sponsored by the U.S. intelligence community. Participants were asked to place numerical probabilities from 0 to 100 percent on questions like "Will North Korea launch a new multistage missile in the next year?" and "Is Greece going to leave the eurozone in the next six months?" Tetlock's group of amateur forecasters would go head-to-head against teams of academics as well as career intelligence analysts, including those from the C.I.A., who had access to classified information that Tetlock's team didn't have. The results were shocking, even to Tetlock. His team won the competition by such a large margin that the government agency funding the competition decided to kick everyone else out and just study Tetlock's forecasters — the best of whom were dubbed "superforecasters" — to see what intelligence experts might learn from them. So this conversation is about why some people, like Tetlock's "superforecasters," are so much better at predicting the future than everyone else — and about the intellectual virtues, habits of mind, and ways of thinking that the rest of us can learn to become better forecasters ourselves. It also explores Tetlock's famous finding that the average expert is roughly as accurate as "a dart-throwing chimpanzee" at predicting future events, the inverse correlation between a person's fame and their ability to make accurate predictions, how superforecasters approach real-life questions like whether robots will replace white-collar workers, why government bureaucracies are often resistant to adopting the tools of superforecasting, and more.

Mentioned:
Expert Political Judgment by Philip E. Tetlock
"What do forecasting rationales reveal about thinking patterns of top geopolitical forecasters?" by Christopher W. Karvetski et al.

Book recommendations:
Thinking, Fast and Slow by Daniel Kahneman
Enlightenment Now by Steven Pinker
Perception and Misperception in International Politics by Robert Jervis

This episode is guest-hosted by Julia Galef, a co-founder of the Center for Applied Rationality, host of the "Rationally Speaking" podcast, and author of "The Scout Mindset: Why Some People See Things Clearly and Others Don't." You can follow her on Twitter @JuliaGalef. (Learn more about the other guest hosts during Ezra's parental leave here.)

Thoughts? Guest suggestions? Email us at ezrakleinshow@nytimes.com. You can find transcripts (posted midday) and more episodes of "The Ezra Klein Show" at nytimes.com/ezra-klein-podcast, and you can find Ezra on Twitter @ezraklein. Book recommendations from all our guests are listed at https://www.nytimes.com/article/ezra-klein-show-book-recs.

"The Ezra Klein Show" is produced by Annie Galvin, Jeff Geld and Rogé Karma; fact-checking by Michelle Harris; original music by Isaac Jones; mixing by Jeff Geld; audience strategy by Shannon Busta. Special thanks to Kristin Lin and Alison Bruzek.

Entrepreneur's Handbook
#5. How To Make Million Dollar Decisions Like A Superforecaster w/ Regina Joseph | Sibylink + Pytho Founder

Entrepreneur's Handbook

Play Episode Listen Later Dec 2, 2021 45:21


Inspirational stories plus practical takeaways from the entrepreneurship world. Today's guest is Regina Joseph. She was a pioneer of digital publishing with the first-ever digital magazine, Blender, at a time when no one believed anyone would ever read online. She also created prototypes for the early forms of digital ads. She then went on to start one of the first-ever digital marketing agencies, Engine.RDA. Throughout her career she's been one step ahead of everyone else, forecasting where the market would go with uncanny accuracy. IARPA has declared her one of the world's few Superforecasters after an intense selection process. She continues to innovate today and also coaches others on how to make better strategic forecasts through her companies Pytho and Sibylink. We hope you enjoy the episode, and don't forget to share it with others. You can learn more at http://www.entrepreneurshandbook.co. Find out more about Regina at http://www.superrj.com

Global Guessing Podcasts
Tom Liptay and Michael Story on Founding Maby & Forecasting Adoption (GGWP 11)

Global Guessing Podcasts

Play Episode Listen Later May 1, 2021 62:16


How do companies use forecasting? When will forecasting, as we practice it at Global Guessing, become more common and mainstream? What tools can I use to forecast better? If you're curious about the answers to any of these questions, then this episode is for you! In this week's episode of the Global Guessing Weekly Podcast we chatted with Tom Liptay and Michael Story, co-founders of Maby, Superforecasters, and alumni of Good Judgment. Maby is a forecasting platform created to help companies and investors forecast more accurately. Companies often have internal ways of forecasting revenues or market trends, but few use the systematic and scientific techniques and methodologies of quantified forecasting that we like to explore at Global Guessing. Hear about Maby and Tom and Michael's journey to forecasting in this exciting episode. Tom Liptay: https://twitter.com/TLiptay Michael Story: https://twitter.com/MWStory Website: https://globalguessing.com/ Twitter: https://twitter.com/GlobalGuessing LinkedIn: https://www.linkedin.com/company/global-guessing/ Note: We originally planned for this episode to come out next week. However, due to a scheduling issue we had to push this episode forward. This is why you'll hear us reference the episode as episode 12 rather than 11.

TRENDIFIER with Julian Dorey
#37 - Nick Gurol - NBA Top Shot, The NFT Market, & Dapper Labs

TRENDIFIER with Julian Dorey

Play Episode Listen Later Mar 3, 2021 196:37


Nick Gurol is an investor and teacher. Recently, Gurol was an early investor in NBA Top Shot NFTs. Dapper Labs, in partnership with the NBA & NBAPA, launched Top Shot in July 2019––and the market has seen a major influx of participants and capital in early 2021.
***TIMESTAMPS***
6:06 - NBA Top Shot intro; NFTs; Dapper Labs & The Flow Protocol on the Ethereum blockchain; Hardcourt & Fantasy sports
15:22 - How Nick got involved in NBA Top Shot; Jonathan Bales & his original newsletter on NBA Top Shot; How the secondary market works; The NBA Ad campaign for Top Shot; The Daily Fantasy Sports community influx into Top Shot
23:05 - Series 1, Series 2 & Common, Rare, and Legendary Top Shot classes; How the NBA Top Shot Moments work; Dapper Labs Dr. Seuss project; NFTs and video games; The value of NFTs and Top Shot Moments; Scarcity
32:25 - How the Top Shot website and market works; Could the exchange be hacked?; Dapper's fight against bots on the site; Coinbase and Mt. Gox exchange examples
39:40 - Dapper Labs valuation; The theories behind the NBA halting their ads; The timeline of the NBA Top Shot explosion into the mainstream; The pack drop that bombed; Examples of Moments in packs
44:45 - Gurol's "proprietary algorithm" for uncovering outliers in the NBA Top Shot Moments Market; Revisiting Rare, Common, & Legendary levels; Steph Curry 3-Pointer Moment's crazy price
49:26 - Gurol's investment; Revisiting Michael Lewis' Flash Boys book; Why Nick is bullish on Top Shot and the NFT Market; The grind and work behind Top Shot investing; When Gurol knew it was time to sell many of his moments
59:39 - Status of current Top Shot market cycle; Money Laundering protocols Dapper Labs has in place; Fine Art and historical use for money laundering, tax evasion, etc.; NFT Market hard to hide things?
1:04:34 - The timing of Top Shot's rise; Revisiting the GameStop, Reddit, Robinhood saga; Is the Top Shot market at a "top?"; Peak sales on Top Shot; The sports card market, PSA, Starstock, and the higher barriers to entry there
1:16:37 - Jared Dillian, The Seafood Tower Theory, & The Taxi Driver Theory
1:25:46 - The NBA's marketability moving forward; Top Shot restriction in China?; Anthony Fenu, Riley Horvath, Justin Baker, & Soar's potential in the NFT market with volumetric content, holograms, and 3D capabilities; "Friction" and why Top Shot is a better investment than sports cards
1:49:29 - Dapper Labs 5% "rake" on transactions; Flow Protocol and its scalability with dollar integration; Cryptokitties; NFTs explained definitively; Sam Merrill's Top Shot Moments
1:57:21 - Gurol's national public feud with Kyrie Irving over his flat earth belief; The elaboration likelihood model; The psychology behind changing your mind; Anchoring; the 3 Doors Test; The effects low barriers to entry on public platforms have on our discourse; Reporting and journalism in the Trump Presidency
2:23:24 - Brier Score; Running simulations on variables; Superforecasters discussion; The problem with thinking about who the President is before anything else
2:36:55 - Implicit Bias and Racism; Misty Copeland; Equity vs. Equality
2:46:40 - Reverse Racism; The Rooney Rule; Nepotism in the NFL
2:58:36 - Schools during Covid; Tech access in schools due to Covid; Suburban vs. Urban schools
3:07:41 - Reopening schools; The problem with the government response to Covid; Restaurants opening vs. Schools
~ YouTube EPISODES & CLIPS: https://www.youtube.com/channel/UC0A-v_DL-h76F75xik8h03Q
~ Show Notes: https://www.trendifier.com/podcastnotes
TRENDIFIER Website: https://www.trendifier.com
Julian's Instagram: https://www.instagram.com/julianddorey
~ Beat provided by: https://freebeats.io
Music Produced by White Hot

The NFX Podcast
Adam Grant on Anti-Patterns of 10x Thinking with Pete Flint

The NFX Podcast

Play Episode Listen Later Feb 4, 2021 40:26


"Doubt what you know, be curious about what you don't, and update your views based on new data." NFX partner Pete Flint recently got together with organizational psychologist and bestselling author Adam Grant to discuss his new book, Think Again, about the counterintuitive and competitive advantages of rethinking. Adam and Pete share the frameworks for: - Rethinking vs. Contrarian Thinking - Techniques from the World’s Best Superforecasters - Jeff Bezos’s 2×2 Decision Framework - The Top Killers of Co-founder Relationships - Why Confident Humility Is A Founder Superpower - & more Read the NFX Essay - https://www.nfx.com/post/anti-patterns-of-10x-thinking/

Sped up Rationally Speaking
Rationally Speaking #145 - Phil Tetlock on "Superforecasting: The Art and Science of Prediction"

Sped up Rationally Speaking

Play Episode Listen Later Jan 3, 2021 53:08


Most people are terrible at predicting the future. But a small subset of people are significantly less terrible: the Superforecasters. On this episode of Rationally Speaking, Julia talks with professor Phil Tetlock, whose team of volunteer forecasters has racked up landslide wins in forecasting tournaments sponsored by the US government. He and Julia explore what his teams were doing right and what we can learn from them, the problem of meta-uncertainty, and how much we should expect prediction skill in one domain (like politics or economics) to carry over to other domains in real life. Sped up the speakers by ['1.07', '1.0']

Love Your Work
245. The Avocado Challenge: Tell The Future

Love Your Work

Play Episode Listen Later Nov 26, 2020 13:36


It's hard to predict the future, but you can get better at predicting the future. All you need is a few delicious avocados.

Even the "experts" are bad at predicting the future
Wharton professor Philip Tetlock wanted to make the future easier to predict. So he held "forecasting tournaments," in which experts from a variety of fields made millions of predictions about global events. Tetlock found that experts are no better at predicting the future than dart-throwing chimps. In fact, the most high-profile experts – the ones who get invited onto news shows – were the worst at making predictions. But Tetlock found that some people are really great at telling the future. He calls them "Superforecasters," and regardless of their area of expertise, they consistently beat the field with their predictions. Tetlock also found that with a little training, people can improve their forecasting skills. The superforecasters in Tetlock's Good Judgment Project – people from all backgrounds working with publicly available information – make forecasts 30% better than intelligence officers with access to classified information.

Creative work is uncertain. Does it have to be?
As someone working in the "Extremistan" world of creative work, I'm always trying to improve my forecasting skills. If I publish a tweet, how many likes will it get? If I write a book, how many copies will it sell? The chances of getting any of these predictions exactly right are so slim, it doesn't feel worth it to try to predict these things. But that doesn't mean I can't rate my predictions and make those predictions better.

Introducing the Avocado Challenge
If you would like to be better at predicting the future, I have a challenge for you. I call it the Avocado Challenge. Elon Musk recently asked on Twitter "What can't we predict?" I answered "whether or not an avocado is ready to open." 12 likes. People agree with me. https://twitter.com/kadavy/status/1309643017599569920 Here's how the Avocado Challenge works. The next time you're about to open an avocado, make a prediction: How confident are you the avocado is ripe? Choose a percentage of confidence, such as 50% or 20% – or if you're feeling lucky, 100%. To make it simple, you can rate your confidence on a scale of 0 to 10. State your prediction out loud or write it down. Now, open the avocado. Is it ripe? Yes or no?

Scoring your avocado predictions
You now have two variables: your prediction as stated in percentage confidence, and the outcome of avocado ripeness. With these two variables, you can calculate what's called a Brier score. This tells you just how good your forecast was. The Brier score is what Philip Tetlock uses to score his forecasting tournaments.

Two variables: confidence and outcome
It works like this: Translate your percentage confidence into a decimal between 0 and 1. So 50% would be 0.5, 20% would be 0.2, and 100% would just be 1. Now, translate the avocado ripeness outcome into a binary number. If the avocado was not ripe, your outcome value is "0." If the avocado was ripe, your outcome value is "1." (You may wonder: How do I determine whether or not an avocado is ripe? I'll get to that in a minute. Let's pretend for a second it's easy.)

Calculating your Brier score
Once you have those two variables, there are two steps to follow to find out your Brier score: Subtract the outcome value from your confidence value. If I was 50% confident the avocado would be ripe, that confidence value is 0.5.
If the avocado was in fact ripe, I subtract the outcome value of 1 from 0.5 to get -0.5. Square that number, or multiply it by itself. (-0.5)² = 0.25. Our Brier score is 0.25. Is that good or bad? The lower your Brier score, the better your prediction was. If you were 100% confident the avocado would be ripe and it was not, your Brier score would be 1 – the worst score possible. If you were 100% confident the avocado would be ripe and it was ripe, your Brier score would be 0 – the best score possible. So, 0.25 is pretty solid.

Predict your next 30 avocados
This is a fun exercise to try one time, but it doesn't tell you a whole lot about your forecasting skills overall, and it doesn't help you improve your forecasting skills. Where it gets interesting and useful is when you make a habit of the Avocado Challenge. After you've tried the Avocado Challenge a couple of times, make a habit out of it. For 30 consecutive avocados, tally your results. Calculate your Brier score for each, and find the average of your 30 predictions. If you regularly open avocados with a roommate or partner, make a competition out of it. My partner and I predicted the ripeness of, then opened, 36 avocados over the course of several weeks. We recorded our predictions and outcomes on a notepad on the fridge – then tallied our results in a spreadsheet. Our findings: 28% of avocados were ripe. Her Brier score was 0.22 – mine was 0.19. (I win!)

The Avocado Challenge teaches you to define your predictions
Most of us don't make predictions according to our percentage confidence. We say, "I think so-and-so is going to win the election," or "I think it might rain." Philip Tetlock even found this with political pundits – the ones who get lots of airtime on news shows. They'll say things like "there's a distinct possibility." That's not a forecast. If so-and-so wins the election, you can say, "ha! I knew it!" If it didn't rain, you can remind your friend you said you thought it might rain. And what does a "distinct possibility" mean? You can be "right" either way. And when it comes to getting airtime on news shows, the news show doesn't care if the political pundit gets their prediction right. All that matters is that they can be exciting on camera, speak in sound bites, argue a clear point, and hold the viewer's attention a little longer so it can be sold to advertisers during the commercial break. We normally don't make our predictions with a percentage confidence, because we aren't used to it. The Avocado Challenge gets you in the habit of rating the confidence of your predictions.

The Avocado Challenge helps you define reality
The Avocado Challenge also helps you define reality. This is something we're also bad at. If you're on a walk with your friend and you say you think it's going to rain, how much rain equals rain? By what time is it going to rain? You're traveling on foot – is it going to rain where the walk started, or the place you'll be a half hour from now? To rate your predictions and become a better forecaster, you need to make falsifiable claims. It's hard to tell if an avocado is ripe before you open the avocado, but it's also hard to tell if an avocado is ripe after you open the avocado. You'll have to come up with criteria for determining whether or not an avocado should be defined as "ripe." When we did the Avocado Challenge, we defined a "ripe" avocado as a "perfect" avocado: uniform green color, with the meat of the avocado sticking to no more than 5% of the pit.
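With a ripeness criterion pinned down, the scoring recipe above is easy to automate. Here's a minimal sketch (the function name and the sample predictions are my own, not from the episode): confidence as a decimal, outcome as 0 or 1, squared difference, then the average over a run of avocados.

```python
# Minimal sketch of the avocado scoring described above (names and sample
# values are illustrative, not from the episode).
def brier(confidence: float, ripe: bool) -> float:
    """confidence: 0..1 that the avocado is ripe; ripe: the actual outcome."""
    outcome = 1 if ripe else 0
    return (confidence - outcome) ** 2

print(brier(0.5, True))   # 0.25 -- the worked example: 50% confident, avocado was ripe

# Over a run of avocados, average the per-avocado scores (lower is better).
run = [(0.5, True), (0.2, False), (0.8, True)]  # made-up predictions and outcomes
average = sum(brier(c, r) for c, r in run) / len(run)
print(round(average, 2))  # 0.11
```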
The Avocado Challenge can improve your real-life predictions
A few weeks after we did the Avocado Challenge, my partner and I were at her family's finca – a rustic cabin in the Colombian countryside. It was Sunday afternoon, and we were getting ready to head back to Medellín. I was eager to get home and get ready for my week. I asked my partner what time we would leave. She said about 3 p.m. As I mentioned on episode 235, the Colombian sense of time takes some getting used to for me as an American. Even though my partner is a very prompt person, I'm also aware of "the planning fallacy." I know the Sydney Opera House opened ten years late and cost 15 times the projected budget to build. So when I looked at the Mitsubishi Montero parked in the grass, and thought about how long it might take to pack in eight people, three dogs, and a little white rabbit, the chances of us leaving right at 3 p.m. seemed slim. Fortunately, we had done the Avocado Challenge. I asked my partner, in Spanish, "What's your percentage confidence we'll leave within an hour after 3 p.m. – by 4 p.m.?" She shifted into Avocado mode, thought a bit, and said sesenta por ciento. She was 60% sure we'd leave before 4 p.m. That didn't seem super confident, so I asked for another forecast. I asked what her percentage confidence was that we'd leave before 5 p.m. – two hours after the target time. She said cien por ciento. She was 100% sure we'd leave before 5 p.m. Now, instead of choosing between expecting to leave at exactly 3 p.m. and leaving "whenever," I had a range. It was a range I could trust, from someone with experience in similar situations – and training in forecasting. The time we did leave: 3:30 p.m. My partner's Brier score for that first prediction: 0.16. Average Brier score for the two predictions: 0.08. Not bad.

Mind Management, Not Time Management now available!
After nearly a decade of work, Mind Management, Not Time Management is now available! This book will show you how to manage your mental energy to be productive when creativity matters. Buy it now!

My Weekly Newsletter: Love Mondays
Start off each week with a dose of inspiration to help you make it as a creative. Sign up at: kadavy.net/mondays.

Listener Showcase
Abby Stoddard makes the Dunnit app – the "have-done list." It's a minimalist tool designed to motivate action and build healthy habits.

About Your Host, David Kadavy
David Kadavy is author of Mind Management, Not Time Management, The Heart to Start and Design for Hackers. Through the Love Your Work podcast, his Love Mondays newsletter, and self-publishing coaching, David helps you make it as a creative. Follow David on: Twitter, Instagram, Facebook, YouTube. Subscribe to Love Your Work on Apple Podcasts, Overcast, Spotify, Stitcher, YouTube, RSS, or Email. Support the show on Patreon: Put your money where your mind is. Patreon lets you support independent creators like me. Support now on Patreon »

Show notes: http://kadavy.net/blog/posts/avocado-challenge

La Era Del Yeti
La Mañanera del Yeti: 18/06/2020: La siguiente pandemia será digital, apuntes sobre la PS5, Superforecasters y más!

La Era Del Yeti

Play Episode Listen Later Jun 18, 2020 124:04


Today we talk about how the next pandemic will be digital, notes on the PS5, Superforecasters, and more!

TALKING POLITICS
Superforecasting

TALKING POLITICS

Play Episode Listen Later Mar 11, 2020 47:21


We talk to David Spiegelhalter, Professor of the Public Understanding of Risk, about the science of forecasting. Who or what are the superforecasters? How can they help governments make better decisions? And will intelligent machines ever be able to outdo the humans at seeing into the future? From Cummings to coronavirus, a conversation about the knowns, unknowns and what lies beyond that.

Talking Points:
Tetlock discovered that some people make better predictions than others. Some of the qualities that make this possible are deeply human, such as doggedness, determination, and openness to new information, but others are mathematical. Superforecasters are highly numerate: they have a sense of magnitude.
Good superforecasters isolate themselves emotionally from the problem: you have to be cold about it. Think about George Soros shorting the pound.
There's a difference between having more superforecasting and more superforecasters. How do you integrate people like this into existing institutions? These people are often disruptive.
Probabilistic information is finely grained: what does this mean for political decision making? Superforecasters aren't decision makers: they give you the odds. But they are better than the betting markets. Betting markets reflect what people would like to happen rather than what they should think will happen. They aren't cold enough.
Tetlock's book places a huge emphasis on human characteristics. Algorithms can do superforecasting only in repetitive, data-rich, restrictive problems. Tetlockian problems are much more complex. People often make a category error when they think about what AI can do.

Mentioned in this Episode:
Superforecasting, by Philip Tetlock and Dan Gardner
David's book, The Art of Statistics
Radical Uncertainty by Mervyn King and John Kay
The Black Swan by Nassim Nicholas Taleb
Risky Talk, David's podcast

Further Learning:
Philip Tetlock's lunch with the FT
Dominic Cummings's review of Superforecasting
Are you a fox or a hedgehog?
And as ever, recommended reading curated by our friends at the LRB can be found here: lrb.co.uk/talking

See acast.com/privacy for privacy and opt-out information.

Business Daily
The superforecasters

Business Daily

Play Episode Listen Later Mar 9, 2020 18:34


How to predict the future and beat the wisdom of the crowds. Manuela Saragosa speaks to Warren Hatch, chief executive of Good Judgement, a consultancy that specialises in superforecasters - individuals with a knack for predicting future events - and the techniques they use to make their guesses. We also hear from Andreas Katsouris from PredictIt, a political betting platform that harnesses the wisdom of the crowds in making predictions about politics. Producer: Laurence Knight (Photo: a crystal ball, Credit: Getty Images)

Grand Theft Life
#9 - Will 'Rundles' Change The Way You Travel?

Grand Theft Life

Play Episode Listen Later Oct 2, 2019 47:03


As Airbnb gets set for IPO by expanding its narrative beyond accommodation, we discuss the idea of 'Rundles' and the other ways the travel experience has evolved in our lifetime.
[1:45] - Joel explains his interpretation of Tetlock's idea of Triage and how it applies to forecasting. (links to a very brief summary of the ten main ideas in his book, Superforecasting)
[3:00] - Morgan Housel's post - "How This All Happened" (required reading for anyone interested in what happened to the US economy since the end of WW2)
[6:30] - Paul Graham's quote on refragmentation listed below.
[15:00] - Why retail travel stores still exist and Scott Galloway's idea of 'shrines'.
[16:20] - Paul Graham gives an example of how to make your city a travel destination or startup hub.
[19:50] - Airdna - short-term rental data and analytics. (click the link and type in your city to get average daily rate, occupancy, revenue etc.)
[24:15] - The Rise of The Rundle. (watch this three-minute video also listed below)
[29:30] - Should we be regulating Airbnbs in big cities?
[33:00] - How people are creating their own jobs within Airbnb experiences.
[36:00] - Do flight searches on Google, Kayak or Tripadvisor end in more purchases? (hint: it's not even close)
[39:30] - Are Yelp reviews corrupt?
[42:00] - Joel's prediction on who will win in the travel industry – and it's not an app.
It's difficult to imagine now, but every night tens of millions of families would sit down together in front of their TV set watching the same show, at the same time, as their next door neighbors. What happens now with the Super Bowl used to happen every night. We were literally in sync. - Paul Graham
TLDL
1. Rundle - the recurring-revenue bundle - arguably the biggest innovation in travel is the business model.
2. Millennial world traveler starter-kit. (aka Joel's phone)
3. While Airbnb has already established itself as a better option than hotels, they look beyond accommodation ahead of a potential IPO.
Bonus: For any real estate investors out there…
DISCLAIMER
Joel Shackleton works for Gold Investment Management. All opinions expressed by Joel and Broc or any podcast guests are solely their own opinions and do not reflect the opinion of Gold Investment Management. This Podcast and Substack are for informational purposes only and should not be relied upon for investment decisions. Clients of Gold Investment Management may hold positions discussed in this podcast. Get on the email list at reformedmillennials.substack.com

Ihmisiä, siis eläimiä
#3 Kaj Sotala. Tekoäly, kärsimys, altruismi

Ihmisiä, siis eläimiä

Play Episode Listen Later Dec 9, 2017 157:07


Support the making of this podcast on Patreon. Even a small contribution helps! https://www.patreon.com/vistbacka
Video version: https://www.youtube.com/watch?v=N-MTOt2_RyE
RSS: http://feeds.soundcloud.com/users/soundcloud:users:358481639/sounds.rss
The guest of the podcast's third episode is Kaj Sotala, who does research on artificial intelligence, dystopian technological threats, and the reduction of suffering. The episode was recorded on 14 November 2017. Some of the themes discussed in the episode: • AI smarter than humans • Expertise • Child prodigies • Differences in how people learn • Predicting the future states of complex systems • Machines and empathy • Mind crimes and suffering simulations • Consciousness • The threats of AI in reality, in popular culture, and in public debate • The pace of AI development • Cognitive science • Sci-fi • Kehitystö • Effective altruism • Necessary and unnecessary suffering • Emotional resilience • The social silliness of a bleeding nose • Meditation • The value of a broad range of emotions • Failure • The rage caused by objects that resist being found.
Links related to the discussion:
• Kaj's paper "How Feasible Is the Rapid Development of Artificial Superintelligence?": http://kajsotala.fi/assets/2017/10/how_feasible.pdf
• The Freddie Mercury paper: https://www.ncbi.nlm.nih.gov/pubmed/27079680
• The Superforecasting book: https://www.goodreads.com/book/show/23995360-superforecasting
• Kehitystö: https://kehitysto.wordpress.com/
• Effective altruism: https://en.wikipedia.org/wiki/Effective_altruism
• Suffering-focused ethics: https://foundational-research.org/the-case-for-suffering-focused-ethics/
• Stiftung für Effektiven Altruismus: https://ea-stiftung.org/
• GiveWell: https://www.givewell.org/
• 80,000 Hours: https://80000hours.org/
• Urban Philantropy: http://urbanphilbykp.com/causes/lettuce-live/
• Kaj's homepage: http://kajsotala.fi
• Kaj's research publications: http://kajsotala.fi/academic-papers
• Kaj's employer: https://foundational-research.org
• Kaj's Twitter: https://twitter.com/xuenay
• Kaj's Facebook: https://www.facebook.com/Xuenay
• Kaj's blog at Kehitystö: https://kehitysto.wordpress.com/author/xuenay/
-----
The Ihmisiä, siis eläimiä podcast loves perspectives that broaden understanding. Driven by a deep thirst for knowledge, the show's vision is to create slower media that digs down to the heart of things. The podcast's central themes are science and art, the ordinary and the extraordinary, the individual and society, and humans and the rest of nature. The host, Henry Vistbacka – incomplete in his understanding but curious – is a multidisciplinary maker, musician and writer. The podcast's partner is Artlab, an artistic production company headquartered in Vallila, Helsinki, that uses science as its raw material.
• Ihmisiä, siis eläimiä on Facebook: https://facebook.com/ihmisiis
• Ihmisiä, siis eläimiä on Twitter: https://twitter.com/ihmisiis
• Ihmisiä, siis eläimiä on Instagram: https://www.instagram.com/ihmisiis
• Ihmisiä, siis eläimiä on Soundcloud: https://soundcloud.com/ihmisiis
• Ihmisiä, siis eläimiä on Kieku: https://www.kieku.com/channel/Ihmisi%C3%A4%2C%20siis%20el%C3%A4imi%C3%A4
• The studio behind the podcast: https://artlab.fi

Trend Following with Michael Covel
Ep. 425: Philip Tetlock Interview with Michael Covel on Trend Following Radio

Trend Following with Michael Covel

Play Episode Listen Later Feb 18, 2016 44:19


Today on Trend Following Radio Michael Covel interviews Philip Tetlock. Phil is a Canadian-American political science writer currently at The Wharton School of the University of Pennsylvania. He works right at the intersection of psychology, political science and organizational behavior. His book, "Superforecasting: The Art and Science of Prediction," is about probabilistic thinking. Phil is also a co-principal investigator of The Good Judgment Project, a study on the art and science of prediction and forecasting.

Michael starts off asking, "Regular folks can beat the experts at their own game?" Phil says essentially that is correct. He started The Good Judgment Project in 2011. It was based around forecasting and was funded by the government. He was shocked by the number of "regular" people he recruited for his study who were able to compete with, or do a better job of predicting than, professionals working for agencies such as the NSA.

Michael and Phil move on to discussing the Iraq war. They discuss what the actual probability may have been of Saddam Hussein having weapons of mass destruction. George Bush claimed that it was a "slam dunk" when clearly there was not a 100% probability of weapons of mass destruction being there. Michael asks, "When is society going to adopt more of a probability mindset?" Phil says that soft, subjective human judgment is going by the wayside. Pundits saying "someday this will happen" without any real substance will come to a stop. As long as a forecaster can say, "This may happen in the future," they can never really be held accountable for being wrong. Michael brings up the example of Robert Rubin. Robert worked for Goldman Sachs and served under Bill Clinton during his presidency. He was a great probabilistic thinker. Everyone loved him until the 2008 crash. Phil uses him as an example of even the best prediction people getting it wrong.

Bottom line, superforecasters look for aggregated data. They know there is interesting data lying around and they tend to look at crowd indicators heavily. The distinction between superforecasters and regular forecasters is their ability to start with the outside view and move to the inside slowly. Regular forecasters start with the inside view and rarely look at the outside view. Superforecasters also believe in fate less than regular forecasters do. When you highlight all the low-probability events surrounding outcomes, such as the lottery, many choose to think the event was decided by "fate" or was just "meant to be." Superforecasters think, "Well, someone had to win, and they did."

In this episode of Trend Following Radio:
What are superforecasters?
Probabilistic thinking
Looking at aggregate data

All in the Mind
Psychology of a Mars mission, Superforecasters, MPs guide to mental health, Recovery College

All in the Mind

Play Episode Listen Later Dec 15, 2015 28:11


As Tim Peake is launched on his trip to spend six months on the International Space Station, Claudia Hammond talks to Alexander Kumar, the doctor who has been to Antarctica to investigate the psychology of a human mission to Mars. How will the confined spaces, the dark, and the distance from planet Earth affect Mars astronauts of the future? Professor Philip Tetlock explains why his newly discovered elite group of so-called Superforecasters are so good at predicting global events. Claudia talks to MP James Morris about why some of his constituents are coming to him and his staff for help in a mental health crisis. He talks about the advice available for other MPs and constituency staff in the same situation. Claudia visits the South London and Maudsley Recovery College to find out how their educational courses are helping people in south London with their mental health.

Rationally Speaking
Rationally Speaking #145 - Phil Tetlock on "Superforecasting: The Art and Science of Prediction"

Rationally Speaking

Play Episode Listen Later Oct 18, 2015 55:45


Most people are terrible at predicting the future. But a small subset of people are significantly less terrible: the Superforecasters. On this episode of Rationally Speaking, Julia talks with professor Phil Tetlock, whose team of volunteer forecasters has racked up landslide wins in forecasting tournaments sponsored by the US government. He and Julia explore what his teams were doing right and what we can learn from them, the problem of meta-uncertainty, and how much we should expect prediction skill in one domain (like politics or economics) to carry over to other domains in real life.

James vs Ignorance
Episode 1 - Michael Story On Superforecasters

James vs Ignorance

Play Episode Listen Later Oct 8, 2015 34:22


Can experts predict the future? I speak to a "Superforecaster" about new research showing how ordinary people can make accurate forecasts about world events.

Power, Politics, and Preventive Action
Superforecasters, Software, and Spies: A Conversation With Jason Matheny

Power, Politics, and Preventive Action

Play Episode Listen Later


This week I sat down with Dr. Jason Matheny, director of the Intelligence Advanced Research Projects Activity (IARPA). IARPA invests in high-risk, high-payoff research programs to address national intelligence problems, from language-recognition software to forecasting tournaments that evaluate strategies to "predict" the future. Dr. Matheny sheds light on how IARPA selects cutting-edge research projects and how its work helps ensure intelligence guides sound decision- and policymaking. He also offers his advice to young scientists just starting their careers. Listen to a fascinating conversation with the leader of one of the coolest research organizations in the U.S. government, and follow IARPA on Twitter @IARPANews.