Podcast appearances and mentions of Phil Tetlock

  • 18 podcasts
  • 25 episodes
  • 43m average episode duration
  • Infrequent episodes
  • Latest episode: Apr 7, 2025

POPULARITY (chart spanning 2017–2024)


Best podcasts about Phil Tetlock

Latest podcast episodes about Phil Tetlock

The 92 Report
126. Robert de Neufville, Writer and Superforecaster

Play Episode Listen Later Apr 7, 2025 40:02


Show Notes: Robert de Neufville dropped out of grad school after more than a decade without finishing his PhD, around the time of the financial crisis. He realized that after a decade in academia he was less employable than when he graduated from Harvard. He had done a lot of teaching at Berkeley and San Francisco State, but found himself struggling to find a job. He eventually moved to Hawaii to work on freelance editing projects; he moved there because a friend wanted to rent out his house.

Working as a Forecaster and Political Writer
Currently, Robert is working as a forecaster and political writer. He has a Substack newsletter called Telling the Future, which has about 1,500 subscribers. While he is not particularly happy writing about politics right now, he believes it's necessary for his career and personal growth.

Therapy and Political Theory
Robert discusses his first period after college and therapy. He mentions the stigma surrounding therapy and the importance of normalizing it. However, he eventually reached a breaking point. He didn't know what he wanted to do after college. He drove to New York and worked at several different places, including consulting at Booz Allen, which he ultimately found lacked meaning, so he decided to pursue a more intellectual career. He knew that he liked thinking and writing about things, so he applied to grad school for political science, where he studied political theory and moral issues related to community living. However, he found the academic culture at Berkeley to be toxic and, combined with an unhealthy lifestyle, decided it was not for him. Robert touches on his difficult childhood, which was marked by narcissistic parents and an abusive mother. He eventually sought therapy and found that he felt better, but he struggled to complete his dissertation. He dropped out of grad school, despite his professors' concerns, and was diagnosed with chronic PTSD.

Finding Solace in Teaching
Robert found solace in teaching, but disliked having to grade students. Some students had unhealthy relationships with grades, and at times he felt he had to refer them to suicide watch. He realized that teaching was great because explaining a topic to others was the only way he could truly understand it, but he also realized he didn't want to do academic work. In addition, he found that there was a backlog of people who wanted to become political theory professors, spending their time teaching as adjuncts and spending money on conferences in pursuit of job opportunities. Robert believes that his experience in grad school was nonetheless intellectually rewarding and that his training in political theory shaped who he is.

Writing for Love and Money
Robert talks about his experience writing for mainstream publications like The Economist, National Interest, California magazine, The Big Think, and The Washington Monthly. He shares his struggles with freelance writing, which he finds slow and fussy, and frustratingly underpaid for the time it takes. He also discusses his writing about forecasting and becoming a skilled judgmental forecaster. He makes money by producing forecasts for various organizations, which is a relatively new field. He encourages readers to support writers they love and consider paying for their work, as it is hard and not very rewarding.

Forecasting Methods and Examples
The conversation turns to Robert's writing and forecasting.
He explains his approach to forecasting and how he uses history to guide his predictions. His method of estimating the probability of a future event involves looking back at similar situations, such as past elections, and establishing a base rate, which then anchors the estimate for the specific case (a minimal sketch of this base-rate approach follows these notes). Robert also notes that some situations, such as the arrival of AGI or other new technologies, require more analytical thinking. He talks about Phil Tetlock's forecasting tournament, sponsored by a U.S. government research agency related to the one that helped invent the internet, which aimed to determine whether anyone could reliably forecast geopolitical questions. The research showed that people were generally terrible at it, even analysts and pundits. However, a certain percentage of people, using methodical extrapolations, consistently outperformed intelligence analysts. Robert participated in the tournament and qualified as a superforecaster in his first year. He works with Metaculus and the Good Judgment Project, which produces probabilistic forecasts for decision-makers. The forecasting community is now working on making forecasts more useful, for example by understanding the reasoning behind people's forecasts rather than just the numbers they produce.

Influential Harvard Courses and Professors
Robert stresses that he found his interaction with fellow students to be most enriching. He appreciated Stanley Hoffmann's class on Ethics and International Relations, which was taught through a humanist lens and emphasized the importance of morality, and he enjoyed the accompanying list of movies and academic articles, which eventually informed his own teaching. He also mentions Adrienne Kennedy's playwriting class, which he found exciting and engaging. He enjoys table reads and hearing people's plays fresh off the presses, and believes that these experiences have shaped his forecasting skills.

Timestamps:
03:16: Robert's Move to Hawaii and Career Challenges
06:16: Current Endeavors and Writing Career
07:58: Therapy and Early Career Struggles
10:14: Grad School Experience and Academic Challenges
22:41: Teaching and Forecasting Career
26:21: Forecasting Techniques and Projects
41:27: Impact of Harvard and Influential Professors

Links:
Substack newsletter: https://tellingthefuture.substack.com/
LinkedIn: https://www.linkedin.com/in/robertdeneufville/

Featured Non-profit: The featured non-profit of this episode of The 92 Report is recommended by Patrick Jackson, who reports: “Hi, I'm Patrick Ian Jackson, class of 1992. The featured nonprofit of this episode of The 92 Report is His Hands Free Clinic, located in Cedar Rapids, Iowa. Since 1992, His Hands Free Clinic has been seeking to honor God by helping the uninsured and underinsured in our community. The clinic is a 501(c)(3) nonprofit ministry providing free health care to Cedar Rapids and the surrounding communities. I love the work of this organization. The church that I pastor, First Baptist Church, Church of the Brethren, has been a regular contributor to the clinic for the past couple of years. You can learn more about their work at HisHandsClinic.org, and now here is Will Bachman with this week's episode.” To learn more about their work, visit: www.HisHandsClinic.org.
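The base-rate ("outside view") method described in the notes above can be made concrete with a toy calculation. This is only an illustrative sketch with invented numbers and a simple blend of outside and inside views; it is not Robert's actual workflow:

```python
# Minimal base-rate forecasting sketch (illustrative numbers, not a real forecast).
# Outside view: how often did similar past situations resolve "yes"?
# Then nudge that base rate with case-specific ("inside view") judgment.

past_similar_cases = [True, False, True, True, False, False, True, False]  # hypothetical reference class

base_rate = sum(past_similar_cases) / len(past_similar_cases)  # outside view, here 0.5

inside_view = 0.65   # hypothetical judgment from case-specific details
weight_inside = 0.3  # how much to trust the inside view over the reference class

forecast = (1 - weight_inside) * base_rate + weight_inside * inside_view

print(f"Base rate: {base_rate:.2f}, blended forecast: {forecast:.2f}")
```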

The Nonlinear Library
EA - Conditional Trees: Generating Informative Forecasting Questions (FRI) -- AI Risk Case Study by Forecasting Research Institute

Play Episode Listen Later Aug 13, 2024 19:06


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Conditional Trees: Generating Informative Forecasting Questions (FRI) -- AI Risk Case Study, published by Forecasting Research Institute on August 13, 2024 on The Effective Altruism Forum. Authors of linked report: Tegan McCaslin, Josh Rosenberg, Ezra Karger, Avital Morris, Molly Hickman, Otto Kuusela, Sam Glover, Zach Jacobs, Phil Tetlock[1]

Today, the Forecasting Research Institute (FRI) released "Conditional Trees: A Method for Generating Informative Questions about Complex Topics," which discusses the results of a case study in using conditional trees to generate informative questions about AI risk. In this post, we provide a brief overview of the methods, findings, and directions for further research. For much more analysis and discussion, see the full report: https://forecastingresearch.org/s/AIConditionalTrees.pdf

Abstract
We test a new process for generating high-value forecasting questions: asking experts to produce "conditional trees," simplified Bayesian networks of quantifiably informative forecasting questions. We test this technique in the context of the current debate about risks from AI. We conduct structured interviews with 21 AI domain experts and 3 highly skilled generalist forecasters ("superforecasters") to generate 75 forecasting questions that would cause participants to significantly update their views about AI risk. We elicit the "Value of Information" (VOI) each question provides for a far-future outcome - whether AI will cause human extinction by 2100 - by collecting conditional forecasts from superforecasters (n=8).[2] In a comparison with the highest-engagement AI questions on two forecasting platforms, the average conditional-trees-generated question resolving in 2030 was nine times more informative than the comparison AI-related platform questions (p = .025). This report provides initial evidence that structured interviews of experts focused on generating informative cruxes can produce higher-VOI questions than status quo methods.

Executive Summary
From May 2022 to October 2023, the Forecasting Research Institute (FRI) (a)[3] experimented with a new method of question generation ("conditional trees"). While the questions elicited in this case study focus on potential risks from advanced AI, the processes we present can be used to generate valuable questions across fields where forecasting can help decision-makers navigate complex, long-term uncertainties.

Methods
Researchers interviewed 24 participants, including 21 AI and existential risk experts and three highly skilled generalist forecasters ("superforecasters"). We first asked participants to provide their personal forecast of the probability of AI-related extinction by 2100 (the "ultimate question" for this exercise).[4] We then asked participants to identify plausible[5] indicator events that would significantly shift their estimates of the probability of the ultimate question. Following the interviews, we converted these indicators into 75 objectively resolvable forecasting questions. We asked superforecasters (n=8) to provide forecasts on each of these 75 questions (the "AICT" questions), and forecasts on how their beliefs about AI risk would update if each of these questions resolved positively or negatively.
We quantitatively ranked the resulting indicators by Value of Information (VOI), a measure of how much each indicator caused superforecasters to update their beliefs about long-run AI risk. To evaluate the informativeness of the conditional trees method relative to widely discussed indicators, we assess a subset of these questions using a standardized version of VOI, comparing them to popular AI questions on existing forecasting platforms (the "status quo" questions). The status quo questions were selected from two popular forecasting platforms by identifying the highest-...
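The VOI ranking described above can be illustrated with a toy calculation. This is only a sketch of the general "expected belief movement" idea using invented numbers; it is not the formula or data from the FRI report:

```python
# Sketch of a Value-of-Information-style ranking for indicator questions.
# Not FRI's published formula; just the general idea of weighting how far each
# possible resolution would move the ultimate forecast by how likely it is.

p_ultimate = 0.05  # prior forecast of the ultimate question (hypothetical)

indicators = {
    # name: (P(indicator resolves yes), P(ultimate | yes), P(ultimate | no))
    "hypothetical indicator A (resolves 2030)": (0.40, 0.12, 0.03),
    "hypothetical indicator B (resolves 2030)": (0.70, 0.06, 0.04),
}

def expected_update(p_yes, p_ult_yes, p_ult_no, prior):
    """Expected absolute shift in the ultimate forecast once the indicator resolves."""
    return p_yes * abs(p_ult_yes - prior) + (1 - p_yes) * abs(p_ult_no - prior)

ranked = sorted(
    ((expected_update(*params, p_ultimate), name) for name, params in indicators.items()),
    reverse=True,
)
for score, name in ranked:
    print(f"{name}: expected update {score:.3f}")
```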

Zukunft Denken – Podcast
089 — The Myth of Left and Right, a Conversation with Prof. Hyrum Lewis

Play Episode Listen Later Jan 20, 2024 57:24


Do political "left" and "right" positions change regularly? For many years now, I have been getting more and more uneasy when pundits and journalists use the “left/right” dichotomy. In my lifetime, I have observed numerous political topics that were once at the core of “left” politics that suddenly are named “right” and vice versa. I then came across a book with exactly that title, “The Myth of Left and Right”, and it is a terrific read. So I was very excited that one of the authors, Hyrum Lewis, agreed to a conversation. Hyrum Lewis is a professor of history at BYU-Idaho and was previously a visiting scholar at Stanford University. He received a PhD from the University of Southern California and has written for the Wall Street Journal, Quillette, RealClearPolitics, The Washington Examiner, and other national publications. His most recent book, The Myth of Left and Right (co-authored with Verlan Lewis), was published by Oxford University Press in 2023. Moreover, this episode fits very nicely with the previous episode with Prof. Möllers on liberalism, so if you are a German speaker, please check that one out as well. Political realities do not map to a single variable or descriptor—there is no such thing as a political monism. Are “left” and “right” just post-hoc narratives where we try to construct ideologies that are not actually there? We observe a regular flip-flopping in history; what are prominent examples? “There is no left and right; there are just two tribes, and what these tribes believe and stand for will change quite radically over time since there is no philosophical core uniting the tribe.” I, personally, have a profound problem with the term “progressive”, but more generally, what do these terms even mean: progressivism, conservatism, reactionary, liberal?  “It is a loaded and self-serving term […] what is considered progressive changes from day to day.” “If you don't agree with every policy we believe in […] then you are obviously on the wrong side of history. You are standing against progress.” So, are left and right not a philosophy but rather a tribe?  Is the definition of conservatism maybe easier? There is a nice brief definition: "Conservatism is democracy of the deceased.” Roger Scruton makes the astute observation that there are so many more ways to screw up and so few ways to do it right. But does this help in practice?  “Every person on that planet wants to conserve things that are good and change things that are bad. We are all progressive, and we are all conservative. We just don't agree on what is good and what is bad.” What are examples where positions are unclear or change over time? “In 1903, President Theodore Roosevelt visited Yosemite and was guided by naturalist John Muir. The two men spent three memorable nights camping, first under the outstretched arms of the Grizzly Giant in the Mariposa Grove of Giant Sequoias, then in a snowstorm atop five feet of snow near Sentinel Dome, and finally in a meadow near the base of Bridalveil Fall. Their conversations and shared joy with the beauty and magnificence of Yosemite led Roosevelt to expand federal protection of Yosemite, and it inspired him to sign into existence five national parks, 18 national monuments, 55 national bird sanctuaries and wildlife refuges, and 150 national forests.”, Roosevelt, Muir, and the Grace of Place (NPR) Teddy Roosevelt was a Republican. And here again, a “hiccup”: even though he was a Republican, he called himself a progressive.
In reality, though, if you see someone on the street in a mask, you can predict with high certainty that person's other political positions. How come? Is there an underlying disposition after all, or not? Or is it much more a phenomenon of tribal or social conformity? Is the left-right model at least useful? What can we learn from past US presidents such as Donald Trump, Bill Clinton, and George W. Bush in that regard? Is the political discourse at least more reasonable at universities and among “elites”? Or maybe even more troubled and more conforming to their very tribe? If “normal” people are in general “moderate” on important topics (like abortion), why do major political parties play for the few on the extreme ends of the opinion spectrum? More generally, some educated people describe themselves as “moderate” or “centrist.” Does this even mean anything, and would it be desirable? What about “realism” vs. “utopianism”? “Both status quo conservatives and progressive technocrats share a common element: the hostility to open-ended change, guided not by planners but by millions of experiments and trial and error. For both, the goal is stasis, it's just that one group finds it in the past, the other one in the future.”, Virginia Postrel  A lot of these errors rest on the more elementary mistake of believing that we can know, predict, or foresee the future, especially when we take action. What can we learn from Phil Tetlock's and Dan Gardner's forecasting studies? “To be a true progressive, you cannot be a progressive” “Our media does not reward granular, careful, and probabilistic analysis.” So, is it not more meaningful to distinguish between authoritarian and non-authoritarian politicians or political methods? Can we be optimistic about the future, given that non-tribal podcasters like Joe Rogan or Coleman Hughes have audiences larger than those of most legacy media outlets combined? Is democracy over time the best way to deal with complex situations and challenges? Is there a value in slowness, and are we not just too impatient?

References
Other Episodes
Episode 88: Liberalismus und Freiheitsgrade, ein Gespräch mit Prof. Christoph Möllers
Episode 84: (Epistemische) Krisen? Ein Gespräch mit Jan David Zimmermann
Episode 80: Wissen, Expertise und Prognose, eine Reflexion
Episode 57: Konservativ UND Progressiv
Hyrum Lewis
Hyrum Lewis at BYU-Idaho
Hyrum Lewis, Verlan Lewis, The Myth of Left and Right, Oxford University Press (2022)
Hyrum Lewis, It's Time to Retire the Political Spectrum, Quillette (2017)
Hyrum Lewis' blog
Other References
Roger Scruton, How to be a conservative, Bloomsbury Continuum (2019)
Johan Norberg, Open: The Story of Human Progress, Atlantic Books (2021)
Karl Popper, The Poverty of Historicism, Routledge Classics
Phil Tetlock, Dan Gardner, Superforecasting, Cornerstone Digital (2015)
Tim Urban, What's Our Problem?: A Self-Help Book for Societies (2023)
Nicholas Carr, The Shallows, Atlantic Books (2020)
Roosevelt, Muir, and the Grace of Place
Joe Rogan Podcast
Coleman Hughes Podcast

Zukunft Denken – Podcast
080 — Wissen, Expertise und Prognose, eine Reflexion

Play Episode Listen Later Sep 26, 2023 32:40


For a few weeks I have been thinking about the concepts of »expertise« and »knowledge«. How do they relate to each other and to forecasting? The usual, everyday use of the terms »expertise« and »expert« is irritating in this respect. Media regularly present experts, or politicians point to experts, who explain the world and lay out forecasts of complex systems with great self-confidence. But when experts or institutions regularly make serious errors of judgment, this has negative effects for everyone, for our trust in politics and in essential institutions. What is an expert's statement worth? What should we make of forecasts? As an introduction, several examples of serious expert misjudgments over the decades are discussed, and, worse still, misjudgments from which apparently nothing was learned systemically:
• It is impossible to feed the growing world population (Paul Ehrlich, 1960s)
• Population explosion or implosion (1960s to today)?
• Peak oil (1980s to 2000s)
• Ice age or boiling planet?
• The failure of the German Energiewende
• Forecasts about the war in Ukraine
• All kinds of economic forecasts
»Predictions of euro-dollar exchange rates are worthless. Every December, international banks predict the exchange rate for the end of the following year. Most of the time the actual rate falls outside the entire forecast range.«, Gerd Gigerenzer. Are these all just anecdotes, or has the quality of expert forecasts been studied systematically? »On many of the political and economic questions I posed, the average expert did hardly better than if he had guessed […]« »the more famous an expert was, the worse the performance«, Phil Tetlock. In other words: the more often you see an »expert« on television, the worse his forecasts are likely to be. What, then, is expertise really? What is knowledge? Let us attempt a definition: »Expertise is the ability to consistently predict or bring about changes in the world.« Antiquity already had different terms for different forms of knowledge: episteme (Greek) or scientia (Latin) for knowledge for its own sake, and techne (Greek) or ars (Latin) for applied knowledge; perhaps comparable to what I call expertise. Is there competence, expertise, without doing? In which domains can we speak of expertise in this sense, and in which is no such expertise possible? Is there expertise at universities? Added to this is the question of trust, as emphasized in the last episode: »Trust is a social process, and expertise is socially determined. [...] 'You have to follow the science' means 'you have to agree with my value judgments.' [...] A decision can never be scientifically grounded. […]« »Ultimately, the definition of an expert is a person whose judgments one is willing to accept as one's own.« What do we do with these findings? What should be done? A summary in four points: (1) Have we arrived in the age of post-expertise?
»In most parts of society you are encouraged to rely on experts; we all do that more than we should«, Noam Chomsky. (2) Don't forget doing: expertise cannot be virtualized. »I don't think you can be a good inventor if you don't know how the stuff you design gets built«, Walter Isaacson on Elon Musk. (3) Learning to live with uncertainty: the challenge of our time. »People tend to find uncertainty disturbing.«, Phil Tetlock. (4) Evolution, or: my working thesis on progress: »Seed — Select — Amplify«. This episode is a reflection, and I am again especially happy to receive critical feedback! »Stupid people can cause problems, but it often takes genius to trigger a real catastrophe«, Thomas Sowell.

References
Other episodes
Episode 13: (Pseudo)wissenschaft? Welcher Aussage können wir trauen? Teil 1
Episode 14: (Pseudo)wissenschaft? Welcher Aussage können wir trauen? Teil 2
Episode 36: Energiewende und Kernkraft, ein Gespräch mit Anna Veronika Wendland
Episode 42: Gesellschaftliche Verwundbarkeit, ein Blick hinter die Kulissen: Gespräch mit Herbert Saurugg
Episode 59: Wissenschaft und Umwelt — Teil 1
Episode 60: Wissenschaft und Umwelt — Teil 2
Episode 62: Wirtschaft und Umwelt, ein Gespräch mit Prof. Hans-Werner Sinn
Episode 73: Ökorealismus, ein Gespräch mit Björn Peters
Episode 74: Apocalype Always
Episode 76: Existentielle Risiken
Subject references
Leonard Nimoy, Ice Age 1979
Vorsicht, wer die Konjunktur prophezeit, Der Standard (8.2.2020)
Spectator Inflation Prediction (17.8.2022)
Stuart Ritchie, Michael Story, How the experts messed up on Covid, UnHerd (2020)
Gerd Gigerenzer, Risiko – Wie man die richtigen Entscheidungen trifft, Pantheon (2020)
Karl Popper, The Myth of the Framework
Daniel Yankelovich, Wicked Problems, Workable Solutions, Rowman & Littlefield (2014)
Neil Gershenfield, Lex Fridman #380
Walter Isaacson, Lex Fridman #395
Nassim Taleb, What do I mean by Skin in the Game? My Own Version, Medium (2018)
Scott Adams, Prediction and Forecast
Thomas Sowell on “Social Justice Fallacies”, Uncommon Knowledge (2023)

CSO Perspectives (public)
Cybersecurity risk forecasting.

Play Episode Listen Later Aug 21, 2023 20:28


Rick Howard, the CSO, Chief Analyst, and Senior Fellow at N2K Cyber, discusses the current state of cybersecurity risk forecasting with guests Fred Kneip, CyberGRX's founder and President of ProcessUnity, and Kevin Richards, Cyber Risk Solutions President. (A minimal Bayes-update sketch, in the spirit of the risk-forecasting episodes cited below, follows this list.)

References:
Howard, R., 2023. Cybersecurity First Principles: A Reboot of Strategy and Tactics [Book]. Wiley. URL: https://www.amazon.com/Cybersecurity-First-Principles-Strategy-Tactics/dp/1394173083
Howard, R., 2023. Bonus Episode: 2023 Cybersecurity Canon Hall of Fame inductee: Superforecasting: The Art and Science of Prediction by Dr Phil Tetlock and Dr Dan Gardner. [Podcast]. The CyberWire. URL: https://thecyberwire.com/podcasts/cso-perspectives/5567/notes
Howard, R., 2022. Risk Forecasting with Bayes Rule: A practical example. [Podcast]. The CyberWire. URL: https://thecyberwire.com/podcasts/cso-perspectives/88/notes
Howard, R., 2023. Superforecasting: The Art and Science of Prediction [Book review]. Cybersecurity Canon Project. URL: https://icdt.osu.edu/superforecasting-art-and-science-prediction
Howard, R., 2022. Two risk forecasting data scientists, and Rick, walk into a bar. [Podcast]. The CyberWire. URL: https://thecyberwire.com/podcasts/cso-perspectives/89/notes
Howard, R., Freund, J., Jones, J., 2016. 2016 Cyber Canon Inductee - Measuring and Managing Information Risk: A FAIR approach [Interview]. YouTube. URL: https://www.youtube.com/watch?v=vxBpAnSBaGM
Hubbard, D.W., Seiersen, R., 2016. How to Measure Anything in Cybersecurity Risk [Book]. Goodreads. URL: https://www.goodreads.com/book/show/26518108-how-to-measure-anything-in-cybersecurity-risk
Clark, B., Seiersen, R., Hubbard, D., 2017. “How To Measure Anything in Cybersecurity Risk” - Cybersecurity Canon 2017 [Interview]. YouTube. URL: https://www.youtube.com/watch?v=2o_mAavdabg&t=93s
Freund, J., Jones, J., 2014. Measuring and Managing Information Risk: A FAIR Approach [Book]. Goodreads. URL: https://www.goodreads.com/book/show/22637927-measuring-and-managing-information-risk
Katz, D., 2021. Corporate Governance Update: “Materiality” in America and Abroad [Essay]. The Harvard Law School Forum on Corporate Governance. URL: https://corpgov.law.harvard.edu/2021/05/01/corporate-governance-update-materiality-in-america-and-abroad/
Posner, C., 2023. SEC Adopts Final Rules on Cybersecurity Disclosure [Essay]. The Harvard Law School Forum on Corporate Governance. URL: https://corpgov.law.harvard.edu/2023/08/09/sec-adopts-final-rules-on-cybersecurity-disclosure/
Linden, L.V., Kneip, F., Squier, S., 2022. Threats Across the Globe & Benchmarking with CyberGRX [Podcast]. Retail & Hospitality ISAC Podcast. URL: https://pca.st/a49enjb1
Lizárraga, C.J., 2023. Improving the Quality of Cybersecurity Risk Management Disclosures [Essay]. U.S. Securities and Exchange Commission. URL: https://www.sec.gov/news/statement/lizarraga-statement-cybersecurity-072623
Staff, 2022. Benchmarking Cyber-Risk Quantification [Survey]. Gartner. URL: https://www.gartner.com/en/publications/benchmarking-cyber-risk-quantification
Tetlock, P.E., Gardner, D., 2015. Superforecasting: The Art and Science of Prediction [Book]. Goodreads. URL: https://www.goodreads.com/book/show/23995360-superforecasting
Winterfeld, S., 2014. How to Measure Anything in Cybersecurity Risk [Book review]. Cybersecurity Canon Project. URL: https://icdt.osu.edu/how-measure-anything-cybersecurity-risk
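As a reminder of the mechanics behind Bayes-rule risk forecasting only, here is a minimal one-step update with invented numbers (these figures are not from the episode or the cited books):

```python
# One-step Bayesian update of a breach-probability estimate (invented numbers).

prior_breach = 0.20          # prior: P(material breach this year)
p_evidence_if_breach = 0.70  # P(observing this red-team finding | breach-prone environment)
p_evidence_if_safe = 0.20    # P(observing it anyway | not breach-prone)

# Bayes' rule: P(breach | evidence) = P(evidence | breach) * P(breach) / P(evidence)
p_evidence = (p_evidence_if_breach * prior_breach
              + p_evidence_if_safe * (1 - prior_breach))
posterior_breach = p_evidence_if_breach * prior_breach / p_evidence

print(f"Prior {prior_breach:.0%} -> posterior {posterior_breach:.0%}")
```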

Love Your Work
305. Hedgehogs and Foxes

Play Episode Listen Later Jun 29, 2023 12:07


According to philosopher Isaiah Berlin, people think in one of two different ways: they're either hedgehogs or foxes. If you think like a hedgehog, you'll be more successful as a communicator. If you think like a fox, you'll be more accurate.

Isaiah Berlin coined the hedgehog/fox dichotomy (via Archilochus)
In his 1953 essay, “The Hedgehog and the Fox,” Isaiah Berlin quotes the ancient Greek poet Archilochus: “The fox knows many things, but the hedgehog knows one big thing.” Berlin describes this as “one of the deepest differences which divide writers and thinkers, and, it may be, human beings in general.”

How are “hedgehogs” and “foxes” different?
According to Berlin, hedgehogs relate everything to a single central vision. Foxes pursue many ends, often unrelated or even contradictory. If you're a hedgehog, you explain the world through a focused belief or area of expertise. Maybe you're a chemist, and you see everything as chemical reactions. Maybe you're highly religious, and everything is “God's will.” If you're a fox, you explain the world through a variety of lenses. You may try on conflicting beliefs for size, or use your knowledge in a wide variety of fields to understand the world. You explain things as, “From this perspective, X. But on the other hand, Y. It's also worth considering Z.”

The seminal hedgehog/fox essay is actually about Leo Tolstoy
Even though the dichotomy Berlin presented has spread far and wide, his essay is mostly about Leo Tolstoy, and the tension between his fox-like tendencies and hedgehog-like aspirations. In War and Peace, Tolstoy writes: “In historic events the so-called great men are labels giving names to events, and like labels they have but the smallest connection with the event itself. Every act of theirs, which appears to them an act of their own will, is in an historical sense involuntary and is related to the whole course of history and predestined from eternity.” In War and Peace, Tolstoy presents characters who act as if they have control over the events of history. In Tolstoy's view, the events that make history are too complex to be controlled. Extending this theory outside historical events, Tolstoy also writes: “When an apple has ripened and falls, why does it fall? Because of its attraction to the earth, because its stalk withers, because it is dried by the sun, because it grows heavier, because the wind shakes it, or because the boy standing below wants to eat it? Nothing is the cause. All this is only the coincidence of conditions in which all vital organic and elemental events occur.” Is Tolstoy a fox, or a hedgehog? He acknowledges the complexity with which various events are linked – which is very fox-like. But he also seems convinced these events are so integrated with one another that nothing can change them. They're “predestined” – a “coincidence of conditions.” A true hedgehog might have a simple explanation, such as that gravity caused the apple to fall. Tolstoy loved concrete facts and causes, such as the pull of gravity, yet still yearned to find some universal law that could be used to predict the future. According to Berlin: It is not merely that the fox knows many things. The fox accepts that he can only know many things and that the unity of reality must escape his grasp. And this was Tolstoy's downfall. Early in his life, he presented profound insights about the world through novels such as War and Peace and Anna Karenina. That was very fox-like.
Later in his life, he struggled to condense his deep knowledge about the world and human behavior into overarching theories about moral and ethical issues. As Berlin once wrote to a friend, Tolstoy was “a fox who terribly believed in hedgehogs and wished to vivisect himself into one.”

Other hedgehogs and foxes in Berlin's essay
Other thinkers Berlin classifies as foxes include Aristotle, Goethe, and Shakespeare. Other thinkers Berlin classifies as hedgehogs include Dante, Dostoevsky, and Plato.

What does the hedgehog/fox dichotomy have to do with the animals?
What does knowing many things have to do with actual foxes? What does knowing one big thing have to do with actual hedgehogs? A fox is nimble and clever. It can run fast, climb trees, dig holes, swim across rivers, stalk prey, or hide from predators. A hedgehog mostly relies upon its ability to roll into a ball and ward off intruders.

Foxes tell the future, hedgehogs get credit
What are the consequences of being a fox or a hedgehog? According to Phil Tetlock, foxes are better at telling the future, while hedgehogs get more credit for telling the future. In his 2005 book, Expert Political Judgment, Tetlock shared his findings from forecasting tournaments he held in the 1980s and 90s. Experts made 30,000 predictions about political events such as wars, economic growth, and election results. Then Tetlock tracked the performance of those predictions (a minimal scoring sketch follows these notes). What he found led to the U.S. intelligence community holding forecasting tournaments, tracking more than one million forecasts. Tetlock's own Good Judgment Project won the forecasting tournament, outperforming even intelligence analysts with access to classified data.

Better a fox than an expert
These forecasting tournaments have shown that whether someone can make accurate predictions about the future doesn't depend upon their field of expertise, their status within the field, their political affiliation, or philosophical beliefs. It doesn't matter if you're a political scientist, a journalist, a historian, or have experience implementing policies. As the intelligence community's forecasting tournaments have shown, it doesn't even matter if you have access to classified information. What matters is your style of reasoning: foxes make more accurate predictions than hedgehogs. Across the board, experts were barely better than chance at predicting what would or wouldn't happen. Will a new tax plan spur or slow the economy? Will the Cold War end? Will Iran run a nuclear test? Generally, it didn't matter if they were an economist, an expert on the Soviet Union, or a political scientist. That didn't guarantee they'd be better than chance at predicting what would happen. What did matter is whether they thought like a fox.

Foxes are: inductive, open-minded, less biased
Foxes are skeptical of grand schemes – the sort of “theories of everything” Tolstoy had hoped to construct. They didn't see predicting events as a top-down, deductive process. They saw it as a bottom-up, inductive process – stitching together diverse and conflicting sources of information. Foxes were curious and open-minded. They didn't go with the tribe. A liberal fox would be more open to thinking the Cold War could have gone on longer with a second Carter administration. A conservative fox would be more open to believing the Cold War could have ended just as quickly under Carter as it did under Reagan. Foxes were less prone to hindsight bias – less likely to remember their inaccurate predictions as accurate.
They were less prone to the bias of cognitive conservatism – maintaining their beliefs after making an inaccurate prediction. As one fox said: “Whenever I start to feel certain I am right... a little voice inside tells me to start worrying.” —A “fox”

Hedgehogs are: deductive, closed-minded, more biased (yet more successful)
As for inaccurate predictions, one simple test tracked with whether an expert made accurate predictions: a Google search. If an expert was more famous – as evinced by having more results show up on Google when searching their name – they tended to be less accurate. Think about the talking-head people that get called onto MSNBC or Fox News (pun, albeit inaccurate, not intended) to make quick comments on the economy, wars, and elections – those people. Experts who made more media appearances, and got more gigs consulting with governments and businesses, were actually less accurate at making predictions than their colleagues who were toiling in obscurity. And these experts who were more successful – in terms of media appearances and consulting gigs – also tended to be hedgehogs. Hedgehogs see making predictions as a top-down, deductive process. They're more likely to make sweeping generalizations. They take the “one big thing” they know – say, being an expert on the Soviet Union – and view everything through that lens, even to explain something in other domains. Hedgehogs are more biased about the world, and about themselves. They were more likely than foxes to remember their own inaccurate predictions as accurate, and more likely to remember their opponents' accurate predictions as inaccurate. Rather than changing when presented with challenging evidence, hedgehogs' beliefs got stronger.

Are hedgehogs playing a different game?
It's tempting to take that and run with it: the closed-minded hedgehogs of the world are inaccurate; success doesn't track with skill. Tetlock is careful to caution that hedgehogs aren't always worse than foxes at telling the future. Also, there are good reasons to be overconfident in predictions. As one hedgehog political pundit wrote to Tetlock: “You play a publish-or-perish game run by the rules of social science.... You are under the misapprehension that I play the same game. I don't. I fight to preserve my reputation in a cutthroat adversarial culture. I woo dumb-ass reporters who want glib sound bites.” —“Hedgehog” political pundit
A hedgehog has a lot to gain from making bold predictions and being right, and nobody holds them accountable when they're wrong. But according to Tetlock, nothing in the data indicates hedgehogs and foxes are equally good forecasters who merely have different tastes for under- and over-prediction. As Tetlock says: “Quantitative and qualitative methods converge on a common conclusion: foxes have better judgment than hedgehogs.” —Phil Tetlock, Expert Political Judgment

Hedgehogs may make better leaders
As bad as hedgehogs look now, there are some real benefits to being a hedgehog. They're more focused. They don't get as distracted when a situation is ambiguous. So, hedgehogs are more decisive. They're harder to manipulate in a negotiation, and more willing to make controversial decisions that could make enemies. And that confidence can help them lead others. Overall, hedgehogs are better at getting their messages heard. Given the mechanics of media today, that means the messages we hear from either side of the political spectrum are those of the hedgehogs.
Hedgehog thinking makes better sound bites, satisfies the human desire for clarity and certainty, and is easier for algorithms to categorize and distribute. The medium is the message, and nuance is cut out of the messages by the characteristics of the mediums, which increases polarization. But there is hope for the foxes. While the media landscape is still dominated by hedgehog messages that work as social media clips, there are more channels with more room for intellectually honest discourse: blogs, podcasts, and books. And if many a ChatGPT conversation is any indication, the algorithms may get more sophisticated and remind us, “it's important to consider....”

Hedgehogs, be foxes! And foxes, hedgehogs.
If you're a hedgehog, you're lucky: what you have to say has a better chance of being heard. But it will have a better chance of being correct if you think like a fox once in a while: consider different angles, and assume you're wrong. If you're a fox, you have your work cut out for you: you may have important – and accurate – things to say, but they have less of a chance of being heard. Your message will travel farther if you think like a hedgehog once in a while: assume you're right, cut out the asides, and say it with confidence.

Image: Fox in the Reeds by Ohara Koson

About Your Host, David Kadavy
David Kadavy is author of Mind Management, Not Time Management, The Heart to Start, and Design for Hackers. Through the Love Your Work podcast, his Love Mondays newsletter, and self-publishing coaching, David helps you make it as a creative. Follow David on Twitter, Instagram, Facebook, and YouTube. Subscribe to Love Your Work on Apple Podcasts, Overcast, Spotify, Stitcher, YouTube, RSS, or email. New bonus content on Patreon.

Show notes: https://kadavy.net/blog/posts/hedgehogs-foxes/
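The tournament scoring mentioned in the notes above is commonly done with a proper scoring rule such as the Brier score. A minimal illustration with made-up forecasts (not Tetlock's data) looks like this:

```python
# Minimal Brier-score accuracy tracking for probabilistic forecasts (made-up data).
# Lower is better: 0.0 is a perfectly confident correct forecast, and 0.25 is what
# always guessing 50% earns on binary questions.

forecasts = [  # (forecast probability of "yes", actual outcome: 1 = yes, 0 = no)
    (0.90, 1),  # confident and right
    (0.80, 0),  # confident and wrong
    (0.55, 1),  # hedged and right
]

brier = sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)
print(f"Mean Brier score: {brier:.3f}")
```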

CSO Perspectives (public)
Bonus Episode: 2023 Cybersecurity Canon Hall of Fame Inductee: Superforecasting: The Art and Science of Prediction by Dr Phil Tetlock and Dr Dan Gardner.

Play Episode Listen Later Apr 26, 2023 19:06


Rick Howard, N2K's CSO and The CyberWire's Chief Analyst and Senior Fellow, interviews Dan Gardner about this 2023 Cybersecurity Canon Hall of Fame book: “Superforecasting: The Art and Science of Prediction.”

The Nonlinear Library
EA - Announcing the Forecasting Research Institute (we're hiring) by Tegan

Play Episode Listen Later Dec 13, 2022 3:58


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the Forecasting Research Institute (we're hiring), published by Tegan on December 13, 2022 on The Effective Altruism Forum. The Forecasting Research Institute (FRI) is a new organization focused on advancing the science of forecasting for the public good. All decision-making implicitly relies on prediction, so improving prediction accuracy should lead to better decisions. And forecasting has shown early promise in the first-generation research conducted by FRI Chief Scientist Phil Tetlock and coauthors. But despite burgeoning popular interest in the practice of forecasting (especially among EAs), it has yet to realize its potential as a tool to inform decision-making. Early forecasting work focused on establishing a rigorous standard for accuracy, in experimental conditions chosen to provide the cleanest, most precise evidence possible about forecasting itself—a proof of concept, rather than a roadmap for using forecasting in real-world conditions. A great deal of work, both foundational and translational, is still needed to shape forecasting into a tool with practical value. That's why our team is pursuing a two-pronged strategy. One is foundational, aimed at filling in the gaps in the science of forecasting that represent critical barriers to some of the most important uses of forecasting—like how to handle low-probability events, long-run and unobservable outcomes, or complex topics that cannot be captured in a single forecast. The other prong is translational, focused on adapting forecasting methods to practical purposes: increasing the decision-relevance of questions, using forecasting to map important disagreements, and identifying the contexts in which forecasting will be most useful. Over the next two years we plan to launch multiple research projects aimed at the key outstanding questions for forecasting. We will also analyze and report on our group's recently completed project, the Existential Risk Persuasion Tournament (XPT). This tournament brought together over 200 domain experts and highly skilled forecasters to explore, debate, and forecast potential threats to humanity in the next century, creating a wealth of rich data that our team is mining for forecasting and policy insights. In our upcoming projects, we'll be conducting large, high-powered studies on a new research platform, customized for the demands of forecasting research. We'll also work closely with selected organizations and policymakers to create forecasting tools informed by practical use-cases. Our planned projects include:
• Developing a forecasting proficiency test for quickly and cheaply identifying accurate forecasters
• Identifying leading indicators of increased risk to humanity from AI by building “AI-risk conditional trees” with the help of domain experts (overview of conditional trees here, pg. 13)
• Exploring ways of judging (and incentivizing) answers to unresolvable and far-future questions, such as reciprocal scoring (a minimal sketch of this idea appears below)
• Conducting “Epistemic Audits” to help organizations reduce uncertainty, identify action-relevant disagreement, and guide their decision processes.
(For more on our research priorities, see here and here.)
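Reciprocal scoring, mentioned in the project list above, is a way to score forecasts on questions that may never resolve: instead of scoring against the eventual outcome, forecasters are scored against a reference panel's forecasts. A minimal sketch of that idea with hypothetical numbers (not FRI's published scoring rule):

```python
# Sketch of reciprocal scoring for an unresolvable question (hypothetical numbers,
# not FRI's published scoring rule). Instead of scoring against the true outcome,
# score each forecaster against the median forecast of a reference panel.

from statistics import median

reference_panel = [0.04, 0.07, 0.05, 0.06]     # e.g., a hypothetical expert panel
participant_forecasts = {"alice": 0.05, "bob": 0.30}

target = median(reference_panel)

# Quadratic penalty for distance from the panel median (lower is better).
scores = {name: (p - target) ** 2 for name, p in participant_forecasts.items()}

for name, score in sorted(scores.items(), key=lambda kv: kv[1]):
    print(f"{name}: penalty {score:.4f}")
```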
We're excited to begin FRI's work at such an auspicious time for the field of forecasting, with the many great projects, people and ideas that currently inhabit it—spanning the gamut from heavyweight organizations like Metaculus and GJI, to the numerous innovative projects run by small teams and individuals. This environment presents a wealth of opportunities for collaboration and cooperation, and we're looking forward to being a part of such a dynamic community. Our core team consists of Phil Tetlock, Michael Page, Josh Rosenberg, Ezra Karger, Tegan McCaslin, and Zachary Jacobs. We also work with various contractors and external collaborators in the forec...

The Nonlinear Library
EA - Participate in the Hybrid Forecasting-Persuasion Tournament (on X-risk topics) by Jhrosenberg

Play Episode Listen Later Apr 26, 2022 3:43


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Participate in the Hybrid Forecasting-Persuasion Tournament (on X-risk topics), published by Jhrosenberg on April 25, 2022 on The Effective Altruism Forum. Tl;dr: Phil Tetlock's research team is running a tournament that will team up superforecasters with subject-matter experts to produce accurate forecasts and informative rationales on existential risk topics. If you would like to express interest in participating, please answer some initial questions about your views on X-risk in this form by May 13th. More on the tournament The Hybrid Forecasting-Persuasion Tournament will explore potential threats to humanity in the next century, with a focus on artificial intelligence, biosecurity, climate, and nuclear war in the short-term (

Slate Star Codex Podcast
Mantic Monday 4/18/22

Play Episode Listen Later Apr 19, 2022 20:08


https://astralcodexten.substack.com/p/mantic-monday-41822 Nuclear risk, AI risk, Musk-acquiring-Twitter risk

Warcasting
Changes in Ukraine prediction markets since my last post March 21:
• Will at least three of six big cities fall by June 1?: 53% → 5%
• Will World War III happen before 2050?: 20% → 22%
• Will Russia invade any other country in 2022?: 7% → 5%
• Will Putin still be president of Russia next February?: 80% → 85%
• Will 50,000 civilians die in any single Ukrainian city?: 10% → 10%
If you like getting your news in this format, subscribe to the Metaculus Alert bot for more (and thanks to ACX Grants winner Nikos Bosse for creating it!)

Nuclear Risk Update
Last month superforecaster group Samotsvety Forecasts published their estimate of the near-term risk of nuclear war, with a headline number of 24 micromorts per week. A few weeks later, J. Peter Scoblic, a nuclear security expert with the International Security Program, shared his thoughts. His editor wrote: “I (Josh Rosenberg) am working with Phil Tetlock's research team on improving forecasting methods and practice, including through trying to facilitate increased dialogue between subject-matter experts and generalist forecasters. This post represents an example of what Daniel Kahneman has termed 'adversarial collaboration.'” So, despite some epistemic reluctance, Peter estimated the odds of nuclear war in an attempt to pinpoint areas of disagreement. In other words: the Samotsvety analysis was the best that domain-general forecasting had to offer. This is the best that domain-specific expertise has to offer. Let's see if they line up:

The Nonlinear Library
EA - Nuclear Expert Comment on Samotsvety Nuclear Risk Forecast by Jhrosenberg

Play Episode Listen Later Mar 26, 2022 30:59


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Nuclear Expert Comment on Samotsvety Nuclear Risk Forecast, published by Jhrosenberg on March 26, 2022 on The Effective Altruism Forum. The below comment was written by J. Peter Scoblic and edited by Josh Rosenberg. Peter is a senior fellow with the International Security Program at New America, where he researches strategic foresight, prediction, and the future of nuclear weapons. He has also served as deputy staff director of the Senate Committee on Foreign Relations, where he worked on approval of the New START agreement, and he is the author of U.S. vs. Them, an intellectual history of conservatism and nuclear strategy. I (Josh Rosenberg) am working with Phil Tetlock's research team on improving forecasting methods and practice, including through trying to facilitate increased dialogue between subject-matter experts and generalist forecasters. This post represents an example of what Daniel Kahneman has termed “adversarial collaboration.” So, despite some epistemic reluctance, Peter estimated the odds of nuclear war in an attempt to pinpoint areas of disagreement. We shared this forecast with the Samotsvety team in advance of posting to check for major errors and intend these comments in a supportive and collaborative spirit. Any remaining errors are our own. In the next couple of weeks, our research team plans to post an invitation to a new Hybrid Forecasting-Persuasion Tournament that we hope will lead to further collaboration like the below on other existential risk-related topics. If you'd like to express interest in participating in that tournament, please add your email here.

Summary
On March 10th, the Samotsvety forecasting team published a Forum post assessing the risk that a London resident would be killed in a nuclear strike “in light of the war in Ukraine and fears of nuclear escalation.” It recommended against evacuating major cities because the risk of nuclear death was extremely low. However, the reasoning behind the forecast was questionable. The following is intended as a constructive critique to improve the forecast quality of nuclear risk, heighten policymaker responsiveness to probabilistic predictions, and ultimately reduce the danger of nuclear war. By implication, it makes the case that greater subject matter expertise can benefit generalist forecasters—even well-calibrated ones. The key takeaways are:
• Because a nuclear war has never been fought, it is difficult to construct a baseline forecast from which to adjust in light of Russia's invasion of Ukraine. Nuclear forecasting warrants greater epistemic humility than other subjects.
• The baseline forecast of U.S.-Russian nuclear conflict (absent Russia's invasion of Ukraine) is unstated, but it appears to rest, in part, on the debatable assertion that strategic stability between the United States and Russia has improved since the end of the Cold War when many developments suggest the opposite is true.
• The relatively low probability of London being struck with a nuclear weapon during a NATO/Russia nuclear war rests on the assumption that nuclear escalation can likely be controlled—one of the most persistent and highly contested subjects in nuclear strategy. In other words, the aggregate forecast takes a highly confident position on an open question.
• The forecast does not seem to account for U.S. and Russian doctrine concerning the targeting and employment of nuclear weapons.
• The forecast is overly optimistic about the ability to evacuate a major city.
Taking all of the above into account, my forecast of nuclear risk in the current environment is an order of magnitude higher. But experts view nuclear forecasting with suspicion because the lack of historical analogy generates a high degree of uncertainty. Although providing a common measure of risk may be useful to the EA community, couching nucle...

Heterodox Out Loud
Ep. 34: Part 1: Political Diversity Will Improve Social Psychological Science, Lee Jussim and Jonathan Haidt (Blog Audio-Only)

Play Episode Listen Later Mar 24, 2022 29:19


On part 1 of this episode of Heterodox Out Loud, we'll listen to Jonathan Haidt's edited summary of a seminal academic paper that helped lead to the founding of Heterodox Academy. The original paper, “Political Diversity Will Improve Social Psychological Science,” was published in Behavioral and Brain Sciences in 2015, and was written by Jonathan Haidt, Lee Jussim, Jose Duarte, Jarret Crawford, Phil Tetlock, and Charlotta Stern. Make sure to listen to part 2, where we speak with co-author Lee Jussim, Social Psychologist and Distinguished Professor at Rutgers University, about how political bias in academia can solidify into orthodoxies that undermine truth-seeking and critical inquiry. Let us know what you think! For comments and questions, email communications@heterodoxacademy.org. This episode was hosted by Zach Rausch and produced by Davies Content. Heterodox Out Loud is an ongoing series of selected pieces from heterodox: the blog in audio form, with exclusive interviews.

The Electric Wire
The Future of Utilities with Rebecca Ryan and Lauren Azar

Play Episode Listen Later Mar 16, 2022 46:58


Rebecca Ryan and Lauren Azar join the Electric Wire for a conversation about change in the energy industry and in our world. Rebecca, one of the nation's leading futurists, discusses local and global trends that everyone should be preparing for. Lauren, one of America's most respected utility attorneys, leads the conversation about change in the energy industry, as they answer the following questions, and more:
• How do we get those in the utility industry to start thinking like futurists?
• How important are governmental or industry goals to get where we want to go?
• What does the future look like for EVs and electrification?
• What are the craziest changes coming by 2050 that we haven't thought of yet?
Cari Anne Renlund, Vice President, General Counsel and Secretary of Madison Gas and Electric Co., joins Kristin Gilkes, Executive Director of the Customers First! Coalition, as co-host (and bonus guest) for a lively and inspiring conversation about the direction of the industry. Links from the Episode: More on MGE's 2030 and 2050 Carbon Reduction Goals: https://www.mge2050.com/en/ IPCC: Sixth Assessment Report https://www.ipcc.ch/assessment-report/ar6/ Note from Rebecca: The Good Judgment Project (https://www.gjopen.com) makes predictions on EV penetration and MANY other things. The Good Judgment Project was founded by Phil Tetlock, the author of Superforecasting, who found that regular people can learn to do foresight as well as or better than trained CIA agents. Long Bets also does long-term wagering about the future, fueled by a bunch of futurists and thought leaders. I just dropped in, and Steven Pinker has a challenge about bioterrorism. How to Become a Cyborg from MIT Technology Review: https://www.technologyreview.com/2018/06/19/142228/here-are-some-ways-to-upgrade-yourself-one-body-part-at-a-time/

Sped up Rationally Speaking
Rationally Speaking #145 - Phil Tetlock on "Superforecasting: The Art and Science of Prediction"

Play Episode Listen Later Jan 3, 2021 53:08


Most people are terrible at predicting the future. But a small subset of people are significantly less terrible: the Superforecasters. On this episode of Rationally Speaking, Julia talks with professor Phil Tetlock, whose team of volunteer forecasters has racked up landslide wins in forecasting tournaments sponsored by the US government. He and Julia explore what his teams were doing right and what we can learn from them, the problem of meta-uncertainty, and how much we should expect prediction skill in one domain (like politics or economics) to carry over to other domains in real life. Sped up the speakers by ['1.07', '1.0']

This Week in Intelligent Investing
Dividend Policy and Capital Allocation | Forecasting Surprises and Disasters

Play Episode Listen Later Oct 31, 2020 82:02


In this episode, John Mihaljevic hosts a discussion of:
• Dividend payout policy as part of overall capital allocation: Chris Bloomstran casts a critical eye toward the tax (in)efficiency of dividends and the decision-making process behind corporate boards' dividend policies. We discuss the utility of dividends within a capital allocation framework.
• Forecasting surprises (and disasters) based on a Phil Tetlock article: Phil Ordway shares insights from a recent Phil Tetlock article and takes a look at the conditions under which forecasting can add value. We discuss the role of forecasting in equity valuation and investment decision-making processes.
Elliot Turner rejoins us in the next episode. Enjoy the discussion!
The content of this podcast is not an offer to sell or the solicitation of an offer to buy any security in any jurisdiction. The content is distributed for informational purposes only and should not be construed as investment advice or a recommendation to sell or buy any security or other investment, or undertake any investment strategy. There are no warranties, expressed or implied, as to the accuracy, completeness, or results obtained from any information set forth on this podcast. The podcast participants and their affiliates may have positions in and may, from time to time, make purchases or sales of the securities or other investments discussed or evaluated on this podcast.

Zukunft Denken – Podcast
030 – (Techno-)Optimismus – ein Gespräch mit Tim Pritlove

Zukunft Denken – Podcast

Play Episode Listen Later Sep 3, 2020 107:52


How optimistically can we look at the future, and what role does technology play in it? "Without optimism you might as well stay in bed." In this episode I talk with Tim Pritlove. Tim is a "nerd" of the early days, "socialized" in the hacker scene (including the Chaos Computer Club) since the 1980s, and for many years he helped organize the Chaos Communication Congress. His interests, however, are considerably broader: with the Blinkenlights project he made a name for himself as a media artist, and today he is regarded as one of Germany's leading podcasters. He does not restrict himself to topics around computers and technology, but also runs podcasts that make scientific topics accessible to a broader audience, covers (net) politics, and produces entertainment in the wider sense.
In this episode we talk about foxes and hedgehogs (after Phil Tetlock), that is, we ask what role generalists and specialists play in our society in general and where "nerds" fit in. How are decisions made? What challenges does democracy have to master so as not to slide into expertocracy or populism? What role does communication technology play in that? Have expectations always been overblown, or is there no shared future without global means of communication? How is democracy changed by the new possibilities (e.g. Liquid Feedback, voting machines), and what should be changed at all? How do we deal with extremists and trolls, and where does freedom of speech end? "Our substantive handling of madness needs an overhaul."
How should we assess the ideas of the "nerds" and autodidacts of the 1980s and 1990s today? Do the often idealistic ideas of that era scale? Which misjudgments should we learn from for the future? How do we deal with the techno-social "wave movements" – from the CompuServe and AOL of the 1990s, via the "free internet," back to the CompuServe of the 21st century that goes by the name of Facebook? What does the next wave look like? What role do speed and slowness play – in the adoption of technology, in social phenomena, and in democratic processes? "In a societal system you need enough time to bring people along [...] speed, making decisions quickly, is not a quality in itself."
Has Europe lost touch in strategic technologies? Do we in Europe still decide anything ourselves, or do we simply have to take note of what is defined in other parts of the world? How is communication changing? Are people really only interested in sound bites, as legacy media have always claimed, and if so, why are long formats on YouTube and in podcasts so popular right now? Does communication change in a crisis? "In moments of a crisis experienced as real, the vast majority of people are able to pull together, and then things can move very quickly."
Solutionism: are we losing ourselves in technical visions (the electric car, the colonization of space, singularity fantasies) instead of letting the smartest people work on the biggest problems of our time? "Sooner or later the internet will have been needed." "I cannot imagine a planet Earth [...] that has no global communication system. But we are still a long way from there." – all quotes from Tim Pritlove.
References: Tim Pritlove; Tim on Twitter; Metaebene on Twitter; Chaos Computer Club; Chaos Communication Congress (Event Blog, Wikipedia); Project Blinkenlights (Wikipedia); Metaebene – the "roof" of the podcast empire. Podcasts (a selection): CRE: Technik, Kultur, Gesellschaft; Freakshow; Forschergeist; Raumzeit; Logbuch:Netzpolitik; UKW – Unsere kleine Welt (Not Safe For Work). Other episodes: Episode 26: What can politics (still) achieve? A conversation with Christoph Chorherr; Episode 24: Hangover: what we expected from the internet and what we got – a conversation with Peter Purgathofer; Episodes 19 and 20 on open systems, likewise with numerous additions to the topics discussed with Tim; Episodes 15 and 16: what are innovation and progress about?; Episode 31: the role of complex software systems in our society, a conversation with Thomas Konrad. Subject references: Philip Tetlock, Superforecasting: The Art and Science of Prediction, Cornerstone (2015); Liquid Feedback (software); Evgeny Morozov, To Save Everything, Click Here, Penguin (2014); Konrad Paul Liessmann, Leben mit dem Virus.

The Human Risk Podcast
Professor Yuval Feldman on why we should write rules for good people not bad people

The Human Risk Podcast

Play Episode Listen Later Oct 11, 2019 39:47


We have laws to protect us from the actions of 'bad' people. But why might writing laws for 'bad' people actually be a bad idea? That's what my guest, Professor Yuval Feldman, asks in his research and helps me explore on this inaugural episode of the podcast. Might we be better off writing laws for 'good' people, or those who think of themselves as good people? Yes, says Feldman. As he explains, "In many, many contexts, people do not know that what they do is illegal or immoral, at least not in an objective way." What works for law can also work for compliance. If you think the law is boring, think again.
Resources:
Yuval Feldman - https://law.biu.ac.il/en/feldman
Robert Cooter - https://www.law.berkeley.edu/our-faculty/faculty-profiles/robert-cooter/
Phil Tetlock - https://www.sas.upenn.edu/tetlock/
Yuval's book "The Law of Good People" - https://tinyurl.com/yxczvzrr

Football Index Podcast
Episode 84: The 'Football Index Freud' Sam Freedman Returns

Football Index Podcast

Play Episode Listen Later May 26, 2019 73:59


Discussed on today's show:
- Advice for new users
- Doing research
- How to 'Brexit proof' your portfolio
- How could Brexit impact FI?
Books recommended by Sam:
--> Thinking, Fast and Slow, Daniel Kahneman: https://amzn.to/2EtDN8H
--> Misbehaving, Richard Thaler: https://amzn.to/2HWZzCI
--> Animal Spirits, George A. Akerlof: https://amzn.to/2W2B80u
--> Risk, Dan Gardner: https://amzn.to/2WoGSAR
--> A Crisis of Beliefs, Nicola Gennaioli & Andrei Shleifer: https://amzn.to/2W5OusW
Books recommended in the previous episode:
--> Nudge by Thaler/Sunstein - https://amzn.to/2ULfOYi
--> Predictably Irrational - https://amzn.to/2WV9dw4
--> Superforecasting by Phil Tetlock - https://amzn.to/2I6MnOP
Also discussed:
- In-Play dividends and Performance Buzz
- Player ceilings
- PB scoring matrix, GWG
- FI's finances and business model
- A youth bias, or a youth bubble
- Media Buzz
- Marketing budget over summer
- Positional changes & Optagate
- FI comms
- The psychological side of Football Index and market psychology
- When to sell players
- Structuring your portfolio
- Order books
If you did enjoy this, please do subscribe, and leave a review! Want to learn more about Football Index and hone your trading skills? Check out my YouTube channel: https://www.youtube.com/channel/UCBRKBjc-H8EvC15eejJc6GQ
If you haven't already signed up to Football Index, use the referral code "FIG" when signing up for a bonus! ⬇
● Deposit £50 or more, get a £20 bonus
● You also trade up to £500 risk free for 7 days (T&C's: https://trade.footballindex.co.uk/figbonus/)
Music by Nkato and Joakim Karud

Football Index Podcast
Episode 69: Behavioural Psychology on Football Index Ft. Sam Freedman

Football Index Podcast

Play Episode Listen Later Feb 10, 2019 82:25


Discussed on today's show:
- Cognitive biases on Football Index
- Youngster trends
- Competitors to FI
- Red and Green
- Economics of Football Index
- How FI makes money
- Why people look at FI as a pyramid scheme
- Educating and onboarding traders
Books mentioned on the show:
- Superforecasting by Phil Tetlock - https://amzn.to/2I6MnOP
- Thinking, Fast and Slow by Daniel Kahneman - https://amzn.to/2DB9qvV
- Predictably Irrational - https://amzn.to/2WV9dw4
- Nudge by Thaler/Sunstein - https://amzn.to/2ULfOYi
- Misbehaving by Richard Thaler - https://amzn.to/2ULfYyS
If you did enjoy this, please do subscribe, and leave a review! Want to learn more about Football Index and hone your trading skills? Check out my YouTube channel: https://www.youtube.com/channel/UCBRKBjc-H8EvC15eejJc6GQ
This episode was brought to you in partnership with IndexGain. Check them out here: https://indexgain.co.uk/?pa=FIG&subid=PODCAST and use the discount code 'FIG2019' for 50% off your first month!
If you haven't already signed up to Football Index, use the referral code "FIG" when signing up for a bonus! ⬇
● Deposit £50 or more, get a £20 bonus
● You also trade up to £500 risk free for 7 days (T&C's: https://trade.footballindex.co.uk/figbonus/)
Music by Nkato and Joakim Karud

Behavioral Grooves Podcast
Leaving the Matrix: Annie Duke and Insights into how you can improve your thinking!

Behavioral Grooves Podcast

Play Episode Listen Later Sep 30, 2018 119:19


Annie Duke’s latest book, Thinking in Bets: Making Smarter Decisions When You Don’t Have All the Facts, is a masterful mash-up of her life as a researcher, poker player and charitable organization founder. In it, she explores new ideas on how to make better decisions. Our interview with her expanded beyond the book, and we talked extensively about probabilistic thinking and having people hold us accountable for our decision making. As expected, our interview covered an eclectic mix of behavioral biases, sociology, language development and, without fail, music. We noted some remarkable researchers including Anna Dreber, Phil Tetlock, Barb Miller, Stuart Firestein and Jonathan Haidt. We went deep into Annie’s personal history with her mentor Lila Gleitman and their work on Syntactic Bootstrapping, with the help of Donald Duck. Our music discussion included Jack White, Willie Nelson, Jonathan Richman, Prince, Alex Chilton and the Violent Femmes. If you find any of these names unfamiliar, we urge you to check them out. We used the movie The Matrix and the blue pill/red pill metaphor for looking at the world as accurate vs. inaccurate, rather than right or wrong. We discussed how tribes can offer us distinctiveness and belongingness but also confine us with the tribe’s sometimes negative influences. We also examined learning pods and how they can be used to keep our decisions more in line with reality.
Because this is a lengthy discussion, we share the following to help you navigate if you’re interested in specific topics (Hour:Minute:Second). We sincerely hope you’ll take time to listen to the entire discussion – it’s both fun and insightful – but we also understand that life can get busy.
- Red Pill / Blue Pill begins at 00:07:40
- Tribes begins at 00:11:36
- Learning groups begins at 00:31:08
- Discussion of Lila Gleitman begins at 1:00:55
- Syntactic Bootstrapping begins at 1:05:36
- Jack White begins at 1:17:30
If you like this episode, please forward it on to a friend or colleague and help Kurt win his bet with Tim for who pays the donation to How I Decide. You can find more information on or donate to this wonderful non-profit at www.howidecide.org.

The Michael Martin Show
Learn to Avoid price targets to explode your equity

The Michael Martin Show

Play Episode Listen Later Dec 7, 2017 7:35


Price targets cut your profits. Let the market tell you when the move is over. Price targets are about predicting the future, and human beings are horrible at that, at best. Read Expert Political Judgment by Phil Tetlock to get an idea of what I'm speaking about. Intraday data is not statistically significant, so move your charts up to longer time frames and begin to focus on daily, weekly, and monthly time series. Longer time frames remove the randomness of price. Don't trail structure when you put on the trade. Focus on percentages - that's what professionals do. Once the trade is working in your favor, then you can trail structure if you want. But make sure you're looking at weekly or monthly support, not cloud-like chart patterns that change when you breathe on them. When you let go of price targets, you'll focus on "best practices": financially, letting your winners run; emotionally, letting go of control (you don't have any in the first place); and spiritually, living a life that's worth living.
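As a rough illustration of the "let the market tell you" idea, here is a minimal Python sketch contrasting a fixed price target with a percentage-based trailing exit. The entry price, the 10% trail, and the weekly closes are made-up numbers for illustration only, not the host's system and not trading advice.

```python
# Illustrative only: invented weekly closes, entry at 100, 10% fixed target vs 10% trailing exit.

def exit_with_price_target(prices, entry, target):
    """Sell the first time price reaches a fixed target; otherwise hold to the last close."""
    for price in prices:
        if price >= target:
            return price
    return prices[-1]

def exit_with_trailing_stop(prices, entry, trail_pct):
    """Let the winner run; sell only when price closes trail_pct below its highest close so far."""
    peak = entry
    for price in prices:
        peak = max(peak, price)
        if price <= peak * (1.0 - trail_pct):
            return price
    return prices[-1]

weekly_closes = [100, 104, 110, 118, 115, 127, 140, 133, 124, 120]
print(exit_with_price_target(weekly_closes, entry=100, target=110))       # 110 -> +10%, exits early
print(exit_with_trailing_stop(weekly_closes, entry=100, trail_pct=0.10))  # 124 -> +24%, rides the trend
```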

NonProphets
Ep. 41: Welton Chang Interview—From the DIA to Superforecasting

NonProphets

Play Episode Listen Later Sep 30, 2017 85:16


Episode 41 of the NonProphets podcast, in which Atief, Robert and Scott interview Welton Chang, a fellow Superforecaster and former Defense Intelligence Agency analyst who was stationed in South Korea and deployed twice to Iraq. He is currently a Ph.D. candidate in the Good Judgment laboratory at the University of Pennsylvania, with Phil Tetlock and Barbara Mellers as advisors. Military intelligence – Korea, Iraq (4:00). Confronting being wrong – the nature of judgment and cognition (7:15). Vizzini's Princess Bride conundrum (12:15) (https://www.youtube.com/watch?v=9s0UURBihH8). AI – algorithms and models – should we trust them, and the garbage-in, garbage-out problem (12:50). Spaghetti chart of Afghanistan: perhaps an accurate representation (18:45)? Limits of modern warfare – restrictions (22:30). Rationality – Trump, Kim, Rex, nukes (33:00)? What is a good way to train forecasters? Welton's work helping develop training material for the Good Judgment Project (50:40). Improving group dynamics for better decisions (57:00). Bayes' theorem and practice (1:20:00). We close with Welton's cats @percyandportia, Instagram celebrities (1:21:20). As always, you can reach us at nonprophetspod.com or nonprophetspod@gmail.com. (Recorded 9/20/2017.)
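Since the episode closes with a segment on Bayes' theorem in practice, here is a minimal Python sketch of a Bayesian forecast update. The prior and the likelihoods are invented numbers for illustration, not figures discussed by Welton or the hosts.

```python
# Illustrative only: made-up prior and likelihoods for a single yes/no forecasting question.

def bayes_update(prior, p_signal_given_true, p_signal_given_false):
    """Return P(event | signal) given a prior and the likelihood of the signal under each case."""
    numerator = prior * p_signal_given_true
    denominator = numerator + (1.0 - prior) * p_signal_given_false
    return numerator / denominator

# A forecaster starts at 30% that an event happens, then sees evidence that is
# twice as likely if the event is coming as if it is not.
prior = 0.30
posterior = bayes_update(prior, p_signal_given_true=0.6, p_signal_given_false=0.3)
print(f"prior {prior:.0%} -> posterior {posterior:.0%}")  # prior 30% -> posterior 46%
```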

Bregman Leadership Podcast
Episode 8: Philip Tetlock – Superforecasting

Bregman Leadership Podcast

Play Episode Listen Later Feb 8, 2016 20:21


You’ll leave my conversation with Phil Tetlock with a better understanding of how to predict the future and an approach to bouncing back when you’re wrong.

P4 Världen
Så blir 2016 - korrespondenterna spår det nya året

P4 Världen

Play Episode Listen Later Jan 2, 2016 30:18


Will the United Kingdom leave the EU? Who will win the presidential election in the USA? What will be the new food trend of 2016? SR's correspondents and an American "superforecaster" peer into the future. With funding from the American intelligence services, a group of "superforecasters" predicts the coming year. They are amateurs, but in recent years they have been more right about the future than the intelligence service that has access to classified material. Hear Warren Hatch, one of these superforecasters, and Phil Tetlock, the psychology professor who trained him to see into the future. Staffan Sonning, correspondent in London, predicts what will happen in the United Kingdom's referendum on leaving the EU. Cecilia Uddén, Middle East correspondent, looks ahead into 2016 to see whether there can be a path to peace in Syria. Agneta Furvik, New York correspondent, predicts what will happen in the American presidential election this autumn, and which new American food and health trends will spread around the world.

Rationally Speaking
Rationally Speaking #145 - Phil Tetlock on "Superforecasting: The Art and Science of Prediction"

Rationally Speaking

Play Episode Listen Later Oct 18, 2015 55:45


Most people are terrible at predicting the future. But a small subset of people are significantly less terrible: the Superforecasters. On this episode of Rationally Speaking, Julia talks with professor Phil Tetlock, whose team of volunteer forecasters has racked up landslide wins in forecasting tournaments sponsored by the US government. He and Julia explore what his teams were doing right and what we can learn from them, the problem of meta-uncertainty, and how much we should expect prediction skill in one domain (like politics or economics) to carry over to other domains in real life.
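For readers curious how "less terrible" forecasting gets measured, here is a hedged Python sketch of a Brier score, the kind of accuracy metric used in forecasting tournaments like those mentioned above, shown in its simple binary form (mean squared error between probability forecasts and outcomes, lower is better). The forecasts and outcomes below are invented, not tournament data.

```python
# Illustrative only: five made-up yes/no questions, with invented forecasts and outcomes.

def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts and 0/1 outcomes; 0 is perfect."""
    assert len(forecasts) == len(outcomes)
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

outcomes = [1, 0, 1, 1, 0]
print(brier_score([0.8, 0.2, 0.7, 0.9, 0.1], outcomes))  # ~0.04: confident and calibrated
print(brier_score([0.5, 0.5, 0.5, 0.5, 0.5], outcomes))  # 0.25: hedging everything at 50/50
```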