Dry Bulk Shipping Sector With EDRY, GOGL, SB, SHIP & USEA MODERATOR: Mr. Gregory Lewis, Head of Maritime Research - BTIG FEATURED PANELISTS Mr. Aristides Pittas, Chairman & CEO - EuroDry Ltd. (NASDAQ: EDRY) Mr. Peder Simonsen, Interim CEO & CFO - Golden Ocean Group Ltd. (NASDAQ: GOGL) (OSLO: GOGL) Dr. Loukas Barmparis, President - Safe Bulkers, Inc. (NYSE: SB) Mr. Stamatis Tsantanis, Chairman & CEO - Seanergy Maritime Holdings Corp. (NASDAQ: SHIP), Founder, Chairman & CEO - United Maritime Corporation (NASDAQ: USEA) Capital Link Shipping Sector Webinars 2024 For more information please visit: https://webinars.capitallink.com/2024/shipping/
The Black Panther Party's Oakland Community School is turning 50 this Saturday and there's an event to celebrate it. We are joined this morning by Fredrika Newton – the Co-founder of the Dr. Huey P. Newton Foundation, former Black Panther Party member, and widow of Dr. Huey P. Newton, as well as Gregory Lewis – a former student and alumnus of the Oakland Community School. — Subscribe to this podcast: https://plinkhq.com/i/1637968343?to=page Get in touch: lawanddisorder@kpfa.org Follow us on socials @LawAndDis: https://twitter.com/LawAndDis; https://www.instagram.com/lawanddis/ The post Black Panther School Celebrates 50th Anniv w/ Fredrika Newton & Gregory Lewis appeared first on KPFA.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Say how much, not more or less versus someone else, published by Gregory Lewis on December 29, 2023 on The Effective Altruism Forum. Or: "Underrated/overrated" discourse is itself overrated. BLUF: "X is overrated", "Y is neglected", "Z is a weaker argument than people think", are all species of second-order evaluations: we are not directly offering an assessment of X, Y, or Z, but do so indirectly by suggesting another assessment, offered by someone else, needs correcting up or down. I recommend everyone cut this habit down ~90% in aggregate for topics they deem important, replacing the great majority of second-order evaluations with first-order evaluations. Rather than saying whether you think X is over/under rated (etc.) just try and say how good you think X is. The perils of second-order evaluation Suppose I say "I think forecasting is underrated". Presumably I mean something like: I think forecasting should be rated this highly (e.g. 8/10 or whatever) I think others rate forecasting lower than this (e.g. 5/10 on average or whatever) So I think others are not rating forecasting highly enough. Yet whether "Forecasting is overrated" is true or not depends on more than just "how good is forecasting?" It is confounded by questions of which 'others' I have in mind, and what their views actually are. E.g.: Maybe you disagree with me - you think forecasting is overrated - but it turns out we basically agree on how good forecasting is. Our apparent disagreement arises because you happen to hang out in more pro-forecasting environments than I do. Or maybe we hang out in similar circles, but we disagree in how to assess the prevailing vibes. We basically agree on how good forecasting is, but differ on what our mutual friends tend to really think about it. 
(Obviously, you could also get specious agreement of the two-wrongs-make-a-right variety: you agree with me forecasting is underrated despite having a much lower opinion of it than I do, because you assess third parties as having an even lower opinion still) These are confounders as they confuse the issue we (usually) care about: how good or bad forecasting is, not the inaccuracy of others nor in which direction they err re. how good they think forecasting is. One can cut through this murk by just assessing the substantive issue directly. I offer my take on how good forecasting is: if folks agree with me, it seems people generally weren't over or under-rating forecasting after all. If folks disagree, we can figure out - in the course of figuring out how good forecasting is - whether one of us is over/under rating it versus the balance of reason, not versus some poorly scribed subset of prevailing opinion. No phantom third parties to the conversation are needed - or helpful to - this exercise. In praise of (kind-of) objectivity, precision, and concreteness This is easier said than done. In the forecasting illustration above, I stipulated 'marks out of ten' as an assessment of the 'true value'. This is still vague: if I say forecasting is '8/10', that could mean a wide variety of things - including basically agreeing with you despite you giving a different number to me. What makes something 8/10 versus 7/10 here? It is still a step in the right direction. Although my '8/10' might be essentially the same as your '7/10', there is probably some substantive difference between 8/10 and 5/10, or 4/10 and 6/10. It is still better than second-order evaluation, which adds another source of vagueness: although saying for myself forecasting is X/10 is tricky, it is still harder to do this exercise on someone else's (or everyone else's) behalf. And we need not stop there. 
Rather than some singular measure like 'marks out of 10' for 'forecasting' as a whole, maybe we have some specific evaluation or recommendation in mind. Perhaps: "Most members o...
ANALYST PANEL Mr. Robert Bugbee, President - Scorpio Tankers Inc. (STNG); President & Director - Eneti Inc. (NETI) Mr. Gregory Lewis, Head of Maritime Research – BTIG Mr. Frode Mørkedal, Managing Director, Equity Research - Clarksons Securities AS Mr. Chris Robertson, Vice President - Deutsche Bank Mr. Omar Nokta, Lead Shipping Researcher – Jefferies Mr. Liam Burke, Managing Director – B Riley Securities 15th Annual Capital Link New York #Maritime Forum Metropolitan Club, New York Partnered with #DNB and in cooperation with #nyse & #nasdaq For more info please visit here: https://forums.capitallink.com/shipping/2023NYmaritime/agenda.html
This piece defends a strong form of epistemic modesty: that, in most cases, one should pay scarcely any attention to what you find the most persuasive view on an issue, hewing instead to an idealized consensus of experts. I start by better pinning down exactly what is meant by ‘epistemic modesty', go on to offer a variety of reasons that motivate it, and reply to some common objections. Along the way, I show common traps people being inappropriately modest fall into. I conclude that modesty is a superior epistemic strategy, and ought to be more widely used - particularly in the EA/rationalist communities. [gdoc] Provocation I argue for this: In virtually all cases, the credence you hold for any given belief should be dominated by the balance of credences held by your epistemic peers and superiors. One's own convictions should weigh no more [...] ---Outline:(00:45) Provocation(01:05) Introductions and clarifications(01:10) A favourable motivating case(03:08) Weaker and stronger forms of modesty(04:25) Motivations for more modesty(04:42) The symmetry case(06:53) Compressed sensing of (and not double-counting) the object level(09:00) Repeated measures, brains as credence censors, and the wisdom of crowds(11:09) Deferring to better brains(12:38) Inference to the ideal epistemic observer(15:26) Excursus: Against common justifications for immodesty(16:21) Being ‘well informed' (or even true expertise) is not enough(18:02) Common knowledge ‘silver bullet arguments'(19:47) Debunking the expert class (but not you)(22:51) Private evidence and pet arguments(24:52) Objections(25:04) In theory(25:08) There's no pure ‘outside view'[12](25:56) Immodestly modest?(28:55) In practice(29:25) Trivial (and less trivial) non-use cases(31:40) In theory, the world should be mad(34:45) Empirically, the world is mad(37:22) Expert groups are seldom in reflective equilibrium(42:05) Somewhat satisfying Shulman(42:55) Practical challenges to modesty(44:21) Community benefits to immodesty(47:25) 
Conclusion: a paean, and a plea(47:53) Rationalist/EA exceptionalism(50:46) To discover, not summarise(53:00) Paradoxically pathological modesty(54:28) Coda(55:02) Acknowledgements--- First published: October 29th, 2017 Source: https://forum.effectivealtruism.org/posts/WKPd79PESRGZHQ5GY/in-defence-of-epistemic-modesty --- Narrated by TYPE III AUDIO.
Imagine this: Oliver: … Thus we see that donating to the opera is the best way of promoting the arts. Eleanor: Okay, but I'm principally interested in improving human welfare. Oliver: Oh! Well I think it is also the case that donating to the opera is best for improving human welfare too. Generally, what is best for one thing is usually not the best for something else, and thus Oliver's claim that donations to opera are best for both the arts and human welfare is surprising. We may suspect bias: that Oliver's claim that the opera is best for human welfare is primarily motivated by his enthusiasm for opera and desire to find reasons in its favour, rather than a cooler, more objective search for what is really best for human welfare. The rest of this essay tries to better establish what is going on [...] ---Outline:(01:31) Varieties of convergence(07:00) Proxy measures and prediction(08:20) Pragmatic defeat and Poor Propagation(13:23) EA examples(18:18) Conclusion--- First published: January 24th, 2016 Source: https://forum.effectivealtruism.org/posts/omoZDu8ScNbot6kXS/beware-surprising-and-suspicious-convergence --- Narrated by TYPE III AUDIO.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Organization Updates: August 2023, published by JP Addison on August 15, 2023 on The Effective Altruism Forum. We're featuring some opportunities and job listings at the top of this post. Some have (very) pressing deadlines. You can see previous updates on the "EA Organization Updates (monthly series)" topic page, or in our repository of past newsletters. Notice that there's also an "org update" tag, where you can find more news and updates that are not part of this consolidated series. These monthly posts originated as the "Updates" section of the EA Newsletter. Organizations submit their own updates, which we edit for clarity. (If you think your organization should be getting emails about adding their updates to this series, please apply here.) Opportunities and jobs Opportunities Consider also checking opportunities listed on the EA Opportunities Board. Applications are open for a number of conferences EA Global: Boston (27-29 October) is for people who have a solid understanding of effective altruism, and who are taking significant actions on the basis of key ideas in the movement. Apply by 13 October. EAGxBerlin (8-10 September) is aimed at people in Western Europe. Tickets cost €0-80. Apply by 18 August. EAGxAustralia (22-24 September) is for people in Australia and New Zealand. Tickets are $75-150 (AUD). Apply by 8 September. EAGxPhilippines (20-22 October) is for people in South East and East Asia. Tickets are $0-100. Apply by 30 September. The Good Food Conference 2023 will be back in-person this year at the historic Fort Mason Center in San Francisco on Sept. 18-20. 
This year's program is bursting with insights and inspiration from big-picture plenaries and fascinating flash talks to in-depth technical sessions and solutions-focused workshops led by a stellar lineup of scientists, policymakers, private sector leaders, and other brilliant humans. Check out the detailed program to learn more, and register here to help build a future where alternative proteins are no longer alternative. Opportunities to take action SoGive is conducting a survey of people who (have the capacity to) give £10k+. They are trying to help the community fill the gaps in support so more major donors can give more, and give more effectively. Read more here. Job listings Consider also exploring jobs listed on "Job listing (open)." Centre for the Governance of AI Head of Operations (Mostly Oxford, £60K - £80K + benefits, apply by 21 August) Epoch ML Hardware Researcher (Remote, $70K - $120K, apply by 20 August) Family Empowerment Media Project Director (Remote with international travel, apply by 21 August) Fish Welfare Initiative Operations Lead/Associate (Remote in a time-zone compatible with IST, apply by 3 September) GiveWell Senior Researchers (Remote or Oakland, California; $193.1K - $209K) Content Editors (Remote or Oakland, California; $90.6K - $98K) IDinsight Associate and Senior Associate (Philippines) Technical Delivery Manager/Director (India or Kenya) Social Scientist/Economist (Morocco or Senegal) Open Philanthropy Operations Associate - Biosecurity & Pandemic Preparedness (Washington, DC, $114.2k) Recruiter (Remote, San Francisco/Washington, DC, $108.3k) Organization Updates 80,000 Hours This month on The 80,000 Hours Podcast, Rob interviewed: Ezra Klein on existential risk from AI and what DC could do about it Holden Karnofsky's four-part playbook for dealing with AI And Luisa interviewed: Hannah Boettcher on the mental health challenges that come with trying to have a big impact Markus Anderljung on how to regulate cutting-edge AI models 
80,000 Hours also re-released an updated version of a classic three-part article series written by Gregory Lewis, exploring how many lives a doctor saves. They also updated their article on working in US AI policy. Anima Internati...
In this episode, we discuss recordings of “Wranitzky: Orchestral Works, Vol. 6” (Naxos) by Czech Chamber Philharmonic Orchestra, Pardubice / Marek Štilec, “Dvořák: String Quartet Op 106; Coleridge-Taylor: Fantasiestücke” (Hyperion) by Takács Quartet, “Fagerlund: Terral, Strings to the Bone, Chamber Symphony” (BIS) by Sharon Bezaly, Tapiola Sinfonietta / John Storgårds, “Anyone's Quiet: Let It Rain to You” (Fresh Sound New Talent) by Noah Stoneman, “Technocats: The Music of Gregg Hill” (Cold Plunge Records) by TechnoCats, “Organ Monk Going Home” (Sunnyside) by Gregory Lewis. The Adult Music Podcast is featured in: Feedspot's 100 Best Jazz Podcasts Episode 127 Deezer Playlist “Wranitzky: Orchestral Works, Vol. 6” (Naxos) Czech Chamber Philharmonic Orchestra, Pardubice / Marek Štilec https://open.spotify.com/album/3qwz3DEu8oHgRKqPVcSKbK https://music.apple.com/us/album/wranitzky-orchestral-works-vol-6/1689803978 “Dvořák: String Quartet Op 106; Coleridge-Taylor: Fantasiestücke”(Hyperion) Takács Quartet https://open.spotify.com/album/5XR5n1yMFOeF52YwvzgxtM https://music.apple.com/us/album/dvořák-string-quartet-op-106-coleridge-taylor-fantasiestücke/1696231013 “Fagerlund: Terral, Strings to the Bone, Chamber Symphony” (BIS) Sharon Bezaly, Tapiola Sinfonietta / John Storgårds https://open.spotify.com/album/642PXnwWwRBNcDGomuB2F5 https://music.apple.com/us/album/terral-strings-to-the-bone-chamber-symphony/1679907168 “Anyone's Quiet: Let It Rain to You” (Fresh Sound New Talent) Noah Stoneman https://open.spotify.com/album/5Bzw6nguDXNOoVWtgVCLA6 https://music.apple.com/us/album/anyones-quiet-let-it-rain-to-you/1694755777 “Technocats: The Music of Gregg Hill” (Cold Plunge Records) TechnoCats https://open.spotify.com/album/1XV1sxgWffftGbV7YIzly0 https://music.apple.com/us/album/technocats-the-music-of-gregg-hill/1698437937 “Organ Monk Going Home” (Sunnyside) Gregory Lewis https://open.spotify.com/album/48lxntlzzOeOkOo49ATYvc 
https://music.apple.com/us/album/organ-monk-going-home/1695641229 Be sure to check out: "Same Difference: 2 Jazz Fans, 1 Jazz Standard" Johnny Valenzuela and Tony Habra look at several versions of the same Jazz standard each week, play snippets from each version, discuss the history of the original and the different versions.
This edition of eponymous food stories involves two noodle dishes, and both of them are classic comfort foods that you can easily find in pre-made frozen versions in most grocery stores. But both of them started out as entrées for fancy people. Research: Britannica, The Editors of Encyclopaedia. "Stroganov Family". Encyclopedia Britannica, 6 Apr. 2023, https://www.britannica.com/topic/Stroganov-family Britannica, The Editors of Encyclopaedia. "Luisa Tetrazzini". Encyclopedia Britannica, 25 Jun. 2023, https://www.britannica.com/biography/Luisa-Tetrazzini “Chicken Tetrazzini.” Daily News Republican. Oct. 30, 1909. https://www.newspapers.com/image/582035221/?terms=%22chicken%20Tetrazzini%22%20&match=1 Eremeeva, Jennifer. “The Definitive Beef Stroganoff.” The Moscow Times. Nov. 6, 2020. https://www.themoscowtimes.com/2019/02/20/the-definitive-beef-stroganov-a64566 Gattey, Charles Nelson. “Luisa Tetrazzini: the Florentine Nightingale.” Amadeus Press. 1995. Accessed online: https://archive.org/details/luisatetrazzinif0000gatt/page/144/mode/2up Lew, Mike. “Beef Stroganoff Is Named for Who Exactly?” Bon Appetit. Jan. 16, 2014. https://www.bonappetit.com/entertaining-style/trends-news/article/origin-of-beef-stroganoff Goldstein, Darra. “A Taste of Russia.” Russian Information Service. 1999. Hillibish, Jim. “Tetrazzini Leftover Will Leave Them Singing.” The State Journal-Register. Nov. 22, 2022. https://www.sj-r.com/story/news/2012/11/23/tetrazzini-leftover-will-leave-them/45812546007/ Kurlansky, Mark. “Salt: A World History.” Thorndike Press. 2002. “Luisa Tetrazzini, Diva, Dies in ” New York Times. April 29, 1940. https://timesmachine.nytimes.com/timesmachine/1940/04/29/92957232.pdf?pdf_redirect=true&ip=0 McNamee, Gregory Lewis. "beef Stroganoff". Encyclopedia Britannica, 31 Oct. 2022, https://www.britannica.com/topic/beef-Stroganoff Peters, Erica J. “San Francisco: A Food Biography.” Rowman & Littlefield. 2013. Price, Mary and Vincent. 
“A Treasury of Great Recipes.” Ampersand Press, 1965. Rattray, Diana. “Chicken Tetrazzini Casserole.” The Spruce Eats. Nov. 11, 2021. https://www.thespruceeats.com/chicken-tetrazzini-3053005 Sifton, Sam. “Chicken Tetrazzini, the Casserole Even Snobs Love.” New York Times Magazine. Sept 29, 2016. https://www.nytimes.com/2016/10/02/magazine/chicken-tetrazzini-the-casserole-even-snobs-love.html Snow, Glenna H. “Peasants of Russia Thrive on Monotonous, Though Well Balanced Diet, Says Editor.” The Akron Beacon Journal. May 14, 1934. https://www.newspapers.com/image/228861067/?terms=%22beef%20stroganoff%22%20&match=1 Syutkin, Pavel and Olga. “The History and Mystery of Beef Stroganoff.” Moscow Times. Dec. 3, 2022. https://www.themoscowtimes.com/2022/12/03/the-history-and-mystery-of-beef-stroganoff-a79582 “Tetrazzini Here, Meets With Injunction.” New York Times. Nov. 25, 1910. https://timesmachine.nytimes.com/timesmachine/1910/11/25/102052010.pdf?pdf_redirect=true&ip=0 Tetrazzini, Luisa. “My Life of Song.” Arno Press. 1977. (Reprint edition.) https://archive.org/details/mylifeofsong0000tetr/page/68/mode/2up “To San Franciscans, I Am Luisa,” Declares Mme. Tetrazzini.” The San Francisco Chronicle. March 12, 1913. https://www.newspapers.com/image/457433091/?terms=Luisa%20Tetrazzini&match=2 “Turkey Tetrazzini.” Saveur. https://www.saveur.com/article/Recipes/Turkey-Tetrazzini/ Webster, Jessica. “Chicken Tetrazzini, or how I stopped worrying and learned to love the mess.” The Ann Arbor News. May 12, 2010. https://www.annarbor.com/entertainment/food-drink/giadas-chicken-tetrazzini/ Welch, Douglas. “Squirrel Cage.” The Tribune. May 17, 1967. https://www.newspapers.com/image/321669094/?terms=Luisa%20Tetrazzini&match=1 “Who Are the Indigenous Peoples of Russia?” Cultural Survival. Feb. 20, 2014. https://www.culturalsurvival.org/news/who-are-indigenous-peoples-russia#:~:text=The%20smallest%20of%20these%20Indigenous,live%20beyond%20the%20Arctic%20Circle. 
See omnystudio.com/listener for privacy information.
Gregory Lewis and C. Michael Gibson discuss how omecamtiv mecarbil affects exercise tolerance in patients with chronic HFrEF.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: In defence of epistemic modesty [distillation], published by Luise on May 10, 2023 on The Effective Altruism Forum. This is a distillation of In defence of epistemic modesty, a 2017 essay by Gregory Lewis. I hope to make the essay's key points accessible in a quick and easy way so more people engage with them. I thank Gregory Lewis for helpful comments on an earlier version of this post. Errors are my own. Note: I sometimes use the first person (“I claim”/”I think”) in this post. This felt most natural but is not meant to imply any of the ideas or arguments are mine. Unless I clearly state otherwise, they are Gregory Lewis's. What I Cut I had to make some judgment calls on what is essential and what isn't. Among other things, I decided most math and toy models weren't essential. Moreover, I cut the details on the “self-defeating” objection, which felt quite philosophical and probably not relevant to most readers. Furthermore, it will be most useful to treat all the arguments brought up in this distillation as mere introductions, while detailed/conclusive arguments may be found in the original post and the literature. Claims I claim two things: You should practice strong epistemic modesty: On a given issue, adopt the view experts generally hold, instead of the view you personally like. EAs/rationalists in particular are too epistemically immodest. Let's first dive deeper into claim 1. Claim 1: Strong Epistemic Modesty To distinguish the view you personally like from the view strong epistemic modesty favors, call the former “view by your own lights” and the latter “view all things considered”. In detail, strong epistemic modesty says you should do the following to form your view on an issue: Determine the ‘epistemic virtue' of people who hold a view on the issue. 
By ‘epistemic virtue' I mean someone's ability to form accurate beliefs, including how much the person knows about the issue, their intelligence, how truth-seeking they are, etc. Determine what everyone's credences by their own lights are. Take an average of everyone's credences by their own lights (including yourself), weighting them by their epistemic virtue. The product is your view all things considered. Importantly, this process weighs your credences by your own lights no more heavily than those of people with similar epistemic virtue. These people are your ‘epistemic peers'. In practice, you can round this process to “use the existing consensus of experts on the issue or, if there is none, be uncertain”. Why? Intuition Pump Say your mom is convinced she's figured out the one weird trick to make money on the stock market. You are concerned about the validity of this one weird trick, because of two worries: Does she have a better chance at making money than all the other people with similar (low) amounts of knowledge on the stock market who're all also convinced they know the one weird trick? (These are her epistemic peers.) How do her odds of making money stack up against people working full-time at a hedge fund with lots of relevant background and access to heavy analysis? (These are the experts.) The point is that we are all sometimes like the mom in this example. We're overconfident, forgetting that we are no better than our epistemic peers, be the question investing, sports bets, musical taste, or politics. Everyone always thinks they are an exception and have figured [investing/sports/politics] out. It's our epistemic peers that are wrong! But from their perspective, we look just as foolish and misguided as they look to us. Not only do we treat our epistemic peers incorrectly, but also our epistemic superiors. 
The mom in this example didn't seek out the expert consensus on making money on the stock market (maybe something like “use algorithms” and “you don't stand a chance”). Instead, she may have li...
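The averaging procedure the distillation describes can be put concretely. A minimal Python sketch follows; the function name, the credences, and the virtue weights are all invented for illustration and are not from the essay:

```python
def all_things_considered(credences, virtues):
    """Return the 'all things considered' view: everyone's credence
    by their own lights, averaged with weights given by their
    epistemic virtue (ability to form accurate beliefs)."""
    total = sum(virtues)
    return sum(c * v for c, v in zip(credences, virtues)) / total

# Hypothetical numbers: my own credence (0.9) counts no more than an
# epistemic peer's (0.3), and much less than an expert's (0.4,
# weighted 5x for greater epistemic virtue).
view = all_things_considered([0.9, 0.3, 0.4], [1.0, 1.0, 5.0])
# view lands far closer to the expert's 0.4 than to my 0.9.
```

Note how the weighting makes one's own credence just another input: with equal virtue, self and peer count the same, which is the core of the modesty claim.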
In this episode of 80k After Hours, Perrin Walker reads Arden Koehler and Benjamin Hilton's problem profile on preventing catastrophic pandemics. Here's the original piece if you'd like to learn more. You might also want to check out our full profile on Reducing global catastrophic biological risks, by Gregory Lewis. Get this episode by subscribing to our more experimental podcast on the world's most pressing problems and how to solve them: type 80k After Hours into your podcasting app. Editing and narration: Perrin Walker. Audio proofing: Katy Moore
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Most small probabilities aren't pascalian, published by Gregory Lewis on August 7, 2022 on The Effective Altruism Forum. Summary: We routinely act to prevent, mitigate, or insure against risks with P = 'one-in-a-million'. Risks similarly or more probable than this should not prompt concerns about 'pascal's mugging' etc. Motivation Reckless appeals to astronomical stakes often prompt worries about pascal's mugging or similar. Sure, a 10^-20 chance of 10^40 has the same expected value as 10^20 with P = 1, but treating them as equivalent when making decisions is counter-intuitive. Thus one can (perhaps should) be wary of lines which amount to "The scale of the longterm future is so vast we can basically ignore the probability - so long as it is greater than 10^-lots - to see x-risk reduction is the greatest priority." Most folks who work on (e.g.) AI safety do not think the risks they are trying to reduce are extremely (nor astronomically) remote. Pascalian worries are unlikely to apply to attempts to reduce a baseline risk of 1/10 or 1/100. They also are unlikely to apply if the risk is a few orders of magnitude less (or a few orders of magnitude less tractable to reduce) than some suppose. Despite this, I sometimes hear remarks along the lines of "I only think this risk is 1/1000 (or 1/10 000, or even 'a bit less than 1%') so me working on this is me falling for Pascal's wager." This is mistaken: an orders-of-magnitude lower risk (or likelihood of success) makes, all else equal, something orders of magnitude less promising, but it does not mean it can be dismissed out-of-hand. Exactly where the boundary should be drawn for pascalian probabilities is up for grabs (10^-10 seems reasonably pascalian, 10^-2 definitely not). 
I suggest a very conservative threshold at '1 in a million': human activity in general (and our own in particular) is routinely addressed to reduce, mitigate, or insure against risks between 1/1000 and 1/1 000 000, and we typically consider these activities 'reasonable prudence' rather than 'getting mugged by mere possibility'. Illustrations Among many other things: Aviation and other 'safety critical' activities One thing which can go wrong when flying an airliner is an engine stops working. Besides all the engineering and maintenance to make engines reliable, airlines take many measures to mitigate this risk: Airliners have more than one engine, and are designed and operated so that they are able to fly and land at a nearby airport 'on the other engine' should one fail at any point in the flight. Pilots practice in initial and refresher simulator training how to respond to emergencies like an engine failure (apparently engine failure just after take-off is the riskiest). Pilots also make a plan before each flight what to do 'just in case' an engine fails whilst they are taking off. This risk is very remote: the rate of (jet) engine failure is something like 1 per 400 000 flight hours. So for a typical flight, maybe the risk is something like 10^-4 to 10^-5. The risk of an engine failure resulting in a fatal crash is even more remote: the most recent examples I could find happened in the 90s. Given the millions of airline flights a year, '1 in a million flights' is a comfortable upper bound. Similarly, the individual risk-reduction measures mentioned above are unlikely to be averting that many micro(/nano?) crashes. A pilot who (somehow) manages to skive off their recurrency training or skip the pre-flight briefing may still muddle through if the risk they failed to prepare for realises. I suspect most consider the diligent practice by pilots for events they are unlikely to ever see in their career admirable rather than getting suckered by Pascal's mugging. 
Aviation is the poster child of safety engineering, but it is not unique. Civil engineering di...
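The engine-failure arithmetic above can be made explicit. In the sketch below, the per-hour failure rate is the figure the post quotes; the flight length is a hypothetical assumption, not a number from the post:

```python
# Illustrative figure from the post: roughly 1 jet engine failure per
# 400,000 flight hours. Flight length is an assumed medium-haul trip.
failure_rate_per_hour = 1 / 400_000
flight_hours = 4

# Per-flight probability via the complement of "no failure in any
# hour"; for rates this small it is approximately rate * hours.
p_failure = 1 - (1 - failure_rate_per_hour) ** flight_hours
# This lands within the 10^-4 to 10^-5 band the post cites, and many
# orders of magnitude above a plausibly 'pascalian' threshold (10^-10).
```

The point of the calculation is just scale: a risk around 10^-5 sits comfortably inside the range of ordinary prudence, nowhere near the regime where Pascalian worries bite.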
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Terminate deliberation based on resilience, not certainty, published by Gregory Lewis on June 5, 2022 on The Effective Altruism Forum. BLUF: “We should ponder until it no longer feels right to ponder, and then to choose one of the acts it feels most right to choose. If pondering comes at a cost, we should ponder only if it seems we will be able to separate better options from worse options quickly enough to warrant the pondering” - Trammell Introduction Many choices are highly uncertain and highly consequential, and it is difficult to develop a highly-confident impression of which option is best. In EA-land, the archetypal example is career choice, but similar dilemmas are common in corporate decision-making (e.g. grant-making, organisational strategy) and life generally (e.g. “Should I marry Alice?” “Should I have children with Bob?”). Thinking carefully about these choices is wise, and my impression is people tend to err in the direction of too little contemplation - stumbling over important thresholds that set the stage for the rest of their lives. One of the (perhaps the main) key messages of effective altruism is our altruistic efforts err in a similar direction. Pace more cynical explanations, I think (e.g.) the typical charitable donor (or typical person) is “using reason to try and do good”. Yet they use reason too little and decide too soon whilst the ‘returns to (further) reason' remain high, and so typically fall far short of what they could have accomplished. One can still have too much of a good thing: ‘reasonableness', ‘prudence', or even ‘caution' can be wise, but ‘indecisiveness' less so. I suspect many can recall occasions of “analysis paralysis”, or even when they suspect prolonged fretting worsened the quality of the decision they finally made. 
I think folks in EA-land tend to err in this opposite direction, and I find myself giving similar sorts of counsel on these themes to those who (perhaps unwisely) seek my advice. So I write. Certainty, resilience, and value of (accessible) information It is widely appreciated that we can be more certain (or confident) of some things than others: I can be essentially certain I have two eyes, whilst guessing with little confidence whether it will rain tomorrow. We can often (and should even oftener) use numbers and probability to express different levels of certainty - e.g. P(I have two eyes) ~ 1 (- 10^-(lots)); P(rain tomorrow) ~ 0.5. Less widely appreciated is the idea that our beliefs can vary not only in their certainty but how much we expect this certainty to change. This ‘second-order certainty' sometimes goes under the heading ‘credal (¬)fragility' or credal resilience. These can come apart, especially in cases of uncertainty. I might think the chances of ‘The next coin I flip will land heads' and ‘Rain tomorrow' are 50/50, but I'd be much more surprised if my confidence in the former changed to 90% than the latter: coins tend to be fair, whilst real weather forecasts I consult often surprise my own guesswork, and prove much more reliable. For coins, I have resilient uncertainty; for the weather, non-resilient uncertainty. Credal resilience is - roughly - a forecast of the range or spread of our future credences. So when applied to ourselves, it is partly a measure of what resources we will access to improve our guesswork. My great-great-great-grandfather, sans access to good meteorology, probably had much more resilient uncertainty about tomorrow's weather. 
However, the ‘resource' need not be more information - sometimes it is simply more ‘thinking time': a ‘snap judgement' I make on a complex matter may also be one I expect to shift markedly if I take time to think it through more carefully, even if I only spend that time trying to better weigh information I already possess. Thus credal resilience is one part of va...
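The coin/weather contrast above can be made concrete with a toy simulation (my own illustration, not from the post; the `update_strength` parameter and helper names are hypothetical): both credences start at 0.5, so today they look identical, but the expected spread of future credences - the credal resilience - differs sharply.

```python
import random

random.seed(0)

def future_credences(update_strength, n=10_000):
    """Simulate tomorrow's credence, starting from 0.5 today.

    update_strength: how far new evidence can move the credence
    (near 0 = resilient uncertainty, near 1 = non-resilient).
    """
    out = []
    for _ in range(n):
        shift = random.uniform(-0.5, 0.5) * update_strength
        out.append(min(1.0, max(0.0, 0.5 + shift)))
    return out

def spread(credences):
    """Mean absolute deviation: a rough measure of credal (non-)resilience."""
    mean = sum(credences) / len(credences)
    return sum(abs(c - mean) for c in credences) / len(credences)

coin = future_credences(update_strength=0.05)    # resilient uncertainty
weather = future_credences(update_strength=0.9)  # non-resilient uncertainty

# Both are ~50/50 today: the mean future credence stays near 0.5 in each case.
print(sum(coin) / len(coin), sum(weather) / len(weather))
# But the expected spread of future credences is far larger for the weather.
print(spread(coin) < spread(weather))
```

The point of the sketch: first-order credence (0.5 in both cases) cannot distinguish the two situations; only the anticipated distribution of future credences does.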
(Note: time stamps are without ads & may be off a little.) At this CrimeCon live taping, Beth and Wendy talk about Queho, a Native American man from the Las Vegas area of Nevada. Queho has been credited with the deaths of 23 people in the early 20th century. He was declared Nevada's “Public Enemy No. 1” and the state's first mass murderer. But was he really? First, we dive into the setting (05:55), the killer's early life (16:37) and the timeline (20:16). Then, we get into "Where are they now?" (30:36), followed by our takeaways and what we think made the perp snap (35:16). We close out the show with some How Not to Get Murdered tips and listener questions (40:21). This episode was researched & scripted by Wendy & Beth Williams. Thanks for listening! This is a weekly podcast and new episodes drop every Thursday, so until next time... look alive guys, it's crazy out there!
Sponsors
Better Help - Get 10% off your first month! Betterhelp.com/fruit
Best Fiends - Download Best Fiends free on the Apple App Store or Google Play! https://apps.apple.com/us/app/best-fiends-puzzle-adventure/id868013618 https://play.google.com/store/apps/details?id=com.Seriously.BestFiends&hl=en_US&gl=US
Where to find us:
Our Facebook page is Fruitloopspod and our discussion group is Fruitloopspod Discussion on Facebook: https://www.facebook.com/groups/fruitloopspod/
We are also on Twitter and Instagram @fruitloopspod
Please send any questions or comments to fruitloopspod@gmail.com or leave us a voicemail at 602-935-6294. We just might read your email or play your voicemail on the show!
Want to support the show?
You can support the show by rating and reviewing Fruitloops on iTunes, or anywhere else that you get your podcasts from. We would love it if you gave us 5 stars!
You can make a donation on the Cash App: https://cash.me/$fruitloopspod
Or become a monthly Patron through our Podbean Patron page: https://patron.podbean.com/fruitloopspod
Footnotes
Historynet Staff. (06/12/2006). Queho: An Indian Outcast. Historynet.
Retrieved 03/31/2022 from https://www.historynet.com/queho-an-indian-outcast/
Weiser, Kathy. (November 2019). Queho – Renegade Indian Outlaw or Scapegoat? Legends of America. Retrieved 03/31/2022 from https://www.legendsofamerica.com/nv-queho/
Las Vegas Review Journal. (02/07/1999). Queho. Retrieved 03/31/2022 from https://www.reviewjournal.com/news/queho/
Robinson, T Jay DMD. (01/14/2021). Why Does My Child Have Two Rows of Teeth? Junior Smiles. Retrieved 04/22/2022 from https://kidsdentalsmile.com/child-two-rows-teeth/
Wikipedia contributors. (03/13/2022). Queho. Wikipedia, The Free Encyclopedia. Retrieved 04/26/2022 from https://en.wikipedia.org/w/index.php?title=Queho&oldid=1076948337
Daily Independent. (12/05/1910). Government After Renegade Piute. Retrieved 04/26/2022 from https://www.newspapers.com/image/620555192/
Nevada State Journal. (01/31/1919). Posses Find Mutilated Bodies Two Mining Men. Retrieved 04/26/2022 from https://www.newspapers.com/image/75041387/
Forensic Genealogy. (09/19/2010). Answer to Quiz #273. Retrieved 04/27/2022 from http://www.forensicgenealogy.info/contest_273_results.html
The Sacramento Bee. (03/09/1911). Patterson Appears at Las Vegas and Denies He Is Dead. Retrieved 04/27/2022 from https://www.newspapers.com/image/616679162
Feller, Walter. (08/25/2017). The Renegade. Desert Gazette. Retrieved 04/27/2022 from https://desertgazette.com/blog/?p=2572
History
Austin, Shelbi. (05/13/2022). 10 Things to Know About Nevada. US News & World Report. Retrieved 04/09/2022 from https://www.usnews.com/news/best-states/articles/2019-05-13/10-things-to-know-about-nevada
National Park Service. (n.d.). The Great Basin. Retrieved 04/11/2022 from https://www.nps.gov/grba/planyourvisit/the-great-basin.htm
History.com Editors. (11/09/2009). Nevada. Retrieved 04/11/2022 from https://www.history.com/topics/us-states/nevada
Zorn, R. J.; McNamee, Gregory Lewis. (06/24/2021). Nevada. Encyclopedia Britannica.
Retrieved 04/11/2022 from https://www.britannica.com/place/Nevada-state
Bell, Josh. (09/13/2019). A Brief History of Nevada's Indigenous Paiute Tribe. Culture Trip. Retrieved 04/24/2022 from https://theculturetrip.com/north-america/usa/nevada/articles/a-brief-history-of-nevadas-indigenous-paiute-tribe/
Snow Mountain Pow Wow. https://www.lvpaiutetribe.com/pow-wow
Manhattan Gold & Silver. (10/27/2021). What Was The Nevada Silver Rush, And Why Was It Special? Retrieved 04/24/2022 from https://www.mgsrefining.com/blog/2021/10/27/what-was-the-nevada-silver-rush-and-why-was-it-special/
ushistory.org. (2022). The Mining Boom. Retrieved 04/24/2022 from https://www.ushistory.org/us/41a.asp
Las Vegas Paiute Tribe. (n.d.). History and Culture. Retrieved 04/26/2022 from https://www.lvpaiutetribe.com/history
Music
“Abyss” by Alasen: https://soundcloud.com/alasen | https://twitter.com/icemantrap | https://instagram.com/icemanbass/ | https://soundcloud.com/therealfrozenguy - Licensed under Creative Commons: By Attribution 3.0 License
“Bleeping Demo”, “Master Disorder” & “Furious Freak” by Kevin MacLeod. Link: https://incompetech.filmmusic.io/song/3791-furious-freak
Licenses: http://creativecommons.org/licenses/by/4.0/ and https://creativecommons.org/licenses/by/3.0/
Connect with us on:
Twitter @FruitLoopsPod
Instagram https://www.instagram.com/fruitloopspod
Facebook https://www.facebook.com/Fruitloopspod and https://www.facebook.com/groups/fruitloopspod
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Rational predictions often update predictably, published by Gregory Lewis on May 15, 2022 on The Effective Altruism Forum. BLUF: One common supposition is that a rational forecast, because it ‘prices in' anticipated evidence, should follow a (symmetric) random walk. Thus one should not expect a predictable trend in rational forecasts (e.g. 10% day 1, 9% day 2, 8% day 3, etc.), nor commonly see this pattern when reviewing good forecasting platforms. Yet one does, and this is because the supposition is wrong: ‘pricing in' and reflective equilibrium constraints only entail that the present credence is the expected value of a future credence. Skew in the anticipated distribution can give rise to the commonly observed “steady pattern of updates in one direction”. Such skew is very common: it is typical in 'will event happen by date' questions, and one's current belief often implies skew in the expected distribution of future credences. Thus predictable directions in updating are unreliable indicators of irrationality. Introduction Forecasting is common (although it should be commoner) and forecasts (whether our own or others') change with further reflection or new evidence. There are standard metrics to assess how good someone's forecasting is, like accuracy and calibration. Another putative metric is something like ‘crowd anticipation': if I predict P(X) = 0.8 when the consensus is P(X) = 0.6, but over time this consensus moves to P(X) = 0.8, regardless of how the question resolves, I might take this to be evidence I was ‘ahead of the curve' in assessing the probability that should have been believed given the available evidence. This leads to scepticism about the rationality of predictors who show a pattern of ‘steadily moving in one direction' for a given question: e.g. P(X) = 0.6, then 0.63, 0.69, 0.72.
and then the question resolves affirmatively. Surely a more rational predictor, observing this pattern, would try to make forecasts an observer couldn't reliably guess to be higher or lower in the future, so that forecast values follow something like a (symmetric) random walk. Yet forecast aggregates (and individual forecasters) commonly show these directional patterns when tracking a given question. Scott Alexander noted curiosity about this behaviour; Eliezer Yudkowsky has confidently asserted it is an indicator of sub-par Bayesian updating. Yet the forecasters who regularly (and predictably) notch questions up or down as time passes are being rational. Alexander's curiosity was satisfied by various comments on his piece, but I write here as this understanding may be tacit knowledge to regular forecasters, yet valuable to explain to a wider audience. Reflective equilibrium, expected value, and skew One of the typical arguments against steadily directional patterns is that they suggest a violation of reflective equilibrium. In the same way that if I say P(X) = 0.8, I should not expect to believe P(X) = 1 (i.e. it happened) more than 80% of the time, I shouldn't expect to believe P(X) [later] > P(X) [now] is more likely than not. If I did, surely I should start ‘pricing that in' and updating my current forecast upwards. Market analogies are commonly appealed to in making this point: if we know the stock price of a company is more likely than not to go up, why haven't we bid the price up already? This reasoning is mistaken. Reflective equilibrium only demands that one's current forecast is the expected value of one's future credence. So although the mean of P(X) [later] should equal P(X) [now], there are no other constraints on the distribution. If it is skewed, then you can have a predictable update direction without irrationality: e.g.
from P(X) = 0.8, I may think in a week I will - most of the time - have notched this forecast slightly upwards, but less of the time notching it further downwa...
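A minimal worked example of the skew point (my own sketch; the daily hazard rate is a made-up number): for a "will the event happen by day 10?" question, today's credence is exactly the expected value of tomorrow's credence, yet on most days the forecast predictably notches downward.

```python
DAYS = 10
HAZARD = 0.03  # hypothetical daily probability that the event occurs

def credence(days_left):
    """P(the event happens on at least one of the remaining days)."""
    return 1 - (1 - HAZARD) ** days_left

today = credence(DAYS)

# Tomorrow: with probability HAZARD the event has happened (credence -> 1);
# otherwise the forecast notches down to credence(DAYS - 1).
tomorrow_if_no_event = credence(DAYS - 1)
expected_tomorrow = HAZARD * 1.0 + (1 - HAZARD) * tomorrow_if_no_event

print(abs(today - expected_tomorrow) < 1e-9)  # True: reflective equilibrium holds
print(tomorrow_if_no_event < today)           # True: the usual update is downward
print(1 - HAZARD)                             # how often the downward update occurs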
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Have You Ever Doubted Whether You're Good Enough To Pursue Your Career?, published by lynettebye on May 11, 2022 on The Effective Altruism Forum. This post is cross-posted on my blog. People often look to others whom they deem particularly productive and successful and come up with (often fairly ungrounded) guesses for how these people accomplish so much. Instead of guessing, I want to give a peek behind the curtain. I interviewed eleven people I thought were particularly successful, relatable, or productive. We discussed topics ranging from productivity to career exploration to self-care. The Peak behind the Curtain interview series is meant to help dispel common myths and provide a variety of takes on success and productivity from real people. To that end, I've grouped responses on common themes to showcase a diversity of opinions on these topics. This first post covers “Have you ever doubted whether you're good enough to pursue your career?” and other personal struggles. My guests include: Abigail Olvera was a U.S. diplomat, most recently working at the China Desk. Abi was formerly stationed at the US Embassies in Egypt and Senegal and holds a Master's of Global Affairs from Yale University. Full interview. Ajeya Cotra is a Senior Research Analyst at Open Philanthropy, where she worked on a framework for estimating when transformative AI may be developed, as well as various cause prioritization and worldview diversification projects. Ajeya received a B.S. in Electrical Engineering and Computer Science from UC Berkeley. Full interview. Ben Garfinkel was a research fellow at the Future of Humanity Institute at the time of the interview. He is now the Acting Director of the Centre for the Governance of AI.
Ben earned a degree in Physics and in Mathematics and Philosophy from Yale University, before deciding to study for a DPhil in International Relations at the University of Oxford. Full interview. Daniel Ziegler researched AI safety at OpenAI. He has since left to do AI safety research at Redwood Research. Full interview. Eva Vivalt did an Economics Ph.D. and Mathematics M.A. at the University of California, Berkeley after a master's in Development Studies at Oxford University. She then worked at the World Bank for two years and founded AidGrade before finding her way back to academia. Full interview. Gregory Lewis is a DPhil Scholar at the Future of Humanity Institute, where he investigates long-run impacts and potential catastrophic risk from advancing biotechnology. Previously, he was an academic clinical fellow in public health medicine and before that a junior doctor. He holds a master's in public health and a medical degree, both from Cambridge University. Full interview. Helen Toner is Director of Strategy at Georgetown's Center for Security and Emerging Technology (CSET). She previously worked as a Senior Research Analyst at Open Philanthropy. She is a member of the board of directors for OpenAI. Helen holds an MA in Security Studies from Georgetown. Full interview not available. Jade Leung is Governance Lead at OpenAI. She was the inaugural Head of Research & Partnerships with the Centre for the Governance of Artificial Intelligence (GovAI), housed at Oxford's Future of Humanity Institute. She completed her DPhil in AI Governance at the University of Oxford and is a Rhodes scholar. Full interview. Julia Wise serves as a contact person for the effective altruism community and helps local and online groups support their members. She serves on the board of GiveWell and writes about effective altruism at Giving Gladly. She was president of Giving What We Can from 2017-2020. 
Before joining CEA, Julia was a social worker, and studied sociology at Bryn Mawr College. Full interview. Michelle Hutchinson holds a PhD in Philosophy from the University of Oxford, where her thesis ...
ACCEL Lite: Featured ACCEL Interviews on Exciting CV Research
Heart failure patients experience exercise intolerance and functional limitation every day. While the targeting of functional capacity is an important unmet need, this study found that the change in peak oxygen uptake did not differ between patients taking omecamtiv mecarbil and placebo. How do current guideline-directed medical therapies for heart failure impact exercise capacity? How does omecamtiv mecarbil differ from other heart failure pharmacotherapies? In this interview, Gregory Lewis, MD, and Jeroen J. Bax, PhD, MD, FACC, with Ioannis Mastoris, MD, MPHcand, discuss the Late Breaker: METEORIC-HF: The Effect Of Omecamtiv Mecarbil On Exercise Tolerance In Patients With Chronic HFrEF.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Samotsvety Nuclear Risk Forecasts — March 2022, published by NunoSempere on March 10, 2022 on The Effective Altruism Forum. Thanks to Misha Yagudin, Eli Lifland, Jonathan Mann, Juan Cambeiro, Gregory Lewis, @belikewater, and Daniel Filan for forecasts. Thanks to Jacob Hilton for writing up an earlier analysis from which we drew heavily. Thanks to Clay Graubard for sanity checking and to Daniel Filan for independent analysis. This document was written in collaboration with Eli and Misha, and we thank those who commented on an earlier version. Overview In light of the war in Ukraine and fears of nuclear escalation, we turned to forecasting to assess whether individuals and organizations should leave major cities. We aggregated the forecasts of 8 excellent forecasters for the question “What is the risk of death in the next month due to a nuclear explosion in London?” Our aggregate answer is 24 micromorts (7 to 61) when excluding the most extreme forecasts on either side. A micromort is defined as a 1-in-a-million chance of death. Chiefly, we think the baseline risk is low, and that escalation to targeting civilian populations is even more unlikely. For San Francisco and most other major cities, we would forecast a 1.5-2x lower probability (12-16 micromorts). We focused on London as it seems to be at high risk and is a hub for the effective altruism community, one target audience for this forecast. Given an estimated 50 years of life left, this corresponds to ~10 hours lost, or ~6% of productive time lost per month given a 40-hour work week. The forecaster range without excluding extremes was
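The micromort arithmetic in the overview can be checked directly (a sketch assuming the round figures quoted there: 24 micromorts, 50 years of remaining life, and a 40-hour work week):

```python
MICROMORT = 1e-6  # a one-in-a-million chance of death

risk = 24 * MICROMORT        # aggregate forecast for London, next month
years_left = 50              # estimated remaining life
hours_per_year = 365 * 24

expected_hours_lost = risk * years_left * hours_per_year
print(expected_hours_lost)   # ≈ 10.5 hours

work_hours_per_month = 40 * 52 / 12  # 40-hour weeks
print(expected_hours_lost / work_hours_per_month)  # ≈ 0.061, i.e. ~6%
```

This reproduces both the "~10 hours lost" and the "~6% of productive time per month" figures from the summary.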
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Finding equilibrium in a difficult time, published by Julia_Wise on The Effective Altruism Forum. To start: I don't want to say that self-isolation is that bad in the scheme of things. People have lost their lives; they've lost loved ones. Healthcare workers are working hard, at their own risk, to protect us all. Some other workers don't have a choice about continuing to work in person. And for some immunocompromised people and their families, self-isolation is the reality much or all of the time. But I'm writing for those of us who aren't physically ill, are doing some amount of self-isolation or social distancing because of the pandemic, and are not finding it easy. Most of this isn't specific to EAs, but I hope it's useful. We are all having a hard time with this I assume I'm not the only person who finished last week and realized I'd gotten very little work done. We're all anxious about the situation in different ways. This is a hard, weird time. I don't expect to have normal work weeks for a while, and you probably shouldn't expect that either (especially if you're newly working from home or if you have children who are suddenly out of school). And if you're affected by job loss, of course things are even more upside-down. Focus on the basics: Sleep. Eat nourishing food. Get some exercise and sunshine. Connect with other people. These things are literally a public health measure — you're protecting your immune system. On information: If you're like me, you've found yourself reading more about this topic than is useful for any practical purpose. Think about diminishing marginal returns: what's the amount and kind of information about this that will benefit you? And when does it start to produce very little value?
Here's the advice Gregory Lewis (a medical doctor and public health specialist who works on biorisk at the Future of Humanity Institute) gave to his colleagues: I'd recommend some information hygiene. The typical person doesn't need ‘up to the minute' information on what is going on worldwide, and generally it takes time for instant reports to resolve into a clear picture. Further, typical media reporting will tend to be biased in the very alarming direction (e.g., the typical ‘live feed': “New case in A!” “New Case in B!” “Event C cancelled due to coronavirus fear!”). Social media tends not to be much better regarding bias, and worse with regard to reliability. In other words, especially for those worried about this, staying glued to the screen can get a very high yield of anxiety for a very poor yield of useful, action-relevant information. Here are some good sources of information (which form the bulk of my information diet):
Generally: WHO; Public Health Matters; the Johns Hopkins Center for Health Security newsletter.
For the data: the JH mapping dashboard; Worldometer (slightly easier to divvy out some time courses).
Typically good commentary/analysis/explanation: Tom Inglesby's Twitter (both for itself, and for links to CHS's other work); John Campbell's YouTube; Trevor Bedford's Twitter for virology.
On working remotely: When the Great Plague of London sent Isaac Newton and other Cambridge students home for a year in 1665, he did some of his best work, including the famous falling-apple realization. Maybe once you settle in, you'll have a productive time in a different environment than usual. If you're used to working from a desk and switch to working from a couch or bed, you're risking hurting your body. (After a two-week stretch of writing from bed a lot, my husband had serious wrist pain for weeks.) Please set up a good workspace where you can use your computer without putting your neck, back, and wrists in awkward positions.
More: Wirecutter on equipment for working from home (though you can make an ergonomic setup for much less - here's mine.) Making profession...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Reality is often underpowered, published by Gregory_Lewis on The Effective Altruism Forum. Introduction When I worked as a doctor, we had a lecture by a paediatric haematologist on a condition called Acute Lymphoblastic Leukaemia. I remember being impressed that very large proportions of patients were being offered trials randomising them between different treatment regimens, currently in clinical equipoise, to establish which had the edge. At the time, one of the areas of interest was, given the disease tended to have a good prognosis, whether one could reduce treatment intensity to lessen the long-term side-effects of the treatment whilst not adversely affecting survival. On a later rotation I worked in adult medicine, and one of the patients admitted to my team had an extremely rare cancer,[1] with a (recognised) incidence of a handful of cases worldwide per year. It happened that the world authority on this condition worked as a professor of medicine in London, and she came down to see them. She explained to me that treatment for this disease was almost entirely based on first principles, informed by a smattering of case reports. The disease unfortunately had a bleak prognosis, although she was uncertain whether this was because it was an aggressive cancer to which current medical science has no answer, or whether there was an effective treatment out there if only it could be found. I aver that many problems EA concerns itself with are closer to the second story than the first. That in many cases, sufficient data is not only absent in practice but impossible to obtain in principle. Reality is often underpowered for us to wring the answers from it we desire.
Big units of analysis, small samples The main driver of this problem for ‘EA topics' is that the outcomes of interest have units of analysis for which the whole population (let alone any sample from it) is small-n: e.g. outcomes at the level of a whole company, or a whole state, or whole populations. For these big-unit-of-analysis/small-sample problems, RCTs face formidable in-principle challenges: Even if by magic you could get (e.g.) all countries on earth to agree to randomly allocate themselves to policy X or Y, this is merely a sample size of ~200. If you're looking at companies relevant to cage-free campaigns, or administrative regions within a given state, this can easily fall another order of magnitude. These units of analysis tend to be highly heterogeneous, almost certainly in ways that affect the outcome of interest. Although the key ‘selling point' of the RCT is that it implicitly controls for all confounders (even ones you don't know about), this statistical control is a (convex) function of sample size, and isn't hugely impressive at ~100 per arm: it is well within the realms of possibility for the randomisation to happen to give arms with unbalanced allocation of any given confounding factor. ‘Roughly' (in expectation) balanced intervention arms are unlikely to be good enough in cases where the intervention is expected to have much less effect on the outcome than other factors (e.g. wealth, education, size, whatever), thus an effect size that favours one arm or the other can be alternatively attributed to one of these. Supplementing this raw randomisation by explicitly controlling for confounders you suspect (cf. block randomisation, propensity matching, etc.) has limited value when one doesn't know all the factors which plausibly ‘swamp' the likely intervention effect (i.e. you don't have a good predictive model for the outcome but-for the intervention tested). In any case, these methods tend to trade off against the already scarce resource of sample size.
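As a toy illustration of the ~100-per-arm point (my own sketch, not from the post; it assumes a single binary confounder present in half the population): even perfect randomisation routinely leaves a sizeable confounder imbalance between the arms.

```python
import random

random.seed(0)

N_PER_ARM = 100       # e.g. countries allocated to policy X or Y
P_CONFOUNDER = 0.5    # half the units carry some outcome-relevant trait
TRIALS = 10_000

big_imbalances = 0
for _ in range(TRIALS):
    # Count how many units in each arm carry the confounding trait.
    arm_x = sum(random.random() < P_CONFOUNDER for _ in range(N_PER_ARM))
    arm_y = sum(random.random() < P_CONFOUNDER for _ in range(N_PER_ARM))
    if abs(arm_x - arm_y) >= 10:  # a 10+ percentage-point gap between arms
        big_imbalances += 1

imbalance_rate = big_imbalances / TRIALS
print(imbalance_rate)  # share of randomisations with a sizeable gap
```

A normal approximation puts this share at roughly 0.18, so nearly one randomisation in five leaves the arms differing by ten percentage points or more on this single confounder; with many confounders, substantial imbalance on at least one is near-certain.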
These ‘small sample' problems aren't peculiar to RCTs, but endemic to all other empirical approaches. The wealth of econometric and quasi-experimental methods (e.g. IVs, ...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Use resilience, instead of imprecision, to communicate uncertainty, published by Gregory_Lewis on the AI Alignment Forum. BLUF: Suppose you want to estimate some important X (e.g. risk of great power conflict this century, total compute in 2050). If your best guess for X is 0.37, but you're very uncertain, you still shouldn't replace it with an imprecise approximation (e.g. "roughly 0.4", "fairly unlikely"), as this removes information. It is better to offer your precise estimate, alongside some estimate of its resilience, either subjectively ("0.37, but if I thought about it for an hour I'd expect to go up or down by a factor of 2"), or objectively ("0.37, but I think the standard error of my guess is ~0.1"). 'False precision' Imprecision often has a laudable motivation - to avoid misleading your audience into relying on your figures more than they should. If 1 in 7 of my patients recover with a new treatment, I shouldn't just report this proportion, without elaboration, to 5 significant figures (14.286%). I think a similar rationale is often applied to subjective estimates (forecasting is most salient in my mind). If I say something like "I think there's a 12% chance of the UN declaring a famine in South Sudan this year", this could imply my guess is accurate to the nearest percent. If I made this guess off the top of my head, I do not want to suggest such a strong warranty - and others might accuse me of immodest overconfidence ("Sure, Nostradamus - 12% exactly"). Rounding off to a number ("10%"), or just a verbal statement ("pretty unlikely"), seems both more reasonable and defensible, as this makes it clearer I'm guessing. In praise of uncertain precision One downside of this is that natural language has a limited repertoire to communicate degrees of uncertainty.
Sometimes 'round numbers' are not meant as approximations: I might mean "10%" to be exactly 10% rather than roughly 10%. Verbal riders (e.g. roughly X, around X, X or so, etc.) are ambiguous: does roughly 1000 mean one is uncertain about the last three digits, or the first, or about how many digits there are in total? Qualitative statements are similar: people vary widely in their interpretation of words like 'unlikely', 'almost certain', and so on. The greatest downside, though, is lost precision: you lose half the information if you round percents to per-tenths. If, as is often the case in EA-land, one is constructing some estimate by 'multiplying through' various subjective judgements, there could also be significant 'error carried forward' (cf. premature rounding). If I'm assessing the value of famine prevention efforts in South Sudan, rounding status quo risk to 10% versus 12% infects downstream work with a 1/6th directional error. There are two natural replies one can make. Both are mistaken. High precision is exactly worthless First, one can deny the more precise estimate is any more accurate than the less precise one. Although maybe superforecasters could expect 'rounding to the nearest 10%' to harm their accuracy, others thinking the same are just kidding themselves, so nothing is lost. One may also have in mind some of Tetlock's remarks about how 'rounding off' mediocre forecasters doesn't harm their scores, as opposed to the best. I don't think this is right. Combining the two relevant papers (1, 2), you see that everyone, even mediocre forecasters, has significantly worse Brier scores if you round them into seven bins. Non-superforecasters do not see a significant loss if rounded to the nearest 0.1. Superforecasters do see a significant loss at 0.1, but not if you round more tightly to 0.05. Type 2 error (i.e.
rounding in fact leads to worse accuracy, but we do not detect it statistically), rather than the returns to precision falling to zero, seems a much better explanation. In principle: If a measure ...
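The claim that rounding a well-calibrated forecaster's estimates worsens their Brier score can be illustrated with a simulation (my own sketch; the bin widths below are stand-ins for the granularities discussed in the papers):

```python
import random

random.seed(0)

def brier(forecasts, outcomes):
    """Mean squared error between probabilistic forecasts and 0/1 outcomes."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Simulate a perfectly calibrated forecaster: each event's true probability
# is exactly the forecast given.
n = 200_000
forecasts = [random.random() for _ in range(n)]
outcomes = [1 if random.random() < f else 0 for f in forecasts]

rounded = [round(f, 1) for f in forecasts]      # nearest 10%
coarse = [round(f * 7) / 7 for f in forecasts]  # multiples of 1/7 (cruder bins)

print(brier(forecasts, outcomes))  # ≈ 0.167: exact forecasts score best
print(brier(rounded, outcomes))    # slightly worse
print(brier(coarse, outcomes))     # worse still
```

For a calibrated forecaster with uniformly spread forecasts, rounding into bins of width w adds roughly w²/12 to the Brier score - a real but small penalty, small enough that detecting it statistically requires many forecasts, consistent with the Type 2 error explanation.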
Thoughts in Between: exploring how technology collides with politics, culture and society
Angus Mercer, Sophie Dannreuther and Gregory Lewis are three of the people behind “Future Proof”, a report released last week by the Centre for Long-Term Resilience on how the UK and the world can become more resilient to extreme risks in the decades ahead. In the wake of the pandemic, thinking about how we mitigate catastrophic risk seems both urgent and important. In this conversation, we discuss the most important steps governments can take, with a particular focus on biosecurity. I hope you find it both sobering and optimistic!
-----------------
Thanks to Cofruition for consulting on and producing the show. You can learn more about Entrepreneur First at www.joinef.com and subscribe to my weekly newsletter at tib.matthewclifford.com
Saturday 9/19/2015, 4:30pm Mountain time. Pozitively Dee HIV/AIDS Discussion: we will discuss HIV and mental health. I will have on Colette C'ann Perkins, a Therapist/Health Educator, discussing why mental health plays a big part in living with HIV and what one needs to do to take care of their mental health. Call in and join the discussion at 347-855-8118 or listen online at www.blogusa.com — with Ieshia Advocate'Hiv Scott, Gregory Lewis, Been There
ANALYST PANEL Moderator: Mr. Robert Bugbee, President - ENETI Inc. (NETI) & Scorpio Tankers Inc. (STNG) Panelists: Mr. Gregory Lewis, Head of Maritime Research - BTIG Mr. Liam Burke, Managing Director - B. Riley Securities Mr. Turner Holm, Head of Research - Clarksons Securities AS Mr. Chris Robertson, Vice President - Deutsche Bank Mr. Jorgen Lian, Head of Shipping Equity Research - DNB Markets Mr. Omar Nokta, Lead Shipping Researcher - Jefferies Mr. Tate Sullivan, Managing Director, Senior Industrials Analyst, Maritime Research - Maxim Group Capital Link's 14th Annual New York Maritime Forum Wednesday, September 21, 2022 Metropolitan Club in New York City For more information on the program please visit here: https://forums.capitallink.com/shipping/2022NYmaritime/