The Nonlinear Library


The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other blogs.

The Nonlinear Fund


    • Latest episode: Nov 15, 2022
    • New episodes: daily
    • Average duration: 12m
    • Episodes: 3,059



    Latest episodes from The Nonlinear Library

    EA - Want advice on management/organization-building? by Ben Kuhn

    Nov 15, 2022 · 2:19


    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Want advice on management/organization-building?, published by Ben Kuhn on November 15, 2022 on The Effective Altruism Forum. Someone observed to me recently that there are a lot of new EA organizations whose founders don't have much experience building teams and could benefit from advice from someone further along in the scaling process. Is this you, or someone you know? If so, I'd be interested in talking about your management/org-building challenges :) You can reach out through the forum's messaging feature or email me (ben dot s dot kuhn at the most common email address suffix). About me / credentials: I'm the CTO of Wave, a startup building financial infrastructure for unbanked people in sub-Saharan Africa. I joined as an engineer in the very early days (employee #6), became CTO in 2019, and subsequently grew the engineering team from ~2 to 70+ engineers while the company overall scaled to ~2k people. Along the way I had to address a large number of different team-scaling problems, both within engineering and across the rest of Wave. I also write a blog with some advice posts that people have found useful. Example areas I might have useful input on: Hiring: clarifying roles, writing job descriptions, developing interview loops, executing hiring processes, headcount planning... People management: coaching, feedback, handling unhappy or underperforming people, designing processes for things like performance reviews Organizational structure: grouping people into teams, figuring out good boundaries between teams, adding management layers About you: a leader at an organization that's experiencing (or about to experience) a bunch of growth, and could use advice on how to navigate the scaling problems that come with that. Structure: pretty uncertain since I've never done this before, but I'm thinking some sort of biweekly or weekly checkin (after an initial convo to determine fit—I'll be doing this on a volunteer basis with a smallish chunk of time, which means I may need to prioritize folks based on where I feel the most useful). Disclaimer: this is an experiment—I've never done this before, and giving good advice is hard, so I can't guarantee that I'll be useful :) Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

    EA - The NY Times Interviewed SBF on Sunday by Lauren Maria

    Nov 15, 2022 · 1:41


    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The NY Times Interviewed SBF on Sunday, published by Lauren Maria on November 14, 2022 on The Effective Altruism Forum. The NY Times did an interview with SBF yesterday that "stretched past midnight". Here are some quotes from the article: “Had I been a bit more concentrated on what I was doing, I would have been able to be more thorough,” he said. “That would have allowed me to catch what was going on on the risk side.” Mr. Bankman-Fried, who is based in the Bahamas, declined to comment on his current location, citing safety concerns. Lawyers for FTX and Mr. Bankman-Fried did not respond to requests for comment. "Meanwhile, at a meeting with Alameda employees on Wednesday, Ms. Ellison explained what had caused the collapse, according to a person familiar with the matter. Her voice shaking, she apologized, saying she had let the group down. Over recent months, she said, Alameda had taken out loans and used the money to make venture capital investments, among other expenditures. Around the time the crypto market crashed this spring, Ms. Ellison explained, lenders moved to recall those loans, the person familiar with the meeting said. But the funds that Alameda had spent were no longer easily available, so the company used FTX customer funds to make the payments. Besides her and Mr. Bankman-Fried, she said, two other people knew about the arrangement: Mr. Singh and Mr. Wang." I "gifted" this article here, but if it doesn't work for some reason, I can post it again in the comments. Edited to add an extra quote. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

    LW - Will we run out of ML data? Evidence from projecting dataset size trends by Pablo Villalobos

    Nov 14, 2022 · 4:31


    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Will we run out of ML data? Evidence from projecting dataset size trends, published by Pablo Villalobos on November 14, 2022 on LessWrong.

    Summary: Based on our previous analysis of trends in dataset size, we project the growth of dataset size in the language and vision domains. We explore the limits of this trend by estimating the total stock of available unlabeled data over the next decades. Read the full paper in arXiv. Our projections predict that we will have exhausted the stock of low-quality language data by 2030 to 2050, high-quality language data before 2026, and vision data by 2030 to 2060. This might slow down ML progress. All of our conclusions rely on the unrealistic assumptions that current trends in ML data usage and production will continue and that there will be no major innovations in data efficiency. Relaxing these and other assumptions would be promising future work.

    Table 1: Median and 90% CI exhaustion dates for each pair of projections.
    • Low-quality language stock: historical projection 2032.4 [2028.4; 2039.2], compute projection 2040.5 [2034.6; 2048.9]
    • High-quality language stock: historical projection 2024.5 [2023.5; 2025.7], compute projection 2024.1 [2023.2; 2025.3]
    • Image stock: historical projection 2046 [2037; 2062.8], compute projection 2038.8 [2032; 2049.8]

    Background
    Chinchilla's wild implications argued that training data would soon become a bottleneck for scaling large language models. At Epoch we have been collecting data about trends in ML inputs, including training data. Using this dataset, we estimated the historical rate of growth in training dataset size for language and image models. Projecting the historical trend into the future is likely to be misleading, because this trend is supported by an abnormally large increase in compute in the past decade. To account for this, we also employ our compute availability projections to estimate the dataset size that will be compute-optimal in future years using the Chinchilla scaling laws. We estimate the total stock of English language and image data in future years using a series of probabilistic models. For language, in addition to the total stock of data, we estimate the stock of high-quality language data, which is the kind of data commonly used to train large language models. We are less confident in our models of the stock of vision data because we spent less time on them. We think it is best to think of them as lower bounds rather than accurate estimates.

    Results
    Finally, we compare the projections of training dataset size and total data stocks. The results can be seen in the figure above. Datasets grow much faster than data stocks, so if current trends continue, exhausting the stock of data is unavoidable. The table above shows the median exhaustion years for each intersection between projections. In theory, these dates might signify a transition from a regime where compute is the main bottleneck to growth of ML models to a regime where data is the taut constraint. In practice, this analysis has serious limitations, so the model uncertainty is very high. A more realistic model should take into account increases in data efficiency, the use of synthetic data, and other algorithmic and economic factors. In particular, we have seen some promising early advances on data efficiency, so if lack of data becomes a larger problem in the future we might expect larger advances to follow.
This is particularly true because unlabeled data has never been a constraint in the past, so there is probably a lot of low-hanging fruit in unlabeled data efficiency. In the particular case of high-quality data, there are even more possibilities, such as quantity-quality tradeoffs and learned metrics to extract high-quality data from low-quality sources. All in all, we believe that there is about a 20% chance that the scaling (as measured in training compute) of ML models will significantly slow down b...
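    A minimal sketch of the kind of back-of-the-envelope projection described above: given an exponential growth rate in training dataset size and a fixed stock of data, solve for the year the two curves cross. This is not Epoch's actual probabilistic model; the growth factor, token counts, and base year below are illustrative assumptions for the example, not figures from the paper.

        # Hedged sketch (not Epoch's model): project the year an exponentially
        # growing training-dataset size overtakes a fixed stock of data.
        # All numbers are illustrative placeholders, not estimates from the post.
        import math

        def exhaustion_year(dataset_tokens_now, annual_growth_factor, stock_tokens, base_year=2022):
            """Return the (fractional) year when dataset size reaches the data stock."""
            if dataset_tokens_now >= stock_tokens:
                return float(base_year)
            years_needed = math.log(stock_tokens / dataset_tokens_now) / math.log(annual_growth_factor)
            return base_year + years_needed

        # Example: ~1e12 tokens used today, usage growing ~1.5x per year,
        # ~1e13 tokens of high-quality text in the total stock.
        print(round(exhaustion_year(1e12, 1.5, 1e13), 1))  # -> 2027.7

    Changing the assumed growth factor or stock size shifts the crossing year substantially, which is why the post reports wide confidence intervals rather than point estimates.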

    EA - Suggestion - separate out the FTX threads somehow by Arepo

    Nov 14, 2022 · 0:46


    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Suggestion - separate out the FTX threads somehow, published by Arepo on November 14, 2022 on The Effective Altruism Forum. In general the frontpage has felt overloaded to me, but that's a broader topic of which this is an acute example. For now, could we just quickly set up a way to stop other subjects from getting drowned out? Maybe just filter FTX-related posts off the frontpage, with an announcement at the top that you've done so and a link to the FTX crisis tag? Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

    EA - Money Stuff: FTX's Balance Sheet Was Bad by Elliot Temple

    Nov 14, 2022 · 1:44


    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Money Stuff: FTX's Balance Sheet Was Bad, published by Elliot Temple on November 14, 2022 on The Effective Altruism Forum. Matt Levine explains that FTX's balance sheet (which has been leaked) is a nightmare. Sample quote: And then the basic question is, how bad is the mismatch [between liabilities and assets on the balance sheet]. Like, $16 billion of dollar liabilities and $16 billion of liquid dollar-denominated assets? Sure, great. $16 billion of dollar liabilities and $16 billion worth of Bitcoin assets? Not ideal, incredibly risky, but in some broad sense understandable. $16 billion of dollar liabilities and assets consisting entirely of some magic beans that you bought in the market for $16 billion? Very bad. $16 billion of dollar liabilities and assets consisting mostly of some magic beans that you invented yourself and acquired for zero dollars? WHAT? Never mind the valuation of the beans; where did the money go? What happened to the $16 billion? Spending $5 billion of customer money on Serum would have been horrible, but FTX didn't do that, and couldn't have, because there wasn't $5 billion of Serum available to buy. FTX shot its customer money into some still-unexplained reaches of the astral plane and was like “well we do have $5 billion of this Serum token we made up, that's something?” No it isn't! Mirror to avoid paywall: (Loading the article in a private browsing window would probably work too.) Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

    EA - Stop Thinking about FTX. Think About Getting Zika Instead. by jeberts

    Nov 14, 2022 · 22:01


    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Stop Thinking about FTX. Think About Getting Zika Instead., published by jeberts on November 14, 2022 on The Effective Altruism Forum. (Or about getting malaria, or hepatitis C (see below), or another exciting disease in an artisanal, curated list of trials in the UK, Canada, and US by 1Day Sooner.) Hi! My name is Jake. I got dysentery as part of a human challenge trial for a vaccine against Shigella, a group of bacteria that are the primary cause of dysentery globally. I quite literally shtposted through it on Twitter and earned fifteen minutes of Internet fame. I now work for 1Day Sooner, which was founded as an advocacy group in early 2020 for Covid-19 human challenge trials. (Who knew dysentery could lead to a career change?) 1Day is also involved in a range of other things, including pandemic preparedness policy and getting hepatitis C challenge trials off the ground. What I want to focus on right now is Zika. Specifically, I want to convince people assigned female at birth aged 18-40 in the DC-Baltimore/DMV area reading this post to take a few minutes to consider signing up for screening for the first-ever human challenge trial for Zika virus (ZIKV) at Johns Hopkins University in Baltimore. Even if you fall outside that category, I figure this might be something interesting to ponder over, and probably less stressful than the cryptocurrency debacle that shall not be named. 1Day Sooner is not compensated in any way by the study/Hopkins for this, nor do I/we represent the study in any official sense. I happen to have become very fascinated by this one in particular because it represents how challenge trials can be used for pandemic prevention with comparatively few resources. This post is meant to inform you of the study with a bit more detail from an EA perspective, but does not supplant information provided by the study staff. (If you're a DMV male like me bummed you can't take part, ask me about current or upcoming malaria vaccine and monoclonal antibody trials taking place at the University of Maryland — I'll be screening next week! If you're not in the DMV or otherwise just can't do anything for Zika or malaria, you can still sign up for our volunteer base and newsletter, which will help you keep tabs on future studies. Something we're very excited about is the emerging push for hepatitis C challenge studies, see the link above.) Zika 101 Zika is a mainly mosquito-borne disease that has been known since 1947 — The 2015-2016 western hemisphere epidemic showed that Zika could cause grave birth defects (congenital Zika syndrome, CZS) — The disease is very mild in adults at present The Zika virus (ZIKV) was discovered in the Zika forest of Uganda in 1947, and a few years later, we learned it could cause what was universally assumed to be extremely mild disease in humans. The 2015-16 Zika epidemic that started in South America, and which was particularly severe in Brazil, proved otherwise. This is when it became clear that Zika was linked to horrific birth defects. Zika has since earned its place on the WHO priority pathogen list. To briefly review the basics of Zika: The Zika virus is a flavivirus, in the same family as dengue fever, yellow fever, and Chikungunya, among others. Flaviviruses are RNA viruses, which generally lack genomic proofreading ability and are thus more prone to mutation. 
Zika infection sometimes causes Zika fever, though asymptomatic cases are very common. Zika fever is usually very, very mild, and direct deaths are extremely rare. Zika is much more of a concern if you are pregnant — in about 7% of infections during pregnancy, the virus infects a fetus and causes serious, even fatal, birth defects. These defects, which most frequently include microcephaly, are referred to as congenital Zika syndrome (CZS). Zika is also thought to increase t...

    EA - Proposals for reform should come with detailed stories by Eric Neyman

    Nov 14, 2022 · 4:26


    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Proposals for reform should come with detailed stories, published by Eric Neyman on November 14, 2022 on The Effective Altruism Forum. Following the collapse of FTX, many of us have been asking ourselves: What should members of the EA community have done differently, given the information they had at the time? I think that proposed answers to this question are considerably more useful if they meet (and explicitly address) the following three criteria: (1) The (expected) benefits of the proposal outweigh the costs, given information available at the time. (2) Implementing the proposal would have been realistic. (3) There is a somewhat detailed, plausible story of how the proposal could have led to a significantly better outcome with regard to the FTX collapse. Hopefully it's clear why we want the first two of these. (1) is EA bread and butter. And (2) is important to consider given how decentralized EA is: if your proposal is "all EAs should refuse funding from a billionaire without a credible audit of their finances", you face a pretty impossible coordination problem. But most of the proposals I've seen have been failing (3). Here's a proposal I've heard: The people SBF was seeking out to run the FTX Foundation should have requested an audit of FTX's finances at the start. This isn't enough for me to construct a detailed, plausible story for things going better. In particular, I think one of two things would have happened: The FTX leadership would have agreed to a basic audit that wouldn't have uncovered underlying problems (much as the entire finance industry failed to uncover the problems). The FTX leadership would have refused to accede to the audit. In the first case, the proposal fails at accomplishing its goal. In the second case, I want more details! What should the FTX Foundation people have done in response, and how could their actions have led to a better outcome? A more detailed proposal: The people SBF was seeking out to run the FTX Foundation should have requested a thorough, rigorous audit of FTX's finances at the start. If FTX leadership had refused, they should have refused to run the FTX Foundation. This still isn't detailed enough: SBF would have just asked someone else to run the Foundation. It seems like the default outcome is that someone who's a bit worse at the job would have been in charge instead. A more detailed proposal: The people SBF was seeking out to run the FTX Foundation should have requested a thorough, rigorous audit of FTX's finances at the start. If FTX leadership had refused, they should have refused to run the FTX Foundation and made it public that FTX leadership had refused the audit. Then, EA leaders should have discouraged major EA organizations from taking money from the FTX Foundation and promoted a culture of looking down on anyone who took money from the Foundation. This is starting to get to the point where something could have realistically changed as a result of the actions. Maybe the pressure for transparency would have been strong enough that SBF would have acceded to an audit -- though I still think the audit wouldn't have uncovered anything. Or maybe he wouldn't have acceded, and for a while EA organizations would have refused his money, before eventually giving in to the significant incentives to take the money. 
Or maybe they would have refused his money for many years -- in that case, I would question whether criterion (1) is still satisfied (given only information we would have had at the time, remember). But at least we have a somewhat detailed story to argue about. Without a detailed story, it's easy to convince yourself that the change you propose would have plausibly averted the disaster we face. A detailed story forces you to spell out your assumptions in a way that makes it easier for other people to poke ...

    EA - Theories of Welfare and Welfare Range Estimates by Bob Fischer

    Nov 14, 2022 · 15:29


    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Theories of Welfare and Welfare Range Estimates, published by Bob Fischer on November 14, 2022 on The Effective Altruism Forum. Key Takeaways Many theories of welfare imply that there are probably differences in animals' welfare ranges. However, these theories do not agree about the sizes of those differences. The Moral Weight Project assumes that hedonism is true. This post tries to estimate how different our welfare range estimates could be if we were to assume some other theory of welfare. We argue that even if hedonic goods and bads (i.e., pleasures and pains) aren't all of welfare, they're a lot of it. So, probably, the choice of a theory of welfare will only have a modest (less than 10x) impact on the differences we estimate between humans' and nonhumans' welfare ranges. Introduction This is the third post in the Moral Weight Project Sequence. The aim of the sequence is to provide an overview of the research that Rethink Priorities conducted between May 2021 and October 2022 on interspecific cause prioritization. The aim of this post is to suggest a way to quantify the impact of assuming hedonism on welfare range estimates. Motivations Theories of welfare disagree about the determinants of welfare. According to hedonism, the determinants of welfare are positively and negatively valenced experiences. According to desire satisfaction theory, the determinants are satisfied and frustrated desires. According to a garden variety objective list theory, the determinants are something like knowledge, developing and maintaining friendships, engaging in meaningful activities, and so on. Now, some animals probably have more intense pains than others; some probably have richer, more complex desires; some are able to acquire more sophisticated knowledge of the world; others can make stronger, more complex relationships with others. If animals systematically vary with respect to their ability to realize the determinants of welfare, then they probably vary in their welfare ranges. That is, some of them can probably realize more positive welfare at a time than others; likewise, some of them can probably realize more negative welfare at a time than others. As a result, animals probably vary with respect to the differences between the best and worst welfare states they can realize. The upshot: many theories of welfare imply that there are probably differences in animals' welfare ranges. However, theories of welfare do not obviously agree about the sizes of those differences. Consider a garden variety objective list theory on which the following things contribute positively to welfare: acting autonomously, gaining knowledge, having friends, being in a loving relationship, doing meaningful work, creating valuable institutions, experiencing pleasure, and so on. Now consider a simple version of hedonism (i.e., one that rejects the higher / lower pleasure distinction) on which just one thing contributes positively to welfare: experiencing pleasure. Presumably, while many nonhuman animals (henceforth, animals) can experience pleasure, they can't realize many of the other things that matter according to the objective list theory. Given as much, it's plausible that if the objective list theory is true, there will be larger differences in welfare ranges between many humans and animals than there will be if hedonism is true. 
For practical and theoretical reasons, the Moral Weight Project assumes that hedonism is true. On the practical side, we needed to make some assumptions to make any progress in the time we had available. On the theoretical side, there are powerful arguments for hedonism. Still, those who reject hedonism will rightly wonder about the impact of assuming hedonism. How different would our welfare range estimates be if we were to assume some other theory of welfare? In the ...

    EA - Jeff Bezos announces donation plans (in response to question) by david reinstein

    Nov 14, 2022 · 2:28


    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Jeff Bezos announces donation plans (in response to question), published by david reinstein on November 14, 2022 on The Effective Altruism Forum. For a change, some good billionaire philanthropy news.

    Caveats:
    • It seems to have been in response to the question "Do you plan to give away the majority of your wealth in your lifetime?"; I don't know whether he encouraged them to ask this.
    • I suspect this is not entirely 'news'; iirc he has made noises like this in the past.

    Amazon founder Jeff Bezos plans to give away the majority of his $124 billion net worth during his lifetime, telling CNN in an exclusive interview he will devote the bulk of his wealth to fighting climate change and supporting people who can unify humanity in the face of deep social and political divisions. This seems potentially promising: he seems to prioritize effectiveness. “The hard part is figuring out how to do it in a levered way,” he said, implying that even as he gives away his billions, he is still looking to maximize his return. “It's not easy. Building Amazon was not easy. It took a lot of hard work, a bunch of very smart teammates, hard-working teammates, and I'm finding — and I think Lauren is finding the same thing — that charity, philanthropy, is very similar.” “There are a bunch of ways that I think you could do ineffective things, too,” he added. “So you have to think about it carefully and you have to have brilliant people on the team.” Bezos' methodical approach to giving stands in sharp contrast to that of his ex-wife, the philanthropist MacKenzie Scott, who recently gave away nearly $4 billion to 465 organizations in the span of less than a year. In terms of specifics, the Earth Fund seems relatively good, to me: Bezos has committed $10 billion over 10 years, or about 8% of his current net worth, to the Bezos Earth Fund, which Sánchez co-chairs. Among its priorities are reducing the carbon footprint of construction-grade cement and steel; pushing financial regulators to consider climate-related risks; advancing data and mapping technologies to monitor carbon emissions; and building natural, plant-based carbon sinks on a large scale. I'm less enthusiastic about the “Bezos Courage and Civility Award” which seems celebrity-driven (as much as I love Dolly Parton) and perhaps less likely to target global priorities/effectiveness. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

    EA - Effective Peer Support Network in FTX crisis (Update) by Emily

    Nov 14, 2022 · 2:22


    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Effective Peer Support Network in FTX crisis (Update), published by Emily on November 14, 2022 on The Effective Altruism Forum. Are you going through a rough patch due to the FTX crisis? Or do you want to help EA peers who are? Following up on our post from last Friday, 'Get and Give Mental Health Support During These Hard Times', we [Rethink Wellbeing and the Mental Health Navigator] set up a support network to react to the many people in our community affected by the FTX crisis. The number of people who have joined is growing! These are the two modes of action: (1) In this table, you can find experienced supporters. These supporters want to help (for free), and you can just contact them. The community health team, as well as a growing number of coaches and therapists that are informed about EA and the FTX crisis, are already listed. If you have experience in supporting others and would like to dedicate >=1 hour in the next few weeks to support one or more EA peers that are having a hard time, you can take 1 min to add your details to the table. We likely need more support to take care of this situation. Consultants might be helpful, too. (2) You can join our new Peer Support Network Slack here. People can share and discuss their issues, get together in groups and 1:1, as well as get support from the trained helpers as well as peers: It enables you to chat with a trained helper anonymously—if you use a nickname and an email address that doesn't contain your name. You can create closed sub-channels here for a small group of people with similar issues that want to support each other more closely. One can tackle specific topics in sub-channels, e.g., dealing with loss, future worries, or personal crises. Feel free to send this to anyone who might need or wish to provide help! Rethink Wellbeing was supposed to receive FTX funding to implement Effective Peer Support. If we find an alternative funding source (USD 25-63k), we could also quickly provide a simple first version of mental health support groups and 1:1s to people affected by the FTX crisis. Let us know if you'd like to make that possible. Around 300 EAs have signed up to participate in Effective Peer Support already, mainly after our post in October. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

    EA - The FTX crisis highlights a deeper cultural problem within EA - we don't sufficiently value good governance by Fods12

    Nov 14, 2022 · 7:28


    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The FTX crisis highlights a deeper cultural problem within EA - we don't sufficiently value good governance, published by Fods12 on November 14, 2022 on The Effective Altruism Forum.

    Introduction
    In this piece, I will explain why I don't think the collapse of FTX and the resulting fallout for the Future Fund and the EA community in general is a one-off or 'black swan' event as some have argued on this forum. Rather, I think that what happened was part of a broader pattern of failures and oversights that have been persistent within EA and EA-adjacent organisations since the beginning of the movement. As a disclaimer, I do not have any inside knowledge or special expertise about FTX or any of the other organisations I will mention in this post. I speak simply as a long-standing and concerned member of the EA community.

    Weak Norms of Governance
    The essential point I want to make in this post is that the EA community has not been very successful in fostering norms of transparency, accountability, and institutionalisation of decision-making. Many EA organisations began as ad hoc collections of like-minded individuals with very ambitious goals but relatively little career experience. This has often led to inadequate organisational structures and procedures being established for proper management of personnel, financial oversight, external auditing, or accountability to stakeholders. Let me illustrate my point with some major examples I am aware of from EA and EA-adjacent organisations:
    • Weak governance structures and financial oversight at the Singularity Institute, leading to the theft of over $100,000 in 2009.
    • Inadequate record keeping, rapid executive turnover, and insufficient board oversight at the Centre for Effective Altruism over the period 2016-2019.
    • Inadequate financial record keeping at 80,000 Hours during 2018.
    • Insufficient oversight, unhealthy power dynamics, and other harmful practices reported at MIRI/CFAR during 2015-2017.
    • Similar problems reported at the EA-adjacent organisation Leverage Research during 2017-2019.
    • 'Loose norms around board of directors and conflicts of interests between funding orgs and grantees' at FTX and the Future Fund from 2021-2022.
    While these specific issues are somewhat diverse, I think what they have in common is an insufficient emphasis on principles of good organisational governance. This ranges from the most basic such as clear objectives and good record keeping, to more complex issues such as external auditing, good systems of accountability, transparency of the organisation to its stakeholders, avoiding conflicts of interest, and ensuring that systems exist to protect participants in asymmetric power relationships. I believe that these aspects of good governance and robust institution building have not been very highly valued in the broader EA community. In my experience, EAs like to talk about philosophy, outreach, career choice, and other nerdy stuff. Discussing best practice of organisational governance and systems of accountability doesn't seem very high status or 'sexy' in the EA space. There has been some discussion of such issues on this forum (e.g. this thoughtful post), but overall EA culture seems to have failed to properly absorb these lessons.
EA projects are often run by small groups of young idealistic people who have similar educational and social backgrounds, who often socialise together, and (in many cases) participate in romantic relationships with one another - The case of Sam Bankman-Fried and Caroline Ellison is certainly not the only such example in the EA community. The EA culture seems to be heavily influenced by start-up culture and entrepreneurialism, with a focus on moving quickly and relying on finding highly-skilled and highly-aligned people and then providing them funding and space to work with minimal oversi...

    EA - Lessons from taking group members to an EAGx conference by Irene H

    Nov 14, 2022 · 8:27


    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Lessons from taking group members to an EAGx conference, published by Irene H on November 14, 2022 on The Effective Altruism Forum. We (Irene and Jelle) run the EA Eindhoven university group and went to EAGxRotterdam (4-6 November 2022) with 16 of our members. Our members told us they really enjoyed the conference and have plans for how they want to pursue their EA journeys. We are excited that almost all Dutch universities have EA groups now and felt EAGxRotterdam was the capstone of this year in which the Dutch EA community has really taken off. These are some of the lessons we learned and best practices we discovered in our preparation for the conference as well as what we did during the conference itself. We hope other group organizers can benefit from these lessons too. We of course hope to visit many more EA conferences in the future and grow as community builders, which means this guide is a work in progress. We are also excited to learn about the best practices of other community builders and welcome their suggestions. Feel free to place comments! Some things to keep in mind when reading this Circumstance-specific things that were the case for us: Our group is only a few months old, our members all became engaged with EA only recently (most of them through our Introduction Fellowship) and our two organizers were the only people who had ever attended an EA conference before. This is why we put quite a lot of effort into encouraging members to apply and helping them prepare. Rotterdam is only a 1-hour train ride from Eindhoven, so it was relatively easy for us to convince members to attend. Some of our members stayed with friends or traveled back and forth every day. The conference was on the weekend between two exam weeks at our university and a few of our members cited this as a reason for not applying. One of our members got accepted but never showed up to the conference in order to work on a class assignment. We really regret these things but do not know what we could have done about them. Because this conference in the Netherlands was such a rare event, we also advised some members to apply even though we were not 100% sure if they were engaged enough in EA. We would probably be stricter with our advice to them about this for conferences that are further away. For members who had not done an Introduction Fellowship (or equivalent), we made it clear that the conference was not going to be useful if they did not do some kind of preparation. We agreed with them that they would go over the EA Handbook and scheduled a 1-on-1 to discuss the preparation. In the end, all people who were interested were willing to do this preparation and carried it out. We spoke to them after to conference and they told us they found the conference interesting and returned with new ideas. We had 3 people show up at our collective application night, which is not a lot, but they all applied. We had approximately 10 people in total show up at our preparation evenings (we hosted 2 in a student café on our campus). We were both volunteers at EAGxRotterdam, had a lot of 1-on-1s scheduled and Jelle was also a speaker. Irene also met a community builder at the conference who was mainly there to guide his members, but in our case, that was not our only priority. 
    Encourage and help members to apply
    • Plan other programs (especially the Introduction Fellowship) so that they finish in time to still have space for promoting the conference and giving members the time to apply.
    • Pitch the conference during your programs and events
    • Host a collective application night
    • Guide for Collective application night by the EAGxRotterdam team that we used
    • Focus on members you think would benefit most from the conference
    • Members who have already done at least an Introduction Fellowship (or equivalent)
    • Have 1-on-1...

    EA - AI Safety Microgrant Round by Chris Leong

    Nov 14, 2022 · 4:34


    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI Safety Microgrant Round, published by Chris Leong on November 14, 2022 on The Effective Altruism Forum. We are pleased to announce an AI Safety Microgrants Round, which will provide micro-grants to field-building projects and other initiatives that can be done with less. We believe there are projects and individuals in the AI Safety space who lack funding but have high agency and potential. We think individuals helping to fund projects will be particularly important given recent changes in the availability of funding. For this reason, we decided to experiment with a small micro-grant round as a test and demonstration of this concept. To keep the evaluation simple, we're focusing on field-building projects rather than projects that would require complex evaluation (we expect that most technical projects would be much more difficult to evaluate). We are offering microgrants up to $2,000 USD with the total size of this round being $6,000 USD (we know that this is a tiny round, but we are running this as a proof-of-concept). One possible way this could pan out would be two grants of $2000 and two of $1000, although we aren't wedded to this and are fine with requests for less. We want to fund grant requests of this size where EA Funds possibly has a bit too much overhead. The process is as follows: Fill out this form at microgrant.ai (

    EA - NY Times on the FTX implosion's impact on EA by AllAmericanBreakfast

    Nov 14, 2022 · 4:31


    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: NY Times on the FTX implosion's impact on EA, published by AllAmericanBreakfast on November 14, 2022 on The Effective Altruism Forum. The impact of the FTX scandal on EA is starting to hit the news. The coverage in this NY Times article seems fair to me. I also think that the FTX Future Fund leadership decision to jointly resign both was the right thing to do, and comes across that way in the article. Will MacAskill, I think, is continuing to show leadership interfacing with the media - it's a big transition from his book tour not long ago to giving quotes to the press about FTX's implosion. The article focuses on the impact this has had on EA: [The collapse of FTX] has also dealt a significant blow to the corner of philanthropy known as effective altruism, a philosophy that advocates applying data and evidence to doing the most good for the many and that is deeply tied to Mr. Bankman-Fried, one of its leading proponents and donors. Now nonprofits are scrambling to replace millions in grant commitments from Mr. Bankman-Fried's charitable vehicles, and members of the effective altruism community are asking themselves whether they might have helped burnish his reputation... ... For a relatively young movement that was already wrestling over its growth and focus, such a high-profile scandal implicating one of the group's most famous proponents represents a significant setback. The article mentions the FTX Future Fund joint resignation, focusing on the grants that will not be able to be honored and what those might have helped. The article talks about Will MacAskill inspiring SBF to switch his career plans to pursue earning to give, but doesn't try to blame the fraud on utilitarianism or on EA. This is my opinion, but I'm just confused by people's eagerness to blame this on utilitarianism or the EA movement. The common-sense American lens to view these sorts of outcomes is a framework of personal responsibility. If SBF committed fraud, that is indicative of a problem with his personal character, not the moral philosophy he claims to subscribe to. His connection to the movement in fact predates the vast fortune he won and lost in the cryptocurrency field. Over lunch a decade ago while he was still in college, Mr. Bankman-Fried told Mr. MacAskill, the philosopher, that he wanted to work on animal-welfare issues. Mr. MacAskill suggested the young man could do more good earning large sums of money and donating the bulk of it to good causes instead. Mr. Bankman-Fried went into finance with the stated intention of making a fortune that he could then give away. In an interview with The New York Times last month about effective altruism, Mr. Bankman-Fried said he planned to give away a vast majority of his fortune in the next 10 to 20 years to effective altruist causes. He did not respond to a request for comment for this article. Contrary to my expectation, the article was pretty straightforward in describing the global health/longtermism aspects of EA: Effective altruism focuses on the question of how individuals can do as much good as possible with the money and time available to them. Historically, the community focused on low-cost medical interventions, such as insecticide-treated bed nets to prevent mosquitoes from giving people malaria. 
More recently many members of the movement have focused on issues that could have a greater impact on the future, like pandemic prevention and nuclear nonproliferation as well as preventing artificial intelligence from running amok and sending people to distant planets to increase our chances of survival as a species. Probably the most critical aspect of the article was this: Benjamin Soskis, senior research associate in the Center on Nonprofits and Philanthropy at the Urban Institute, said that the issues raised by Mr. Bankman-Fried's rev...

    EA - Wrong lessons from the FTX catastrophe by burner

    Nov 14, 2022 · 6:40


    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Wrong lessons from the FTX catastrophe, published by burner on November 14, 2022 on The Effective Altruism Forum. There remains a large amount of uncertainty about what exactly happened inside FTX and Alameda. I do not offer any new takes on what occurred. However, there are the inklings of some "lessons" from the situation that are incorrect regardless of how the details flesh out. Pointing these out now may seem in poor taste while many are still coming to terms with what happened, but it is important to do so before they become the canonical lessons.

    1. Ambition was a mistake
    There have been a number of calls for EA to "go back to bed nets." It is notable that this refrain conflates the alleged illegal and unethical behavior from FTX/Alameda with the philosophical position of longtermism. Rather than evaluating the two issues as distinct, the call seems to assume both were born out of the same runaway ambition. Logically, this is obviously not the case. Longtermism grew in popularity during SBF's rise, and the Future Fund did focus on longtermist projects, but the FTX/Alameda situation has no bearing on the truth of longtermism and associated projects' prioritization. To the extent that both becoming incredibly rich and affecting the longterm trajectory of humanity are "ambitious" goals, ambition is not the problem. Committing financial crimes is a problem. Longtermism has problems, like knowing how to act given uncertainty about the future. But an enlightened understanding of ambition accommodates these problems: We should be ambitious in our goals while understanding our limitations in solving them. There is an uncomfortable reality that SBF symbolized a new level of ambition for EA. That sense of ambition should be retained. His malfeasance should not be. This is not to say that there may be lessons to learn about transparency, overconfidence, centralized power, trusting leaders, etc from these events. But all of these are distinct from a lesson about ambition, which depends more on vague allusions to Icarus than argument.

    2. No more earning to give
    I am not sure that this is being learned as a "lesson" or if this situation simply "leaves a bad taste" in EAs' mouths about earning to give. The alleged actions of the FTX/Alameda team in no way suggest that earning money to donate to effective causes is a poor career path. Certain employees of FTX/Alameda seem to have been doing distinctly unethical work, such as grossly mismanaging client funds. One of the only arguments for why one might be open to working a possibly unethical job like trading cryptocurrency is because an ethically motivated actor would do that job in a more ethical way than the next person would (from Todd and MacAskill). Earning to give never asked EAs to pursue unethical work, and encouraged them to pursue any line of work in an ethically upstanding way. (I emphasize Todd and MacAskill conclude in their 2017 post: "We believe that in the vast majority of cases, it's a mistake to pursue a career in which the direct effects of the work are seriously harmful"). Earning to give in the way it has always been advised-by doing work that is basically ethical-continues to be a highly promising route to impact. This is especially true when total EA assets have significantly depreciated.
There is a risk that EAs do not continue to pursue earning to give, thinking either that it is icky post-FTX or that someone else has it covered. This is a poor strategy. It is imperative that some EAs who are well suited for founding companies shift their careers into entrepreneurship as soon as possible.

    3. General FUD from nefarious actors
    As the EA community reels from the FTX/Alameda blow-up, a number of actors with histories of hating EA have chimed in with threads about how this catastrophe is in line with X thing they alre...

    EA - How the FTX crash damaged the Altruistic Agency by Markus Amalthea Magnuson

    Nov 14, 2022 · 10:43


    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How the FTX crash damaged the Altruistic Agency, published by Markus Amalthea Magnuson on November 13, 2022 on The Effective Altruism Forum. Introduction First, I'd like to express my deepest sympathies to everyone affected by the FTX breakdown. A lot of people have lost their life savings and are suffering terribly, something I do not wish to diminish in any way. Many are affected in far worse ways than I can even imagine, and I hope as many of those as possible will be able to find the support they need to get through these challenging times. I got confirmation this week on Wednesday (Nov 9) that payouts from the FTX Future Fund had stopped. Since I had an outstanding grant with them, this was of great concern to me personally. The days following that, and what is still unravelling minute by minute, it seems, was the complete meltdown of FTX and all related entities, including the Future Fund. You are all following these events in other places, so I won't go into that much, but I wanted to offer a perspective from a Future Fund grantee and the specific ways something like this can do damage. This post is also a call for support since all funding for my next year of operations has suddenly evaporated entirely. Specifically, if you or someone you know is a grantmaker or donor to EA meta/infrastructure/operations projects and would be interested in funding a new organisation with a good track record, please get in touch. I'd be happy to share much more details and data on the proven value so far, my grant application to the Future Fund, and anything else that might be of interest. I will also mention the benefits of being vigilant about organisational structure and how it can save your organisation in the long run, even though it might be an upfront and ongoing cost of time and money. Background I founded the Altruistic Agency in January this year. The idea was to apply my knowledge and experience from 15+ years as a full-stack developer to help organisations and others in the EA community with tech expertise. I hypothesised that the kind of work I had done mostly in a commercial context for most of my professional life is also highly valuable to nonprofits/charities and others doing high-impact work within EA cause areas. I have always been fond of meta projects, and this seemed like something that could greatly increase productivity (like technology does), especially by saving people lots of time that they could instead spend on their core mission. Thanks to a grant from the EA Infrastructure Fund, I was able to test this hypothesis full-time and spent the first half of this year providing free tech support to many EA organisations and individuals. During that time, I worked with around 45 of them in areas such as existential risk, animal advocacy, climate, legal research, mental health, and effective fundraising. The work ranged from small tasks, such as improving email deliverability, fixing website bugs, and making software recommendations, to larger tasks, such as building websites, doing security audits, and software integration. The response from the community was overwhelmingly positive, and the data from the January–June pilot phase of the Altruistic Agency indicates high value. I solved issues (on median) in a fifth of the time it would have taken organisations themselves. 
The data indicates just one person doing this work for six months saved EA organisations at least 900 hours of work. Additionally, many respondents said they learned a lot from working with me, both about their own systems and setups and about tech in general. Not least, increased awareness of security issues in code and systems, which will only become more crucial over time, and can be significantly harmful if not dealt with properly. Insights from the pilot phase told me two important t...

    LW - Announcing Nonlinear Emergency Funding by KatWoods

    Nov 13, 2022 · 0:25


    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing Nonlinear Emergency Funding, published by KatWoods on November 13, 2022 on LessWrong. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

    EA - The FTX Situation: Wait for more information before proposing solutions by D0TheMath

    Nov 13, 2022 · 4:00


    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The FTX Situation: Wait for more information before proposing solutions, published by D0TheMath on November 13, 2022 on The Effective Altruism Forum. Edit: Eli has a great comment on this which I suggest everyone read. He corrects me on a few things, and gives his far more informed takes. I'm slightly scared that EA will overcorrect in an irrelevant direction to the FTX situation in a way I think is net harmful, and I think a major reason for this fear is seeing lots of people espousing conclusions about solutions to problems without us actually knowing what the problems are yet. Some examples of this I've seen recently on the forum follow.

    Integrity
    It is uncertain whether SBF intentionally committed fraud, or just made a mistake, but people seem to be reacting as if the takeaway from this is that fraud is bad. These articles are mostly saying things of the form 'if FTX engaged in fraud, then EA needs to make sure people don't do more fraud in the service of utilitarianism.' From a worrying-about-group-think perspective, this is only a little less concerning than directly saying 'FTX engaged in fraud, so EA should make sure people don't do more fraud'. Even though these articles aren't literally saying that FTX engaged in fraud in the service of utilitarianism, I worry these articles will shift the narrative EA tells itself towards up-weighting hypotheses which say FTX engaged in fraud in the service of utilitarianism, especially in worlds where it turned out that FTX did commit fraud, but it was motivated by pride, or other selfish desires.

    Dating
    Some have claimed FTX's downfall happened as a result of everyone sleeping with each other, and this interpretation is not obviously unpopular on the forum. This seems quite unlikely compared to alternative explanations, and the post Women and Effective Altruism takes on a tone & content I find toxic to community epistemics[1], and which I anticipate wouldn't have flown on the forum a week ago. I worry the reason we see this post now is that EA is confused, wants to do something, and is really searching for anything to blame for the FTX situation. If you are confused about what your problems are, you should not go searching for solutions! You should ask questions, make predictions, and try to understand what's going on. Then you should ask how you could have prevented or mitigated the bad events, and ask whether those prevention and mitigation efforts would be worth their costs. I think this problem is important to address, and am uncertain about whether this post is good or bad on net. The point is that I'm seeing a bunch of heated emotions on the forum right now, this is not like the forum I'm used to, and lots of these heated discussions seem to be directed towards pushing new EA policy proposals rather than trying to figure out what's going on.

    Vetting funding
    We could immediately launch a costly investigation to see who had knowledge of fraud that occurred before we actually know if fraud occurred or why. In worlds where we're wrong about whether or why fraud occurred this would be very costly. My suggestion: wait for information to costlessly come out, discuss what happened when not in the midst of the fog and emotions of current events, and then decide whether we should launch this costly investigation.
Adjacently, some are arguing EA could have vetted FTX and Sam better, and averted this situation. This reeks of hindsight bias! Probably EA could not have done better than all the investors who originally vetted FTX before giving them a buttload of money! Maybe EA should investigate funders more, but arguments for this are orthogonal to recent events, unless CEA believes their comparative advantage in the wider market is high-quality vetting of corporations. If so, they could stand to make quite a bit of money selling this service,...

    EA - SBF, extreme risk-taking, expected value, and effective altruism by vipulnaik

    Nov 13, 2022 · 35:40


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: SBF, extreme risk-taking, expected value, and effective altruism, published by vipulnaik on November 13, 2022 on The Effective Altruism Forum. NOTE: I have some indirect associations with SBF and his companies, though probably less so than many of the others who've been posting and commenting on the forum. I don't expect anything I write here to meaningfully affect how things play out in the future for me, so I don't think this creates a conflict of interest, but feel free to discount what I say. NOTE 2: I'm publishing this post without having spent the level of effort polishing and refining it that I normally try to spend. This is due to the time-sensitive nature of the subject matter and because I expect to get more value from being corrected in the comments on the post than from refining the post myself. If errors are pointed out, I will try to correct them, but may not always be able to make timely corrections, so if you're reading the post, please also check the comments for flaws identified by commenters. The collapse of Sam Bankman-Fried (SBF) and his companies FTX and Alameda Research is the topic du jour on the Effective Altruism Forum, and there have been several posts on the Forum discussing what happened and what we can learn from it. The post FTX FAQ provides a good summary of what we know as of the time I'm writing this post. I'm also funding work on a timeline of the FTX collapse (still a work in progress, but with enough coverage already to be useful if you are starting with very little knowledge). Based on information so far, fraud and deception on the part of SBF (and/or others in FTX and/or Alameda Research) likely happened and were likely key to the way things played out and the extent of damage caused. The trigger seems to be the big loan that FTX provided to Alameda Research to bail it out, using customer funds for the purpose. If FTX hadn't bailed out Alameda, it's quite likely that the spectacular death of FTX we saw (with depositors losing all their money as well) wouldn't have happened. But it's also plausible that without the loan, the situation with Alameda Research was dire enough that Alameda Research, and then FTX, would have died due to the lack of funds. Hopefully that would have been a more graceful death with less pain to depositors. That is a very important difference. Nonetheless, I suspect that by the time of the bailout, we were already at a kind of endgame. In this post, I try to step back a bit from the endgame, and even get away from the specifics of FTX and Alameda Research (that I know very little about) and in fact even get away from the specifics of SBF's business practices (where again I know very little). Rather, I talk about SBF's overall philosophy around risk and expected value, as he has articulated it himself, and as it has been approvingly amplified by several EA websites and groups. I think the philosophy was key to the overall way things played out. And I also discuss the relationship between the philosophy and the ideas of effective altruism, both in the abstract and as specifically championed by many leaders in effective altruism (including the team at 80,000 Hours). My goal is to encourage people to reassess the philosophy and make appropriate updates.
I make two claims: Claim 1: SBF engages in extreme risk-taking that is a crude approximation to the idea of expected value maximization as perceived by him. Claim 2: At least part of the motivation for SBF's risk-taking comes from ideas in effective altruism, and in particular specific points made by EA leaders including people affiliated with 80,000 Hours. While personality probably accounts for a lot of SBF's decisions, the role of EA ideas as a catalyst cannot be dismissed based on the evidence. Here are a few things I am not claiming (some of these are discussed ...

    LW - Weekly Roundup #5 by Zvi

    Play Episode Listen Later Nov 13, 2022 9:25


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Weekly Roundup #5, published by Zvi on November 11, 2022 on LessWrong. A note about what isn't in this roundup: Nothing about the midterms or crypto/FTX. I wrote about Twitter earlier this week. The midterms are better covered elsewhere, I do not have anything unique to say about them. As for the situation with FTX, it is rapidly developing and I do not yet feel I have a sufficient handle on it that saying things would be net helpful. I continue to recommend starting on that topic by reading Matt Levine (both in general, and for this in particular). There were going to be a few additional things related to Twitter and Musk, but the section rapidly spiraled out of control as news came on top of news and I didn't know if my statements were current, so I'm moving it to another draft and we'll see if it ends up making it or not. For now, I will simply say that how badly and quickly you want to Find Out, which can only be done by Fing Around, is increasingly the question of our age. It is indeed weird to have so much going on and have that result in having less to say for the moment, while thoughts are gathered and things become clearer. Seems right, though, and also some background writing tasks got more urgent this week.
Bad News
More on the chess cheating scandals (via MR). My main contention continues to be that we are far too unwilling to punish competitors on the basis of statistical evidence in cheating cases. FIDE's ‘99.999% sure' policy, or whatever exactly it is, is Obvious Nonsense. No one is going to jail or being executed, and there are plenty of options well short of ‘can never again play competitive chess.' Where would I put the bar? I don't think it is crazy to put it at 51% if your estimates are fair and well calibrated. Certainly if I thought someone was 50/50 to be a savage cheater, I would not invite them to join my Magic (or chess) team or play group, help promote them, or anything like that. If I was running an invitational chess tournament, I would consider even a 10% chance to be a rather severe demerit to the extent that I had the freedom to consider it. My guess is that in practice the right answer for Serious Official Punishments is in the 75%-90% range. Uber tests using push notifications to deliver ads. I am trying to imagine a world where this is a good business decision and failing. Are falling retirement ages good or bad for people? Joe's question also raises the question of whether, if we are or become importantly poorer than we used to be, we should progressively raise the age to collect full benefits. On reflection, I continue to think that this is an area where people are going to make decisions poorly, in both directions. Some people will retire far too early, either run out of funds or grow bored and lonely and have their health degenerate faster than it otherwise would have. Those that realize this is happening will then mostly have few good options to undo what was done. Others will refuse to retire for too long, although I expect this group to be smaller. My guess is that of those who keep working too long at their job, a lot of them would benefit from changing jobs and actively doing something else more than they would benefit from full retirement. Do I plan to retire? My basic answer is no, at least not until very late in the game when I am unable to meaningfully work.
I do not see such a decision improving my life. My hope is that I can fully retire from having to do work for or because of the money while continuing to do plenty of work. I don't want to be keeping up quite this pace of work indefinitely, I likely should be taking more time to relax than I am as it is. I'm working on that. In Magic: The Gathering, Aaron Forsythe asks why the Standard format is dying and gets a lot of answers, including this one by Brian Kowal. Standard was ...

    LW - The Alignment Community Is Culturally Broken by sudo -i

    Play Episode Listen Later Nov 13, 2022 3:02


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Alignment Community Is Culturally Broken, published by sudo -i on November 13, 2022 on LessWrong. Disclaimer: These are entirely my thoughts. I'm posting this before it's fully polished because it never will be. Epistemic status: Moderately confident. Deliberately provocative title. Apparently, the Bay Area rationalist community has a burnout problem. I have no idea if it's worse than base rate, but I've been told it's pretty bad. I suspect that the way burnout manifests in the rationalist community is uniquely screwed up. I was crying the other night because our light cone is about to get ripped to shreds. I'm gonna do everything I can to do battle against the forces that threaten to destroy us. You've heard this story before. Short timelines. Tick. Tick. I've been taking alignment seriously for about a year now, and I'm ready to get serious. I've thought hard about what my strengths are. I've thought hard about what I'm capable of. I'm dropping out of Stanford, I've got something that looks like a plan, I've got the Rocky theme song playing, and I'm ready to do this. A few days later, I saw this post. And it reminded me of everything that bothers me about the EA community. Habryka covered the object level problems pretty well, but I need to communicate something a little more... delicate. I understand that everyone is totally depressed because qualia is doomed. I understand that we really want to creatively reprioritize. I completely sympathize with this. I want to address the central flaw of Akash+Olivia+Thomas's argument in the Buying Time post, which is that actually, people can improve at things. There's something deeply discouraging about being told "you're an X% researcher, and if X>Y, then you should stay in alignment. Otherwise, do a different intervention." No other effective/productive community does this. I don't know how to put this, but the vibes are deeply off. The appropriate level of confidence to have about a statement like "I can tell how good of an alignment researcher you will be after a year of you doing alignment research" feels like it should be pretty low. At a year, there are almost certainly ways to improve that haven't been tried. Especially in a community so memetically allergic to the idea of malleable human potential. Here's a hypothesis. I in no way mean to imply that this is the only mechanism by which burnout happens in our community, but I think it's probably a pretty big one. It's not nice to be in a community that constantly hints that you might just not be good enough and that you can't get good enough. Our community seems to love treating people like mass-produced automatons with a fixed and easily assessable "ability" attribute. (Maybe you flippantly read that sentence and went "yeah it's called g factor lulz." In that case, maybe reflect on how good of a correlate g is in absolute terms for the things you care about.) If we want to actually accomplish anything, we need to encourage people to make bigger bets, and to stop stacking up credentials so that fellow EAs think they have a chance. It's not hubris to believe in yourself. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

    LW - What's the Alternative to Independence? by jefftk

    Play Episode Listen Later Nov 13, 2022 1:59


    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What's the Alternative to Independence?, published by jefftk on November 13, 2022 on LessWrong. When I talk about teaching my kids to be independent, trying to get them to where they can do more on their own, one response I often get is, paraphrasing: I like spending time with my kids and don't see it as a burden. Bringing them to school, playing with them, making their food: these are all chances to connect and enjoy being together. They'll be old enough not to need me soon enough, and in the meantime I enjoy bringing them to the playground. So, first, I also like spending time with my kids! We do a lot of things together, and I'm happy about that. But it's also common that they'll want to do things that I can't do with them: One wants to go to the park and the other wants to stay home. One of them is ready to go to school and wants to get there early to play with friends, but the other isn't ready yet. With a third child now this comes up even more. At times: The older two want to go over to a friend's house, but the baby is napping. I'm still feeding the baby breakfast when it's time for school. The best time for the baby's afternoon nap conflicts with school pickup. The alternative to doing things on their own is typically not us doing the same things together. Instead, it's at least one kid needing to accept doing something they like much less, and typically a lot more indoor time. I do think there is some truth in the original point, though. There are times when the alternative to "they go to the park" is just "I take them to the park". Sometimes that's fine (they want to play with friends, I want to write), other times less so (they want to play monster and I don't have anything that's actually more important). With this approach you do need to be thoughtful about making sure you're spending an amount of time with them that you all are, and will be, happy with. Comment via: facebook, mastodon Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

    EA - Announcing Nonlinear Emergency Funding by Kat Woods

    Play Episode Listen Later Nov 13, 2022 0:58


    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing Nonlinear Emergency Funding, published by Kat Woods on November 13, 2022 on The Effective Altruism Forum. Like most of you, we at Nonlinear are horrified and saddened by recent events concerning FTX. Some of you counting on Future Fund grants are suddenly finding yourselves facing an existential financial crisis, so, inspired by the Covid Fast Grants program, we're trying something similar for EA. If you are a Future Fund grantee and

    EA - Thoughts on legal concerns surrounding the FTX situation by Molly

    Play Episode Listen Later Nov 13, 2022 7:12


    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Thoughts on legal concerns surrounding the FTX situation, published by Molly on November 13, 2022 on The Effective Altruism Forum. As Open Phil's managing counsel, and as a member of the EA community, I've received an onslaught of questions relating to the FTX fiasco in the last few days. Unfortunately, I've been unable to answer most of them either because I don't have the relevant legal expertise, or because I can't provide legal advice to non-clients (and Open Philanthropy is my only client in this situation), or because the facts I'd need to answer the questions just aren't available yet. The biggest topic of concern is something along the lines of: if I received FTX-related grant money, what am I supposed to do? Will it be clawed back? This post aims to provide legal context on this topic; it doesn't address ethical or other practical perspectives on the topic. Before diving into that, I want to offer this context: this is the first few days of what is going to be a multi-year legal process. It will be drawn out and tedious. We cannot will the information we want into existence, so we can't be as strategic as we'd like. But there's an upside to that. Emotions are high and many people are probably not in a great mental place to make big decisions. This externally-imposed waiting period can be put to good use as a time to process. I understand that for some people who received FTX-related grant money, waiting doesn't feel like an option; people need to know whether they can pay rent, or if their organization still exists. I hope some of the information below will provide a little more context for individual situations and decisions. I also committed to putting out an explainer on clawbacks. That is here, though I think the information in this post is more useful. Bankruptcy and Clawbacks Background The information in this section is based on publicly available information and general conversations with bankruptcy lawyers. I do not have access to any nonpublic information about this case. None of this should be taken as legal advice to you. FTX filed for bankruptcy on Friday (November 11th, 2022). More specifically, Alameda Research Ltd. filed a voluntary petition for bankruptcy under Chapter 11 of the Bankruptcy Code, by filing a standard form in the United States Bankruptcy Court for the District of Delaware. The filing includes 134 “debtor entities” (listed as Annex I); it looks like this covers the full FTX corporate group. This means that the full FTX group is now under the protection of the bankruptcy court, and over the coming months, all of the assets in the debtor group will be administered for the benefit of FTX's creditors. By filing under Chapter 11 (instead of Chapter 7), FTX has preserved the option of emerging out of the bankruptcy proceeding and continuing to operate in some capacity. You can read a useful explainer on the bankruptcy process here. The rules in the Bankruptcy Code are ultimately trying to ensure a fair outcome for creditors. This includes capturing certain payment transactions that occurred in the past. Basically, the debtor can reach back in time to undo deals it made and recoup monies it paid; this money comes back into the estate through clawbacks and gets redistributed to creditors according to the bankruptcy rules. Clawbacks There are two main types of clawback processes. 
The first and most common (called a “preference claim”) targets transactions that happened in the 90 days prior to the bankruptcy filing. Essentially, if you received money from an FTX entity in the debtor group anytime on or after approximately August 11, 2022, the bankruptcy process will probably ask you, at some point, to pay all or part of that money back. It's almost impossible to say right now whether any specific grant or contract will be subject to clawb...

    EA - Hubris and coldness within EA (my experience) by James Gough

    Play Episode Listen Later Nov 13, 2022 2:32


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Hubris and coldness within EA (my experience), published by James Gough on November 13, 2022 on The Effective Altruism Forum. Hi all. Like a lot of people that have had a connection to EA, I am appalled by the close connection between the FTX scandal and EA. But not surprised. The EA community events I attended totally killed my passion for EA. I attended an EA Global conference in London and left feeling really really sad. Before the conference I was told I was not important enough or not worth the time to get career advice. One person I'd met before at local EA events made it clear that he didn't want to waste time talking to me (this was in the guide, btw, to make it clear if you don't think someone is worth your time). Well, it certainly made me too unconfident and uncomfortable to approach anyone else. I found the whole thing miserable. Everyone went out to take a photo for the conference and I didn't bother. I don't want to be part of a community that I didn't feel happy in. On a less personal level, I overheard some unpleasant conversations about how EA should only be reserved for the intellectual elite (whatever the fuck that is) and how diversity didn't really matter. How they were annoyed that women got talks just for being women. Honestly, the whole place just reeked of hubris - everyone was so sure they were right, people had no interest in you as a person. I have never experienced more unfriendly, self-important, uncompassionate people in my life (I am 31 now). It was of course the last time I was ever involved with anything EA related. Maybe you read this and can dismiss it with "yeah, but issues are too important to waste time with petty small talk or showing interest in others." Or "your subjective experience doesn't matter." Or "we talk about rationality and complex ideas here, not personal opinions." But that is the whole point I'm trying to make. When you take away the human element, when you're so focused on grandiose ideas and certain of your perfect rationality, you end up dismissing the fast thinking necessary to make good ethical decisions. Anyone that values human kindness would run a mile from someone that doesn't have the respect to listen to someone talking to them and makes clear that their video game is valued above that person. Similarly to the long history of Musk's contempt for ordinary people. EA just seems so focused on being ethical that it forgot how to be nice. In my opinion, a new more inclusive organisation with a focus on making a positive impact needs to be created - with a better name. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

    EA - In favour of ‘personal policies' by Michelle Hutchinson

    Play Episode Listen Later Nov 13, 2022 11:31


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: In favour of ‘personal policies', published by Michelle Hutchinson on November 13, 2022 on The Effective Altruism Forum. I'm pretty sad this week and isolating due to having covid. I thought I'd try to have something positive come out of that, and write up a blog post I've had in the back of my mind for a while. I wanted to share my positive experience with setting ‘policies' for myself. They're basically heuristics I use to avoid having to make as many decisions (particularly about things that are somewhat stressful). I got the idea, and suggestions for ways to implement it, from Brenton Mayer (thank you!).
What are personal policies?
There are various classes of decisions I make periodically, for which I'd like to have an answer in advance rather than deciding in individual cases. Those are the kinds of cases in which I try to make a ‘policy decision' going forward. This is the kind of thing we do all the time for particular types of actions. For example, someone might decide to be a vegetarian. From then on, they no longer consider in each individual instance whether they should eat some dish with meat in; they've made a blanket rule not to do so in any individual case. There are a number of things like ‘being a vegetarian' which we're used to. We're less likely to make up our own of these ‘rules I plan to live by'. A way we might frame it when we do is as ‘getting into a habit'. I sometimes prefer the framing of ‘policy' in that it's instantaneous (whereas something can't really be a habit until you've done it a few times) and it sounds like a clear decision you're acting on. A way I like to think of this is: For tricky repeating decisions, make them only once. Having said that, for long run policies, it's likely you'll want to have periodic re-examinations of them to check you still endorse them. I keep a list of my policies, both to make them easier to remember and to come back and re-evaluate them.
Use cases and benefits
Make faster decisions
Having a policy for some type of decision means you don't have to spend time making a decision in each specific case. One of my friends has the policy of always running for a train if there's one she wants to be on which looks about to leave. This is the kind of situation where we're often unsure what to do - is it impossible to make the train however fast I am? Do I have plenty of time because it's not about to leave? Time spent dithering increases the chance you miss the train even if you then choose to run for it. So having made the decision in advance means over the long run you'll catch more trains than you otherwise would have. (And given the downside is basically some extra cardio, this seems easily worth it.)
Make better decisions
More thoughtful decisions: Even aside from cases where you don't have enough time to think much about a decision (like catching a train), it's worth putting more time into a blanket policy than an individual decision. Rather than half-thinking through a type of decision a number of times, you might act better if you make a careful/thorough decision once and then act on that repeatedly (when the decisions are relevantly similar). More objective decisions: In some cases, individual decision points are emotionally laden in a way that will bias your decision.
In cases like that, you might make a more objective decision in advance than you would in the moment. For example, you might find it hard not to donate to a charity that's raising money on the street if you have to decide whether to do that on the spur of the moment. You might feel your decision will be more objective if you think beforehand about the circumstances under which you do and don't want to donate to charities when you pass fundraisers for them (eg yes for particular interventions, no for others). Help from others: I often find i...

    EA - Noting an unsubstantiated belief about the FTX disaster by Yitz

    Play Episode Listen Later Nov 13, 2022 3:02


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Noting an unsubstantiated belief about the FTX disaster, published by Yitz on November 13, 2022 on The Effective Altruism Forum. There is a narrative about the FTX collapse that I have noticed emerging as a commonly-held belief, despite little concrete evidence for or against it. The belief goes something like this: Sam Bankman-Fried did what he did primarily for the sake of "Effective Altruism," as he understood it. Even though from a purely utilitarian perspective his actions were negative in expectation, he justified the fraud to himself because it was "for the greater good." As such, poor messaging on our part may be partially at fault for his downfall. This take may be more or less plausible, but it is also unsubstantiated. As Astrid Wilde noted on Twitter, there is a distinct possibility that the causality of the situation may have run the other way, with SBF as a conman taking advantage of the EA community's high-trust environment to boost himself. Alternatively (or additionally), it also seems quite plausible to me that the downfall of FTX had something to do with the social dynamics of the company, much as Enron's downfall can be traced back to [insert your favorite theory for why Enron collapsed here]. We do not, and to some degree cannot, know what SBF's internal monologue has been, and if we are to update our actions responsibly in order to avoid future mistakes of this magnitude (which we absolutely should do), we must deal with the facts as they most likely are, not as we would like or fear them to be. All of this said, I strongly suspect that ten years from now, conventional wisdom will hold the above belief as being basically canon, regardless of further evidence in either direction. This is because it presents an intrinsically interesting, almost Hollywood villain-esque narrative, one that will surely evoke endless "hot takes" which journalists, bloggers, etc. will have a hard time passing over. Expect this to become the default understanding of what happened (from outsiders at least), and prepare accordingly. At the same time, be cautious when updating your internal beliefs so as not to assume automatically that this story must be the truth of the matter. We need to carefully examine where our focus in self-improvement should lie moving forward, and it may not be the case that a revamping of our internal messaging is necessary (though it may very well be in the end; I certainly do not feel qualified to make that final call, only to point out what I recognize from experience as a temptingly powerful story beat which may influence it). Primarily on the Effective Altruism forum, but also on Twitter. See e.g. "pro fanaticism" messaging from some community factions, though it should be noted that this has always been a minority position. With roughly 80% confidence, conditional on 1.) No obviously true alternative story coming out about FTX that totally accounts for all their misdeeds somehow, and 2.) This post (or one containing the same observation) not becoming widely cited (since feedback loops can get complex and I don't want to bother accounting for that). Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

    LW - Noting an unsubstantiated communal belief about the FTX disaster by Yitz

    Play Episode Listen Later Nov 13, 2022 0:27


    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Noting an unsubstantiated communal belief about the FTX disaster, published by Yitz on November 13, 2022 on LessWrong. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

    EA - FTX FAQ by Hamish Doodles

    Play Episode Listen Later Nov 13, 2022 5:17


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: FTX FAQ, published by Hamish Doodles on November 13, 2022 on The Effective Altruism Forum. What is this? There's a lot of information flying around at the moment, and I'm going to try and organise it into an FAQ. I expect I have made a lot of mistakes, so please don't assume any specific claim here is true. This is definitely not legal or financial advice or anything like that. Please let me know if anything is wrong/unclear/misleading. Please suggest questions and/or answers in the comments. Update: actually, I would advise against wading into the comments. I'm erring on the side of brevity, so if you need more information follow the links. What is FTX? FTX is a Cryptocurrency Derivatives Exchange. It is now bankrupt. Who is Sam Bankman-Fried (SBF)? The founder of FTX. He was recently a billionaire and the richest person under 30. How is FTX connected to effective altruism? In the last couple of years, effective altruism received millions of dollars of funding from SBF and FTX via the Future Fund. SBF was following a strategy of "make tons of money to give it to charity." This is called "earning to give", and it's an idea that was spread by EA in the early-to-mid 2010s. SBF was definitely encouraged onto his current path by engaging with EA. SBF was something of a "golden boy" to EA. For example, this. How did FTX go bankrupt? FTX gambled with user deposits rather than keeping them in reserve. Binance, a competitor, triggered a run on the bank where depositors attempted to get their money out. It looked like Binance was going to acquire FTX at one point, but they pulled out after due diligence. Now FTX and SBF are bankrupt, and SBF will likely be convicted of a felony. Source How bad is this? "It is therefore very likely to lead to the loss of deposits which will hurt the lives of 10,000s of people eg here" SBF will likely be convicted of a felony. Source Did SBF definitely do something illegal and/or immoral? The vibe I'm reading is "very likely, but not yet certain." Does EA still have funding? Yes. Before FTX there was Open Philanthropy (OP), which is mostly funded by Dustin Moskovitz and Cari Tuna. None of this is connected to FTX, and OP's funding is unaffected. Is Open Philanthropy funding impacted? Global health and wellbeing funding will continue as normal. Because the total pool of funding to longtermism has shrunk, Open Philanthropy will have to raise the bar on longtermist grant making. Thus, Open Philanthropy is "pausing most new longtermist funding commitments" (longtermism includes AI, Biosecurity, and Community Growth) for a couple of months to recalibrate. Source If you got money from FTX, do you have to give it back? It's possible, but we don't know. What if you've already spent money from FTX? It's still possible that you may have to give it back. Again, we don't know. If you got money from FTX, should you give it back? You probably shouldn't, at least for the moment. If you gave the money back, there's the possibility that because it wasn't done through the proper legal channels you end up having to give the money back again. If you got money from FTX, should you spend it? Probably not. At least for the next few days. You may have to give it back. I feel terrible about having FTX money. Reading this may help. What if I'm still expecting FTX money?
The board of the FTX Future Fund has resigned, but "grantees may email grantee-reachout@googlegroups.com." How can I get support/help? "Here's the best place to reach us if you'd like to talk. I know a form isn't the warmest, but a real person will get back to you soon." (source) Some mental health advice here. How are people reacting? Will MacAskill: "If there was deception and misuse of funds, I am outraged, and I don't know which emotion is stronger: my utter rage at Sam (and others?) for causing such ha...

    LW - Covid 11/10/22: Into the Background by Zvi

    Play Episode Listen Later Nov 13, 2022 7:17


    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Covid 11/10/22: Into the Background, published by Zvi on November 10, 2022 on LessWrong. There was a lot of news this week. Elon Musk continued to Do Things at Twitter. America had midterm elections. The polls were roughly accurate, with ‘candidate quality' mattering more than expected especially on the Republican side. FTX imploded. For now I am letting Matt Levine handle coverage of what happened. None of it had anything directly to do with Covid. So this will be short. Executive Summary I hear there were midterms. And that FTX imploded. Covid death number down a lot, presumably not a real effect. Let's run the numbers. The Numbers Predictions Predictions from Last Week: 255k cases (+7%) and 2,600 deaths (-1%) Results: 242k cases (+1%) and 1,993 deaths (-24%!). Predictions for Next Week: 250k cases (+4%) and 2,400 deaths (+20%). That death number is super weird. At first I thought ‘what, they can count votes or they can count deaths but not both at the same time?' or ‘election day isn't actually a holiday is it?' Then the case number came in flat even in the South, although Alabama didn't report any cases at all (which wasn't a big enough effect for me to adjust). Some of the drop is that last week had a spike of about 250 deaths in North Carolina. Still leaves the majority of the gap unexplained. I don't know. I don't see how there could have been a large real drop in deaths, and if it was a reporting issue we would have seen a decline in cases. Also, in the regions where we see a decline in deaths, West and South, we don't see relatively few cases. So the reporting explanations don't make that much sense here, and it seems unlikely cases actually rose a lot while being underreported or anything like that. It does raise uncertainty in deaths a lot for next week, and to some extent also for cases, in the usual ‘was this real or was it both fake and in need of reversal' dilemma. We shall see. The good news is that there is not much practical impact on decision making, unless this is all hiding a tidal wave of new infections. That is possible. I would still not expect anything like last year's wave, and for things not go on too long before stabilizing, but the timing between weather and sub-variants would make some sense. Deaths Cases Booster Boosting and Variants Nothing. Physical World Modeling Babies born during lockdown more often miss developmental milestones (study). I doubt this leads to that much in the way of permanent impacts. It still seems rather not good, the effect sizes here are quite large. Study finds that Paxlovid reduces risk of Long Covid. Would be weird if it didn't. China Via MR, Nucleic Acid Testing ‘now accounts for 1.3% of China's GDP.' Zero Covid is a rather large drag on China's economy that will be cumulative over time. From the comments, which also interestingly feature massive negative votes everywhere: I'm a teacher in China and EVERYONE gets tested at least THREE times per week. It is such a waste of time and money. Meanwhile full surface paranoia continues. The Zero Covid principle and the constant testing would not be my approach, but I understand. The surfaces obsession is something different. Everyone has to know at this point that it doesn't do anything. China reiterates Covid Zero Policy (Bloomberg), calls for more targeted and also ‘more decisive' measures. 
No signs they will abandon surface paranoia. This seems like it likely gets worse before it gets better. And Now An Important Message An SNL sketch, best of the season by far. If you haven't yet, watch it. Long Covid Article from Yale on chronic disease in general and how Long Covid may help us get a better understanding of what it is and how to deal with it. Does not provide much new. Mostly it is the same old ‘now that we have an Official Recognition for a chronic disease m...

    EA - In favour of compassion, and against bandwagons of outrage by Emrik

    Play Episode Listen Later Nov 13, 2022 3:33


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: In favour of compassion, and against bandwagons of outrage, published by Emrik on November 13, 2022 on The Effective Altruism Forum. I hate to add to the number of FTX posts on the forum, but after some (imo) inappropriate and unkind memes and comments in the Dank EA Memes fb group and elsewhere, I wanted to push back against what seems like a bandwagon of anger and ridicule spiralling too far, and I wish to call attention to it. But first, I should point out that I personally, at this time, do not know nearly enough to draw confident conclusions regarding what's happened at FTX. That means I will not make any morally relevant judgments. I will especially not insinuate them without sufficient evidence. That just amounts to irresponsibly fuelling the bandwagon while maintaining plausible deniability, which is arguably worse. You are not required to pretend to know more than you do just so you can empathise with the outrage of your friends. That shouldn't be how friendship works. This topic is not without nuance. There's a good case to be made for why ridicule can be pro-social, and I think Alex makes it here: "Ridicule makes clear our commitment to punishing ultimately harmful behavior, in a tit-for-tat sense; we are not the government so we cannot lock up wrongdoers, and acting as a vigilante assassin is precluded by other issues, so our top utility-realizing option is to meme harmful behavior out of the sphere of social acceptability." I don't disagree with condemning someone for having behaved unethically. It's a necessary part of maintaining civil society, and it enables people to cooperate and trade in good faith. But if you accuse someone of having ill-advisedly forsaken ethics in the (putative) service of the greater good, then retaliating by forsaking compassion in the service of unchecked mockery can't possibly make anything better. Why bother with compassion, you might ask? After all, compassion is superfluous for positive-sum cooperation. What we really need for essential social institutions to work at all is widespread trust in the basic ethics of people we trade with. So when a public figure gets caught depreciating that trust, it's imperative that we send a strong signal that this is completely unacceptable. This I all agree with. Judicious punishments are essential for safeguarding prevailing social institutions. Plain fact. But what if prevailing social institutions are unjust? When we jump on a bandwagon for humiliating the accused transgressor after their life has already fallen apart, we are exercising our instincts for mob justice, and we are indirectly strengthening the norm for coercing deviants more generally. Advocating punitive attitudes trades off against advocating for compassion to some extent. Especially if the way you're trying to advocate for punishments is by means of gleefully inflicting harm. In a society where most people are all too eager to join in when they see their in-group massing against deviants, and where groups have wildly different opinions on who the deviants are in the first place, we need an alternative set of principles. Compassion is a corrective on unjust social norms. It lets us see more clearly where prevailing ethics strays from what's kind and good. In essence, that's the whole purpose of effective altruism: to do better than the unquestioned norms that have been handed down to us.
Hence why I hope we can outgrow--or at least lend nuance to--our reflexive instinct to punish, and instead cultivate whatever embers of compassion we can find. Let that be our cultural contribution, because the alternative, advocating punitive sentiments, just isn't a neglected cause area. example, example, example, and example. comment Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

    EA - Ways to buy time by Akash

    Play Episode Listen Later Nov 13, 2022 0:23


    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Ways to buy time, published by Akash on November 12, 2022 on The Effective Altruism Forum. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

    EA - Women and Effective Altruism by Keerthana Gopalakrishnan

    Play Episode Listen Later Nov 12, 2022 5:38


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Women and Effective Altruism, published by Keerthana Gopalakrishnan on November 12, 2022 on The Effective Altruism Forum. A lot has been talked about SBF/FTX/EA but this coverage reminds me that it is time to talk about the toxicity of the culture within EA communities, especially as it relates to women. EA circles, much like the group house in the Bahamas, are widely incestuous, where people mix their work life (in EA cause areas), their often polyamorous love life and social life in one amalgamous mix without strict separations. This is the default status quo. This means that if you're a reasonably attractive woman entering an EA community, you get a ton of sexual requests to join polycules, often from poly and partnered men. Some of these men control funding for projects and enjoy high status in EA communities and that means there are real downsides to refusing their sexual advances and pressure to say yes, especially if your career is in an EA cause area or is funded by them. There are also upsides, as reported by CoinDesk on Caroline Ellison. From experience it appears that a ‘no' once said is not enough for many men in EA. Having to keep replenishing that ‘no' becomes annoying very fast, and it becomes harder to give informed consent when socializing in the presence of alcohol/psychedelics. It puts your safety at risk. From experience, EA as a community has very little respect for monogamy and many men, often competing with each other, will persuade you to join polyamory using LessWrong-style jedi mind tricks while they stand to benefit from the erosion of your boundaries. (Edit: I have personally experienced this more than three times in less than one year of attending EA events and that is far too many times.) So how do these men maintain polycules and find sexual novelty? EA meetups, of course. Several EA communities are grounds for predatory men in search of their nth polycule partner and to fill their “dance cards”. I have seen this in NYC EA circles, I have seen this in SF. I decided to stop socializing in EA circles a couple months ago due to this toxicity, the benefits are not worth the uncovered downside risk. I also am lucky enough to not work for an EA-aligned organization / cause area and am socially diversified enough to take that hit. The power enjoyed by men who are predatory, the rate of occurrence and a lack of visible pushback amounts to a tacit and somewhat widespread backing for this behaviour. My experience resonates with a few other women in SF I have spoken to. They have also met red-pilled, exploitative men in EA/rationalist circles. EA/rationalism and redpill fit like yin and yang. Akin to how EA is an optimization of altruism with “suboptimal” human tendencies like morality and empathy stripped from it, red pill is an optimized sexual strategy with the humanity of women stripped from it. You'll also, surprisingly, encounter many women who are redpilled and manifest internalized misogyny in EA. How to check if you're one: if terms like SMV, hypergamy etc. are part of your everyday vocabulary and thought processes, you might be affected. You'll also encounter many women who are unhappy participants in polygamous relationships; some of them really smart women who agree to be unhappy (dump him, sis).
And if you're a lucky woman who hasn't experienced this in EA, great, and your experience does not need to negate those of others. Despite this culture, EA as a philosophy has a lot of good in it and they should fix this bug with some introspection. Now mind you, this is not a criticism of polyamory itself. If polyamorous love happens between consenting adults without favoritism in professional settings, all is well and good. But EA is an organization and community focused on a mission of altruism, enjoys huge swathes of donor money, and exerts socio-political ...

    EA - Internalizing the damage of bad-acting partners creates incentives for due diligence by tailcalled

    Play Episode Listen Later Nov 12, 2022 0:30


    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Internalizing the damage of bad-acting partners creates incentives for due diligence, published by tailcalled on November 11, 2022 on The Effective Altruism Forum. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

    EA - After recent FTX events, what are alternative sources of funding for longtermist projects? by CarolineJ

    Play Episode Listen Later Nov 12, 2022 2:21


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: After recent FTX events, what are alternative sources of funding for longtermist projects?, published by CarolineJ on November 12, 2022 on The Effective Altruism Forum. Now that we know "that it looks likely that there are many committed grants that the Future Fund will be unable to honor" according to the former team of the FTX Future Fund, it would be useful for a number of us to have alternatives. Current large funders (such as OpenPhil) are re-considering their grant strategy. These donors are going to be extremely busy in the next few weeks. If you're not too funding-constrained, it seems that a good and pro-social strategy is to wait until after the storm and let others who have urgent and important grants figure things out. However, it seems that this may not be true if you are very funding-constrained right now, or if some grants or fellowships have deadlines coming up soon that would be useful to have on your radar. So, what are good places to apply for funding now? (and in the future too) To start, the FLI AI Existential Safety PhD Fellowship has a deadline on November 15. The Vitalik Buterin PhD Fellowship in AI Existential Safety is for PhD students who plan to work on AI existential safety research, or for existing PhD students who would not otherwise have funding to work on AI existential safety research. It will fund students for 5 years of their PhD, with extension funding possible. At universities in the US, UK, or Canada, annual funding will cover tuition, fees, and the stipend of the student's PhD program up to $40,000, as well as a fund of $10,000 that can be used for research-related expenses such as travel and computing. At universities not in the US, UK or Canada, the stipend amount will be adjusted to match local conditions. Fellows will also be invited to workshops where they will be able to interact with other researchers in the field. Applicants who are short-listed for the Fellowship will be reimbursed for application fees for up to 5 PhD programs, and will be invited to an information session about research groups that can serve as good homes for AI existential safety research. More about the fellowship here. What are other options? Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

    LW - Rudeness, a useful coordination mechanic by Raemon

    Play Episode Listen Later Nov 12, 2022 4:15


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Rudeness, a useful coordination mechanic, published by Raemon on November 11, 2022 on LessWrong. I think the concept of "rudeness" is underappreciated. (Or, if people are appreciating it, they're doing so quietly where I can't find out about it.) I think a lot of coordination-social-tech relies on there being some kind of karmic balance. A lot of actions aren't expressly illegal, and not even blatantly all-the-time socially sanctioned. But if you do them a bit, it's considered rude, and if you're rude all the time, you get a reputation for being rude, and this comes with some consequences (i.e. not invited to as many parties). The-concept-of-rudeness gives you a tool to softly shape your culture, and have some kind of standards, without having to be really rigid about it. [Edit to add:] I'm writing this post because I was writing another coordination/epistemic-norms post, and I found myself wanting to write the sentence "If you do [X thing], it should be considered a bit rude. If you do [X' worse version of X thing], it's more rude." And then I realized this was resting on some underlying assumptions about coordination-culture that might not be obvious to everyone. (i.e. that it's good to have some things be considered "rude") I come from a game-design background. In many games, there are multiple resources, and there are multiple game-mechanics for spending those resources, or having them interact with each other. You might have life-points, you might have some kind of "money" (which can store value and then be spent in arbitrary quantities), or renewable resources (like grains that grow back every year, and spoil if you leave them too long). Many good games have rich mechanics that you can fine-tune, to shape the player's experience. A flexible mechanic gives you knobs-to-turn, to make some actions more expensive or cheaper. The invention of "money" in real life was a useful coordination mechanic. The invention of "vague social capital that you accumulate doing high status respectable things, and which you can lose by doing low status unrespectable things" predates money by a long time, and is still sometimes useful in ways that money is not. A feeling-of-rudeness is one particular flavor of what "spending-down social capital" can feel like, from the inside of a social interaction. [/edit] Different cultures have different conceptions of "what is rude." Some of that is silly/meaningless, or actively harmful. Some of it is arbitrary (but maybe the arbitrariness is doing some secret social cohesion that's not obvious and autists should learn to respect anyway). In some cultures belching at a meal is rude, in others not belching at a meal is rude. I think there's probably value in having shared scripts for showing respect. Epistemic cultures have their own uses for "the rudeness mechanic." You might consider it rude to loudly assert that people should agree with you, without putting forth evidence to support your claim. You might consider it rude to make a strong claim without being willing to make concrete bets based on it. Or, you might consider it rude to demand that people make bets on topics that are fuzzy and aren't actually amenable to forecasting.
Rudeness depends on circumstance
Different domains might warrant different kinds of conceptions-of-rudeness.
In the military, I suspect "being rude to your superiors" is actually an important thing to discourage, so that decisions can get made quickly. But it can be actively harmful in innovation driven industries. An individual norm of "rudeness" can be context-dependent and depend on other norms. According to this Quora article, in Japan it's normally rude to tell your boss he's wrong, but also you're supposed to go out drinking with your boss where it's more okay to say rude things under the cover of alcohol. Problems with...

    EA - A personal statement on FTX by William MacAskill

    Play Episode Listen Later Nov 12, 2022 3:36


    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A personal statement on FTX, published by William MacAskill on November 12, 2022 on The Effective Altruism Forum. This is a repost from a Twitter thread I made last night. It reads a little oddly when presented as a Forum post, but I wanted to have the content shared here for those not on Twitter. This is a thread of my thoughts and feelings about the actions that led to FTX's bankruptcy, and the enormous harm that was caused as a result, involving the likely loss of many thousands of innocent people's savings. Based on publicly available information, it seems to me more likely than not that senior leadership at FTX used customer deposits to bail out Alameda, despite terms of service prohibiting this, and a (later deleted) tweet from Sam claiming customer deposits are never invested. Some places making the case for this view include this article from Wall Street Journal, this tweet from jonwu.eth, this article from Bloomberg (and follow on articles). I am not certain that this is what happened. I haven't been in contact with anyone at FTX (other than those at Future Fund), except a short email to resign from my unpaid advisor role at Future Fund. If new information vindicates FTX, I will change my view and offer an apology. But if there was deception and misuse of funds, I am outraged, and I don't know which emotion is stronger: my utter rage at Sam (and others?) for causing such harm to so many people, or my sadness and self-hatred for falling for this deception. I want to make it utterly clear: if those involved deceived others and engaged in fraud (whether illegal or not) that may cost many thousands of people their savings, they entirely abandoned the principles of the effective altruism community. If this is what happened, then I cannot in words convey how strongly I condemn what they did. I had put my trust in Sam, and if he lied and misused customer funds he betrayed me, just as he betrayed his customers, his employees, his investors, & the communities he was a part of. For years, the EA community has emphasised the importance of integrity, honesty, and the respect of common-sense moral constraints. If customer funds were misused, then Sam did not listen; he must have thought he was above such considerations. A clear-thinking EA should strongly oppose “ends justify the means” reasoning. I hope to write more soon about this. In the meantime, here are some links to writings produced over the years. These are some relevant sections from What We Owe The Future: Here is Toby Ord in The Precipice: Here is Holden Karnofsky: Here are the Centre for Effective Altruism's Guiding Principles: If FTX misused customer funds, then I personally will have much to reflect on. Sam and FTX had a lot of goodwill – and some of that goodwill was the result of association with ideas I have spent my career promoting. If that goodwill laundered fraud, I am ashamed. As a community, too, we will need to reflect on what has happened, and how we could reduce the chance of anything like this from happening again. Yes, we want to make the world better, and yes, we should be ambitious in the pursuit of that. But that in no way justifies fraud. If you think that you're the exception, you're duping yourself. We must make clear that we do not see ourselves as above common-sense ethical norms, and must engage criticism with humility. 
I know that others from inside and outside of the community have worried about the misuse of EA ideas in ways that could cause harm. I used to think these worries, though worth taking seriously, seemed speculative and unlikely. I was probably wrong. I will be reflecting on this in the days and months to come, and thinking through what should change. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

    EA - GiveWell is hiring a Research Analyst (apply by November 20) by GiveWell

    Play Episode Listen Later Nov 12, 2022 8:50


    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: GiveWell is hiring a Research Analyst (apply by November 20), published by GiveWell on November 12, 2022 on The Effective Altruism Forum. GiveWell is looking for a Research Analyst to join our core interventions team, which investigates and makes funding decisions about programs we're already supporting at scale, including our top charities. The application deadline for this role is midnight Pacific Standard Time on Sunday, November 20. This role does not require any particular academic credentials or work experience. However, it does demand strong analytical and communication skills and a high degree of comfort in interpreting data. More details follow. We invite anyone who feels this role would be a good fit for them to apply by the above deadline. The role As a Research Analyst on GiveWell's core interventions team, you will help the team decide how hundreds of millions of dollars will be spent with the goal of saving and improving the lives of people living in the lowest-income communities in the world. The core interventions team focuses on programs that we are already supporting at scale—this includes our top charities. This team updates our cost-effectiveness estimates for these programs on a rolling basis. Your work will support our goal of always directing money to the most cost-effective funding opportunities available at the time of our grantmaking. You would be responsible for: Analyzing data on spending and numbers of people previously served by a program to generate estimates of cost per person reached in future programs. Adapting cost-effectiveness models to apply to new locations that a program may expand to, including determining which inputs to update and sourcing data for those inputs. Applying updates to our top charities cost-effectiveness analysis, a process which includes archiving the previous model, documenting the change in results, and managing a quality assurance procedure. Writing entries for cost-effectiveness analysis changelogs (examples here) and drafting updates to reports that summarize our cost-effectiveness analyses (example here). Creating cost-effectiveness analysis inputs for programs where we have a framework for these inputs but need to interpret messy data to apply that framework to a particular case. Examples include inputs for mosquito resistance to insecticides used in malaria nets, population-level burden of infection with certain parasites, and length of time a malaria net remains effective in a given context. About you Note: Confidence can sometimes hold us back from applying for a job. Here's a secret: there's no such thing as a "perfect" candidate. GiveWell is looking for exceptional people who want to make a positive impact through their work and help create an organization where everyone can thrive. So whatever background you bring with you, please apply if this role would make you excited to come to work every day. We expect you will be characterized by many of the below qualities. We encourage you to apply if you would use the majority of these characteristics to describe yourself: Conscientious: You are able and willing to carefully follow a process with many steps. You carefully document processes and sources. You are thoughtful about your approach and perform high-quality work, with or without supervision. 
You exhibit meticulous attention to detail, including fine print, and don't cut corners. (This doesn't mean you never make mistakes, but you learn from them and rarely repeat one.) Strong communicator: You write clearly and concisely. You clearly communicate what you believe and why, as well as what you are uncertain about. You can explain our cost-effectiveness analysis parameters and our reasons for changing them clearly and succinctly to a semi-informed audience (e.g., GiveWell staff and donors). Analytic...

    EA - CEA/EV + OP + RP should engage an independent investigator to determine whether key figures in EA knew about the (likely) fraud at FTX by Tyrone-Jay Barugh

    Play Episode Listen Later Nov 12, 2022 3:42


    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: CEA/EV + OP + RP should engage an independent investigator to determine whether key figures in EA knew about the (likely) fraud at FTX, published by Tyrone-Jay Barugh on November 12, 2022 on The Effective Altruism Forum. I think that key EA orgs (perhaps collectively) like the Center for Effective Altruism/Effective Ventures, Open Philanthropy, and Rethink Priorities should consider engaging an independent investigator (with no connection to EA) to try to identify whether key figures in those organisations knew (or can reasonably be inferred to have known, based on other things they knew) about the (likely) fraud at FTX. The investigator should also be contactable (probably confidentially?) by members of the community and others who might have relevant information. Typically a lawyer might be engaged to carry out the investigation, particularly because of professional obligations in relation to confidentiality (subject to the terms of reference of the investigation) and natural justice. But other professionals also conduct independent investigations, and there is no in-principle reason why a lawyer needs to lead this work. My sense is that this should happen very promptly. If anyone did know about the (likely) fraud at FTX, then delay potentially increases the risk that any such person hides evidence or spreads an alternative account that vindicates them. I'm torn about whether to post this, as it may well be something that leadership (or lawyers) in the key EA orgs are already thinking about, and posting this prematurely might result in those orgs being pressured to launch an investigation hastily with bad terms of reference. On the other hand, I've had the concern that there is no whistleblower protection in EA for some time (raised in my March 2022 post on legal needs within EA), and others (e.g. Carla Zoe C) have made this point earlier still. I am not posting this because I have a strong belief that anyone in a key EA org did know - I have no information in this regard beyond vague speculation I have seen on Twitter. If you have a better suggestion, I would appreciate you sharing it (even if anonymously). Epistemic status: pretty uncertain, slightly anxious this will make the situation worse, but on balance think worth raising. Relevant disclosure: I received a regrant from the FTX Future Fund to investigate the legal needs of effective altruist organisations. Edit: I want to clarify that I don't think that any particular person knew. I still trust all the same community figures I trusted one week ago, other than folks in the FTX business. For each 'High Profile EA' I can think of, I would be very surprised if that person in particular knew. But even if we think there is only a 0.1% chance that any given one of the most influential, say, 100 EAs knew, then the chance that none of them knew is 0.999^100, which is about 90.4% (assuming we naively treat those as independent events). If we care about the top 1000 most influential EAs, then we could get that 90.4% chance with just a 0.01% chance per person. Edit: I think Ryan Carey's comment is further in the right direction than this post (subject to my view that an independent investigation should stick to fact-finding rather than making philosophical/moral calls for EA) plus I've also had other people contact me spitballing ideas that seem sensible.
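    As a quick check on the arithmetic in the edit above, here is a minimal sketch (my own illustration, not part of the post; it simply reproduces the stated figures under the post's simplifying assumption of equal, independent per-person probabilities):

```python
# Probability that none of n people knew, if each person independently
# had probability p of knowing (the post's simplifying assumption).
def p_none_knew(p: float, n: int) -> float:
    return (1 - p) ** n

print(p_none_knew(0.001, 100))    # ~0.9048, the post's ~90.4% for 100 people at 0.1% each
print(p_none_knew(0.0001, 1000))  # ~0.9048 again for 1000 people at 0.01% each
```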
I don't know what the terms of reference of an investigation would be, but it does seem like simply answering "did anybody know" might be the wrong approach. If you have further suggestions for the sorts of things that should be considered, it might be worth dropping those into the comments. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

    EA - How could we have avoided this? by Nathan Young

    Play Episode Listen Later Nov 12, 2022 2:07


    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How could we have avoided this?, published by Nathan Young on November 12, 2022 on The Effective Altruism Forum. It seems to me, given the information we had, that betting so heavily on FTX and SBF was an avoidable failure. So what could we have done ex-ante to avoid it? You have to suggest things we could have actually done with the information we had. Some examples of information we had: First, the best counterargument: Then again, if we think we are better at spotting x-risks than these people, maybe this should make us update towards being worse at predicting things. Also, I know there is a temptation to wait until the dust settles, but I don't think that's right. We are a community with useful information-gathering technology. We are capable of discussing here. Things we knew at the time: We knew that about half of Alameda left at one time. I'm pretty sure many of them are EAs or know EAs, and they would have had some sense of this. We knew that SBF's wealth was a very high proportion of effective altruism's total wealth. And we ought to have known that something that took him down would be catastrophic to us. This was Charles Dillon's take, but he tweets behind a locked account and gave me permission to tweet it. Peter Wildeford noted the possible reputational risk 6 months ago: We knew that corruption is possible and that large institutions need to work hard to avoid being coopted by bad actors. Many people found crypto distasteful or felt that crypto could have been a scam. FTX's Chief Compliance Officer, Daniel S. Friedberg, had behaved fraudulently in the past. This, from August 2021: In 2013, an audio recording surfaced that made mincemeat of UB's original version of events. The recording of an early 2008 meeting with the principal cheater (Russ Hamilton) features Daniel S. Friedberg actively conspiring with the other principals in attendance to (a) publicly obfuscate the source of the cheating, (b) minimize the amount of restitution made to players, and (c) force shareholders to shoulder most of the bill. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

    EA - Naïve vs Prudent Utilitarianism by Richard Y Chappell

    Play Episode Listen Later Nov 12, 2022 8:37


    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Naïve vs Prudent Utilitarianism, published by Richard Y Chappell on November 11, 2022 on The Effective Altruism Forum. Critics sometimes imagine that utilitarianism directs us to act disreputably whenever it appears (however fleetingly) that the act would have good consequences. Or whenever crudely calculating the most salient first-order consequences (in isolation) yields a positive number. This “naïve utilitarian” decision procedure is clearly daft, and not something that any sound utilitarian actually advocates. On the other hand, critics sometimes mistake this point for the claim that utilitarianism itself is plainly counterproductive, and necessarily advocates against its own acceptance. While that's always a conceptual possibility, I don't think it has any empirical credibility. Most who think otherwise are still making the mistake of conflating naïve utilitarianism with utilitarianism proper. The latter is a much more prudent view, as I'll now explain. Adjusting for Bias Imagine an archer, trying to hit a target on a windy day. A naive archer might ignore the wind, aim directly at the target, and (predictably) miss as their arrow is blown off-course. A more sophisticated archer will deliberately re-calibrate, superficially seeming to aim “off-target” but in a way that makes them more likely to hit. Finally, a master archer will automatically adjust as needed, doing what (to her) seems obviously how to hit the target, though to a naïve observer it might look like she was aiming awry. Is the best way to be a successful archer on a windy day to stop even trying to hit the target? Surely not. (It's conceivable that an evil demon might interfere in such a way as to make this so — i.e., so that only people genuinely trying to miss would end up hitting the target — but that's a much weirder case than what we're talking about.) The point is just that naïve targeting is likely to miss. Making appropriate adjustments to one's aim (overriding naive judgments of how to achieve the goal) is not at all the same thing as abandoning the goal altogether. And so it goes in ethics. Crudely calculating the expected utility of (e.g.) murdering your rivals and harvesting their vital organs, and naively acting upon such first-pass calculations, would be predictably disastrous. This doesn't mean that you should abandon the goal of doing good. It just means that you should pursue it in a prudent rather than naive manner. Metacoherence prohibits naïve utilitarianism “But doesn't utilitarianism direct us to maximize expected value?” you may ask. Only in the same way that norms of archery direct our archer to hit the target. There's nothing in either norm that requires (or even permits) it to be pursued naively, without obviously-called-for bias adjustments. This is something that has been stressed by utilitarian theorists from Mill and Sidgwick through to R.M. Hare, Pettit, and Railton—to name but a few. Here's a pithy listing from J.L. Mackie of six reasons why utilitarians oppose naïve calculation as a decision procedure: Shortage of time and energy will in general preclude such calculations. Even if time and energy are available, the relevant information commonly is not. An agent's judgment on particular issues is likely to be distorted by his own interests and special affections. 
Even if he were intellectually able to determine the right choice, weakness of will would be likely to impair his putting of it into effect. Even decisions that are right in themselves and actions based on them are liable to be misused as precedents, so that they will encourage and seem to legitimate wrong actions that are superficially similar to them. And, human nature being what it is, a practical working morality must not be too demanding: it is worse than useless to set standards so high that there is ...

    EA - IMPCO, don't injure yourself by returning FTXFF money for services you already provided by EliezerYudkowsky

    Play Episode Listen Later Nov 12, 2022 11:37


    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: IMPCO, don't injure yourself by returning FTXFF money for services you already provided, published by EliezerYudkowsky on November 12, 2022 on The Effective Altruism Forum. In my possibly contrarian opinion, and speaking as somebody who I don't think actually got any money directly from FTX Future Fund that I can recall; also speaking for myself and hastily, having not run this post past any other major leaders in EA: You are not obligated to return funding that got to you ultimately by way of FTX; especially if it's been given for a service you already rendered, any more than the electrical utility ought to return FTX's money that's already been spent on electricity; especially if that would put you to hardship. This is not a for-the-greater-good argument; I don't think you're obligated to that much personal martyrdom in the first place, just like the utility company isn't so obligated. It's fine to be somebody who sells utilons for money, just like utilities sell electricity for money. People who work in the philanthropic sector, and don't capture all of the gain they create, do not thereby relinquish the solidity of their claim to the part of that gain they do capture, to below the levels of an electrical utility's claim to similar money. The money you hold could maybe possibly compensate some FTX users - if it doesn't just get immediately captured by users selling NFTs to Bahamian accounts, or the equivalent in later bankruptcy proceedings - but that's equally true of the electrical utility's money, or, heck, money held by any number of people richer than you. Plumbers who worked on the FTX building should likewise not anguish about that and give the money back; yes, even though plumbers are probably well above average in income for the Bahamas. You are not more deeply implicated in FTX's sins, by way of the FTX FF connection, than the plumber who worked directly on their building. I don't like the way that some people think about charity, like anybody who works in the philanthropic sector gives up the right to really own anything. You can try to do a little good in the world, or even sell a little good into the world, without signing up to be the martyr who gets all the blame for not being better when something goes wrong. You probably forwent some of your possible gains to work in the charity sector at all, and took a bit of a generally riskier job. (If you didn't know that, my condolences.) You may suddenly need to look for a new job. You did not sign away your legal or moral right to keep title to money that was already given you, if you've already performed the corresponding service, or you're still going to perform that service. If you can't perform the service anymore, then maybe return some of that money once it's clear that it'll actually make its way to FTX customers; but keep what covers the cost of a month to re-search for a job, or the month you already spent searching for that job. It's fine to call it your charity and your contribution that you undercharge for those utilons and don't capture as much value as you create - if you're nice enough to do that, which you don't have to be, you can be Lawful Neutral instead of Lawful Good and I won't think you're as cool but I'll still happily trade with you.
(I apologize for resorting to the D&D alignment chart, at this point, but I really am not sure how to compactly express these ideas without those concepts, or concepts that I could define more elaborately that would mean the same thing.) That you're trying to be some degree of Good, by undercharging for the utilons you provide, doesn't mean you can't still hold that money that you got in Lawful Neutral exchange for the services you sold. Just like any ordinary plumber is entitled to do, if it turned out they unwittingly fixed the toilet of a bad bad pers...

    LW - Speculation on Current Opportunities for Unusually High Impact in Global Health by johnswentworth

    Play Episode Listen Later Nov 11, 2022 6:53


    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Speculation on Current Opportunities for Unusually High Impact in Global Health, published by johnswentworth on November 11, 2022 on LessWrong. Epistemic Status: armchair speculation from a non-expert. Short version: I expect things to get pretty bad in the Sahel region over the next year in particular. The area is an obvious target for global health interventions even in good times, and impact is presumably higher in bad times. A simple baseline intervention: fill a backpack with antibiotics, fly to the region, and travel around distributing the antibiotics. What's The “Sahel” Region? The Sahel is a semi-arid region along the southern edge of the Sahara desert. Think roughly Mali, Niger, Chad and Sudan. Bad How? Based on statistics on the Sahel, it's one of the few remaining regions on Earth where the population is near Malthusian equilibrium. Fertility is high, contraception is rare; about half the population is under age 16. Infant mortality is around 6-8%, and ~a quarter of children are underweight. (Source: CIA World Factbook entries on Mali, Niger, Chad and Sudan.) Being near Malthusian equilibrium means that, when there's an economic downturn, a substantial chunk of the population dies. Die How? Traditional wisdom says: war, famine, disease. In this case, I'd expect famine to be the main instigator. Empty bellies then induce both violence and weak immune systems. On priors, I'd expect infectious disease to be the main proximate killer. The Next Year In Particular? The global economy has been looking rough, between the war in Ukraine shocking oil and food markets, and continuing post-Covid stagflation. Based on pulling a number out of my ass without looking at any statistics, I'd guess deaths from violence, starvation, and disease in the Sahel region will each be up an order of magnitude this year/next year compared to a good year (e.g. the first-quartile best year in the past decade). That said, the intervention we'll talk about is probably decently impactful even in a good year. So What's To Be Done? Just off the top of my head, one obvious baseline plan is: Fill a hiking backpack with antibiotics (buy them somewhere cheap!). Fly to N'Djamena or take a ferry to Timbuktu. Obtain a motorbike or boat. Travel around giving away antibiotics until you run out. Repeat. Note that you could, of course, substitute something else for "antibiotics" - maybe vitamins or antifungals or water purification tablets or iron supplements or some mix of those is higher marginal value. There are some possibly-nonobvious considerations here. First, we can safely assume that governments in the area are thoroughly corrupt at every level, and presumably the same goes for non-government bureaucracies; trying to route through a local bureaucratic machine is a recipe for failure. Thus, the importance of being physically present and physically distributing things oneself. On the other hand, physical safety is an issue, even more so if local food insecurity induces local violence or civil war. (That said, lots of Westerners these days act like they'll be immediately assaulted the moment they step into a “bad neighborhood” at night. Remember, folks, the vast majority of the locals are friendly the vast majority of the time, especially if you're going around obviously helping people. You don't need to be completely terrified of foreign territory. 
    But, like, don't be completely naive about it either.) Also, it is important to explain what antibiotics are for and how to use them, and there will probably be language barriers. Literacy in these regions tends to be below 50%, and presumably the rural regions which most need the antibiotics also have the lowest literacy rates. How Much Impact? I'm not going to go all the way to estimating QALYs/$ here, but according to this source, the antibiotic impor...

    LW - Instrumental convergence is what makes general intelligence possible by tailcalled

    Play Episode Listen Later Nov 11, 2022 6:48


    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Instrumental convergence is what makes general intelligence possible, published by tailcalled on November 11, 2022 on LessWrong. TL;DR: General intelligence is possible because solving real-world problems requires solving common subtasks. Common subtasks are what give us instrumental convergence. Common subtasks are also what make AI useful; you want AIs to pursue instrumentally convergent goals. Capabilities research proceeds by figuring out algorithms for instrumentally convergent cognition. Consequentialism and search are fairly general ways of solving common subtasks. General intelligence is possible because solving real-world problems requires solving common subtasks No-free-lunch theorems assert that any cognitive algorithm is equally successful when averaged over all possible tasks. This might sound strange, so here's an intuition pump. Suppose you get a test like 2+2 = _ 32 = _ and so on. One cognitive algorithm would be to evaluate the arithmetic expression and fill the answer in as the result. This algorithm seems so natural that it's hard to imagine how the no-free-lunch theorem could apply to this; what possible task could ever make arithmetic score poorly on questions like the above? Easy: While an arithmetic evaluator would score well if you e.g. get 1 point for each expression you evaluate arithmetically, it would score very poorly if you e.g. lose 1 point for each expression you evaluate arithmetically. This doesn't matter much in the real world because you are much more likely to encounter situations where it's useful to do arithmetic right than you are to encounter situations where it's useful to do arithmetic wrong. No-free-lunch theorems point out that when you average all tasks, useful tasks like "do arithmetic correctly" are perfectly cancelled out by useless tasks like "do arithmetic wrong"; but in reality you don't average over all conceivable tasks. If there were no correlations between subtasks, there would be no generally useful algorithms. And if every goal required a unique algorithm, general intelligence would not exist in any meaningful sense; the generally-useful cognitions are what constitutes general intelligence. Common subtasks are what give us instrumental convergence Instrumental convergence basically reduces to acquiring and maintaining power (when including resources under the definition of power). And this is an instance of common subtasks: lots of strategies require power, so a step in lots of strategies is to accumulate or preserve power. Therefore, just about any highly capable cognitive system is going to be good at getting power. "Common subtasks" views instrumental convergence somewhat more generally than is usually emphasized. For instance, instrumental convergence is not just about goals, but also about cognitive algorithms. Convolutions and big matrix multiplications seem like a common subtask, so they can be considered instrumentally convergent in a more general sense. I don't think this is a major shift from how it's usually thought of; computation and intelligence are usually considered as instrumentally convergent goals, so why not algorithms too? Common subtasks are also what make AI useful; you want AIs to pursue instrumentally convergent goals The logic is simple enough: if you have an algorithm that solves a one-off task, then it is at most going to be useful once. 
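    As a concrete illustration of the arithmetic-test intuition pump above (my own sketch, not from the post; the test items are made up): the same "just evaluate the expression" policy scores maximally under a rule that rewards correct answers and minimally under the mirror-image rule that penalizes them, so averaged over both tasks its advantage cancels out, which is the no-free-lunch point.

```python
# Two mirror-image scoring tasks over the same (made-up) arithmetic test items.
items = [("2+2", 4), ("3*2", 6), ("10-7", 3)]

def evaluate_arithmetic(expr: str) -> int:
    # The "natural" cognitive algorithm: actually compute the expression.
    return eval(expr)

def score(reward_sign: int) -> int:
    # reward_sign = +1: gain a point per correct answer (the useful task).
    # reward_sign = -1: lose a point per correct answer (the perverse mirror task).
    return sum(reward_sign * int(evaluate_arithmetic(expr) == answer)
               for expr, answer in items)

print(score(+1), score(-1), (score(+1) + score(-1)) / 2)  # 3 -3 0.0
# Averaged over both tasks, correct evaluation does no better than, say, always
# guessing 0 -- averaging over "all possible tasks" washes out every algorithm.
```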
Meanwhile, if you have an algorithm that solves a common task, then that algorithm is commonly useful. An algorithm that can classify images is useful; an algorithm that can classify a single image is not. This applies even to power-seeking. One instance of power-seeking would be earning money; indeed an AI that can autonomously earn money sounds a lot more useful than one that cannot. It even applies to "dark" power-seeking, like social manipulatio...

    EA - Thoughts on FTX and returning to our ideals by michel

    Play Episode Listen Later Nov 11, 2022 4:10


    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Thoughts on FTX and returning to our ideals, published by michel on November 11, 2022 on The Effective Altruism Forum. I can't work today. I knew I wouldn't be able to work as soon as I woke up. The ordinary tasks I'd dedicate myself to feel so distant from the shitshow unfolding as I sit at my desk. The news of FTX's unraveling hit me hard. Most immediately, I feel for the people who entrusted their life savings to an organization that didn't deserve their trust. It's easy to extrapolate from the Twitter updates I scroll past of people's net worth halving in hours, of people frantically trying to withdraw money they're not sure exists anymore: people's lives just got ruined. Families will struggle to send their kids to college. Young adults will need to take on new jobs to stay afloat. Poor parents who looked at FTX for a way out will lose their trust and so much more in a system that continues to fail them. I hope those responsible for gambling money that wasn't theirs to gamble are held responsible, and I dearly hope FTX can find a way to repay the people who trusted them. I grieve for those who trusted in FTX, and that includes people in the effective altruism community. We're not the victims – we benefited incredibly from Sam Bankman-Fried and others at FTX over the past years. But, to my knowledge, we had no idea that what appears to be fraud at this level was a possibility when we signed onto a billionaire's backing. Money changes the world, and I don't hate us for getting it. Up until this week, I had a very favorable impression of Sam Bankman-Fried. I saw him as an altruist who encapsulated what it meant to think big. No longer; doing good means acting with integrity. This feels like the moment that you learn that a childhood hero of yours might be no hero after all. A lot just changed. Projects from people in the effective altruism community that I think would have genuinely improved the world, like pandemic preparedness initiatives and charity startups, may be delayed for years – or never arrive at all. Our community's entire trust networks, emphasis on ambition, and expectation that standout ideas need not be held back by insufficient funds feel as if they're beginning to shake, and it's not clear how much they'll withstand over the coming months. At a personal level, too, those grant applications I've been waiting on to fund my past months of independent work seem awfully precarious. I know I'll be fine, but others won't be, including the people alive today and far beyond tomorrow who aspiring effective altruists are trying to help. That's something that weighs on me: while so much feels like it's changed, the problems in this world haven't. People will still die from preventable diseases today; we'll still experience a background noise of nuclear armageddon threats tomorrow; and emerging technologies coming in the next decades could still pose a threat to all sentient life. It's in this context – a community that implicitly trusted FTX in their efforts to do good being shaken up and the world's problems staying awfully still – that I'm drawn back to effective altruism's core project: trying to improve the world as much as we can with the resources that we have. Amidst everything shaking right now, I notice my personal commitment to effective altruism's ideals standing still. 
And I can picture my friends' commitments to effective altruism's ideals standing steadfast too. Over the past two years, engaging with this community has introduced me to some of the most incredible, kind-hearted people I know. I haven't talked to many of them as I've tried to gather my thoughts over the past day, but I bet they too are still committed to creating a world better than the one they were born into. Absolutely, we'll all need to reassess how we bring our altruistic proj...

    EA - Another FTX post: suggestions for change by SaraAzubuike

    Play Episode Listen Later Nov 11, 2022 4:07


    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Another FTX post: suggestions for change, published by SaraAzubuike on November 11, 2022 on The Effective Altruism Forum. I suggested that we would have trouble with FTX and funding around 4 months ago. SBF has been giving lots of money to EA. He admits it's a massively speculative bubble. Crypto crash hurts the most vulnerable, because poor uneducated people put lots of money into it (Krugman). Crypto is currently small, but should be regulated and has potential contagion effects (BIS). EA as a whole is getting loose with its money due to large crypto flows (MacAskill). An inevitable crypto crash leads to either a) bad optics leading to less interest in EA or b) lots of dead projects. It was quite obvious that this would happen--although the specific details with Alameda were not obvious. Stuart Buck and Blonergan (and a few others) were the only ones who took me seriously at the time. Below are some suggestions for change. 1. The new button of "support" is great, but I think EA forum should have a way to sort by controversiality. And, have the EA forum algorithm occasionally (some ϵ% of the time) punt controversial posts back upwards to the front page. If you're like me, you read the forum sorted by Magic (New and Upvoted). But this promotes herd mentality. The red-teaming and self-criticism is excellent, but if the only way we aggregate how "good" a post is is by up-votes, that is flawed. Perhaps the best way to know that criticism has touched a nerve is to compute a fraction: how many members of the community disagree vs how many agree. (Or, even better, if you are in an organization, use a weighted fraction, where you put lower weight on the people in the organization that are in positions of power (obviously difficult to implement in practice)) 2. More of you should consider anonymous posts. This is EA forum. I cannot believe that some of you delete your posts simply because they end up being downvoted. Especially if you're working higher up in an EA org, you ought to be actively voicing your dissent and helping to monitor EA. For example, this is not good: "Members of the mutinous cohort told me that the movement's leaders were not to be taken at their word—that they would say anything in public to maximize impact. Some of the paranoia—rumor-mill references to secret Google docs and ruthless clandestine councils—seemed overstated, but there was a core cadre that exercised control over public messaging; its members debated, for example, how to formulate their position that climate change was probably not as important as runaway A.I. without sounding like denialists or jerks." (New Yorker) What makes EA, EA, what makes EA antifragile, is its ruthless transparency. If we are self-censoring because we have already concluded something is super effective, then there is no point in EA. Go do your own thing with your own money. Become Bill Gates. But don't associate with EA. 3. Finances should be partially anonymized. If an EA org receives some money above a certain threshold from an individual contribution, we should be transparent in saying that we will reject said money if it is not donated anonymously. You may protest that this would decrease the number of donations by rich billionaires. But take it this way: if they donate to EA, it's because they believe that EA can spend it better. 
Thus, they should be willing to donate anonymously, to not affect how EA spends money. If they don't donate to EA, then they can establish a different philanthropic organization and hire EA-adjacent staff, making for more competition. [Edit--see comments/revision] Revision: Blonergan took my previous post very seriously--apologies. Anonymizing finances may not be the best option. I am clearly naive about the legal implications. Perhaps other members of the community have suggestions about h...

    EA - Under what conditions should FTX grantees voluntarily return their grants? by sawyer

    Play Episode Listen Later Nov 11, 2022 0:49


    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Under what conditions should FTX grantees voluntarily return their grants?, published by sawyer on November 11, 2022 on The Effective Altruism Forum. There is a possibility of clawbacks, in which case orgs could be legally obligated to return funds. But in cases in which we're not legally required to return a grant, could it still be morally good to do so? I think it could be useful to discuss this before even the basic details of the case have been settled, since there will be a high potential for motivated reasoning in favor of keeping the grants. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

    EA - EA Images by Bob Jacobs

    Play Episode Listen Later Nov 11, 2022 2:23


    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Images, published by Bob Jacobs on November 11, 2022 on The Effective Altruism Forum. A while back there was a contest on this forum to design a flag for utilitarianism. Since getting money as an artist is very difficult, I entered hoping to win the prize money. However, after I submitted my design, the organizer changed the rules, making my design retroactively ineligible. The organizer later deleted the posts, which not only means that I can no longer get the prize money, but also that my work is no longer visible on the site. Therefore, I decided to make this post to showcase not only these flag designs, but also some other works I made for EA but hadn't posted on the forum before. Flag of utilitarianism: Yellow stands for happiness, that which utilitarianism pursues. White stands for morality, that which utilitarianism is. The symbol is a sigma, since utilitarians care about the sum of all utility. The symbol is also an hourglass, since utilitarians care about the (longterm) future consequences. If you don't like the rounded design, I also have a more angular design: Logo EA Brussels: It displays the Atomium, a famous building in Brussels: Logo EA Ghent: It incorporates elements from the logo of the University of Ghent (where I organize my group): And here is a banner I made for the Facebook group (In Dutch you write "Gent"): I also made a bunch of banners and thumbnails for sequences on this site (although a lot of them are uncredited) and the images for the discussion norms. Lastly, I made a symbol for slack/scout-mindset and moloch/soldier-mindset: There is a balance between Moloch (which I think of as the forces of exploitation) and Slack (which I think of as the forces of exploration). Scott Alexander writes: "Think of slack as a paradox – the Taoist art of winning competitions by not trying too hard at them. Moloch and Slack are opposites and complements, like yin and yang. Neither is stronger than the other, but their interplay creates the ten thousand things." Here is a Taijitu of Moloch and Slack as created by the inadequate equilibria and the hillock: You can find a higher resolution image and an SVG file of this symbol here. If you want a graphic design for your EA projects, feel free to message me. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

    EA - My reaction to FTX: appalled by Robert Wiblin

    Play Episode Listen Later Nov 11, 2022 3:10


    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: My reaction to FTX: appalled, published by Robert Wiblin on November 11, 2022 on The Effective Altruism Forum. I thought I would repost this thread I wrote for Twitter. I've been waiting for the Future Fund people to have their say, and they have all resigned. So now you can hear what I think. I am appalled. If media reports of what happened are at all accurate, what at least two people high up at FTX and Alameda have done here is inexcusable. Making risky trades with depositors' funds without telling them is grossly immoral. (I'm gripped reading the news and Twitter like everyone else and this is all based on my reading between the lines. I also speak only for myself here.) Probably some story will come out about why they felt they had no choice, but one always has a choice to act with integrity or not to. One or more leaders at FTX have betrayed the trust of everyone who was counting on them. Most importantly FTX's depositors, who didn't stand to gain on the upside but were unwittingly exposed to a massive downside and may lose savings they and their families were relying on. FTX leaders also betrayed investors, staff, collaborators, and the groups working to reduce suffering and the risk of future tragedies that they committed to help. No plausible ethics permits one to lose money trading then take other people's money to make yet more risky bets in the hope that doing so will help you make it back. That basic story has blown up banks and destroyed lives many times through history. Good leaders resist the temptation to double down, and instead eat their losses up front. In his tweets Sam claims that he's working to get depositors paid back as much as possible. I hope that is his only focus and that it's possible to compensate the most vulnerable FTX depositors to the greatest extent. To people who have quit jobs or made life plans assuming that FTX wouldn't implode overnight, my heart goes out to you. This situation is fucked, not your fault and foreseen by almost no one. To those who quit their jobs hoping to work to reduce suffering and catastrophic risks using funds that have now evaporated: I hope that other donors can temporarily fill the gap and smooth the path to a new equilibrium level of funding for pandemic prevention, etc. I feel it's clear mistakes have been made. We were too quick to trust folks who hadn't proven they deserved that level of confidence. One always wants to believe the best about others. In life I've mostly found people to be good and kind, sometimes to an astonishing degree. Hindsight is 20/20 and this week's events have been frankly insane. But I will be less trusting of people with huge responsibilities going forward, maybe just less trusting across the board. Mass destruction of trust is exactly what results from this kind of wrong-doing. Some people are saying this is no surprise, as all of crypto was a Ponzi scheme from the start. I'm pretty skeptical of crypto having many productive applications, but there's a big dif between investing in good faith in a speculative unproven technology, and having your assets misappropriated from you. The first (foolish or not) is business. The second is illegal. I'll have more to say, maybe after I calm down, maybe not. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

    EA - If EA has overestimated its projected funding, which decisions must be revised? by strawberry calm

    Play Episode Listen Later Nov 11, 2022 0:56


    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: If EA has overestimated its projected funding, which decisions must be revised?, published by strawberry calm on November 11, 2022 on The Effective Altruism Forum. FTX, a big source of EA funding, has imploded. There's mounting evidence that FTX was engaged in theft/fraud, which would be straightforwardly unethical. There's been a big drop in the funding that EA organisations expect to receive over the next few years. Because these organisations were acting under false information, they would've made (ex-post) wrong decisions, which they will now need to revise. Which revisions are most pressing? Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

    LW - The optimal angle for a solar boiler is different than for a solar panel by Yair Halberstadt

    Play Episode Listen Later Nov 11, 2022 2:53


    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The optimal angle for a solar boiler is different than for a solar panel, published by Yair Halberstadt on November 10, 2022 on LessWrong. I have both photovoltaic panels and a solar boiler on my roof. The solar boiler is in front, and the photovoltaic panels behind. You'll notice that they're at very different angles - the panel for the solar boiler is at approximately 55 degrees, whereas the photovoltaic panels are at about 10-20 degrees. Why? A solar boiler works by running water through a thin black panel, which gets very hot in the sun, and heats up the water. The hot water is then stored in a tank which can keep it hot for up to about a day. A photovoltaic panel generates electricity from sunlight. In Israel, this electricity is fed directly into the grid, and we get paid 0.48 ILS per kWh. For perspective, we pay about 0.51 ILS per kWh, so we get a pretty good price - there's no point storing the electricity in a battery to make sure we use it ourselves. Both require direct sunlight to work, and both should be directly facing the sun to maximize efficiency. However, the use cases for both are completely different: For the photovoltaic panels, I want to maximize total energy produced. Since the sun moves around over the course of the day, the panels face south to maximize time in direct sunlight. And since the angle of the sun when it's in the south changes throughout the year, you'll want to angle the panels at about the average angle of the sun when it's in the south. This is equivalent to your latitude - in Israel 31 degrees. There are a few complexities: You'll want to capture some light the rest of the day, which pushes for the angle to be steeper. During the winter the sky is cloudy and the sun is weak, so you won't be able to capture much energy anyway. This pushes for the panel to be flatter, to optimize for the summer. In practice it probably roughly evens out. In Israel the difference in efficiency between a completely flat panel and one at the optimal angle is only 10%. For a solar boiler, however, my aim is completely different. I can't heat more than a single tank's worth of hot water, which is enough for my family of 4 and a couple of guests to have hot water, as well as plenty left over for washing up and miscellaneous use. During the summer we easily get this. Thus there's no point optimizing the solar boiler for summer. Instead it has to be at a steep angle so that we still get a decent amount of hot water during the winter. In practice this works quite well, and we usually only need to turn on the electric boiler if we have guests or it's an especially cloudy day. The same considerations apply if you're in a country where you can't sell PV power back to the grid - you'll get more peak energy than you'll use during the summer, so it's worth optimizing the angle for other times of day and other times of year. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
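    As an aside on the geometry in this episode, a toy calculation makes the trade-off concrete (my own sketch, not from the post: a bare cosine-of-incidence model for a south-facing panel, with assumed noon sun elevations for roughly 31 degrees latitude, ignoring clouds and diffuse light):

```python
import math

def captured_fraction(sun_elevation_deg: float, panel_tilt_deg: float) -> float:
    """Fraction of direct-beam sunlight a south-facing panel captures when the
    sun is due south at the given elevation (simple cosine-of-incidence model)."""
    incidence = abs(90 - sun_elevation_deg - panel_tilt_deg)  # angle from the panel's normal
    return max(0.0, math.cos(math.radians(incidence)))

# Approximate noon sun elevations at ~31 degrees latitude (summer/winter solstice):
summer_noon, winter_noon = 82, 36  # degrees

for tilt in (15, 31, 55):  # flat-ish PV panel, latitude tilt, steep boiler panel
    print(f"tilt {tilt:2d}: summer {captured_fraction(summer_noon, tilt):.2f}, "
          f"winter {captured_fraction(winter_noon, tilt):.2f}")
# tilt 15: summer 0.99, winter 0.78
# tilt 31: summer 0.92, winter 0.92
# tilt 55: summer 0.68, winter 1.00
```

    Under these assumptions the numbers line up with the post's observations: a fairly flat photovoltaic panel gives up little at summer noon, the latitude tilt balances the year, and the roughly 55 degree boiler panel trades surplus summer output for near-optimal winter capture.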
