Podcasts about EA Global

  • 12 podcasts
  • 362 episodes
  • 30m average duration
  • 1 new episode per month
  • Latest episode: Nov 12, 2024

POPULARITY (2017–2024)


Best podcasts about EA Global

Latest podcast episodes about EA Global

Effecting Our Altruism
This tiny organization lets Canadian donors maximize their impact

Nov 12, 2024 | 40:04


This is what EAs talk about at afterparties. Or rather, afterparties of afterparties. Tax deductibility is not the sexiest topic I could find an interviewee for, but if you want to take a stand for your favourite effective charities, why would you be a sucker and pay more in taxes than the one who did their due diligence? SK is one of those soft-spoken and impossibly practical people you may never get the chance to meet. I learn a lot. Most importantly, how to hack the buffet line when it's all-you-can-eat. The key message I want people to come away with after listening to this episode is the importance of due diligence, and also that EA Global events are dope. Enjoyed it? We recorded 2 fresh episodes at EAG Boston and they'll be out this month. All mistakes in this are mine. Please don't hesitate to reach out to me if you'd like me to make any corrections.
Volunteer with RC Forward: https://docs.google.com/forms/d/e/1FAIpQLSf9FYiCu6ietnvKeIrgtKmn3hAARPcxGtT5uYYRQu9ZU-ELkQ/viewform
RC Forward https://givingwhatwecan.org/
Center For Social Innovation https://socialinnovation.org/
Watch the full video episode on YouTube: https://www.youtube.com/@-effectingouraltruism
Listen to the full audio podcast: https://pod.link/1754081644
Shoutout to Dina from Animal Justice for the production support! https://animaljustice.ca/

The Nonlinear Library
EA - Who would you like to see speak at EA Global? by Jordan Pieters

Aug 8, 2024 | 2:21


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Who would you like to see speak at EA Global?, published by Jordan Pieters on August 8, 2024 on The Effective Altruism Forum.
I'm Jordan, I recently joined as the Content Coordinator on the EA Global team at CEA, and I'd love to hear from the community about what content you'd like to see at future conferences. You can see upcoming conference dates here.
How do we usually select content? Traditionally, our content selection focuses on:
  • Informing attendees about important developments in relevant fields (eg. founders discussing new organisations or projects, researchers sharing their findings)
  • Diving deeper into key ideas with experts
  • Teaching new skills relevant to EA work
Some recent sessions that were well-received included:
  • Panel - When to shut down: Lessons from implementers on winding down projects
  • Talk - Neela Saldanha: Improving policy take-up and implementation in scaling programs
  • Workshop - Zac Hatfield Dodds: AI Safety under uncertainty
However, we recognise that conference content can (and perhaps should) fulfil many other roles, so your suggestions shouldn't be constrained by how things have been done in the past.
What kinds of suggestions are we looking for? We welcome suggestions in various forms:
  • Specific speakers: Nominate people who you think would make great speakers (this can be yourself!).
  • Topic proposals: Suggest topics that you believe deserve more attention.
  • Session format ideas: Propose unique formats that could make sessions more engaging (e.g., discussion roundtables, workshops, debates).
To get an idea of what types of content we've had in the past, check out recordings from previous EA Global conferences. We have limited content slots at our conferences, which means we can't promise to follow up on every suggestion. However, every suggestion helps us better understand what our attendees want to see and can provide jumping-off points for new ideas.
How to Submit Your Suggestions:
  • Comment on this post and discuss your ideas with other forum users.
  • Fill out this form or email speakers@eaglobal.org if you'd prefer not to post publicly.
Your input can help shape future EAGs to be even more impactful. I look forward to hearing your suggestions! Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

The Nonlinear Library
EA - AEBr 2024 Summit Retrospective by Leo Arruda

Aug 7, 2024 | 31:31


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AEBr 2024 Summit Retrospective, published by Leo Arruda on August 7, 2024 on The Effective Altruism Forum. [Versão em português abaixo] This retrospective provides an overview of the AEBr 2024 Summit, highlighting successes and areas for improvement. We welcome your feedback and comments to help enhance future EA Brazil conferences. Bottom Line Up Front The AEBr Summit 2024 was a success. It had 220 attendees who felt welcome and satisfied, generating around 850 connections at a cost of $45.45 per attendee. Date: June 29, 2024 Venue: INOVA.USP, University of São Paulo, São Paulo, Brazil. The first Brazilian national EA conference ~220 people attended (62% were new to EA) 36 speakers 24 sessions including talks, panels, meetups, and office hours 5 organizations represented at the Career fair Feedback survey results: A 9,1/10 likelihood to recommend score An average of 6,1 new connections per person Photos and video! The AEBr Summit Overview The AEBr Summit took place on June 29, 2024, at the University of São Paulo, São Paulo, Brazil. It was EA Brazil's first national meeting, with contributions from numerous experts and organizations aligned with Effective Altruism, primarily from the animal cause sector ( Sinergia animal, Animal Equality, Forum Animal, Mercy for Animals, Sea Shepherd and Sociedade Vegetariana Brasileira), as well as health and development organizations like Doebem, Associação Brasileira de Psicólogos Baseadas em Evidências and Impulso Gov. The primary goal was to expand and strengthen the Brazilian EA community by inviting both newcomers and experienced members to join together for a day of inspiring talks, workshops, and networking. The event was funded by the Centre for Effective Altruism and supported by the Innovation Center of the University of São Paulo ( INOVA USP). Delicious vegan meals were provided by Sociedade Vegetariana Brasileira (SVB), contributing to the event's success. The expectation was to bring together 200 people, so it was an accomplishment to register 255 applicants, with 220 attending, among speakers and volunteers. Highlights The event featured 220 attendees 36 speakers 40 very dedicated volunteers 26 sessions ( event program in Portuguese) 13 talks on the main causes of Effective Altruism. 7 workshops. 4 Q&A sessions with experts. 2 meet ups. 5 organizations represented at the Career fair Event photos and video (many thanks to Ivan Martucci for the professional editing). Based on the feedback responses from 60 attendees (27%): Average participant satisfaction: 9.2 out of 10 in Likelihood to Recommend. This is higher than the average for EA Global and EAGx. An average of 6,1 new connections per person. This is lower than most EAG and EAGx events, but the event lasted only one day. 46% of connections were considered potentially 'impactful' by the attendees (Note: assessing the impact of connections can be subjective). Team The AEBr Summit core team comprised Juana Maria, Leo Arruda, Ivan Martucci, and Adriana Arauzo. They were supported by Ollie Base and Arthur Malone from the CEA events team. The core team worked remotely except for some site visits. Two members of the team worked together in person in São Paulo during the final week. While remote work was effective, in-person collaboration proved significantly easier, especially since some members didn't know each other well beforehand. 
In hindsight, it would have been beneficial for the team to meet and work in person earlier, ideally two weeks in advance of the summit rather than just one. There were 40 volunteers divided into various roles: communication, logistics, food and catering, room supervision, speaker liaison, admissions, reception, and community health. Most volunteers were already engaged in the community, but a few were completely new. Special thanks to Ivan...

The Nonlinear Library
EA - Upcoming EA conferences in 2024 and 2025 by OllieBase

Aug 5, 2024 | 3:56


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Upcoming EA conferences in 2024 and 2025, published by OllieBase on August 5, 2024 on The Effective Altruism Forum.
We're very excited to announce our EA conference schedule for the rest of this year and the first half of 2025. EA conferences will be taking place for the first time in Nigeria, Cape Town, Bengaluru, and Toronto, and returning to Berkeley, Sydney, and Singapore. EA Global: Boston 2024 applications are open, and close October 20. EAGxIndia will be returning this year in a new location: Bengaluru. See their full announcement here. EAGxAustralia has rebranded to EAGxAustralasia to represent the fact that many attendees will be from the wider region, especially New Zealand. We're hiring the teams for both EAGxVirtual and EAGxSingapore. You can read more about the roles and how to apply here. EA Global will be returning to the same venues in the Bay Area and London in 2025. Here are the full details:
EA Global
  • EA Global: Boston 2024 | November 1-3 | Hynes Convention Center | applications close October 20
  • EA Global: Bay Area 2025 | February 21-23 | Oakland Marriott
  • EA Global: London 2025 | June 6-8 | Intercontinental London (the O2)
EAGx
  • EAGxToronto | August 16-18 | InterContinental Toronto Centre | application deadline just extended, they now close August 12
  • EAGxBerkeley | September 7-8 | Lighthaven | applications close August 20
  • EA Nigeria Summit | September 7-8 | Chida Event Center, Abuja
  • EAGxBerlin | September 13-15 | Urania, Berlin | applications close August 24
  • EA South Africa Summit | October 5 | Cape Town
  • EAGxIndia | October 19-20 | Conrad Bengaluru | applications close October 5
  • EAGxAustralasia | November 22-24 | Aerial UTS, Sydney | applications open
  • EAGxVirtual | November 15-17
  • EAGxSingapore | December 14-15 | Suntec Singapore
We're aiming to launch applications for events later this year as soon as possible. Please go to the event page links above to apply. If you'd like to add EAG(x) events directly to your Google Calendar, use this link.
Some notes on these conferences:
  • EA Global conferences are run in-house by the CEA events team, whereas EAGx conferences (and EA summits) are organised independently by members of the EA community with financial support and mentoring from CEA.
  • EAGs have a high bar for admission and are for people who are very familiar with EA and are taking significant actions (e.g. full-time work or study) based on EA ideas.
  • Admissions for EAGx conferences and EA Summits are processed independently by the organizers. These events are primarily for those who are newer to EA and interested in getting more involved.
  • Please apply to all conferences you wish to attend - we would rather get too many applications for some conferences and recommend that applicants attend a different one than miss out on potential applicants to a conference.
  • We offer travel support to help attendees who are approved for an event but who can't afford to travel. You can apply for travel support as you submit your application. Travel support funds are limited (though will vary by event), and we can only accommodate a small number of requests. Find more info on our website.
Feel free to email hello@eaglobal.org with any questions, or comment below. You can contact EAGx organisers using the format [location]@eaglobalx.org (e.g. berkeley@eaglobalx.org and berlin@eaglobalx.org). Thanks for listening. 
To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

The Nonlinear Library
EA - The US-China Relationship and Catastrophic Risk (EAG Boston transcript) by EA Global

Jul 12, 2024 | 31:08


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The US-China Relationship and Catastrophic Risk (EAG Boston transcript), published by EA Global on July 12, 2024 on The Effective Altruism Forum. Introduction This post is a write-up of a panel discussion held at EA Global: Boston 2023 (27-29 October). The panel was moderated by Matthew Gentzel. Matthew currently co-leads Longview Philanthropy's program on nuclear weapons policy and co-manages the organization's Nuclear Weapons Policy Fund. He was joined by two other experts on US-China relations and related catastrophic risks: Tong Zhao, Senior Fellow for the Nuclear Policy Program and Carnegie China, Carnegie Endowment for International Peace Bill Drexel, Fellow for the Technology and National Security Program, Center for a New American Security Below is a transcript of the discussion, which we've lightly edited for clarity. The panelists covered the following main topics: Opening remarks summarizing the panelists' general views on the US-China relationship and related risks, with an initial focus on nuclear security before exploring other risks and dangerous technologies How to address different norms around sharing information Problems resulting from risk compensation Quick takes on which risks are overhyped and which are underhyped AI governance structures, the Chinese defense minister's dismissal, and the US's semiconductor export policies Ideas for calibrating how the US cooperates and/or competes with China Opening remarks Matthew: We'll start with opening remarks, then get into questions. Tong: Thank you so much. I think the catastrophic risk between the US and China is increasing, not least because the chance of serious military conflict between the two sides - most likely arising from a Taiwan Strait scenario - is growing. And in a major military conflict, the risk of nuclear escalation is certainly there. In a mostly strained scenario, this could lead to a nuclear winter if there's a massive nuclear exchange. Even a limited nuclear exchange or very serious conventional conflict between the two powers could destabilize the international geopolitical landscape and very negatively affect the normal development and progression of humanity. In the long run, I worry that both sides are preparing for a worst-case scenario of major conflict with each other, leading to de facto war mobilization efforts. In the case of China, strategists in Beijing are still worried that there is going to be an eventual showdown between the two sides. And therefore, China is working on developing the necessary military capabilities for that eventuality. It is developing its economic capacity to withstand international economic sanctions and its capability to influence the international narrative to avoid political isolation in a major crisis. And those efforts are leading to incremental decoupling in the economic and technological domains, as well as to general decoupling of policy expert communities on the two sides. As a result of this long-term competition and rivalry, I think long-term risks to humanity are generally downplayed. Part of China's recent policy change is a very rapid increase of its nuclear weapons capability. This does not necessarily mean that China aims to use nuclear weapons first in a future conflict. 
However, as China focuses on enhancing its nuclear and strategic military capabilities, it is paying less attention to the risks associated with such development. One example is China's increasing interest in having launch-under-attack or launch-on-warning nuclear capability. That means China will depart from its decades-long practice of maintaining a low-level status for its nuclear forces and shift towards a rapid-response posture, in which China's early warning system will provide Chinese leadership with a warning of any incoming missile attack. Before the in...

The Nonlinear Library
EA - 80,000 hours should remove OpenAI from the Job Board (and similar EA orgs should do similarly) by Raemon

Jul 3, 2024 | 11:03


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 80,000 hours should remove OpenAI from the Job Board (and similar EA orgs should do similarly), published by Raemon on July 3, 2024 on The Effective Altruism Forum. I haven't shared this post with other relevant parties - my experience has been that private discussion of this sort of thing is more paralyzing than helpful. I might change my mind in the resulting discussion, but, I prefer that discussion to be public. I think 80,000 hours should remove OpenAI from its job board, and similar EA job placement services should do the same. (I personally believe 80k shouldn't advertise Anthropic jobs either, but I think the case for that is somewhat less clear) I think OpenAI has demonstrated a level of manipulativeness, recklessness, and failure to prioritize meaningful existential safety work, that makes me think EA orgs should not be going out of their way to give them free resources. (It might make sense for some individuals to work there, but this shouldn't be a thing 80k or other orgs are systematically funneling talent into) There plausibly should be some kind of path to get back into good standing with the AI Risk community, although it feels difficult to imagine how to navigate that, given how adversarial OpenAI's use of NDAs was, and how difficult that makes it to trust future commitments. The things that seem most significant to me: They promised the superalignment team 20% of their compute-at-the-time (which AFAICT wasn't even a large fraction of their compute over the coming years), but didn't provide anywhere close to that, and then disbanded the team when Leike left. Their widespread use of non-disparagement agreements, with non-disclosure clauses, which generally makes it hard to form accurate impressions about what's going on at the organization. Helen Toner's description of how Sam Altman wasn't forthright with the board. (i.e. "The board was not informed about ChatGPT in advance and learned about ChatGPT on Twitter. Altman failed to inform the board that he owned the OpenAI startup fund despite claiming to be an independent board member, giving false information about the company's formal safety processes on multiple occasions. And relating to her research paper, that Altman in the paper's wake started lying to other board members in order to push Toner off the board.") Hearing from multiple ex-OpenAI employees that OpenAI safety culture did not seem on track to handle AGI. Some of these are public (Leike, Kokotajlo), others were in private. This is before getting into more openended arguments like "it sure looks to me like OpenAI substantially contributed to the world's current AI racing" and "we should generally have a quite high bar for believing that the people running a for-profit entity building transformative AI are doing good, instead of cause vast harm, or at best, being a successful for-profit company that doesn't especially warrant help from EAs. I am generally wary of AI labs (i.e. Anthropic and Deepmind), and think EAs should be less optimistic about working at large AI orgs, even in safety roles. But, I think OpenAI has demonstrably messed up, badly enough, publicly enough, in enough ways that it feels particularly wrong to me for EA orgs to continue to give them free marketing and resources. 
I'm mentioning 80k specifically because I think their job board seemed like the largest funnel of EA talent, and because it seemed better to pick a specific org than a vague "EA should collectively do something." (see: EA should taboo "EA should"). I do think other orgs that advise people on jobs or give platforms to organizations (i.e. the organization fair at EA Global) should also delist OpenAI. My overall take is something like: it is probably good to maintain some kind of intellectual/diplomatic/trade relationships with OpenAI, but bad to continue ...

Star Spangled Gamblers
Trump's VP Pick: Qualifications versus Politics

Jun 17, 2024 | 42:04


3-Part Episode
Part I: Pratik Chougule (@pjchougule), SSG Title Belt Champ Ben Freeman (@benwfreeman1), and Title Belt Challenger Alex Chan (@ianlazaran) debate whether Trump cares about qualifications in his VP decision, or whether it will come down to politics.
Part II: Doug Campbell (@tradeandmoney) analyzes how he won the 2023 Astral Codex Ten forecasting competition.
Part III: Saul Munn explains how to organize the forecasting community.
0:11: Pratik introduces VP segment
0:26: Pratik introduces Campbell segment
1:05: Pratik introduces Munn segment
4:09: VP segment begins
8:43: 2028 considerations
18:13: Campbell segment begins
22:55: Expertise and prediction
28:10: Interview with Munn begins
28:45: The importance of in-person events
29:00: Manifest's origins
30:46: The political gambling community
32:11: Transitioning from an online community
34:04: How to organize the forecasting community
36:05: University clubs
37:06: Reluctance to organize events
38:15: EA Global
40:59: Low bar to community-building
Bet on who Trump will select as his running mate at Polymarket, the world's largest prediction market, at polymarket.com.
SUPPORT US: Patreon: www.patreon.com/starspangledgamblers
FOLLOW US ON SOCIAL: Twitter: www.twitter.com/ssgamblers

The Nonlinear Library
LW - some thoughts on LessOnline by Raemon

May 9, 2024 | 7:17


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: some thoughts on LessOnline, published by Raemon on May 9, 2024 on LessWrong.
I mostly wrote this for facebook, but it ended up being a whole-ass post so I figured I'd put it here too. I'm helping run "LessOnline: A Festival of Writers Who Are Wrong On the Internet (But Striving To Be Less So)". I'm incentivized to say nice things about the event. So, grain of salt and all. But, some thoughts, which roughly break down into:
  • The vibe: preserving the cozy/spaciousness of a small retreat at a larger festival
  • The audience: "Reunion for the Extended Family Blogosphere, both readers and writers."
  • Manifest, and Summer Camp
...
I. The Vibe
I've been trying to explain the vibe I expect and it's tricksy. I think the vibe will be something like "CFAR Reunion meets Manifest." But a lot of people haven't been to a CFAR Reunion or to Manifest. I might also describe it like "the thing the very first EA Summit (before EA Global) was like, before it became EA Global and got big." But very few people went to that either. Basically: I think this will do a pretty decent job of having the feel of a smaller (~60 person), cozy retreat, but while being more like 200 - 400 people. Lightcone has run several ~60 person private retreats, which succeeded in being a really spacious intellectual environment, with a pretty high hit rate for meeting new people who you might want to end up having a several hour conversation with. Realistically, with a larger event there'll be at least some loss of "cozy/spaciousness", and a somewhat lower hit rate for people you want to talk to with the open invites. But, I think Lightcone has learned a lot about how to create a really nice vibe. We've built our venue, Lighthaven, with "warm, delightful, focused intellectual conversation" as a primary priority. Whiteboards everywhere, lots of nooks and a fractal layout that makes it often feel like you're in a secluded private conversation by a firepit, even though hundreds of other people are nearby (often at another secluded private conversation with _their_ own firepit!) (It's sort of weird that this kind of venue is extremely rare. Many events are hotels, which feel vaguely stifling and corporate. And the nice spacious retreat centers we've used don't score well on the whiteboard front, and surprisingly not even that well on "lots of nooks")
...
Large events tend to use "Swap Card" for causing people to meet each other. I do find Swap Card really good for nailing down a lot of short meetings. But it somehow ends up with a vibe of ruthless efficiency - lots of back-to-back 30 minute meetings, instead of a feeling of organic discovery. The profile feels like a "job fair professional" sort of thing. Instead we're having a "Names, Faces, and Conversations" document, where people write in a giant google doc about what questions and ideas are currently alive for them. People are encouraged to comment inline if they have thoughts, and +1 if they'd be into chatting about it. Some of this hopefully turns into 1-1 conversations, and if more people are interested it can organically grow into "hey let's hold a small impromptu group discussion about that in the Garden Nook"
...
We'll also have a bunch of stuff that's just plain fun. We're planning a puzzle hunt that spans the event, and a dance concert led by the Fooming Shoggoths, with many songs that didn't make it onto their April 1st album. 
And the venue itself just lends itself to a feeling of whimsy and discovery. ... Another thing we're doing is encouraging people to bring their kids, and providing a day care to make that easier. I want this event to feel like something you can bring your whole life/self to. By default these sorts of events tend to not be very kid friendly. ... ... ... II. The Audience So that was a lot of words about The Vibe. The second question is "who a...

The Nonlinear Library
EA - Why you might be getting rejected from (junior) operations jobs by Eli Nathan

Apr 7, 2024 | 5:22


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why you might be getting rejected from (junior) operations jobs, published by Eli Nathan on April 7, 2024 on The Effective Altruism Forum. Most recruiters aren't likely to give candidates feedback, partially due to the volume of candidates, and many people repeatedly get rejected from jobs and do not know why. Below I list some common reasons I expect candidates might get rejected from (generally junior) ops-type jobs.[1] Similar principles might apply to other role types, though I can't really speak to these. I'm listing these reasons in no particular order.[2] 1. Quality of writing The writing quality in your application materials should be extremely high. This is partially because you're competing against candidates with strong writing skills, but also because: Good writing quality is often a strong signal of conscientiousness, attention to detail, and other traits that are important in most ops jobs. Many ops roles simply require you to write a lot, often to important stakeholders or to large audiences - clear and concise writing quality will be important in these cases. I expect a lot of people overrate their professional writing skills (I certainly used to). This isn't something people tend to get explicitly trained on and it requires different skills than you might learn in a literature class - a focus on language being clear and concise rather than emotive or descriptive. 2. Quality of work tests This is perhaps obvious and unhelpful, but your work tests should be completed to a very high standard. The strongest candidates will be paying attention to the smallest details, so you'll need to as well. In many ops roles you'll need to present polished work every now and then - to senior stakeholders, colleagues, or the wider public. Work tests are often a way to see if you can perform at this level, even if less-than-perfect work would be good enough for your day-to-day tasks. You should probably be intensely checking your work tests before you submit them, perhaps by aiming to finish in 80-90% of the allotted time, and using the remainder to go over your work. If the work test is writing-based, you may even want to consider reading it aloud and seeing if it flows well and makes sense. 3. Quality of application materials I think this generally matters less than the two points above, but I'd make sure your CV, LinkedIn, and cover letters are as polished as possible and clearly outline experience that's relevant for the role you're applying for. Again, this includes things like writing and formatting quality. You should also probably be asking someone to review your CV - I've asked family members to do this in the past. This is a more specific point, but I also notice that a lot of people's CVs are too vague. "I am the president of my EA group" can mean anything, the bar for founding an EA group is ~zero, and I expect many EA groups to be inactive. But being president of your EA group can also mean a lot: perhaps you run lots of workshops (give quantities and details) and get large amounts of funding. But if you don't tell the hiring manager this they won't know, and they're unlikely to spend extra time investigating. In a similar vein, I've noticed that some people's EA Global applications (a process similar-ish to hiring) mention that they've founded Organisation X, only for its website to contain minimal information. 
To be clear, Organisation X may be a very impressive and accomplished project! But you should assume that the hiring manager is not familiar with it and will not spend much time investigating. It's often best to briefly explain what your organisation does, how many people work there and what your responsibilities are. 4. Unclearly relevant experience or interests Hiring managers are often looking for certain types of candidates. Even if you're very ...

The Nonlinear Library
EA - EA Global: we're improving 1-on-1s by frances lorenz

Apr 1, 2024 | 4:50


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Global: we're improving 1-on-1s, published by frances lorenz on April 1, 2024 on The Effective Altruism Forum. Introducing our new 1-on-1 booking system EA Globals (EAGs) are weekend-long conferences that feature a variety of talks, panels, workshops, and networking opportunities related to effective altruism. Unlike other academic conferences, EAGs place a unique emphasis on 1-on-1 networking. Attendees are encouraged to connect via our event app, Swapcard, and book meetings to discuss potential projects, career goals, opportunities, and whatever else might be useful in a professional development context. Attendees often cite 1-on-1s among their most valuable experiences at the event. This year, the EAG team is hoping to test out new ideas. For example, we recently ran EA Global: Bay Area 2024, our first EAG specifically focussed on global catastrophic risks. In an attempt to generate novel experiments, the team decided to employ some first-principles reasoning to one of our key event components (see Figure 1). After attendees arrive at their 1-on-1, notice the chain of events becomes uncertain - our team (currently) has very little power to ensure each individual 1-on-1 is valuable. One could argue this is part of the nature of existence (i.e. some things are out of our control); however, our team worried this conclusion was suspiciously convenient. What might we be missing? To address this uncertainty, we are now assigning 1-on-1s based on a complex, astrological algorithm which uses robust celestial compatibility indicators. We feel confident in this change because: 1-on-1s are more likely to be a positive experience if both parties are cosmically guaranteed to get along. Your star sign determines much of your expected life trajectory, including your career. It follows that those with compatible star signs will have synergistic ideas and career paths; thus, early coordination is important and valuable. A concrete example Aanya is an Aries. Thus, she is partial to challenging the status quo. For example, through entrepreneurial ventures in alternative proteins or perhaps advocacy in the AI governance space. Taylor is a Libra. They value collaboration and harmony. Thus, they are suited to roles involving peacekeeping or managing international relations, perhaps also in a governance capacity. In a 1-on-1 meeting, we expect an Aries to bring bold ideas, which will then be tempered and balanced by the Libra's relational and strategic focus. According to the cosmos, this creates a perfect blend of bold action, which is also sustainable and feasible. In other words, the 1-on-1 is astrologically guaranteed to be valuable. Our algorithm proceeds to schedule a meeting on Swapcard at the optimal time, based on planet movement and relevant cosmic orientations. Under this system, we estimate a minimum likelihood of 60% that every meeting will result in at least one of: the creation of a new organisation, clarity on the worm wars, a communications strategy for EA, or a solution to the alignment problem (see Figure 2). Frequently asked questions Can I book 1-on-1s outside of the meetings that are assigned to me? Yes, you can still use your own free will to book 1-on-1s with attendees. However, Swapcard will auto-cancel a meeting if compatibility is too low. 
Please be mindful throughout the event as the team is trying to cultivate a trusting relationship with the cosmos. What if I have a 1-on-1 that isn't valuable? Can I provide you with feedback? This won't happen. Will you be applying this new strategy to content? By 2025, we hope to implement a system by which star compatibility between speakers and attendees is accounted for when attendees attempt to reserve a spot at a talk, workshop, or office hour. Will this apply to speed meetings? Yes, everyone will have a name tag w...

The Nonlinear Library
EA - There are no massive differences in impact between individuals by Sarah Weiler

Mar 14, 2024 | 32:00


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: There are no massive differences in impact between individuals, published by Sarah Weiler on March 14, 2024 on The Effective Altruism Forum. Or: Why aiming for the tail-end in an imaginary social impact distribution is not the most effective way to do good in the world "It is very easy to overestimate the importance of our own achievements in comparison with what we owe others." attributed to Dietrich Bonhoeffer, quoted in Tomasik 2014(2017) Summary In this essay, I argue that it is not useful to think about social impact from an individualist standpoint. I claim that there are no massive differences in impact between individual interventions, individual organisations, and individual people, because impact is dispersed across all the actors that contribute to the outcomes before any individual action is taken, all the actors that contribute to the outcomes after any individual action is taken, and all the actors that shape the taking of any individual action in the first place. I raise some concerns around adverse effects of thinking about impact as an attribute that follows a power law distribution and that can be apportioned to individual agents: Such a narrative discourages actions and strategies that I consider highly important, including efforts to maintain and strengthen healthy communities; Such a narrative may encourage disregard for common-sense virtues and moral rules; Such a narrative may negatively affect attitudes and behaviours among elites (who aim for extremely high impact) as well as common people (who see no path to having any meaningful impact); and Such a narrative may disrupt basic notions of moral equality and encourage a differential valuation of human lives in accordance with the impact potential an individual supposedly holds. I then reflect on the sensibility and usefulness of apportioning impact to individual people and interventions in the first place, and I offer a few alternative perspectives to guide our efforts to do good effectively. In the beginning, I give some background on the origin of this essay, and in the end, I list a number of caveats, disclaimers, and uncertainties to paint a fuller picture of my own thinking on the topic. I highly welcome any feedback in response to the essay, and would also be happy to have a longer conversation about any or all of the ideas presented - please do not hesitate to reach out in case you would like to engage in greater depth than a mere Forum comment :)! Context I have developed and refined the ideas in the following paragraphs at least since May 2022 - my first notes specifically on the topic were taken after I listened to Will MacAskill talk about "high-impact opportunities" at the opening session of my first EA Global, London 2022. My thoughts on the topic were mainly sparked by interactions with the effective altruism community (EA), either in direct conversations or through things that I read and listened to over the last few years. However, I have encountered these arguments outside EA as well, among activists, political strategists, and "regular folks" (colleagues, friends, family). My journal contains many scattered notes, attesting to my discomfort and frustration with the - in my view, misguided - belief that a few individuals can (and should) have massive amounts of influence and impact by acting strategically. 
This text is an attempt to pull these notes together, giving a clear structure to the opposition I feel and turning it into a coherent argument that can be shared with and critiqued by others. Impact follows a power law distribution: The argument as I understand it "[T]he cost-effectiveness distributions of the most effective interventions and policies in education, health and climate change, are close to power-laws [...] the top intervention is 2 or almost 3 orders of magni...

The Nonlinear Library
EA - Review of EA Global Bay Area 2024 (Global Catastrophic Risks) by frances lorenz

Mar 1, 2024 | 6:33


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Review of EA Global Bay Area 2024 (Global Catastrophic Risks), published by frances lorenz on March 1, 2024 on The Effective Altruism Forum.
EA Global: Bay Area (Global Catastrophic Risks) took place February 2-4. We hosted 820 attendees, 47 of whom volunteered over the weekend to help run the event. Thank you to everyone who attended and a special thank you to our volunteers - we hope it was a valuable weekend!
Photos and recorded talks
You can now check out photos from the event. Recorded talks, such as the media panel on impactful GCR communication, Tessa Alexanian's talk on preventing engineered pandemics, Joe Carlsmith's discussion of scheming AIs, and more, are now available on our YouTube channel.
A brief summary of attendee feedback
Our post-event feedback survey received 184 responses. This is lower than our average completion rate - we're still accepting feedback responses and would love to hear from all our attendees. Each response helps us get better summary metrics and we look through each short answer. To submit your feedback, you can visit the Swapcard event page and click the Feedback Survey button. The survey link can also be found in a post-event email sent to all attendees with the subject line, "EA Global: Bay Area 2024 | Thank you for attending!"
Key metrics
The EA Global team uses several key metrics to estimate the impact of our events. These metrics, and the questions we use in our feedback survey to measure them, include:
  • Likelihood to recommend (How likely is it that you would recommend EA Global to a friend or colleague with similar interests to your own? Discrete scale from 0 to 10, 0 being not at all likely and 10 being extremely likely)
  • Number of new connections[1] (How many new connections did you make at this event?)
  • Number of impactful connections[2] (Of those new connections, how many do you think might be impactful connections?)
  • Number of Swapcard meetings per person (This data is pulled from Swapcard)
  • Counterfactual use of attendee time (To what extent was this EA Global a good use of your time, compared to how you would have otherwise spent your time? A discrete scale ranging from "a waste of my time" to "10x the counterfactual")
The likelihood to recommend for this event was higher compared to last year's EA Global: Bay Area and our EA Global 2023 average (i.e. the average across the three EA Globals we hosted in 2023) (see Table 1). Number of new connections was slightly down compared to the 2023 average, while the number of impactful connections was slightly up. The counterfactual use of time reported by attendees was slightly higher overall than Boston 2023 (the first EA Global we used this metric at), though there was also an increase in the number of reports that the event was a worse use of attendees' time (see Figure 1).
Metric (average of all respondents) | EAG BA 2024 (GCR) | EAG BA 2023 | EAG 2023 average
Likelihood to recommend (0 - 10)    | 8.78              | 8.54        | 8.70
Number of new connections           | 9.05              | 11.5        | 9.72
Number of impactful connections     | 4.15              | 4.8         | 4.09
Swapcard meetings per person        | 6.73              | 5.26        | 5.24
Table 1. A summary of key metrics from the post-event feedback surveys for EA Global: Bay Area 2024 (GCRs), EA Global: Bay Area 2023, and the average from all three EA Globals hosted in 2023. 
Feedback on the GCRs focus 37% of respondents rated this event more valuable than a standard EA Global, 34% rated it roughly as valuable and 9% as less valuable. 20% of respondents had not attended an EA Global event previously (Figure 2). If the event had been a regular EA Global (i.e. not focussed on GCRs), most respondents predicted they would have still attended. To be more precise, approximately 90% of respondents reported having over 50% probability of attending the event in the absence of a GCR ...

The Nonlinear Library
EA - Upcoming EA conferences in 2024 by OllieBase

Feb 22, 2024 | 3:27


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Upcoming EA conferences in 2024, published by OllieBase on February 22, 2024 on The Effective Altruism Forum. In an unsurprising move, the Centre for Effective Altruism will be organising and supporting conferences for the EA community all over the world in 2024, including three new EAGx locations: Copenhagen, Toronto and Austin. We currently have the following events scheduled: EA Global EA Global: London | (May 31-June 2) | Intercontinental London (the O2) - applications close 19 May EA Global: Boston | (November 1-3) | Hynes Convention Center - applications close 20 October EAGx EAGxAustin | (April 13-14) | University of Texas, Austin - applications close 31 March EAGxNordics | (April 26-28) | CPH Conference, Copenhagen - applications close 7 April EAGxUtrecht | (July 5-7) | Jaarbeurs, Utrecht EAGxToronto | (August, provisional) EAGxBerkeley | (September, provisional) EAGxBerlin | (September 13-15) | Urania, Berlin EAGxAustralia | (November) | Sydney We also hope to announce an EAGxLondon for early April very soon. A university venue was tentatively booked for late March, but the venue asked to reschedule. We're in the process of finalising a new date. We also expect to announce more events throughout the year. Applications for EAG London, EAG Boston, EAGxNordics and EAGxAustin are open. Applications for EAGxLondon will open as soon as the date is confirmed. We expect applications for the other conferences to open approximately 3 months before the event. Please go to the event page links above to apply. If you'd like to add EAG(x) events directly to your Google Calendar, use this link. Some notes on these conferences: EA Globals are run in-house by the CEA events team, whereas EAGx conferences are organised independently by members of the EA community with financial support and mentoring from CEA. EA Global conferences have a high bar for admission and are for people who are very familiar with EA and are taking significant actions (e.g. full-time work or study) based on EA ideas. Admissions for EAGx conferences are processed independently by the EAGx conference organizers. These events are primarily for those who are newer to EA and interested in getting more involved. Please apply to all conferences you wish to attend once applications open - we would rather get too many applications for some conferences and recommend that applicants attend a different one than miss out on potential applicants to a conference. Travel support funds for events this year are limited (though will vary by event), and we can only accommodate a small number of requests. If you do not end up receiving travel support, this is likely the result of limited funds, rather than an evaluation of your potential for impact. When planning around an event, we recommend you act under the assumption that we will not be able to grant your travel funding request (unless it has already been approved). Find more info on our website. Feel free to email hello@eaglobal.org with any questions, or comment below. You can contact EAGx organisers using the format [location]@eaglobalx.org (e.g. austin@eaglobalx.org and nordics@eaglobalx.org). Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

The Nonlinear Library
LW - Things I've Grieved by Raemon

Feb 18, 2024 | 3:45


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Things I've Grieved, published by Raemon on February 18, 2024 on LessWrong. I think grieving is a fundamental rationality skill. Often, the difference between the Winning Move, and your Current Path, is that there is something really beautiful and good about your current path. Or there was something actually horrifying about reality that makes the Winning Move necessary. There is a skill to engaging with, but eventually letting go, of things that are beautiful and good but which you can't have right now. There is a skill to facing horror. I think these are a general skill, of looking at the parts of reality you don't want to accept, and... accepting them. When you are good at the skill, you can (often) do it quickly. But, I definitely recommend taking your time with cultivating that skill. My experience is that even when I thought I had grieved major things I would turn out to be wrong and have more processing to do. I originally wrote this list without commentary, as sort of elegant, poetic appendix to my previous post on Deliberate Grieving. But I was afraid people would misinterpret it - that they would think I endorsed simply letting things go and getting over them and moving on. That is an important end result, but trying to rush to that will tie yourself up in knots and leave you subtly broken. Each of the following included lots of listening to myself, and listening to reality, forming a best guess as to whether I actually did need to grieve the thing or if there were clever Third Options that allowed me to Have All The Things. Things I have grieved Relationships with particular people. The idea that I will ever get a satisfying closure on some of those relationships. The idea that I will get Justice in particular circumstances where I think I was wronged, but the effort to figure that out and get social consensus on the wrongness wasn't really worth anyone's time. Getting to have a free weekend that one particular time, where it became clear that, actually, the right thing for me to do that-particular weekend was to book last minute tickets for EA Global and fly to London. The idea that a rationalist community could work, in Berkeley in particular, in the way I imagined in 2017. The idea that it's necessarily the right strategy, to solve coordination problems with in-depth nuance, allowing persnickety rationalists to get along, and allowing companies to scale with a richness of humanity and complex goals.... ...instead of looking for ways to simplify interfaces, so that we don't need to coordinate around that nuance. You do often need nuanced strategies and communication, but they don't have the same shape I imagined in 2020. The idea that I get to live in a small, cute, village... doing small, cute, village things... and ignore the looming existential risk that threatens the village. That even though I decided that my morality would never demand that I be a hero... there nonetheless just isn't a coherent, enduring shape that fits my soul that doesn't make that the thing I ultimately want for myself. Even if it's hard. That idea that, despite feeling old and cynical... it doesn't actually seem that useful to feel old and cynical, and I should probably find some self-narrative that has whatever good things the Cynical Oldness is trying to protect, without the counterproductive bits. 
Most generally: That the world is big, and problems are many, and competent people are rare, and most long-lasting problems actually just require a someone quite intelligent, dedicated and agentic to solve them. Those people exist, and they are often motivated to solve various small and medium sized problems. But there are too many small and medium sized problems that nonetheless require a really competent person to deal with. There will often be small and medium sized problems tha...

The Nonlinear Library
EA - CEA is spinning out of Effective Ventures by Eli Nathan

Jan 18, 2024 | 1:54


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: CEA is spinning out of Effective Ventures, published by Eli Nathan on January 18, 2024 on The Effective Altruism Forum. The Centre for Effective Altruism (CEA) is spinning out as a project of Effective Ventures Foundation UK and Effective Ventures US (known collectively as Effective Ventures or EV) to become an independent organisation. As EV decentralises, we expect that bringing our operations in-house and establishing our own legal entities will better allow us to accomplish our mission and goals. We'd like to extend a deep thank you to the EV team for all their hard work in helping us to scale our programs, and in providing essential support and leadership over the last few years. Alongside our new CEO joining the team next month, this means that we're entering a new era for CEA. We're excited to build an operations team that can align more closely with our needs, as well as a governance structure that allows us to be independent and better matches our purpose. As EV's largest project and one with many complex and interwoven programs, we expect this spin-out process will take some time, likely between 12-24 months. This is because we'll need to set up new legal structures, hire new staff, manage visas and intellectual property, and handle various other items. We expect this spin-out to not affect the external operations of our core products, and generally not be particularly noticeable from the outside - EA Global and the EA Forum, for example, will continue to run as they would otherwise. We expect to start hiring for our new operations team over the coming months, and will have various job postings live soon - likely across finance, legal, staff support, and other areas. If you're potentially interested in these types of roles, you can fill out the expression of interest form here. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

The Nonlinear Library
EA - CEA is fundraising, and funding constrained by Ben West

The Nonlinear Library

Play Episode Listen Later Nov 20, 2023 14:54


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: CEA is fundraising, and funding constrained, published by Ben West on November 20, 2023 on The Effective Altruism Forum. Tl;dr The Centre for Effective Altruism (CEA) has an expected funding gap of $3.6m in 2024. Some example things we think are worth doing but are unlikely to have funding for by default: Funding a Community Building Grant in Boston Funding travel grants for EAG(x) attendees Note that these are illustrative of our current cost-effectiveness bar (as opposed to a binding commitment that the next dollar we receive will go to one of these things). In collaboration with EA Funds we have produced models where users can plug in their own parameters to determine the relative value of a donation to CEA versus EA Funds. Intro The role of an interim executive is weird: whereas permanent CEOs like to come in with a bold new vision (ideally one which blames all the organization's problems on their predecessor), interim CEOs are stuck staying the course. Fortunately for me, I mostly liked the course CEA was on when I came in. The past few years seem to have proven the value of the EA community: my own origin cause area of animal welfare has been substantially transformed (e.g. as recounted by Jakub here), and even as AI safety has entered the global main stage many of the people doing research, engineering, and other related work have interacted with CEA's projects. Of course, this is not to say that CEA's work is a slamdunk. In collaboration with Caleb and Linch at EA Funds, I have included below some estimates of whether marginal donations to CEA are more impactful than those to EA Funds, and a reasonable confidence interval very comfortably includes the possibility that you should donate elsewhere. We are fortunate to count the Open Philanthropy Project (and in particular Open Phil's GCR Capacity Building program) among the people who believe we are a good use of funding, but they (reasonably) prefer to not fund all of our budget, leaving us with a substantial number of projects which we believe would produce value if we could fund starting or scaling them. This post outlines where we expect marginal donations to go and the value we expect to come from those donations. You can donate to CEA here. If you are interested in donating and have further questions, feel free to email me (ben.west@centreforeffectivealtruism.org). I will also try to answer questions in the comments. The basic case for CEA Community building is sometimes motivated by the following: suppose you spent a year telling everyone you know about EA and getting them excited. Probably you could get at least one person excited. Then this means that you will have doubled your lifetime impact, as both you and this other person will go on to do good things. That's a pretty good ROI for one year of work! This story is overly simplistic, but is roughly my motivation for working on (and donating to) community building: it's a leveraged way to do good in the world. 
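As a rough sketch of the leverage argument above (my own illustration, not a model CEA has published; it assumes the person you get excited goes on to have a lifetime impact comparable to your own):

\[
\text{total impact} \approx (1 + n) \times \text{your direct lifetime impact}
\]

where \(n\) is the number of people you help get involved. With \(n = 1\), your impact roughly doubles, which is the "doubled your lifetime impact" claim in the quoted passage; larger \(n\) is what makes community building look leveraged.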
And it does seem to be the case that many people whose work seems impactful attribute some of their impact to CEA: The Open Philanthropy longtermist survey in 2020 identified CEA among the top tier of important influences on people's journey towards work improving the long-term future, with about half of CEA's demonstrated value coming through events (EA Global and EAGx conferences) and half through our other programs. The 80,000 Hours user survey in 2022 identified CEA as the EA-related resource which has influenced the most people's career plans (in addition to 80k itself), with 64% citing the EA Forum as influential and 44% citing EAG. This selection of impact stories illustrates some of the ways we've helped people increase their impact by providing high-quality discussion spaces to consider their ideas, values and options for and about maki...

The Nonlinear Library
EA - Numerical Breakdown of 47 1-on-1s as an EAG First-Timer (Going All Out Strategy) by Harry Luk

The Nonlinear Library

Play Episode Listen Later Nov 7, 2023 21:40


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Numerical Breakdown of 47 1-on-1s as an EAG First-Timer (Going All Out Strategy), published by Harry Luk on November 7, 2023 on The Effective Altruism Forum. tl;dr Just attended my first ever EA Global conference (EAG Boston last week) and I have nothing but positive things to say. In total, I had about 47 one-on-one conversations depending on how you count the informal 1:1s (43 scheduled via SwapCard, while the other noteworthy conversations happened at meetups, the organization fair, office hours and unofficial satellite events). I came into the conference with an open mind, wanting to talk to others who are smarter than me, more experienced than me, and experts in their own domain. I invited redteaming of our nonprofit StakeOut.AI's mission/TOC, and gathered both positive and negative feedback throughout EAG. I came out of the conference with new connections, a refined strategy for our nonprofit startup going forward and lots of resources. I am so grateful for everyone that met with me (as I'm a small potato who at many times felt out of his depth during EAG, and likely one of the most junior EAs attending). I thank all the organizers, volunteers, helpers, speakers and attendees who made the event a huge success. The post below goes over The Preparation, the Statistics and Breakdown, why consider going all out at an EAG, 12 Practical Tips for Doing 30+ 1:1s and potential future improvements. The Preparation To be honest, as a first-time attendee, I really didn't know what to expect nor how to prepare for the conference. I had heard good things and was recommended to go by fellow EAs, but I had my reservations. Luckily, an email titled "Join us for an EAG first-timers online workshop!" by the EA Global Team came to the rescue. Long story short, I highly recommend anyone new to EAG to attend the online workshop prior to the conference if you want to make your first EAG a success. Few highlights I will note here: Watch this presentation from 2022's San Francisco EAG that outlines how you can get the most out of the event Take your time and fill out this EA Conference: Planning Worksheet for a step-by-step guide on conference planning, including setting your EAG goals and expectations Also fill out the career planning worksheet (if relevant): EA Conference: Career Plan Requesting 1:1s Pre-conference I was quite hesitant at first about introducing myself on SwapCard and trying to schedule 1:1s. This all changed after watching the presentation and attending the "Join us for an EAG first-timers online workshop!" virtual event. Something that was repeated over and over again from this presentation, the online workshop, and talking to others is the value of the 1:1s. People told me most sessions will be recorded and hence can be watched later, but having the 1:1s is where the true value is at EAG. After hearing it from so many people, I made 1:1s a core part of my conference planning and did not regret it. As I'm writing this after the conference, I can see why 1:1s are said to be the true value of EAG. I estimate that 80% (maybe even closer to 90%, I would know better after I sort through the notes) of the 1:1 conversations I had were beneficial and had a positive impact on either me or the direction of our nonprofit, StakeOut.AI. How Many 1:1s? 
In terms of how many 1:1s, here is the range I gathered from different sources: attendees will typically have four to ten 1:1s; getting to 20 1:1s is a great number; having 30 1:1s is amazing but very tiring; someone reached 35 1:1s once, and that was insane. Since I wanted to maximize my EAG experience, I set the goal of 30 and started reaching out via SwapCard one week before the conference. Reach Out Early: The main reason for starting early is that everyone is busy at the conferences, and everyone is trying to optimize their sch...

The Nonlinear Library
LW - AMA: Earning to Give by jefftk

The Nonlinear Library

Play Episode Listen Later Nov 7, 2023 1:49


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AMA: Earning to Give, published by jefftk on November 7, 2023 on LessWrong. This week the Effective Altruism Forum is running an Effective Giving Spotlight, and they asked if I could post an Ask Me Anything (AMA) on my experience earning to give. Some background: I was earning to give from 2009 to 2022, except for a few months in 2017 when I worked on expanding access to the financial system in Ethiopia and looking into AI risk disagreements. I've been a Giving What We Can member since 2013, making a public pledge to continue with effective giving. For most of this time my wife and I were donating 50% of our pre-tax income, for a total of $2.1M. This has been about 50-50 between trying to help the EA community grow into the best version of itself and funding global poverty reduction (details, thoughts, more recent but still obsolete thoughts). In 2016 I gave an EA Global talk (transcript) on earning to give, which gives more background on the idea and how I've been thinking about it. That's a lot of links, and it's fine to ask questions even if you haven't read any of them! I'm happy to take questions on earning to give, or anything else within EA. Here are some example questions I'd be happy to answer if there's interest: Where do individual donors earning to give have an advantage over foundations and funds? How should you decide whether to use a fund? How have I thought about how much to donate? How much is enough? Why did I stop earning to give? Why am I still donating some even though I'm funded by EA donors? Feel free to comment on any platform, but if you're having trouble deciding then the EA Forum post is ideal. Comment via: the EA Forum Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

The Nonlinear Library
EA - Curious about EAGxVirtual? Ask the team anything! by OllieBase

The Nonlinear Library

Play Episode Listen Later Nov 4, 2023 1:48


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Curious about EAGxVirtual? Ask the team anything!, published by OllieBase on November 4, 2023 on The Effective Altruism Forum. EAGxVirtual 2023, a free online effective altruism conference (November 17-19), is just two weeks away! The event will bring together EAs from around the world, and will facilitate discussions about how we can work on pressing problems, connections between attendees across diverse fields, and more. Apply here by 16 November. We've recently published some more details about the event and we want to invite you to ask us about what to expect from the event. Please post your questions as comments by the end of the day on Sunday (5 November) and we'll aim to respond by the end of the day on Monday (6 November). Some question prompts: Unsure about applying? We encourage everyone with a genuine interest in EA to apply, and we're accepting the vast majority of people. Let us know what you're uncertain about with the application process. Undecided whether to go? Tell us why and we can help you. We'll probably be biased but we'll try our best to present considerations on both sides - it won't be a good use of time for everyone! Unsure how to prepare? You can find some tips on the EA Global topic page but we're happy to help with your specific case if you need more tips! Uncertain how to set up a group activity (a screening, a meet-up etc.) for the event? Share your thoughts below and we can help you plan! We look forward to hearing from you! Sasha, Dion (EAGxVirtual / EA Anywhere) and Ollie (CEA) Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

Effective Altruism Forum Podcast
“How has FTX's collapse impacted EA?” by AnonymousEAForumAccount

Effective Altruism Forum Podcast

Play Episode Listen Later Oct 17, 2023 32:43


Summary of Findings It has been almost a year since FTX went bankrupt on November 11, 2022. Some of the ways that has impacted EA have been obvious, like the shuttering of the FTX Foundation, expected to be one of the biggest EA funders. But recent discussions show that the broader impact of the FTX scandal on EA isn't well understood, and that there is a desire for more empirical evidence on this topic. To that end, I have aggregated a variety of publicly available EA metrics to improve the evidence base. Unfortunately, these wide-ranging perspectives clearly show a broad-based and worrisome deterioration of EA activity in the aftermath of FTX. Previous attempts to quantify how FTX impacted EA have focused on surveys of members of the EA community, university students, the general public, and university group organizers. These surveys were conducted in the months following FTX's collapse. Their results have been [...]
Outline:
(00:04) Summary of Findings
(04:22) Detailed Findings
(04:25) Notes on Data Sources and Presentation
(05:21) Donation Data
(05:24) EA Funds: Donation and Donor Data
(08:10) GWWC Pledges
(08:43) EA Newsletter Subscriptions
(09:41) Attitude Data
(09:44) Survey of EA Community
(12:20) Surveys of University Populations and General Public
(13:47) Engagement Metrics
(13:51) EffectiveAltruism.org Web Traffic
(16:52) EA Forum
(19:09) Google Search Interest
(20:39) EA Global and EA Global X
(22:51) Virtual Programs
(23:47) 80k Metrics
(24:22) University Group Accelerator Program
(24:43) Additional data that would shed more light on FTX's impact and its causes
(27:42) Conclusions
(29:24) Appendix: Meta commentary about data collection and distribution
The original text contained 2 footnotes which were omitted from this narration.
First published: October 17th, 2023
Source: https://forum.effectivealtruism.org/posts/vXzEnBcG7BipkRSrF/how-has-ftx-s-collapse-impacted-ea
Narrated by TYPE III AUDIO.

The Nonlinear Library
EA - EA Survey 2022: What Helps People Have an Impact and Connect with Other EAs by WillemSleegers

The Nonlinear Library

Play Episode Listen Later Aug 3, 2023 11:58


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Survey 2022: What Helps People Have an Impact and Connect with Other EAs, published by WillemSleegers on August 3, 2023 on The Effective Altruism Forum. Summary Personal contact with EAs was the most commonly selected influence on EAs' ability to have a positive impact (40.9%), followed by 80,000 Hours (31.4%) and local EA groups (19.8%). Compared to 2020, EA Global, the EA Forum, EAGx, and 80,000 Hours one-on-one career discussion showed a small increase in being reported as important sources for greater impact. Relatively more sources decreased, some quite significantly, in being reported as important (e.g., GiveWell, 80,000 Hours website and podcast). Local EA groups were the most commonly cited source for making a new connection (35.5%), followed by a personal connection (34.1%), EA Global (26.2%), and EAGx (24.0%). Compared to 2020, relatively more respondents indicate having made an interesting and valuable new personal connection via EA Global and EAGx, and fewer via most other sources. Introduction In this post, we report on what factors EAs say helped them have a positive impact or create new connections. Note that we significantly shortened the EA Survey this year, meaning there are fewer community-related questions than in the previous EA Survey. Positive Influences We asked about which factors, within the last 12 months, had the largest influence on your personal ability to have a positive impact, allowing respondents to select up to three options. On average, respondents selected 2.36 options (median 3). Personal contact with EAs stood out as the most common factor selected by respondents (40.9%), followed by 80,000 Hours (combined, 31.4%), and local EA groups (19.8%). Personal contact, 80,000 Hours, and EA groups were also the top three factors that respondents reported as being important for getting them involved in EA. 2020 vs. 2022 We asked the same question in 2020, although we changed some of the response categories. This year we included 80,000 Hours (job board), the online EA community (other than EA Forum), and Virtual programs, but dropped Animal Charity Evaluators, The Life You Can Save, and Animal Advocacy Careers. These categories were dropped due to low endorsement rates in previous years. Compared to 2020, we see an increase in EA Global, the EA Forum, EAGx, and 80,000 Hours one-on-one career discussion as important sources for greater impact. These increases were quite small in most cases, with the biggest change observed for EAGx (from 6% to 13%). We saw multiple decreases, some quite sizable, in local EA groups, 80,000 Hours (both the website and podcast), Books, GiveWell, Articles/blogs, Giving What We Can, LessWrong, Slate Star Codex/Astral Codex Ten, podcasts, and Facebook groups. A portion of the decrease of the website of 80,000 Hours can be attributed to the addition of the 80,000 Hours job board category in this year's survey. Last year, respondents may have included this category in the website category of 80,000 Hours, while this year it was its own category. Including the job board category with the website category leads to a smaller decrease between 2022 and 2020, although it does not fully account for it. It's important to recall that these questions asked about which factors had had the largest influence within the last 12 months. 
Thus, the percentages of respondents who have been influenced by these factors at some point are likely larger than those reporting having been influenced in the last 12 months within this survey. Responses to this question might also be expected to change more, across years, than our questions which are not limited to the last 12 months. Gender Respondents who indicated identifying as a man were more likely to select the EA Forum, GiveWell, the 80,000 Hours podcast and one-on-one career discu...

The Nonlinear Library
EA - EA Survey 2022: How People Get Involved in EA by WillemSleegers

The Nonlinear Library

Play Episode Listen Later Jul 21, 2023 11:08


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Survey 2022: How People Get Involved in EA, published by WillemSleegers on July 21, 2023 on The Effective Altruism Forum. Summary Personal contact (22.6%), 80,000 Hours (13.5%), and a book, article, or blog post (13.1%) are the most common sources where respondents first hear about EA. 80,000 Hours, local or university EA groups, personal contacts, and podcasts have become more common as sources of where respondents first encounter EA. Facebook, Giving What We Can, LessWrong, Slate Star Codex / Astral Codex Ten, The Life You Can Save, and GiveWell have become less common. Respondents whose gender selection was 'woman', 'non-binary', or 'prefer to self-describe', were much more likely to have first heard of EA via a personal contact (30.2%) compared to respondents whose gender selection was 'man' (18.4%). 80,000 Hours (58.0%), personal contact with EAs (44.0%), and EA groups (36.8%) are the most common factors important for getting involved in EA. 80,000 Hours, EA Groups, and EAGx have been increasing in importance over the last years. EA Global, personal contact with EAs, and the online EA community saw a noticeable increase in importance for helping EAs get involved between 2020 and 2022. Personal contact with EAs, EA groups, the online EA community, EA Global, and EAGx stand out as being particularly important among highly engaged respondents for getting involved. Respondents who identified as non-white, as well as women, non-binary, and respondents who preferred to self-describe, were generally more likely to select factors involving social contact with EAs (e.g., EA group, EAGx) as important. Where do people first hear about EA? Personal contacts continue to be the most common place where people first hear about EA (22.6%), followed by 80,000 Hours (13.5%) and a book, article, or blog post (13.1%). Comparison across all years The plot below shows changes in where people report first hearing of EA across time (since we ran the first EA Survey in 2014). We generally observe that the following routes into EA have been increasing in importance over time: 80,000 Hours Local or university EA groups Personal contacts Podcasts And the following sources have been decreasing in importance: Facebook Giving What We Can LessWrong Slate Star Codex / Astral Codex Ten The Life You Can Save GiveWell Comparison across cohorts Several of the patterns observed in the previous section are also observed when we look at where different cohorts of EA respondents first encountered EA. We see that more recent cohorts are more likely to have encountered EA via 80,000 Hours and podcasts, and are less likely to have encountered EA via Giving What We Can, LessWrong, and GiveWell. No clear cohort effects were observed for other sources. Note that the figure below omits categories with few observations (e.g., EA Global, EAGx). Further Details We asked respondents to provide further details about their responses, and provide a breakdown for some of the larger categories. Details of other categories are available on request. 80,000 Hours The largest number of respondents who first heard of EA through 80,000 Hours reported doing so through an independent search, e.g., they were searching online for "ethical careers" and found 80,000 Hours. 
The second largest category was via the website (which is potentially closely related, i.e., contact with the website resulting from independent search). Relatively much smaller proportions mentioned reaching 80,000 Hours through other categories, including more active outreach (e.g., advertisements). Books, articles, and blogs A book was cited as the most common source of encountering EA when considering the category of books, articles, and blogs. Books Books by Peter Singer were by far the most frequently cited books, followed by Doing Good Better by Willia...

The Nonlinear Library
EA - CEA: still doing CEA things by Ben West

The Nonlinear Library

Play Episode Listen Later Jul 15, 2023 5:16


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: CEA: still doing CEA things, published by Ben West on July 15, 2023 on The Effective Altruism Forum. This is a linkpost for our new and improved public dashboard, masquerading as a mini midyear update It's been a turbulent few months, but amidst losing an Executive Director, gaining an Interim Managing Director, and searching for a CEO, CEA has done lots of cool stuff so far in 2023. The headline numbers 4,336 conference attendees (2,695 EA Global, 1,641 EAGx) 133,041 hours of engagement on the Forum, including 60,507 hours of engagement with non-Community posts (60% of total engagement on posts) 26 university groups and 33 organizers in UGAP 622 participants in Virtual Programs There's much more, including historical data and a wider range of metrics, in the dashboard! Updates The work of our Community Health & Special Projects and Communications teams lend themselves less easily to stat-stuffing, but you can read recent updates from both: Community Health & Special Projects: Updates and Contacting Us How CEA's communications team is thinking about EA communications at the moment What else is new? Our staff, like many others in the community (and beyond), have spent more time this year thinking about how we should respond to the rapidly evolving AI landscape. We expect more of the community's attention and resources to be directed toward AI safety at the margin, and are asking ourselves how best to balance this with principles-first EA community building. Any major changes to our strategy will have to wait until our new CEO is in place, but we have been looking for opportunities to improve our situational awareness and experiment with new products, including: Exploring and potentially organizing a large conference focussed on existential risk and/or AI safety Learning more about and potentially supporting some AI safety groups Supporting AI safety communications efforts These projects are not yet ready to be announcements or commitments, but we thought it worth sharing at a high level as a guide to the direction of our thinking. If they intersect with your projects or plans, please let us know and we'll be happy to discuss more. It's worth reiterating that our priorities haven't changed since we wrote about our work in 2022: helping people who have heard about EA to deeply understand the ideas, and to find opportunities for making an impact in important fields. We continue to think that top-of-funnel growth is likely already at or above healthy levels, so rather than aiming to increase the rate any further, we want to make that growth go well. You can read more about our strategy here, including how we make some of the key decisions we are responsible for, and a list of things we are not focusing on. And it remains the case that we do not think of ourselves as having or wanting control over the EA community. We believe that a wide range of ideas and approaches are consistent with the core principles underpinning EA, and encourage others to identify and experiment with filling gaps left by our work. Impact stories And finally, it wouldn't be a CEA update without a few #impact-stories: Online Training for Good posted about their EU Tech Policy Fellowship on the EA Forum. 12/100+ applicants they received came from the Forum, and 6 of these 12 successfully made it on to the program, out of 17 total program slots. 
Community Health & Special Projects Following the TIME article about sexual misconduct, people have raised a higher-than-usual number of concerns from the past that they had noticed or experienced in the community but hadn't raised at the time. In many of these cases we've been able to act to reduce risk in the community, such as warning people about inappropriate behavior and removing people from CEA spaces when their past behavior has caused harm. Communicati...

The Nonlinear Library
EA - Global Innovation Fund projects its impact to be 3x GiveWell Top Charities by jh

The Nonlinear Library

Play Episode Listen Later Jun 1, 2023 1:52


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Global Innovation Fund projects its impact to be 3x GiveWell Top Charities, published by jh on June 1, 2023 on The Effective Altruism Forum. The Global Innovation Fund (GIF) is a non-profit, impact-first investment fund headquartered in London that primarily works with mission-aligned development agencies (USAID, SIDA, Global Affairs Canada, UKAID). Through grants, loans and equity investments, they back innovations with the potential for social impact at a large scale, whether these are new technologies, business models, policy practices or behavioural insights. Recently, they made a bold but little-publicized projection in their 2022 Impact Report (page 18): "We project every dollar that GIF has invested to date will be three times as impactful as if that dollar had been spent on long-lasting, insecticide-treated bednets... This is three times higher than the impact per dollar of Givewell's top-rated charities, including distribution of anti-malarial insecticide-treated bednets. By Givewell's estimation, their top charities are 10 times as cost-effective as cash transfers." The report says they have invested $112m since 2015. This is a short post to highlight GIF's projection to the EA community and to invite comments and reactions. Here are a few initial points: It's exciting to see an organization with relatively traditional funders comparing its impact to GiveWell's top charities (as well as cash transfers). I would want to see more information on how they did their calculations before taking a view on their projection. In any case, based on my conversations with GIF, and what I've understood about their methodology, I think their projection should be taken seriously. I can see many ways it could be either an overestimate or an underestimate. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
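As a back-of-the-envelope reading of the two figures quoted above (my own arithmetic, not GIF's or GiveWell's published methodology; it treats bednet distribution as representative of GiveWell's top charities):

\[
\frac{\text{GIF impact}}{\$} \approx 3 \times \frac{\text{top-charity impact}}{\$} \approx 3 \times 10 \times \frac{\text{cash-transfer impact}}{\$} = 30 \times \frac{\text{cash-transfer impact}}{\$}
\]

Taking both quoted claims at face value, GIF is projecting roughly 30 times the impact per dollar of direct cash transfers.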

The Nonlinear Library
EA - Will AI end everything? A guide to guessing | EAG Bay Area 23 by Katja Grace

The Nonlinear Library

Play Episode Listen Later May 26, 2023 28:18


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Will AI end everything? A guide to guessing | EAG Bay Area 23, published by Katja Grace on May 25, 2023 on The Effective Altruism Forum. Below is the video and transcript for my talk from EA Global, Bay Area 2023. It's about how likely AI is to cause human extinction or the like, but mostly a guide to how I think about the question and what goes into my probability estimate (though I do get to a number!) The most common feedback I got for the talk was that it helped people feel like they could think about these things themselves rather than deferring. Which may be a modern art type thing, like "seeing this, I feel that my five year old could do it", but either way I hope this empowers more thinking about this topic, which I view as crucially important. You can see the slides for this talk here Introduction Hello, it's good to be here in Oakland. The first time I came to Oakland was in 2008, which was my first day in America. I met Anna Salamon, who was a stranger and who had kindly agreed to look after me for a couple of days. She thought that I should stop what I was doing and work on AI risk, which she explained to me. I wasn't convinced, and I said I'd think about it; and I've been thinking about it. And I'm not always that good at finishing things quickly, but I wanted to give you an update on my thoughts. Two things to talk about Before we get into it, I want to say two things about what we're talking about. There are two things in this vicinity that people are often talking about. One of them is whether artificial intelligence is going to literally murder all of the humans. And the other one is whether the long-term future – which seems like it could be pretty great in lots of ways – whether humans will get to bring about the great things that they hope for there, or whether artificial intelligence will take control of it and we won't get to do those things. I'm mostly interested in the latter, but if you are interested in the former, I think they're pretty closely related to one another, so hopefully there'll also be useful things. The second thing I want to say is: often people think AI risk is a pretty abstract topic. And I just wanted to note that abstraction is a thing about your mind, not the world. When things happen in the world, they're very concrete and specific, and saying that AI risk is abstract is kind of like saying World War II is abstract because it's 1935 and it hasn't happened yet. Now, if it happens, it will be very concrete and bad. It'll be the worst thing that's ever happened. The rest of the talk's gonna be pretty abstract, but I just wanted to note that. A picture of the landscape of guessing So this is a picture. You shouldn't worry about reading all the details of it. It's just a picture of the landscape of guessing [about] this, as I see it. There are a bunch of different scenarios that could happen where AI destroys the future. There's a bunch of evidence for those different things happening. You can come up with your own guess about it, and then there are a bunch of other people who have also come up with guesses. I think it's pretty good to come up with your own guess before, or at some point separate to, mixing it up with everyone else's guesses. I think there are three reasons that's good. 
First, I think it's just helpful for the whole community if numerous people have thought through these things. I think it's easy to end up having an information cascade situation where a lot of people are deferring to other people. Secondly, I think if you want to think about any of these AI risk-type things, it's just much easier to be motivated about a problem if you really understand why it's a problem and therefore really believe in it. Thirdly, I think it's easier to find things to do about a problem if you understand exactly why it's a p...

EARadio
Hilary Greaves | On the desire to make a difference | Global Priorities Institute 2022

EARadio

Play Episode Listen Later May 14, 2023 40:35


You can watch this talk with the video on the GPI YouTube channel. This presentation was given at the 10th Oxford Workshop on Global Priorities Research, June 2022.You can find the full transcript here: https://globalprioritiesinstitute.org...Find out more about the Global Priorities Institute: https://globalprioritiesinstitute.org/Effective Altruism is a social movement dedicated to finding ways to do the most good possible, whether through charitable donations, career choices, or volunteer projects. EA Global conferences are gatherings for EAs to meet. You can also listen to this talk along with its accompanying video on YouTube.

The Nonlinear Library
EA - I want to read more stories by and about the people of Effective Altruism by Gemma Paterson

The Nonlinear Library

Play Episode Listen Later May 13, 2023 6:26


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: I want to read more stories by and about the people of Effective Altruism, published by Gemma Paterson on May 12, 2023 on The Effective Altruism Forum. TL; DR I want to read more stories by and about the people of the Effective Altruism movement But like, fun ones, not CVs I've added a tag for EA origin stories and tagged a bunch of relevant posts from the forum If I've missed some, please tag them The community experiences tag has a lot of others that don't quite fit I think it is important to emphasise the personal in the effective altruism movement - you never know if your story is enough to connect with someone (especially if you don't fit the stereotypical EA mold) I would also be very interested in reading folks' answers to the “What is your current plan to improve the world?” question from the EA Global application - it's really helpful to see other people's thought processes (you can read mine here) Why? At least for me, what grabbed and kept my attention when I first heard about EA were the stories of people on the ground trying to do effective altruism. The audacity of a group of students looking at the enormity of suffering in the world but then pushing past that overwhelm. Recognising that they could use their privileges to make a dent if they really gave it a go. The folks behind Charity Entrepreneurship who didn't stop at one highly effective charity but decided to jump straight into making an non-profit incubator to multiply their impact - building out, in my opinion, some of the coolest projects in the movement. I love that the 80,000 hours podcast takes the concept behind Big Talk seriously It's absurd but amazing! I love the ethos of practicality within the movement. It isn't about purity, it isn't about perfection, it's about actually changing the world. These are the people I'd back to build a robust Theory of Change that might just move us towards Fully Automated Luxury Gay Space Communism Maybe that google doc already exists? I have never been the kind of person who had role models. I have always been a bit too cynical to put people on a pedestal. I had respect for successful people and tried to learn what I could from them but I didn't have heroes. But my response to finding the EA movement was, “Fuck, these people are cool.” I think there is a problem with myth making and hero worshipping within EA. I do agree that it is healthier to Live Without Idols. However, I don't think we should live without stories. The stories I'm more interested in are the personal ones. Of people actually going out and living their values. Examples of trades offs that real people make that allow them to be ambitiously altruistic in a way that suits them. That show that it is fine to care about lots of things. That it is okay to make changes in your life when you get more or better information. I think about this post a lot because I agree that if people think that “doing effective altruism” means they have to live like monks and change their whole lives then they'll just reject it. Making big changes is hard. People aren't perfect. I can trace huge number of positive changes in my life to my decision to take EA seriously but realistically it was my personal IRL and parasocial connections to the people of EA that gave me the space and support to make these big changes in my life. 
In the footnotes and in this post about my EA story, I've included a list of podcasts, blog posts and other media by people within EA that were particularly influential and meaningful to me (if you made them then thank you

EARadio
Teruji Thomas | The Multiverse and the Veil: Population Ethics Under Uncertainty | Global Priorities Institute 2022

EARadio

Play Episode Listen Later May 13, 2023 35:44


You can watch this talk with the video on the GPI YouTube channel. This presentation was given at the 10th Oxford Workshop on Global Priorities Research, June 2022.Full transcript is available here: https://globalprioritiesinstitute.org...Find out more about the Global Priorities Institute: https://globalprioritiesinstitute.org/Effective Altruism is a social movement dedicated to finding ways to do the most good possible, whether through charitable donations, career choices, or volunteer projects. EA Global conferences are gatherings for EAs to meet. You can also listen to this talk along with its accompanying video on YouTube.

EARadio
Benjamin Tereick | Creating short term proxies for long term forecasts | Global Priorities Institute 2022

EARadio

Play Episode Listen Later May 12, 2023 30:47


You can watch this talk with the video on the GPI YouTube channel. This presentation was given at the 10th Oxford Workshop on Global Priorities Research, June 2022.You can find the full transcript here: https://globalprioritiesinstitute.org...Find out more about the Global Priorities Institute: https://globalprioritiesinstitute.org/Effective Altruism is a social movement dedicated to finding ways to do the most good possible, whether through charitable donations, career choices, or volunteer projects. EA Global conferences are gatherings for EAs to meet. You can also listen to this talk along with its accompanying video on YouTube.

EARadio
Oded Galor | The Journey of Humanity | Global Priorities Institute 2022

EARadio

Play Episode Listen Later May 7, 2023 55:59


You can view this talk with the video on the GPI YouTube channel. Public Lecture: The Journey of Humanity - Oded Galor, 10 June 2022. In The Journey of Humanity, Oded Galor offers a revelatory explanation of how humanity became, only very recently, the unique species to have escaped a life of subsistence poverty, enjoying previously unthinkable wealth and longevity. He reveals why this process has been so unequal around the world, resulting in the great disparities between nations that exist today. He shows why so many of our efforts to improve lives have failed and how they might succeed. The full transcript is available here: https://globalprioritiesinstitute.org... Find out more about the Global Priorities Institute: https://globalprioritiesinstitute.org/ You can find more information about Oded Galor here: https://www.odedgalor.com/ The Journey of Humanity: https://www.penguin.co.uk/books/44495... Effective Altruism is a social movement dedicated to finding ways to do the most good possible, whether through charitable donations, career choices, or volunteer projects. EA Global conferences are gatherings for EAs to meet. You can also listen to this talk along with its accompanying video on YouTube.

EARadio
Jeffrey Sanford Russell | Problems for Impartiality | Global Priorities Institute 2022

EARadio

Play Episode Listen Later May 6, 2023 55:50


You can view this talk with the video on the GPI YouTube channel. Parfit Memorial Lecture 2022, hosted by the Global Priorities Institute, 16 June 2022. The Parfit Memorial Lecture is an annual distinguished lecture series established by the Global Priorities Institute in memory of Professor Derek Parfit. The aim is to encourage research among academic philosophers on topics related to global priorities research - using evidence and reason to figure out the most effective ways to improve the world. This year, we were delighted to have Jeffrey Sanford Russell deliver the Parfit Memorial Lecture. The Parfit Memorial Lecture is organised in conjunction with the Atkinson Memorial Lecture. The full transcript is available here: https://globalprioritiesinstitute.org... Find out more about the Global Priorities Institute: https://globalprioritiesinstitute.org/ Effective Altruism is a social movement dedicated to finding ways to do the most good possible, whether through charitable donations, career choices, or volunteer projects. EA Global conferences are gatherings for EAs to meet. You can also listen to this talk along with its accompanying video on YouTube.

EARadio
Jeff Sebo | Artificial Sentience and the Ethics of Connected Minds | Global Priorities Institute 2022

EARadio

Play Episode Listen Later May 5, 2023 35:29


You can watch this talk with the video on the GPI YouTube channel. This presentation was given at the 10th Oxford Workshop on Global Priorities Research, June 2022.The full transcript is available here: https://globalprioritiesinstitute.org...Find out more about the Global Priorities Institute: https://globalprioritiesinstitute.org/Effective Altruism is a social movement dedicated to finding ways to do the most good possible, whether through charitable donations, career choices, or volunteer projects. EA Global conferences are gatherings for EAs to meet. You can also listen to this talk along with its accompanying video on YouTube.

The Nonlinear Library
EA - Upcoming EA conferences in 2023 by OllieBase

The Nonlinear Library

Play Episode Listen Later May 4, 2023 3:10


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Upcoming EA conferences in 2023, published by OllieBase on May 4, 2023 on The Effective Altruism Forum. The Centre for Effective Altruism will be organizing and supporting conferences for the EA community all over the world for the remainder of 2023, including the first-ever EA conferences in Poland, NYC and the Philippines. We currently have the following events scheduled:
EA Global
- EA Global: London | (May 19–21) | Tobacco Dock - applications close 11:59 pm UTC Friday 5 May
- EA Global: Boston | (October 27–29) | Hynes Convention Center
EAGx
- EAGxWarsaw | (June 9–11) | POLIN
- EAGxNYC | (August 18–20) | Convene, 225 Liberty St.
- EAGxBerlin | (September 8–10) | Urania
- EAGxAustralia | (September 22–24, provisional) | Melbourne
- EAGxPhilippines | (October 20–22, provisional)
- EAGxVirtual | (November 17–19, provisional)
Applications for EAG London, EAG Boston, EAGxWarsaw and EAGxNYC are open, and we expect applications for the other conferences to open approximately 3 months before the event. Please go to the event page links above to apply. Please note again that applications to EAG London close 11:59 pm UTC Friday 5 May. If you'd like to add EA events like these directly to your Google Calendar, use this link. Some notes on these conferences:
- EA Globals are run in-house by the CEA events team, whereas EAGx conferences are organized independently by local community builders with financial support and mentoring from CEA.
- EA Global conferences have a high bar for admission and are for people who are very familiar with EA and are taking significant actions (e.g. full-time work or study) based on EA ideas.
- Admissions for EAGx conferences are processed independently by the EAGx conference organizers. These events are primarily for those who are newer to EA and interested in getting more involved and who are based in the region the conference is taking place in (e.g. EAGxWarsaw is primarily for people who are interested in EA and are based in Eastern Europe).
- Please apply to all conferences you wish to attend once applications open — we would rather get too many applications for some conferences and recommend that applicants attend a different one than miss out on potential applicants to a conference.
- Travel support funds for events this year are limited (though will vary by event), and we can only accommodate a small number of requests. If you do not end up receiving travel support, this is likely the result of limited funds, rather than an evaluation of your potential for impact. When planning around an event, we'd recommend you act under the assumption that we will not be able to grant your travel funding request (unless it has already been approved).
Find more info on our website. Feel free to email hello@eaglobal.org with any questions, or comment below. You can also contact EAGx organisers using the format [location]@eaglobalx.org (e.g. warsaw@eaglobalx.org, nyc@eaglobalx.org). Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

EARadio
Andreas Mogensen, David Thorstad | Tough enough? Robust satisficing as a decision norm for long term | Global Priorities Institute 2021 (Academic talks)

EARadio

Play Episode Listen Later May 1, 2023 32:41


 You can watch this talk with the video on the GPI YouTube channel. Presentation given March 2021.The full transcript is available here: https://globalprioritiesinstitute.org...Read the working paper: https://globalprioritiesinstitute.org...Find out more about the Global Priorities Institute: https://globalprioritiesinstitute.org/Effective Altruism is a social movement dedicated to finding ways to do the most good possible, whether through charitable donations, career choices, or volunteer projects. EA Global conferences are gatherings for EAs to meet. You can also listen to this talk along with its accompanying video on YouTube.

EARadio
Orri Stefansson | Should welfare equality be a global priority? | Global Priorities Institute 2021

EARadio

Play Episode Listen Later Apr 30, 2023 58:13


You can view this talk with the video on the GPI YouTube channel. Parfit Memorial Lecture 2021 hosted by the Global Priorities Institute 14 June 2021.The Parfit Memorial Lecture is an annual distinguished lecture series established by the Global Priorities Institute in memory of Professor Derek Parfit. The aim is to encourage research among academic philosophers on topics related to global priorities research - using evidence and reason to figure out the most effective ways to improve the world. This year, we were delighted to have Professor Orri Stefansson deliver the Parfit Memorial Lecture. The Parfit Memorial lecture is organised in conjunction with the Atkinson Memorial Lecture.The full transcript is available here: https://globalprioritiesinstitute.org...Find out more about the Global Priorities Institute: https://globalprioritiesinstitute.org/Parfit Memorial Lecture 2021: https://globalprioritiesinstitute.org...Atkinson Memorial Lecture 2021: https://globalprioritiesinstitute.org...Effective Altruism is a social movement dedicated to finding ways to do the most good possible, whether through charitable donations, career choices, or volunteer projects. EA Global conferences are gatherings for EAs to meet. You can also listen to this talk along with its accompanying video on YouTube.

EARadio
Teruji Thomas | A paradox for tiny probabilities and enormous values | Global Priorities Institute 2020

EARadio

Play Episode Listen Later Apr 29, 2023 27:50


Presentation given December 2020.The full transcript is available here: https://globalprioritiesinstitute.org...Read the working paper: https://globalprioritiesinstitute.org...Find out more about the Global Priorities Institute: https://globalprioritiesinstitute.org/Effective Altruism is a social movement dedicated to finding ways to do the most good possible, whether through charitable donations, career choices, or volunteer projects. EA Global conferences are gatherings for EAs to meet. You can also listen to this talk along with its accompanying video on YouTube.

EARadio
Andreas Mogensen | The good news about just saving | Global Priorities Institute 2020

EARadio

Play Episode Listen Later Apr 28, 2023 50:00


You can watch this talk with the video on the GPI YouTube channel. Presented as part of the Global Priorities Seminar series 12 June 2020.The full transcript is available here: https://globalprioritiesinstitute.org...Find out more about the Global Priorities Institute: https://globalprioritiesinstitute.org/ Effective Altruism is a social movement dedicated to finding ways to do the most good possible, whether through charitable donations, career choices, or volunteer projects. EA Global conferences are gatherings for EAs to meet. You can also listen to this talk along with its accompanying video on YouTube.

The Nonlinear Library
EA - Story of a career/mental health failure by zekesherman

The Nonlinear Library

Play Episode Listen Later Apr 27, 2023 14:54


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Story of a career/mental health failure, published by zekesherman on April 27, 2023 on The Effective Altruism Forum. I don't know if it should be considered important as it's only a single data point, but I want to share the story of how my EA career choice and mental health went terribly wrong. My career choice In college I was strongly motivated to follow the most utilitarian career path. In my junior year I decided to pursue investment banking for earning to give. As someone who had a merely good GPA and did not attend a top university, this would have been difficult, but I pushed hard for networking and recruiting, and one professional told me I had a 50-50 chance of becoming an investment banking analyst right out of college (privately I thought he was a bit too optimistic). Otherwise, I would have to get some lower-paying job in finance, and hopefully move into banking at a later date. However I increasingly turned against the idea of earning to give, for two major reasons. First, 80,000 Hours and other people in the community said that EA was more talent- rather than funding-constrained, and that earning to give was overrated. Second, more specifically, people in the EA community informed me that program managers in government and philanthropy controlled much higher budgets than I could reasonably expect to earn. Basically, it appeared easier to become in charge of effectively allocating $5 million of other people's money, compared to earning a $500,000 salary for oneself. Earning to give meant freer control of funding, but program management meant a greater budget. While I read 80k Hours' article on program management, I was most persuaded by advice I got from Jason Gaverick Matheny and Carl Shulman, and also a few non-EA people I met from the OSTP and other government agencies, who had more specific knowledge and advice. It seemed that program management in science and technology (especially AI, biotechnology, etc) was the best career path. And the best way to achieve it seemed to be starting with graduate education in science and technology, ideally a PhD (I decided on computer science, partly because it gave the most flexibility to work on a wide variety of cause areas). Finally, the nail in the coffin for my finance ambitions was an EA Global conference where Will MacAskill said to think less about finding a career that was individually impactful, and think more about leveraging your unique strengths to bring something new to the table for the EA community. While computer science wasn't rare in EA, I thought I could be special by leveraging my military background and pursuing a more cybersecurity- or defense-related career, which was neglected in EA. Still, I had two problems to overcome for this career path. The first problem was that I was an econ major and had a lot of catching up to do in order to pursue advanced computer science. The second problem was that it wasn't as good of a personal fit for me compared to finance. I've always found programming and advanced mathematics to be somewhat painful and difficult to learn, whereas investment banking seemed more readily engaging. And 80k Hours as well as the rest of the community gave me ample warnings about how personal fit was very, very important. 
I disregarded these warnings about personal fit for several reasons: I'd always been more resilient and scrupulous compared to other people and other members of the EA community. Things like living on a poverty budget and serving in the military, which many other members of the EA community have considered intolerable or unsustainable for mental health, were fine for me. As one of the more "hardcore" EAs, I generally regarded warnings of burnout as being overblown or at least less applicable to someone like me, and I suspected that a lot of people in the EA ...

EARadio
Andreas Mogensen - What does the future ask of us? | Global Priorities Institute 2019 (Academic talks)

EARadio

Play Episode Listen Later Apr 24, 2023 33:20


You can view this talk with the video on the GPI YouTube channel. Presentation given at the Global Priorities Institute December 2019.The full transcript is available here: https://globalprioritiesinstitute.org...Find out more about the Global Priorities Institute: https://globalprioritiesinstitute.org/Effective Altruism is a social movement dedicated to finding ways to do the most good possible, whether through charitable donations, career choices, or volunteer projects. EA Global conferences are gatherings for EAs to meet. You can also listen to this talk along with its accompanying video on YouTube.

EARadio
Teruji Thomas | The Asymmetry, Uncertainty, and the Long Term | Global Priorities Institute 2019

EARadio

Play Episode Listen Later Apr 23, 2023 25:14


You can view this talk with the video on the GPI YouTube channel.  Presentation given at the Global Priorities Institute September 2019.The full transcript is available here: https://globalprioritiesinstitute.org...Find out more about the Global Priorities Institute: https://globalprioritiesinstitute.org/Effective Altruism is a social movement dedicated to finding ways to do the most good possible, whether through charitable donations, career choices, or volunteer projects. EA Global conferences are gatherings for EAs to meet. You can also listen to this talk along with its accompanying video on YouTube.

EARadio
Andreas Mogensen | The Only Ethical Argument for Positive Delta? | Global Priorities Institute 2019

EARadio

Play Episode Listen Later Apr 22, 2023 24:43


You can view this talk with the video on the GPI YouTube channel. Originally presented at the Global Priorities Institute June 2019The full transcript is available here: https://globalprioritiesinstitute.org...Find out more about the Global Priorities Institute: https://globalprioritiesinstitute.org/Effective Altruism is a social movement dedicated to finding ways to do the most good possible, whether through charitable donations, career choices, or volunteer projects. EA Global conferences are gatherings for EAs to meet. You can also listen to this talk along with its accompanying video on YouTube.

EARadio
Christian J. Tarsney - The Epistemic Challenge to Longtermism | Global Priorities Institute 2019

EARadio

Play Episode Listen Later Apr 21, 2023 36:03


You can view this talk with the video on the GPI YouTube channel. Originally presented at the Global Priorities Institute June 2019. The full transcript is available here: https://globalprioritiesinstitute.org... Find out more about the Global Priorities Institute: https://globalprioritiesinstitute.org/ Effective Altruism is a social movement dedicated to finding ways to do the most good possible, whether through charitable donations, career choices, or volunteer projects. EA Global conferences are gatherings for EAs to meet. You can also listen to this talk along with its accompanying video on YouTube.

The Nonlinear Library
EA - The EA Relationship Escalator by ProbablyGoodCouple

The Nonlinear Library

Play Episode Listen Later Apr 1, 2023 3:16


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The EA Relationship Escalator, published by ProbablyGoodCouple on April 1, 2023 on The Effective Altruism Forum.

When dating, you may typically encounter the relationship escalator, which is a social script for how relationships are supposed to unfold that is oriented around default societal expectations. Potential partners are supposed to follow a progressive set of steps and achieve specific visible milestones towards a clear goal. This escalator frequently looks like:

- Meet on a dating app
- Go on a few dates
- Hold hands, kiss
- Become romantically exclusive
- Fall in love
- Meet the parents
- Have a long weekend together
- Vacation together
- Move in together
- Get engaged
- Get married
- Buy a house
- Have kids
- Have a dog

Of course the steps are approximate and may not unfold literally in this order, but it should be pretty close. Moreover, where you are on this escalator and how long it has been since the previous step is often used to judge whether a relationship is sufficiently significant, serious, good, healthy, committed, etc. and to tell whether the relationship is worth pursuing.

Effective altruists, as a community though, are rarely mainstream like this. This suggests that it may be helpful for EAs looking to date other EAs to have a very different and more customized social script to follow to judge how their unique EA relationship is unfolding. I recommend the EA relationship escalator look like this:

- Meet at EA Global but definitely do not flirt
- Re-meet at a house party where flirting is allowed
- Comment on their EA Forum posts
- Reach out on their “Date Me” doc
- Go on a few dates
- Become awkwardly personally and professionally intertwined
- Thoroughly assess together the conflicts of interest inherent in your relationship
- Talk to your HR department
- Talk to their HR department
- Talk to the CEA Community Health team
- Make a spreadsheet together to thoroughly quantify the relevant risks and benefits of your romantic relationship
- Decide to go for it
- Finally hold hands, kiss
- Synchronize your pomodoro schedules together
- Create a shared Complice coworking room / Cuckoo coworking room for just you two
- Take the same personality tests and quantify each other
- Introduce them to your polycule, hope they get along
- Fall in love
- Move in to the same EA group house
- Synchronize your GWWC pledge donation schedules
- Have your relationship details exposed by a burner account on the EA Forum
- Have the EA Forum moderator team encode your relationship details in rot13
- Meet the parents
- Vacation together, but exclusively touring various EA hubs
- Decide to fight the same x-risk
- Raise free range chickens together, start an animal sanctuary
- Create a Manifold market around whether or not you will get married
- Get engaged
- Get married
- Maximize utility

Hopefully this EA Relationship Escalator helps give young EAs a social script to be able to find love with each other and be able to understand where their relationship currently is at. After all, you may spend 80,000 hours of your life in a good marriage so it's important to get this right! Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

The Nonlinear Library
EA - Design changes & the community section (Forum update March 2023) by Lizka

The Nonlinear Library

Play Episode Listen Later Mar 21, 2023 12:04


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Design changes & the community section (Forum update March 2023), published by Lizka on March 21, 2023 on The Effective Altruism Forum.

We're sharing the results of the Community-Frontpage test, and we've released a Forum redesign — I discuss it below. I also outline some things we're thinking about right now. As always, we're also interested in feedback on these changes. We'd be really grateful if you filled out this (very quick) survey on the redesign that might help give us a sense of what people are thinking. You can also comment on this post with your thoughts or reach out to forum@centreforeffectivealtruism.org.

Results of the Community-Frontpage test & more thoughts on community posts

A little over a month ago, we announced a test: we'd be trying out separating “Community” posts from other kinds by creating a “Community” section on the Frontpage of the Forum. We've gotten a lot of feedback; we believe that the change was an improvement, so we're planning on keeping it for the near future, with some modifications. We might still make some changes like switching from a section to tabs, especially depending on new feedback and on how related projects go.

Outcomes

Information we gathered: we sourced user feedback from different places.

- User interviews with people at EA Global and elsewhere (at least 20 interviews, different people doing the interviewing)
- Responses to a quick survey on how we can improve discussions on the Forum (45 responses)
- Metrics (mostly used as sanity checks):
  - Engagement with the Forum overall (engagement on the Forum is 7% lower than the previous month, which is within the bounds we set ourselves and there's a lot of fluctuation, so we're just going to keep monitoring this)
  - Engagement with Community posts (it dropped 8%, which may just be tracking overall engagement, and again, we're going to keep monitoring it)
  - There are still important & useful Community posts every week (subjective assessment) (there are)
- The team's experience with the section, and whether we thought the change was positive overall

Outcomes and themes: The responses we got were overwhelmingly positive about the change. People told us directly (in user interviews and in passing) that the change was improving their experience on the Forum. We also personally thought that the change had gone very well — likely better than we'd expected as a ~70% best outcome. And here are the results from the survey: The metrics we're tracking (listed above) were within the bounds we'd set, and we were mostly using them as sanity checks.

There were, of course, some concerns, and critical or constructive feedback.

Confusion about what “Community” means

Not everyone was clear on which posts should actually go in the section; the outline I gave before was unclear. I've updated the guidance I had originally given to Forum facilitators and moderators (based on their feedback and just sitting down and trying to get a more systematic categorization), and I'm sharing the updated version here.

Concerns that important conversations would be missed

Some people expressed a worry that having a section like this would hide discussions that the community needs to have, like processing the FTX collapse and what we should learn from it, or how we can create a more welcoming environment for different groups of people. We were also pretty worried about this; I think this was the thing that I thought was most likely going to get us to reverse the change. However, the worry doesn't seem to be materializing. It looks like engagement hasn't fallen significantly on Community posts relative to other posts, and important conversations have been continuing. Some recent posts on difficult community topics have had lots of comments (the discussion of the recent TIME article currently has 159 comments), and Community posts have...

The Nonlinear Library
EA - Why SoGive is publishing an independent evaluation of StrongMinds by ishaan

The Nonlinear Library

Play Episode Listen Later Mar 18, 2023 10:35


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why SoGive is publishing an independent evaluation of StrongMinds, published by ishaan on March 17, 2023 on The Effective Altruism Forum.

Executive summary

We believe the EA community's confidence in the existing research on mental health charities hasn't been high enough to use it to make significant funding decisions. Further research from another EA research agency, such as SoGive, may help add confidence and lead to more well-informed funding decisions. In order to increase the amount of scrutiny on this topic, SoGive has started conducting research on mental health interventions, and we plan to publish a series of articles starting in the next week and extending out over the next few months. The series will cover literature reviews of academic and EA literature on mental health and moral weights. We will be doing in-depth reviews and quality assessments on work by the Happier Lives Institute pertaining to StrongMinds, the RCTs and academic sources from which StrongMinds draws its evidence, and StrongMinds' internally reported data. We will provide a view on how impactful we judge StrongMinds to be.

What we will publish

From March to July 2023, SoGive plans to publish a series of analyses pertaining to mental health. The content covered will include:

- Methodological notes on using existing academic literature, which quantifies depression interventions in terms of standardised mean differences, numbers needed to treat, remission rates and relapse rates, as well as the "standard deviation - years of depression averted" framework used by Happier Lives Institute.
- Broad, shallow reviews of academic and EA literature pertaining to the question of what the effect of psychotherapy is, as well as how this intersects with various factors such as number of sessions, demographics, and types of therapy. We will focus specifically on how the effect decays after therapy, and publish a separate report on this.
- Deep, narrow reviews of the RCTs and meta-analyses most closely pertaining to the StrongMinds context.
- Moral weights frameworks, explained in a manner which will allow a user to map dry numbers such as effect sizes to more visceral subjective feelings, so as to better apply their moral intuition to funding decisions.
- Cost-effectiveness analyses which combine academic data and direct evidence from StrongMinds to arrive at our best estimate of what a donation to StrongMinds does.

We hope these will empower others to check our work, do their own analyses of the topic, and take the work further.

How will this enable higher impact donations?

In the EA Survey conducted by Rethink Priorities, 60% of EA community members surveyed were in favour of giving "significant resources" to mental health interventions, with 24% of those believing it should be a "top priority" or "near top priority" and 4% selecting it as their "top cause". Although other cause areas performed more favourably in the survey, this still appears to be a moderately high level of interest in mental health. Some EA energy has now gone into this area - for example, Charity Entrepreneurship incubated Canopie, Mental Health Funder's Circle, and played a role in incubating Happier Lives Institute. They additionally launched Kaya Guides and Vida Plena last year. We also had a talk from Friendship Bench at last year's EA Global. Our analysis will focus on StrongMinds.
We chose StrongMinds because we know the organisation well. SoGive's founder first had a conversation with StrongMinds in 2015 (thinking of his own donations) having seen a press article about them and having considered them a potentially high impact charity. Since then, several other EA orgs have been engaging with StrongMinds. Evaluations of StrongMinds specifically have now been published by both Founders Pledge and Happier Lives Institute, and Str...
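To make the "standard deviation - years of depression averted" framework named above more concrete, here is a minimal toy sketch. It is not SoGive's or the Happier Lives Institute's actual model: the function names, the linear-decay assumption, and every number below are hypothetical placeholders, included only to show how an effect size, a decay assumption, and a cost figure combine into a cost-effectiveness estimate.

```python
# Illustrative only: all figures are hypothetical, not SoGive's or HLI's actual estimates.

def standardised_mean_difference(mean_treat: float, mean_control: float, pooled_sd: float) -> float:
    """Cohen's-d-style effect size: difference in group means, in units of the pooled SD."""
    return (mean_treat - mean_control) / pooled_sd

# Hypothetical endpoint scores on a depression questionnaire (lower = less depressed).
smd = standardised_mean_difference(mean_treat=9.0, mean_control=12.0, pooled_sd=6.0)  # -0.5

# "SD-years averted": integrate the effect over time. Here we assume (purely for
# illustration) that the benefit decays linearly to zero over two years, so the
# integral is the area of a triangle: |effect| * duration / 2.
effect_duration_years = 2.0
sd_years_per_person = abs(smd) * effect_duration_years / 2  # 0.5 SD-years

# Combine with an assumed cost per person treated to get a cost-effectiveness figure.
cost_per_person_usd = 170.0  # hypothetical
cost_per_sd_year = cost_per_person_usd / sd_years_per_person

print(f"SMD: {smd:.2f}, SD-years averted per person: {sd_years_per_person:.2f}, "
      f"cost per SD-year: ${cost_per_sd_year:.0f}")
```

In practice the decay curve, the counterfactual, and household spillovers are exactly the contested inputs the series above plans to examine, so treat this only as a reading aid for the terminology.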

The Nonlinear Library
EA - Racial and gender demographics at EA Global in 2022 by Amy Labenz

The Nonlinear Library

Play Episode Listen Later Mar 10, 2023 13:07


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Racial and gender demographics at EA Global in 2022, published by Amy Labenz on March 10, 2023 on The Effective Altruism Forum.

CEA has recently conducted a series of analyses to help us better understand how people of different genders and racial backgrounds experienced EA Global events in 2022 (not including EAGx). In response to some requests (like this one), we wanted to share some preliminary findings. This post is primarily going to report on some summary statistics. We are still investigating pieces of this picture but wanted to get the raw data out fast for others to look at, especially since we suspect this may help shed light on other broad trends within EA.

High-level summary

Attendees:
- 33% of registered attendees (and 35% of applicants) at EA Global events in 2022 self-reported as female or non-binary.
- 33% of registered attendees (and 38% of applicants) self-reported as people of color (“POC”).

Experiences:
- Attendees generally find EA Global welcoming (4.51/5 with 1–5 as options) and are likely to recommend it to others (9.03/10 with 0–10 as options).
- Women and non-binary attendees reported that they find EA Global slightly less welcoming (4.46/5 compared to 4.56/5 for men and 4.51 overall).
- Otherwise, we found no statistically significant difference in terms of feelings of welcomeness and overall recommendation scores across groups in terms of gender and race/ethnicity.

Speakers:
- 43% of speakers and MCs at EA Global events in 2022 were female or non-binary.
- 28% of speakers and MCs were people of color.

Some initial takeaways:
- A more diverse set of people apply to and attend EAG than complete the EA survey.
- Welcomingness and likelihood to recommend scores for women and POC were very similar to the overall scores. There is a small but statistically significant difference in welcomingness scores for women.
- We are not sure what to make of the fact that the application stats for POC were higher than the admissions stats. We are currently investigating whether this demographic is newer to EA (our best guess) and if that might be influencing the admissions rate.
- One update for our team is that women / non-binary speaker stats are higher compared to the applicant pool and this is not the case for POC. We had not realized that prior to conducting this analysis.
- The 2022 speaker statistics appear to be broadly in line with our statistics since London 2018 when we started tracking. We had significantly less diverse speakers prior to EAG London 2018.

Applicants and registrants

For EA Globals in 2022, our applicant pool was slightly more diverse in terms of race/ethnicity than our attendee pool (38% of applicants were POC vs. 33% of attendees), and around the same in terms of gender (35% of applicants were female or non-binary vs. 33% of attendees). For comparison, our attendee pool has about the same composition in terms of gender as the respondents in the 2020 EA Survey and is more diverse in terms of race/ethnicity than that survey. We had much more racial diversity at EAGx events outside of the US and Europe (e.g. EAGxSingapore, EAGxLatAm, and EAGxIndia, where POC were the majority). Generally, EAGx attendees end up later attending EAGs, so we think the events could result in more attendees from these locations. (However, due to funding constraints and their impact on travel grants, we expect this will not impact EAGs in 2023 as much as it might have otherwise.)

Experiences of attendees

Overall, attendees tend to find EA Global welcoming (4.51/5 with 1–5 as options) and are likely to recommend it to others (9.03/10 with 0–10 as options). Women and non-binary attendees reported slightly lower average scores on whether EA Global was “a place where [they] felt welcome” (women and non-binary attendees reported an average score of 4.46/5 vs an average of 4.56/5 for me...

The Nonlinear Library
EA - Global catastrophic risks law approved in the United States by JorgeTorresC

The Nonlinear Library

Play Episode Listen Later Mar 7, 2023 1:59


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Global catastrophic risks law approved in the United States, published by JorgeTorresC on March 7, 2023 on The Effective Altruism Forum.

Executive Summary

The enactment of the Global Catastrophic Risk Management Act represents a significant step forward in global catastrophic risk management. It is the first time a nation has undertaken a detailed analysis of these risks. The law orders the United States government to establish actions for prevention, preparation, and resilience in the face of catastrophic risks. Specifically, the United States government will be required to:

- Present a global catastrophic risk assessment to the US Congress.
- Develop a comprehensive risk mitigation plan involving the collaboration of sixteen designated US national agencies.
- Formulate a strategy for risk management under the leadership of the Secretary of Homeland Security and the Administrator of the Federal Emergency Management Agency of the US.
- Conduct a national exercise to test the strategy.
- Provide recommendations to the US Congress.

This legislation recognizes as global catastrophic risks: global pandemics, nuclear war, asteroid and comet impacts, supervolcanoes, sudden and severe changes in climate, and threats arising from the use and development of emerging technologies (such as artificial intelligence or engineered pandemics). Our article presents an overview of the legislation, followed by a comparative discussion of the international legislation of GCRs. Furthermore, we recommend considering similar laws for adoption within the Spanish-speaking context. Read more (in Spanish)

Riesgos Catastróficos Globales is a science-advocacy and research organization working on improving the management of global risks in Spanish-Speaking countries. You can support our organization with a donation. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

The Nonlinear Library
EA - Apply to attend EA conferences in Europe by OllieBase

The Nonlinear Library

Play Episode Listen Later Feb 28, 2023 2:55


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Apply to attend EA conferences in Europe, published by OllieBase on February 28, 2023 on The Effective Altruism Forum.

Europe is about to get significantly warmer and lighter. People like warmth and light, so we (CEA) have been busy organising several EA conferences in Europe over the next few months in partnership with local community-builders and EA groups:

- EAGxCambridge will take place at Guildhall, 17–19 March. Applications are open now and will close on Friday (3 March). Speakers include Lord Martin Rees, Saloni Dattani (Our World In Data) and Anders Sandberg (including a live interview for the Hear This Idea podcast).
- EAGxNordics will take place at Munchenbryggeriet, Stockholm, 21–23 April. Applications are open now and will close 28 March. If you register by 5 March, you can claim a discounted early bird ticket.
- EA Global: London will take place at Tobacco Dock, 19–21 May 2023. Applications are open now. If you were already accepted to EA Global: Bay Area, you can register for EAG London now; you don't need to apply again.
- EAGxWarsaw will take place at POLIN, 9–11 June 2023. Applications will open in the coming weeks.

You can apply to all of these events using the same application details, bar a few small questions specific to each event.

Which events should I apply to? (mostly pulled from our FAQ page)

EA Global is mostly aimed at people who have a solid understanding of the core ideas of EA and who are taking significant actions based on those ideas. Many EA Global attendees are already professionally working on effective-altruism-inspired projects or working out how best to work on such projects. EA Global is for EAs around the world and has no location restrictions (though we recommend applying ASAP if you will need a visa to enter the UK).

EAGx conferences have a lower bar. They are for people who are:
- Familiar with the core ideas of effective altruism;
- Interested in learning more about what to do with these ideas.

EAGx events also have a more regional focus:
- EAGxCambridge is for people who are based in the UK or Ireland, or have plans to move to the UK within the next year;
- EAGxNordics is primarily for people in the Nordics, but also welcomes international applications;
- EAGxWarsaw is primarily for people based in Eastern Europe but also welcomes international applications.

If you want to attend but are unsure about whether to apply, please err on the side of applying!

See e.g. Expat Explore on the “Best Time to Visit Europe”. Pew Research Center surveyed Americans on this matter (n = 2,260) and concluded that “Most Like It Hot”. There seem to be significant health benefits, though some people dislike sunlight. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

The Nonlinear Library
EA - EA Global in 2022 and plans for 2023 by Eli Nathan

The Nonlinear Library

Play Episode Listen Later Feb 23, 2023 5:15


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Global in 2022 and plans for 2023, published by Eli Nathan on February 23, 2023 on The Effective Altruism Forum.

Summary

We ran three EA Global events in 2022, in London, San Francisco, and Washington, D.C. These conferences all had ~1300–1500 people each and were some of our biggest yet:

- These events had an average score of 9.02 to the question “How likely is it that you would recommend EA Global to a friend or colleague with similar interests to your own?”, rated out of 10.
- Those who filled out our feedback survey (which was a minority of attendees, around 1200 individuals in total across all three events) reported over 36,000 new connections made.
- This was the first time we ran three EA Globals in one year since 2017, and we only had ~1200 attendees total across all three of those events.
- We hosted and recorded lots of new content, a substantial amount of which is located on our YouTube channel.
- This was the first time trialing out an EA conference in D.C. of any kind. We generally received positive feedback about this event from attendees and stakeholders.

Plans for 2023

We're reducing our spending in a lot of ways, most significantly by cutting some meals and the majority of travel grants, which we expect may somewhat reduce the overall ratings of our events. Please note that this is a fairly dynamic situation and we may update our spending plans as our financial situation changes. We're doing three EA Globals in 2023, in the Bay Area and London again, and with our US east coast event in Boston rather than D.C. As well as EA Globals, there are also several upcoming EAGx events; check out the full list of confirmed and provisional events below.

- EA Global: Bay Area | 24–26 February
- EAGxCambridge | 17–19 March
- EAGxNordics | 21–23 April
- EA Global: London | 19–21 May
- EAGxWarsaw | 9–11 June [provisional]
- EAGxNYC | July / August [provisional]
- EAGxBerlin | Early September [provisional]
- EAGxAustralia | Late September [provisional]
- EA Global: Boston | Oct 27–Oct 29
- EAGxVirtual | November [provisional]

We're aiming to have similar-sized conferences, though with the reduction in travel grants we expect the events to perhaps be a little smaller, maybe around 1000 people per EA Global. We recently completed a hiring round and now have ~4 FTEs working on the EA Global team. We've recently revamped our website and incorporated it into effectivealtruism.org — see here. We've switched over our backend systems from Zoho to Salesforce. This will help us integrate better with the rest of CEA's products, and will hopefully create a smoother front and backend that's better suited to our users. (Note that the switchover itself has been somewhat buggy, but we are clearing these up and hope to have minimal issues moving forwards.) We're also trialing a referral system for applications, where we've given a select number of advisors working in EA community building the ability to admit people to the conference. If this goes well we may expand this program next year.

Growth areas

Food got generally negative reviews in 2022:
- Food is a notoriously hard area to get right and quality can vary a lot between venues, and we often have little or no choice between catering options.
- We've explored ways to improve the food quality, including hiring a catering consultant, but a lot of these options are cost prohibitive, and realistically we expect food quality to continue to be an issue moving forwards.

Swapcard (our event application app) also got generally negative reviews in 2022:
- We explored and tested several competitor apps, though none of them seem better than Swapcard.
- We explored working with external developers to build our own event networking app, but eventually concluded that this would be too costly in terms of both time and money.
- We've been working with Swapcard to roll out new featur...