Podcasts about open philanthropy

  • 53 PODCASTS
  • 428 EPISODES
  • 30m AVG DURATION
  • 1 EPISODE EVERY OTHER WEEK
  • Nov 25, 2025 LATEST

POPULARITY

(Popularity chart, 2017–2024)


Best podcasts about open philanthropy

Latest podcast episodes about open philanthropy

Effective Altruism Forum Podcast
“Open Philanthropy Is Now Coefficient Giving” by Aaron Gertler

Effective Altruism Forum Podcast

Nov 25, 2025 · 3:46


Big news from Open Philanthropy: Today, Open Philanthropy is becoming Coefficient Giving. Our mission remains the same, but our new name marks our next chapter as we double down on our longstanding goal of helping more funders increase their impact. We believe philanthropy can be a far more vital force for progress than it is today; too often, great opportunities to help others go unfunded. As Coefficient Giving, our aim is to make it as easy as possible for donors to find and fund them. (For more on how we chose our new name, what's changing, and what's staying the same in this next chapter, see here.) The linked essay, from Coefficient CEO Alexander Berger, shares more about the change, our approach to giving, and why we're focused on growing our work with funders outside of Good Ventures. I also wanted to highlight some details that might be of particular interest to a Forum audience. If you have other questions, leave a comment and I'll do my best to respond! Any changes to your relationship with EA? Nope. While we do lots of work outside traditional EA cause areas, we still see EA as a community [...] --- First published: November 18th, 2025 Source: https://forum.effectivealtruism.org/posts/vkvtu6xbvfkHPhJkC/open-philanthropy-is-now-coefficient-giving --- Narrated by TYPE III AUDIO.

Faster, Please! — The Podcast

My fellow pro-growth/progress/abundance Up Wingers in America and around the world:What really gets AI optimists excited isn't the prospect of automating customer service departments or human resources. Imagine, rather, what might happen to the pace of scientific progress if AI becomes a super research assistant. Tom Davidson's new paper, How Quick and Big Would a Software Intelligence Explosion Be?, explores that very scenario.Today on Faster, Please! — The Podcast, I talk with Davidson about what it would mean for automated AI researchers to rapidly improve their own algorithms, thus creating a self-reinforcing loop of innovation. We talk about the economic effects of self-improving AI research and how close we are to that reality.Davidson is a senior research fellow at Forethought, where he explores AI and explosive growth. He was previously a senior research fellow at Open Philanthropy and a research scientist at the UK government's AI Security Institute.In This Episode* Making human minds (1:43)* Theory to reality (6:45)* The world with automated research (10:59)* Considering constraints (16:30)* Worries and what-ifs (19:07)Below is a lightly edited transcript of our conversation. Making human minds (1:43). . . you don't have to build any more computer chips, you don't have to build any more fabs . . . In fact, you don't have to do anything at all in the physical world.Pethokoukis: A few years ago, you wrote a paper called “Could Advanced AI Drive Explosive Economic Growth?,” which argued that growth could accelerate dramatically if AI would start generating ideas the way human researchers once did. In your view, population growth historically powered kind of an ideas feedback loop. More people meant more researchers meant more ideas, rising incomes, but that loop broke after the demographic transition in the late-19th century but you suggest that AI could restart it: more ideas, more output, more AI, more ideas. Does this new paper in a way build upon that paper? “How quick and big would a software intelligence explosion be?”The first paper you referred to is about the biggest-picture dynamic of economic growth. As you said, throughout the long run history, when we produced more food, the population increased. That additional output transferred itself into more people, more workers. These days that doesn't happen. When GDP goes up, that doesn't mean people have more kids. In fact, the demographic transition, the richer people get, the fewer kids they have. So now we've got more output, we're getting even fewer people as a result, so that's been blocked.This first paper is basically saying, look, if we can manufacture human minds or human-equivalent minds in any way, be it by building more computer chips, or making better computer chips, or any way at all, then that feedback loop gets going again. Because if we can manufacture more human minds, then we can spend output again to create more workers. That's the first paper.The second paper double clicks on one specific way that we can use output to create more human minds. It's actually, in a way, the scariest way because it's the way of creating human minds which can happen the quickest. So this is the way where you don't have to build any more computer chips, you don't have to build any more fabs, as they're called, these big factories that make computer chips. 
In fact, you don't have to do anything at all in the physical world.It seems like most of the conversation has been about how much investment is going to go into building how many new data centers, and that seems like that is almost the entire conversation, in a way, at the moment. But you're not looking at compute, you're looking at software.Exactly, software. So the idea is you don't have to build anything. You've already got loads of computer chips and you just make the algorithms that run the AIs on those computer chips more efficient. This is already happening, but it isn't yet a big deal because AI isn't that capable. But already, one year out, Epoch, this AI forecasting organization, estimates that just in one year, it becomes 10 times to 1000 times cheaper to run the same AI system. Just wait 12 months, and suddenly, for the same budget, you are able to run 10 times as many AI systems, or maybe even 1000 times as many for their most aggressive estimate. As I said, not a big deal today, but if we then develop an AI system which is better than any human at doing research, then now, in 10 months, you haven't built anything, but you've got 10 times as many researchers that you can set to work or even more than that. So then we get this feedback loop where you make some research progress, you improve your algorithms, now you've got loads more researchers, you set them all to work again, finding even more algorithmic improvements. So today we've got maybe a few hundred people that are advancing state-of-the-art AI algorithms.I think they're all getting paid a billion dollars a person, too.Exactly. But maybe we can 10x that initially by having them replaced by AI researchers that do the same thing. But then those AI researchers improve their own algorithms. Now you have 10x as many again, you have them building more computer chips, you're just running them more efficiently, and then the cycle continues. You're throwing more and more of these AI researchers at AI progress itself, and the algorithms are improving in what might be a very powerful feedback loop.In this case, it seems me that you're not necessarily talking about artificial general intelligence. This is certainly a powerful intelligence, but it's narrow. It doesn't have to do everything, it doesn't have to play chess, it just has to be able to do research.It's certainly not fully general. You don't need it to be able to control a robot body. You don't need it to be able to solve the Riemann hypothesis. You don't need it to be able to even be very persuasive or charismatic to a human. It's not narrow, I wouldn't say, it has to be able to do literally anything that AI researchers do, and that's a wide range of tasks: They're coding, they're communicating with each other, they're managing people, they are planning out what to work on, they are thinking about reviewing the literature. There's a fairly wide range of stuff. It's extremely challenging. It's some of the hardest work in the world to do, so I wouldn't say it's now, but it's not everything. 
It's some kind of intermediate level of generality in between a mere chess algorithm that just does chess and the kind of AGI that can literally do anything.Theory to reality (6:45)I think it's a much smaller gap for AI research than it is for many other parts of the economy.I think people who are cautiously optimistic about AI will say something like, “Yeah, I could see the kind of intelligence you're referring to coming about within a decade, but it's going to take a couple of big breakthroughs to get there.” Is that true, or are we actually getting pretty close?Famously, predicting the future of technology is very, very difficult. Just a few years before people invented the nuclear bomb, famous, very well-respected physicists were saying, “It's impossible, this will never happen.” So my best guess is that we do need a couple of fairly non-trivial breakthroughs. So we had the start of RL training a couple of years ago, became a big deal within the language model paradigm. I think we'll probably need another couple of breakthroughs of that kind of size.We're not talking a completely new approach, throw everything out, but we're talking like, okay, we need to extend the current approach in a meaningfully different way. It's going to take some inventiveness, it's going to take some creativity, we're going to have to try out a few things. I think, probably, we'll need that to get to the researcher that can fully automate OpenAI, is a nice way of putting it — OpenAI doesn't employ any humans anymore, they've just got AIs there.There's a difference between what a model can do on some benchmark versus becoming actually productive in the real world. That's why, while all the benchmark stuff is interesting, the thing I pay attention to is: How are businesses beginning to use this technology? Because that's the leap. What is that gap like, in your scenario, versus an AI model that can do a theoretical version of the lab to actually be incorporated in a real laboratory?It's definitely a gap. I think it's a pretty big gap. I think it's a much smaller gap for AI research than it is for many other parts of the economy. Let's say we are talking about car manufacturing and you're trying to get an AI to do everything that happens there. Man, it's such a messy process. There's a million different parts of the supply chain. There's all this tacit knowledge and all the human workers' minds. It's going to be really tough. There's going to be a very big gap going from those benchmarks to actually fully automating the supply chain for cars.For automating what OpenAI does, there's still a gap, but it's much smaller, because firstly, all of the work is virtual. Everyone at OpenAI could, in principle, work remotely. Their top research scientists, they're just on a computer all day. They're not picking up bricks and doing stuff like that. So also that already means it's a lot less messy. You get a lot less of that kind of messy world reality stuff slowing down adoption. And also, a lot of it is coding, and coding is almost uniquely clean in that, for many coding tasks, you can define clearly defined metrics for success, and so that makes AI much better. You can just have a go. Did AI succeed in the test? 
If not, try something else or do a gradient set update.That said, there's still a lot of messiness here, as any coder will know, when you're writing good code, it's not just about whether it does the function that you've asked it to do, it needs to be well-designed, it needs to be modular, it needs to be maintainable. These things are much harder to evaluate, and so AIs often pass our benchmarks because they can do the function that you asked it to do, the code runs, but they kind of write really spaghetti code — code that no one wants to look at, that no one can understand, and so no company would want to use that.So there's still going to be a pretty big benchmark-to-reality gap, even for OpenAI, and I think that's one of the big uncertainties in terms of, will this happen in three years versus will this happen in 10 years, or even 15 years?Since you brought up the timeline, what's your guess? I didn't know whether to open with that question or conclude with that question — we'll stick it right in the middle of our chat.Great. Honestly, my best guess about this does change more often than I would like it to, which I think tells us, look, there's still a state of flux. This is just really something that's very hard to know about. Predicting the future is hard. My current best guess is it's about even odds that we're able to fully automate OpenAI within the next 10 years. So maybe that's a 50-50.The world with AI research automation (10:59). . . I'm talking about 30 percent growth every year. I think it gets faster than that. If you want to know how fast it eventually gets, you can think about the question of how fast can a kind of self-replicating system double itself?So then what really would be the impact of that kind of AI research automation? How would you go about quantifying that kind of acceleration? What does the world look like?Yeah, so many possibilities, but I think what strikes me is that there is a plausible world where it is just way, way faster than almost everyone is expecting it to be. So that's the world where you fully automate OpenAI, and then we get that feedback loop that I was talking about earlier where AIs make their algorithms way more efficient, now you've got way more of them, then they make their algorithms way more efficient again, now they're way smarter. Now they're thinking a hundred times faster. The feedback loop continues and maybe within six months you now have a billion superintelligent AIs running on this OpenAI data center. The combined cognitive abilities of all these AIs outstrips the whole of the United States, outstrips anything we've seen from any kind of company or entity before, and they can all potentially be put towards any goal that OpenAI wants to. And then there's, of course, the risk that OpenAI's lost control of these systems, often discussed, in which case these systems could all be working together to pursue a particular goal. And so what we're talking about here is really a huge amount of power. It's a threat to national security for any government in which this happens, potentially. It is a threat to everyone if we lose control of these systems, or if the company that develops them uses them for some kind of malicious end. 
And, in terms of economic impacts, I personally think that that again could happen much more quickly than people think, and we can get into that.In the first paper we mentioned, it was kind of a thought experiment, but you were really talking about moving the decimal point in GDP growth, instead of talking about two and three percent, 20 and 30 percent. Is that the kind of world we're talking about?I speak to economists a lot, and —They hate those kinds of predictions, by the way.Obviously, they think I'm crazy. Not all of them. There are economists that take it very seriously. I think it's taken more seriously than everyone else realizes. It's like it's a bit embarrassing, at the moment, to admit that you take it seriously, but there are a few really senior economists who absolutely know their stuff. They're like, “Yep, this checks out. I think that's what's going to happen.” And I've had conversation with them where they're like, “Yeah, I think this is going to happen.” But the really loud, dominant view where I think people are a little bit scared to speak out against is they're like, “Obviously this is sci-fi.”One analogy I like to give to people who are very, very confident that this is all sci-fi and it's rubbish is to imagine that we were sitting there in the year 1400, imagine we had an economics professor who'd been studying the rate of economic growth, and they've been like, “Yeah, we've always had 0.1 percent growth every single year throughout history. We've never seen anything higher.” And then there was some kind of futurist economist rogue that said, “Actually, I think that if I extrapolate the curves in this way and we get this kind of technology, maybe we could have one percent growth.” And then all the other economists laugh at them, tell them they're insane – that's what happened. In 1400, we'd never had growth that was at all fast, and then a few hundred years later, we developed industrial technology, we started that feedback loop, we were investing more and more resources in scientific progress and in physical capital, and we did see much faster growth.So I think it can be useful to try and challenge economists and say, “Okay, I know it sounds crazy, but history was crazy. This crazy thing happened where growth just got way, way faster. No one would've predicted it. You would not have predicted it.” And I think being in that mindset can encourage people to be like, “Yeah, okay. You know what? Maybe if we do get AI that's really that powerful, it can really do everything, and maybe it is possible.”But to answer your question, yeah, I'm talking about 30 percent growth every year. I think it gets faster than that. If you want to know how fast it eventually gets, you can think about the question of how fast can a kind of self-replicating system double itself? So ultimately, what the economy is going to be like is it's going to have robots and factories that are able to fully create new versions of themselves. Everything you need: the roads, the electricity, the robots, the buildings, all of that will be replicated. And so you can look at actually biology and say, do we have any examples of systems which fully replicate themselves? How long does it take? And if you look at rats, for example, they're able to double the number of rats by grabbing resources from the environment, and giving birth, and whatnot. The doubling time is about six weeks for some types of rats. 
So that's an example of here's a physical system — ultimately, everything's made of physics — a physical system that has some intelligence that's able to go out into the world, gather resources, replicate itself. The doubling time is six weeks.Now, who knows how long it'll take us to get to AI that's that good? But when we do, you could see the whole physical economy, maybe a part that humans aren't involved with, a whole automated city without any humans just doubling itself every few weeks. If that happens, and the amount of stuff we're able to reduce as a civilization is doubling again on the order of weeks. And, in fact, there are some animals that double faster still, in days, but that's the kind of level of craziness. Now we're talking about 1000 percent growth, at that point. We don't know how crazy it could get, but I think we should take even the really crazy possibilities, we shouldn't fully rule them out.Considering constraints (16:30)I really hope people work less. If we get this good future, and the benefits are shared between all . . . no one should work. But that doesn't stop growth . . .There's this great AI forecast chart put out by the Federal Reserve Bank of Dallas, and I think its main forecast — the one most economists would probably agree with — has a line showing AI improving GDP by maybe two tenths of a percent. And then there are two other lines: one is more or less straight up, and the other one is straight down, because in the first, AI created a utopia, and in the second, AI gets out of control and starts killing us, and whatever. So those are your three possibilities.If we stick with the optimistic case for a moment, what constraints do you see as most plausible — reduced labor supply from rising incomes, social pushback against disruption, energy limits, or something else?Briefly, the ones you've mentioned, people not working, 100 percent. I really hope people work less. If we get this good future, and the benefits are shared between all — which isn't guaranteed — if we get that, then yeah, no one should work. But that doesn't stop growth, because when AI and robots can do everything that humans do, you don't need humans in the loop anymore. That whole thing is just going and kind of self-replicating itself and making as many goods as services as we want. Sure, if you want your clothes to be knitted by a human, you're in trouble, then your consumption is stuck. Bad luck. If you're happy to consume goods and services produced by AI systems or robots, fine if no one wants to work.Pushback: I think, for me, this is the biggest one. Obviously, the economy doubling every year is very scary as a thought. Tech progress will be going much faster. Imagine if you woke up and, over the course of the year, you go from not having any telephones at all in the world, to everyone's on their smartphones and social media and all the apps. That's a transition that took decades. If that happened in a year, that would be very disconcerting.Another example is the development of nuclear weapons. Nuclear weapons were developed over a number of years. If that happened in a month, or two months, that could be very dangerous. There'd be much less time for different countries, different actors to figure out how they're going to handle it. So I think pushback is the strongest one that we might as a society choose, “Actually, this is insane. 
We're going to go slower than we could.” That requires, potentially, coordination, but I think there would be broad support for some degree of coordination there.Worries and what-ifs (19:07)If suddenly no one has any jobs, what will we want to do with ourselves? That's a very, very consequential transition for the nature of human society.I imagine you certainly talk with people who are extremely gung-ho about this prospect. What is the common response you get from people who are less enthusiastic? Do they worry about a future with no jobs? Maybe they do worry about the existential kinds of issues. What's your response to those people? And how much do you worry about those things?I think there are loads of very worrying things that we're going to be facing. One class of pushback, which I think is very common, is worries about employment. It's a source of income for all of us, employment, but also, it's a source of pride, it's a source of meaning. If suddenly no one has any jobs, what will we want to do with ourselves? That's a very, very consequential transition for the nature of human society. I think people aren't just going to be down to just do it. I think people are scared about three AI companies literally now taking all the revenues that all of humanity used to be earning. It is naturally a very scary prospect. So that's one kind of pushback, and I'm sympathetic with it.I think that there are solutions, if we find a way to tax AI systems, which isn't necessarily easy, because it's very easy to move physical assets between countries. It's a lot easier to tax labor than capital already when rich people can move their assets around. We're going to have the same problem with AI, but if we can find a way to tax it, and we maintain a good democratic country, and we can just redistribute the wealth broadly, it can be solved. So I think it's a big problem, but it is doable.Then there's the problem of some people want to stop this now because they're worried about AI killing everyone. Their literally worry is that everyone will be dead because superintelligent AI will want that to happen. I think there's a real risk there. It's definitely above one percent, in my opinion. I wouldn't go above 10 percent, myself, but I think it's very scary, and that's a great reason to slow things down. I personally don't want to stop quite yet. I think you want to stop when the AI is a bit more powerful and a bit more useful than it is today so it can kind of help us figure out what to do about all of this crazy stuff that's coming.On what side of that line is AI as an AI researcher?That's a really great question. Should we stop? I think it's very hard to stop just after you've got the AI researcher AI, because that's when it's suddenly really easy to go very, very fast. So my out-of-the-box proposal here, which is probably very flawed, would be: When we're within a few spits distance — not spitting distance, but if you did that three times, and we can see we're almost at that AI automating OpenAI — then you pause, because you're not going to accidentally then go all the way. It is actually still a little bit a fair distance away, but it's actually still, at that point, probably a very powerful AI that can really help.Then you pause and do what?Great question. 
So then you pause, and you use your AI systems to help you firstly solve the problem of AI alignment, make extra, double sure that every time we increase the notch of AI capabilities, the AI is still loyal to humanity, not to its own kind of secret goals.Secondly, you solve the problem of, how are we going to make sure that no one person in government or no one CEO of an AI company ensures that this whole AI army is loyal to them, personally? How are we going to ensure that everyone, the whole world gets influenced over what this AI is ultimately programmed to do? That's the second problem.And then there's just a whole host of other things: unemployment that we've talked about, competition between different countries, US and China, there's a whole host of other things that I think you want to research on, figure out, get consensus on, and then slowly ratchet up the capabilities in what is now a very safe and controlled way.What else should we be working on? What are you working on next?One problem I'm excited about is people have historically worried about AI having its own goals. We need to make it loyal to humanity. But as we've got closer, it's become increasingly obvious, “loyalty to humanity” is very vague. What specifically do you want the AI to be programmed to do? I mean, it's not programmed, it's grown, but if it were programmed, if you're writing a rule book for AI, some organizations have employee handbooks: Here's the philosophy of the organization, here's how you should behave. Imagine you're doing that for the AI, but you're going super detailed, exactly how you want your AI assistant to behave in all kinds of situations. What should that be? Essentially, what should we align the AI to? Not any individual person, probably following the law, probably loads of other things. I think basically designing what is the character of this AI system is a really exciting question, and if we get that right, maybe the AI can then help us solve all these other problems.Maybe you have no interest in science fiction, but is there any film, TV, book that you think is useful for someone in your position to be aware of, or that you find useful in any way? Just wondering.I think there's this great post called “AI 2027,” which lays out a concrete scenario for how AI could go wrong or how maybe it could go right. I would recommend that. I think that's the only thing that's coming top of mind. I often read a lot of the stuff I read is I read a lot of LessWrong, to be honest. There's a lot of stuff from there that I don't love, but a lot of new ideas, interesting content there.Any fiction?I mean, I read fiction, but honestly, I don't really love the AI fiction that I've read because often it's quite unrealistic, and so I kind of get a bit overly nitpicky about it. But I mean, yeah, there's this book called Harry Potter and the Methods of Rationality, which I read maybe 10 years ago, which I thought was pretty fun.On sale everywhere The Conservative Futurist: How To Create the Sci-Fi World We Were Promised Faster, Please! is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit fasterplease.substack.com/subscribe
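The arithmetic behind the growth claims in this conversation is easy to check with a toy calculation. The sketch below is an illustration only, not anything from the episode: the efficiency multiplier and the doubling times are assumed values, chosen just to show how a fixed doubling time translates into an annual growth factor, and how a compounding "better algorithms → more researchers → better algorithms" loop behaves.

```python
# Toy illustration (not from the episode). All parameters are assumptions.

def annual_growth_from_doubling(doubling_weeks: float) -> float:
    """Annual growth factor implied by a fixed doubling time."""
    doublings_per_year = 52 / doubling_weeks
    return 2 ** doublings_per_year

# A six-week doubling time (the rat example) implies roughly a 400x annual
# growth factor -- far beyond the "30 percent per year" starting point.
print(f"6-week doubling -> {annual_growth_from_doubling(6):,.0f}x per year")
print(f"1-year doubling -> {annual_growth_from_doubling(52):.1f}x per year")

# Crude compounding loop: assume each research "round" multiplies algorithmic
# efficiency by a fixed factor, which multiplies how many AI researchers the
# same hardware can run, which feeds the next round.
researchers = 1.0        # effective AI-researcher population (arbitrary units)
efficiency_gain = 2.0    # assumed efficiency multiplier per monthly round
for month in range(1, 13):
    researchers *= efficiency_gain
    print(f"month {month:2d}: ~{researchers:,.0f}x the initial researcher population")
```

Under these made-up parameters the researcher population is thousands of times larger within a year; the point of the sketch is only that a constant per-round multiplier compounds very quickly, which is the dynamic Davidson describes.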

Effective Altruism Forum Podcast
“Leaving Open Philanthropy, going to Anthropic” by Joe_Carlsmith

Effective Altruism Forum Podcast

Nov 6, 2025 · 32:02


(Audio version, read by the author, here, or search for "Joe Carlsmith Audio" on your podcast app.) Last Friday was my last day at Open Philanthropy. I'll be starting a new role at Anthropic in mid-November, helping with the design of Claude's character/constitution/spec. This post reflects on my time at Open Philanthropy, and it goes into more detail about my perspective and intentions with respect to Anthropic – including some of my takes on AI-safety-focused people working at frontier AI companies. (I shared this post with Open Phil and Anthropic comms before publishing, but I'm speaking only for myself and not for Open Phil or Anthropic.)On my time at Open PhilanthropyI joined Open Philanthropy full-time at the beginning of 2019.[1] At the time, the organization was starting to spin up a new “Worldview Investigations” team, aimed at investigating and documenting key beliefs driving the organization's cause prioritization – and with a special focus on how the organization should think about the potential impact at stake in work on transformatively powerful AI systems.[2] I joined (and eventually: led) the team devoted to this effort, and it's been an amazing project to be a part of. I remember [...] ---Outline:(00:51) On my time at Open Philanthropy(08:11) On going to Anthropic --- First published: November 3rd, 2025 Source: https://forum.effectivealtruism.org/posts/EFF6wSRm9h7Xc6RMt/leaving-open-philanthropy-going-to-anthropic --- Narrated by TYPE III AUDIO.

Joe Carlsmith Audio
Leaving Open Philanthropy, going to Anthropic

Joe Carlsmith Audio

Nov 3, 2025 · 32:09


On a career move, and on AI-safety-focused people working at AI companies. Text version here: https://joecarlsmith.com/2025/11/03/leaving-open-philanthropy-going-to-anthropic/

80,000 Hours Podcast with Rob Wiblin
Holden Karnofsky on dozens of amazing opportunities to make AI safer – and all his AGI takes

80,000 Hours Podcast with Rob Wiblin

Oct 30, 2025 · 270:19


For years, working on AI safety usually meant theorising about the ‘alignment problem' or trying to convince other people to give a damn. If you could find any way to help, the work was frustrating and low feedback. According to Anthropic's Holden Karnofsky, this situation has now reversed completely. There are now many useful, concrete, shovel-ready projects with clear goals and deliverables. Holden thinks people haven't appreciated the scale of the shift, and wants everyone to see the large range of ‘well-scoped object-level work' they could personally help with, in both technical and non-technical areas. Video, full transcript, and links to learn more: https://80k.info/hk25
In today's interview, Holden — previously cofounder and CEO of Open Philanthropy — lists 39 projects he's excited to see happening, including:
  • Training deceptive AI models to study deception and how to detect it
  • Developing classifiers to block jailbreaking
  • Implementing security measures to stop ‘backdoors' or ‘secret loyalties' from being added to models in training
  • Developing policies on model welfare, AI-human relationships, and what instructions to give models
  • Training AIs to work as alignment researchers
And that's all just stuff he's happened to observe directly, which is probably only a small fraction of the options available. Holden makes a case that, for many people, working at an AI company like Anthropic will be the best way to steer AGI in a positive direction. He notes there are “ways that you can reduce AI risk that you can only do if you're a competitive frontier AI company.” At the same time, he believes external groups have their own advantages and can be equally impactful. Critics worry that Anthropic's efforts to stay at that frontier encourage competitive racing towards AGI — significantly or entirely offsetting any useful research they do. Holden thinks this seriously misunderstands the strategic situation we're in — and explains his case in detail with host Rob Wiblin. Chapters:Cold open (00:00:00)Holden is back! (00:02:26)An AI Chernobyl we never notice (00:02:56)Is rogue AI takeover easy or hard? (00:07:32)The AGI race isn't a coordination failure (00:17:48)What Holden now does at Anthropic (00:28:04)The case for working at Anthropic (00:30:08)Is Anthropic doing enough? (00:40:45)Can we trust Anthropic, or any AI company? (00:43:40)How can Anthropic compete while paying the “safety tax”? (00:49:14)What, if anything, could prompt Anthropic to halt development of AGI? (00:56:11)Holden's retrospective on responsible scaling policies (00:59:01)Overrated work (01:14:27)Concrete shovel-ready projects Holden is excited about (01:16:37)Great things to do in technical AI safety (01:20:48)Great things to do on AI welfare and AI relationships (01:28:18)Great things to do in biosecurity and pandemic preparedness (01:35:11)How to choose where to work (01:35:57)Overrated AI risk: Cyberattacks (01:41:56)Overrated AI risk: Persuasion (01:51:37)Why AI R&D is the main thing to worry about (01:55:36)The case that AI-enabled R&D wouldn't speed things up much (02:07:15)AI-enabled human power grabs (02:11:10)Main benefits of getting AGI right (02:23:07)The world is handling AGI about as badly as possible (02:29:07)Learning from targeting companies for public criticism in farm animal welfare (02:31:39)Will Anthropic actually make any difference?
(02:40:51)“Misaligned” vs “misaligned and power-seeking” (02:55:12)Success without dignity: how we could win despite being stupid (03:00:58)Holden sees less dignity but has more hope (03:08:30)Should we expect misaligned power-seeking by default? (03:15:58)Will reinforcement learning make everything worse? (03:23:45)Should we push for marginal improvements or big paradigm shifts? (03:28:58)Should safety-focused people cluster or spread out? (03:31:35)Is Anthropic vocal enough about strong regulation? (03:35:56)Is Holden biased because of his financial stake in Anthropic? (03:39:26)Have we learned clever governance structures don't work? (03:43:51)Is Holden scared of AI bioweapons? (03:46:12)Holden thinks AI companions are bad news (03:49:47)Are AI companies too hawkish on China? (03:56:39)The frontier of infosec: confidentiality vs integrity (04:00:51)How often does AI work backfire? (04:03:38)Is AI clearly more impactful to work in? (04:18:26)What's the role of earning to give? (04:24:54)This episode was recorded on July 25 and 28, 2025.Video editing: Simon Monsour, Luke Monsour, Dominic Armstrong, and Milo McGuireAudio engineering: Milo McGuire, Simon Monsour, and Dominic ArmstrongMusic: CORBITCoordination, transcriptions, and web: Katy Moore

80,000 Hours Podcast with Rob Wiblin
GPT-7 Might Democratise Bioweapons. Are We Ready? | Andrew Snyder-Beattie

80,000 Hours Podcast with Rob Wiblin

Oct 2, 2025 · 151:11


Conventional wisdom is that safeguarding humanity from the worst biological risks — microbes optimised to kill as many as possible — is difficult bordering on impossible, making bioweapons humanity's single greatest vulnerability. Andrew Snyder-Beattie thinks conventional wisdom could be wrong. Andrew's job at Open Philanthropy is to spend hundreds of millions of dollars to protect as much of humanity as possible in the worst-case scenarios — those with fatality rates near 100% and the collapse of technological civilisation a live possibility. Video, full transcript, and links to learn more: https://80k.info/asb
As Andrew lays out, there are several ways this could happen, including:
  • A national bioweapons programme gone wrong, in particular Russia or North Korea
  • AI advances making it easier for terrorists or a rogue AI to release highly engineered pathogens
  • Mirror bacteria that can evade the immune systems of not only humans, but many animals and potentially plants as well
Most efforts to combat these extreme biorisks have focused on either prevention or new high-tech countermeasures. But prevention may well fail, and high-tech approaches can't scale to protect billions when, with no sane people willing to leave their home, we're just weeks from economic collapse. So Andrew and his biosecurity research team at Open Philanthropy have been seeking an alternative approach. They're proposing a four-stage plan using simple technology that could save most people, and is cheap enough it can be prepared without government support. Andrew is hiring for a range of roles to make it happen — from manufacturing and logistics experts to global health specialists to policymakers and other ambitious entrepreneurs — as well as programme associates to join Open Philanthropy's biosecurity team (apply by October 20!). Fundamentally, organisms so small have no way to penetrate physical barriers or shield themselves from UV, heat, or chemical poisons. We now know how to make highly effective ‘elastomeric' face masks that cost $10, can sit in storage for 20 years, and can be used for six months straight without changing the filter. Any rich country could trivially stockpile enough to cover all essential workers. People can't wear masks 24/7, but fortunately propylene glycol — already found in vapes and smoke machines — is astonishingly good at killing microbes in the air. And, being a common chemical input, industry already produces enough of the stuff to cover every indoor space we need at all times. Add to this the wastewater monitoring and metagenomic sequencing that will detect the most dangerous pathogens before they have a chance to wreak havoc, and we might just buy ourselves enough time to develop the cure we'll need to come out alive. Has everyone been wrong, and biology is actually defence dominant rather than offence dominant? Is this plan crazy — or so crazy it just might work? That's what host Rob Wiblin and Andrew Snyder-Beattie explore in this in-depth conversation. What did you think of the episode? https://forms.gle/66Hw5spgnV3eVWXa6
Chapters:Cold open (00:00:00)Who's Andrew Snyder-Beattie? (00:01:23)It could get really bad (00:01:57)The worst-case scenario: mirror bacteria (00:08:58)To actually work, a solution has to be low-tech (00:17:40)Why ASB works on biorisks rather than AI (00:20:37)Plan A is prevention. But it might not work. 
(00:24:48)The “four pillars” plan (00:30:36)ASB is hiring now to make this happen (00:32:22)Everyone was wrong: biorisks are defence dominant in the limit (00:34:22)Pillar 1: A wall between the virus and your lungs (00:39:33)Pillar 2: Biohardening buildings (00:54:57)Pillar 3: Immediately detecting the pandemic (01:13:57)Pillar 4: A cure (01:27:14)The plan's biggest weaknesses (01:38:35)If it's so good, why are you the only group to suggest it? (01:43:04)Would chaos and conflict make this impossible to pull off? (01:45:08)Would rogue AI make bioweapons? Would other AIs save us? (01:50:05)We can feed the world even if all the plants die (01:56:08)Could a bioweapon make the Earth uninhabitable? (02:05:06)Many open roles to solve bio-extinction — and you don't necessarily need a biology background (02:07:34)Career mistakes ASB thinks are common (02:16:19)How to protect yourself and your family (02:28:21)This episode was recorded on August 12, 2025Video editing: Simon Monsour and Luke MonsourAudio engineering: Milo McGuire, Simon Monsour, and Dominic ArmstrongMusic: CORBITCamera operator: Jake MorrisCoordination, transcriptions, and web: Katy Moore

Clearer Thinking with Spencer Greenberg
The most important century (with Holden Karnofsky)

Clearer Thinking with Spencer Greenberg

Aug 27, 2025 · 108:10


Has society reached ‘peak progress'? Can we sustain the level of economic growth that technology has enabled over the last century? Have researchers plucked the last of science's "low-hanging fruit?" Why did early science innovators have outsized impact per capita? As fields mature, why does per-researcher output fall? Can a swarm of AI systems materially accelerate research? What does exponential growth hide about the risk of collapse? Will specialized AI outcompete human polymaths? Is quality of life still improving - and how confident are we in those measures? Is it too late to steer away from the attention economy? Can our control over intelligent systems scale as we develop their power? Will AI ever be capable of truly understanding human values? And if we reach that point, will it choose to align itself?Holden Karnofsky is a Member of Technical Staff at Anthropic, where he focuses on the design of the company's Responsible Scaling Policy and other aspects of preparing for the possibility of highly advanced AI systems in the future. Prior to his work with Anthropic, Holden led several high-impact organizations as the co-founder and co-executive director of charity evaluator GiveWell, and one of three Managing Directors of grantmaking organization Open Philanthropy. You can read more about ideas that matter to Holden at his blog Cold Takes.Further reading:Holden's "most important century" seriesResponsible scaling policiesHolden's thoughts on sustained growthStaffSpencer Greenberg — Host / DirectorJosh Castle — ProducerRyan Kessler — Audio EngineerUri Bram — FactotumWeAmplify — TranscriptionistsIgor Scaldini — Marketing ConsultantMusicBroke for FreeJosh WoodwardLee RosevereQuiet Music for Tiny Robotswowamusiczapsplat.comAffiliatesClearer ThinkingGuidedTrackMind EasePositlyUpLift[Read more]

How I Learned to Love Shrimp
Lewis Bollard on the strategies that win, the traps to avoid, and why.

How I Learned to Love Shrimp

Aug 19, 2025 · 83:53 · Transcription available


Lewis Bollard should be a familiar name for those trying to improve the lives of farmed animals. For almost 10 years, he's been running the farm animal welfare program at Open Philanthropy, the largest funder in the fight against factory farming. It's hard to think of someone who's had a larger positive influence on the modern farm animal welfare movement. In this conversation, we discuss the many different approaches we can take to help animals. We talk about why some strategies are less promising than they initially seem, common misconceptions he sees amongst advocates, why creating good strategy is hard, where the movement is under-investing and what he's learned from other social movements. See the full transcript on our Buzzsprout.
Chapters:What Lewis has changed his mind on (00:02:33)The challenges of institutional meat reduction (00:06:05) Lewis' pessimism on animal welfare litigation (00:11:58)The case for animal welfare technologies (00:14:42)Why blocking new farms may not help (or even make things worse) (00:18:24)What Lewis thinks advocates commonly get wrong (00:23:11)Incrementalism vs moonshots & the speed of social change (00:26:50)What is the movement under-investing in? (00:36:44)Challenges in scaling large organisations (00:41:46)Prop 12 and the future of US legislation (00:45:43)How can we improve our political advocacy? (00:50:01)What can we do in the Global South? (00:55:32)How will transformative AI affect animal advocacy? (01:07:43)What is tough about Lewis' role? (01:15:41)
Resources:
  • For more about Open Phil as a funder, check out our episode with Amanda Hungerford
  • Lewis' substack
  • Lewis' AMA on the EA Forum
  • Lewis' Twitter
If you enjoy the show, please leave a rating and review us - we would really appreciate it! Likewise, feel free to share it with anyone who you think might enjoy it. You can send us feedback and guest recommendations via Twitter or email us at hello@howilearnedtoloveshrimp.com. Enjoy!

The Lunar Society
The surprising economics of the meat industry – Lewis Bollard

The Lunar Society

Aug 7, 2025 · 68:05


A deep dive with Lewis Bollard, who leads Open Philanthropy's strategy for Farm Animal Welfare, on the surprising economics of the meat industry.Why is factory farming so efficient? How can we make the lives of the 23+ billion animals living on factory farms more bearable? How far off are the moonshots (e.g., brainless chickens, cultivated meats, etc.) to end this mass suffering? And why does the meat industry have such a surprising amount of political influence?For decades, innovation in the meat industry has actually made the conditions for animals worse. Can the next few decades of tech reverse this pattern?Watch on YouTube; listen on Apple Podcasts or Spotify.Donation match fundraiserThe welfare of animals on factory farms is so systemically neglected that just $1 can help avert 10 years of animal suffering.After learning more about the outsized opportunities to help, I decided to give $250,000 as a donation match to farmkind.giving/dwarkesh. FarmKind directs your contributions to the most effective charities in this area.Please consider contributing, even if it's a small amount. Together, we can double each other's impact and give a total of $500,000.Bluntly, there are some listeners who are in a position to give much more. Given how neglected this topic is, one such person could singlehandedly change the game for 10s of billions of animals. If you're considering donating $50k or more, please reach out directly to Lewis and his team by emailing andres@openphilanthropy.org.Timestamps(00:00:00) – The astonishing efficiency of factory farming(00:07:18) – It was a mistake making this about diet(00:09:54) – Tech that's sparing 100s of millions of animals/year(00:16:16) – Brainless chickens and higher welfare breeds(00:28:21) – $1 can prevent 10 years of animal suffering(00:37:26) – Situation in China and the developing world(00:41:41) – How the meat lobby got a lock on Congress(00:53:23) – Business structure of the meat industry(00:57:42) – Corporate campaigns are underrated Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe

Centre for Cities
City Talks: Remote working and the future of cities

Centre for Cities

Jul 16, 2025 · 50:40


Chief Executive Andrew Carter is joined by Matt Clancy, Senior Programme Officer at Open Philanthropy and the creator of New Things Under the Sun, a living literature review on social science research about innovation. Drawing on Matt's interest in science, innovation and technology they discuss hybrid working, productivity, cities and the future of work.

Effective Altruism Forum Podcast
“Open Philanthropy: Reflecting on our Recent Effective Giving RFP” by Melanie Basnak

Effective Altruism Forum Podcast

Jun 21, 2025 · 7:37


Earlier this year, we launched a request for proposals (RFP) from organizations that fundraise for highly cost-effective charities. The Livelihood Impact Fund supported the RFP, as did two donors from Meta Charity Funders. We're excited to share the results: $1,565,333 in grants to 11 organizations. We estimate a weighted average ROI of ~4.3x across the portfolio, which means we expect our grantees to raise more than $6 million in adjusted funding over the next 1-2 years. Who's receiving funding These organizations span different regions, donor audiences, and outreach strategies. Here's a quick overview: Charity Navigator (United States) — $200,000 Charity Navigator recently acquired Causeway, through which they now recommend charities with a greater emphasis on impact across a portfolio of cause areas. This grant supports Causeway's growth and refinement, with the aim of nudging donors toward curated higher-impact giving funds. Effectief Geven (Belgium) — $108,000 Newly incubated, with [...] ---Outline:(00:49) Who's receiving funding(04:32) Why promising applications sometimes didn't meet our bar(05:54) What we learned--- First published: June 16th, 2025 Source: https://forum.effectivealtruism.org/posts/prddJRsZdFjpm6yzs/open-philanthropy-reflecting-on-our-recent-effective-giving --- Narrated by TYPE III AUDIO.

Using the Whole Whale Podcast
AI Fundraising Agent Experiments Hint At Potential Glimpse Into Future of Fundraising

Using the Whole Whale Podcast

Apr 11, 2025 · 20:25


In this week's episode of Nonprofit Newsfeed by Whole Whale, George and Nick dive into an engaging discussion packed with insights for nonprofit enthusiasts.  AI in Fundraising: The conversation shifts to an intriguing experiment reported by TechCrunch, where Sage Future, backed by Open Philanthropy, tasked AI models with fundraising for charity. These AI agents, with human oversight, raised $270 for Helen Keller International by engaging in tasks like setting up social media accounts and creating promotional content. While AI's role in automating communication is acknowledged, the hosts caution against over-reliance due to potential brand risks, especially for sensitive issues. Environmental Advocacy: The episode touches on the historical narrative of leaded gasoline and its eventual phase-out in 2021, highlighting the critical role of governmental oversight in protecting public health. The hosts use this story to emphasize the importance of maintaining robust environmental regulations. USDA Grant Freeze Impact: A pressing issue discussed is the USDA's grant freeze, which has left nonprofits like Pasa Sustainable Agriculture in financial turmoil. With $3 million in unpaid reimbursements, the organization had to furlough most of its staff, underscoring the dire consequences of such funding disruptions on local communities and farms. Community-Driven Violence Prevention: The Circle of Brotherhood's innovative efforts in Miami's Liberty City are celebrated for their community-based approach to violence prevention. By providing unarmed, de-escalative security services, the organization works alongside local youth centers to foster a safer environment, demonstrating the power of community engagement over traditional security methods.

Nonprofit News Feed Podcast
AI Fundraising Agent Experiments Hint At Potential Glimpse Into Future of Fundraising

Nonprofit News Feed Podcast

Apr 11, 2025 · 20:25


In this week's episode of Nonprofit Newsfeed by Whole Whale, George and Nick dive into an engaging discussion packed with insights for nonprofit enthusiasts.  AI in Fundraising: The conversation shifts to an intriguing experiment reported by TechCrunch, where Sage Future, backed by Open Philanthropy, tasked AI models with fundraising for charity. These AI agents, with human oversight, raised $270 for Helen Keller International by engaging in tasks like setting up social media accounts and creating promotional content. While AI's role in automating communication is acknowledged, the hosts caution against over-reliance due to potential brand risks, especially for sensitive issues. Environmental Advocacy: The episode touches on the historical narrative of leaded gasoline and its eventual phase-out in 2021, highlighting the critical role of governmental oversight in protecting public health. The hosts use this story to emphasize the importance of maintaining robust environmental regulations. USDA Grant Freeze Impact: A pressing issue discussed is the USDA's grant freeze, which has left nonprofits like Pasa Sustainable Agriculture in financial turmoil. With $3 million in unpaid reimbursements, the organization had to furlough most of its staff, underscoring the dire consequences of such funding disruptions on local communities and farms. Community-Driven Violence Prevention: The Circle of Brotherhood's innovative efforts in Miami's Liberty City are celebrated for their community-based approach to violence prevention. By providing unarmed, de-escalative security services, the organization works alongside local youth centers to foster a safer environment, demonstrating the power of community engagement over traditional security methods.

Effective Altruism Forum Podcast
“Projects I'd like to see in the GHW meta space” by Melanie Basnak

Effective Altruism Forum Podcast

Mar 20, 2025 · 14:32


In my past year as a grantmaker in the global health and wellbeing (GHW) meta space at Open Philanthropy, I've identified some exciting ideas that could fill existing gaps. While these initiatives have significant potential, they require more active development and support to move forward. The ideas I think could have the highest impact are: Government placements/secondments in key GHW areas (e.g. international development), and Expanded (ultra) high-net-worth ([U]HNW) advising Each of these ideas needs a very specific type of leadership and/or structure. More accessible options I'm excited about — particularly for students or recent graduates — could involve virtual GHW courses or action-focused student groups. I can't commit to supporting any particular project based on these ideas ahead of time, because the likelihood of success would heavily depend on details (including the people leading the project). Still, I thought it would be helpful to [...] ---Outline:(01:19) Introduction(02:30) Project ideas(02:33) Fellowships and Placements(02:37) Placement orgs for governments and think tanks(03:06) Fellowships/Placements at GHW Organizations(03:57) More, and different, effective giving organizations(04:03) More (U)HNW advising(05:14) Targeting different niche demographics(05:50) Filling more geographic gaps(06:08) Infrastructure support for GHW organizations(06:38) EA-inspired GHW courses(06:56) BlueDot Impact for GHW(07:40) Incorporating EA content into university courses(08:35) Useful GHW events(08:51) Events bringing together EA and mainstream GHD orgs(09:57) Career panels or similar(10:13) More, and different, student groups(10:18) Action-focused student groups(11:34) Policy-focused grad student groups(11:51) Less thought-through ideas(13:12) Perceived impact and fitThe original text contained 2 footnotes which were omitted from this narration. --- First published: March 18th, 2025 Source: https://forum.effectivealtruism.org/posts/pAE6zfAgceCop6vcE/projects-i-d-like-to-see-in-the-ghw-meta-space --- Narrated by TYPE III AUDIO.

Clearer Thinking with Spencer Greenberg
AI, US-China relations, and lessons from the OpenAI board (with Helen Toner)

Clearer Thinking with Spencer Greenberg

Feb 26, 2025 · 81:57


Read the full transcript here. Is it useful to vote against a majority when you might lose political or social capital for doing so? What are the various perspectives on the US / China AI race? How close is the competition? How has AI been used in Ukraine? Should we work towards a global ban of autonomous weapons? And if so, how should we define "autonomous"? Is there any potential for the US and China to cooperate on AI? To what extent do government officials — especially senior policymakers — worry about AI? Which particular worries are on their minds? To what extent is the average person on the street worried about AI? What's going on with the semiconductor industry in Taiwan? How hard is it to get an AI model to "reason"? How could animal training be improved? Do most horses fear humans? How do we project ourselves onto the space around us?Helen Toner is the Director of Strategy and Foundational Research Grants at Georgetown's Center for Security and Emerging Technology (CSET). She previously worked as a Senior Research Analyst at Open Philanthropy, where she advised policymakers and grantmakers on AI policy and strategy. Between working at Open Philanthropy and joining CSET, Helen lived in Beijing, studying the Chinese AI ecosystem as a Research Affiliate of Oxford University's Center for the Governance of AI. Helen holds an MA in Security Studies from Georgetown, as well as a BSc in Chemical Engineering and a Diploma in Languages from the University of Melbourne. Follow her on Twitter at @hlntnr. StaffSpencer Greenberg — Host / DirectorJosh Castle — ProducerRyan Kessler — Audio EngineerUri Bram — FactotumWeAmplify — TranscriptionistsMusicBroke for FreeJosh WoodwardLee RosevereQuiet Music for Tiny Robotswowamusiczapsplat.comAffiliatesClearer ThinkingGuidedTrackMind EasePositlyUpLift[Read more]

The Economics Show with Soumaya Keynes
Is innovation slowing down? With Matt Clancy

The Economics Show with Soumaya Keynes

Feb 10, 2025 · 34:54


Productivity growth in the developed world has been on a downward trend since the 1960s. Meanwhile, gains in life expectancy have also slowed. And yet the number of dollars and researchers dedicated to R&D grows every year. In today's episode, the FT's Chief Data Reporter, John Burn-Murdoch, asks whether western culture has lost its previous focus on human progress and become too risk-averse, or whether the problem is simply that the low-hanging fruit of scientific research has already been plucked. He does so in conversation with innovation economist Matt Clancy, who is the author of the New Things Under the Sun blog, and a research fellow at Open Philanthropy, a non-profit foundation based in San Francisco that provides research grants.John Burn-Murdoch writes a column each week for the Financial Times. You can find it hereSubscribe on Apple, Spotify, Pocket Casts or wherever you listen.Presented by John Burn-Murdoch. Produced by Edith Rousselot. The editor is Bryant Urstadt. Manuela Saragosa is the executive producer. Audio mix and original music by Breen Turner. The FT's head of audio is Cheryl Brumley.Read a transcript of this episode on FT.com Hosted on Acast. See acast.com/privacy for more information.

Effective Altruism Forum Podcast
“GiveWell raised less than its 10th percentile forecast in 2023” by Rasool

Effective Altruism Forum Podcast

Play Episode Listen Later Jan 28, 2025 1:53


In 2023[1] GiveWell raised $355 million: $100 million from Open Philanthropy, and $255 million from other donors. In their post on 10th April 2023, GiveWell forecast the amount they expected to raise in 2023, albeit with wide confidence intervals, and stated that their 10th percentile estimate for total funds raised was $416 million, and their 10th percentile estimate for funds raised outside of Open Philanthropy was $260 million. In total, the 10th percentile estimate was $416 million, the median estimate was $581 million, and the amount raised was $355 million; excluding Open Philanthropy, the 10th percentile estimate was $260 million, the median estimate was $330 million, and the amount raised was $255 million. Regarding Open Philanthropy, the April 2023 post states that it "tentatively plans to give $250 million in 2023"; however, Open Philanthropy gave a grant of $300 million to cover 2023-2025, to be split however GiveWell saw fit, and GiveWell used $100 million of that grant in 2023. For other donors, however, I'm not sure what caused the missed estimate. Credit to 'Arnold' on GiveWell's December 2024 Open Thread for [...] The original text contained 2 footnotes which were omitted from this narration. --- First published: January 19th, 2025 Source: https://forum.effectivealtruism.org/posts/RdbDH4T8bxWwZpc9h/givewell-raised-less-than-its-10th-percentile-forecast-in --- Narrated by TYPE III AUDIO.
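For readers skimming the numbers, here is a minimal sketch (not part of the original post) that restates the forecast-versus-actual gap using only the figures quoted above; amounts are in millions of USD.

```python
# Rough arithmetic behind the post's headline claim, using only the figures
# quoted above (amounts in millions of USD).
forecasts = {
    "Total": {"p10": 416, "median": 581, "raised": 355},
    "Excluding Open Philanthropy": {"p10": 260, "median": 330, "raised": 255},
}

for label, f in forecasts.items():
    shortfall = f["p10"] - f["raised"]          # gap against the 10th percentile
    pct_of_p10 = 100 * f["raised"] / f["p10"]   # raised as a share of that forecast
    print(f"{label}: raised ${f['raised']}M, which is ${shortfall}M below the "
          f"10th percentile forecast ({pct_of_p10:.0f}% of it)")
```

On these figures, the non-OP shortfall against the 10th percentile is small (about $5 million), while the total shortfall is mostly explained by Open Philanthropy contributing $100 million in 2023 against the tentatively planned $250 million.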

Effective Altruism Forum Podcast
“Funding Diversification for Mid-Large EA Organizations is Nearly Impossible in the Short-Medium Term” by MarcusAbramovitch

Effective Altruism Forum Podcast

Play Episode Listen Later Dec 28, 2024 19:00


Summary There's a near consensus that EA needs funding diversification but with Open Phil accounting for ~90% of EA funding, that's just not possible due to some pretty basic math. Organizations and the community would need to make large tradeoffs and this simply isn't possible/worth it at this time. Lots of people want funding diversification It has been two years since the FTX collapse and one thing everyone seems to agree on is that we need more funding diversification. These takes range from off-hand wishes “it sure would be great if funding in EA were more diversified”, to organizations trying to get a certain percentage of their budgets from non-OP sources/saying they want to diversify their funding base[1][2][3][4][5][6][7][8] to Open Philanthropy/Good Ventures themselves wanting to see more funding diversification[9]. Everyone seems to agree; other people should be giving more money to the EA projects. The Math  Of course, I [...] ---Outline:(00:34) Lots of people want funding diversification(01:11) The Math(03:47) Weighted Average(05:03) Making a lot of money to donate is difficult(09:18) Solutions(09:21) 1. Get more funders(10:35) 2. Spend Less(12:49) 3. Splitting up Open Philanthropy into Several Organizations(13:52) 4. More For-Profit EA Work/EA Organizations Charging for Their Work(16:23) 5. Acceptance(16:59) My Personal Solution(17:26) Conclusion(17:59) Further Readings--- First published: December 27th, 2024 Source: https://forum.effectivealtruism.org/posts/x8JrwokZTNzgCgYts/funding-diversification-for-mid-large-ea-organizations-is --- Narrated by TYPE III AUDIO.
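The post's "Math" section is truncated in this summary, so the sketch below is only an illustration of the kind of arithmetic it gestures at, not the author's own calculation; it takes the ~90% Open Phil share from the summary above as given, and the 30% and 50% diversification targets are hypothetical.

```python
# Illustrative sketch (not the author's calculation): the summary says Open
# Philanthropy (OP) supplies ~90% of EA funding. If organizations' budgets stay
# the same size but a target fraction must come from non-OP sources, non-OP
# giving overall has to grow by roughly target / current_non_op_share.

op_share = 0.90                       # figure taken from the post's summary
current_non_op_share = 1 - op_share   # ~10% of EA funding today

for target in (0.30, 0.50):           # hypothetical diversification targets
    required_multiple = target / current_non_op_share
    print(f"If every org wanted {target:.0%} of its budget from non-OP donors, "
          f"non-OP giving would need to grow roughly {required_multiple:.1f}x")
```

Under that framing, even modest community-wide diversification targets imply multiplying today's non-OP giving several times over, which is the post's core point about why short-to-medium-term diversification is so hard.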

Hear This Idea
#82 – Tom Kalil on Institutions for Innovation (with Matt Clancy)

Hear This Idea

Play Episode Listen Later Dec 14, 2024 77:37


Tom Kalil is the CEO of Renaissance Philanthropy. He also served in the White House under two presidents (Obama and Clinton), where he helped establish incentive prizes in government through challenge.gov, in addition to dozens of science and tech programs. More recently, Tom served as the Chief Innovation Officer at Schmidt Futures, where he helped launch Convergent Research. Matt Clancy is an economist and a research fellow at Open Philanthropy. He writes ‘New Things Under the Sun', which is a living literature review on academic research about science and innovation. We talked about: What is ‘influence without authority'? Should public funders sponsor more innovation prizes? Can policy entrepreneurship be taught formally? Why isn't ultra-wealthy philanthropy much more ambitious? What's the optimistic case for increasing US state capacity? What was it like being principal staffer to Gordon Moore? What is Renaissance Philanthropy? You can get in touch through our website or on Twitter. Consider leaving us an honest review wherever you're listening to this — it's the best way to support the show. Thanks for listening!

The Foresight Institute Podcast
Existential Hope Podcast: Adam Marblestone | Solving Science's Biggest Gaps

The Foresight Institute Podcast

Play Episode Listen Later Nov 29, 2024 51:24


Adam Marblestone is the CEO of Convergent Research. He is working with a large and growing network of collaborators and advisors to develop a strategic roadmap for future FROs. Outside of CR, he serves on the boards of several non-profits pursuing new methods of funding and organizing scientific research, including Norn Group and New Science, and as an interviewer for the Hertz Foundation. Previously, he was a Schmidt Futures Innovation Fellow, a Fellow with the Federation of American Scientists (FAS), a research scientist at Google DeepMind, Chief Strategy Officer of the brain-computer interface company Kernel, a research scientist at MIT, a PhD student in biophysics with George Church and colleagues at Harvard, and a theoretical physics student at Yale. He has also previously helped to start companies like BioBright, and advised foundations such as Open Philanthropy. Session Summary: In this episode of the Existential Hope Podcast, our guest is Adam Marblestone, CEO of Convergent Research. Adam shares his journey from working on nanotechnology and neuroscience to pioneering a bold new model for scientific work and funding: Focused Research Organizations (FROs). These nonprofit, deep-tech startups are designed to fill critical gaps in science by building the infrastructure needed to accelerate discovery. Tune in to hear how FROs are unlocking innovation, tackling bottlenecks across fields, and inspiring a new approach to advancing humanity's understanding of the world. Full transcript, list of resources, and art piece: https://www.existentialhope.com/podcasts. Existential Hope was created to collect positive and possible scenarios for the future so that we can have more people commit to creating a brighter future, and to begin mapping out the main developments and challenges that need to be navigated to reach it. Existential Hope is a Foresight Institute project. Hosted by Allison Duettmann and Beatrice Erkers. Follow Us: Twitter | Facebook | LinkedIn | Existential Hope Instagram. Explore every word spoken on this podcast through Fathom.fm. Hosted on Acast. See acast.com/privacy for more information.

Effective Altruism Forum Podcast
“We can protect millions of kids from a global killer — without billions of dollars (Washington Post)” by Aaron Gertler

Effective Altruism Forum Podcast

Play Episode Listen Later Sep 24, 2024 2:25


This is a link post. This WaPo piece announces the Partnership for a Lead-Free Future (PLF), a collaboration led by Open Philanthropy, USAID, and UNICEF. It was co-authored by Alexander Berger (Open Phil's CEO) and Samantha Power, head of USAID. Ten years ago, when residents of Flint, Mich., were exposed to toxic levels of lead in their drinking water, 1 in 20 children in the city had elevated blood lead levels that placed them at risk for heart disease, strokes, cognitive deficits and developmental delays — health effects that residents still grapple with to this day. It was only after activists rallied, organized and advocated relentlessly that national attention focused on Flint, and officials committed nearly half a billion dollars to clean up Flint's water. Today, there is a lead poisoning crisis raging on a far greater scale — and hardly anyone is talking about it. [...] The partnership will [...] --- First published: September 23rd, 2024 Source: https://forum.effectivealtruism.org/posts/soeJ4XNnLoyWpiFsK/we-can-protect-millions-of-kids-from-a-global-killer-without --- Narrated by TYPE III AUDIO.

Effective Altruism Forum Podcast
“Announcing the Lead Exposure Action Fund” by Alexander_Berger, Emily Oehlsen

Effective Altruism Forum Podcast

Play Episode Listen Later Sep 24, 2024 9:34


This is a link post. One of Open Philanthropy's goals for this year is to experiment with collaborating with other funders. Today, we're excited to announce our biggest collaboration to date: the Lead Exposure Action Fund (LEAF). Lead exposure in low- and middle-income countries is a devastating but highly neglected issue. The Global Burden of Disease study estimates 1.5 million deaths per year attributable to lead poisoning. Despite this burden, lead poisoning has only received roughly $15 million per year in philanthropic funding until recently. That is less than 1% of the funding that goes towards diseases like tuberculosis or malaria, which are themselves considered neglected. The goal of LEAF is to accelerate progress toward a world free of lead exposure by making grants to support measurement, mitigation, and mainstreaming awareness of the problem. Our partners have already committed $104 million, and we plan for LEAF to allocate that [...] ---Outline:(01:54) Why we chose to work on lead(04:54) What LEAF hopes to achieve(05:30) The LEAF team(06:01) An experiment for Open Philanthropy(06:49) Grantmaking so farThe original text contained 3 footnotes which were omitted from this narration. --- First published: September 23rd, 2024 Source: https://forum.effectivealtruism.org/posts/z5PvTSa54pdxxw72W/announcing-the-lead-exposure-action-fund --- Narrated by TYPE III AUDIO.

The Road to Accountable AI
Helen Toner: AI Safety in a World of Uncertainty

The Road to Accountable AI

Play Episode Listen Later Sep 19, 2024 41:15 Transcription Available


Join Professor Kevin Werbach in his discussion with Helen Toner, Director of Strategy and Foundational Research Grants at Georgetown's Center for Security and Emerging Technology. In this episode, Werbach and Toner discuss how the public views AI safety and ethics, and the positive and negative outcomes of advances in AI. They discuss Toner's lessons from the unsuccessful removal of Sam Altman as the CEO of OpenAI, oversight structures to audit and approve the AI systems companies deploy, and the role of the government in AI accountability. Finally, Toner explains how businesses can take charge of their responsible AI deployment. Helen Toner is the Director of Strategy and Foundational Research Grants at Georgetown's Center for Security and Emerging Technology (CSET). She previously worked as a Senior Research Analyst at Open Philanthropy, where she advised policymakers and grantmakers on AI policy and strategy. Between working at Open Philanthropy and joining CSET, Helen lived in Beijing, studying the Chinese AI ecosystem as a Research Affiliate of Oxford University's Center for the Governance of AI. From 2021-2023, she served on the board of OpenAI, the creator of ChatGPT. Helen Toner's TED Talk: How to Govern AI, Even if it's Hard to Predict. Helen Toner on the OpenAI Coup: "It was about trust and accountability" (Financial Times). Want to learn more? Engage live with Professor Werbach and other Wharton faculty experts in Wharton's new Strategies for Accountable AI online executive education program. It's perfect for managers, entrepreneurs, and advisors looking to harness AI's power while addressing its risks.

The Nonlinear Library
EA - EA Organization Updates: September 2024 by Toby Tremlett

The Nonlinear Library

Play Episode Listen Later Sep 19, 2024 9:56


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Organization Updates: September 2024, published by Toby Tremlett on September 19, 2024 on The Effective Altruism Forum. If you would like to see EA Organization Updates as soon as they come out, consider subscribing to this tag. Some of the opportunities and job listings we feature in this update have (very) pressing deadlines (see AI Alignment Teaching Fellow opportunities at BlueDot Impact, September 22, and Institutional Foodservice Fellow at the Good Food Institute, September 18). You can see previous updates on the "EA Organization Updates (monthly series)" topic page, or in our repository of past newsletters. Notice that there's also an "org update" tag, where you can find more news and updates that are not part of this consolidated series. These monthly posts originated as the "Updates" section of the monthly EA Newsletter. Organizations submit their own updates, which we edit for clarity. (If you'd like to share your updates and jobs via this series, please apply here.) Opportunities and jobs Opportunities Consider also checking opportunities listed on the EA Opportunity Board and the Opportunities to Take Action tag. ALLFED published a new database containing numerous research projects that prospective volunteers can assist with. Explore the database and apply here. Apply to the upcoming AI Safety Fundamentals: Alignment course by October 6 to learn about the risks from AI and how you can contribute to the field. The Animal Advocacy Careers Introduction to Animal Advocacy Course has been revamped. The course is for those wishing to kickstart a career in animal advocacy. Giv Effektivt (DK) needs ~110 EU citizens to become members before the new year in order to offer tax deductions of around 450.000DKK ($66.000) for 2024-25 donations. Become a member now for 50DKK ($7). An existing donor will give 100DKK for each new member until the organization reaches 300 members. Anima International's Animal Advocacy Training Center released a new online course - Fundraising Essentials. It's a free, self-paced resource with over two hours of video content for people new to the subject. Job listings Consider also exploring jobs listed on the Job listing (open) tag. For even more roles, check the 80,000 Hours Job Board. BlueDot Impact AI Alignment Teaching Fellow (Remote, £4.9K-£9.6K, apply by September 22nd) Centre for Effective Altruism Head of Operations (Remote, £107.4K / $179.9K, apply by October 7th) Cooperative AI Foundation Communications Officer (Remote, £35K-£40K, apply by September 29th) GiveWell Senior Researcher (Remote, $200K-$220.6K) Giving What We Can Global CEO (Remote, $130K+, apply by September 30th) Open Philanthropy Operations Coordinator/Associate (San Francisco, Washington, DC, $99.6K-$122.6K) If you're interested in working at Open Philanthropy but don't see an open role that matches your skillset, express your interest. 
Epoch AI Question Writer, Math Benchmark (Contractor Position) (Remote, $2K monthly + $100-$1K performance-based bonus) Senior Researcher, ML Distributed Systems (Remote, $150K-$180K) The Good Food Institute Managing Director, GFI India (Hybrid (Mumbai, Delhi, Hyderabad, or Bangalore), ₹4.5M, apply by October 2nd) Institutional Foodservice Fellow (Independent Contractor) (Remote in US, $3.6K biweekly, apply by September 18th) Organization updates The organization updates are in alphabetical order (0-A-Z). 80,000 Hours There is one month left to win $5,000 career grants by referring your friends or colleagues to 80,000 Hours' free career advising. Also, the organization released a blog post about the recent updates to their AI-related content, as well as a post about pandemic preparedness in relation to mpox and H5N1. On the 80,000 Hours Podcast, Rob interviewed: Nick Joseph on whether Anthropic's AI safety policy is up to the task...

Hear This Idea
#78 – Jacob Trefethen on Global Health R&D

Hear This Idea

Play Episode Listen Later Sep 8, 2024 150:16


Jacob Trefethen oversees Open Philanthropy's science and science policy programs. He was a Henry Fellow at Harvard University, and has a B.A. from the University of Cambridge. You can find links and a transcript at www.hearthisidea.com/episodes/trefethen. In this episode we talk about: Life-saving health technologies which probably won't exist in 5 years (without a concerted effort) — like a widely available TB vaccine, and bugs which stop malaria spreading. How R&D for neglected diseases works — how much does the world spend on it? How do drugs for neglected diseases go from design to distribution? No-brainer policy ideas for speeding up global health R&D. Comparing health R&D to public health interventions (like bed nets). Comparing the social returns to frontier R&D (‘Progress Studies') to global health R&D. Why is there no GiveWell-equivalent for global health R&D? Won't AI do all the R&D for us soon? You can get in touch through our website or on Twitter. Consider leaving us an honest review wherever you're listening to this — it's the best free way to support the show. Thanks for listening!

The Nonlinear Library
EA - Fungal diseases: Health burden, neglectedness, and potential interventions by Rethink Priorities

The Nonlinear Library

Play Episode Listen Later Sep 4, 2024 7:21


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Fungal diseases: Health burden, neglectedness, and potential interventions, published by Rethink Priorities on September 4, 2024 on The Effective Altruism Forum. Editorial note This report is a "shallow" investigation, as described here, and was commissioned by Open Philanthropy and produced by Rethink Priorities from January to February 2023. We revised the report for publication. Open Philanthropy does not necessarily endorse our conclusions, nor do the organizations represented by those who were interviewed. Our report focuses on exploring fungal diseases as a potential new cause area for Open Philanthropy. We assessed the current and future health burden of fungal diseases, provided an overview of current interventions and the main gaps and barriers to address the burden, and discussed some plausible options for philanthropic spending. We reviewed the scientific and gray literature and spoke with five experts. While revising the report for publication, we learned of a new global burden study ( Denning et al., 2024) whose results show an annual incidence of 6.5 million invasive fungal infections, and 3.8 million total deaths from fungal diseases (2.5 million of which are "directly attributable" to fungal diseases). The study's results align with this report's estimate of annual 1.5 million to 4.6 million deaths (80% confidence) but were not considered in this report. We don't intend this report to be Rethink Priorities' final word on fungal diseases. We have tried to flag major sources of uncertainty in the report and are open to revising our views based on new information or further research. Executive summary While fungal diseases are very common and mostly mild, some forms are life-threatening and predominantly affect low- and middle-income countries (LMICs). The evidence base on the global fungal disease burden is poor, and estimates are mostly based on extrapolations from the few available studies. Yet, all experts we talked to agree that current burden estimates (usually stated as >1.7M deaths/year) likely underestimate the true burden. Overall, we think the annual death burden could be 1.5M - 4.6M (80% CI), which would exceed malaria and HIV/AIDS deaths combined.[1] Moreover, our best guess is that fungal diseases cause 8M - 49M DALYs (80% CI) per year, but this is based on our own back-of-the-envelope calculation of high-uncertainty inputs. Every expert we spoke with expects the burden to increase substantially in the future, though no formal estimates exist. We project that deaths and DALYs could grow to approximately 2-3 times the current burden until 2040, though this is highly uncertain. This will likely be partly due to a rise in antifungal resistance, which is especially problematic as few treatment classes exist and many fungal diseases are highly lethal without treatment. We estimate that only two diseases (chronic pulmonary aspergillosis [CPA] and candidemia/invasive candidiasis [IC/C]) account for ~39%-45% of the total death and DALY burden. Moreover, a single fungal pathogen (Aspergillus fumigatus) accounts for ~50% of the burden. Thus, much of the burden can be reduced by focusing on only a few of the fungal diseases or on a few pathogens. Available estimates suggest the top fungal diseases have highest burdens in Asia and LMICs, and that they most affect immunocompromised individuals. 
Fungal diseases seem very neglected in all areas we considered (research/R&D, advocacy/lobbying, philanthropic spending, and policy interventions) and receive little attention even in comparison to other diseases which predominantly affect LMICs. For example, we estimate the research funding/death ratio for malaria to be roughly 20 times higher than for fungal diseases. Moreover, fewer than 10 countries have national surveillance systems for fungal infections, an...

The Nonlinear Library
EA - Meta Charity Funders: Launching the 3rd round by Joey

The Nonlinear Library

Play Episode Listen Later Aug 31, 2024 2:50


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Meta Charity Funders: Launching the 3rd round, published by Joey on August 31, 2024 on The Effective Altruism Forum. Meta Charity Funders is a funding circle consisting of ~10 funders that funds meta initiatives and serves as an alternative to Open Philanthropy and EA Funds. We're launching the third round of Meta Charity Funders. Apply for funding by September 30th or join the circle as a donor. We expect all applicants to have read this post's twin post, "Meta Charity Funders: What you need to know when applying", to understand how to write your application. Focus of this round We expect to fund many initiatives not on this list, but some projects that members of our circle have expressed extra interest in funding this round are: Ultra-high-net-worth-individual advising. However, we want to stress that we believe the skillset to do this well is rare, and these types of applications will be scrutinized extra closely. Effective Giving/Giving multiplier organizations. For example, the ones incubated by CE's Effective Giving Incubation program. Career development programs that increase the number of individuals working in high-impact areas - including GCR reduction, animal welfare and global health - especially in regions where there are currently fewer opportunities to engage in such programs. Information for this round Process The expected process is as follows: Applications open: August 30th. 100 words in the summary; this should give us a quick overview of the project. In the full project description, please include a main summarizing document no longer than 2 pages. This is all we can commit to reading in the first stage. Any extra material will only be read if we choose to proceed with your application. When choosing the "Meta" category, please be as truthful as possible. It's obvious (and reflects negatively on the application) when a project has deliberately been placed in a category in which it does not belong. Applications close: September 29th. Initial application review finishes: October 6th. If your project has been filtered out during the initial application review (which we expect 60-80% of applications will be), we will let you know around the end of October. Interviews, due diligence, deliberations: October 7th - November 24th. If your application has passed the initial application review, we will discuss it during our gatherings, and we might reach out to you to gather more information, for example, by conducting an interview. N.B. This is not a commitment to fund you. Decisions made: November 25th. We expect to pay out the grants in the weeks following November 25th. Historical support: You can get a sense of what we have supported in historical rounds by reading our posts on our first and second rounds. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

Rocking Our Priors
How did East Asia Get Rich?

Rocking Our Priors

Play Episode Listen Later Aug 30, 2024 77:03


Dr Oliver Kim completed his PhD at Berkeley and was recently appointed at Open Philanthropy. He does awesome research, carefully examining the drivers of structural transformation. We discussed: Why do you think East Asia is the only world region to have converged with the West? How have big data and computational tools changed our understanding of structural transformation? Oliver's website: https://oliverwkim.com/ His Substack: https://www.global-developments.org/

The Nonlinear Library
EA - How Platinum Helps Draw Attention to Japan's Role in Global Health Funding by Open Philanthropy

The Nonlinear Library

Play Episode Listen Later Aug 27, 2024 5:30


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How Platinum Helps Draw Attention to Japan's Role in Global Health Funding, published by Open Philanthropy on August 27, 2024 on The Effective Altruism Forum. Japan spent more than $19.6 billion on overseas development assistance (ODA) in 2023, making it the third largest single-country donor behind the US and Germany. Open Philanthropy's Global Aid Policy (GAP) team, which is devoted to increasing government aid and guiding it toward more cost-effective approaches, believes there may be opportunities to increase the impact of this aid through targeted advocacy efforts. They estimate that in Western nations like the UK, for every $1,000 spent on ODA, aid advocacy funders spend around $2.60 attempting to support and inform its allocation. Meanwhile, in Japan, advocacy spending is a mere $0.25 for the same amount - more than 10 times less. Accordingly, the GAP program has prioritized work in Japan. The following case study highlights one grantee helping to drive this work forward. ***** One day in March 2023, in the district of Wayanad near India's southern tip, hundreds of villagers lined up for an uncommon service from an unexpected source: a check-up on their lung health, courtesy of Fujifilm. The Japanese company, best known for its cameras, was taking a different kind of picture. Its portable, 3.5 kg battery-powered X-ray machine, designed to deliver hospital-grade diagnostics, enables tuberculosis screenings in regions where medical facilities usually lack the necessary technology. This scene was just one stop on an illuminating trip to India for a group of Japanese journalists and youth activists. From Toyota Tsusho's Sakra World Hospital to Eisai's efforts to combat neglected tropical diseases (NTDs) in Yarada village, each site visit highlighted Japanese businesses and researchers contributing to global health initiatives. Recognizing this opportunity, Open Philanthropy supported Platinum, a Tokyo-based PR firm, in organizing a trip across India aimed at boosting the Japanese public's awareness of urgent global health issues, particularly tuberculosis and neglected tropical diseases (NTDs). Sixteen people attended: six journalists, representing outlets ranging from a long-running daily newspaper to a popular economics broadcast, and 10 youth activists sourced from PoliPoli's Reach Out Project, an Open Philanthropy-funded initiative that incubates charities focused on global health advocacy. Our Senior Program Officer for Global Aid Policy, Norma Altshuler, thought the initiative was timely given recent trends in Japan's ODA spending. Between 2019 and 2022, the share of Japanese ODA allocated to global health doubled (or tripled, including COVID-19 relief). To sustain this momentum, Open Philanthropy is supporting Japanese groups that aim to preserve or grow Japan's commitment to prioritizing global health initiatives. In a post-trip interview with Open Philanthropy, Soichi Murayama, who helped organize the trip, says one challenge of Japan's media landscape "is that Japanese media doesn't cover global health very often." Murayama attributes the dearth of dedicated coverage to limited reader interest, creating a feedback loop where minimal reporting leads to low awareness, which in turn reduces appetite for such stories. 
Ryota Todoroki, a medical student who participated in the trip, echoes this sentiment: "NTDs are often seen as a foreign issue with no relevance to Japan, so changing this perception is a major challenge." The Fujifilm initiative in Wayanad provides an example of how connecting Japanese companies to global health efforts can help illustrate the impact of foreign aid. This approach not only highlights Japan's technological contributions but also links economic interests with humanitarian efforts. To gauge the impact of awareness campaigns, PR pr...

The Nonlinear Library
EA - Open Philanthropy is hiring people to… help hire more people! by maura

The Nonlinear Library

Play Episode Listen Later Aug 17, 2024 2:01


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Open Philanthropy is hiring people to… help hire more people!, published by maura on August 17, 2024 on The Effective Altruism Forum. Open Philanthropy needs more recruiters. We'd love to see you apply, even if you've never worked in hiring before. "Recruiting" is sometimes used to narrowly refer to headhunting or outreach. At Open Phil, though, "recruiting" includes everything related to the hiring process. Our recruiting team manages the systems and people that take us from applications to offers. We design evaluations, interview candidates, manage stakeholders, etc.[1] We're looking for An operations mindset. Recruiting is project management, first and foremost; we want people who can reliably juggle lots of balls without dropping them. Interpersonal skills. We want clear communicators with good people judgment. Interest in Open Phil's mission. This is an intentionally broad definition-see below! What you don't need Prior recruiting experience. We'll teach you! To be well-networked or highly immersed in EA. You should be familiar with the areas Open Phil works in (such as global health and wellbeing and global catastrophic risks), but if you're wondering "Am I EA enough for this?", you almost certainly are. The job application will be posted to OP's website in coming weeks, but isn't there yet as of this post; we're starting with targeted outreach to high-context audiences (you!) before expanding our search to broader channels. If this role isn't for you but might be for someone in your network, please send them our way-we offer a reward if you counterfactually refer someone we end up hiring. 1. ^ The OP recruiting team also does headhunting and outreach, though, and we're open to hiring more folks to help with that work, too! If that sounds exciting to you, please apply to the current recruiter posting and mention an interest in outreach work in the "anything else" field. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

The Nonlinear Library
EA - Funding for work that builds capacity to address risks from transformative AI by GCR Capacity Building team (Open Phil)

The Nonlinear Library

Play Episode Listen Later Aug 14, 2024 9:38


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Funding for work that builds capacity to address risks from transformative AI, published by GCR Capacity Building team (Open Phil) on August 14, 2024 on The Effective Altruism Forum. Post authors: Eli Rose, Asya Bergal Posting in our capacities as members of Open Philanthropy's Global Catastrophic Risks Capacity Building team. This program, together with our separate program for programs and events on global catastrophic risk, effective altruism, and other topics, has replaced our 2021 request for proposals for outreach projects. If you have a project which was in-scope for that program but isn't for either of these, you can apply to our team's general application instead. We think it's possible that the coming decades will see "transformative" progress in artificial intelligence, i.e., progress that leads to changes in human civilization at least as large as those brought on by the Agricultural and Industrial Revolutions. It is currently unclear to us whether these changes will go well or poorly, and we think that people today can do meaningful work to increase the likelihood of positive outcomes. To that end, we're interested in funding projects that: Help new talent get into work focused on addressing risks from transformative AI. Including people from academic or professional fields outside computer science or machine learning. Support existing talent in this field (e.g. via events that help build professional networks). Contribute to the discourse about transformative AI and its possible effects, positive and negative. We refer to this category of work as "capacity-building", in the sense of "building society's capacity" to navigate these risks. Types of work we've historically funded include training and mentorship programs, events, groups, and resources (e.g., blog posts, magazines, podcasts, videos), but we are interested in receiving applications for any capacity-building projects aimed at risks from advanced AI. This includes applications from both organizations and individuals, and includes both full-time and part-time projects. Apply for funding here. Applications are open until further notice and will be assessed on a rolling basis. We're interested in funding work to build capacity in a number of different fields which we think may be important for navigating transformative AI, including (but very much not limited to) technical alignment research, model evaluation and forecasting, AI governance and policy, information security, and research on the economics of AI. This program is not primarily intended to fund direct work in these fields, though we expect many grants to have both direct and capacity-building components - see below for more discussion. Categories of work we're interested in Training and mentorship programs These are programs that teach relevant skills or understanding, offer mentorship or professional connections for those new to a field, or provide career opportunities. These could take the form of fellowships, internships, residencies, visitor programs, online or in-person courses, bootcamps, etc. Some examples of training and mentorship programs we've funded in the past: BlueDot's online courses on technical AI safety and AI governance. MATS's in-person research and educational seminar programs in Berkeley, California. ML4Good's in-person AI safety bootcamps in Europe. 
We've previously funded a number of such programs in technical alignment research, and we're excited to fund new programs in this area. But we think other relevant areas may be relatively neglected - for instance, programs focusing on compute governance or on information security for frontier AI models. For illustration, here are some (hypothetical) examples of programs we could be interested in funding: A summer research fellowship for individuals with technical backgr...

The Nonlinear Library
EA - Funding for programs and events on global catastrophic risk, effective altruism, and other topics by GCR Capacity Building team (Open Phil)

The Nonlinear Library

Play Episode Listen Later Aug 13, 2024 3:41


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Funding for programs and events on global catastrophic risk, effective altruism, and other topics, published by GCR Capacity Building team (Open Phil) on August 13, 2024 on The Effective Altruism Forum. Post authors: Eli Rose, Asya Bergal Posting in our capacities as members of Open Philanthropy's Global Catastrophic Risks Capacity Building team. Note: This program, together with our separate program for work that builds capacity to address risks from transformative AI, has replaced our 2021 request for proposals for outreach projects. If you have a project which was in-scope for that program but isn't for either of these, you can apply to our team's general application instead. Apply for funding here. Applications are open until further notice and will be assessed on a rolling basis. This is a wide-ranging call for applications, seeking to fund programs and events in a variety of areas of interest to Open Philanthropy - including effective altruism, global catastrophic risks, biosecurity, AI for epistemics, forecasting, and other areas. In general, if the topic of your program or event falls within one of our GCR focus areas, or if it's similar to work we've funded in the past in our GCR focus areas, it may be a good fit for this program. If you're unsure about whether to submit your application, we'd encourage you to err on the side of doing so. By "programs and events" we mean scholarship or fellowship programs, internships, residencies, visitor programs, courses[1], seminars, conferences, workshops, retreats, etc., including both in-person and online activities. We're open to funding programs or events aimed at individuals at any career stage, and with a wide range of potential purposes, including teaching new skills, providing new career opportunities, offering mentorship, or facilitating networking. Examples of programs and events of this type we've funded before include: Condor Camp, a summer program for Brazilian students interested in existential risk work. The Future of Humanity Institute's Research Scholars Program supporting early-career researchers in global catastrophic risk. Effective Altruism Global, a series of conferences for individuals interested in effective altruism. Future Forum, a conference aimed at bringing together members of several communities interested in emerging technology and the future. A workshop on using AI to improve epistemics, organized by academics from NYU, the Forecasting Research Institute, the AI Objectives Institute and Metaculus. AI-focused work We have a separate call up for work that builds societal capacity to address risks from transformative AI. If your program or event is focused on transformative AI and/or risks from transformative AI, we prefer you apply to that call instead. However, which call you apply to is unlikely to make a difference to the outcome of your application. Application information Apply for funding here. The application form asks for information about you, your project/organization (if relevant), and the activities you're requesting funding for. We're interested in funding both individual/one-off programs and events, and organizations that run or support programs and events. We expect to make most funding decisions within 8 weeks of receiving an application (assuming prompt responses to any follow-up questions we may have). 
You can indicate on our form if you'd like a more timely decision, though we may or may not be able to accommodate your request. Applications are open until further notice and will be assessed on a rolling basis. 1. ^ To apply for funding for the development of new university courses, please see our separate Course Development Grants RFP. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

The Nonlinear Library
LW - Provably Safe AI: Worldview and Projects by bgold

The Nonlinear Library

Play Episode Listen Later Aug 10, 2024 13:12


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Provably Safe AI: Worldview and Projects, published by bgold on August 10, 2024 on LessWrong. In September 2023, Max Tegmark and Steve Omohundro proposed "Provably Safe AI" as a strategy for AI Safety. In May 2024, a larger group delineated the broader concept of "Guaranteed Safe AI" which includes Provably Safe AI and other related strategies. In July, 2024, Ben Goldhaber and Steve discussed Provably Safe AI and its future possibilities, as summarized in this document. Background In June 2024, ex-OpenAI AI Safety Researcher Leopold Aschenbrenner wrote a 165-page document entitled "Situational Awareness, The Decade Ahead" summarizing AI timeline evidence and beliefs which are shared by many frontier AI researchers. He argued that human-level AI is likely by 2027 and will likely lead to superhuman AI in 2028 or 2029. "Transformative AI" was coined by Open Philanthropy to describe AI which can "precipitate a transition comparable to the agricultural or industrial revolution". There appears to be a significant probability that Transformative AI may be created by 2030. If this probability is, say, greater than 10%, then humanity must immediately begin to prepare for it. The social changes and upheaval caused by Transformative AI are likely to be enormous. There will likely be many benefits but also many risks and dangers, perhaps even existential risks for humanity. Today's technological infrastructure is riddled with flaws and security holes. Power grids, cell service, and internet services have all been very vulnerable to accidents and attacks. Terrorists have attacked critical infrastructure as a political statement. Today's cybersecurity and physical security barely keeps human attackers at bay. When these groups obtain access to powerful cyberattack AI's, they will likely be able to cause enormous social damage and upheaval. Humanity has known how to write provably correct and secure software since Alan Turing's 1949 paper. Unfortunately, proving program correctness requires mathematical sophistication and it is rare in current software development practice. Fortunately, modern deep learning systems are becoming proficient at proving mathematical theorems and generating provably correct code. When combined with techniques like "autoformalization," this should enable powerful AI to rapidly replace today's flawed and insecure codebase with optimized, secure, and provably correct replacements. Many researchers working in these areas believe that AI theorem-proving at the level of human PhD's is likely about two years away. Similar issues plague hardware correctness and security, and it will be a much larger project to replace today's flawed and insecure hardware. Max and Steve propose formal methods grounded in mathematical physics to produce provably safe physical designs. The same AI techniques which are revolutionizing theorem proving and provable software synthesis are also applicable to provable hardware design. Finally, today's social mechanisms like money, contracts, voting, and the structures of governance, will also need to be updated for the new realities of an AI-driven society. Here too, the underlying rules of social interaction can be formalized, provably effective social protocols can be designed, and secure hardware implementing the new rules synthesized using powerful theorem proving AIs. What's next? 
Given the huge potential risk of uncontrolled powerful AI, many have argued for a pause in Frontier AI development. Unfortunately, that does not appear to be a stable solution. Even if the US paused its AI development, China or other countries could gain an advantage by accelerating their own work. There have been similar calls to limit the power of open source AI models. But, again, any group anywhere in the world can release their powerful AI model weig...

The Nonlinear Library
EA - Wild Animal Initiative has urgent need for more funding and more donors by Cameron Meyer Shorb

The Nonlinear Library

Play Episode Listen Later Aug 6, 2024 21:29


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Wild Animal Initiative has urgent need for more funding and more donors, published by Cameron Meyer Shorb on August 6, 2024 on The Effective Altruism Forum. Our room for more funding is bigger and more urgent than ever before. Our organizational strategy will be responsive both to the total amount raised and to how many people donate, so smaller donors will have an especially high impact this year. Good Ventures recently decided to phase out funding for several areas (GV blog, EA Forum post), including wild animal welfare. That's a pretty big shock to our movement. We don't know what exactly the impact will be, except that it's complicated. The purpose of this post is to share what we know and how we're thinking about things - primarily to encourage people to donate to Wild Animal Initiative this year, but also for anyone else who might be interested in the state of the wild animal welfare movement more broadly. Summary Track record Our primary goal is to support the growth of a self-sustaining interdisciplinary research community focused on reducing wild animal suffering. Wild animal welfare science is still a small field, but we're really happy with the momentum it's been building. Some highlights of the highlights: We generally get a positive response from researchers (particularly in animal behavior science and ecology), who tend to see wild animal welfare as a natural extension of their interest in conservation (unlike EAs, who tend to see those two as conflicting with each other). Wild animal welfare is increasingly becoming a topic of discussion at scientific conferences, and was recently the subject of the keynote presentation at one. Registration for our first online course filled to capacity (50 people) within a few hours, and just as many people joined the waitlist over the next few days. Room for more funding This is the first year in which our primary question is not how much more we can do, but whether we can avoid major budget cuts over the next few years. We raised less in 2023 than we did in 2022, so we need to make up for that gap. We're also going to lose our biggest donor because Good Ventures is requiring Open Philanthropy to phase out their funding for wild animal welfare. Open Phil was responsible for about half of our overall budget. The funding from their last grant to us will last halfway through 2026, but we need to decide soon how we're going to adapt. To avoid putting ourselves back in the position of relying on a single funder, our upcoming budgeting decisions will depend on not only how much money we raise, but also how diversified our funding is. That means gifts from smaller donors will have an unusually large impact. (The less you normally donate, the more disproportionate your impact will be, but the case still applies to basically everyone who isn't a multi-million-dollar foundation.) Specifically, our goal is to raise $240,000 by the end of the year from donors giving $10k or less. Impact of marginal donations We're evaluating whether we need to reduce our budget to a level we can sustain without Open Philanthropy. The more we raise this year - and the more donors who pitch in to make that happen - the less we'll need to cut. Research grants and staff-associated costs make up the vast majority of our budget, so we'd need to make cuts in one or both of those areas. 
Donations would help us avoid layoffs and keep funding external researchers. What we've accomplished so far Background If you're not familiar with Wild Animal Initiative, we're working to accelerate the growth of wild animal welfare science. We do that through three interconnected programs: We make grants to scientists who take on relevant projects, we conduct our own research on high-priority questions, and we do outreach through conferences and virtual events. Strategy...

The Nonlinear Library
EA - Announcing Open Philanthropy's AI governance and policy RFP by JulianHazell

The Nonlinear Library

Play Episode Listen Later Jul 17, 2024 22:45


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing Open Philanthropy's AI governance and policy RFP, published by JulianHazell on July 17, 2024 on The Effective Altruism Forum. AI has enormous beneficial potential if it is governed well. However, in line with a growing contingent of AI (and other) experts from academia, industry, government, and civil society, we also think that AI systems could soon (e.g. in the next 15 years) cause catastrophic harm. For example, this could happen if malicious human actors deliberately misuse advanced AI systems, or if we lose control of future powerful systems designed to take autonomous actions.[1] To improve the odds that humanity successfully navigates these risks, we are soliciting short expressions of interest (EOIs) for funding for work across six subject areas, described below. Strong applications might be funded by Good Ventures (Open Philanthropy's partner organization), or by any of >20 (and growing) other philanthropists who have told us they are concerned about these risks and are interested to hear about grant opportunities we recommend.[2] (You can indicate in your application whether we have permission to share your materials with other potential funders.) As this is a new initiative, we are uncertain about the volume of interest we will receive. Our goal is to keep this form open indefinitely; however, we may need to temporarily pause accepting EOIs if we lack the staff capacity to properly evaluate them. We will post any updates or changes to the application process on this page. Anyone is eligible to apply, including those working in academia, nonprofits, industry, or independently.[3] We will evaluate EOIs on a rolling basis. See below for more details. If you have any questions, please email us. If you have any feedback about this page or program, please let us know (anonymously, if you want) via this short feedback form. 1. Eligible proposal subject areas We are primarily seeking EOIs in the following subject areas, but will consider exceptional proposals outside of these areas, as long as they are relevant to mitigating catastrophic risks from AI: Technical AI governance: Developing and vetting technical mechanisms that improve the efficacy or feasibility of AI governance interventions, or answering technical questions that can inform governance decisions. Examples include compute governance, model evaluations, technical safety and security standards for AI developers, cybersecurity for model weights, and privacy-preserving transparency mechanisms. Policy development: Developing and vetting government policy proposals in enough detail that they can be debated and implemented by policymakers. Examples of policies that seem like they might be valuable (but which typically need more development and debate) include some of those mentioned e.g. here, here, and here. Frontier company policy: Developing and vetting policies and practices that frontier AI companies could volunteer or be required to implement to reduce risks, such as model evaluations, model scaling "red lines" and "if-then commitments," incident reporting protocols, and third-party audits. See e.g. here, here, and here. International AI governance: Developing and vetting paths to effective, broad, and multilateral AI governance, and working to improve coordination and cooperation among key state actors. See e.g. here. 
Law: Developing and vetting legal frameworks for AI governance, exploring relevant legal issues such as liability and antitrust, identifying concrete legal tools for implementing high-level AI governance solutions, encouraging sound legal drafting of impactful AI policies, and understanding the legal aspects of various AI policy proposals. See e.g. here. Strategic analysis and threat modeling: Improving society's understanding of the strategic landscape around transformative ...

The Nonlinear Library
EA - Thoughts on this $16.7M "AI safety" grant? by defun

The Nonlinear Library

Play Episode Listen Later Jul 16, 2024 1:31


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Thoughts on this $16.7M "AI safety" grant?, published by defun on July 16, 2024 on The Effective Altruism Forum. Open Philanthropy has recommended a total of $16.7M to the Massachusetts Institute of Technology to support research led by Neil Thompson on modeling the trends and impacts of AI and computing. 2020 - MIT - AI Trends and Impacts Research - $550,688 2022 - MIT - AI Trends and Impacts Research - $13,277,348 2023 - MIT - AI Trends and Impacts Research - $2,911,324 I've read most of their research, and I don't understand why Open Philanthropy thinks this is a good use of their money. Thompson's Google Scholar here. Thompson's most cited paper "The Computational Limits of Deep Learning" (2020) @gwern pointed out some flaws on Reddit. Thompson's latest paper "A Model for Estimating the Economic Costs of Computer Vision Systems that use Deep Learning" (2024) This paper has many limitations (as acknowledged by the author) and from an x-risks point of view, this paper seems irrelevant. What do you think about Open Philanthropy recommending a total of $16.7M for this work? Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

The Nonlinear Library
EA - Thoughts on these $1M and $500k AI safety grants? by defun

The Nonlinear Library

Play Episode Listen Later Jul 12, 2024 2:57


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Thoughts on these $1M and $500k AI safety grants?, published by defun on July 12, 2024 on The Effective Altruism Forum. Open Philanthropy had a request for proposals for "benchmarking LLM agents on consequential real-world tasks". At least two of the grants went to professors who are developing agents (advancing capabilities). $1,045,620 grant From https://www.openphilanthropy.org/grants/princeton-university-software-engineering-llm-benchmark/ Open Philanthropy recommended a grant of $1,045,620 to Princeton University to support a project to develop a benchmark for evaluating the performance of Large Language Model (LLM) agents in software engineering tasks, led by Assistant Professor Karthik Narasimhan. From Karthik Narasimhan's LinkedIn: "My goal is to build intelligent agents that learn to handle the dynamics of the world through experience and existing human knowledge (ex. text). I am specifically interested in developing autonomous systems that can acquire language understanding through interaction with their environment while also utilizing textual knowledge to drive their decision making." $547,452 grant From https://www.openphilanthropy.org/grants/carnegie-mellon-university-benchmark-for-web-based-tasks/ Open Philanthropy recommended a grant of $547,452 to Carnegie Mellon University to support research led by Professor Graham Neubig to develop a benchmark for the performance of large language models conducting web-based tasks in the work of software engineers, managers, and accountants. Graham Neubig is one of the co-founders of All Hands AI which is developing OpenDevin. All Hands AI's mission is to build AI tools to help developers build software faster and better, and do it in the open. Our flagship project is OpenDevin, an open-source software development agent that can autonomously solve software development tasks end-to-end. Webinar In the webinar when the RFP's were announced, Max Nadeau said (minute 19:00): "a lot of the time when you construct the benchmark you're going to put some effort into making the capable LLM agent that can actually demonstrate accurately what existing models are capable of, but for the most part we're imagining, for both our RFPs, the majority of the effort is spent on performing the measurement as opposed to like trying to increase performance on it". They were already aware that these grants would fund the development of agents and addressed this concern in the same webinar (minute 21:55). https://www.lesswrong.com/posts/7qGxm2mgafEbtYHBf/survey-on-the-acceleration-risks-of-our-new-rfps-to-study Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

The Nonlinear Library
EA - Seven Philanthropic Wins: The Stories That Inspired Open Phil's Offices by Open Philanthropy

The Nonlinear Library

Play Episode Listen Later Jul 3, 2024 6:09


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Seven Philanthropic Wins: The Stories That Inspired Open Phil's Offices, published by Open Philanthropy on July 3, 2024 on The Effective Altruism Forum. Since our early days, we've studied the history of philanthropy to understand what great giving looks like. The lessons we learned made us more ambitious and broadened our view of philanthropy's potential. The rooms in our San Francisco office pay tribute to this legacy. Seven of them are named after philanthropic "wins" - remarkable feats made possible by philanthropic funders. In this post, we'll share the story behind each win. Green Revolution: During the second half of the twentieth century, the Green Revolution dramatically increased agricultural production in developing countries like Mexico and India. At a time of rapid population growth, this boost in production reduced hunger, helped to avert famine, and stimulated national economies. The Rockefeller Foundation played a key role by supporting early research by Norman Borlaug and others to enhance agricultural productivity. Applications of this research - developed in collaboration with governments, private companies, and the Ford Foundation - sparked the Green Revolution, which is estimated to have saved a billion people from starvation. Read more about the Rockefeller Foundation's role in the Green Revolution in Political Geography. The Pill: In 1960, the FDA approved "the pill", an oral contraceptive that revolutionized women's reproductive health by providing a user-controlled family planning option. This groundbreaking development was largely funded by Katharine McCormick, a women's rights advocate and one of MIT's first female graduates. In the early 1950s, McCormick collaborated with Margaret Sanger, the founder of Planned Parenthood, to finance critical early-stage research that led to the creation of the pill. Today, the birth control pill stands as one of the most common and convenient methods of contraception, empowering generations of women to decide when to start a family. For a comprehensive history of the pill, try Jonathan Eig's The Birth of the Pill. Sesame Street: In 1967, the Carnegie Corporation funded a feasibility study on educational TV programming for children, which led to the creation of the Children's Television Workshop and Sesame Street. Sesame Street became one of the most successful television ventures ever, broadcast in more than 150 countries and the winner of more than 200 Emmy awards. Research monitoring the learning progress of Sesame Street viewers has demonstrated significant advances in early literacy. A deeper look into how philanthropy helped to launch Sesame Street is available here. Nunn-Lugar: The Nunn-Lugar Act (1991), also known as the Cooperative Threat Reduction Program, was enacted in response to the collapse of the USSR and the dangers posed by dispersed weapons of mass destruction. US Senators Sam Nunn and Richard Lugar led the initiative, focusing on the disarmament and securing of nuclear, chemical, and biological weapons from former Soviet states. In the course of this work, thousands of nuclear weapons were deactivated or destroyed.
The act's inception and success were largely aided by the strategic philanthropy of the Carnegie Corporation and the MacArthur Foundation, which funded research at Brookings on the "cooperative security" approach to nuclear disarmament and de-escalation. Learn more about the Nunn-Lugar Act and its connection to philanthropy in this paper. Marriage Equality: The Supreme Court's landmark ruling in Obergefell v. Hodges granted same-sex couples the right to marry, marking the culmination of decades of advocacy and a sizable cultural shift toward acceptance. Philanthropic funders - including the Gill Foundation and Freedom to Marry, an organization initially funded by the Evelyn and Wa...

VoxDev Talks
S4 Ep26: Maximising impact: Open Philanthropy's approach to choosing causes

VoxDev Talks

Play Episode Listen Later Jul 3, 2024 25:47


If you want to do good, and do not have unlimited funds, how do you choose? Which places, people, and situations are most deserving? Do you invest in economic benefits or lives saved? Open Philanthropy is an organisation that aims to rigorously optimise the impact of every dollar it spends. Emily Oehlsen tells Tim Phillips about its successes so far, and how it still sometimes gets it wrong.

The Nonlinear Library
EA - High Impact Engineers is Transitioning to a Volunteer-Led Model by Jessica Wen

The Nonlinear Library

Play Episode Listen Later Jul 2, 2024 7:25


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: High Impact Engineers is Transitioning to a Volunteer-Led Model, published by Jessica Wen on July 2, 2024 on The Effective Altruism Forum. Summary After over 2 years of operations, High Impact Engineers (HI-Eng) is reverting to a volunteer-led organisational model due to a middling impact outcome and a lack of funding. We wanted to thank all our subscribers, supporters, and contributors for being the driving force behind HI-Eng's achievements, which you can read about in our Impact Report. What is High Impact Engineers? High Impact Engineers (HI-Eng for short, pronounced high-enj) is an organisation dedicated to helping (physical - i.e. non-software) engineers increase their ability to have an outsized positive impact through their work. Why Is HI-Eng Winding Down? In December 2023, we sent out a community survey and solicited case studies and testimonials to evaluate our impact, which we wrote up in our Impact Report. As shown in the report, there is some evidence of behavioural and attitudinal changes in our members towards more impactful career outcomes due to interactions with our programmes, as well as some ongoing career transitions that we supported to some extent, but even after consultations with grantmakers and other community builders, we found it difficult to determine if this amount of impact would meet the bar for ongoing funding. As a result, we decided to (re-)apply for funding from the major EA funds (i.e. EAIF and Open Philanthropy), and they ended up deciding to not fund High Impact Engineers. Since our runway from the previous funding round was so short, we decided against trying to hire someone else to take over running HI-Eng, and the team is moving on to new opportunities. However, we still believe that engineers in EA are a valuable and persistently underserved demographic, and that this latent potential can be realised by providing a hub for engineers in EA to meet other like-minded engineers and find relevant resources. Therefore, we decided to maintain the most valuable and impactful programmes through the help of volunteers. Lessons Learnt There are already many resources available for new community builders (e.g. the EA Groups Resource Centre, this, this, this, and this EA Forum post, and especially this post by Sofia Balderson), so we don't believe that there is much we can add that hasn't already been said. However, here are some lessons we think are robustly good: 1. Having a funding cycle of 6 months is too short. 2. If you're looking to get set up and running quickly, getting a fiscal sponsor is great. We went with the Players Philanthropy Fund, but there are other options (including Rethink Priorities and maybe your national EA group). 3. Speak to other community builders, and ask for their resources! They're often more than happy to give you a copy of their systems, processes and documentation (minus personal data). 4. Pay for monthly subscriptions to software when setting up, even if it's cheaper to get an annual subscription. You might end up switching to a different software further down the line, and it's easier (and cheaper) to cancel a monthly subscription. 5. Email each of your subscriptions' customer service to ask for a non-profit discount (if you have non-profit status). They can save you up to 50% of the ticket price. 
(Jessica will write up her own speculative lessons learnt in a future forum post). What Will HI-Eng Look Like Going Forward? Jessica will continue managing HI-Eng as a volunteer, and is currently implementing the following changes in our programmes: Email newsletter: the final HI-Eng newsletter was sent in May. Future impactful engineering opportunities can be found on the 80,000 Hours job board or the EA Opportunities board. Any other impactful engineering jobs can be submitted to these boards ( submission...

The Eric Ries Show
Lessons From Co-Founding Facebook, And Now Asana | Dustin Moskovitz

The Eric Ries Show

Play Episode Listen Later Jun 20, 2024 71:23


Welcome to The Eric Ries Show. I sat down with Dustin Moskovitz, founder of not one but two iconic companies: Facebook and the collaborative work platform Asana. Needless to say, he's engaged in the most intense form of entrepreneurship there is. A huge part of what he's chosen to do with the hard-earned knowledge it gave him is dedicate himself and Asana to investing in employees' mental health, communication skills, and more. All of this matters to Dustin on a human level, but he also explains why putting people first is the only way to get the kind of results most founders can only dream of. We talked about how to get into that flow state, why preserving culture is crucial, his leadership style and how he decides when to be hands-on versus when to delegate, and how Asana reflects what he's learned about supporting people at all levels.  Dustin sums up the work Asana does this way: “Our individual practices are meant to restore coherence for the individual, our team practices are meant to restore coherence for the team, and Asana, the system, is meant to try and do it for the entire organization.” I'm delighted to share our conversation, which also covers: • How he uses AI and views its future • Why he founded a collaboration platform • How he applied the lessons of Facebook to building Asana • Why taking care of your mental health as a founder is crucial for the company as a whole • His thoughts on the evolution of Facebook • The importance of alignment with investors • His philanthropic work  • And so much more — Brought to you by: Mercury – The art of simplified finances. Learn more. DigitalOcean – The cloud loved by developers and founders alike. Sign up. Neo4j – The graph database and analytics leader. Learn more. — Where to find Dustin Moskovitz: • LinkedIn: https://www.linkedin.com/in/dmoskov/ • Threads: ​​https://www.threads.net/@moskov • Asana: https://asana.com/leadership#moskovitz Where to find Eric: • Newsletter: https://ericries.carrd.co/  • Podcast: https://ericriesshow.com/  • X: https://twitter.com/ericries  • LinkedIn: https://www.linkedin.com/in/eries/  • YouTube: https://www.youtube.com/@theericriesshow  — In This Episode We Cover: (00:00) Welcome to the Eric Ries Show (00:31) Meet our guest Dustin Moskovitz (04:02) How Dustin is using AI for creative projects (05:31) Dustin talks about the social media and SaaS era and his Facebook days (06:52) How Facebook has evolved from its original intention (10:27) The founding of Asana (14:35) Building entrepreneurial confidence (19:22) Making – and fixing – design errors at Asana  (20:32) The importance of committing to “soft” values. 
(25:27) Short-term profit over people and terrible advice from VCs (28:44) Crypto as a caricature of extractive behavior (30:47) The positive impacts of doing things with purpose (34:24) How Asana is ensuring its purpose and mission are permanently enshrined in the company (41:35) Battling entropy and meeting culture (44:31) Being employee-centric, the flow state, and Asana's strategy (47:51) The organizational equivalent of repressing emotions (52:57) Dustin as a Cassandra (56:51) Dustin talks about his philanthropic work and philosophy: Open Philanthropy, Good Ventures (1:02:05) Dustin's thoughts on AI and its future (1:07:20) Ethics, calculated risk, and thinking long-term — Referenced: Asana: https://asana.com/; Conscious Leadership Group: https://conscious.is/; Ben Horowitz on managing your own psychology: https://a16z.com/whats-the-most-difficult-ceo-skill-managing-your-own-psychology/; The Infinite Game, by Simon Sinek; Dr. John Sarno; The 15 Commitments of Conscious Leadership; Awareness: Conversations with the Masters, by Anthony de Mello; Brené Brown: Dare to Lead, The Call to Courage (Netflix trailer); Open Philanthropy; Good Ventures; GiveWell — Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email jordan@penname.co. Eric may be an investor in the companies discussed.

The Nonlinear Library
EA - Advice for EA org staff and EA group organisers interacting with political campaigns by Catherine Low

The Nonlinear Library

Play Episode Listen Later Jun 17, 2024 12:50


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Advice for EA org staff and EA group organisers interacting with political campaigns, published by Catherine Low on June 17, 2024 on The Effective Altruism Forum. Compiled by CEA's Community Health team. 2024 is the biggest year for elections in history(!), and while many of these elections have passed, some important elections are upcoming, including the UK and US elections, providing a potentially large opportunity to have an impact through political change. This post is intended: 1. To make it easier for EA group organisers and organisation staff to adhere to the laws in relevant countries; and 2. More generally, to help the community be able to take high impact actions now and in the future by reducing risks of polarisation of EA and the cause areas we care about. Two main concerns: legal risks, and risks around polarisation and epistemics. Legal risks: Charities and organisations associated with/funded by charities have constraints on what political activities they can do. See "More about legal risks." Note: This post is not legal advice. Our team is employed by US and UK charities (Effective Ventures US and UK). So, we have a little familiarity with the legal situations for groups/organisations that are based in the US or UK (many EA organisations), and groups/organisations that are funded by charities in the US or UK (even more EA groups and organisations). We have very little knowledge about the legal situation relating to other countries. It could be useful for groups/orgs in any country (including US and UK) to get independent legal advice. Risks around polarisation and epistemics: These risks include EA becoming more associated with specific parties or parts of the political spectrum, in a way that makes EAs less able to collaborate with others; the issues EA works on becoming polarised / associated with a specific party; EA falling into lower standards of reasoning, honesty, etc. through feeling a need to compete in political arenas where good epistemics are not valued as highly; and creating suspicion about whether EAs are primarily motivated by seeking power rather than doing the most good. Of course, the upside of doing political work could be extremely high. So our recommendation isn't for EAs to stop doing political work, but to be very careful to think through risks when choosing your actions. Some related ideas about the risks of polarisation and political advocacy: 1. Climate change policy and politics in the US 2. Lesson 7: Even among EAs, politics might somewhat degrade our typical epistemics and rigor 3. To Oppose Polarization, Tug Sideways 4. Politics on the EA Forum. More about legal risks: If your group/organisation is a charity or is funded by a charity, note that in many (or maybe all?) places, charities or organisations funded by charities are NOT allowed to engage in political campaigning. E.g. US: U.S. 501(c)(3) public charities are prohibited from "intervening in political campaigns" (more detail). This includes organisations that are funded by US 501(c)(3) charities (including Open Philanthropy's charitable arm, and Effective Ventures (which hosts EA Funds and CEA)).
This includes: financial support for a campaign, including reimbursing costs for people to engage in volunteer activities; endorsing or disapproving of a candidate, or referring to a candidate's characteristics or qualifications for office - in writing, speaking, mentions on the website, podcasts, etc. (language that could appear partisan, like stating "holding elected officials accountable", could also imply disapproval); taking action to help or hurt the chances of a candidate (this can be problematic even if you or your charity didn't intend to help or hurt the candidate); and staff taking political action that's seen as representing the organisation they work for, e.g. attending rallies or door knocking as ...

Macroaggressions
#442: Peace Sells But Who's Buying?

Macroaggressions

Play Episode Listen Later May 29, 2024 54:52


It is pretty obvious that the Peace Corps was a unique intelligence-gathering operation, but most people have not heard how it was established and who financed the operation over the decades. There is also a pipeline of talent for NGOs and spook agencies that runs through the Peace Corps, making it hidden in plain sight. The other fake do-gooder organization with questionable partnerships and sketchy financing is Greenpeace, with its ties to John Podesta, George Soros, Bill Gates, and Planned Parenthood. It appears that they intend to preserve the planet through depopulation, which would explain their financial support from known eugenics operations and anti-humanity NGOs, such as the Tides Foundation, Open Philanthropy, and the World Wildlife Fund. The Octopus of Global Control Audiobook: https://amzn.to/3xu0rMm Anarchapulco 2024 Replay: www.Anarchapulco.com Promo Code: MACRO Sponsors: Chemical Free Body: https://www.chemicalfreebody.com Promo Code: MACRO C60 Purple Power: https://c60purplepower.com/ Promo Code: MACRO Wise Wolf Gold & Silver: www.Macroaggressions.gold True Hemp Science: https://truehempscience.com/ Haelan: https://haelan951.com/pages/macro Solar Power Lifestyle: https://solarpowerlifestyle.com/ Promo Code: MACRO LegalShield: www.DontGetPushedAround.com EMP Shield: www.EMPShield.com Promo Code: MACRO Christian Yordanov's Detoxification Program: https://members.christianyordanov.com/detox-workshop?coupon=MACRO Privacy Academy: https://privacyacademy.com/step/privacy-action-plan-checkout-2/?ref=5620 Coin Bit App: https://coinbitsapp.com/?ref=0SPP0gjuI68PjGU89wUv Macroaggressions Merch Store: https://www.teepublic.com/stores/macroaggressions?ref_id=22530 LinkTree: linktr.ee/macroaggressions Books: HYPOCRAZY: https://amzn.to/3VsPDp8 Controlled Demolition on Amazon: https://amzn.to/3ufZdzx The Octopus Of Global Control: Amazon: https://amzn.to/3VDWQ5c Barnes & Noble: https://bit.ly/39vdKeQ Online Connection: Link Tree: https://linktr.ee/Macroaggressions Instagram: https://www.instagram.com/macroaggressions_podcast/ Discord Link: https://discord.gg/4mGzmcFexg Website: www.Macroaggressions.io Facebook: www.facebook.com/theoctopusofglobalcontrol Twitter: www.twitter.com/macroaggressio3 Twitter Handle: @macroaggressio3 Rumble: https://rumble.com/c/c-4728012 The Union Of The Unwanted LinkTree: https://linktr.ee/uotuw RSS FEED: https://uotuw.podbean.com/ Merch Store: https://www.teepublic.com/stores/union-of-the-unwanted?ref_id=22643&utm_campaign=22643&utm_medium=affiliate&utm_source

80,000 Hours Podcast with Rob Wiblin
#188 – Matt Clancy on whether science is good

80,000 Hours Podcast with Rob Wiblin

Play Episode Listen Later May 23, 2024 160:15


"Suppose we make these grants, we do some of those experiments I talk about. We discover, for example — I'm just making this up — but we give people superforecasting tests when they're doing peer review, and we find that you can identify people who are super good at picking science. And then we have this much better targeted science, and we're making progress at a 10% faster rate than we normally would have. Over time, that aggregates up, and maybe after 10 years, we're a year ahead of where we would have been if we hadn't done this kind of stuff."Now, suppose in 10 years we're going to discover a cheap new genetic engineering technology that anyone can use in the world if they order the right parts off of Amazon. That could be great, but could also allow bad actors to genetically engineer pandemics and basically try to do terrible things with this technology. And if we've brought that forward, and that happens at year nine instead of year 10 because of some of these interventions we did, now we start to think that if that's really bad, if these people using this technology causes huge problems for humanity, it begins to sort of wash out the benefits of getting the science a little bit faster." —Matt ClancyIn today's episode, host Luisa Rodriguez speaks to Matt Clancy — who oversees Open Philanthropy's Innovation Policy programme — about his recent work modelling the risks and benefits of the increasing speed of scientific progress.Links to learn more, highlights, and full transcript.They cover:Whether scientific progress is actually net positive for humanity.Scenarios where accelerating science could lead to existential risks, such as advanced biotechnology being used by bad actors.Why Matt thinks metascience research and targeted funding could improve the scientific process and better incentivise outcomes that are good for humanity.Whether Matt trusts domain experts or superforecasters more when estimating how the future will turn out.Why Matt is sceptical that AGI could really cause explosive economic growth.And much more.Chapters:Is scientific progress net positive for humanity? (00:03:00)The time of biological perils (00:17:50)Modelling the benefits of science (00:25:48)Income and health gains from scientific progress (00:32:49)Discount rates (00:42:14)How big are the returns to science? (00:51:08)Forecasting global catastrophic biological risks from scientific progress (01:05:20)What's the value of scientific progress, given the risks? (01:15:09)Factoring in extinction risk (01:21:56)How science could reduce extinction risk (01:30:18)Are we already too late to delay the time of perils? (01:42:38)Domain experts vs superforecasters (01:46:03)What Open Philanthropy's Innovation Policy programme settled on (01:53:47)Explosive economic growth (02:06:28)Matt's favourite thought experiment (02:34:57)Producer and editor: Keiran HarrisAudio engineering lead: Ben CordellTechnical editing: Simon Monsour, Milo McGuire, and Dominic ArmstrongAdditional content editing: Katy Moore and Luisa RodriguezTranscriptions: Katy Moore

Macroaggressions
#440: Climate Incorporated

Macroaggressions

Play Episode Listen Later May 22, 2024 64:31


The idea of “decarbonizing” the world is laughable and insane due to the obvious impossibility of the task, but many of the most powerful and wealthy institutes and foundations have invested billions to try to do the impossible. Or at least, try to convince you that they are. The manipulation of data, behaviors, and emotions with regard to the climate change grift is set to be pushed to the public for the next decade through groups such as Climate Central, Open Philanthropy, ClimateWorks Foundation, and the World Resources Institute. Their mission statements might sound altruistic, but depopulation is always just below the surface. The Octopus of Global Control Audiobook: https://amzn.to/3xu0rMm Anarchapulco 2024 Replay: www.Anarchapulco.com Promo Code: MACRO Sponsors: Chemical Free Body: https://www.chemicalfreebody.com Promo Code: MACRO C60 Purple Power: https://c60purplepower.com/ Promo Code: MACRO Wise Wolf Gold & Silver: www.Macroaggressions.gold True Hemp Science: https://truehempscience.com/ Haelan: https://haelan951.com/pages/macro Solar Power Lifestyle: https://solarpowerlifestyle.com/ Promo Code: MACRO LegalShield: www.DontGetPushedAround.com EMP Shield: www.EMPShield.com Promo Code: MACRO Christian Yordanov's Detoxification Program: https://members.christianyordanov.com/detox-workshop?coupon=MACRO Privacy Academy: https://privacyacademy.com/step/privacy-action-plan-checkout-2/?ref=5620 Coin Bit App: https://coinbitsapp.com/?ref=0SPP0gjuI68PjGU89wUv Macroaggressions Merch Store: https://www.teepublic.com/stores/macroaggressions?ref_id=22530 LinkTree: linktr.ee/macroaggressions Books: HYPOCRAZY: https://amzn.to/3VsPDp8 Controlled Demolition on Amazon: https://amzn.to/3ufZdzx The Octopus Of Global Control: Amazon: https://amzn.to/3VDWQ5c Barnes & Noble: https://bit.ly/39vdKeQ Online Connection: Link Tree: https://linktr.ee/Macroaggressions Instagram: https://www.instagram.com/macroaggressions_podcast/ Discord Link:  https://discord.gg/4mGzmcFexg Website: www.Macroaggressions.io Facebook: www.facebook.com/theoctopusofglobalcontrol Twitter: www.twitter.com/macroaggressio3 Twitter Handle: @macroaggressio3 Rumble: https://rumble.com/c/c-4728012 The Union Of The Unwanted LinkTree: https://linktr.ee/uotuw RSS FEED: https://uotuw.podbean.com/ Merch Store: https://www.teepublic.com/stores/union-of-the-unwanted?ref_id=22643&utm_campaign=22643&utm_medium=affiliate&utm_source

80,000 Hours Podcast with Rob Wiblin
#185 – Lewis Bollard on the 7 most promising ways to end factory farming, and whether AI is going to be good or bad for animals

80,000 Hours Podcast with Rob Wiblin

Play Episode Listen Later Apr 18, 2024 153:12


"The constraint right now on factory farming is how far can you push the biology of these animals? But AI could remove that constraint. It could say, 'Actually, we can push them further in these ways and these ways, and they still stay alive. And we've modelled out every possibility and we've found that it works.' I think another possibility, which I don't understand as well, is that AI could lock in current moral values. And I think in particular there's a risk that if AI is learning from what we do as humans today, the lesson it's going to learn is that it's OK to tolerate mass cruelty, so long as it occurs behind closed doors. I think there's a risk that if it learns that, then it perpetuates that value, and perhaps slows human moral progress on this issue." —Lewis BollardIn today's episode, host Luisa Rodriguez speaks to Lewis Bollard — director of the Farm Animal Welfare programme at Open Philanthropy — about the promising progress and future interventions to end the worst factory farming practices still around today.Links to learn more, highlights, and full transcript.They cover:The staggering scale of animal suffering in factory farms, and how it will only get worse without intervention.Work to improve farmed animal welfare that Open Philanthropy is excited about funding.The amazing recent progress made in farm animal welfare — including regulatory attention in the EU and a big win at the US Supreme Court — and the work that still needs to be done.The occasional tension between ending factory farming and curbing climate changeHow AI could transform factory farming for better or worse — and Lewis's fears that the technology will just help us maximise cruelty in the name of profit.How Lewis has updated his opinions or grantmaking as a result of new research on the “moral weights” of different species.Lewis's personal journey working on farm animal welfare, and how he copes with the emotional toll of confronting the scale of animal suffering.How listeners can get involved in the growing movement to end factory farming — from career and volunteer opportunities to impactful donations.And much more.Chapters:Common objections to ending factory farming (00:13:21)Potential solutions (00:30:55)Cage-free reforms (00:34:25)Broiler chicken welfare (00:46:48)Do companies follow through on these commitments? (01:00:21)Fish welfare (01:05:02)Alternatives to animal proteins (01:16:36)Farm animal welfare in Asia (01:26:00)Farm animal welfare in Europe (01:30:45)Animal welfare science (01:42:09)Approaches Lewis is less excited about (01:52:10)Will we end factory farming in our lifetimes? (01:56:36)Effect of AI (01:57:59)Recent big wins for farm animals (02:07:38)How animal advocacy has changed since Lewis first got involved (02:15:57)Response to the Moral Weight Project (02:19:52)How to help (02:28:14)Producer and editor: Keiran HarrisAudio engineering lead: Ben CordellTechnical editing: Simon Monsour, Milo McGuire, and Dominic ArmstrongAdditional content editing: Katy Moore and Luisa RodriguezTranscriptions: Katy Moore

Switch4Good
Should You Care About Farmed Animals? with Lewis Bollard

Switch4Good

Play Episode Listen Later Feb 28, 2024 48:38


Around the world, there are tens of billions of non-human animals trapped in the barbaric system of industrial animal agriculture. And unfortunately, that number is increasing. It's a severe problem, not just for the suffering animals, but for human health and the health of our planet.    Fortunately, there are people dedicating their lives to tackling this issue. And today, we're joined by one of these remarkable individuals.   Lewis Bollard is a dedicated animal advocate making meaningful change on a global scale. He is the Farm Animal Welfare Director at Open Philanthropy, which is a philanthropic funder addressing important and often neglected causes. Prior to working there, Lewis was the Policy Advisor and International Liaison to the CEO of The Humane Society of the United States.   Join us to hear our illuminating discussion with Lewis as we shine a spotlight on his quest for a more ethical world and his efforts to end factory farming once and for all!    “We chose farm animals just because of the sheer numbers. I mean the sheer scale of factory farming globally in terms of tens of billions of animals suffering. And as you say, I mean it is definitely especially neglected because people don't pay attention to them. I think that's partly, I mean, people will say yes because they're not smart, but I'm skeptical that's what's really going on. I think it's just convenient to not care about them, and it would be really inconvenient to take seriously their interests. I mean, as with you, I think that what ultimately matters is whether they can suffer. You know, I think across all these species, I suspect there's a lot more going on. I suspect they're smarter than we give them credit for. But I ultimately think that debate doesn't even really matter to our moral obligations. We just, shouldn't torture them. We shouldn't cause suffering to a being who can suffer.” - Lewis Bollard   What we discuss in this episode: - The eye-opening family trip that changed Lewis's life.  - Why do people fall off the vegan wagon? - Lewis's thoughts on alternative proteins and plant-based meats. - How the animal rights movement has evolved. - The most abused farm animals in the world. - Positive changes in the animal rights movement and the potential threat AI poses. - How governments around the world are promoting plant-based diets. - The work Lewis is doing to effect positive change. - Lewis's advice to those who want to make a positive impact but don't know where to start.   Resources: - Open Philanthropy: Lewis Bollard | Open Philanthropy - https://www.openphilanthropy.org/about/team/lewis-bollard/ - Lewis's Twitter/X: https://twitter.com/Lewis_Bollard - Vote for the Switch4Good podcast here: https://bit.ly/s4gpodcast   ★☆★ Click the link below to support the ADD SOY Act! ★☆★ https://switch4good.org/add-soy-act/ ★☆★ Share the website and get your resources here ★☆★ https://kidsandmilk.org/ ★☆★ Send us a voice message and ask a question. We want to hear from you! 
★☆★ https://switch4good.org/podcast/ ★☆★ Dairy-Free Swaps Guide: Easy Anti-Inflammatory Meals, Recipes, and Tips ★☆★ https://switch4good.org/dairy-free-swaps-guide ★☆★SUPPORT SWITCH4GOOD★☆★ https://switch4good.org/support-us/ ★☆★ JOIN OUR PRIVATE FACEBOOK GROUP ★☆★  https://www.facebook.com/groups/podcastchat ★☆★ SWITCH4GOOD WEBSITE ★☆★ https://switch4good.org/ ★☆★ ONLINE STORE ★☆★ https://shop.switch4good.org/shop/ ★☆★ FOLLOW US ON INSTAGRAM ★☆★ https://www.instagram.com/Switch4Good/ ★☆★ LIKE US ON FACEBOOK ★☆★ https://www.facebook.com/Switch4Good/ ★☆★ FOLLOW US ON TWITTER ★☆★ https://mobile.twitter.com/Switch4GoodNFT ★☆★ AMAZON STORE ★☆★ https://www.amazon.com/shop/switch4good ★☆★ DOWNLOAD THE ABILLION APP ★☆★ https://app.abillion.com/users/switch4good

80,000 Hours Podcast with Rob Wiblin
#90 Classic episode – Ajeya Cotra on worldview diversification and how big the future could be

80,000 Hours Podcast with Rob Wiblin

Play Episode Listen Later Jan 12, 2024 179:17


Rebroadcast: this episode was originally released in January 2021. You wake up in a mysterious box, and hear the booming voice of God: “I just flipped a coin. If it came up heads, I made ten boxes, labeled 1 through 10 — each of which has a human in it. If it came up tails, I made ten billion boxes, labeled 1 through 10 billion — also with one human in each box. To get into heaven, you have to answer this correctly: Which way did the coin land?” You think briefly, and decide you should bet your eternal soul on tails. The fact that you woke up at all seems like pretty good evidence that you're in the big world — if the coin landed tails, way more people should be having an experience just like yours. But then you get up, walk outside, and look at the number on your box. ‘3'. Huh. Now you don't know what to believe. If God made 10 billion boxes, surely it's much more likely that you would have seen a number like 7,346,678,928? In today's interview, Ajeya Cotra — a senior research analyst at Open Philanthropy — explains why this thought experiment from the niche of philosophy known as ‘anthropic reasoning' could be relevant for figuring out where we should direct our charitable giving. Links to learn more, summary, and full transcript. Some thinkers both inside and outside Open Philanthropy believe that philanthropic giving should be guided by ‘longtermism' — the idea that we can do the most good if we focus primarily on the impact our actions will have on the long-term future. Ajeya thinks that for that notion to make sense, there needs to be a good chance we can settle other planets and solar systems and build a society that's both very large relative to what's possible on Earth and, by virtue of being so spread out, able to protect itself from extinction for a very long time. But imagine that humanity has two possible futures ahead of it: Either we're going to have a huge future like that, in which trillions of people ultimately exist, or we're going to wipe ourselves out quite soon, thereby ensuring that only around 100 billion people ever get to live. If there are eventually going to be 1,000 trillion humans, what should we think of the fact that we seemingly find ourselves so early in history? Being among the first 100 billion humans, as we are, is equivalent to walking outside and seeing a three on your box. Suspicious! If the future will have many trillions of people, the odds of us appearing so strangely early are very low indeed. If we accept the analogy, maybe we can be confident that humanity is at a high risk of extinction based on this so-called ‘doomsday argument' alone. If that's true, maybe we should put more of our resources into avoiding apparent extinction threats like nuclear war and pandemics. But on the other hand, maybe the argument shows we're incredibly unlikely to achieve a long and stable future no matter what we do, and we should forget the long term and just focus on the here and now instead. There are many critics of this theoretical ‘doomsday argument', and it may be the case that it logically doesn't work.
This is why Ajeya spent time investigating it, with the goal of ultimately making better philanthropic grants. In this conversation, Ajeya and Rob discuss both the doomsday argument and the challenge Open Phil faces striking a balance between taking big ideas seriously, and not going all in on philosophical arguments that may turn out to be barking up the wrong tree entirely. They also discuss: Which worldviews Open Phil finds most plausible, and how it balances them. Which worldviews Ajeya doesn't embrace but almost does. How hard it is to get to other solar systems. The famous ‘simulation argument'. When transformative AI might actually arrive. The biggest challenges involved in working on big research reports. What it's like working at Open Phil. And much more. Producer: Keiran Harris. Audio mastering: Ben Cordell. Transcriptions: Sofia Davis-Fogel
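For anyone who wants to see the arithmetic behind the boxes puzzle, here is a minimal Python sketch (an illustration, not material from the episode), assuming a fair coin and that you weight each possible world by how many observers it contains and treat yourself as a uniformly sampled box occupant: waking up at all favours the ten-billion-box world, seeing a low label like 3 favours the ten-box world, and under these assumptions the two effects cancel to even odds.

# Hypothetical illustration of the boxes thought experiment (not from the episode).
# Assumes a fair coin and an observer-count-weighted, uniform-sampling update.
def posterior_tails(heads_boxes=10, tails_boxes=10_000_000_000, label=3):
    prior_heads = prior_tails = 0.5
    # Step 1: waking up at all favours the big world in proportion to its box count.
    wake_heads = prior_heads * heads_boxes
    wake_tails = prior_tails * tails_boxes
    # Step 2: seeing one specific low label is far more likely in the small world.
    see_heads = wake_heads * (1 / heads_boxes) if label <= heads_boxes else 0.0
    see_tails = wake_tails * (1 / tails_boxes)
    return see_tails / (see_heads + see_tails)

print(posterior_tails())  # 0.5 -- the two updates cancel exactly

Under a different sampling assumption, the same observation can instead strongly favour the small world, which is part of why the doomsday argument remains contested.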

Pivot
AI Ethics at Code 2023

Pivot

Play Episode Listen Later Oct 25, 2023 28:43


Platformer's Casey Newton moderates a conversation at Code 2023 on ethics in artificial intelligence, with Ajeya Cotra, Senior Program Officer at Open Philanthropy, and Helen Toner, Director of Strategy at Georgetown University's Center for Security and Emerging Technology. The panel discusses the risks and rewards of the technology, as well as best practices and safety measures. Recorded on September 27th in Los Angeles. Learn more about your ad choices. Visit podcastchoices.com/adchoices