Podcasts about utilitarians

  • 20 PODCASTS
  • 27 EPISODES
  • 47m AVG DURATION
  • ? INFREQUENT EPISODES
  • Jan 24, 2024 LATEST

POPULARITY

[Chart of episode popularity by year, 2017-2024]


Best podcasts about utilitarians

Latest podcast episodes about utilitarians

Pigeon Hour
Best of Pigeon Hour

Jan 24, 2024 · 107:33


Table of contents

Note: links take you to the corresponding section below; links to the original episode can be found there.

* Laura Duffy solves housing, ethics, and more [00:01:16]
* Arjun Panickssery solves books, hobbies, and blogging, but fails to solve the Sleeping Beauty problem because he's wrong on that one [00:10:47]
* Nathan Barnard on how financial regulation can inform AI regulation [00:17:16]
* Winston Oswald-Drummond on the tractability of reducing s-risk, ethics, and more [00:27:48]
* Nathan Barnard (again!) on why general intelligence is basically fake [00:34:10]
* Daniel Filan on why I'm wrong about ethics (+ Oppenheimer and what names mean in like a hardcore phil of language sense) [00:56:54]
* Holly Elmore on AI pause, wild animal welfare, and some cool biology things I couldn't fully follow but maybe you can [01:04:00]
* Max Alexander and I solve ethics, philosophy of mind, and cancel culture once and for all [01:24:43]
* Sarah Woodhouse on discovering AI x-risk, Twitter, and more [01:30:56]
* Pigeon Hour x Consistently Candid pod-crossover: I debate moral realism with Max Alexander and Sarah Hastings-Woodhouse [01:41:08]

Intro [00:00:00]

To wrap up the year of Pigeon Hour, the podcast, I put together some clips from each episode to create a best-of compilation. This was inspired by 80,000 Hours, a podcast that did the same with their episodes, and I thought it was pretty cool and tractable enough.

It's important to note that the clips I chose range significantly in length. This does not reflect the quality or amount of interesting content in an episode. Sometimes there was a natural place to break the episode into a five-minute chunk, and other times it wouldn't have made sense to take a five-minute chunk out of what really needed to be a 20-minute segment. I promise I'm not just saying that.

So without further ado, please enjoy.

#1: Laura Duffy solves housing, ethics, and more [00:01:16]

In this first segment, Laura Duffy and I discuss the significance and interpretation of Aristotle's philosophical works in relation to modern ethics and virtue theory.

AARON: Econ is like more interesting. I don't know. I don't even remember of all the things. I don't know, it seems like kind of cool. Philosophy... probably would have majored in philosophy if signaling wasn't an issue. Actually, maybe I'm not sure if that's true. Okay. I didn't want to do the old stuff though, so I'm actually not sure. But if I could... Aristotle, it's all wrong. Didn't you say you got a lot out of the Nicomachean Ethics, or however you pronounce that?

LAURA: The Nicomachean Ethics is a guide to how you should live your life. It's about ethics as applied to your life, because you can't be perfect. Utilitarians, there's no way to be that.

AARON: But he wasn't even responding to utilitarianism. I'm sure it was a good work given the time, but there's like no other discipline in which people care so much about what people thought 2,000 years ago, because the presumption, I think the justified presumption, is that things have iterated and improved since then. And I think that's true. It's not just a presumption.

LAURA: Humans are still rather the same, and what our needs are for living amongst each other in political society are kind of the same. I think America's founding is very influenced by what people thought 2,000 years ago.
AARON: Yeah, descriptively that's probably true. But I don't know, it seems like the whole body of philosophers has already done the work of compressing the good stuff. Like, the entire academy since 1400 or whatever has compressed the good stuff and gotten rid of the bad stuff. Not in a high-fidelity way, but in a better-than-chance way. And so with the stuff that remains, if you just take the state of... I don't know, if you read the Oxford Handbook of whatever it is, ethics or something, the takeaways you're going to get from that are just better than the takeaways you're going to get from a summary of the state of the knowledge in any prior year. At least, unless something weird happened. And I don't know. I don't know if that makes sense.

LAURA: I think we're talking about two different things, though. Okay, in terms of knowledge about logic or something, or, I don't know, argumentation about trying to derive the correct moral theory or something, versus how we should think about our own lives. I don't see any reason as to why the framework of virtue theory is incorrect just because it's old. There are many virtue theorists now who are like, oh yeah, they were really onto something, and we need to adapt it for the times in which we live and the kind of societies we live in now. But there was still a huge kernel of truth in at least the way of thinking that Aristotle put forth, in terms of balancing the different virtues that you care about and trying to find... I think this is true, right? Take one virtue of his: humor. You don't want to be on one extreme, where you're just basically a meme your entire life. Everybody thinks you're funny, but that's just not very serious. But you don't want to be a bore either, and so you want to find somewhere in the middle, where you have a good sense of humor but can still function and be respected by other people.

AARON: Yeah. Once again, I agree. Well, I don't agree with everything. I agree with a lot of what you just said. I think there were like two main points of either confusion or disagreement. And the first one is that I definitely think, no, Aristotle, or his ideas, or virtue ethics, or anything like that, shouldn't be discounted because the canonical texts were written a long time ago. I guess it's just that I have a pretty strong presumption that, conditional on them being good, they would also be written about today. And so you don't actually need to go back to the founding texts, and in fact, you probably shouldn't, because the good stuff will be explained better and not in weird... the terms are used differently, and they're, like, translations from Aramaic or whatever. Probably not Aramaic, probably something else. And yeah, I'm not sure if you...

LAURA: Agree with this? Because we have certain assumptions about what words like "purpose" mean now, whereas the old conceptions of them, like telos, or happiness... right, eudaimonia is a much richer concept, and to read the original text and see how those different concepts work together is actually quite enriching, compared to how people use these words now. I think there just is a lot of value in looking at how these were originally conceived, because with popularizers of the works now, or people who are seriously doing philosophy using these concepts, you just don't have the background knowledge that's necessary to understand them fully if you don't read the canonical text.
AARON: Yeah, I think that would be true if you were a native speaker. Do you know Greek? If you know Greek, this is dumb, because then you're just right.

LAURA: I did take a quarter of it.

AARON: Oh God. Oh my God. I don't know if that counts, but that's like more than anybody should ever take. No, I'm just kidding. That's very cool. No, because I was going to say, if you're a native speaker of Greek and you have the connotations of the word eudaimonia and you were living in that time and place, I would say, yeah, that's true actually. That's a lot of nuance, connotation, and context that definitely gets lost with translation. But once you take the jump of reading English translations of the texts... there's nothing super special. You're not getting any privileged knowledge from saying the word "eudaimonia" as opposed to just saying some other term as a reference to that concept or something. You're absorbing the connotation and the context via English, I guess, via the mind of, literally, the translators, who have, like...

LAURA: Yeah, well, see, I tried to learn virtue theory by any route other than reading Aristotle.

AARON: Oh God.

LAURA: I took a course specifically on Plato and Aristotle.

AARON: Sorry, I'm not laughing at you. I'm just like the opposite type of philosophy person.

LAURA: Fair, but keep going. She had us read his Physics before we read the Nicomachean Ethics.

AARON: I think he was wrong about all that stuff...

LAURA: ...but it made you understand what he meant by his teleology theory so much better, in a way that I could not get if I was reading some modern thing.

AARON: I don't know, I feel like you probably could. No, sorry, that's not true. I don't think you could get what Aristotle the man truly believed as well via a modern text. But is that what you... it depends. If you're trying to be a scholar of Aristotle, maybe that's important. If you're trying to find the best or truest ethics and learn the lessons of how to live, that's a different type of task. I don't think Aristotle the man should be all that privileged in that.

LAURA: If all of the modern people who are talking about virtue theory are basically Aristotle, then I don't see the difference.

AARON: Oh, yeah, I guess. Fair enough. And then I would say, like, oh, well, they should probably start... Is that in fact the state of things in virtue theory? I don't even know.

LAURA: I don't know either.

#2: Arjun Panickssery solves books, hobbies, and blogging, but fails to solve the Sleeping Beauty problem because he's wrong on that one [00:10:47]

All right, next, Arjun Panickssery and I explore the effectiveness of reading books in retaining and incorporating knowledge, discussing the value of long-form content and the impact of great literary works on understanding and shaping personal worldviews.

ARJUN: Oh, you were in the book chat, though. The book rant group chat, right?

AARON: Yeah, I think I might have just not read any of it. So do you want to fill me in on what I should have read?
ARJUN: Yeah, it's a group chat of a bunch of people where we were arguing about a bunch of claims related to books. One of them is that most people don't remember pretty much anything from books that they read, right? They read a book, and then, a few months later, if you ask them about it, they'll just say one page's worth of information, or maybe a few paragraphs. The other is... what is it exactly? It's that if you read a lot of books, it could be that you just incorporate the information that's important into your existing models and then forget the information itself. So it's actually fine. Isn't this what you wrote in your blog post or whatever? I think that's why I added you to that.

AARON: Oh, thank you. I'm sorry I'm such a bad group chat participant. Yeah, honestly, I wrote that a while ago. I don't fully remember exactly what it says, but at least one of the things that it said, which I still basically stand by, is that reading is basically increasing the salience of a set of ideas, more so than just filling your brain with more facts. And I think this is probably true insofar as the facts support a set of common themes or ideas that are kind of the intellectual core of it. It would be really hard... okay, so this is not a book, but okay: I've talked about how much I love the 80,000 Hours podcast, and I've listened to, I don't think every episode, but at least 100 of the episodes. And I've definitely forgotten most of, almost all of, the actual propositional pieces of information said, but you're just not going to convince me that it's completely not affecting my model of the world, or stuff that I know, or whatever. I mean, there are facts that I could list. I think maybe I should try.

ARJUN: Sure.

AARON: Yeah. So what's your take on books and other long form?

ARJUN: Oh, I don't know. I'm still quite confused. I think the impetus for the group chat's creation was actually Hanania's post, where he wrote "The Case Against (Most) Books," or "most" was in parentheses or something. I mean, there's a lot of things going on in that post. He just goes off against a bunch of different categories of books that are sort of not closely related. Like, he goes off against great... I mean, this is not the exact take he gives, but it's something like: the books that are considered great are considered great literature for some sort of contingent reason, not because they're the best at getting you information that you want.

AARON: This is, like, another topic, but I'm, like, anti great books. In fact, "great" usually just means old and famous, so insofar as that's what we mean by it, I think this is a bad thing, or, like, I don't know, Aristotle is basically wrong about everything and stuff like that.

ARJUN: Right, yeah. Wait, we could return to this. I guess this could also be divided into its component categories. He spends more time, though, I think, attacking a certain kind of nonfiction book that he describes as the kind of book that somebody pitches to a publisher and basically expands a single essay's worth of content into, with a bunch of anecdotes and stuff. He's like, most of these books are just not very useful to read, I guess. I agree with that.

AARON: Yeah. Is there one that comes to mind as, like, an example? I mean, I think of Malcolm Gladwell as, like, the kind of... I haven't actually read any of his stuff in a while, but I did, I think, when I started reading nonfiction with any sort of intent. I read a bunch of his stuff or whatever and vaguely remember that this is basically what he does, for better or worse.
ARJUN: Um, yeah, I guess so. But he's almost, like, trying to do it on purpose. This is the experience that you're getting by reading a Malcolm Gladwell book. It's like Taleb, right? It's just him ranting. I'm thinking, I guess, of books that are about something. So if you have a book that's, you know, about negotiation or something, it'll be filled with a bunch of anecdotes that are of dubious usefulness. Or if you get a book that's just about some sort of topic, there'll be historical trivia that's irrelevant. Maybe I can think of an example.

AARON: Yeah. So the last thing I tried to read, maybe I still am but haven't in a couple of weeks or whatever, is the Derek Parfit biography. And part of this is motivated because... I don't even like biographies in general, for some reason, I don't know. But he's an important guy. Some of the anecdotes that I heard were shockingly close to home for me, or not close to home, but close to my brain or something. So I was like, okay, maybe I'll see if this guy's like the smarter version of Aaron Bergman. And it's not totally true.

ARJUN: Sure, I haven't read the book, but I saw tweet threads about it, as one does, and I saw things that are obviously false. Right? Like the claim that he read a certain number of pages while brushing his teeth. That's, like, anatomically impossible or whatever. Did you get to that part? Or, I assumed... no...

AARON: I also saw that tweet. And this is not something that I do, but I don't know if it's anatomically impossible. Yeah, it takes a little bit of effort to figure out how to do that, I guess. I don't think that's necessarily false or whatever, but this is probably not the most important thing.

ARJUN: Maybe it just takes him a long time to brush his teeth.

#3: Nathan Barnard on how financial regulation can inform AI regulation [00:17:16]

In this next segment, Nathan Barnard and I dive into the complexities of AI regulation, including potential challenges and outcomes of governing AI in relation to economic growth and existential security. And we compare it to banking regulation as well.

AARON: Yeah, I don't know. I just get gloomy, for I think justified reasons, when people talk about, oh yeah, here's the nine-step process that has to take place, and then maybe there's like a 20% chance that we'll be able to regulate AI effectively. I'm being facetious or exaggerating, something like that, but not by a gigantic amount.

NATHAN: I think this is pretty radically different to my mainline expectation.

AARON: What's your mainline expectation?

NATHAN: I suppose I expect AI to become an increasingly important part of the economy, and to come up to really a very large fraction of the economy, before really crazy stuff starts happening. It'd be very unusual if this extremely large sector of the economy, which impacted a very large number of people's lives, remained broadly unregulated.

AARON: It'll be regulated, but just maybe in a stupid way.

NATHAN: Sure, yes, maybe in a stupid way. I suppose, critically: do you expect the stupid way to be too conservative or too lenient on the specific question of AI extinction risk, or to just not interact with it?

AARON: I guess generally too lenient, but also mostly on a different axis, where... I don't actually know enough. I don't feel like I've read or learned about various governance proposals enough to have a good object-level take on this. But my broad prior is that, for anything, there are a lot of ways to regulate something poorly, and insofar as anything isn't regulated poorly, it's because of a lot of trial and error.

NATHAN: Maybe.

AARON: I mean, there's probably exceptions, right? I don't know. Pax Americana... maybe we just kept winning wars, starting with World War II. I guess that's maybe a counterexample or something like that.
NATHAN: Yeah, I think I still mostly disagree with this. I suppose I see a much broader spectrum between bad regulation and good regulation. I agree the space of optimal regulation is very small, but I don't think we have to hit that space for regulation to be helpful. Especially if you buy the AI existential safety risk case, then it's not this quite fine balancing act between too much consumer protection on the one hand and stifling competition and innovation on the other. It's trying to prevent a quite specific, very bad outcome, which is much worse than somewhat slowing economic growth. Particularly if we think we're going to get really quite explosive rates of economic growth quite soon, the cost of slowing down economic growth, even by quite a large percentage, is very small compared to the cost of an accidental catastrophe. I sort of think of slowing economic growth as the main way regulation goes wrong currently.

AARON: I think in an actual sense that is correct. There's the question of, okay, Congress in the States... it's better than nothing. I'm glad it's not anarchy; in terms of, like, I'm glad we have a legislature.

NATHAN: I'm also glad the United States has one.

AARON: But how reasons-responsive is Congress? I don't think reasons-responsive enough to make it so that the first big law that gets passed, insofar as there is one, is on the Pareto frontier trading off between economic growth and existential security. It's going to be way inside of that production frontier or whatever. It's going to suck on every axis... maybe not every axis, but at least some relevant axes.

NATHAN: Yeah, that doesn't seem obviously true to me. I think Dodd-Frank was quite a good law.

AARON: That came after 2008, right?

NATHAN: Yeah, correct. Yeah, there you go. No, I agree: I'm not especially confident about us doing regulation before there's a quite bad warning shot. And yes, if we're in a world where we have no warning shots and we're just blindsided by everyone getting stripped to their atoms within three seconds, this is not good. But in worlds where we do have one of those shots, I think Glass-Steagall is good law... "good law" is a technical term, so rather: I think Glass-Steagall was a good piece of legislation. I think Dodd-Frank was a good piece of legislation. I think the 2008 stimulus bill was a good piece of legislation. I think the Troubled Assets Relief Program was a good piece of legislation.

AARON: I recognize these terms, and I know some of them; others I do not know the contents of.

NATHAN: Yeah, so Glass-Steagall was the financial regulation passed in 1933 after the Great Depression. The Troubled Assets Relief Program was passed in, I think, 2008, maybe 2009, to help recapitalize banks. Dodd-Frank was the sort of landmark post-financial-crisis piece of legislation, passed in 2011. I think these are all good pieces of legislation. Now, I think financial regulation is probably unusually good amongst US legislation. This is like a quite weak take, I guess. It's unusually...

AARON: So... I don't actually know the pre-Depression financial history at all, but I feel like the more relevant comparison to the 21st-century era is, what was the regulatory regime in 1925 or something? I just don't know.
NATHAN: Yeah, I know a bit. I haven't read this stuff especially deeply, so I don't want to be overconfident here, but sort of the core pieces which were important to the Great Depression going very badly: there was no distinction between commercial banks and investment banks, so a bank could do much riskier things with customer deposits than it could from 1933, when Glass-Steagall passed, until its repeal. Combine that with no deposit insurance, and if you have banks able to do quite risky things with depositors' money and no deposit insurance, this is quite dangerous, as we now know. And Glass-Steagall's repeal...

AARON: I'm an expert in the sense that I have the Wikipedia page up. Well, yeah, there was a bunch of things. Basically, there's the First Bank of the United States, there's the Second Bank of the United States, there's the free banking era, there was the era of national banks, yada, yada, yada. It looks like 1907... there was some panic. I vaguely remember this from AP US History, like, seven years ago or...

NATHAN: Yes. I suppose, in short, I sort of agree that the record of non-post-crisis legislation is not very good, but I think the record of post-crisis legislation, at least in the financial sector, really is quite good. I'm sure lots of people disagree with this, but this is my take.

#4: Winston Oswald-Drummond on the tractability of reducing s-risk, ethics, and more [00:27:48]

Up next, Winston Oswald-Drummond and I talk about the effectiveness and impact of donating to various research organizations, such as suffering-focused s-risk organizations. We discuss tractability, expected value, and essentially where we should give our money.

AARON: Okay, nice. Yeah. Where to go from here? I feel like largely we're on the same page, I feel like.

WINSTON: Yeah. Is your disagreement mostly tractability, then? Maybe we should get into the disagreement.

AARON: Yeah. I don't even know if I've specified, but insofar as I have one, yes, it's tractability. This is the reason why I haven't donated very much to anywhere, for money reasons. But insofar as I have, I have not donated to CLR or CRS, because I don't see a theory of change that connects the research currently being done to actually reducing s-risks. And I feel like there must be something, because there's a lot of extremely smart people at both of these orgs or whatever, and clearly they've thought about this. And maybe the answer is that it's very general and the outcome is just so big in magnitude that anything kind...

WINSTON: ...of that is part of it, I think. Yeah, part of it is an expected value thing, and also it's just very neglected, so you want some people working on this, I think, at least, even if it's unlikely to work. Yeah, even that might be underselling it, though. I mean, I do think there are people at CRS and CLR talking to people at AI labs and some people in politics and these types of things, and hopefully the research is a way to know what to try to get done at these places. You want to have some concrete recommendations, and obviously people have to also be willing to listen to you, but I think there is some work being done on that. And research is partially just a community-building thing as well. It's a credible signal that you are smart and have thought about this, and so it gives people reason to listen to you, and maybe that mostly pays off later on in the future.
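[Note: a minimal sketch of the expected value reasoning Winston is appealing to here. All numbers are invented purely for illustration; they are not estimates from the episode.]

    # Expected value = probability of success * value if successful.
    # With made-up numbers: a very unlikely intervention can still dominate
    # a reliable one if the stakes are large enough.
    p_srisk_research_works = 1e-6      # hypothetical: one in a million
    value_if_it_works = 1e12           # hypothetical: units of suffering averted

    p_reliable_charity_works = 0.9     # hypothetical
    value_reliable_charity = 1e4       # hypothetical

    ev_srisk = p_srisk_research_works * value_if_it_works              # 1,000,000
    ev_reliable = p_reliable_charity_works * value_reliable_charity    # 9,000

    print(ev_srisk > ev_reliable)  # True: tiny probability, huge magnitude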
AARON: Yeah, that all sounds reasonable. And I guess one thing is... there's definitely things... I mean, first of all, I haven't really stayed up to date on what's going on, so I've done zero research for this podcast episode, for example. Very responsible. And insofar as I know things about these orgs, it's just based on what's on their website at some given time. So insofar as there's outreach going on, not behind the scenes, but just not in a super public way, or I guess you could call that behind the scenes, I just don't have reason to know about that. And I guess, yeah, I'm pretty comfortable, and I don't even know if this is considered biting a bullet for the crowd that will be listening to this, if that's anybody, with just saying that a very small chance of a very large magnitude just checks out. You can just do expected value reasoning, and that's basically a correct way of thinking about ethics. But, and I don't know how much you know specifically, or how much you're allowed or want to reveal, but if there was a particular alignment agenda that you, in a broad sense, like the suffering-focused research community, thought was particularly promising relative to other tractable, generic alignment recommendations, and you were doing research on that and trying to push it into the alignment mainstream, which is not very mainstream, with the hope that it then jumps into the AI mainstream, even if that's kind of a long chain of events, I think I would be a lot more enthusiastic about that type of agenda, because it feels like there's a particular story you're telling where it cashes out in the end. You know what I mean?

WINSTON: Yeah, I'm not the expert on this stuff, but I do think there are some things about influencing alignment in powerful AI, for sure. Maybe not a full-on "this is our alignment proposal and it also handles s-risks," but there are some things we could ask AI labs that are already building AGI. We could say, can you also implement these sorts of safeguards, so that if you fail alignment, you fail sort of gracefully and don't cause lots of suffering?

AARON: Right.

WINSTON: Yeah. Or maybe there are other things too, which also seem potentially more tractable. Even if you solve alignment in some sense, like aligning with whatever the human operator tells the AI to do, you can still get the issue that malevolent actors take control of the AI, and then what they want also causes lots of suffering, which that type of alignment wouldn't prevent. Yeah, and I guess I tend to be somewhat skeptical of coherent extrapolated volition and things like this, where the idea is sort of, it'll just figure out our values and do the right thing. So, yeah, there are some ways to push on this without having a full alignment plan, but I'm not sure if that counts as what you were saying.

AARON: No, I guess it does. Yeah, it sounds like it does. And it could be that I'm just kind of mistaken about the degree to which that type of research and outreach is going on. That sounds like it's at least partially true.

#5: Nathan Barnard (again!) on why general intelligence is basically fake [00:34:10]

Up next, Nathan Barnard is back for his second episode. And we talked about the nature of general intelligence, its relationship with language, and the implications of specialized brain functions on the understanding of human cognitive abilities.
NATHAN: Yes. This symbolic, symbolic reasoning stuff. Yeah. So I think if I was making the case for general intelligence being real, I wouldn't use symbolic reasoning, but I would use language stuff. I'd have this hierarchical structure thing, which...

AARON: ...I would probably... so I think of at least most uses of language, and central examples, as a type of symbolic reasoning, because words mean things. They're, like, yeah, pointers to objects or something like that.

NATHAN: Yeah, I'm pretty confident this isn't a good enough description of general intelligence. So, for instance, and I'm using a checklist here so I don't fuck up the vernacular: the ability to use words as pointers, as these arbitrary signs, happens mostly in an area of the brain called Wernicke's area. Very famously, you can have Wernicke's aphasics who lose the ability to do language comprehension, lose the ability to consistently use words as pointers, as signs to point to things, but still have perfectly good spatial reasoning abilities. And conversely, people with Broca's aphasia, where Broca's area of the brain is damaged, will not be able to form fluent sentences and have some problems with syntax, but will still have very good spatial reasoning. They could still, for instance, be good engineers and do many problems which involve engineering.

AARON: Yeah, I totally buy that. I don't think language is the central thing. I think it's an outgrowth of... I don't know, there's a simplified model I could make, which is that it's an outgrowth of whatever general intelligence really is, or whatever the best spatial or graphical model is. I don't think language is cognition.

NATHAN: Yes, this is a really big debate in psycholinguistics: whether language is an outgrowth of other abilities the brain has, or whether there are very specialized language modules. Yeah, this is just a very live debate in psycholinguistics at the moment. I actually do lean towards... the reason I've been talking about this... I'm actually just going to explain this hierarchical structure thing, yeah, since I keep talking about it. So one theory for how you can comprehend new sentences, the dominant theory in linguistics, is that you break them up into chunks, and you form these chunks together in a tree structure. So if you hear a totally novel sentence like "the pit bull mastiff flopped around deliciously" or something, you can comprehend what the sentence means despite the fact that you've never heard it. The theory behind this is that the sentence can be broken up into a tree structure, where the different bits of the sentence... so "the mastiff" would be one bit, and then you have another bit, which is, I can't remember what I said, "rolled around," so that'd be another bit, and then you'd have connectors joining them higher up...

AARON: Okay.

NATHAN: So, "the mastiff rolling around." One theory of one of the distinctive abilities that humans have is this quite general ability to break things up into these tree structures. This is controversial within psycholinguistics, but it's an area where I broadly buy it, because we do see harms to other areas of intelligence: you get much worse at, like, Raven's Progressive Matrices, for instance, when you have an injury to Broca's area, but not worse at tests of spatial reasoning, for instance.
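[Note: a minimal Python sketch of the hierarchical chunking Nathan describes. The bracketing of the example sentence is an editorial guess, not a claim about the linguistically correct parse.]

    # Toy constituency-style tree for "the pit bull mastiff flopped around deliciously".
    # A node is either a word (a leaf) or a tuple of sub-constituents (a chunk).
    sentence_tree = (
        ("the", ("pit", "bull", "mastiff")),     # noun phrase chunk
        (("flopped", "around"), "deliciously"),  # verb phrase chunk
    )

    def leaves(node):
        """Recursively flatten a tree back into its words, left to right."""
        if isinstance(node, str):
            return [node]
        words = []
        for child in node:
            words.extend(leaves(child))
        return words

    print(" ".join(leaves(sentence_tree)))
    # -> the pit bull mastiff flopped around deliciously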
AARON: So is there, like, a main alternative to how humans...

NATHAN: ...understand language? As far as the specifics of how we parse completely novel sentences, as far as I know, this is just the academic consensus.

AARON: Okay. I mean, it sounds totally, like, right? I don't know.

NATHAN: Yeah. But yeah, I suppose going back to the question of how far language is an outgrowth of general intelligence versus there being much more specialized language modules: yeah, I lean towards the latter, despite... yeah, I still don't want to give too strong of a personal opinion here, because I'm not a linguist...

AARON: This is a podcast. You're allowed to give takes. No one's going to say, "this is like the academic..." We want takes.

NATHAN: We want takes. Well, now that it's gone to my head: I think language is not an outgrowth of other abilities. I think the main justification for this is the loss of other abilities we see when you have damage to Broca's area and Wernicke's area.

AARON: Okay, cool. So I think we basically agree on that. And also, I guess one thing to highlight is that I think "outgrowth" can mean a couple of different things. I definitely think it's plausible... I haven't read about this, I think I did at some point, but not in a while... but outgrowth could mean temporally, or whatever. I think I'm kind of inclined to think it's not that straightforward. You could have coevolution, where language per se encourages both its own development and the development of some general underlying trait or something.

NATHAN: Yeah. Which seems likely.

AARON: Okay, cool. So why don't humans have general intelligence?

NATHAN: Right. Yeah. As I was sort of talking about previously...

AARON: Okay.

NATHAN: ...I'd like to go back to a high-level argument, which is that there appear to be much higher levels of functional specialization in brains than you'd expect. You can lose much more specific abilities than you'd expect to be able to lose. A famous example is face blindness, actually: you can lose specifically the ability to recognize faces. You'll probably also lose the ability to specifically recognize things which you're an expert in.

AARON: Who... who loses this ability?

NATHAN: If you've damaged your fusiform area, you'll lose the ability to recognize faces, but nothing else.

AARON: Okay.

NATHAN: And there's this general pattern that you can lose much more specific abilities than you'd expect. So, for instance, if you have damage to your ventromedial prefrontal cortex, you can state the reasoning for why you shouldn't compulsively gamble, but still compulsively gamble.

AARON: For instance... okay, I understand this, not gambling per se, but executive function stuff, at a visceral level. Okay, keep going.
NATHAN: Yeah. Some other nice examples of this... I think memory is quite intuitive. There's a very famous patient called Patient HM, who had his hippocampus removed and, as a result, lost all declarative memory, so all memory of specific facts and things which happened in his life. He just couldn't remember any of these things, but he was still perfectly functional otherwise. At a really high level, I think this functional specialization is probably the strongest piece of evidence against the general intelligence hypothesis. Fundamentally, the general intelligence hypothesis implies that if you harm a piece of your brain, if you have some brain injury, you should generically get worse at all tasks that use general intelligence. But instead you lose specific abilities, including ones we'd think of as part of general intelligence: the ability to write, the ability to speak, maybe not speak, the ability to do math...

AARON: You do have this; it's just not as easy to analyze in a CogSci paper as IQ or whatever. So there is something where, if somebody has a particular cubic centimeter of their brain taken out, that's really excellent evidence about what that cubic centimeter does or whatever, but non-spatial modification is just harder to study and analyze. I guess we can give people drugs, right? So, setting aside the psychometric stuff, suppose that general intelligence is mostly a thing or whatever, and you actually can ratchet it up and down. This is probably just true, right? You can probably give somebody different doses of various drugs... I don't know, like laughing gas... probably weed... I don't know.

NATHAN: So I think this just probably isn't true. Your working memory correlates quite strongly with g, and having better working memory can generically make you much better at lots of tasks...

AARON: Yeah.

NATHAN: ...sorry, but this is just a specific ability. It's just specifically your working memory which is improved if you take memory drugs. I think there are a few things, like memory, attention, maybe something like decision-making, which are all extremely useful abilities and improve how well other cognitive abilities work. But they're all separate things. If you improved your attention or your working memory, but you had some brain injury which meant you'd lost the ability to parse syntax, you would not get better at parsing syntax. And you can also improve things separately: you can improve attention and improve working memory separately. It's not just this one dial which you can turn up.

AARON: There's good reason to expect that we can't turn it up, because evolution is already sort of maximizing, given the relevant constraints. Right? So you would need to be looking at injuries. Maybe there are studies where they try to increase people's... try to add a cubic centimeter to someone's brain... but normally it's the opposite: you start from some high baseline and then see what faculties you lose. Just to clarify, I guess.

NATHAN: Yeah, sorry, I think I've lost the thread... you still think there probably is some general intelligence ability to turn up?
AARON: Honestly, I haven't thought about this nearly as much as you. I kind of don't know what I think at some level. If I could just write down all of the different components, and there are like 74 of them, and that's what I think general intelligence consists of, does that make it... I guess in some sense, yeah, that does make it less of an ontologically legit thing or something. I think the thing I want to get at, the motivating thing here, is that with humans, we know humans range in IQ, and, setting aside a very tiny subset of people with severe brain injuries or developmental disorders or whatever, almost everybody has some sort of symbolic reasoning that they can do to some degree. Whereas the smartest... maybe I'm wrong about this, but as far as I know, the smartest squirrel is not going to be able to have something semantically represent something else. And that's what I intuitively want to appeal to, you know what I mean?

NATHAN: Yeah, I know what you're gesturing at. So I think there are two interesting things here. One is, could a squirrel do this? I'm guessing a squirrel couldn't, but a dog can, or a dog probably can. A chimpanzee definitely can.

AARON: Do what?

NATHAN: Chimpanzees can definitely learn to associate things in the world with arbitrary signs.

AARON: Yes, but, and maybe I'm just adding on epicycles here, but correct me if I'm wrong... maybe I'm just wrong about this, but I would assume that chimpanzees cannot use that sign in a domain that is qualitatively different from the ones they've learned it in. Right? So a dog will know that a certain sign means "sit" or whatever, but maybe that's not a good...

NATHAN: I don't know... I think this is basically not true. And we sort of know this from teaching...

AARON: Teaching?

NATHAN: There's, famously, Koko the gorilla, and also a bonobo whose name I can't remember, who were taught sign language. And the thing they were consistently bad at was putting together sentences. They could learn quite large vocabularies... by large, I mean in the hundreds of words, in the low hundreds of words... which they could consistently use correctly.

AARON: What do you mean? In what sense? What is a bonobo using...?

NATHAN: A very famous and quite controversial example is that Koko the gorilla saw a swan outside and signed "water bird." That's a controversial example. But the controversial part is the syntax, putting "water" and "bird" together; it's not controversial that she could see a swan and call it a bird.

AARON: Yeah, I mean, this is kind of just making me think, okay, maybe the threshold for g is just at the chimp level or something, or whatever the most... Sure, if a species really can generate, from a prefix and a suffix or whatever, a concept that they hadn't learned before...
NATHAN: Yeah, and this is a controversial example of that; the combination is the controversial part. Yeah, I suppose this maybe brings us back to why I think this matters: will there be some threshold which AIs cross, such that their reasoning afterwards is qualitatively different from their reasoning previously? That would mean two things: one, a much faster increase in AI capabilities, and two, alignment techniques which worked on systems which didn't have g no longer working on systems which do have g. That's why I think this actually matters. But if we're accepting that g is at the level of chimpanzees... chimpanzees just don't look that qualitatively different from other animals. Lots of other animals live in similarly complex social groups. Lots of other animals use tools.

AARON: Yeah, sure. For one thing, I don't think there's going to be a discontinuity, in the same way that there wasn't a discontinuity at any point in humans' evolution from the first prokaryotic cells or whatever... eukaryotic, one of those two, or both, I guess. My train of thought: yes, I know it's controversial, but let's just suppose that the sign language thing was legit, with the water bird, and that it's not a random one-off fluke or something. Then maybe this is just some sort of weird vestigial evolutionary accident that actually isn't very beneficial for chimpanzees, and they just stumbled their way into it, and then it enabled evolution to bootstrap chimp genomes into human genomes. Because at some point the smartest, or whatever... actually, I don't know. Honestly, I don't have a great grasp of evolutionary biology or evolution at all. But yeah, it could just be not that helpful for chimps, and helpful for an extremely smart chimp that looks kind of different, or something like that.

NATHAN: Yeah. So, the other thing going on here, and I don't want to keep banging on about this, but you can lose linguistic ability. This happens in stroke victims, for instance; it's not that rare. You lose linguistic ability but still have all the other abilities which we sort of think of as general intelligence, which I think cuts against the general intelligence hypothesis.

AARON: I agree that's evidence against it. I just don't think it's very strong evidence, partially because I think there is a real school of thought that says that language is fundamental, like, language drives thought, language is primary to thought or something. And I don't buy that. If you did buy that, I think this would be more damning evidence.

#6: Daniel Filan on why I'm wrong about ethics (+ Oppenheimer and what names mean in like a hardcore phil of language sense) [00:56:54]

[Note: I forgot to record an intro segment here. Sorry!]

AARON: Yeah. Yes. I'm also anti scam. Right, thank you. Okay, so I think the thing that we were talking about last time we talked, which is the thing I think we actually both know stuff about, instead of just repeating New York Times articles, is my nuanced ethics takes, and why you... I thought we could talk about that, and then we can just also branch off from there.

DANIEL: Yeah, we can talk about that.

AARON: Maybe see where that goes. Luckily, I have a split screen up, so I can pull up things. Maybe this is kind of egotistical or something, to center my particular view, but you've definitely given me some of the better pushback, or whatever. I haven't gotten that much feedback of any kind, I guess, but it's still interesting to hear your take. So basically, my ethical position, or the thing that I think is true, and which I think is not the default view, I think most people think this is wrong, is this: total utilitarianism does not imply that, for some amount of suffering that could be created, there exists some other extremely large, arbitrarily large, amount of happiness that could also be created which would morally justify the former. Basically.

DANIEL: So you think that even under total utilitarianism, there can be big amounts of suffering such that there's no way to morally tip the calculus. However much pleasure you can create, it's just not going to outweigh the fact that you inflicted that much suffering on some people.
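[Note: one way to make the disagreement precise. A minimal sketch; the particular value functions are an editorial illustration, not a formalization either speaker endorses.]

    # Standard total utilitarianism: value is a simple sum, so any finite
    # suffering can be offset by enough happiness.
    def total_util(happiness, suffering):
        return happiness - suffering

    # Aaron's thesis, on one reading: above some threshold, inflicted
    # suffering is lexically bad, i.e. no finite happiness outweighs it.
    THRESHOLD = 1e9  # hypothetical cutoff

    def lexical_threshold_util(happiness, inflicted_suffering):
        if inflicted_suffering > THRESHOLD:
            return float("-inf")  # the calculus cannot be tipped
        return happiness - inflicted_suffering

    print(total_util(1e12, 1e10) > 0)              # True: the offset succeeds
    print(lexical_threshold_util(1e12, 1e10) > 0)  # False: no offset possible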
AARON: Yeah, and I'd highlight the word "inflicted." If something's already there and you can't do anything about it, that's kind of neither here nor there as it pertains to your actions or something. So it's really about you increasing, you creating, suffering that wouldn't have otherwise been created. Yeah. It's also been a couple of months since I've thought about this in extreme detail, although I've thought about it quite a bit.

DANIEL: Maybe I should say my contrary view, I guess. When you say that, I don't know, does total utilitarianism imply something or not, I'm like, well, presumably it depends on what we mean by total utilitarianism. Right. So setting that aside, I think that thesis is probably false. I think that, yeah, you can offset great amounts of suffering with great amounts of pleasure, even for arbitrary amounts of suffering.

AARON: Okay. I do think that position is the much more common and, I'd say, default view. Do you agree with that? It's sort of the implicit position of self-described total utilitarians who haven't thought a ton about this particular question.

DANIEL: Yeah, I think it's probably the implicit default. I think it's the implicit default in ethical theory or something. I think that in practice, when you're being a utilitarian... I don't know, normally, if you're trying to be a utilitarian and you see yourself inflicting a large amount of suffering, I don't know, I do think there's some instinct to be like, is there any way we can get around this?

AARON: Yeah, for sure. And to be clear, I don't think this would look like a thought experiment. I think what it looks like in practice, and I will throw in caveats as I see necessary, is spreading either wild animals or humans or even sentient digital life through the universe, in a non-s-risky way; say, making multiple copies of humanity or something like that. That would be an example that's probably not... an example of creating suffering would be, for example, just creating another duplicate of Earth.

DANIEL: Okay. That would be, like, so much suffering that not even the pleasures of Earth outweigh it?

AARON: Not necessarily, which is kind of a cop-out. But my inclination is that if you include wild animals, the answer is yes, for creating another Earth especially. Yeah. But I'm much more committed to "some such amount exists" than to this particular time and place in human history being such an amount, or whatever.

DANIEL: Okay, can I get a feel for it with some other concrete cases?

AARON: Yeah.
DANIEL: So one example that's on my mind is the atomic bombing of Hiroshima and Nagasaki, right? So the standard case for this is, like, yeah, what, a hundred-odd thousand people died? Quite terrible, quite awful. And a lot of them died... I guess some people were sort of instantly vaporized, but a lot of people died in extremely painful ways. But the counter-case is, well, the alternative to that would have been an incredibly grueling land invasion of Japan, where many more people would have died. Or, you know, regardless of what the actual alternatives were: if you think about the atomic bombings, do you think that's the kind of infliction of suffering where there's just not an offsetting amount of pleasure that could make it okay?

AARON: My intuition is no, that it is offsettable, but I would also emphasize that, given the actual historical contingencies, the implicit case for the bombing includes reducing suffering elsewhere, rather than merely creating happiness. There can definitely be two bad choices that you have to pick between, or something, and my claim doesn't really pertain to that, at least not directly.

#7: Holly Elmore on AI pause, wild animal welfare, and some cool biology things I couldn't fully follow but maybe you can [01:04:00]

Up next, Holly Elmore and I discuss the complexities and implications of AI development and open sourcing. We talk about protests and ethical considerations around her campaign to pause the development of frontier AI systems until we can tell that they're safe.

AARON: So what's the plan? Do you have a plan? You don't have to have a plan. I don't have plans very much.

HOLLY: Well, right now I'm hopeful about the UK AI Summit. PauseAI and I have planned a multi-city protest on the 21st of October to encourage the UK AI Safety Summit to focus on safety first, and to have arranging a pause, or at least negotiation toward one, as a topic. There's a little bit of upsetting advertising for that thing that's like, "we need to keep up capabilities too," and I just think that's really a secondary objective; I want it to be focused on safety. So I'm hopeful about the level of global coordination that we're already seeing. It's going so much faster than we thought. Already the UN Secretary-General has been talking about this, and there have been meetings about it. It's happened so much faster than anybody thought at the beginning of this year; nobody was thinking we'd be talking about this as a mainstream topic. And actually, governments have been very receptive. Anyway, so right now, other than just influencing opinion, the targets I'm focused on are things like encouraging these international efforts. I have a protest on Friday, my first protest that I'm leading, and I'm kind of nervous. It's against Meta. It's at the Meta building in San Francisco, about their sharing of model weights. They call it open source. It's not exactly open source, but I'm probably not going to repeat that message, because it's pretty complicated to explain. I really love the pause message, because it's just so hard to misinterpret, and it conveys pretty clearly what we want very quickly. And you don't have a lot of bandwidth in advocacy. You write a lot of materials for a protest, but mostly what people see is the title.

AARON: That's interesting, because I sort of have the opposite sense. I agree that in terms of how many informational bits you're conveying in a particular phrase, "pause AI" is simpler, but in some sense it's not nearly as obvious, at least to me, why that is good, maybe because I'm more of a tech-brain person or whatever, as opposed to "don't give an extremely powerful thing to the worst people in the world." That's like a longer... everyone...
HOLLY: Maybe I'm just weird. The feedback I've gotten from open source ML people... well, actually, the number one message is "you're stupid, you're not an ML engineer." Which, okay. Number two is "it's too late, there's nothing you can do to stop us," which sounds so villainous, I don't know if that's what they mean. There's all of these others, and "Meta is not even the most powerful sharer of open source models." I was like, okay, fine. And I don't know, I don't think that protesting too much against those responses is really the best in these situations; I just mostly let that lie. I could give my theory of change on this and why I'm focusing on Meta. Meta is a large company I'm hoping to have influence on. There is a Meta building in San Francisco near where I am. Meta is the biggest company that is doing this, and I think there should be a norm against model weight sharing. I was hoping it would be something that employees of other labs would be comfortable attending, since weight sharing is a policy that is not shared across the labs; obviously the biggest labs don't do it. OpenAI is called OpenAI, but they very quickly decided not to do that. Yeah, I kind of wanted to start in a way that made it more clear than "pause AI" does that anybody's welcome. I thought a one-off issue like this, that a lot of people could agree on and form a coalition around, would be good. A lot of the open source ML people think that what I'm saying is secretly an argument for tyranny: that I just want centralization of power, that I just think there are elites that are better qualified to run everything. It was even suggested, because I didn't mention China, that I was racist, because I didn't think that foreign people could make better AIs than Meta.

AARON: I'm grimacing here. The intellectual disagreeableness, if that's an appropriate term, or something like that... Good on you for standing up to some pretty bad arguments.

HOLLY: Yeah, it's not like that's worth it. I'm lucky that I truly am curious about what people think about stuff like that. I just find it really interesting. I spent way too much time understanding the alt-right, for instance. I'm kind of, like, sure I'm on a list somewhere because of the forums I was on, just because I was interested. And it is something that serves me well with my adversaries. I've enjoyed some conversations with people, partly because my position on all this is that, look, I need to be convinced, and the public needs to be convinced, that this is safe before we go ahead. So I kind of like not having to be the smart person making the arguments. I kind of like being like, "can you explain like I'm five? I still don't get it. How does this work?"
AARON: Yeah, no, I was thinking actually not long ago about open source. The phrase has such a positive connotation, and in a lot of contexts it really is good. I don't know, I'm glad that random tech things from 2004 or whatever, like the Reddit source code, are open source... seems cool. I don't actually know if that was how that went, right? But yeah, I feel like maybe even just breaking down where the positive connotation comes from, and why it's in people's self-interest... this is really what I was thinking about: why is it in people's self-interest to open source things that they made? And that might break apart the allure, or the sort of ethical halo, that it has around it. And I was thinking it probably has something to do with... oh, this is like how, if you're a tech person who makes some cool product, you could try to put a gate around it by keeping it closed source and maybe trying to get intellectual property or something. But probably you're extremely talented already, or pretty wealthy, and can definitely be hired in the future. And if you're not wealthy yet... I don't mean to put things in just materialist terms... but basically it could easily be just, like... yeah, I think I'll probably take that bit out, because I didn't mean to put it in strictly monetary terms. But basically, it just seems pretty plausibly in an arbitrary tech person's self-interest, broadly construed, to in fact open source their thing. Which is totally fine and normal.

HOLLY: I think that's, like, 99%... it's a way of showing magnanimity, showing...

AARON: But I don't mean to make this sound so... I think 99.9% of human behavior is like this. I'm not saying it's some secret, terrible, self-interested thing; I'm just making it more mechanistic. Okay, it's a status thing. It's an advertising thing. It's like, okay, you're not really in need of direct economic rewards, or it sort of makes sense to play the long game in some sense. And this is totally normal and fine, but at the end of the day, there are reasons why it makes sense, why it's in people's self-interest, to open source.

HOLLY: Literally, the culture of open source has been able to bully people into, like, "oh, it's immoral to keep it for yourself, you have to release it." So it's set the norms in a lot of ways. "Bully" sounds bad, but I mean, there is just a lot of pressure. It looks bad if something is closed source.

AARON: Yeah. It's kind of weird that Meta... I don't know, does Meta really think it's in their... I don't know. The most economic take on this would be, oh, they somehow think it's in their shareholders' interest to open source.
There are open source ML models that have everything open sourced, and I don't think that that's good; I think it's worse. So I don't want to criticize them for that. But they're saying it's open source because there's all this goodwill associated with open source, when actually what they're doing is releasing the product for free, or even, you could say, releasing things that should be trade secrets: they're telling people how to make it themselves. So they're intentionally using a label that has a lot of positive connotations, but probably, according to the Open Source Initiative, which maintains the Open Source Definition, it should be called something else, or there should just be a new category for LLMs. But I don't want things to be more open, and making that point could easily sound like a rebuke saying it should be more open. I also don't want to call it open source, because open source software probably does deserve a lot of its positive connotation; Meta is just not releasing the software part, because that would cut into their business. And I think it would be much worse if they did release it; I think they shouldn't. But I'm also not clear on all this, because the open source ML critics say that everyone does have access to the same data set as Llama Two. I don't know. Llama Two had 7 billion tokens, and that's more than GPT-4, and I don't understand all of the details here. It's possible that the tokenization process was different or something, and that's why there were more. But Meta didn't say what was in the Llama Two data set, and usually there's some description given of what's in the data set, which led some people to speculate that maybe they're using private data. They do have access to a lot of private data that shouldn't be in there. It's not just the Common Crawl backup of the Internet; everybody's basing their training on that, and then maybe some works of literature they're not supposed to use. There's a data set there that is in question. But Meta's is bigger than... sorry, I don't have a list in front of me, and I'm not going to get stuff wrong, but it's bigger than kind of similar models', and I thought that they have access to extra stuff that's not public. And it seems like people are asking if maybe that's part of the training set. But yeah, the open source ML people that I've been talking to believe that anybody who's decent can just access all of the training sets that they've all used.
AARON: As an aside, I tried to download it. I'm guessing... I don't know, it depends how many people listen to this, but in one sense, for a competent ML engineer, I'm sure open source really does mean that. But then there's people like me. I knew a little bit of R, I think; I feel like I caught the very last boat where I could know just barely enough programming to try to learn more, I guess. Coming out of college a couple of months ago, I tried to do the thing where you download Llama Two, but it didn't work; now I just have a bunch of empty folders and some error message or whatever. Then I tried to train my own model on my MacBook. It just printed the same token over and over; that's the only thing the language model would do, because it was the most common token in the training set.
So anyway, I'm just like, sorry, this is not important whatsoever.
HOLLY: Yeah, I feel torn about this, because I used to be a genomicist. I used to do computational biology, and it was not machine learning, but I used a highly parallel GPU cluster, so I know some stuff about it, and part of me wants to mess around with it. But part of me feels like I shouldn't get seduced by this. I am kind of worried that this has happened in the AI safety community. From the beginning, it was people who were interested in the singularity and then realized there was this problem, so it's always been people really interested in tech and wanting to be close to it. And I think our direction has been really influenced by wanting to be where the action is with AI development, and I don't know that that was right.
AARON: On a personal, or I guess individual, level, I'm not super worried about people like you and me losing the plot by learning more about ML on our own.
HOLLY: You know what I mean? But it does just feel sort of like... I guess this is maybe more of a confession than a point, but it does feel a little bit like it's hard for me to enjoy the cool stuff in good conscience.
AARON: Okay. Yeah.
HOLLY: I just see people be so attached to this as their identity. They really don't want to go in a direction of not pursuing tech, because this is kind of their whole thing. And what would they do if we weren't working toward AI? This is a big fear that people express to me about a pause; they don't say it in so many words usually, but they say things like, well, I don't want AI to never get built. Which, by the way, just to clear up: my assumption is that, unless society ends for some other reason, a pause would eventually be lifted. It couldn't be forever. But some people are worried that if you stop the momentum now, people are just so Luddite in their insides that we would just never pick it up again, or something like that. And, yeah, there's some identity stuff that's been expressed, again not in so many words, about who will we be if we're just sort of activists instead of working on the technology.
AARON: Maybe one thing that we might actually disagree on, and it's kind of important: I think we both agree that an AI pause is better than the status quo, at least broadly, whatever; I know that can mean different things. But maybe I'm not super convinced, actually, that... what am I trying to say? Maybe at least right now, if I could just imagine the world where OpenAI and Anthropic had a couple more years to do stuff and nobody else did, that would be better. I kind of think that they are reasonably responsible actors. And so I don't know...
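
What Aaron describes is the classic degenerate baseline: a language model with essentially no training signal minimizes its loss by always emitting the most frequent token in its training corpus. A toy sketch in Python (purely illustrative; the corpus and names are made up, and this is not Aaron's actual setup):

    from collections import Counter

    # With almost no training, a language model's generations often
    # collapse to the unigram mode: the single most common token.
    corpus = "the cat sat on the mat and the dog sat on the log".split()
    mode_token, _ = Counter(corpus).most_common(1)[0]  # ("the", 4) here

    def degenerate_lm(n_tokens: int) -> str:
        # Generate text the way a barely-trained model tends to.
        return " ".join([mode_token] * n_tokens)

    print(degenerate_lm(8))  # -> "the the the the the the the the"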

The Nonlinear Library: LessWrong
LW - The God of Humanity, and the God of the Robot Utilitarians by Raemon

The Nonlinear Library: LessWrong

Play Episode Listen Later Aug 24, 2023 3:15


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The God of Humanity, and the God of the Robot Utilitarians, published by Raemon on August 24, 2023 on LessWrong. My personal religion involves two gods - the god of humanity (who I sometimes call "Humo") and the god of the robot utilitarians (who I sometimes call "Robutil"). When I'm facing a moral crisis, I query my shoulder-Humo and my shoulder-Robutil for their thoughts. Sometimes they say the same thing, and there's no real crisis. For example, some naive young EAs try to be utility monks, donate all their money, never take breaks, only do productive things... but Robutil and Humo both agree that quality intellectual work requires slack and psychological health (both to handle crises and to notice subtle things, which you might need, even in emergencies). If you're an aspiring effective altruist, you should definitely at least be doing all the things that Humo and Robutil agree on (i.e. get to the middle point of Tyler Alterman's story here). But Humo and Robutil in fact disagree on some things, and disagree on emphasis. They disagree on how much effort you should spend to avoid accidentally recruiting people you don't have much use for. They disagree on how many high schoolers it's acceptable to accidentally fuck up psychologically, while you experiment with a new program to get them into. They disagree on how hard to push yourself to grow better/stronger/wiser/faster, and how much you should sacrifice to do so. Humo and Robutil each struggle to understand things differently. Robutil eventually acknowledges you need Slack, but it didn't occur to him initially. His understanding was born in the burnout and tunnel-vision of thousands of young idealists, with Humo eventually (patiently, kindly) saying "I told you so." (Robutil responds "but you didn't provide any arguments about how that maximized utility!". Humo responds "but I said it was obviously unhealthy!" Robutil says "wtf does 'unhealthy' even mean? taboo unhealthy!") It took Robutil longer still to consider that perhaps you not only need to prioritize your own wellbeing and your friendships, but that it can be valuable to prioritize them for their own sake, not just as part of a utilitarian calculus, because trying to justify them in utilitarian terms may be a subtly wrong step in the dance that leaves them hollow. Humo struggles to acknowledge that if you spend all your time making sure to uphold deontological commitments to avoid harming the people in your care, then this effort is in fact measured in real human beings who suffer and die because you took longer to scale up your program. In my headcanon, Humo and Robutil are gods who are old and wise, and they got over their naive struggles long ago. They respect each other as brothers. They understand that each of their perspectives is relevant to the overall project of human flourishing. They don't disagree as much as you'd naively expect, but they speak different languages and emphasize things differently. Humo might acknowledge that I can't take care of everyone, or even respond compassionately to all the people who show up in my life who I don't have time to help. But he says so with a warm, mournful compassion, whereas Robutil says it with brief, efficient ruthlessness. 
I find it useful to query them independently, and to imagine the wise version of each of them as best I can - even if my imagining is but a crude shadow of their idealized platonic selves. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org


Brain in a Vat
African Ethics: A Superior Moral Framework? | Thaddeus Metz

Brain in a Vat

Play Episode Listen Later Nov 6, 2022 59:14


Utilitarians and Kantians struggle to account for the care we should provide loved ones over strangers. Can Ubuntu, or African Ethics, provide a framework that resolves this problem? Presenters: Mark Oppenheimer and Jason Werbeloff. Editor and Producer: Jimmy Mullen

The Nonlinear Library
EA - Future Matters #4: AI timelines, AGI risk, and existential risk from climate change by Pablo

The Nonlinear Library

Play Episode Listen Later Aug 8, 2022 28:23


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Future Matters #4: AI timelines, AGI risk, and existential risk from climate change, published by Pablo on August 8, 2022 on The Effective Altruism Forum. But if it is held that each generation can by its own deliberate acts determine for good or evil the destinies of the race, then our duties towards others reach out through time as well as through space, and our contemporaries are only a negligible fraction of the "neighbours" to whom we owe obligations. The ethical end may still be formulated, with the Utilitarians, as the greatest happiness of the greatest number [...] This extension of the moral code, if it is not yet conspicuous in treatises on Ethics, has in late years been obtaining recognition in practice. (John Bagnell Bury) Future Matters is a newsletter about longtermism. Each month we collect and summarize longtermism-relevant research, share news from the longtermism community, and feature a conversation with a prominent researcher. You can also subscribe on Substack, listen on your favorite podcast platform and follow on Twitter. Research: Jacob Steinhardt's AI forecasting: one year in reports and discusses the results of a forecasting contest on AI progress that the author launched a year ago. Steinhardt's main finding is that progress on all three capability benchmarks occurred much faster than the forecasters predicted. Moreover, although the forecasters performed poorly, they would—in Steinhardt's estimate—probably have outperformed the median AI researcher. That is, the forecasters in the tournament appear to have had more aggressive forecasts than the experts did, yet their forecasts turned out to be insufficiently, rather than excessively, aggressive. The contest is still ongoing; you can participate here. Tom Davidson's Social returns to productivity growth estimates the long-run welfare benefits of increasing productivity via R&D funding to determine whether it might be competitive with other global health and wellbeing interventions, such as cash transfers or malaria nets. Davidson's toy model suggests that average returns to R&D are roughly 20 times lower than Open Philanthropy's minimum bar for funding in this space. He emphasizes that only very tentative conclusions should be drawn from this work, given substantial limitations to his modelling. Miles Brundage discusses Why AGI timeline research/discourse might be overrated. He suggests that more work on the issue has diminishing returns, and is unlikely to narrow our uncertainty or persuade many more relevant actors that AGI could arrive soon. Moreover, Brundage is somewhat skeptical of the value of timelines information for decision-making by important actors. In the comments, Adam Gleave reports finding such information useful for prioritizing within technical AI safety research, and Carl Shulman points to numerous large philanthropic decisions whose cost-benefit depends heavily on AI timelines. In Two-year update on my personal AI timelines, Ajeya Cotra outlines how her forecasts for transformative AI (TAI) have changed since 2020. Her timelines have gotten considerably shorter: she now puts ~35% probability on TAI by 2036 (vs. 15% previously) and her median TAI date is now 2040 (vs. 2050). One of the drivers of this update is a somewhat lowered threshold for TAI. 
While Cotra was previously imagining that a TAI model would have to be able to automate most of scientific research, she now believes that AI systems able to automate most of AI/ML research specifically would be sufficient to set off an explosive feedback loop of accelerating capabilities. Back in 2016, Katja Grace and collaborators ran a survey of machine learning researchers, the main results of which were published the following year. Grace's What do ML researchers think about AI in 2022? reports on the preliminary re...

The Nonlinear Library
EA - Punching Utilitarians in the Face by AppliedDivinityStudies

The Nonlinear Library

Play Episode Listen Later Jul 13, 2022 8:48


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Punching Utilitarians in the Face, published by AppliedDivinityStudies on July 13, 2022 on The Effective Altruism Forum. A fun game for avowed non-utilitarians is to invent increasingly exotic thought experiments to demonstrate the sheer absurdity of utilitarianism. Consider this bit from Tyler's recent interview with SBF: COWEN: Should a Benthamite be risk-neutral with regard to social welfare? BANKMAN-FRIED: Yes, that I feel very strongly about. COWEN: Okay, but let's say there's a game: 51 percent, you double the Earth out somewhere else; 49 percent, it all disappears. Would you play that game? And would you keep on playing that, double or nothing? BANKMAN-FRIED: Again, I feel compelled to say caveats here, like, "How do you really know that's what's happening?" Blah, blah, blah, whatever. But that aside, take the pure hypothetical. COWEN: Then you keep on playing the game. So, what's the chance we're left with anything? Don't I just St. Petersburg paradox you into nonexistence? Pretty damning! It sure sounds pretty naive to just take any bet with positive expected value. Or from a more academic context, here is FTX Foundation CEO Nick Beckstead alongside Teruji Thomas: On your deathbed, God brings good news. He'll give you a ticket that can be handed to the reaper, good for an additional year of happy life on Earth. As you celebrate, the devil appears and asks "Won't you accept a small risk to get something vastly better? Trade that ticket for this one: it's good for 10 years of happy life, but with probability 0.999." You accept, but then the devil asks again. "Trade that ticket for this one: it is good for 100 years of happy life – 10 times as long – with probability 0.999^2 – just 0.1% lower." An hour later, you've made 50,000 trades. You find yourself with a ticket for 10^50,000 years of happy life that only works with probability 0.999^50,000, less than one chance in 10^21. Predictably, you die that very night. And it's not just risk! There are damning scenarios downright disproving utilitarianism around every corner. Joe Carlsmith: Suppose that oops: actually, red's payout is just a single, barely-conscious, slightly-happy lizard, floating for eternity in space. For a sufficiently utilitarian-ish infinite fanatic, it makes no difference. Burn the Utopia. Torture the kittens. ... in the land of the infinite, the bullet-biting utilitarian train runs out of track. It's looking quite bad for utilitarianism at this point. But of course, one man's modus ponens is another man's modus tollens, and so I submit to you that actually, it is the thought experiments which are damned by all this. I take the case for "common sense ethics" seriously, meaning that a correct ethical system should, for the most part, advocate for things in a way that lines up with what people actually feel and believe is right. But if your entire argument against utilitarianism is based on ginormous numbers, tiny probabilities, literal eternities and other such nonsense, you are no longer on the side of moral intuitionism. Rather, your arguments are wildly unintuitive, your "thought experiments" literally unimaginable, and each "intuition pump" overtly designed to take advantage of known cognitive failures. The real problem isn't even that these scenarios are too exotic, it's that coming up with them is trivial, and thus proves nothing. 
Consider, with apologies to Derek Parfit: Suppose that I am driving at midnight through some desert. My car breaks down. You are a stranger, and the only other driver near. I manage to stop you, and I ask for help. As you are against utilitarianism, you have committed to the following doctrine: when a stranger asks for help at midnight in the desert, you will give them the help they need free of charge. Unless they are a utilitarian, in which case you will punch them in the ...
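
Beckstead and Thomas's numbers check out, and a few lines of Python (a sketch of mine, not from their paper) make them concrete:

    import math

    # Each trade multiplies the payoff by 10 and the success
    # probability by 0.999, so after n trades the ticket pays
    # 10^n years with probability 0.999^n.
    n = 50_000
    p_win = 0.999 ** n
    print(f"win probability: {p_win:.2e}")  # ~1.9e-22, under 1 in 10^21

    # Yet each trade multiplies the expected payoff by 10 * 0.999 = 9.99,
    # so the expected value keeps growing even as the win probability
    # vanishes - which is the engine of the paradox.
    log10_ev = n * math.log10(10 * 0.999)
    print(f"expected payoff: about 10^{log10_ev:.0f} years")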

Wednesday Night Drinking Practice
WNDP .230 Theresa Scott ”Warm it up buddy”.

Wednesday Night Drinking Practice

Play Episode Listen Later Apr 20, 2022 114:13


Legendary rocker Theresa Scott joined us to talk about boats, dancing and of course music. Theresa is known for her work in "Pamper the Madman" and currently "The Utilitarians", and a couple of her band mates, Chris Tady and Billy Guilfoyle, came by to root her on. We are so happy to be sponsored by "Fric and Frac" this week. Great food in a local treasure. Make sure to swing by for "Wu Tang Wednesday's". Plus "The Majestic Restaurant", "Jaykco Guitar Straps", "Found Sound KC" and "The Soup Kitchen". Thanks everyone! Keep practicing! https://www.facebook.com/groups/717691869243722 https://www.facebook.com/pamperthemadmankcmo https://fricandfrac39.square.site/ http://majestickc.com/ https://www.jaykco.com/ https://www.facebook.com/foundsound.kc/ https://thesoupkitchenkc.com/ https://www.patreon.com/wndp

Wednesday Night Drinking Practice
WNDP .225 Billy Guifoyle ”Tales from Isabella's basement'.

Wednesday Night Drinking Practice

Play Episode Listen Later Mar 16, 2022 117:41


Artist, drummer and amazing human being William Guifoyle was our guest. He plays in several outfits, and currently "The Utilitarians" is the band you will hear two new tunes from. Also from that band, Chris Tady joined us for this fun and hilarious evening. This week we welcome an old friend but a new regular sponsor, "Fric and Frac". Our favorite neighborhood bar and restaurant provided some amazing food. And every Wednesday now is "Wu Tang Wednesday's": Kung Fu movies and music related to it! Check out this landmark on 39th street. And of course the whiskey came from our dear friends at "The Majestic Restaurant", downtown fine dining with live jazz in the historic "Fitzpatrick" building at 10th and Broadway. Plus, "Found Sound KC", "The Soup Kitchen" and "Jaykco Guitar Straps". Please, please, please check out our "Patreon". Thanks everyone. Stay safe!  https://www.facebook.com/groups/717691869243722/about https://fricandfrac39.square.site/ http://majestickc.com/ https://thesoupkitchenkc.com/ https://www.jaykco.com/ https://www.facebook.com/foundsound.kc https://www.patreon.com/wndp

The Nonlinear Library
EA - Doubts about Track Record Arguments for Utilitarianism by Erich Grunewald

The Nonlinear Library

Play Episode Listen Later Feb 12, 2022 19:44


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Doubts about Track Record Arguments for Utilitarianism, published by Erich Grunewald on February 12, 2022 on The Effective Altruism Forum. Utilitarians occasionally submit as evidence in favour of their philosophy the fact that early utilitarians were advocates for social and political positions that have now become norms. One example among many, from the authors of utilitarianism.net: While not constituting an argument per se, it is worth noting that utilitarian moral reasoning has a strong track record of contributing to humanity's collective moral progress. [...] As a progressive social reformer, Jeremy Bentham defended issues such as the separation of church and state; the abolition of slavery and of capital punishment; legal regulations to protect criminals and non-human animals from cruel treatment; and the decriminalization of homosexuality. [...] John Stuart Mill defended the provision of social welfare for the poor and of freedom of speech. He was the second MP in the UK Parliament to call for women's suffrage and advocated for gender equality more generally. [...] Henry Sidgwick advocated for women's education and the freedom of education from religious doctrines. They conclude that, though early utilitarians "were still far from getting everything right, their utilitarian reasoning led them to escape many of the moral prejudices of their time and reach more enlightened moral and political positions". (They do write that this is not an "argument per se". I'm not sure what they mean by that. The OED says that an argument is "a reason or set of reasons given in support of an idea, action or theory". The track record as presented does seem like a reason given in support of utilitarianism, so I will call it an argument in this post, with the reservation that I may be missing some subtlety.) Or here in Peter Singer's words, from this interview: I think utilitarianism [...] is actually a reforming impulse. [...] Bentham and [later] utilitarians have been against slavery, they've been for women's rights. They've been for the rights of gay people long before anybody else dared to even talk about that. They've been against cruelty to animals. They've been for prison reform. There's a long list of things that utilitarians have been trying to reduce the amount of suffering in relation to, and I'm very happy to be part of that tradition and to think of utilitarianism not merely as something for philosophers to talk about, but something that motivates people to act. (And he is too modest to mention it, but of course we can add to that illustrious list Singer's own work on animal welfare and global poverty.) This is a somewhat different version, where the track record doesn't show that utilitarianism is correct, exactly, but that it's useful – that it produces good outcomes. But it has in common with the first that the track record is given as a reason in support of utilitarianism. Summary: I think it's true that early utilitarians were better on social and political issues than e.g. Kant and his followers. However, I think this (implied) difference is exaggerated (70% confidence); and I think this tells us very little about the soundness of the utilitarian philosophy (75% confidence). I base (2) mainly on three points: the various interpretations of the track record argument that I could think of don't seem very convincing; there are potentially confounding variables, in particular the culture that early utilitarians grew up in; and the sample size is very small. Why the Difference between Utilitarians and Kantians May Be Exaggerated: Jeremy Bentham (1748-1832), John Stuart Mill (1806-1873) and Henry Sidgwick (1838-1900) are sometimes contrasted with a dour, inflexible Immanuel Kant (1724-1804). (I'm using Kantian ethics here because that is what I know better, but you can pro...

The Nonlinear Library: EA Forum Top Posts
Virtues for Real-World Utilitarians by Stefan_Schubert, Lucius_Caviola

The Nonlinear Library: EA Forum Top Posts

Play Episode Listen Later Dec 11, 2021 2:26


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Virtues for Real-World Utilitarians, published by Stefan_Schubert, Lucius_Caviola on the Effective Altruism Forum. This is a linkpost. In this paper, we argue that utilitarians who try to act on utilitarianism in the real world face many psychological obstacles, ranging from selfishness to moral biases to limits to epistemic and instrumental rationality. To overcome the most important of these obstacles, utilitarians need to cultivate a number of virtues. We argue that utilitarians should prioritize six virtues: Moderate altruism - to set aside some of their resources for others; Moral expansiveness - to care about distant beneficiaries; Effectiveness-focus - to prioritize the most effective interventions; Truth-seeking - to overcome epistemic biases to find those effective interventions; Collaborativeness - to engage in fruitful collaboration with other utilitarians, as well as non-utilitarians; and Determination - to consistently act on utilitarian principles with persistence and deliberation. In addition, we argue that utilitarians should normally not engage in harm for the greater good, but should stick to common sense norms such as norms against lying and stealing. So in our view, real-world utilitarianism converges with common sense morality in some respects. Utilitarians should follow common sense norms and should not feel that they have to sacrifice almost all of their resources for others, in contrast to what it might seem at first glance. But in other ways, real-world utilitarianism diverges from common sense morality. Because some opportunities to do good are so much more effective than others, utilitarians should cultivate virtues that allow them to take those opportunities, such as effectiveness-focus and moral expansiveness. Those virtues are not emphasized by common sense morality. Some of our suggested virtues are commonly associated with utilitarianism. Moral expansiveness is maybe the clearest example. By contrast, virtues such as truth-seeking, collaborativeness, and determination do not tend to be associated with utilitarianism, and are not conceptually tied to it. But empirically, it just turns out that they are very important in order to maximize utilitarian impact in the real world. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

The Philosophers Stoned
Episode 51: The Utilitarians

The Philosophers Stoned

Play Episode Listen Later Oct 6, 2021 56:28


In the fourth episode on the intellectual climate that preceded the two World Wars, we discuss Utilitarianism. Jeremy Bentham (1748-1832) is widely regarded as the founder of Utilitarianism as a social and philosophical program. Bentham was the de facto leader of a group of British intellectuals called the Philosophical Radicals. The Radicals believed that British society needed to be drastically reformed from top to bottom in accordance with the Principle of Utility. All laws, all governments, all institutions of education and media, agriculture, industry, and so on, must conform to a single principle: that the best state of affairs is the greatest happiness of the greatest number.

Let's Chat Ethics
Intention, intention, intention: Are we all utilitarians when it comes to judging machines?

Let's Chat Ethics

Play Episode Listen Later Sep 2, 2021 67:29


This week Alba and Amanda discuss a new book called How Humans Judge Machines by Cesar A. Hidalgo. Get the book on: https://www.judgingmachines.com/ Eric Schwitzgebel's Aiming for Moral Mediocrity: https://faculty.ucr.edu/~eschwitz/SchwitzAbs/MoralMediocrity.htm The puppy cartoon: https://images.app.goo.gl/C4zKG5hsfE6419Ra6

Sentientism
"Maybe moral systems are harmful! Like religion they are used to divide us." - Walter Veit - philosopher, scientist and sentientist - Sentientist Conversation

Sentientism

Play Episode Listen Later Apr 20, 2021 87:05


Walter (https://twitter.com/wrwveit & https://walterveit.com/) is an interdisciplinary scientist, philosopher & writer focusing on biology, minds & ethics. He publishes the 'Science & Philosophy' series on Psychology Today & Medium. In these Sentientist Conversations we talk about the two most important questions: "what's real?" & "what matters?" Sentientism is "evidence, reason & compassion for all sentient beings." Our conversation is also here on YouTube: https://youtu.be/xxSGItuaSn4. We discuss: - Defending Descartes, as a child! - Studying philosophy, politics, economics & science - Writing a PhD on the philosophy of consciousness. How sentience came to arise in a purely physical universe - Growing up in an agnostic family, attending church but "grumbling" - Finding it strange learning about god at school. Asking annoying questions :) - A brief early teenage phase of believing in god, then reverting to atheism - Church seemed like a "weird cult-like thing." "It just didn't make sense." - There are thousands of religions. They can't all be right. Just disbelieving in one more than religious people do - Being a "hard-core naturalist" but still feeling the pull of superstition - Finding naturalism reassuring. Can abandon the "f*cking scary stuff" (monsters, ghosts, hell) - We can just enjoy our lives & explore the universe - The pull of being part of something larger. The universe, a tribe, a sports team fan group... - The hesitation in Germany about collectivism - "It's a peculiar world we live in - it's exciting" - Does morality crumble without the normative force of a god? "A dude in the sky making up laws and we just have to follow them" - People sceptical of morality aren't sceptical about laws. You can break them but there might be consequences - Too much of morality seems arbitrary. But Bentham almost proposed utilitarianism as a sort of science, not morality - Utilitarians in the UK were engaged in politics & in improving the world - Instead of considering morality - just consider the facts re: "What do animals want from their own point of view" - Facts: animals exist. They can be harmed. They don't like suffering - Humans evolved as a social species. That makes us care - "Morality" might create more harms than benefits! - There is no dividing line between humans and other animals because we all have interests - Both morality and religion have been used to divide humans & animals - Even oppressive groups have divided people through an appeal to morality - The deep connection between morality and spirituality/religion. Often naturalists & atheists don't see the danger - Naturalists are attacked for being amoral but are no less moral - And much more... Full show notes are on Sentientism.info. I ran out of space! Sentientism is "Evidence, reason & compassion for all sentient beings." More at https://sentientism.info/. Join our "wall" https://sentientism.info/wall/ using this form: https://sentientism.info/im-a-sentientist. Everyone interested, Sentientist or not, is welcome to join our groups. Our main one is here: https://www.facebook.com/groups/sentientism. Thanks, Graham for the post-prod: https://twitter.com/cgbessellieu.

Let's THINK about it
Moral Mazes (part 2)

Let's THINK about it

Play Episode Listen Later Jan 20, 2021 18:21


Gut Decisions: "The core of the managerial mystique is decision-making prowess" (Robert Jackall). So, if decisions were easy, they would be made by someone else; it is only the big money, big risk decisions that are looked at to determine your prowess. Thousands of jobs and the future of the division are on the line. How do you make the call? By your gut. The rules of a manager are: "(1) Avoid making any decisions if at all possible; and (2) if a decision has to be made, involve as many people as you can so that, if things go south, you're able to point in as many directions as possible." You have heard of that moral dilemma thought experiment discussed by utilitarians such as Peter Singer: the trolley problem? In the corporate version, no one takes action: 5 people are hit by the trolley and then everyone blames everyone else for not jumping. Another great day at the office dodging responsibility. So, your primary GUT DECISION for your survival in the company is: who is going to get blamed? BLAME TIME: For managers, to be BLAMED is to be injured verbally in public. And since we know that "image is crucial," this is a serious threat. The wise manager knows it has nothing to do with facts or the merits of a case, but is a socially construed manifestation born largely of being in the Wrong place, at the Wrong time. As Jackall says: "Bureaucracy expands the freedom of those on top by giving them the power to restrict the freedom of those beneath." ON THE FAST TRACK: The goal here is to outrun your mistakes! Jump up the ladder; then, when the person who replaces you inherits your screw-ups, you blame it on them and fire them. A manager can defer costs for short-term profits or gains. This sets up what Jackall calls "probationary crucibles" in which managers are tested under extreme pressures, reshaping them to make decisions for short-term expediency, for their own survival. In the end, the games played for a manager to "look good" and "meet the numbers" actually cost the company: it is a parasitic relationship that drains the company rather than keeping it healthy. "There is a natural selfishness... people want to make the system work for themselves. And when they get to the top, they can't criticize the system that got them there." (a manager quoted in Moral Mazes) Flexibility & Dexterity with Symbols: As you climb, the rules of the game are that you never publicly criticize or disagree with one another or with company policy. You just wear an agreeable face and use ambiguous language. But when blame time shows up, everyone has already built defenses and set up scapegoats. Jackall says the higher you go in the corporate world, the better you need to be at manipulating symbols without becoming attached to or identified with them. Thus "truth" takes a backseat to the imperative of appearances, which champions adroit talk requiring moral flexibility and dexterity with symbols. And what happens when there is definitive proof of your mistakes? You say you were in accordance with the rules at the time, claiming that risk is necessary to make money, while you personally avoid risk by hiding in a bureaucracy. As Jackall says, you socialize the risks and harms of the corporate industry, while privatizing the benefits. THE BUREAUCRATIC ETHIC: Jackall shows the contrast with the original Protestant ethic: an ideology of self-confidence, frugality, and independence. It championed stewardship responsibilities, where your word was your bond. 
But it also signaled success as God's favor, and that was used to explain away the misery of the poor and unlucky. What has happened is that bureaucracy "breaks apart substance from appearance, action from responsibility, and language from meaning" (Robert Jackall). With survival tied to such a fickle, mercurial fate, corporate bureaucracy erodes internal, and external, morality. It generates its own rules and moral standards, primarily through social context: what is fashionable becomes true, since everyone is looking at each other for moral cues, but to rise in the ranks the only virtue to be found is self-interest masked as company loyalty. 2008, The Great Recession: Jackall has a 2009 essay added to Moral Mazes. It proves his 1988 book prophetic. Corporate culture and bureaucratic ethics expanded into a societal consciousness of short-term profits with super shady logic, yet everyone was doing it, so it became conscionable. And it broke our economy. This is an egregious example of "socializing risk and privatizing profit." It proves the protective power of bureaucracy, and encourages future recklessness.

Political Theory 101
Bentham, Mill, and the Utilitarians

Political Theory 101

Play Episode Listen Later Jan 3, 2021 94:05


Edmund and Benjamin run through the history of utilitarianism, focusing both on its radical and conservative aspects and running through some of the problems utilitarians have grappled with. Featuring Sidgwick, Moore, Singer, and Parfit, among others...

Microdosing Philosophy
Kant's Groundwork on Morals, Part one

Microdosing Philosophy

Play Episode Listen Later Dec 7, 2020 14:08


Kant takes the discussion of morality in a new direction, and challenges some central ideas of the Utilitarians. Happiness cannot be a guide to moral action. The only purely good thing in the world is a good will.

Let's THINK about it
Breakdown of Will

Let's THINK about it

Play Episode Listen Later Nov 15, 2020 25:08


Part I: We are in the era of "treat yo' self" and "you deserve it," which is at odds with the attitudes and behaviors of many successful people. What is will power? Ryder maps out three types of will found in psychoanalysis and how they relate to time. Freudian can be considered the "will to pleasure," based on your past; Adlerian is the "will to power (or superiority)," based around the present; and Frankl's logotherapy can be seen as the "will to meaning," based in part upon your future self. "He who has a why to live for can bear almost any how." (Friedrich Nietzsche) Will itself can be seen as motivations transformed into incentives; these incentives turn our choices into actions, and those in turn become our behaviors. These actions and behaviors show our values to the world, and thus it becomes who we are. Part II: So, why do we do what we don't want to do... instead of doing what we say we want to do? Ainslie discusses the Utilitarian and Cognitive camps to talk about "satisfaction models." A key point is to think of your internal desires as an internal marketplace, with different factions jockeying to win their reward. This follows the same principles as an army or corporation, any large group, where commands can be issued, but it is up to the managers to motivate the underlings, and an underground economy is formed that actually determines what gets done. Utilitarians tend to think logically: we do what rewards us. Yet there are several instances where we hurt ourselves, or do not follow logical principles. It is as if the current temptation is stronger than our ability to delay gratification. This brings up the "survival function," where we discount the future for the present, complicating rewards by discounting their value over time. Humans, and animals, tend to pick the more immediate, and often poorer, reward rather than wait for the long-term reward that would allow us to achieve our stated goals. This invokes the "pleasure principle" that is built into humans, and we must consider the role of reason vs. pleasure. If reason only exists to fulfill our desires, then we can't rely on it to thwart our desires... unless we weaponize our desire. We must have a bigger, better, stronger desire that allows us to displace the short-term weak rewards we crave. As well, we can, with forethought, plan around our future failures. If we know we will be tempted, we can, like Odysseus, plan around our temptations, but this does not work when temptation or instinct is sprung upon us. Part III: Taking a closer look at why we function the way we do, Ainslie points out "exponential discounts" versus "hyperbolic discounts." Humans tend towards hyperbolic, but someone like a banker looks long-term and realizes that if rates stay steady, the long-term game wins. The banker can then take advantage of the hyperbolic person who values something strongly in the immediate moment without planning ahead. The banker is an example of a long-term rewards thinker, and the hyperbolic person is the rest of us reacting to our immediate needs. Ainslie brings up that rewards and pleasures are not the same thing. Rewards are behaviors that you repeat, and they may be painful, while pleasures tend to be desire fulfillment. Consider the instinct of a mother bear to protect her cubs: this behavior does not produce pleasure for the individual, but it does reward the species. Similarly, humans have a list of behaviors that can be hacked and are not good for individual welfare, but play a role in gene propagation. 
Nature tends to make these species rewards "pleasurable" so that we undertake hoarding food or having sex. So, why has nature not figured out that hyperbolic behavior can be taken advantage of by the banker on an exponential, logical curve? We end on two analogies for how to consider long-term vs. short-term rewards. One is perspective, where we understand that the building at the end of the block is larger than the one we are standing next to, even though the one we are next to looms larger, taking up more attention. Rewards function this way as well: the nearer reward demands attention, blocking out the ability to stay focused on the long-term reward in the distance. The other is a "chain of predation," where small fish are eaten by progressively larger fish... but with rewards it functions in reverse, where the small reward (the current itch) eats the larger rewards (mid-range goals), until we never get to the big long-term reward.
0:56 Why does will power break down?
3:29 Will and 3 schools of psychoanalysis tied to past/present/future
6:49 Procrastination and Chunky Monkey: Who am I?
10:01 Illogical decisions or survival discounting?
15:05 Physics for the mind: weaponize your desires
17:09 Outwit yourself: Odysseus
19:38 Pleasure for survival: instinct vs. logic
23:45 Rewards perspective and the reverse chain of predation
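
The exponential-versus-hyperbolic contrast is easy to make concrete in a few lines of Python (an illustration with arbitrary parameter values, not Ainslie's own numbers):

    def hyperbolic(amount, delay, k=1.0):
        # Ainslie-style curve: steep near zero delay, flat at long delays.
        return amount / (1 + k * delay)

    def exponential(amount, delay, r=0.05):
        # The "banker's" curve: a constant fraction of value lost per day.
        return amount * (1 - r) ** delay

    small, large = 10, 100  # a small-soon reward vs. a large-late reward
    for discount in (hyperbolic, exponential):
        for days_out in (20, 0):  # viewed from afar, then up close
            v_small = discount(small, days_out + 1)
            v_large = discount(large, days_out + 30)
            pick = "large-late" if v_large > v_small else "small-soon"
            print(f"{discount.__name__:11s} {days_out:2d} days out -> {pick}")

    # The hyperbolic chooser flips from large-late to small-soon as the
    # small reward nears; the exponential chooser never flips, because
    # exponential value ratios stay constant over time. That stability is
    # exactly what lets the banker exploit the rest of us.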

The Theology Pugcast
Adiaphora: What's Indifferent? Less Than You May Suppose.

The Theology Pugcast

Play Episode Listen Later Mar 23, 2020 63:20


Today Chris introduces the subject of "Adiaphora"--a term which means "things indifferent". He notes that it is a fine term, so long as you use it correctly. He addresses its use by the Cynics, and then Glenn discusses briefly how it was used by some of the Reformers in a very narrow sense. The trouble with it, according to the gang, is that it has become a catchall for anyone who wants to shut down discussion on the meaning of certain things--reducing them to matters of taste. It is a favorite category for the Utilitarians in the church who wish to dismiss the promotion of Beauty entirely, and even narrow the application of Truth and Goodness to a very small set of concerns. If, as Abraham Kuyper said, 'There is not a square inch in the whole domain of our human existence over which Christ, who is Sovereign over all, does not cry, Mine!'--then how can we say something is indifferent? Everything matters to the Christian because everything belongs to Christ. --- Support this podcast: https://anchor.fm/the-theology-pugcast/support

Podcast – The Anarchist Experience
The Anarchist Experience – 219

Podcast – The Anarchist Experience

Play Episode Listen Later Jun 10, 2019


The Anarchist Experience Episode 219 Season 3 Episode 39 Download Episode MC is in Mongolia. So, Rich E Rich reads the news. HEADLINES: – The GoFundMe Border Wall Is Finished. It’s Not Stopping Migrants From Coming In. – Utilitarians – Flimsy Justifications to Restrict Your Freedom – Don’t forget the “Galt’s Gulch Tactic.” Shrug-off the […]

Father George William Rutler Homilies
2018-12-16 - Third Sunday of Advent

Father George William Rutler Homilies

Play Episode Listen Later Dec 16, 2018 15:35


16 December 2018 Third Sunday of Advent Luke 3:10-18 + Homily 15 Minutes 38 Seconds Link to the Readings - USA Version http://www.usccb.org/bible/readings/121618.cfm (from the parish bulletin) There could be no easier subject for comment than happiness. The best classical pagan philosophers, even if they did not believe a Creator intended that humans should share in his "delight" at what he had made, taught that we were meant to be happy. Some nineteenth-century "Utilitarians" like Jeremy Bentham thought that this happiness meant a sense of pleasure without pain. As usual, the ancients like Aristotle were more sophisticated than many intellectually clumsy moderns and made a connection between pleasure and virtue. They called this "eudaimonia." That is to say, you cannot honestly feel good unless you do what is good, and you cannot do what is good unless you yourself are good. But as "only God is good," real happiness demands that humans give God permission to impart His goodness to our souls. It is possible to fake happiness, and that is why there is so much unhappiness in our culture, which disdains virtue. One can create an illusion of happiness, but it is a kind of moral stage set, and its falseness is revealed in the frightening explosion in drug use, and the seventy per cent increase in suicides among young people in the past decade. Real happiness is not the result of painlessness, but comes from dealing with pain the right way. This is why the Scriptures curiously remark almost nonchalantly: "It is better to go to the house of mourning than to go to the house of feasting" (Ecclesiastes 7:2a). Jesus promises a joy that "might be full" (John 15:11), and at the same time He was a "man of sorrows" (Isaiah 53:3). On Gaudete Sunday, which means "Rejoice," the Church sneaks a peek into the joy of Heaven. Here is the confidence that man's destiny, willed by God and which can only be thwarted by the selfish will of corrupted humans, is participation in endless happiness. The word for this is more than happiness based on happenstance, but joy rooted in eternal harmony, effectively only with God. The ancient Greeks stretched for a truth that they could not fully express: real happiness, which is joy, is the state of holiness. This is why Saint Paul says that Christians must rejoice "always" (Philippians 4:4a). Always – and not just after a Happy Hour at McGinty's tavern, or holding a winning ticket at the Kentucky Derby – because the creature's source and object of all joy is the Creator. Chesterton asked rhetorically in his Ballad of the White Horse: "Do you have joy without a cause?" His point: there is no joy without a cause. That would be like having health by chance. Joy is joyful precisely because it has a cause that never fails. Approaching Christmas, the Church sings in astonishment that the Word was made flesh, and when He left this world, He promised that He would never leave us comfortless.

Open Stacks
#9 Bart Schultz & Kate Hennessy: Public Figures, Private Lives

Open Stacks

Play Episode Listen Later Jul 10, 2017 48:22


Philosopher Bart Schultz discusses "The Happiness Philosophers" which explores the ideas and the eccentric lives of the great Utilitarians. Kate Hennessy gives an inside look into the family life of her grandmother, famed Catholic activist Dorothy Day.   Open Stacks is the official podcast of the Seminary Co-op Bookstores. This episode was produced by Kit Brennen and Imani Jackson.    

Philosopher's Zone
William Godwin: rebel for love, happiness, and anarchy

Philosopher's Zone

Play Episode Listen Later Jun 4, 2017 25:02


The Utilitarians have given us a big idea to live by, transforming life in ways we take for granted today. One of their number challenges us to take it further.

Very Bad Wizards
Episode 56: Moral Heroes and Drunk Utilitarians

Very Bad Wizards

Play Episode Listen Later Nov 3, 2014 70:42


Following up their discussion of moral villains, Dave and Tamler argue about what makes a moral hero. Tamler defends Sharon Krause's view that honor values can motivate heroic behavior. Dave accuses Tamler of being inconsistent (nothing wrong with that) and slightly Kantian (NOOOOOO!!!). In the final segment, we're back on the same page fawning over Susan Wolf's paper "Moral Saints." Plus, are drunks more likely to be utilitarians? And why does Dave hate Temple Grandin? Links: The Cold Logic of Drunk People by Emma Green [theatlantic.com]. Duke, A. A., & Bègue, L. (2015). The drunk utilitarian: Blood alcohol concentration predicts utilitarian responses in moral dilemmas. Cognition, 134, 121-127. [sciencedirect.com]. Wolf, S. (1982). Moral saints. The Journal of Philosophy, 419-439. [verybadwizards.com]. Krause, S. R. (2002). Honor and democratic reform (Ch. 5) [verybadwizards.com]. In Liberalism with honor [amazon.com affiliate link]. Harvard University Press.

Very Bad Wizards
Episode 7: Psychopaths and Utilitarians Pt. 2 (Now with more poo poo)

Very Bad Wizards

Play Episode Listen Later Nov 4, 2012 67:00


After a clip from The Third Man, Dave and Tamler continue their discussion from Episode 6 on Ted Bundy, utilitarians, and trolley problems. They also talk about Tamler's TED talk envy, inappropriate acts with trees, and make a plea for more listener feedback. The second segment begins with the long-awaited return of the 'eat the poo-poo' clip, but this time in a somewhat relevant context. Dave and Tamler then discuss the role that emotions play in moral judgment and the role they should play. If we feel disgust at someone's behavior, does that mean the behavior is morally wrong? Tune in to find out… Links: The Third Man Ferris wheel scene (maybe Dave will see this movie one day). Dave's TEDx talk, bumped up to TED (129,000 views). Tamler's TEDx talk, not as much bumping up (676 views). "Consequentialists are Psychopaths" (The Splintered Mind blog post). Eat the poo poo. Yuck by Dan Kelly. "Grime and Punishment," a brief review of disgust and moral judgment from The Jury Expert by Yoel Inbar (the brains--and brawn--behind all the disgust work) and David P.

Very Bad Wizards
Episode 6: Trolleys, Utilitarians, and Psychopaths (Part 1)

Very Bad Wizards

Play Episode Listen Later Oct 20, 2012 61:39


Tamler contemplates ending it all because he can't get 'Call Me Maybe' out of his head, and Dave doesn't try to talk him out of it. This is followed by a discussion about drones, psychopaths, Canadians, Elle Fanning, horrible moral dilemmas, and the biggest rivalry in Ethics: utilitarians vs. Kantians. Links: "Why I Refuse to Vote for Barack Obama" by Conor Friedersdorf. "Why I Refuse to Refuse to Vote for Obama" by Robert Wright. Dave's study "The Mismeasure of Morals". The write-up of Dave's study in The Economist: "Goodness Has Nothing to Do With It"

Bio-Ethics Bites
Brain Chemistry and Moral Decision-Making

Bio-Ethics Bites

Play Episode Listen Later Jan 4, 2012 16:48


Answers to moral questions, it seems, depend on how much serotonin there is flowing through your brain. In the future might we be able to alter people's moral behaviour with concoctions of chemicals? A train is hurtling towards five people; it's out of control. You are standing on a footbridge, next to a very obese man. The only way to save the five is to push the man over the footbridge to his certain death: his bulk would stop the train and save five lives. Should you do it? Should you give him a shove? Most people would say no. Utilitarians say yes, you should take one life to save five. Now it turns out that the answer you give will depend on how much serotonin there is flowing through your brain. This raises an intriguing possibility: in the future might we be able to alter people's moral behaviour with concoctions of chemicals? That's been the research focus of Molly Crockett, now based in Zurich but formerly of Cambridge University.

Faith Community Church
Does the End Justify the Means? - Audio

Faith Community Church

Play Episode Listen Later May 23, 2010 36:00


Would you please open your Bibles up this morning to the Book of Colossians 2? I just want to bring us back to our theme and our purpose, because it's been a little while since I covered this. We're doing a series on ethical worldviews. In Colossians 2, Paul gives the Christians a word of admonition. In Verse 8 (page 1166 of pew Bibles), he says, "See to it that no one takes you captive through hollow and deceptive philosophy, which depends on human tradition and the basic principles of this world rather than on Christ." His warning is to Christians, and he's saying, "Listen, if you are not careful, you can be swept up. You can be susceptible. You too can succumb to the philosophies that are in the world until you end up resembling the world rather than Christ, until you become captive in your mind to a mindset that is not of God." So he says to the church, to Christians, "Because you live in this world, you need to be careful. Be careful what you espouse, what you believe, because there are influences around you that can steer you down a wrong path." So that's why we're on this series. These messages aren't the kind of messages that inspire you to say, "Boy, I just feel good about myself," or "I feel good about my faith." This isn't that kind of a message. These aren't designed to be. These are designed more to educate us, to make us aware, and also to help us realize that things are not always as cut and dried as they seem. Sometimes as Christians, we think, "Oh, we have it all figured out. We know black from white. It's all clear." And it's not. There are issues of ethics that we should struggle with as believers. What's right and what's the clear thing to do, if left up to you and me, is not always clearly seen. We're going to watch a clip this morning because our subject matter is utilitarianism, which is a school of thought, a philosophy of generalism. It says there are no universal laws. We talked about antinomianism a few weeks ago, which says every person makes up their own right and wrong; you are your own god. Well, in generalism or utilitarianism, we say, "Well, there are general rules, but there are no universal rules." Really, what is good is what brings about the greatest good for the greatest number of people. That's what utilitarians believe. What brings about the greatest good for the greatest number of people, that is what is good. The end justifies the means. All is well that…? Ends well, exactly. You may have even muttered that. Well, is that true? Is all well that ends well? Does the end justify the means? Utilitarians would say that if you start off to do a good act and it has a bad consequence, that was a bad act. If you try to rescue somebody from a car that's balancing on a cliff, and in the process of rescuing them the car tips over and they fall to their death, that was a bad act even though your intentions were right. The reverse is also true: if you have no intention of doing good (in fact, you might be intending to do harm), but it ends up being good for some strange reason, that was a good deed. Everything is consumed with the end, with the outcome; and the outcome, not the actual act, determines whether something is right or good. We're going to explore that prevalent worldview this morning from Scripture and talk about that philosophy. The first thing we're gonna do is we're gonna watch a clip from the movie Seven Pounds. Has anyone seen that movie with Will Smith? We're gonna watch a short clip, and I'm gonna tell you what happens in the movie.
If you say, "Oh, I've been wanting to see that movie," you've had over two years to see this movie (congregation laughing). Your time is up, so I would encourage you to just plug your ears if you don't want to know what this movie is about: the surprise ending, which some people liked and some people didn't. We'll talk about it. Let's watch this clip from the movie Seven Pounds. [Clip is played.] That's how the movie starts, and then it takes you on a lot of twists and turns, and you come to find out that this character was in a car accident. He caused the accident, and innocent lives were lost because of his negligence in driving, so he is racked with guilt. He is consumed by guilt, and he comes up with a plan of how to make it right, how to make his life count for something good. He's young; he's healthy; he's in the prime of life, and he knows people who need his organs: his eyes, his heart. His girlfriend is dying of a diseased heart; unless she has a transplant, she is going to die. His brother needs one of his organs. Other people that he's come to know who are good people need other organs that he has, so he decides the best thing he can do with his life, since he cost the lives of other innocent people, is to surrender his life, so he commits suicide. He fills the bathtub up with water and ice to preserve his organs. He has a poisonous jellyfish come into the tub, and it stings him and he dies. There's a note there for the ambulance workers to be careful because the floor is wet and there's a poisonous jellyfish in the water. He's warning them. He's really been a good person. He gives his house away to a poor family, and you see these people become the recipients of his organs. You see them living and enjoying life. It's one of those movies that leaves you just thinking. My wife wants to go to a movie and be entertained and feel happy. She wants a happy ending, and this didn't do that for her; but for me, I said I liked the film because it made me think. It made me think about ethics. So a utilitarian would look at that and say, "Well, here's a story. The greatest good was done. These people were helped. These people had happy endings. Their lives were prolonged. This guy got his vision back. All these wonderful things happened, and it's good." Others of you would say, "No, it's wrong because he took a life. He committed suicide. He killed somebody, even though it was himself. Thou shalt not kill," and you'd argue that what he did was wrong. Then others of you would say, "Well, isn't that what Jesus did? Didn't Jesus lay down His life and sacrifice His life for the greater good? Didn't He give of Himself so others would live?" So we'd find that even in a room where people are followers of Christ, we might disagree on the meaning of what Will Smith's character did, the rightness or the wrongness. Things are not always easy to understand. It's not always easy to determine what is right or wrong. I want to share a story with you, but before I share the story, I want to ask a question. The question is this: how many of you think it's ever right for a Christian who is a leader, a pastor, a theologian to plot to kill the head of state of a country? Can you foresee any circumstances where that might be acceptable or correct? For a Christian leader, theologian, or pastor to plot to kill the head of a country, can you foresee any circumstances where that would be right? I see a few hands. In the past services, there has only been one hand that has gone up. In the Saturday night service, there was nobody.
The 8 o'clock service, nobody; 9:30, one [person put their hand up]. In this service, there are about five. I want to tell you the story about a young pastor and theologian. He was born into affluence. His father was educated, a professor at a leading university. His mother received a degree, which was very rare in those days, from a major university. He grew up in a very godly home, very structured, and theology was something that attracted him at a young age. By the age of 21, he had already become a doctor of theology. His dissertation was one of the greatest theological pieces ever written. Many people regard him as the greatest theological mind of modern times. By his early twenties, he was an ordained minister, was traveling around the world, and was teaching at a prestigious theological seminary. He was living a good life and doing good things, but the government began to change. A party was coming to power which seemed to be corrupt. It was almost like he could read the writing on the wall that this party and this leader were up to no good. They were espousing doctrines of hate, of superiority of one race over another. They were taking away human rights and civil rights. In fact, it had become so corrupt that it was almost impossible to distinguish the state-run church from this corrupt regime. It was as if the church lived to propagate the doctrines of this regime, and the regime just fed into the church. I use the word church with italics because they were espousing things that are contrary to the Scripture. This pastor said, "We need to come out and be separate. This is not the church. The state is not the head of the church. Jesus is the head of the church. The church does not teach against the equality of all people. It should not, so we need to separate ourselves," and he started a new church called the Confessing Church. They said, "All true Christians need to come out from the state-run church because its teachings are contrary to the Gospel, and we need to unite across denominational lines and form this Confessing Church," and that's what they did. His message became increasingly strong as the government became more aggressive in its steps. Innocent people, crippled people, minorities, Jews, and handicapped people were all being put into concentration camps, and rumors were that some were even being executed. There was an ethnic cleansing going on. People were hearing that their race, their color, was superior, so many in the church and many in society were loving the message. This leader had brought economic prosperity at a time when the country had been in a depression, so people who should have known better were by the millions following after his leadership. The pastor tried to warn people; he would get on the airwaves and warn, but one time he was literally cut off in the middle of a speech while on the radio. The leader of this country said that he should not speak anymore, so his rights to speak, to teach theology, and to ordain pastors were taken away from him, so his ministry went underground. He began to teach, train, and write underground, and the Confessing Church continued but away from the eyes of the government. As the government became stronger and as the leader went from a leader to a dictator, this pastor, who was a pacifist, foresaw that millions of people were gonna die at the hands of this brutal man; so he decided, along with other people, that he would plot to assassinate Adolf Hitler. That would be his goal.
As a Christian man, he made a decision that this man was going to kill millions of innocent people, God's people, and he must be stopped. He plotted to kill the head of state. His name was Dietrich Bonhoeffer. He wrote many books, including books from prison, because he was soon arrested. The plot was uncovered, and he was ordered to be executed. Just days before the Allied forces would come and liberate and defeat the Nazi Party, the Third Reich, Dietrich Bonhoeffer was led to the gallows and hanged by a wire until dead. The same question I asked you just a moment ago: is it ever right for a Christian, a pastor, or a theologian to plot to assassinate the head of state? Most hands did not go up. A few of you now are telling me, "You didn't tell me it was Hitler!" The question still stands, and some of you who would not raise your hand a few moments ago, if I were to ask again, would probably go, "Okay. In Hitler's case, okay." Others of you would say, "No! No, not even in Hitler's case." You'd say, "The government is ordained by God," and then others of you would wise up and say, "Well, not that kind of government; a government that kills people, that's not ordained by God." Then some would say, "Well, that's murder. You're murdering the head of state. It's still murder." Others of you would say, "But he's gonna kill millions of people." So in this room, I guarantee you we would have a debate. People who otherwise get along and otherwise seem to think theologically along the same lines would be arguing their cases, and their cases would be different. Do you see? Things are not always easy. Things are not always black and white. Things are not always cut and dried in the world of ethics. I want to talk about utilitarianism a little bit, and I want to look at it in light of the Scripture. Before we do that, I want to raise a couple of points, a couple of questions I have. You may be more skilled at this than I am, but number one is: if there are no universal laws, how do we ever determine what's good? You say, "Well, what's good is what's good for the greatest number of people." Well, how do you know what's good? If there is no standard, if there is no norm by which we're to measure what is good and what is not good, then how do we decide what collectively is good if we have no standard of measurement? If you're running a race and nobody has ever decided what's fast or what's far, how do you know if your time was good, if you were fast? There is no standard of measurement. We have to have a norm. We have to have a standard. Number two, if the end justifies the means, how do we know when the end is? How do we measure the end? Is it the day after the act that we determine whether it was good or bad? Is it the year after? The decade after? The century after? How many of you have ever had a situation where you thought something was terrible in your life, and it ended up being something good? Let me see your hands. You thought something was bad, and when all was said and done, it was good. We baptized seven people last night after the Saturday night service. Seven people! A couple of those people got up and said, "This bad thing happened to me, and this bad thing happened to me, but these [bad] things brought me to God. This thing that I thought was so terrible and so awful ended up bringing me to new life in Christ." So what was bad turned out to be something that led to something good. My wife wanted so desperately to get a job after nursing school, and she was hoping to get a job at the clinic; and they called her and said, "No job. No dice."
She cried. A couple of days later, the hospital where she'd been trained called and said, "We want you to come and work as a nurse." It was a better job, better money, something that she wanted to do more, and she wouldn't have been able to take the job had she already been at the clinic. You see? So what we thought was bad turned out to be good. And other times, some of you have had something good that turned out to be bad, right? Let me see your hands. Something you thought was great: maybe you took a job, and once you took that job, it wasn't what it was cracked up to be. Maybe you married Mr. or Mrs. Right, and they turned out to be Mr. or Mrs. Wrong. Some people wanted to win the lottery: "Oh, if I could just be rich and have all this money, everything would be wonderful and good." They win the lottery, and they're miserable. The whole world is at their doorstep asking for money. Some famous people have said, "If I could just be famous, if everybody knew who I was and I was famous, boy, would I be happy!" They're miserable. The paparazzi follow them all over the place. They have no privacy! They say, "I wish I could just be a regular person again. I wish I could just go to the grocery store again without people following me, taking my picture everywhere, and pictures of my children." You see, you think something is going to be good, and it turns out to be bad. But we know things can change. Things that we think are bad can become good and vice versa, so how do we define the end? That's the second question. Then the third question is this: if it's up to man to define what is good, then good is relative. We talked about that World War II theme. Staying on that theme, what about the dropping of the bombs to end the war with Japan, Hiroshima and Nagasaki? Was that good? A utilitarian might say, "Well, it brought about the greatest good for the greatest number of people because millions of lives were saved. We ended the war." They would say it was good because their son got to come home. If your son had been in Asia or in Europe fighting World War II and he came home, not in a body bag but healthy, and he got to live a normal life, you would say the outcome of the dropping of those two bombs was a good thing. But what if you lived in Hiroshima or Nagasaki? What if you lived in those cities? As the plane carrying Little Boy (the name of the bomb) approached, they didn't even sound the alarms because there were only a few planes; they thought it was just a reconnaissance mission. The bomb was dropped, releasing energy unseen, unheard of before: 7,000-degree heat and winds stronger than 600 miles an hour. Over 200,000 people died from those bombs. Half of them died because they were just incinerated in a moment's time, and those were the lucky ones. Others hung on. There was no medical care; the hospital was destroyed. Ninety-three percent of the doctors and nurses died in the bombing, so men, women, children, and infants would lie and suffer with their burns until they died. If that didn't kill you, radiation poisoning would kill you, and you would die a long, slow death, maybe many months after the bomb was dropped. If you'd been a person living in Japan, and you had lost a child, father, mother, spouse, or friend, from your perspective, was the bomb good or bad? You see, it becomes a relative thing. Think about the cross for a moment.
When Jesus announced the news to the Disciples that He was gonna suffer and die, and He went to the cross, suffered and died, was it bad or good at that point? It's bad. It's bad. In fact, it was something to be prevented, Peter thought. Let's take a look. Let's go to Matthew 16. You're gonna watch Peter as he falls from lofty heights here. If you follow the Bible, you know that Peter was a man who spoke his heart, often put his foot in his mouth, and didn't think about consequences until afterwards. Jesus is going to quiz His Disciples, and He's going to ask, "Who do people say that I am?" They answer, "Some say You're John the Baptist. Some say You're Elijah." Then He turns the question on them and asks, "Who do you say that I am?" Peter gets it right. Peter says, "You are the Christ. You are the Son of the Living God. You are the One that we've longed for. You're the One the prophecies foretold. You are divine. You're the Son of God. You're the fulfillment of prophecy. You are the Son of the Living God." Jesus said, "You've answered correctly, Peter. You solved the mystery. You answered the puzzle correctly." Peter is praised. He gets an A that day on the pop quiz. He is promoted to the head of the class. Way to go, Peter! You got it right. That is exactly who I am. Then Verse 21 says that from that moment on, Jesus began to change how He taught them. He began to disclose new revelation. It says in Verse 21, "From that time on" (from the time of that confession) "Jesus began to explain to His Disciples that He must go to Jerusalem and suffer many things at the hands of the elders, chief priests and teachers of the law, and that He must be killed and on the third day be raised to life." This does not sit well with Peter. He's been to Sunday school. He's studied the Scriptures. The Messiah abides forever. His kingdom never ends, so what Jesus is saying is wrong. It's wrong! He is going to let Jesus know that. He's going to tell Jesus in no uncertain terms that what He is thinking is wrong. Jesus is his friend; He's his leader; He's his teacher; but more than that, He is everybody's Messiah. Think of what Peter is going through. He has just discovered and just made known who Jesus is, and not only is he seeing history fulfilled and seeing a prophecy fulfilled, he gets to be a part of it. He gets to walk and talk with the Messiah. He gets to learn from the Messiah, not just publicly but privately. What a privileged person he is, and now it's all going to end. It's going to be taken away from him. And the way it's going to be taken away: this man whom he loves and follows is going to be taken away, beaten, and made to suffer and die on a cross, where thieves and criminals go. Jesus is a good person. He's done miracles. He's helped people. This isn't right! Good people don't die on a cross. Good people don't suffer at the hands of evil men. This is not going to happen on Peter's watch. It would be like in that movie, if Will Smith had said, "I'm going to commit suicide and give my organs to people I love," and a friend had said, "Oh no, you're not. Oh no, you won't, because I love you!" His girlfriend would have said, "No, I don't want to lose you, even if it means getting a new heart. I don't want to lose you! I will guard you 24/7. I will tackle you whenever I see you going near a bathtub." So that's what Peter does. He's going to take matters into his own hands. He says, "Never, Lord!" Verse 22 (page 973 of pew Bibles): "Peter took Him aside and began to rebuke Him. 'Never, Lord!' he said. 'This shall never happen to You!'"
Not only should it not happen to You; he says, "It will not happen to You!" He's going to personally see that it does not happen. Remember, Peter is the one in the Garden of Gethsemane who, when the soldiers come to arrest Christ, takes the sword and cuts off the ear of Malchus. Dr. Luke, who's into detail, even tells us which ear it was. He cuts it off, and Jesus says [in John 18:11, page 1071 of pew Bibles], "Put away your sword. Those who live by the sword shall die by the sword. The cup the Father has given Me, shall I not drink it?" So Peter is willing to defend Jesus. He says, "I will stop You. You are working against Your own best [interest]. You don't even know what You're talking about. The Messiah abides forever. You're a good man. You're not going to go to the cross. I will personally see to it that it never happens!" In other words, Peter wants to personally see to it that he and every other human being is lost in sin for all eternity [congregation laughing]. That's what it boils down to, because he sees the end, and according to him, the end is the death of Jesus on the cross, and it's bad! It must be prevented, because you and I are finite. If you and I determine what is good, then good is relative. Peter did not want to lose a friend. He did not want to lose his Lord, and he didn't want anybody else to lose Him. He didn't want the world to lose Him, but he didn't understand that in order for the world to gain the benefits of the Messiah, Christ must die. Peter understood who Jesus was, but Peter did not understand why He had come. Do you see the difference? "I know who Jesus is. He's the Christ, the Son of the Living God, but I don't understand why He came." Peter thinks he knows better than Jesus. Peter thinks his plan is better than Jesus' plan. I know nobody in this room has ever thought that, but the 9:30 crowd, you have to watch those guys, because some of those people think they know better than God. There have been occasions when some of those 9:30 people have known the Will of God and done something contrary, or read the Will of God and done something contrary. Not you guys! Understand, I'm not talking to you, but some of these other folks sometimes think they know better than God. They get ahead of God, and they start trying to lead instead of follow. You might even know some of the people from the 9:30 service, so you know what I'm talking about. It's good to talk to a group that is always in the Will of God and submissive to the Will of God. That's why I love the 11:00 service so much! Anyway, what does Jesus say? He looks at Peter, and He says, "Peter, get behind Me, Satan," or adversary. "You are a stumbling block to Me." You are trying to thwart God's plan. You are trying to thwart the plan of salvation, which Peter doesn't understand at this point. He said, "You do not have in mind the things of God, but the things of men." People are very short-sighted. The end of Jesus, Peter thought, was something bad. Now that brings us to point number four: utilitarianism exalts man. It makes man the arbiter of what's right and wrong in our limited knowledge, in our finite wisdom; but apart from divine knowledge, you and I will not understand what is good. That's point number four: apart from divine knowledge, you and I cannot understand what is good. We are limited in our understanding. Now, keep in mind this is the same Peter who once said, "You shall not go to the cross. Evil men go to the cross. You're not an evil man. Criminals go to the cross. You're not a criminal. You're a good man."
"The Messiah is supposed to live forever. You are the Messiah, and this is a wrong plan." Through divine revelation, Peter came to see that entirely differently. Let's turn to the Book that bears his name. Turn, please, if you will, to 1 Peter 2 (pages 1201-1202 of pew Bibles). The same man is now looking back on the same event. He says this in Verse 24, speaking of Jesus: "He Himself bore our sins in His body on the tree, so that we might die to sins and live for righteousness." He now sees the cross not just as an instrument of death but as a means of righteousness. "…for by His wounds, you have been healed." As difficult as it was for him to watch his friend suffer, bleed, and die, he recognizes that God was going to use that event to bring healing, to bring reconciliation. Chapter 3, Verse 17: "It is better, if it is God's will, to suffer for doing good than for doing evil." "Jesus, you can't go to the cross. You're a good man." Now he sees it's better for a good man to suffer for doing good than for an evil man to suffer for doing evil. One of the few witnesses to Pastor Bonhoeffer's death said they had never seen a man more submissive to the Will of God. Before he walked to the gallows, Pastor Bonhoeffer got on his knees and prayed earnestly to God, then walked calmly up onto the gallows and submitted to death without a struggle. In a matter of seconds, the witness said, he was dead. Pastor Bonhoeffer discussed what it was to be a sufferer for Christ and a persecuted follower of Christ in his book The Cost of Discipleship. If you haven't read a book on discipleship, this is required seminary and Bible school reading across our country: The Cost of Discipleship by Dietrich Bonhoeffer. I would encourage you to read that book. He was submissive to the Will of the Father. Verse 18: "For Christ died for our sins once and for all, the righteous for the unrighteous, to bring you to God. He was put to death in the body but made alive by the Spirit." Peter comes to see the death on the cross as the key to reconciling man to God. What he once regarded as evil and bad, he now sees as the means of salvation. It's how God would pay for sin. It's how God would bring sinners to Himself. He would exchange the righteousness of Christ for our unrighteousness. It is the good news: how God took this ultimate bad event and turned it into the ultimate good for the greatest number of people. Man could not conceive of that in his limited wisdom. It took divine revelation and understanding, the Holy Spirit, to illuminate that truth. Utilitarianism is a form of humanism that puts man at the center and says, "You and I are capable of determining right from wrong. You and I can understand all these complex issues and questions and come to right decisions on our own, apart from divine revelation." And I'm saying to you that we cannot. We are fallen; we are finite; and we are limited, and we need to take a good dose of humble medicine this morning and say, "I am dependent upon the Creator, the author of good and righteousness, to tell me how to live my life, to tell me what's good, what's right, and what's truth." Let's pray together. Father, this morning, we want to thank You for the life of Dietrich Bonhoeffer, for his writings, for his struggle against evil, against oppression, against a regime and a man who sought to kill and did kill so many people. Lord, we see in his example, whether one agrees that he was right or wrong, a man who loved You, a man who was vexed in soul over what he had to do, a pacifist who was tormented over the suffering that he saw.
Lord, these sermons, these stories, and these Scriptures raise a lot of questions in our minds, and we realize that things aren't always tied and wrapped in a neat little package with a bow. We see Peter, and in Peter, we see ourselves. We see ourselves sometimes getting ahead of You and thinking we know better than You, thinking our plan is better, yet realizing in the end that You are the One who determines what is right and good. You can take what is harmful and evil, and You can turn it into something that's good in our lives; and so we are dependent upon You to know goodness and truth. Help us, Lord, to have this humble awareness. Help us to have our radar up, as the Apostle Paul said, to be aware of the subtleties and the hollow deception that are a part of our society and culture, that we might reflect Christ and not the world that's around us. In Jesus' name we pray, Amen.