Podcast appearances and mentions of Max Alexander

  • 36 PODCASTS
  • 45 EPISODES
  • 54m AVG DURATION
  • INFREQUENT EPISODES
  • LATEST: Nov 15, 2024

POPULARITY (2017-2024)


Best podcasts about Max Alexander

Latest podcast episodes about Max Alexander

Little News Ears - Cuddly News
News for Kids at LNE.news - Dragon Ranger Owl - S4E1

Little News Ears - Cuddly News

Nov 15, 2024 · 15:23


It's November 15, 2024. Jasper, Jarvis, and Clementine are back with news for kids a different way! Today they're sharing news from May 2024 to the present.
  • Adidas Pulls German Soccer Jerseys Over Nazi Symbol Lookalike
  • New Brain Data Privacy Law in Colorado
  • School Cafeterias Brace for New Salt and Sugar Limits: A Recipe for Change
  • Playing Cards Help Solve Crimes
  • Song Lyrics Got Simpler and More Negative
  • Kids Are Loving Mocktails Now
  • Cassette Tapes Are Back in Style!
  • Ghost Bikes: A Sad Reminder in NYC
  • Mattel's New Barbie Dolls Are Here!
  • Fashion Tip from the Past: Old Paintings for Inspiration
  • Lemonade Stands Go High-Tech!
  • Penny Pinching for the Disabled
  • Dyeing for Transparency: Doritos' Secret Revealed
  • 8-Year-Old Fashion Star Max Alexander

Le Bret du Faux
The fashion prodigy: an 8-year-old designer who is already a star

Le Bret du Faux

Oct 21, 2024 · 1:53


Get ready to be blown away! In this captivating episode of the podcast "On s'en tient à Caro", we discover the incredible story of a young fashion prodigy, Max Alexander, who at only 8 years old has already reached unexpected heights. From the age of 4, Max developed a genuine passion for haute couture, thanks to his mother's teaching. With exceptional talent and maturity, he quickly began designing his own outfits, far surpassing the abilities of most children his age.

Fülke: a HVG Online közéleti podcastja
Mérlegen: Everything you need to know about corporate lending

Fülke: a HVG Online közéleti podcastja

Oct 15, 2024 · 49:32


The fourth season of Mérlegen, HVG's business podcast, continues: we keep talking with successful Hungarian entrepreneurs about how they built their own companies and what they would advise people who are just getting started. This time our guest was Kementzey Ferenc, deputy CEO of Raiffeisen Bank responsible for the corporate and capital markets divisions, with whom we talked about, among other things, what to pay attention to when submitting a loan application and how the lending market has changed in recent years. Make it come true with us! – the show is sponsored by Raiffeisen Bank. Subscribe to the Mérlegen channel! Spotify: tiny.cc/MerlegenSpotify, Apple Podcasts: tiny.cc/MerlegenApple. Listen to HVG's other podcasts! Spotify: tiny.cc/HVGpodcastokSpotify, Apple Podcasts: tiny.cc/HVGpodcastokApple, SoundCloud: tiny.cc/HVGpodcastokSC
Contents:
00:00 Intro
00:27 Max Alexander, the 8-year-old fashion designer
04:13 The ultimate dream job: traveling around the world and eating along the way
07:25 Would you post on LinkedIn that you'd been locked up?
10:38 Young people today can't even handle a job interview
15:11 In the studio: Kementzey Ferenc, deputy CEO of Raiffeisen Bank responsible for the corporate and capital markets divisions
15:21 Where does corporate loan demand in Hungary stand right now?
17:00 How long does it take for rising wages to affect loan demand?
18:11 How do the borrowing goals of large companies differ from those of small and medium-sized enterprises?
19:15 How does corporate loan advisory work?
20:20 How much time passes between the initial inquiry and the money arriving?
21:12 How hard is it for companies to come up with their own contribution?
22:16 What weak points can a loan application have?
24:46 What are the most sought-after banking products in agriculture?
26:36 Is there a difference between Hungarian and international trends in which agricultural products are popular?
27:31 Is there still demand for greenfield investments?
28:59 How big is the demand for green financing?
32:00 Has digitalization spread to corporate finance as well?
35:09 What happens when a company goes through a generational handover?
39:25 What impact is artificial intelligence having on the banking sector?
40:46 What are the arguments for newer payment solutions?
41:42 What could be the banking sector's next big challenge?
42:43 How has an average workday changed over 20 years?
43:48 How are Hungarian winemakers trying to break into the international market?
45:38 What do winemakers most often seek loans for?
47:30 One piece of advice for anyone looking to take out a corporate loan

The Official Trance Podcast - Radio Show

Fresh tunes by Omnia & UUFO & Max Alexander, Orjan Nilsen and more.

Pigeon Hour
#13: Max Alexander and I debate whether total utilitarianism implies the very repugnant conclusion

Pigeon Hour

Sep 10, 2024 · 114:24


The gang from Episode 10 is back, with yet another Consistently Candid x Pigeon Hour crossover. As Sarah from Consistently Candid describes: In this episode, Aaron Bergman and Max Alexander are back to battle it out for the philosophy crown, while I (attempt to) moderate. They discuss the Very Repugnant Conclusion, which, in the words of Claude, "posits that a world with a vast population living lives barely worth living could be considered ethically inferior to a world with an even larger population, where most people have extremely high quality lives, but a significant minority endure extreme suffering." Listen to the end to hear my uninformed opinion on who's right.
- Listen to Consistently Candid on Apple Podcasts, Spotify, or via RSS
- My blog post on suffering-focused utilitarianism
- Follow Max on Twitter and check out his blog
- Follow Sarah on Twitter
Get full access to Aaron's Blog at www.aaronbergman.net/subscribe

RNZ: Nights
Kiwi photographer exploring humanity's impact on space

RNZ: Nights

Jun 20, 2024 · 15:25


Max Alexander grew up in Papakura and has been living in London for the past 30 years. He's back in Aotearoa for the next few weeks to participate in Matariki and add a new section to his exhibition "Our Fragile Space - Protecting the Near-Space", which opened at Lloyd's of London and has been shown in Europe and the United States.

Ian Talks Comedy
Jeff Altman

Ian Talks Comedy

Apr 20, 2024 · 54:46


Jeff Altman and I discuss growing up in Syracuse; his father, Arthur, teaching him sleight of hand magic; learning more at Johns Hopkins; moving to LA and going to the Magic Castle; trying out at the Comedy Store; doing a comedy album with Denny Johnston; his Carson impression and two times he met him; doing his voice on Mary Hartman, Mary Hartman; doing voices on Norman Lear shows and Archie Bunker's Place; Celeste Holm, whom his uncle discovered; his year living off income from being on Merv Griffin; Starland Vocal Band Show; guest appearances on Mork and Mindy and WKRP; playing Huey Hogg on Dukes of Hazzard; getting the co-hosting job on Pink Lady; being told they spoke perfect English; working with Sid Caesar and Jim Varney; impressions; Rich Little; talking to Johnny right before he announced his retirement; his on-air prank calls to Letterman; Tim Thomerson; the origins of butt steak; being in the Israeli Candid Camera and almost getting arrested; his appearance on Night Court; his tough times being a cast member on Nurses; hosting Sunday Comics; Franklin Ajaye; doing an episode for the troops in the Gulf War; impersonating Bob Hope to Brooke Shields; his "dad" character; his album, "I'll Flip You Like a Cheese Sandwich"; having Seinfeld ask him to be in The Bee Movie; retiring from comedy and going full time into sleight of hand; getting married to a girl he knew from high school and moving to Raleigh, NC; Max Alexander; Pink Lady covers and original songs

Pigeon Hour
Drunk Pigeon Hour!

Pigeon Hour

Mar 9, 2024 · 95:53


Intro
Around New Years, Max Alexander, Laura Duffy, Matt and I tried to raise money for animal welfare (more specifically, the EA Animal Welfare Fund) on Twitter. We put out a list of incentives (see the pink image below), one of which was to record a drunk podcast episode if the greater Very Online Effective Altruism community managed to collectively donate $10,000.
To absolutely nobody's surprise, they did ($10k), and then did it again ($20k) and then almost did it a third time ($28,945 as of March 9, 2024). To everyone who gave or helped us spread the word, and on behalf of the untold number of animals these dollars will help, thank you.
And although our active promotion on Twitter has come to an end, it is not too late to give! I give a bit more context in a short monologue intro I recorded (sober) after the conversation, so without further ado, Drunk Pigeon Hour:
Transcript
(Note: very imperfect - sorry!)
Monologue
Hi, this is Aaron. This episode of Pigeon Hour is very special for a couple of reasons.
The first is that it was recorded in person, so three of us were physically within a couple feet of each other. Second, it was recorded while we were drunk or maybe just slightly inebriated. Honestly, I didn't get super drunk, so I hope people forgive me for that.
But the occasion for drinking was that this, a drunk Pigeon Hour episode, was an incentive for a fundraiser that a couple of friends and I hosted on Twitter, around a little bit before New Year's and basically around Christmas time. We basically said, if we raise $10,000 total, we will do a drunk Pigeon Hour podcast. And we did, in fact, we are almost at $29,000, just shy of it. So technically the fundraiser has ended, but it looks like you can still donate. So, I will figure out a way to link that.
And also just a huge thank you to everyone who donated. I know that's really cliche, but this time it really matters because we were raising money for the Effective Altruism Animal Welfare Fund, which is a strong contender for the best use of money in the universe.
Without further ado, I present me, Matt, and Laura. Unfortunately, the other co-host Max was stuck in New Jersey and so was unable to participate tragically. Yeah so here it is!
Conversation
AARON: Hello, people who are maybe listening to this. I just, like, drank alcohol for, like, the first time in a while. I don't know. Maybe I do like alcohol. Maybe I'll find that out now.
MATT: Um, All right, yeah, so this is, this is Drunk Pigeon Hour! Remember what I said earlier when I was like, as soon as we are recording, as soon as we press record, it's going to get weird and awkward.
LAURA: I am actually interested in the types of ads people get on Twitter. Like, just asking around, because I find that I get either, like, DeSantis ads. I get American Petroleum Institute ads, Ashdale College.
MATT: Weirdly, I've been getting ads for an AI assistant targeted at lobbyists. So it's, it's like step up your lobbying game, like use this like tuned, I assume it's like tuned ChatGPT or something. Um, I don't know, but it's, yeah, it's like AI assistant for lobbyists, and it's like, like, oh, like your competitors are all using this, like you need to buy this product. So, so yeah, Twitter thinks I'm a lobbyist. I haven't gotten any DeSantis ads, actually.
AARON: I think I might just like have personalization turned off. Like not because I actually like ad personalization. I think I'm just like trying to like, uh, this is, this is like a half-baked protest of them getting rid of circles.
I will try to minimize how much revenue they can make from me.MATTSo, so when I, I like went through a Tumblr phase, like very late. In like 2018, I was like, um, like I don't like, uh, like what's happening on a lot of other social media.Like maybe I'll try like Tumblr as a, as an alternative.And I would get a lot of ads for like plus-sized women's flannels.So, so like the Twitter ad targeting does not faze me because I'm like, oh, okay, like, I can, hold on.AARONSorry, keep going. I can see every ad I've ever.MATTCome across, actually, in your giant CSV of Twitter data.AARONJust because I'm a nerd. I like, download. Well, there's actually a couple of things. I just download my Twitter data once in a while. Actually do have a little web app that I might try to improve at some point, which is like, you drop it in and then it turns them. It gives you a csV, like a spreadsheet of your tweets, but that doesn't do anything with any of the other data that they put in there.MATTI feel like it's going to be hard to get meaningful information out of this giant csv in a short amount of time.AARONIt's a giant JSON, actually.MATTAre you just going to drop it all into c long and tell it to parse it for you or tell it to give you insights into your ads.AARONWait, hold on. This is such a.MATTWait. Do people call it “C-Long” or “Clong”?AARONWhy would it be long?MATTWell, because it's like Claude Long.LAURAI've never heard this phrase.MATTThis is like Anthropic's chat bot with a long context with so like you can put. Aaron will be like, oh, can I paste the entire group chat history?AARONOh yeah, I got clong. Apparently that wasn't acceptable so that it.MATTCan summarize it for me and tell me what's happened since I was last year. And everyone is like, Aaron, don't give our data to Anthropic, is already suss.LAURAEnough with the impressions feel about the Internet privacy stuff. Are you instinctively weirded out by them farming out your personal information or just like, it gives me good ads or whatever? I don't care.MATTI lean a little towards feeling weird having my data sold. I don't have a really strong, and this is probably like a personal failing of mine of not having a really strong, well formed opinion here. But I feel a little sketched out when I'm like all my data is being sold to everyone and I don't share. There is this vibe on Twitter that the EU cookies prompts are like destroying the Internet. This is regulation gone wrong. I don't share that instinct. But maybe it's just because I have average tolerance for clicking no cookies or yes cookies on stuff. And I have this vibe that will.AARONSketch down by data. I think I'm broadly fine with companies having my information and selling it to ad targeting. Specifically. I do trust Google a lot to not be weird about it, even if it's technically legal. And by be weird about it, what do mean? Like, I don't even know what I mean exactly. If one of their random employees, I don't know if I got into a fight or something with one of their random employees, it would be hard for this person to track down and just see my individual data. And that's just a random example off the top of my head. But yeah, I could see my view changing if they started, I don't know, or it started leaching into the physical world more. But it seems just like for online ads, I'm pretty cool with everything.LAURAHave you ever gone into the ad personalization and tried see what demographics they peg you?AARONOh yeah. We can pull up mine right now.LAURAIt's so much fun doing that. 
It's like they get me somewhat like the age, gender, they can predict relationship status, which is really weird.AARONThat's weird.MATTDid you test this when you were in and not in relationships to see if they got it right?LAURANo, I think it's like they accumulate data over time. I don't know. But then it's like we say that you work in a mid sized finance. Fair enough.MATTThat's sort of close.LAURAYeah.AARONSorry. Keep on podcasting.LAURAOkay.MATTDo they include political affiliation in the data you can see?AARONOkay.MATTI would have been very curious, because I think we're all a little bit idiosyncratic. I'm probably the most normie of any of us in terms of. I can be pretty easily sorted into, like, yeah, you're clearly a Democrat, but all of us have that classic slightly. I don't know what you want to call it. Like, neoliberal project vibe or, like, supply side. Yeah. Like, some of that going on in a way that I'm very curious.LAURAThe algorithm is like, advertising deSantis.AARONYeah.MATTI guess it must think that there's some probability that you're going to vote in a republican primary.LAURAI live in DC. Why on earth would I even vote, period.MATTWell, in the primary, your vote is going to count. I actually would think that in the primary, DC is probably pretty competitive, but I guess it votes pretty. I think it's worth.AARONI feel like I've seen, like, a.MATTI think it's probably hopeless to live. Find your demographic information from Twitter. But, like.AARONAge 13 to 54. Yeah, they got it right. Good job. I'm only 50, 99.9% confident. Wait, that's a pretty General.MATTWhat's this list above?AARONOh, yeah. This is such a nerd snipe. For me, it's just like seeing y'all. I don't watch any. I don't regularly watch any sort of tv series. And it's like, best guesses of, like, I assume that's what it is. It thinks you watch dune, and I haven't heard of a lot of these.MATTWait, you watch cocaine there?AARONBig bang theory? No, I definitely have watched the big Bang theory. Like, I don't know, ten years ago. I don't know. Was it just, like, random korean script.MATTOr whatever, when I got Covid real bad. Not real bad, but I was very sick and in bed in 2022. Yeah, the big bang theory was like, what I would say.AARONThese are my interest. It's actually pretty interesting, I think. Wait, hold on. Let me.MATTOh, wait, it's like, true or false for each of these?AARONNo, I think you can manually just disable and say, like, oh, I'm not, actually. And, like, I did that for Olivia Rodrigo because I posted about her once, and then it took over my feed, and so then I had to say, like, no, I'm not interested in Olivia Rodrigo.MATTWait, can you control f true here? Because almost all of these. Wait, sorry. Is that argentine politics?AARONNo, it's just this.MATTOh, wait, so it thinks you have no interest?AARONNo, this is disabled, so I haven't. And for some reason, this isn't the list. Maybe it was, like, keywords instead of topics or something, where it was the.MATTGot it.AARONYes. This is interesting. It thinks I'm interested in apple stock, and, I don't know, a lot of these are just random.MATTWait, so argentine politics was something it thought you were interested in? Yeah. Right.AARONCan.MATTDo you follow Maya on Twitter?AARONWho's Maya?MATTLike, monetarist Maya? Like, neoliberal shell two years ago.AARONI mean, maybe. Wait, hold on. Maybe I'm just like.MATTYeah, hardcore libertarianism.LAURAYeah. No, so far so good with him. I feel like.AARONMaia, is it this person? 
Oh, I am.MATTYeah.AARONOkay.MATTYeah, she was, like, neoliberal shell two years ago.AARONSorry, this is, like, such an errands. Like snipe. I got my gender right. Maybe. I don't know if I told you that. Yeah. English. Nice.MATTWait, is that dogecoin?AARONI assume there's, like, an explicit thing, which is like, we're going to err way on the side of false positives instead of false negatives, which is like. I mean, I don't know. I'm not that interested in AB club, which.MATTYou'Re well known for throwing staplers at your subordinate.AARONYeah.LAURAWait, who did you guys support in 2020 primary?MATTYou were a Pete stan.LAURAI was a Pete stan. Yes, by that point, definitely hardcore. But I totally get. In 2016, I actually was a Bernie fan, which was like, I don't know how much I was really into this, or just, like, everybody around me was into it. So I was trying to convince myself that he was better than Hillary, but I don't know, that fell apart pretty quickly once he started losing. And, yeah, I didn't really know a whole lot about politics. And then, like, six months later, I became, like, a Reddit libertarian.AARONWe think we've talked about your ideological evolution.MATTHave you ever done the thing of plotting it out on the political? I feel like that's a really interesting.LAURAExercise that doesn't capture the online. I was into Ben Shapiro.MATTReally? Oh, my God. That's such a funny lore fact.AARONI don't think I've ever listened to Ben Shapiro besides, like, random clips on Twitter that I like scroll?MATTI mean, he talks very fast. I will give him that.LAURAAnd he's funny. And I think it's like the fast talking plus being funny is like, you can get away with a lot of stuff and people just end up like, oh, sure, I'm not really listening to this because it's on in the background.AARONYeah.MATTIn defense of the Bernie thing. So I will say I did not support Bernie in 2016, but there was this moment right about when he announced where I was very intrigued. And there's something about his backstory that's very inspiring. This is a guy who has been just extraordinarily consistent in his politics for close to 50 years, was saying lots of really good stuff about gay rights when he was like, Burlington mayor way back in the day, was giving speeches on the floor of the House in the number one sound very similar to the things he's saying today, which reflects, you could say, maybe a very myopic, closed minded thing, but also an ideological consistency. That's admirable. And I think is pointing at problems that are real often. And so I think there is this thing that's, to me, very much understandable about why he was a very inspiring candidate. But when it came down to nitty gritty details and also to his decisions about who to hire subordinates and stuff, very quickly you look at the Bernie campaign alumni and the nuances of his views and stuff, and you're like, okay, wait, this is maybe an inspiring story, but does it actually hold up?AARONProbably not.LAURAYeah, that is interesting. It's like Bernie went woke in 2020, kind of fell apart, in my opinion.AARONI stopped following or not following on social media, just like following him in general, I guess. 2016 also, I was 16. You were not 16. You were.MATTYeah, I was in college at that time, so I was about 20.AARONSo that was, you can't blame it. Anything that I do under the age of 18 is like just a race when I turn 18.LAURAOkay, 2028 draft. Who do we want to be democratic nominee?AARONOh, Jesse from pigeonhole. I honestly think he should run. 
Hello, Jesse. If you're listening to this, we're going to make you listen to this. Sorry. Besides that, I don't know.MATTI don't have, like, an obvious front runner in mind.AARONWait, 2028? We might be dead by 2028. Sorry, we don't talk about AI.MATTYeah.AARONNo, but honestly, that is beyond the range of planability, I think. I don't actually think all humans are going to be dead by 2028. But that is a long way away. All I want in life is not all I want. This is actually what I want out of a political leader. Not all I want is somebody who is good on AI and also doesn't tells the Justice Department to not sue California or whatever about their gestation. Or maybe it's like New Jersey or something about the gestation crate.MATTOh, yeah. Top twelve.AARONYeah. Those are my two criteria.MATTCorey Booker is going to be right on the latter.AARONYeah.MATTI have no idea about his views on.AARONIf to some extent. Maybe this is actively changing as we speak, basically. But until recently it wasn't a salient political issue and so it was pretty hard to tell. I don't know. I don't think Biden has a strong take on it. He's like, he's like a thousand years old.LAURAWatch what Mitch should have possibly decided. That's real if we don't do mean.AARONBut like, but his executive order was way better than I would have imagined. And I, like, I tweeted about, know, I don't think I could have predicted that necessarily.MATTI agree. I mean, I think the Biden administration has been very reasonable on AI safety issues and that generally is reflective. Yeah, I think that's reflective of the.AARONTongue we know Joe Biden is listening to.MATTOkay.AARONOkay.MATTTopics that are not is like, this is a reward for the fundraiser. Do we want to talk about fundraiser and retrospective on that?AARONSure.MATTBecause I feel like, I don't know. That ended up going at least like one sigma above.AARONHow much? Wait, how much did we actually raise?MATTWe raised like 22,500.LAURAOkay. Really pissed that you don't have to go to Ava.AARONI guess this person, I won't name them, but somebody who works at a prestigious organization basically was seriously considering donating a good amount of his donation budget specifically for the shrimp costume. And, and we chatted about it over Twitter, DM, and I think he ended up not doing it, which I think was like the right call because for tax reasons, it would have been like, oh. He thought like, oh, yeah, actually, even though that's pretty funny, it's not worth losing. I don't know, maybe like 1000 out of $5,000 tax reasons or whatever. Clearly this guy is actually thinking through his donations pretty well. But I don't know, it brought him to the brink of donating several, I think, I don't know, like single digit thousands of dollars. Exactly.LAURAClearly an issue in the tax.AARONDo you have any tax take? Oh, wait, sorry.MATTYeah, I do think we should like, I mean, to the extent you are allowed by your employer too, in public space.AARONAll people at think tanks, they're supposed to go on podcast and tweet. How could you not be allowed to do that kind of thing?MATTSorry, keep going. But yeah, no, I mean, I think it's worth dwelling on it a little bit longer because I feel like, yeah, okay, so we didn't raise a billion dollars as you were interested in doing.AARONYeah. Wait, can I make the case for like. Oh, wait. Yeah. Why? Being slightly unhinged may have been actually object level. Good. Yeah, basically, I think this didn't end up exposed to. 
We learned this didn't actually end up happening. I think almost all of the impact money, because it's basically one of the same in this context. Sorry. Most of the expected money would come in the form of basically having some pretty large, probably billionaire account, just like deciding like, oh, yeah, I'll just drop a couple of mil on this funny fundraiser or whatever, or maybe less, honestly, listen, $20,000, a lot of money. It's probably more money than I have personally ever donated. On the other hand, there's definitely some pretty EA adjacent or broadly rationalist AI adjacent accounts whose net worth is in at least tens of millions of dollars, for whom $100,000 just would not actually affect their quality of life or whatever. And I think, yeah, there's not nontrivial chance going in that somebody would just decide to give a bunch of money.MATTI don't know. My view is that even the kinds of multimillionaires and billionaires that hang out on Twitter are not going to ever have dropped that much on a random fundraiser. They're more rational.AARONWell, there was proof of concept for rich people being insane. Is Balaji giving like a million dollars to James Medlock.MATTThat's true.AARONThat was pretty idiosyncratic. Sorry. So maybe that's not fair. On the other hand. On the other hand, I don't know, people do things for clout. And so, yeah, I would have, quote, tweeted. If somebody was like, oh yeah, here's $100,000 guys, I would have quote, tweeted the shit out of them. They would have gotten as much possible. I don't know. I would guess if you have a lot of rich people friends, they're also probably on Twitter, especially if it's broadly like tech money or whatever. And so there's that. There's also the fact that, I don't know, it's like object people, at least some subset of rich people have a good think. EA is basically even if they don't identify as an EA themselves, think like, oh yeah, this is broadly legit and correct or whatever. And so it's not just like a random.MATTThat's true. I do think the choice of the animal welfare fund made that harder. Right. I think if it's like bed nets, I think it's more likely that sort of random EA rich person would be like, yes, this is clearly good. And I think we chose something that I think we could all get behind.AARONBecause we have, there was a lot of politicking around.MATTYeah, we all have different estimates of the relative good of different cause areas and this was the one we could very clearly agree on, which I think is very reasonable and good. And I'm glad we raised money for the animal welfare fund, but I do think that reduces the chance of, yeah.LAURAI think it pushes the envelope towards the animal welfare fund being more acceptable as in mainstream ea.org, just like Givewell would be. And so by forcing that issue, maybe we have done more good for the.AARONThat there's like that second order effect. I do just think even though you're like, I think choosing this over AMF or whatever, global health fund or whatever decreased the chance of a random person. Not a random person, but probably decrease the total amount of expected money being given. I think that was just trumped by the fact that I think the animal welfare, the number I pull out of thin air is not necessarily not out of thin air, but very uncertain is like 1000 x or whatever relative to the standards you vote for. Quote, let it be known that there is a rabbit on the premises. Do they interact with other rodents?MATTOkay, so rabbits aren't rodents. 
We can put this on the pod. So rabbits are lagging wars, which is.AARONFuck is that?MATTIt's a whole separate category of animals.AARONI just found out that elk were part of it. Like a type of deer. This is another world shattering insight.MATTNo, but rabbits are evolutionarily not part of the same. I guess it's a family on the classification tree.AARONNobody, they taught us that in 7th grade.MATTYeah, so they're not part of the same family as rodents. They're their own thing. What freaks me out is that guinea pigs and rabbits seem like pretty similar, they have similar diet.AARONThat's what I was thinking.MATTThey have similar digestive systems, similar kind of like general needs, but they're actually like, guinea pigs are more closely related to rats than they are to rabbits. And it's like a convergent evolution thing that they ended up.AARONAll mammals are the same. Honestly.MATTYeah. So it's like, super weird, but they're not rodents, to answer your question. Rabbits do like these kinds of rabbits. So these are all pet rabbits are descended from european. They're not descended from american rabbits because.LAURAAmerican rabbits like cotton tails. Oh, those are different.MATTYeah. So these guys are the kinds of rabbits that will live in warrens. Warrens. So, like, tunnel systems that they like. Like Elizabeth Warren. Yeah. And so they'll live socially with other rabbits, and they'll dig warrens. And so they're used to living in social groups. They're used to having a space they need to keep clean. And so that's why they can be, like, litter box trained, is that they're used to having a warren where you don't just want to leave poop everywhere. Whereas american rabbits are more solitary. They live above ground, or in my understanding is they sometimes will live in holes, but only occupying a hole that another animal has dug. They won't do their hole themselves. And so then they are just not social. They're not easily litter box trained, that kind of stuff. So all the domestic rabbits are bred from european ones.AARONI was thinking, if you got a guinea pig, would they become friends? Okay.MATTSo apparently they have generally similar dispositions and it can get along, but people don't recommend it because each of them can carry diseases that can hurt the other one. And so you actually don't want to do it. But it does seem very cute to have rabbit.AARONNo, I mean, yeah. My last pet was a guinea pig, circa 20. Died like, a decade ago. I'm still not over it.MATTWould you consider another one?AARONProbably. Like, if I get a pet, it'll be like a dog or a pig. I really do want a pig. Like an actual pig.MATTWait, like, not a guinea pig? Like a full size pig?AARONYeah. I just tweeted about this. I think that they're really cool and we would be friends. I'm being slightly sarcastic, but I do think if I had a very large amount of money, then the two luxury purchases would be, like, a lot of massages and a caretaker and space and whatever else a pig needs. And so I could have a pet.MATTLike, andy organized a not EADC, but EADC adjacent trip to Rosie's farm sanctuary.AARONOh, I remember this. Yeah.MATTAnd we got to pet pigs. And they were very sweet and seems very cute and stuff. They're just like, they feel dense, not like stupid. But when you pet them, you're like, this animal is very large and heavy for its size. That was my biggest surprising takeaway, like, interacting with the hair is not soft either. 
No, they're pretty coarse, but they seem like sweeties, but they are just like very robust.LAURAHave you guys seen Dave?AARONYes.LAURAThat's like one of the top ten movies of all time.AARONYou guys watch movies? I don't know. Maybe when I was like four. I don't like.LAURAOkay, so the actor who played farmer Hoggett in this movie ended up becoming a vegan activist after he realized, after having to train all of the animals, that they were extremely intelligent. And obviously the movie is about not killing animals, and so that ended up going pretty well.AARONYeah, that's interesting. Good brown.MATTOkay, sorry. Yeah, no, this is all tracked. No, this is great. We are doing a drunk podcast rather than a sober podcast, I think, precisely because we are trying to give the people some sidetracks and stuff. Right. But I jokingly put on my list of topics like, we solved the two envelopes paradox once and for all.AARONNo, but it's two boxing.MATTNo. Two envelopes. No. So this is the fundamental challenge to questions about, I think one of the fundamental challenges to be like, you multiply out the numbers and the number.AARONYeah, I feel like I don't have like a cash take. So just like, tell me the thing.MATTOkay.AARONI'll tell you the correct answer. Yeah.MATTOkay, great. We were leading into this. You were saying, like, animal charity is 1000 x game, right?AARONConditional. Yeah.MATTAnd I think it's hard to easily get to 1000 x, but it is totally possible to get to 50 x if you just sit down and multiply out numbers and you're like, probability of sentience and welfare range.AARONI totally stand by that as my actual point estimate. Maybe like a log mean or something. I'm actually not sure, but. Sorry, keep going.MATTOkay, so one line of argument raised against this is the two envelopes problem, and I'm worried I'm going to do a poor job explaining this. Laura, please feel free to jump in if I say something wrong. So two envelopes is like, it comes from the thing of, like, suppose you're given two envelopes and you're told that one envelope has twice as much money in it as the other.AARONOh, you are going to switch back and forth forever.MATTExactly. Every time. You're like, if I switch the other envelope and it has half as much money as this envelope, then I lose 0.5. But if it has twice as much money as this envelope, then I gain one. And so I can never decide on which envelope because it always looks like it's positive ev to switch the other. So that's where the name comes from.AARONI like a part that you're like, you like goggles?MATTSo let me do the brief summary, which is that basically, depending on which underlying units you pick, whether you work in welfare range, units that are using one human as the baseline or one chicken as the baseline, you can end up with different outputs of the expected value calculation. Because it's like, basically, is it like big number of chickens times some fraction of the human welfare range that dominates? Or is it like some small probability that chickens are basically not sentient times? So then a human has like a huge human's welfare range is huge in chicken units, and which of those dominates is determined by which unit you work in.AARONI also think, yeah, this is not a good conducive to this problem. Is not conducive to alcohol or whatever. Or alcohol is not going to this issue. To this problem or whatever. In the maximally abstract envelope thing. I have an intuition that's something weird kind of probably fake going on. 
I don't actually see what the issue is here. I don't believe you yet that there's like an actual issue here. It's like, okay, just do the better one. I don't know.MATTOkay, wait, I'll get a piece of paper. Talk amongst yourselves, and I think I'll be able to show this is like.LAURAMe as the stats person, just saying I don't care about the math. At some point where it's like, look, I looked at an animal and I'm like, okay, so we have evolutionarily pretty similar paths. It would be insane to think that it's not feeling like, it's not capable of feeling hedonic pain to pretty much the same extent as me. So I'm just going to ballpark it. And I don't actually care for webs.AARONI feel like I've proven my pro animal bona fide. I think it's bona fide. But here, and I don't share that intuition, I still think that we can go into that megapig discourse. Wait, yeah, sort of. Wait, not exactly megapig discourse. Yeah, I remember. I think I got cyberbullyed by, even though they didn't cyberbully me because I was informed of offline bullying via cyber about somebody's, sorry, this is going to sound absolutely incoherent. So we'll take this part out. Yeah. I was like, oh, I think it's like some metaphysical appeal to neuron counts. You specifically told me like, oh, yeah, Mr. So and so didn't think this checked out. Or whatever. Do you know what I'm talking about?LAURAYeah.AARONOkay. No, but maybe I put it in dawn or Cringey or pretentious terms, but I do think I'm standing by my metaphysical neurons claim here. Not that I'm super confident in anything, but just that we're really radically unsure about the nature of sentience and qualia and consciousness. And probably it has something to do with neurons, at least. They're clearly related in a very boring sciency way. Yeah. It's not insane to me that, like, that, like. Like the unit of. Yeah, like the. The thing. The thing that, like, produces or like is, or like is directly, like one to one associated with, like, particular, like. Like, I guess, amount, for lack of better terms, of conscious experience, is some sort of physical thing. The neurons jumps out as the unit that might make sense. And then there's like, oh, yeah, do we really think all the neurons that control the tongue, like the motor function of the tongue, are those really make you quadrillion more important than a seal or whatever? And then I go back to, okay, even though I haven't done any research on this, maybe it's just like opiate. The neurons directly related neuron counts directly of. Sorry. Neurons directly involved in pretty low level hedonic sensations. The most obvious one would be literal opioid receptors. Maybe those are the ones that matter. This is like, kind of. I feel like we've sort of lost the plot a little.MATTOkay, this is like weird drunk math.AARONBut I think your handwriting is pretty good.MATTI think I have it. So suppose we work in human units. I have a hypothetical intervention that can help ten chickens or one human, and we assume that when I say help, it's like, help them. The same of it. So if I work in human units, I say maybe there is a 50% chance that a chicken is zero one to 1100 of a human and a 50% chance that a chicken and a human are equal. Obviously, this is a thought experiment. I'm not saying that this is my real world probabilities, but suppose that these are my credences. So I do out the EV. The thing that helps ten chickens. 
I say that, okay, in half of the world, chickens are one 100th of a human, so helping ten of them is worth, like, zero five. Sorry, helping ten of them is zero one. And so 0.5 times zero one times ten is zero five. And then in the other half of the world, I say that a chicken and a human are equal. So then my intervention helps ten chickens, which is like helping ten humans so my total credence, like the benefit in that set of worlds with my 0.5 probability, is five. And so in the end, the chicken intervention wins because it has, on net, an ev of 5.5 versus one for the human intervention. Because the human intervention always helps one human. I switch it around and I say my base unit of welfare range, or, like moral weight, or whatever you want to say, is chicken units. Like, one chicken's worth of moral weight. So in half of the world, a human is worth 100 chickens, and then in the other half of the world, a human is worth one chicken. So I do out the ev for my intervention that helps the one human. Now, in the chicken units, and in chicken units, like, half of the time, that human is worth 100 chickens. And so I get 0.5 times, 100 times one, which is 50. And then in the other half of the world, the chicken and the human are equal. And so then it's 0.5 times one, times one, because I'm helping one human, so that's 0.5. The ev is 50.5. And then I do have my ev for my chicken welfare thing. That's, like, ten chickens, and I always help ten chickens. And so it's ten as my units of good. So when I worked in human units, I said that the chickens won because it was 5.5 human units versus one human unit for helping the human. When I did it in chicken units, it was 50.5 to help the humans versus ten to help the chickens. And so now I'm like, okay, my ev is changing just based on which units I work in. And I think this is, like, the two envelopes problem that's applied to animals. Brian Tomasic has, like, a long post about this, but I think this is, like, this is a statement or an example of the problem.AARONCool.LAURACan I just say something about the moral weight project? It's like, really just. We ended up coming up with numbers, which I think may have been a bit of a mistake in the end, because I think the real value of that was going through the literature and finding out the similarities and the traits between animals and humans, and then there are a surprising number of them that we have in common. And so at the end of the day, it's a judgment call. And I don't know what you do with it, because that is, like, a legit statistical problem with things that arises when you put numbers on stuff.MATTSo I'm pretty sympathetic to what you're saying here of, like, the core insight of the moral weight project is, like, when we look at features that could plausibly determine capacity to experience welfare, we find that a pig and a human have a ton in common. Obviously, pigs cannot write poetry, but they do show evidence of grief behavior when another pig dies. And they show evidence of vocalizing in response to pain and all of these things. I think coming out of the moral waste project being like, wow. Under some form of utilitarianism, it's really hard to justify harms to, or like harms to pigs. Really. Morally matter makes complete sense. I think the challenge here is when you get to something like black soldier flies or shrimp, where when you actually look at the welfare range table, you see that the number of proxies that they likely or definitely have is remarkably low. 
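A minimal sketch, in Python, of the unit-switching calculation Matt walks through above, using the conversation's toy numbers (50% credence that a chicken's welfare range is 1/100 of a human's, 50% that they're equal); the variable names are purely illustrative, not anything from the actual moral weight project:

```python
# Toy two-envelopes setup from the conversation: one intervention helps
# 10 chickens, the other helps 1 human, each by "the same amount".
P = 0.5                 # credence in each scenario
CHICKENS_HELPED = 10
HUMANS_HELPED = 1

# --- Work in HUMAN units (one human's welfare range is always 1) ---
expected_chicken_in_human_units = P * 0.01 + P * 1.0            # 0.505
ev_chickens = expected_chicken_in_human_units * CHICKENS_HELPED  # 5.05
ev_humans = 1.0 * HUMANS_HELPED                                  # 1.0
print(ev_chickens, ev_humans)   # 5.05 1.0 -> chicken intervention looks better

# --- Work in CHICKEN units (one chicken's welfare range is always 1) ---
expected_human_in_chicken_units = P * 100 + P * 1.0              # 50.5
ev_humans = expected_human_in_chicken_units * HUMANS_HELPED      # 50.5
ev_chickens = 1.0 * CHICKENS_HELPED                              # 10.0
print(ev_humans, ev_chickens)   # 50.5 10.0 -> human intervention looks better
```

Each intervention comes out roughly five times better than the other depending on the unit chosen. The flip happens because the expected chicken-to-human ratio (0.505) and the expected human-to-chicken ratio (50.5) are not reciprocals of each other, which is the same structure as the classic two-envelopes paradox.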
The shrimp number is hinging on. It's not hinging on a ton. They share a few things. And because there aren't that many categories overall, that ends up being in the median case. Like, they have a moral weight, like one 30th of a human. And so I worry that sort of your articulation of the benefit starts to break down when you get to those animals. And we start to like, I don't know what you do without numbers there. And I think those numbers are really susceptible to this kind of 200.AARONI have a question.MATTYeah, go.AARONWait. This supposed to be like 5.5 versus one?MATTYeah.AARONAnd this is 50.5 versus ten? Yeah. It sound like the same thing to me.MATTNo, but they've inverted this case, the chickens one. So it's like when I'm working in human units, right? Like, half the time, I help.AARONIf you're working in human units, then the chicken intervention looks 5.5 times better. Yes. Wait, can I write this down over here?MATTYeah. And maybe I'm not an expert on this problem. This is just like something that tortures me when I try and sleep at night, not like a thing that I've carefully studied. So maybe I'm stating this wrong, but, yeah. When I work in human units, the 50% probability in this sort of toy example that the chickens and the humans are equal means that the fact that my intervention can help more chickens makes the ev higher. And then when I work in the chicken units, the fact that human might be 100 times more sentient than the chicken or more capable of realizing welfare, to be technical, that means the human intervention just clearly wins.AARONJust to check that I would have this right, the claim is that in human units, the chicken intervention looks 5.5 times better than the human intervention. But when you use chicken units, the human intervention looks 5.5 times better than the chicken intervention. Is that correct?MATTYes, that's right.AARONWait, hold on. Give me another minute.MATTThis is why doing this drunk was a bad idea.AARONIn human.LAURANo, I think that's actually right. And I don't know what to do about the flies and shrimp and stuff like this. This is like where I draw my line of like, okay, so lemonstone quote.MATTTweeted me, oh, my God.LAURAI think he actually had a point of, there's a type of ea that is like, I'm going to set my budget constraint and then maximize within that versus start with a blank slate and allow the reason to take me wherever it goes. And I'm definitely in the former camp of like, my budget constraint is like, I care about humans and a couple of types of animals, and I'm just like drawing the line there. And I don't know what you do with the other types of things.MATTI am very skeptical of arguments that are like, we should end Medicare to spend it all on shrimp.AARONNo one's suggesting that. No, there's like a lot of boring, prosaic reasons.MATTI guess what I'm saying is there's a sense in which, like, totally agreeing with you. But I think the challenge is that object level.AARONYeah, you set us up. The political economy, I like totally by double it.MATTI think that there is. This is great. Aaron, I think you should have to take another shot for.AARONI'm sorry, this isn't fair. How many guys, I don't even drink, so I feel like one drink is like, is it infinity times more than normalize it? So it's a little bit handle.MATTI think there has to be room for moral innovation in my view. I think that your line of thinking, we don't want to do radical things based on sort of out there moral principles in the short term. Right. 
We totally want to be very pragmatic and careful when our moral ideas sort of put us really far outside of what's socially normal. But I don't think you get to where we are. I don't know what we owe the future was like a book that maybe was not perfect, but I think it eloquently argues with the fact that the first person to be like, hey, slavery in the Americas is wrong. Or I should say really the first person who is not themselves enslaved. Because of course, the people who are actually victims of this system were like, this is wrong from the start. But the first people to be like, random white people in the north being like, hey, this system is wrong. Looks super weird. And the same is true for almost any moral innovation. And so you have to, I think saying, like, my budget constraint is totally fixed seems wrong to me because it leaves no room for being wrong about some of your fundamental morals.LAURAYeah, okay. A couple of things here. I totally get that appeal 100%. At the same time, a lot of people have said this about things that now we look back at as being really bad, like the USSR. I think communism ends up looking pretty bad in retrospect, even though I think there are a lot of very good moral intuitions underpinning it.AARONYeah, I don't know. It's like, mostly an empirical question in that case, about what government policies do to human preference satisfaction, which is like, pretty. Maybe I'm too econ. These seem like very different questions.LAURAIt's like we let our reason go astray, I think.MATTRight, we, as in some humans.AARONNo, I think. Wait, at first glance. At first glance, I think communism and things in that vicinity seem way more intuitively appealing than they actually, or than they deserve to be, basically. And the notion of who is it? Like Adam Smith? Something Smith? Yeah, like free hand of the market or whatever. Invisible hand. Invisible free hand of the bunny ear of the market. I think maybe it's like, field intuitive to me at this point, because I've heard it a lot. But no, I totally disagree that people's natural intuition was that communism can't work. I think it's like, isn't true.MATTI'm not sure you guys are disagreeing with one.AARONYeah.MATTLike, I think, Laura, if I can attempt to restate your point, is that to at least a subset of the people in the USSR at the time of the russian revolution, communism plausibly looked like the same kind of moral innovation as lots of stuff we looked back on as being really good, like the abolition of slavery or like, women's rights or any of those other things. And so you need heuristics that will defend against these false moral innovations.AARONWait, no, you guys are both wrong. Wait, hold on. No, the issue there isn't that we disregard, I guess, humans, I don't know exactly who's responsible for what, but people disregarded some sort of deserving heuristic that would have gardened against communism. The issue was that, like, it was that, like, we had, like, lots of empirical, or, like, it's not even necessarily. I mean, in this case, it is empirical evidence, but, like, like, after a couple years of, like, communism or whatever, we had, like, lots of good evidence to think, oh, no, books like that doesn't actually help people, and then they didn't take action on that. That's the problem. If we were sitting here in 1910 or whatever, and I think it's totally possible, I will be convinced communism is, in fact, the right thing to do. 
But the thing that would be wrong is if, okay, five years later, you have kids starving or people starving or whatever, and maybe you can find intellectuals who claim and seem reasonably correct that they can explain how this downstream of your policies. Then doubling down is the issue, not the ex ante hypothesis that communism is good. I don't even know if that made any sense, I think.LAURABut we're in the ex ante position right now.AARONYeah, totally. Maybe we'll find out some sort of, whether it's empirical or philosophical or something like maybe in five years or two years or whatever, there'll be some new insight that sheds light on how morally valuable shrimp are. And we should take that into account.LAURAI don't know. Because it's really easy to get good feedback when other fellow humans are starving to death versus. How are you supposed to judge? No, we've made an improvement.AARONYeah, I do think. Okay. Yes. That's like a substantial difference. Consciousness is, like, extremely hard. Nobody knows what the hell is going on. It kind of drives me insane.MATTWhomstemonga has not been driven insane by the hard problem of consciousness.AARONYeah. For real. I don't know. I don't have to say. It's like, you kind of got to make your best guess at some point.MATTOkay, wait, so maybe tacking back to how to solve it, did you successfully do math on this piece of paper?AARONMostly? No, mostly I was word selling.MATTI like the verb form there.AARONYeah. No, I mean, like, I don't have, like, a fully thought out thing. I think in part this might be because of the alcohol. I'm pretty sure that what's going on here is just that, like, in fact, like, there actually is an asymmetry between chicken units and human units, which is that. Which is that we have much better idea. The real uncertainty here is how valuable a chicken is. There's probably somebody in the world who doubts this, but I think the common sense thing and thing that everybody assumes is we basically have it because we're all humans and there's a lot of good reasons to think we have a decent idea of how valuable another human life is. And if we don't, it's going to be a lot worse for other species. And so just, like, taking that as a given, the human units are the correct unit because the thing with the unit is that you take it as given or whatever. The real uncertainty here isn't the relationship between chickens and humans. The real question is how valuable is a chicken? And so the human units are just like the correct one to use.LAURAYeah, there's something there, which is the right theory is kind of driving a lot of the problem in the two envelope stuff. Because if you just chose one theory, then the units wouldn't really matter which one. The equality theory is like, you've resolved all the inter theoretic uncertainty and so wouldn't that get rid of.AARONI don't know if you know, if there's, like. I'm not exactly sure what you mean by theory.LAURALike, are they equal, the equality theory versus are they 1100 theory? And we're assuming that each of them has five probabilities each end. So if we resolved that, it's like we decide upon the 1100 theory, then the problem goes away.AARONYeah, I mean, that's true, but you might not be able to.MATTYeah, I think it doesn't reflect our current state or, like.AARONNo, just like taking as given the numbers, like, invented, which I think is fine for the illustration of the problem. Maybe a better example is what's, like, another thing, chicken versus a rabbit. I don't know. Or like rabbits. 
I don't know.MATTChicken versus shrimp. I think it's like a real one. Because if you're the animal welfare fund, you are practically making that decision.AARONYeah. I think that becomes harder. But it's not, like, fundamentally different. And it's like the question of, like, okay, which actually makes sense, makes more sense to use as a unit. And maybe you actually can come up with two, if you can just come up with two different species for which, on the merits, they're equally valid as a unit and there's no issue anymore. It really is 50 50 in the end.MATTYeah. I don't know. I see the point you're making. With humans, we know in some sense we have much more information about how capable of realizing welfare a human is. But I guess I treat this as, like, man, I don't know. It's like why all of my confidence intervals are just, like, massive on all these things is I'm just very confused by these problems and how much that.AARONSeems like I'm confused by this one. Sorry, I'm, like, half joking. It is like maybe. I don't know, maybe I'll be less confident. Alcohol or so.MATTYeah, I don't know. I think it's maybe much more concerning to me the idea that working in a different unit changes your conclusion radically.AARONThan it is to you.LAURASometimes. I don't know if this is, like, too much of a stoner cake or something like that.AARONBring it on.LAURAI kind of doubt working with numbers at all.MATTOkay. Fit me well.LAURAIt's just like when he's.AARONStop doing that.LAURAI don't know what to do, because expected value theory. Okay, so one of the things that, when we hired a professional philosopher to talk about uncertainty.MATTPause for a sec. Howie is very sweetly washing his ears, which is very cute in the background. He's like, yeah, I see how he licks his paws and squeezes his ear.AARONIs it unethical for me to videotape?MATTNo, you're more than welcome to videotape it, but I don't know, he might be done.AARONYeah, that was out.MATTLaura, I'm very sorry. No, yeah, you were saying you hired the professional philosopher.LAURAYeah. And one of the first days, she's like, okay, well, is it the same type of uncertainty if we, say, have a one in ten chance of saving the life of a person we know for sure is conscious, versus we have a certain chance of saving the life of an animal that has, like, a one in ten probability of being sentient? These seem like different types.AARONI mean, maybe in some sense they're like different types. Sure. But what are the implications? It's not obviously the same.LAURAIt kind of calls into question as to whether we can use the same mathematical approach for analyzing each of these.AARONI think my main take is, like, you got a better idea? That was like, a generic.LAURANo, I don't.AARONYeah. It's like, okay, yeah, these numbers are, like, probably. It seems like the least bad option if you're going by intuition. I don't know. I think all things considered, sometimes using numbers is good because our brains aren't built to handle getting moral questions correct.MATTYeah, I mean, I think that there is a very strong piece of evidence for what you're saying, Aaron, which is.AARONThe whole paper on this. It's called the unreasonable efficacy of mathematics in the natural sciences.MATTOr this is. This is interesting. I was going to make sort of an easier or simpler argument, which is just like, I think the global health ea pitch of, like, we tend to get charity radically wrong.AARONOften.MATTCharities very plausibly do differ by 100 x or 1000 x in cost effectiveness. 
And most of the time, most people don't take that into account and end up helping people close to them, or helping an issue that's salient to them, or helping whatever they've heard about most, and leave what I think is very difficult to argue as not being radically more effective opportunities on the table as a result. Now, I led into this saying that I have this very profound uncertainty when it comes to human versus animal trade-offs. So I'm not saying that, yes, we just should shut up and multiply. But I do think that is sort of, like, the intuition for why the stoner take is very hard for me to endorse: we know that, in other cases, actually bringing numbers to the problem leads to saving many more lives of real people who have all of the same hopes and dreams and fears and feelings and experiences as the people who would have been saved under the alternate options.
LAURA: Isn't that just, like, still underlying this is we're sure that all humans are equal. And that's, like, our theory that we have endorsed.
AARON: Wait, what?
MATT: Or, like, on welfare ranges, the differences among different humans are sufficiently small in terms of capacity to realize welfare that plausibly they are.
AARON: Yeah, I don't think anyone believes that. Does anyone believe that? Wait, some people think that everybody's hedonic range is the same?
LAURA: Randomly select a person who lives in Kenya. You would think that they have the same welfare range, a priori, as somebody.
MATT: Who lives in the US. The fundamental statistics describing their welfare range are the same.
AARON: Yeah, I think that's probably correct. It's also, at an individual level, I think it's probably quite varied between humans.
LAURA: So I don't think we can say that we can have the same assumption about animals. And that's where it kind of breaks down, is we don't know the right theory to apply it.
AARON: Well, yeah, it's a hard question. Sorry, I'm being, like, kind of sarcastic.
LAURA: I think you have to have the theory right. And you can't easily average over theories with numbers.
MATT: Yeah, no, I mean, I think you're right. I think this is the challenge of the two-envelopes problem, is exactly this kind of thing. I'm, like, four chapters into Moral Uncertainty. The book.
AARON: By Will.
MATT: Yeah. MacAskill, Ord, and Bykvist. I'm probably getting that name wrong. But they have a third co-author who is not as much of, like, an.
AARON: Yeah, I don't know. I don't have any super eloquent take, except that, to justify the use of math right now. Although I actually think I could. Yeah, I think mostly it's like, insofar as there's any disagreement, it's like we're both pointing at the issue, pointing at a question, and saying, look at that problem. It's, like, really hard. And then I'm saying, like, yeah, I know. Shit. You should probably just do your best to answer it. Sorry, maybe I'm just not actually adding any insight here or whatever, but I agree with you that a lot of these problems are very difficult, actually. Sorry, maybe this is, like, a little bit of a nonsense. Whatever. Getting back to the hard problem of consciousness, I really do think it feels like a cruel joke that we have to, implicitly, we have to make decisions about potentially gigantic numbers of digital lives or, like, digital sentience or, you know, whatever you want to call it, without having any goddamn idea, like, what the fuck is up with consciousness. And, I don't know, it doesn't seem fair. Okay.
MATT: Yeah, wait, okay, so fundraiser. This is great.
We've done all of these branching off things. So we talked about how much we raised, which was, like, an amount that I was quite happy with, though. Maybe that's, like, selfish because I didn't have to wear a shrimp costume. And we talked about cause prio. We haven't talked about the whole fake OpenAI thing.
AARON: Fake OpenAI.
MATT: Wait. Like the entire.
AARON: Oh, well, shout out to, I really. God damn it, Qualy. I hope you turn into a human at some point, because let it be known that Qualy made a whole-ass Google Doc to plan out the whole thing and was, like, the driving. Yeah, I think it's fair to say Qualy was the driving force.
MATT: Yeah, totally. Like, absolutely had the concept, did the Google Doc. I think everybody played their parts really well, and I think that was very fun.
AARON: Yeah, you did. Good job, everybody.
MATT: But, yeah, that was fun. It was very unexpected. Also, I enjoyed that I was still seeing tweets and replies that were like, wait, this was a bit? I didn't get this, after the end of it, which maybe suggests. But if you look at the graph I think I sent in, maybe we.
AARON: Should pull up my. We can analyze my Twitter data and find out which things got how many views.
MATT: Like, you have your text here. I think the graph of donations by date is, like, I sent in the text chat between.
AARON: Maybe I can pull it up, like, media.
MATT: Like, you and me and Max and Laura. And it's very clear that that correlated with a. I think it's probably pretty close to the end.
AARON: Maybe I just missed this. Oh, Laura, thank you for making.
MATT: Yeah, the cards were amazing cards.
AARON: They're beautiful.
MATT: Oh, wait, okay, maybe it's not. I thought I said. Anyway, yeah, we got, like, a couple grand at the start, and then definitely at least five grand, maybe, like, ten grand, somewhere in the five to ten range.
AARON: Can we get a good CSV going? Do you have access to. You don't have to do this right now.
MATT: Wait, yeah, let me grab that.
AARON: I want to get, like, aerospace-engineering-grade CPUs going to analyze the causal interactions here based on, I don't know, a few kilobytes of data. It's a baby laptop.
MATT: Yeah, this is what the charts looked like. So it's basically like there was some increase in the first. We raised, like, a couple of grand in the first couple of days. Then, yeah, we raised close to ten grand over the course of the Qualy thing, and then it was basically flat for a week, and then we raised another ten grand right at the end.
AARON: That's cool. Good job, guys.
MATT: And I was very surprised by this.
AARON: Maybe I didn't really internalize that or something. Maybe I was sort of checked out at that point. Sorry.
MATT: I guess. No, you were on vacation, because when you were coming back from vacation is when you did, like, the fake Sama.
AARON: Yeah, that was on the plane.
LAURA: Okay, yeah, I remember this. My mom got there the next day. I'm like, I'm checking out, not doing anything.
AARON: Yeah, whatever. I'll get RStudio revving later. Actually, I'm gradually turning it into my worst enemy or something like that.
MATT: Wait, how so?
AARON: I just use Python because it's actually faster and catchy and I don't have to know anything. Also, wait, this is like a rant. This is sort of a totally off topic take, but something I was thinking about. No, actually, I feel like a big question is like, oh, are LLMs going to make it easy for people to do bad things? They make it easier for me to do maybe not terrible things, but things that are, like, I don't know, I guess of dubious. Various things that are mostly in the realm of copyright violation or pirating, which are not ever enforced, as far as I can tell. But, no, I just couldn't have done a lot of things in the past, but now I can, so that's my anecdote.
MATT: Okay, I have a whole Python.
AARON: You can give me a list of YouTube URLs. I guess Google must do, like, a pretty good job of policing how well public YouTube-to-MP3 sites work, because nothing really just works very well very fast. But you can just do that in Python in, like, five minutes. But I couldn't do that before, so.
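[Note: a minimal sketch of the kind of script being described here, using the yt-dlp library. The options shown are just one reasonable configuration, not the actual script from the episode, and it assumes ffmpeg is installed for the MP3 conversion.]

```python
# Download the audio of a list of YouTube URLs as MP3s.
# Requires: pip install yt-dlp, plus ffmpeg available on the PATH.
import yt_dlp

urls = [
    "https://www.youtube.com/watch?v=EXAMPLE_ID",  # placeholder URL
]

options = {
    "format": "bestaudio/best",       # take the best audio-only stream
    "outtmpl": "%(title)s.%(ext)s",   # name each file after the video title
    "postprocessors": [
        {"key": "FFmpegExtractAudio", "preferredcodec": "mp3"},
    ],
}

with yt_dlp.YoutubeDL(options) as ydl:
    ydl.download(urls)
```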
MATT: I feel like, to me, it's obvious that LLMs make it easier for people to do bad stuff, exactly as you said, because they in general make it easier for people to do stuff, and they have some protections on this, but those protections are going to be imperfect. I think the much more interesting question in some sense is, is this a step change relative to the fact that Google makes it way easier for you to do stuff, including bad stuff, and the printing press made it way easier for you to do?
AARON: I wouldn't even call it a printing press.
MATT: I, like, think, including bad stuff. So it's like, right, like, every invention that generally increases people's capability to do stuff and share information also has these bad effects. And I think the hard question is, are LLMs, wait, did I just x.
AARON: No, I don't think, wait, did I just, like, hold on. I'm pretty sure it's, like, still. Wait, how do I have four things?
LAURA: What is the benefit of LLMs versus.
AARON: You can ask it something and it tells you the answer.
LAURA: I know, but Google does this too.
AARON: I don't mean, I don't know if I have, like, a super, I don't think I have any insightful take. It's just, in some sense, maybe these are all not the same, but maybe they're all of similar magnitude. But, like, object level: now we live in a world with viruses, CRISPR. Honestly, I think to the EA movement's credit, an indefinite pause or Stop AI is just not, it's not something that I support. It's not something, like, most people support. It's not, like, the official EA position, and I think for good reason. But yeah, going back to whatever it was, like, 1416 or whatever, who knows? If somebody said somebody invented the printing press and somebody else was like, yeah, we should, well. I think there's some pretty big disanalogies, just because of, I guess, biotech in particular, but just, like, how destructive existing technologies are now. But if somebody had said back then, yeah, let's wait six months and see if we can think of any reason not to release the printing press, I don't think that would have been a terrible thing to do. I don't know, people. I feel like I'm saying something that's going to get coded as pretty extreme. But, like, ex ante. Hard ex ante. People love thinking ex post. Nobody. Like, I don't know. I don't actually think that was relevant to anything. Maybe I'm just shitfaced right now.
MATT: On one shot of vodka.
AARON: $15 just to have one shot.
MATT: I'll have a little.
AARON: Yeah. I think, is honestly, wait. Yeah, this is actually interesting. Every time I drink, I hope that it'll be the time that I discover that I like drinking, and it doesn't happen, and I think that this is just because my brain is weird. I don't hate it. I don't feel, like, bad. I don't know. I've used other drugs, which I like. Alcohol just doesn't do it for me. Yeah, screw you, alcohol.
MATT: Yes. And you're now $15.99 cheaper. Or $15.99 poorer.
AARON: Yeah, I mean, this will last me a lifetime.
MATT: You can use it for, like, cleaning your sink.
AARON: Wait, this has got to be the randomest take of all time. But, yeah, actually, like, isopropyl alcohol: top-tier disinfectant. Because you don't have to do anything with it. You leave it there, it evaporates on its own.
MATT: Honestly. Yeah.
AARON: I mean, you don't want to be in an enclosed place or whatever. Sorry. To keep. Forget. This is like.
MATT: No, I mean, it seems like a good take to me.
AARON: That's all.
MATT: Yeah, this is, like, a very non sequitur.
AARON: But what are you guys' favorite cleaning supplies?
MATT: Okay, this is kind of bad. Okay, this is not that bad. But I'm, like, a big fan of Clorox wipes.
AARON: Scandalous.
MATT: I feel like this gets looked down on a little bit because it's like, in theory, I should be using a spray cleaner and sponge more.
AARON: If you're like, art porn, what theories do you guys.
MATT: If you're very sustainable, very, like, you shouldn't just be buying your plastic bucket of Clorox-infused wet wipes and you're killing the planet.
AARON: What I thought you were going to say is, like, oh, this is, like, germaphobe-coded.
MATT: No, I think this is fine. I don't wipe down my groceries with Clorox wipes. This is like, oh, if I need to do my deep clean of the kitchen, what am I going to reach for? I feel like my roommate in college was very much, oh, I used to be this person. No, I'm saying he was, like, an anti-wet-wipe-on-sustainability-grounds person. He was like, oh, you should use a rag and a spray cleaner and wash the rag after, and then you will have not used vast quantities of resources to clean your kitchen.
AARON: At one point, I tweeted that I bought, regularly. Actually, I don't do this anymore because it's no longer practical. But I bought, regularly, 36-packs of bottled water for, like, $5 or whatever. And people, actually, I think it was, like, this is, like, close to a scissor statement, honestly. Because, object level, you know what, I am right. It's not bad for anything. I'm sorry. It just checks out. But people who are normally pretty technocratic or whatever were kind of, like, I don't know, they were, like, getting heated on it.
MATT: I think this is an amazing scissor statement.
AARON: Yeah.
MATT: Because I do.
AARON: I used to be, like, if I were to take my twelve-year-old self, I would have been incredibly offended, enraged.
MATT: And to be fair, I think in my ideal policy world, there would be a carbon tax that slightly increases the price of that bottled water. Because actually it is kind of wasteful to. There is something, something bad has happened there, and you should internalize those costs.
AARON: Yeah, I think, in this particular case, I think, like, thin plastic is just, like, not. Yeah, I don't think it would raise it a very large amount, I guess.
MATT: I think this is probably right, that even a relatively high carbon tax would not radically change the price.
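[Note: a rough back-of-the-envelope check on this claim. Every number below is an assumed round figure, not data from the episode.]

```python
# Would a carbon tax radically change the price of the ~$5 36-pack mentioned above?
# All inputs are assumed ballpark figures.
bottles_per_pack = 36
pack_price_usd = 5.00
co2e_per_bottle_kg = 0.1            # assumed: plastic, bottling, and transport combined
carbon_tax_usd_per_tonne = 100.0    # a fairly aggressive tax level

tax_per_bottle = co2e_per_bottle_kg / 1000 * carbon_tax_usd_per_tonne
tax_per_pack = tax_per_bottle * bottles_per_pack
increase = tax_per_pack / pack_price_usd

print(f"~${tax_per_bottle:.3f} per bottle, ~${tax_per_pack:.2f} per pack ({increase:.0%} increase)")
# Roughly one cent per bottle and ~7% on the pack: a real cost, but not a radical price change.
```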
LAURA: It's not just carbon, though. I think because there is land use implicated in this.
AARON: No, there's not.
LAURA: Yeah, you're filling up more landfills.
AARON: Yeah, I'm just doing, like, hearsay right now. Heresy.
MATT: Hearsay. Hearsay is going to be whatever. Well, wait, no, heresy is if you're arguing against standardly accepted doctrine. Hearsay is, like, well, it's both. Then you're just saying shit.
AARON: I'm doing both right now. Which is that, actually, landfills are usually, like, on the outskirts of town. It's like, fine.
LAURA: They're on the outskirts of town until the town sprawls, and then the elementary school is on a former landfill.
AARON: Yeah, no, I agree in principle. I don't have a conceptual reason why you're wrong. I just think, basically, honestly, the actual heuristic operating here is that I basically outsource what I should pay attention to, to other people. And since I've never seen a LessWrong post or Gwern post about how actually landfills are filling up, it's like, fine, probably.
LAURA: No, this is me being devil's advocate. I really don't care that much about personal waste.
MATT: Yeah, I mean, I think plausibly here, there is, right? So I think, object level, the things that matter, when we think about plastic: there is a carbon impact. There is a production impact, of, like, you need to think about what pollution happened when the oil was drilled and stuff. And then there is, like, a disposal impact. If you successfully get that bottle into a trash can, for what it's worth.
AARON: My bottles are going into their goddamn trash can.
MATT: Ideally a recycling. No, apparently recycling, I mean, recycling is.
AARON: Well, I mean, my sense is, like, apparently recycling. Yeah, I recycle metal. I think I do paper out of convenience.
MATT: If you successfully get that bottle handled by a waste disposal system that is properly disposing of it, rather than, like, you're throwing it on a slab, then I think my guess is that the willingness to pay, or if you really crunch the numbers really hard, it would not be, once again, a huge cost for the landfill costs. On the flip side, if you throw it in a river, that's very bad. My guess is that it would be right for everyone on Twitter to flame you for buying bottles and throwing them in a river, if you did that.
AARON: What is the impact on wild animal welfare in equilibrium? No, just kidding. This is something. Yeah, don't worry, guys. No, I was actually the Leave No Trace coordinator for my Boy Scout troop. It's actually kind of ironic because I think it's probably, like, a dumb ideology or.
LAURA: Whatever, it's a public good for the other people around you to not have a bunch of garbage around on that trail.
AARON: Yeah, I do think. I went to an overnight training for this. They're very hardcore, but basically conceptually incoherent people. I guess people aren't conceptually incoherent. Their concepts are incoherent who think it

Pigeon Hour
Best of Pigeon Hour

Pigeon Hour

Play Episode Listen Later Jan 24, 2024 107:33


Table of contents

Note: links take you to the corresponding section below; links to the original episode can be found there.

* Laura Duffy solves housing, ethics, and more [00:01:16]
* Arjun Panickssery solves books, hobbies, and blogging, but fails to solve the Sleeping Beauty problem because he's wrong on that one [00:10:47]
* Nathan Barnard on how financial regulation can inform AI regulation [00:17:16]
* Winston Oswald-Drummond on the tractability of reducing s-risk, ethics, and more [00:27:48]
* Nathan Barnard (again!) on why general intelligence is basically fake [00:34:10]
* Daniel Filan on why I'm wrong about ethics (+ Oppenheimer and what names mean in like a hardcore phil of language sense) [00:56:54]
* Holly Elmore on AI pause, wild animal welfare, and some cool biology things I couldn't fully follow but maybe you can [01:04:00]
* Max Alexander and I solve ethics, philosophy of mind, and cancel culture once and for all [01:24:43]
* Sarah Woodhouse on discovering AI x-risk, Twitter, and more [01:30:56]
* Pigeon Hour x Consistently Candid pod-crossover: I debate moral realism with Max Alexander and Sarah Hastings-Woodhouse [01:41:08]

Intro [00:00:00]

To wrap up the year of Pigeon Hour, the podcast, I put together some clips from each episode to create a best-of compilation. This was inspired by 80,000 Hours, a podcast that did the same with their episodes, and I thought it was pretty cool and tractable enough.

It's important to note that the clips I chose range in length significantly. This does not represent the quality or amount of interesting content in the episode. Sometimes there was a natural place to break the episode into a five-minute chunk, and other times it wouldn't have made sense to take a five-minute chunk out of what really needed to be a 20-minute segment. I promise I'm not just saying that.

So without further ado, please enjoy.

#1: Laura Duffy solves housing, ethics, and more [00:01:16]

In this first segment, Laura Duffy and I discuss the significance and interpretation of Aristotle's philosophical works in relation to modern ethics and virtue theory.

AARON: Econ is, like, more interesting. I don't know. I don't even remember, of all the things. I don't know, it seems, like, kind of cool, philosophy. Probably would have majored in philosophy if signaling wasn't an issue. Actually, maybe, I'm not sure if that's true. Okay. I didn't want to do the old stuff, though, so I'm actually not sure. But if I could. Aristotle, it's all wrong. Didn't you say you got a lot out of the Nicomachean Ethics, or however you pronounce that?
LAURA: The Nicomachean Ethics, a guide to how you should live your life. About ethics as applied to your life, because you can't be perfect. Utilitarians, there's no way to be that.
AARON: But he wasn't even responding to utilitarianism. I'm sure it was a good work given the time, but, like, there's, like, no other discipline in which we care, so people care so much about, like, what people thought 2000 years ago. Because, like, the presumption, I think the justified presumption, is that things have iterated and improved since then. And I think that's true. It's, like, not just a presumption.
LAURA: Humans are still rather the same, and what our needs are for living amongst each other in political society are kind of the same. I think America's founding is very influenced by what people thought 2000 years ago.
AARON: Yeah, descriptively that's probably true. But I don't know, it seems like the whole body of philosophers, they've already done the work of, like, compressing the good stuff.
Like the entire academy since like, 1400 or whatever has like, compressed the good stuff and like, gotten rid of the bad stuff. Not in like a high fidelity way, but like a better than chance way. And so the stuff that remains if you just take the state of I don't know if you read the Oxford Handbook of whatever it is, like ethics or something, the takeaways you're going to get from that are just better than the takeaways you're going to get from a summary of the state of the knowledge in any prior year. At least. Unless something weird happened. And I don't know. I don't know if that makes sense.LAURA: I think we're talking about two different things, though. Okay. In terms of knowledge about logic or something or, I don't know, argumentation about trying to derive the correct moral theory or something, versus how should we think about our own lives. I don't see any reason as to why the framework of virtue theory is incorrect and just because it's old. There's many virtue theorists now who are like, oh yeah, they were really on to something and we need to adapt it for the times in which we live and the kind of societies we live in now. But it's still like there was a huge kernel of truth in at least the way of thinking that Aristotle put forth in terms of balancing the different virtues that you care about and trying to find. I think this is true. Right? Like take one virtue of his humor. You don't want to be on one extreme where you're just basically a meme your entire life. Everybody thinks you're funny, but that's just not very serious. But you don't want to be a boar and so you want to find somewhere in the middle where it's like you have a good sense of humor, but you can still function and be respected by other people.AARON: Yeah. Once again, I agree. Well, I don't agree with everything. I agree with a lot of what you just said. I think there was like two main points of either confusion or disagreement. And like, the first one is that I definitely think, no, Aristotle shouldn't be discounted or like his ideas or virtue ethics or anything like that shouldn't be discounted because they were canonical texts or something were written a long time ago. I guess it's just like a presumption that I have a pretty strong presumption that conditional on them being good, they would also be written about today. And so you don't actually need to go back to the founding texts and then in fact, you probably shouldn't because the good stuff will be explained better and not in weird it looks like weird terms. The terms are used differently and they're like translations from Aramaic or whatever. Probably not Aramaic, probably something else. And yeah, I'm not sure if you.LAURA: Agree with this because we have certain assumptions about what words like purpose mean now that we're probably a bit richer in the old conception of them like telos or happiness. Right. Udaimnia is much better concept and to read the original text and see how those different concepts work together is actually quite enriching compared to how do people use these words now. And it would take like I don't know, I think there just is a lot of value of looking at how these were originally conceived because popularizers of the works now or people who are seriously doing philosophy using these concepts. You just don't have the background knowledge that's necessary to understand them fully if you don't read the canonical text.AARON: Yeah, I think that would be true. If you are a native speaker. Do you know Greek? 
If you know Greek, this is like dumb because then you're just right.LAURA: I did take a quarter of it.AARON: Oh God. Oh my God. I don't know if that counts, but that's like more than anybody should ever take. No, I'm just kidding. That's very cool. No, because I was going to say if you're a native speaker of Greek and you have the connotations of the word eudaimonia and you were like living in the temper shuttle, I would say. Yeah, that's true actually. That's a lot of nuanced, connotation and context that definitely gets lost with translation. But once you take the jump of reading English translations of the texts, not you may as well but there's nothing super special. You're not getting any privileged knowledge from saying the word eudaimonia as opposed to just saying some other term as a reference to that concept or something. You're absorbing the connotation in the context via English, I guess, via the mind of literally the translators who have like.LAURA: Yeah, well see, I tried to learn virtue theory by any other route than reading Aristotle.AARON: Oh God.LAURA: I took a course specifically on Plato and Aristotle.AARON: Sorry, I'm not laughing at you. I'm just like the opposite type of philosophy person.LAURA: But keep going. Fair. But she had us read his physics before we read Nicomachi.AARON: Think he was wrong about all that.LAURA: Stuff, but it made you understand what he meant by his teleology theory so much better in a way that I could not get if I was reading some modern thing.AARON: I don't know, I feel like you probably could. No, sorry, that's not true. I don't think you could get what Aristotle the man truly believed as well via a modern text. But is that what you? Depends. If you're trying to be a scholar of Aristotle, maybe that's important. If you're trying to find the best or truest ethics and learn the lessons of how to live, that's like a different type of task. I don't think Aristotle the man should be all that privileged in that.LAURA: If all of the modern people who are talking about virtue theory are basically Aristotle, then I don't see the difference.AARON: Oh, yeah, I guess. Fair enough. And then I would say, like, oh, well, they should probably start. Is that in fact the state of the things in virtue theory? I don't even know.LAURA: I don't know either.#2 Arjun Panickssery solves books, hobbies, and blogging, but fails to solve the Sleeping Beauty problem because he's wrong on that one [00:10:47]All right, next, Arjun Panixery and I explore the effectiveness of reading books in retaining and incorporating knowledge, discussing the value of long form content and the impact of great literary works on understanding and shaping personal worldviews.ARJUN: Oh, you were in the book chat, though. The book rant group chat, right?AARON: Yeah, I think I might have just not read any of it. So do you want to fill me in on what I should have read?ARJUN: Yeah, it's group chat of a bunch of people where we were arguing about a bunch of claims related to books. One of them is that most people don't remember pretty much anything from books that they read, right? They read a book and then, like, a few months later, if you ask them about it, they'll just say one page's worth of information or maybe like, a few paragraphs. The other is that what is it exactly? It's that if you read a lot of books, it could be that you just incorporate the information that's important into your existing models and then just forget the information. So it's actually fine. 
Isn't this what you wrote in your blog post or whatever? I think that's why I added you to that.AARON: Oh, thank you. I'm sorry I'm such a bad group chat participant. Yeah, honestly, I wrote that a while ago. I don't fully remember exactly what it says, but at least one of the things that it said was and that I still basically stand by, is that it's basically just like it's increasing the salience of a set of ideas more so than just filling your brain with more facts. And I think this is probably true insofar as the facts support a set of common themes or ideas that are kind of like the intellectual core of it. It would be really hard. Okay, so this is not a book, but okay. I've talked about how much I love an 80,000 hours podcast, and I've listened to, I don't think every episode, but at least 100 of the episodes. And no, you're just, like, not going to definitely I've forgotten most of the actual almost all of the actual propositional pieces of information said, but you're just not going to convince me that it's completely not affecting either model of the world or stuff that I know or whatever. I mean, there are facts that I could list. I think maybe I should try.ARJUN: Sure.AARON: Yeah. So what's your take on book other long form?ARJUN: Oh, I don't know. I'm still quite confused or I think the impetus for the group chat's creation was actually Hanania's post where he wrote the case against most books or most was in parentheses or something. I mean, there's a lot of things going on in that post. He just goes off against a bunch of different categories of books that are sort of not closely related. Like, he goes off against great. I mean, this is not the exact take he gives, but it's something like the books that are considered great are considered great literature for some sort of contingent reason, not because they're the best at getting you information that you want.AARON: This is, like, another topic. But I'm, like, anti great books. In fact, I'm anti great usually just means old and famous. So insofar as that's what we mean by I'm like, I think this is a bad thing, or, like, I don't know, aristotle is basically wrong about everything and stuff like that.ARJUN: Right, yeah. Wait, we could return to this. I guess this could also be divided into its component categories. He spends more time, though, I think, attacking a certain kind of nonfiction book that he describes as the kind of book that somebody pitches to a publisher and basically expands a single essay's worth of content into with a bunch of anecdotes and stuff. He's like, most of these books are just not very useful to read, I guess. I agree with that.AARON: Yeah. Is there one that comes to mind as, like, an? Mean, I think of Malcolm Gladwell as, like, the kind of I haven't actually read any of his stuff in a while, but I did, I think, when I started reading nonfiction or with any sort of intent, I read. A bunch of his stuff or whatever and vaguely remember that this is basically what he like for better or.ARJUN: Um yeah, I guess so. But he's almost, like, trying to do it on purpose. This is the experience that you're getting by reading a Malcolm Gladwell book. It's like talib. Right? It's just him just ranting. I'm thinking, I guess, of books that are about something. So, like, if you have a book that's know negotiation or something, it'll be filled with a bunch of anecdotes that are of dubious usefulness. Or if you get a book that's just about some sort of topic, there'll be historical trivia that's irrelevant. 
Maybe I can think of an example.AARON: Yeah. So the last thing I tried to read, maybe I am but haven't in a couple of weeks or whatever, is like, the Derek Parfit biography. And part of this is motivated because I don't even like biographies in general for some reason, I don't know. But I don't know. He's, like, an important guy. Some of the anecdotes that I heard were shockingly close to home for me, or not close to home, but close to my brain or something. So I was like, okay, maybe I'll see if this guy's like the smarter version of Aaron Bergman. And it's not totally true.ARJUN: Sure, I haven't read the book, but I saw tweet threads about it, as one does, and I saw things that are obviously false. Right. It's the claims that he read, like, a certain number of pages while brushing his teeth. That's, like, anatomically impossible or whatever. Did you get to that part? Or I assumed no, I also saw.AARON: That tweet and this is not something that I do, but I don't know if it's anatomically impossible. Yeah, it takes a little bit of effort to figure out how to do that, I guess. I don't think that's necessarily false or whatever, but this is probably not the most important.ARJUN: Maybe it takes long time to brush his teeth.#3: Nathan Barnard on how financial regulation can inform AI regulation [00:17:16]In this next segment, Nathan Barnard and I dive into the complexities of AI regulation, including potential challenges and outcomes of governing AI in relation to economic growth and existential security. And we compare it to banking regulation as well.AARON: Yeah, I don't know. I just get gloomy for, I think justified reasons when people talk about, oh yeah, here's the nine step process that has to take place and then maybe there's like a 20% chance that we'll be able to regulate AI effectively. I'm being facetious or exaggerating, something like that, but not by a gigantic amount.NATHAN: I think this is pretty radically different to my mainline expectation.AARON: What's your mainline expectation?NATHAN: I suppose I expect like AI to come with an increasing importance past economy and to come up to really like a very large fraction of the economy before really crazy stuff starts happening and this world is going very anonymous. Anonymous, anonymous, anonymous. I know the word is it'd be very unusual if this extremely large sector economy which was impacted like a very large number of people's lives remains like broadly unregulated.AARON: It'll be regulated, but just maybe in a stupid way.NATHAN: Sure, yes, maybe in a stupid way. I suppose critically, do you expect the stupid way to be like too conservative or too like the specific question of AI accenture it's basically too conservative or too lenient or I just won't be able to interact with this.AARON: I guess generally too lenient, but also mostly on a different axis where just like I don't actually know enough. I don't feel like I've read learned about various governance proposals to have a good object level take on this. But my broad prior is that there are just a lot of ways to for anything. There's a lot of ways to regulate something poorly. And the reason insofar as anything isn't regulated poorly it's because of a lot of trial and error.NATHAN: Maybe.AARON: I mean, there's probably exceptions, right? I don't know. Tax Americana is like maybe we didn't just kept winning wars starting with World War II. I guess just like maybe like a counterexample or something like that.NATHAN: Yeah, I think I still mostly disagree with this. Oh, cool. 
Yeah. I suppose I see a much broader spectrum between bad regulation and good regulation. I agree it's, like, a very small amount; the space of optimal regulation is very small. But I think we have to hit that space for regulation to be helpful. Especially, especially if you consider that if you sort of buy the AI existential safety risk, then the downside, it's not this quite fine balancing act between too much consumer protection and stifling competition and stifling innovation too much. It's, like, trying to end this quite specific, very bad outcome, which is maybe much worse than somewhat slowing economic growth, at least somewhat, particularly if we think we're going to get these very explosive rates of economic growth really quite soon. And the cost of slowing down economic growth, even by quite a large percentage, is very small compared to the cost of sort of an accidental catastrophe. I sort of think of slowing economic growth as the main cost of, the main way regulation goes wrong currently.
AARON: I think in an actual sense that is correct. There's the question of, like, okay, Congress in the States, like, it's better than nothing. I'm glad it's not anarchy, in terms of, like, I'm glad we have a legislature.
NATHAN: I'm also glad about the United States.
AARON: How reasons-responsive is Congress? I don't think reasons-responsive enough to make it so that the first big law that gets passed, insofar as there is one, or if there is one, is on the Pareto frontier trading off between economic growth and existential security. It's going to be way inside of that production frontier or whatever. It's going to suck on every action, maybe not every act, but at least, like, some relevant actions.
NATHAN: Yeah, that doesn't seem, like, obviously true to me. I think Dodd-Frank was quite a good law.
AARON: That came after 2008, right?
NATHAN: Yeah, correct. Yeah, there you go. No, I agree. I'm not especially confident about doing regulation before there's some quite bad, before there's a quite bad warning shot. And yes, if we're in a world where we have no warning shots and we're just, like, blindsided by everyone getting turned into, everyone getting stripped to their atoms within 3 seconds, this is not good. But in a world where we do have one of those shots, and I think Glass-Steagall is good law. Not good law as a technical term: I think Glass-Steagall was a good piece of legislation. I think Dodd-Frank was a good piece of legislation. I think the 2008 stimulus bill was a good piece of legislation. I think the Troubled Asset Relief Program is a good piece of legislation.
AARON: I recognize these terms, and I know some of them, and others I do not know the contents of.
NATHAN: Yeah, so Glass-Steagall was the financial regulation passed in 1933 after the Great Depression. The Troubled Asset Relief Program was passed in, I think, 2008, maybe 2009, to help recapitalize banks. Dodd-Frank was the sort of landmark post-financial-crisis piece of legislation, passed in 2010. I think these are all good pieces of legislation. Now, I think, like, financial regulation is probably unusually good amongst US legislation. This is, like, a quite weak take, I guess. It's unusually.
AARON: So. I don't actually know the pre-Depression financial history at all, but I feel like the more relevant comparison to the 21st century era is, what was the regulatory regime in 1925 or something? I just don't know.
NATHAN: Yeah, I know a bit.
I haven't read this stuff especially deeply, and so I don't want to, don't want to be too overconfident here. But sort of the core pieces which were sort of important for the Great Depression going very badly were: yeah, no distinction between commercial banks and investment banks. So such a bank could do much riskier, much riskier things with, like, customer deposits than they could from 1933 until the repeal of Glass-Steagall. And combine that with no deposit insurance, and if you sort of have the combination of banks being able to do quite risky things with depositors' money and no deposit insurance, this is quite dangerous, as we now know. And Glass-Steagall's repeal.
AARON: I'm an expert in the sense that I have the Wikipedia page up. Well, yeah, there was a bunch of things. Basically, there's the First Bank of the United States. There's the Second Bank of the United States. There's the free banking era. There was the era of national banks. Yada, yada, yada. It looks like 1907, there was some panic. I vaguely remember this from, like, AP US History, like, seven years ago or.
NATHAN: Yes, I suppose, in short, I sort of agree that the record of sort of non-post-crisis legislation is, like, not very good, but I think the record of post-crisis legislation, really, at least in the financial sector, really is quite good. I'm sure lots of people disagree with this, but this is my take.

#4 Winston Oswald-Drummond on the tractability of reducing s-risk, ethics, and more [00:27:48]

Up next, Winston Oswald-Drummond and I talk about the effectiveness and impact of donating to various research organizations, such as suffering-focused s-risk organizations. We discuss tractability, expected value, and essentially where we should give our money.

AARON: Okay, nice. Yeah. Where to go from here? I feel like largely we're on the same page, I feel like.
WINSTON: Yeah. Is your disagreement mostly tractability, then? Maybe we should get into the disagreement.
AARON: Yeah. I don't even know if I've specified, but insofar as I have one, yes, it's tractability. This is the reason why I haven't donated very much to anywhere, for money reasons. But insofar as I have, I have not donated to CLR or CRS, because I don't see a theory of change that connects the research currently being done to actually reducing s-risks. And I feel like there must be something, because there's a lot of extremely smart people at both of these orgs or whatever, and clearly they thought about this, and maybe the answer is it's very general and the outcome is just so big in magnitude that anything kind.
WINSTON: Of that is part of it, I think. Yeah, part of it is, like, an expected value thing, and also it's just very neglected. So it's like, you want some people working on this, I think, at least. Even if it's unlikely to work. Yeah, even that might be underselling it, though. I mean, I do think there's people at CRS and CLR, like, talking to people at AI labs and some people in politics and these types of things. And hopefully the research is a way to know what to try to get done at these places. You want to have some concrete recommendations, and I think obviously people have to also be willing to listen to you, but I think there is some work being done on that, and research is partially just, like, a community building thing as well. It's a credible signal that you were smart and have thought about this, and so it gives people reason to listen to you, and maybe that mostly pays off later on in the future.
And I guess one thing is that I just don't there's definitely things I mean, first of all, I haven't really stayed up to date on what's going on, so I haven't even done I've done zero research for this podcast episode, for example. Very responsible and insofar as I've know things about these. Orgs. It's just based on what's on their website at some given time. So insofar as there's outreach going on, not like behind the scenes, but just not in a super public way, or I guess you could call that behind the scenes. I just don't have reason to, I guess, know about that. And I guess, yeah, I'm pretty comfortable. I don't even know if this is considered biting a bullet for the crowd that will be listening to this, if that's anybody but with just like yeah, saying a very small change for a very large magnitude, just, like, checks out. You can just do expected value reasoning and that's basically correct, like a correct way of thinking about ethics. But even I don't know how much you know specifically or, like, how much you're allowed want to reveal, but if there was a particular alignment agenda that I guess you in a broad sense, like the suffering focused research community thought was particularly promising and relative to other tractable, I guess, generic alignment recommendations. And you were doing research on that and trying to push that into the alignment mainstream, which is not very mainstream. And then with the hope that that jumps into the AI mainstream. Even if that's kind of a long chain of events. I think I would be a lot more enthusiastic about I don't know that type of agenda, because it feels like there's like a particular story you're telling where it cashes out in the end. You know what I mean?WINSTON: Yeah, I'm not the expert on this stuff, but I do think you just mean I think there's some things about influencing alignment and powerful AI for sure. Maybe not like a full on, like, this is our alignment proposal and it also handles Sris. But some things we could ask AI labs that are already building, like AGI, we could say, can you also implement these sort of, like, safeguards so if you failed alignment, you fail sort of gracefully and don't cause lots of suffering.AARON: Right?WINSTON: Yeah. Or maybe there are other things too, which also seem potentially more tractable. Even if you solve alignment in some sense, like aligning with whatever the human operator tells the AI to do, then you can also get the issue that malevolent actors can take control of the AI and then what they want also causes lots of suffering that type of alignment wouldn't. Yeah, and I guess I tend to be somewhat skeptical of coherent extrapolated volition and things like this, where the idea is sort of like it'll just figure out our values and do the right thing. So, yeah, there's some ways to push on this without having a full alignment plan, but I'm not sure if that counts as what you were saying.AARON: No, I guess it does. Yeah, it sounds like it does. And it could be that I'm just kind of mistaken about the degree to which that type of research and outreach is going on. That sounds like it's at least partially true.#5: Nathan Barnard (again!) on why general intelligence is basically fake [00:34:10]Up next, Nathan Barnard is back for his second episode. And we talked about the nature of general intelligence, its relationship with language and the implications of specialized brain functions on the understanding of human cognitive abilities.NATHAN: Yes. 
This, like, symbolic, symbolic reasoning stuff. Yeah. So I think if I was, like, making the, if I was, like, making the case for general intelligence being real, I wouldn't have symbolic reasoning, but I would have language stuff. I'd have this hierarchical structure thing, which.
AARON: I would probably, so I think of at least most uses of language, and central examples, as a type of symbolic reasoning, because words mean things. They're, like, yeah, pointers to objects or something like that.
NATHAN: Yeah, I think I'm, like, pretty confident that this isn't a good enough description of general intelligence. So, for instance, there's a bit in your brain called, I'm using a checklist so I don't fuck this up, Wernicke's area. The ability to use words as pointers, as these arbitrary signs, happens mostly in this area of the brain called Wernicke's area. But very famously, you can have Wernicke's aphasics, who lose the ability to do language comprehension and lose the ability to consistently use words as pointers, as signs to point to things, but still have perfectly good spatial reasoning abilities. And so, conversely, people with Broca's aphasia, where the Broca's region of their brain fucks up, will not be able to form fluent sentences and have some problems with syntax, and they'll still be able to have very good spatial reasoning. They could still, for instance, be, like, good engineers, and do many problems which, like, come up in engineering.
AARON: Yeah, I totally buy that. I don't think language is the central thing. I think it's, like, an outgrowth of, like, I don't know, there's, like, a simplified model I could make, which is, like, it's an outgrowth of whatever general intelligence really is. But whatever the best spatial or graphical model is, I don't think language is cognition.
NATHAN: Yes, this is a really big debate in psycholinguistics, as to whether language is, like, an outgrowth of other abilities the brain has, whether language, whether there's very specialized language modules. Yeah, this is just, like, a very live debate in psycholinguistics at the moment. I actually do lean towards, the reason I've been talking about this, am I actually just going to explain this hierarchical structure thing? Yeah, I keep talking about it. So one theory for how you can comprehend new sentences, like, the dominant theory in linguistics of how you can comprehend new sentences, is you break them up into, like, you break them up into, like, chunks, and you form these chunks together in this, like, tree structure. So something like, if you hear, like, a totally novel sentence like "the pit bull mastiff flopped around deliciously" or something, you can comprehend what the sentence means despite the fact you've never heard it. The theory behind this is, yes, this can be broken up into this tree structure, where the different, like, ah, like, bits of the sentence. So, like, "the mastiff" would be, like, one bit, and then you have, like, another bit, which is, like, the mastiff, I can't remember what I said, rolled around, so that'd be, like, another bit, and then you'd have connectors to other parts.
AARON: Okay.
NATHAN: So, the mastiff rolling around. One theory of one of the sort of distinctive things that humans have, distinctive abilities, is, like, this quite general ability to break things up into these tree structures. This is controversial within psycholinguistics, but it's broadly an area which I broadly buy, because we do see harms to other areas of intelligence. You get much worse at, like, Raven's Progressive Matrices, for instance, when you have, like, an injury to Broca's area, but, like, not worse at tests of, like, spatial reasoning, for instance.
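[Note: a toy version of the tree structure being described, using NLTK. The bracketing is my own rough constituency guess for the example sentence, not an authoritative parse.]

```python
# Chunks-within-chunks for the novel example sentence discussed above.
# Requires: pip install nltk. The bracketing is a hand-written rough guess.
from nltk import Tree

tree = Tree.fromstring("""
(S
  (NP (Det the) (N pit) (N bull) (N mastiff))
  (VP (V flopped) (Prt around) (Adv deliciously)))
""")

tree.pretty_print()  # prints the hierarchical structure as ASCII art

# The point: a hearer who has never encountered this exact sentence can still
# build this kind of hierarchical chunking and read the meaning off of it.
```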
AARON: So what is, like, is there, like, a main alternative to, like, how humans.
NATHAN: Understand language? As far as this specificity of how we parse completely novel sentences, as far as where, this is just, like, this is just, like, the academic consensus. Okay.
AARON: I mean, it sounds totally, like, right? I don't know.
NATHAN: Yeah. But yeah, I suppose going back to saying, how far is language, like, an outgrowth of general intelligence? An outgrowth of general intelligence versus having much more specialized language modules? Yeah, I lean towards the latter, despite, yeah, I still don't want to give too strong of a personal opinion here, because I'm not a linguist. This is a podcast.
AARON: You're allowed to give takes. No one's going to say this is, like, the academic. We want takes.
NATHAN: We want takes. Well, what's gone to my head is.
AARON: I.
NATHAN: Think language is not an outgrowth of other abilities. I think the main justification for this, I think, is the loss of other abilities we see when you have damage to Broca's area and Wernicke's area.
AARON: Okay, cool. So I think we basically agree on that. And also, I guess, one thing to highlight is, I think outgrowth can mean a couple of different things. I definitely think it's plausible. I haven't read about this. I think I did at some point, but not in a while. But outgrowth could mean temporally or whatever. I think I'm kind of inclined to think it's not that straightforward. You could have coevolution, where language per se encourages both its own development and the development of some general underlying trait or something.
NATHAN: Yeah. Which seems likely.
AARON: Okay, cool. So why don't humans have general intelligence?
NATHAN: Right. Yeah. As I was sort of talking about previously.
AARON: Okay.
NATHAN: I think, I think I'd like to, use, go back to, like, a high level, like, a high level argument, which is: there appear to be, very surprisingly, like, much higher levels of functional specialization in brains than you expect. You can lose much more specific abilities than you expect to be able to lose. You can lose specifically the ability, a famous example is, like, face blindness, actually. You probably lose the ability to specifically recognize things which you're, like, an expert in.
AARON: Who does it, or who loses this ability?
NATHAN: If you've damaged your fusiform face area, you'll lose the ability to recognize faces, but nothing else.
AARON: Okay.
NATHAN: And there's this general pattern that your brain is much more, you can lose much more specific abilities than you expect. So, for instance, if you sort of have damage to your ventromedial prefrontal cortex, you can say the reasoning for why you shouldn't compulsively gamble but still compulsively gamble.
AARON: For instance, okay, I understand this, not gambling per se, but, like, executive function stuff, at a visceral level. Okay, keep going.
NATHAN: Yeah. Some other nice examples of this. I think memory is quite intuitive. So there's, like, a very famous patient called Patient HM, who had his hippocampus removed and so, as a result, lost all declarative memory. So all memory of specific facts and things which happened in his life.
He just couldn't remember any of these things, but still perfectly functioning otherwise. I think at a really high level, I think this functional specialization is probably the strongest piece of evidence against the general intelligence hypothesis. I think fundamentally, general intelligence hypothesis implies that, like, if you, like yeah, if you was, like, harm a piece of your brain, if you have some brain injury, you might like generically get worse at tasks you like, generically get worse at, like at like all task groups use general intelligence. But I think suggesting people, including general intelligence, like the ability to write, the ability to speak, maybe not speak, the ability to do math, you do have.AARON: This it's just not as easy to analyze in a Cogsy paper which IQ or whatever. So there is something where if somebody has a particular cubic centimeter of their brain taken out, that's really excellent evidence about what that cubic centimeter does or whatever, but that non spatial modification is just harder to study and analyze. I guess we'll give people drugs, right? Suppose that set aside the psychometric stuff. But suppose that general intelligence is mostly a thing or whatever and you actually can ratchet it up and down. This is probably just true, right? You can probably give somebody different doses of, like, various drugs. I don't know, like laughing gas, like like, yeah, like probably, probably weed. Like I don't know.NATHAN: So I think this just probably isn't true. Your working memory corrects quite strongly with G and having better working memory generic can make you much better at lots of tasks if you have like.AARON: Yeah.NATHAN: Sorry, but this is just like a specific ability. It's like just specifically your working memory, which is improved if you go memory to a drugs. Improved working memory. I think it's like a few things like memory attention, maybe something like decision making, which are all like extremely useful abilities and improve how well other cognitive abilities work. But they're all separate things. If you improved your attention abilities, your working memory, but you sort of had some brain injury, which sort of meant you sort of had lost ability to pass syntax, you would not get better at passing syntax. And you can also use things separately. You can also improve attention and improve working memory separately, which just it's not just this one dial which you can turn up.AARON: There's good reason to expect that we can't turn it up because evolution is already sort of like maximizing, given the relevant constraints. Right. So you would need to be looking just like injuries. Maybe there are studies where they try to increase people's, they try to add a cubic centimeter to someone's brain, but normally it's like the opposite. You start from some high baseline and then see what faculties you lose. Just to clarify, I guess.NATHAN: Yeah, sorry, I think I've lost the you still think there probably is some general intelligence ability to turn up?AARON: Honestly, I think I haven't thought about this nearly as much as you. I kind of don't know what I think at some level. If I could just write down all of the different components and there are like 74 of them and what I think of a general intelligence consists of does that make it I guess in some sense, yeah, that does make it less of an ontologically legit thing or something. 
I think I think the thing I want to get the motivating thing here is that with humans yet you can like we know humans range in IQ, and there's, like, setting aside a very tiny subset of people with severe brain injuries or development disorders or whatever. Almost everybody has some sort of symbolic reasoning that they can do to some degree. Whereas the smartest maybe I'm wrong about this, but as far as I know, the smartest squirrel is not going to be able to have something semantically represent something else. And that's what I intuitively want to appeal to, you know what I mean?NATHAN: Yeah, I know what you're guessing at. So I think there's like two interesting things here. So I think one is, could a squirrel do this? I'm guessing a squirrel couldn't do this, but a dog can, or like a dog probably can. A chimpanzee definitely can.AARON: Do what?NATHAN: Chimpanzees can definitely learn to associate arbitrary signs, things in the world with arbitrary signs.AARON: Yes, but maybe I'm just adding on epicentercles here, but I feel like correct me if I'm wrong, but I think that maybe I'm just wrong about this, but I would assume that Chicken Tees cannot use that sign in a domain that is qualitatively different from the ones they've been in. Right. So, like, a dog will know that a certain sign means sit or whatever, but maybe that's not a good I.NATHAN: Don'T know think this is basically not true.AARON: Okay.NATHAN: And we sort of know this from teaching.AARON: Teaching.NATHAN: There's like a famously cocoa de guerrilla. Also a bonobo whose name I can't remember were taught sign language. And the thing they were consistently bad at was, like, putting together sentences they could learn quite large vocabularies learning to associate by large, I mean in the hundreds of words, in the low hundreds of words which they could consistently use consistently use correctly.AARON: What do you mean by, like, in what sense? What is bonobo using?NATHAN: A very famous and quite controversial example is like, coco gorilla was like, saw a swan outside and signed water bird. That's like, a controversial example. But other things, I think, which are controversial here is like, the syntax part of putting water and bird together is the controversial part, but it's not the controversial part that she could see a swan and call that a bird.AARON: Yeah, I mean, this is kind of just making me think, okay, maybe the threshold for D is just like at the chimp level or something. We are like or whatever the most like that. Sure. If a species really can generate from a prefix and a suffix or whatever, a concept that they hadn't learned before.NATHAN: Yeah, this is a controversial this is like a controversial example of that the addition to is the controversial part. Yeah, I suppose maybe brings back to why I think this matters is will there be this threshold which AIS cross such that their reasoning after this is qualitatively different to their reasoning previously? And this is like two things. One, like a much faster increase in AI capabilities and two, alignment techniques which worked on systems which didn't have g will no longer work. Systems which do have g. Brings back to why I think this actually matters. But I think if we're sort of accepting it, I think elephants probably also if you think that if we're saying, like, g is like a level of chimpanzees, chimpanzees just, like, don't don't look like quantitatively different to, like, don't look like that qualitatively different to, like, other animals. 
Now, lots of other animals live in similar complex social groups. Lots of other animals use tools. AARON: Yeah, sure. For one thing, I don't think there's going to be a discontinuity, in the same way that there wasn't a discontinuity at any point in humans' evolution from the first prokaryotic cells, or eukaryotic, one of those two, or both, I guess. Back to my train of thought: yes, I know it's controversial, but let's just suppose that the sign language thing with the water bird was legit and not a random one-off fluke or something. Then maybe this is just some sort of weird vestigial evolutionary accident that actually isn't very beneficial for chimpanzees, something they just stumbled their way into, and it enabled evolution to bootstrap chimp genomes into human genomes. Because at some point the smartest... actually, I don't know. Honestly, I don't have a great grasp of evolutionary biology or evolution at all. But, yeah, it could just be not that helpful for chimps and helpful for an extremely smart chimp that looks kind of different, or something like that. NATHAN: Yeah. So I suppose the other thing going on here, and I don't want to keep banging on about this, is that you can lose linguistic ability. This happens in stroke victims, for instance. It's not that rare. You just lose linguistic ability but still have all the other abilities which we sort of think of as general intelligence, and which I think would be included under the general intelligence hypothesis. AARON: I agree that's evidence against it. I just don't think it's very strong evidence, partially because I think there is a real school of thought that says that language is fundamental. Like, language drives thought. Language is primary to thought or something. And I don't buy that. If you did buy that, I think this would be, like, more damning evidence.

#6 Daniel Filan on why I'm wrong about ethics (+ Oppenheimer and what names mean in like a hardcore phil of language sense) [00:56:54]

[Note: I forgot to record an intro segment here. Sorry!]

AARON: Yeah. Yes. I'm also anti-scam. Right, thank you. Okay, so I think the thing that we were talking about last time we talked, which is the thing I think we actually both know stuff about instead of just repeating New York Times articles, is my nuanced ethics takes and why you think they're wrong. We could talk about that and then just branch off from there. DANIEL: Yeah, we can talk about that. AARON: Maybe see where that goes. Luckily I have a split screen up, so I can pull things up. Maybe this is kind of egotistical or something, to center my particular view, but you've definitely given me some of the better pushback. I haven't gotten that much feedback of any kind, I guess, but it's still interesting to hear your take. So basically my ethical position, or the thing that I think is true, which I think is not the default view (I think most people think this is wrong), is that total utilitarianism does not imply that for any amount of suffering that could be created, there exists some other extremely large, arbitrarily large amount of happiness that could also be created which would morally justify the former. Basically. DANIEL: So you think that even under total utilitarianism there can be big amounts of suffering such that there's no way to morally tip the calculus.
However much pleasure you can create, it's just not going to outweigh the fact that you inflicted that much suffering on some people. AARON: Yeah, and I'd highlight the word "inflicted." If something's already there and you can't do anything about it, that's kind of neither here nor there as it pertains to your actions or something. So it's really about you creating suffering that wouldn't have otherwise been created. Yeah. It's also been a couple of months since I've thought about this in extreme detail, although I've thought about it quite a bit. Yeah. DANIEL: Maybe I should say my contrary view, I guess. When you say that, I don't know, does total utilitarianism imply something or not, I'm like, well, presumably it depends on what we mean by total utilitarianism. Right. So setting that aside, I think that thesis is probably false. I think that, yeah, you can offset great amounts of suffering with great amounts of pleasure, even for arbitrary amounts of suffering. AARON: Okay. I do think that position is the much more common and even, I'd say, default view. Do you agree with that? It's sort of the implicit position of self-described total utilitarians who haven't thought a ton about this particular question. DANIEL: Yeah, I think it's probably the implicit default. I think it's the implicit default in ethical theory or something. But I think that in practice, when you're being a utilitarian and you see yourself inflicting a large amount of suffering, I do think there's some instinct to be like, is there any way we can get around this? AARON: Yeah, for sure. And to be clear, I don't think this would look like a thought experiment. I'll throw in caveats as I see necessary, but I think what it looks like in practice is spreading either wild animals or humans or even sentient digital life through the universe, in as non-risky a way as possible, but still, say, making multiple copies of humanity or something like that. That would be an example, and probably not an example of creating suffering. An example of creating suffering would be, for example, just creating another duplicate of Earth. Okay. DANIEL: Something that would involve so much suffering that not even the pleasures of Earth would outweigh it? AARON: Not necessarily, which is kind of a cop-out. But my inclination is that if you include wild animals, the answer is yes, that creating another Earth especially... yeah. But I'm much more committed to there being some such amount than to this particular time and place in human history being like that, or whatever. DANIEL: Okay, can I get a feel for some other concrete cases? AARON: Yeah. DANIEL: So one example that's on my mind is the atomic bombing of Hiroshima and Nagasaki, right? So the standard case for this is, like, yeah, what, a hundred-odd thousand people died? Quite terrible, quite awful. And some of them were sort of instantly vaporized, but a lot of people died in extremely painful ways. But the countercase is, well, the alternative to that would have been an incredibly grueling land invasion of Japan, where many more people would have died, or, you know, regardless of what the actual alternatives were.
If you think about the atomic bombings, do you think that's the kind of infliction of suffering where there's just not an offsetting amount of pleasure that could make it okay? AARON: My intuition is no, that it is offsettable, but I would also emphasize that given the actual historical contingencies, the implicit case for the bombing includes reducing suffering elsewhere rather than merely creating happiness. There can definitely be two bad choices that you have to pick between, or something, and my claim doesn't really pertain to that, at least not directly.

#7: Holly Elmore on AI pause, wild animal welfare, and some cool biology things I couldn't fully follow but maybe you can [01:04:00]

Up next, Holly Elmore and I discuss the complexities and implications of AI development and open sourcing. We talk about protests and ethical considerations around her campaign to pause the development of frontier AI systems until we can tell that they're safe.

AARON: So what's the plan? Do you have a plan? You don't have to have a plan. I don't have plans very much. HOLLY: Well, right now I'm hopeful about the UK AI summit. Pause AI and I have planned a multi-city protest on 21 October to encourage the UK AI Safety Summit to focus on safety first and to put arranging a pause, or at least negotiations toward one, on the agenda. There's been a little bit of upsetting advertising for that summit that's like, we need to keep up capabilities too. And I just think that's really a secondary objective, and I want it to be focused on safety. So I'm hopeful about the level of global coordination that we're already seeing. It's going so much faster than we thought. Already the UN Secretary General has been talking about this and there have been meetings about this. It's happened so much faster than expected. At the beginning of this year, nobody was thinking we'd be talking about this as a mainstream topic, and then actually governments have been very receptive. Anyway, so right now, other than just influencing opinion, the targets I'm focused on are things like encouraging these international efforts. I have a protest on Friday, my first protest that I'm leading, and I'm kind of nervous. It's against Meta. It's at the Meta building in San Francisco, about their sharing of model weights. They call it open source. It's not exactly open source, but I'm probably not going to repeat that message because it's pretty complicated to explain. I really love the pause message because it's just so hard to misinterpret and it conveys pretty clearly what we want very quickly. And you don't have a lot of bandwidth in advocacy. You write a lot of materials for a protest, but mostly what people see is the title. AARON: That's interesting, because I sort of have the opposite sense. I agree that in terms of how many informational bits you're conveying in a particular phrase, "pause AI" is simpler, but in some sense it's not nearly as obvious why that's good. Maybe I'm more of a tech-brain person or whatever, but "don't give an extremely powerful thing to the worst people in the world" is a longer message that everyone gets. HOLLY: Maybe I'm just weird. The feedback I've gotten from open source ML people... the number one thing is like, it's too late, there are already super powerful models, there's nothing you can do to stop us, which sounds so villainous, I don't know if that's what they mean.
Well, actually the number one message is "you're stupid, you're not an ML engineer." Which, like, okay. Number two is "it's too late, there's nothing you can do." Then there are all these other ones, like that Meta isn't even the most powerful producer of open source models. I was like, okay, fine. And I don't think that arguing back too much is really the best move in these situations. I just mostly let that lie. I could give my theory of change on this and why I'm focusing on Meta. Meta is a large company I'm hoping to have influence on. There is a Meta building in San Francisco near where... yeah, Meta is the biggest company that is doing this, and I think there should be a norm against model weight sharing. I was hoping it would be something that employees of other labs would be comfortable attending, and it's a policy that is not shared across the labs. Obviously the biggest labs don't do it. OpenAI is called OpenAI but very quickly decided not to do that. Yeah, I kind of wanted to start in a way that made it clearer than "pause AI" does that anybody's welcome. I thought a one-off issue like this, that a lot of people could agree on and form a coalition around, would be good. A lot of the open source ML people think that what I'm saying is secretly an argument for tyranny: that I just want centralization of power, that I just think there are elites that are better qualified to run everything. I didn't mention China, and it was even suggested that I was racist because I supposedly didn't think that foreign people could make better AIs than Meta. AARON: I'm grimacing here. The intellectual disagreeableness, if that's an appropriate term or something like that. Good on you for standing up to some pretty bad arguments. HOLLY: Yeah, it's not that bad, really. I'm lucky that I truly am curious about what people think about stuff like that. I just find it really interesting. I spent way too much time understanding the alt-right, for instance. I'm pretty sure I'm on a list somewhere because of the forums I was on, just because I was interested, and it is something that serves me well with my adversaries. I've enjoyed some conversations with people, because my position on all this is that, look, I need to be convinced, and the public needs to be convinced, that this is safe before we go ahead. So I kind of like not having to be the smart person making the arguments. I kind of like being able to say, can you explain it like I'm five? I still don't get it. How does this work? AARON: Yeah, no, I was thinking actually not long ago about open source. The phrase has such a positive connotation, and in a lot of contexts it really is good. I'm glad that random tech things from 2004 or whatever, like the Reddit source code, are open source; that seems cool. I don't actually know if that's how it actually went. But I feel like maybe even just breaking down where the positive connotation comes from and why it's in people's self-interest... this is really what I was thinking about: why is it in people's self-interest to open source things that they made? That might break apart the allure, or sort of ethical halo, that it has around it.
And I was thinking it probably has something to do with... if you're a tech person who makes some cool product, you could try to put a gate around it by keeping it closed source and maybe trying to get intellectual property or something. But probably you're extremely talented already, or pretty wealthy, and can definitely be hired in the future. And if you're not wealthy yet... I don't mean to put things in just materialist terms. Actually, I'll probably take that bit out, because I didn't mean to put it in strictly monetary terms. But basically, it just seems pretty plausibly in an arbitrary tech person's self-interest, broadly construed, to in fact open source their thing, which is totally fine and normal. HOLLY: I think that's, like, 99% of it. It's a way of showing magnanimity, but... AARON: I don't mean to make this sound so... I think 99.9% of human behavior is like this. I'm not saying it's some secret, terrible, self-interested thing. I'm just making it more mechanistic. Okay, it's a status thing. It's an advertising thing. You're not really in need of direct economic rewards, so it sort of makes sense to play the long game in some sense. And this is totally normal and fine, but at the end of the day, there are reasons why it makes sense, why it's in people's self-interest, to open source. HOLLY: Literally, the culture of open source has been able to bully people into, like, "it's immoral to keep it for yourself, you have to release it." So it's set the norms in a lot of ways. "Bully" sounds bad, but I mean, there is just a lot of pressure. It looks bad if something is closed source. AARON: Yeah, it's kind of weird with Meta. I don't know, does Meta really think it's in their... I don't know. The most economic take on this would be, oh, they somehow think it's in their shareholders' interest to open source. HOLLY: There are a lot of speculations on why they're doing this. One is that, yeah, their models aren't as good as the top labs', but if it's open source, quote unquote open source, then people will integrate Llama Two into their apps, or people will use it and become... I don't know, it's a little weird, because I don't know why using Llama Two commits you to using Llama Three or something, but it's a way for their models to get into places where, if you had to pay for their models, people would go for better ones. That's one thing. Another is... yeah, I guess these are too speculative. I don't want to be seen repeating them since I'm about to do this protest. But there's speculation that it's in their best interests in various ways to do this. I think it's possible also that... so what happened with the release of Llama One is they were going to allow approved people to download the weights, but then within four days somebody had leaked Llama One on 4chan, and then they just were like, well, whatever, we'll just release the weights. And then they released Llama Two with the weights from the beginning. And it's not 100% clear that they intended to do full open source, or what they call open source. And I keep saying it's not open source because this is a little bit of a tricky point to make, so I'm not emphasizing it too much. They say that they're open source, but they're not. The algorithms are not open source.
There are open source ML models that have everything open sourced, and I don't think that that's good. I think that's worse. So I don't want to criticize them for that. But they're saying it's open source because there's all this goodwill associated with open source, when actually what they're doing is releasing the product for free, or trade secrets even, you could say, things that should be trade secrets, and telling people how to make it themselves. So they're intentionally using this label that has a lot of positive connotations, but probably, according to the Open Source Initiative, which makes the open source license, it should be called something else, or there should just be a new category for LLMs. But I don't want things to be more open, and making that point could easily sound like a rebuke that it should be more open. I also don't want to call it open source, because I think open source software probably does deserve a lot of its positive connotation, and they're not releasing the software part, because that would cut into their business. I think that would be much worse. I think they shouldn't do it. But I'm also not clear on this, because the open source ML critics say that everyone does have access to the same data set as Llama Two. But I don't know. Llama Two had 7 billion tokens, and that's more than GPT-4. And I don't understand all of the details here. It's possible that the tokenization process was different or something and that's why there were more. But Meta didn't say what was in the Llama Two data set, and usually there's some description given of what's in the data set. That led some people to speculate that maybe they're using private data. They do have access to a lot of private data that shouldn't be in there. It's not just the Common Crawl backup of the Internet, which everybody's basing their training on, plus maybe some works of literature they're not supposed to use. There's a data set there that is in question, but Meta's is bigger than... I think... well, sorry, I don't have a list in front of me and I don't want to get stuff wrong, but it's bigger than comparable models', and I thought that they have access to extra stuff that's not public. And it seems like people are asking if maybe that's part of the training set. But yeah, the open source ML people that I've been talking to would have you believe that anybody who's decent can just access all of the training sets that they've all used. AARON: As an aside, I tried to download it. I'm guessing, I don't know, it depends how many people listen to this, but in one sense, for a competent ML engineer, I'm sure open source really does mean that. But then there's people like me. I knew a little bit of R, I think. I feel like I caught the very last boat where I could know just barely enough programming to try to learn more, I guess, coming out of college a couple of months ago. I tried to do the thing where you download Llama Two, but it didn't work. Now I just have a bunch of empty folders, and I forget, I got some error message or whatever. Then I tried to train my own model on my MacBook. It just printed the same token over and over. That's the only thing the language model would do, because that was the most common token in the training set.
So anyway, I'm just like... sorry, this is not important whatsoever. HOLLY: Yeah, I feel torn about this, because I used to be a genomicist and I used to do computational biology. It was not machine learning, but I used a highly parallel GPU cluster, so I know some stuff about it, and part of me wants to mess around with it, but part of me feels like I shouldn't get seduced by this. I am kind of worried that this has happened in the AI safety community. From the beginning, it was people who were interested in the singularity and then realized there was this problem. So it's always been people really interested in tech and wanting to be close to it, and I think our direction has been really influenced by wanting to be where the action is with AI development. And I don't know that that was right. AARON: Not to make it personal, but I guess on an individual level I'm not super worried about people like you and me losing the plot by learning more about ML on our own. HOLLY: You know what I mean? But it does just feel sort of... I guess, yeah, this is maybe more of a confession than a point. But it does feel a little bit like it's hard for me to enjoy the cool stuff in good conscience. AARON: Okay. Yeah. HOLLY: I just see people be so attached to this as their identity. They really don't want to go in a direction of not pursuing tech, because this is kind of their whole thing. And what would they do if we weren't working toward AI? This is a big fear that people express to me about a pause. They don't say it in so many words usually, but they say things like, well, I don't want AI to never get built. Which, by the way, just to clear up: my assumption is that, unless society ends for some other reason, a pause would eventually be lifted. It couldn't be forever. But some people are worried that if you stop the momentum now, people are just so Luddite on the inside that we would just never pick it up again, or something like that. And, yeah, there's some identity stuff that's been expressed, again not in so many words, about who will we be if we're just sort of activists instead of working on it. AARON: Maybe one thing that we might actually disagree on, and it's kind of important, is whether... so I think we both agree that an AI pause is better than the status quo, at least broadly, whatever. I know that can mean different things. But maybe I'm not super convinced, actually, that... what am I trying to say? Maybe at least right now, if I could just imagine the world where OpenAI and Anthropic had a couple more years to do stuff and nobody else did, that would be better. I kind of think that they are reasonably responsible actors. And so I don't k

Pigeon Hour
#10: Pigeon Hour x Consistently Candid pod-crossover: I debate moral realism* with Max Alexander and Sarah Hastings-Woodhouse

Pigeon Hour

Play Episode Listen Later Dec 28, 2023 68:17


Intro

At the gracious invitation of AI Safety Twitter-fluencer Sarah Hastings-Woodhouse, I appeared on the very first episode of her new podcast “Consistently Candid” to debate moral realism (or something kinda like that, I guess; see below) with fellow philosophy nerd and EA Twitter aficionado Max Alexander, alongside Sarah as moderator and judge of sorts.

What I believe

In spite of the name of the episode and the best of my knowledge/understanding a few days ago, it turns out my stance may not be ~genuine~ moral realism. Here's my basic meta-ethical take:
* Descriptive statements that concern objective relative goodness or badness (e.g., "it is objectively better for Sam to donate $20 than to buy an expensive meal that costs $20 more than a similar, less fancy meal”) can be and sometimes are true; but
* Genuinely normative claims like “Sam should (!) donate $20 and should not buy that fancy meal” are never objectively true.

Of course the label per se doesn't really matter. But for a bunch of reasons it still seems wise to figure out which label really does work best.

Some definitions

Stanford Encyclopedia of Philosophy: Moral realists are those who think that, in these respects, things should be taken at face value—moral claims do purport to report facts and are true if they get the facts right. Moreover, they hold, at least some moral claims actually are true. That much is the common and more or less defining ground of moral realism (although some accounts of moral realism see it as involving additional commitments, say to the independence of the moral facts from human thought and practice, or to those facts being objective in some specified way).

Wikipedia: The robust model of moral realism commits moral realists to three theses:
* The semantic thesis: The primary semantic role of moral predicates (such as "right" and "wrong") is to refer to moral properties (such as rightness and wrongness), so that moral statements (such as "honesty is good" and "slavery is unjust") purport to represent moral facts, and express propositions that are true or false (or approximately true, largely false, and so on).
* The alethic thesis: Some moral propositions are in fact true.
* The metaphysical thesis: Moral propositions are true when actions and other objects of moral assessment have the relevant moral properties (so that the relevant moral facts obtain), where these facts and properties are robust: their metaphysical status, whatever it is, is not relevantly different from that of (certain types of) ordinary non-moral facts and properties.

And to settle the matter, here's what some LLMs have to say.

My prompt: Suppose a person believes the following: 1: Descriptive statements that concern objective relative goodness or badness (e.g., "it is objectively better for Sam to donate $20 than to buy an expensive meal that costs $20 more than a similar, less fancy meal”) can be and sometimes are true; but 2: Genuinely normative claims like “Sam should donate $20 and should not buy that fancy meal” are never objectively true. Is this person best categorized as a moral realist?

ChatGPT: Given these two beliefs, the person's stance doesn't align completely with traditional moral realism, which typically asserts that there are objective truths about both moral facts (like the goodness or badness of actions) and moral duties (what one should or should not do).
However, it also doesn't fully align with moral anti-realism, since the person believes in some form of objective truth regarding moral value.One possible categorization for this stance could be a form of "value realism" combined with "normative anti-realism" or "normative subjectivism." This would mean that the person believes in objective truths about what is better or worse (value realism) but thinks that what one ought to do (normative claims) is not a matter of objective truth but perhaps of subjective judgment or societal conventions.This position is quite sophisticated and might be best understood as a hybrid or intermediate position in the landscape of moral philosophy, rather than fitting neatly into the category of moral realism or its standard alternatives.Great, so much for trying to fit things into nice little buckets.Claude, fortunately, is more definitive:No, the view described is not best categorized as moral realism. Here's why:Moral realism is the view that there are objective moral facts or truths. A moral realist believes that some moral claims, including normative claims about what one should or shouldn't do, can be objectively and mind-independently true.The view in the example accepts that there can be objective truths about relative degrees of goodness or badness (claim 1). However, it denies that there are objective normative truths about what one should or shouldn't do (claim 2). By rejecting that genuine normative 'should/shouldn't' claims can be objectively true, the view denies a key commitment of moral realism - namely that moral claims can state objective normative facts. At best it allows for objective evaluative facts about goodness/badness, not prescriptive facts about obligations.So this view would be more accurately categorized as a form of moral anti-realism like non-cognitivism or error theory. It does not qualify as moral realism because of its rejection of the existence of objective normative truths that could make strong normative claims justified or true. The commitment to objective evaluative facts alone does not sufficiently qualify the view as realist in the metaethical sense.In summary, the view accepts objective "good/bad" facts but denies objective "should/shouldn't" facts. This combination puts it at odds with core commitments of moral realism, so it is better seen as an anti-realist view.I'm gonna disagree with Claude here, at least in that the bolded claim above (objective normative facts are a key part of moral realism) just seems…wrong? Like all the definitions could replace the term “moral fact” or “moral claim” with “normative fact” and “normative claim” but they do not!So I really don't have a great answer to which label works best. One of two ~human level chatbots giving me a definitive-sounding “anti-realism” answer is too much evidence for me, whose only formal philosophy training is an undergrad minor, to rule that one out. There are also good arguments, I think, for the “realist label,” as well as for “neither” (i.e., ‘secret third thing'). In fact all of these seem pretty similar in terms of argument convincingness/correctness. So, in sum,

Pigeon Hour
#9: Sarah Woodhouse on discovering AI x-risk, Twitter, and more

Pigeon Hour

Play Episode Listen Later Nov 15, 2023 74:47


Note: I can't seem to edit or remove the “transcript” tab. I recommend you ignore that and just look at the much higher quality, slightly cleaned up one below. Most importantly, follow Sarah on Twitter! Summary (Written by chatGPT, as you can probably tell)In this episode of Pigeon Hour host Aaron delves deep into the world of AI safety with his guest, Sarah Woodhouse. Sarah shares her unexpected journey from fearing job automation to becoming a recognized voice on AI safety Twitter. Her story starts with a simple Google search that led her down a rabbit hole of existential dread and unexpected fame on social media. As she narrates her path from lurker to influencer, Sarah reflects on the quirky dynamics of the AI safety community, her own existential crisis, and the serendipitous tweet that resonated with thousands.Aaron and Sarah's conversation takes unexpected turns, discussing everything from the peculiarities of EA rationalists to the surprisingly serious topic of shrimp welfare. They also explore the nuances of AI doom probabilities, the social dynamics of tech Twitter, and Sarah's unexpected viral fame as a tween. This episode is a rollercoaster of insights and anecdotes, perfect for anyone interested in the intersection of technology, society, and the unpredictable journey of internet fame.Topics discussedDiscussion on AI Safety and Personal Journeys:* Aaron and Sarah discuss her path to AI safety, triggered by concerns about job automation and the realization that AI could potentially replace her work.* Sarah's deep dive into AI safety started with a simple Google search, leading her to Geoffrey Hinton's alarming statements, and eventually to a broader exploration without finding reassuring consensus.* Sarah's Twitter engagement began with lurking, later evolving into active participation and gaining an audience, especially after a relatable tweet thread about an existential crisis.* Aaron remarks on the rarity of people like Sarah, who follow the AI safety rabbit hole to its depths, considering its obvious implications for various industries.AI Safety and Public Perception:* Sarah discusses her surprise at discovering the AI safety conversation happening mostly in niche circles, often with a tongue-in-cheek attitude that could seem dismissive of the serious implications of AI risks.* The discussion touches on the paradox of AI safety: it's a critically important topic, yet it often remains confined within certain intellectual circles, leading to a lack of broader public engagement and awareness.Cultural Differences and Personal Interests:* The conversation shifts to cultural differences between the UK and the US, particularly in terms of sincerity and communication styles.* Personal interests, such as theater and musicals (like "Glee"), are also discussed, revealing Sarah's background and hobbies.Effective Altruism (EA) and Rationalist Communities:* Sarah points out certain quirks of the EA and rationalist communities, such as their penchant for detailed analysis, hedging statements, and the use of probabilities in discussions.* The debate around the use of "P(Doom)" (probability of doom) in AI safety discussions is critiqued, highlighting how it can be both a serious analytical tool and a potentially alienating jargon for outsiders.Shrimp Welfare and Ethical Considerations:* A detailed discussion on shrimp welfare as an ethical consideration in effective altruism unfolds, examining the moral implications and effectiveness of focusing on animal welfare at a large scale.* Aaron 
defends his position on prioritizing shrimp welfare in charitable giving, based on the principles of importance, tractability, and neglectedness.Personal Decision-Making in Charitable Giving:* Strategies for personal charitable giving are explored, including setting a donation cutoff point to balance moral obligations with personal needs and aspirations.TranscriptAARON: Whatever you want. Okay. Yeah, I feel like you said this on Twitter. The obvious thing is, how did you learn about AI safety? But maybe you've already covered that. That's boring. First of all, do you want to talk about that? Because we don't have to.SARAH: I don't mind talking about that.AARON: But it's sort of your call, so whatever. I don't know. Maybe briefly, and then we can branch out?SARAH: I have a preference for people asking me things and me answering them rather than me setting the agenda. So don't ever feel bad about just asking me stuff because I prefer that.AARON: Okay, cool. But also, it feels like the kind of thing where, of course, we have AI. Everyone already knows that this is just like the voice version of these four tweets or whatever. But regardless. Yes. So, Sarah, as Pigeon Hour guest, what was your path through life to AI safety Twitter?SARAH: Well, I realized that a chatbot could very easily do my job and that my employers either hadn't noticed this or they had noticed, but they were just being polite about it and they didn't want to fire me because they're too nice. And I was like, I should find out what AI development is going to be like over the next few years so that I know if I should go and get good at some other stuff.SARAH: I just had a little innocent Google. And then within a few clicks, I'd completely doom pilled myself. I was like, we're all going to die. I think I found Geoffrey Hinton because he was on the news at the time, because he just quit his job at Google. And he was there saying things that sounded very uncertain, very alarming. And I was like, well, he's probably the pessimist, but I'm sure that there are loads of optimists to counteract that because that's how it usually goes. You find a doomer and then you find a bunch of more moderate people, and then there's some consensus in the middle that everything's basically fine.SARAH: I was like, if I just keep looking, I'll find the consensus because it's there. I'm sure it's there. So I just kept looking and looking for it. I looked for it for weeks. I just didn't find it. And then I was like, nobody knows what's going on. This seems really concerning. So then I started lurking on Twitter, and then I got familiar with all the different accounts, whatever. And then at some point, I was like, I'm going to start contributing to this conversation, but I didn't think that anybody would talk back to me. And then at some point, they started talking back to me and I was like, this is kind of weird.SARAH: And then at some point, I was having an existential crisis and I had a couple of glasses of wine or something, and I just decided to type this big, long thread. And then I went to bed. I woke up the next morning slightly grouchy and hungover. I checked my phone and there were all these people messaging me and all these people replying to my thread being like, this is so relatable. This really resonated with me. And I was like, what is going on?AARON: You were there on Twitter before that thread right? I'm pretty sure I was following you.SARAH: I think, yeah, I was there before, but no one ever really gave me any attention prior to that. 
I think I had a couple of tweets that blew up before that, but not to the same extent. And then after that, I think I was like, okay, so now I have an audience. When I say an audience, like, obviously a small one, but more of an audience than I've ever had before in my life. And I was like, how far can I take this? SARAH: I was a bit like, people obviously started following me because I'm freaking out about AI, but if I post an outfit, what's going to happen? How far can I push this posting, these fit checks? I started posting random stuff about things that were completely unrelated. I was like, oh, people are kind of here for this, too. Okay, this is weird. So now I'm just milking it for all it's worth, and I really don't know why anybody's listening to me. I'm basically very confused about the whole thing. AARON: I mean, I think it's kind of weird from your perspective, or it's weird in general, because there aren't that many people who just do that extremely logical thing at the beginning. I don't know, maybe it's not obvious to people in every industry or whatever that AI is potentially a big deal, but there's lots of truckers or whatever. Maybe they're not the best demographic, or the most conducive demographic for getting on Twitter or whatever, but there are other jobs where it would make sense to look into this. It's kind of weird to me that only you followed the rabbit hole all the way down. SARAH: I know! This is what I… Because it's not that hard to complete the circle. It probably took me like a day, it took me like an afternoon, to get from "I'm worried about job automation" to "I should stop saving for retirement." It didn't take me that long. Do you know what I mean? No one ever looks. I literally don't get it. I was talking to one of my coworkers about this the other day, and I think it came up in conversation. She was like, yeah, I'm a bit worried about AI because I heard on the radio that taxi drivers might be out of a job. That's bad. And I was like, yeah, that is bad. But do you know what else? She was like, what are the AI companies up to that we don't know about? And I was like, I mean, you can go on their website. You can just go on their website and read about how they think that their technology is an extinction risk. It's not like they're hiding it. It's literally just on there and no one ever looks. It's just crazy. AARON: Yeah. Honestly, I don't even know, if I was in your situation, if I would have done that. In some sense I am surprised it's very few people, maybe like one, but at another level, it's more rationality than most humans have or something. Yeah. Do you regret going down that rabbit hole? SARAH: Yeah, kind of. Although I'm enjoying the Twitter thing and it's kind of fun, and it turns out there's endless comedic material that you can get out of impending doom. The whole thing is quite funny. It's not funny, but you can make it funny if you try hard enough. But, yeah, what was I going to say? I think maybe I was more primed for doom pilling than your average person because I already knew what EA was, and I already knew, you know what I mean. That stuff was on my radar. AARON: That's interesting. SARAH: I think had it not been on my radar, I don't think I would have followed the pipeline all the way. AARON: Yeah. I don't know what browser you use, but it would be.
And you should definitely not only do this if you actually think it would be cool or whatever, but this could be in your browser history from that day and that would be hilarious. You could remove anything you didn't want to show, but if it's like Google Chrome, they package everything into sessions. It's one browsing session and it'll have like 10,000 links.SARAH: Yeah, I think for non-sketchy reasons, I delete my Google history more regularly than that. I don't think I'd be able to find that. But I can remember the day and I can remember my anxiety levels just going up and up somewhere between 01:00 p.m. and 07:00 p.m. And by the evening I'm like, oh, my God.AARON: Oh, damn, that's wild.SARAH: It was really stressful.AARON: Yeah, I guess props for, I don't know if props…Is the right word, I guess, impressed? I'm actually somewhat surprised to hear that you said you regret it. I mean, that sucks though, I guess. I'm sorry.SARAH: If you could unknow this, would you?AARON: No, because I think it's worth maybe selfishly, but not overall because. Okay, yeah, I think that would plausibly be the selfish thing to do. Actually. No, actually, hold on. No, I actually don't think that's true. I actually think there's enough an individual can do selfishly such that it makes sense. Even the emotional turmoil.SARAH: It would depend how much you thought that you were going to personally move the needle by knowing about it. I personally don't think that I'm going to be able to do very much. I was going to tip the scales. I wouldn't selfishly unknow it and sacrifice the world. But me being not particularly informed or intelligent and not having any power, I feel like if I forgot that AI was going to end the world, it would not make much difference.AARON: You know what I mean? I agree that it's like, yes, it is unlikely for either of us to tip the scales, but.SARAH: Maybe you can't.AARON: No, actually, in terms of, yeah, I'm probably somewhat more technically knowledgeable just based on what I know about you. Maybe I'm wrong.SARAH: No, you're definitely right.AARON: It's sort of just like a probabilities thing. I do think that ‘doom' - that word - is too simplified, often too simple to capture what people really care about. But if you just want to say doom versus no doom or whatever, AI doom versus no AI doom. Maybe there's like a one in 100,000 chance that one of us tips the scales. And that's important. Maybe even, like, one in 10,000. Probably not. Probably not.SARAH: One in 10,000. Wow.AARON: But that's what people do. People vote, even though this is old 80k material I'm regurgitating because they basically want to make the case for why even if you're not. Or in some article they had from a while ago, they made a case for why doing things that are unlikely to counterfactually matter can still be amazingly good. And the classic example, just voting if you're in a tight race, say, in a swing state in the United States, and it could go either way. Yeah. It might be pretty unlikely that you are the single swing vote, but it could be one in 100,000. And that's not crazy.SARAH: It doesn't take very much effort to vote, though.AARON: Yeah, sure. But I think the core justification, also, the stakes are proportionally higher here, so maybe that accounts for some. But, yes, you're absolutely right. Definitely different amounts of effort.SARAH: Putting in any effort to saving the world from AI. I wouldn't say that. I wouldn't say that I'm sacrificing.AARON: I don't even know if I like. No. 
Maybe it doesn't feel like a sacrifice. Maybe it isn't. But I do think there's, like, a lot. There's at least something to be. I don't know if this really checks out, but I would, like, bet that it does, which is that more reasonably, at least calibrated. I wanted to say reasonably well informed. But really what it is is, like, some level of being informed and, like, some level of knowing what you don't know or whatever, and more just like, normal. Sorry. I hope normal is not like a bat. I'm saying not like tech Bros, I guess so more like non tech bros. People who are not coded as tech bros. Talking about this on a public platform just seems actually, in fact, pretty good.SARAH: As long as we like, literally just people that aren't men as well. No offense.AARON: Oh, no, totally. Yeah.SARAH: Where are all the women? There's a few.AARON: There's a few that are super. I don't know, like, leaders in some sense, like Ajeya Cotra and Katja Grace. But I think the last EA survey was a third. Or I could be butchering this or whatever. And maybe even within that category, there's some variation. I don't think it's 2%.SARAH: Okay. All right. Yeah.AARON: Like 15 or 20% which is still pretty low.SARAH: No, but that's actually better than I would have thought, I think.AARON: Also, Twitter is, of all the social media platforms, especially mail. I don't really know.SARAH: Um.AARON: I don't like Instagram, I think.SARAH: I wonder, it would be interesting to see whether or not that's much, if it's become more male dominated since Elon Musk took.AARON: It's not a huge difference, but who knows?SARAH: I don't know. I have no idea. I have no idea. We'll just be interesting to know.AARON: Okay. Wait. Also, there's no scheduled time. I'm very happy to keep talking or whatever, but as soon as you want to take a break or hop off, just like. Yeah.SARAH: Oh, yeah. I'm in no rush.AARON: Okay, well, I don't know. We've talked about the two obvious candidates. Do you have a take or something? Want to get out to the world? It's not about AI or obesity or just a story you want to share.SARAH: These are my two pet subjects. I don't know anything else.AARON: I don't believe you. I know you know about house plants.SARAH: I do. A secret, which you can't tell anyone, is that I actually only know about house plants that are hard to kill, and I'm actually not very good at taking care of them.AARON: Well, I'm glad it's house plants in that case, rather than pets. Whatever.SARAH: Yeah. I mean, I have killed some sea monkeys, too, but that was a long time ago.AARON: Yes. So did I, actually.SARAH: Did you? I feel like everyone has. Everyone's got a little sea monkey graveyard in their past.AARON: New cause area.SARAH: Are there more shrimp or more sea monkeys? That's the question.AARON: I don't even know what even. I mean, are they just plankton?SARAH: No, they're not plankton.AARON: I know what sea monkeys are.SARAH: There's definitely a lot of them because they're small and insignificant.AARON: Yeah, but I also think we don't. It depends if you're talking about in the world, which I guess probably like sea monkeys or farmed for food, which is basically like. I doubt these are farmed either for food or for anything.SARAH: Yeah, no, you're probably right.AARON: Or they probably are farmed a tiny bit for this niche little.SARAH: Or they're farmed to sell in aquariums for kids.AARON: Apparently. They are a kind of shrimp, but they were bred specifically to, I don't know, be tiny or something. I'm just skimming that, Wikipedia. 
Here.SARAH: Sea monkeys are tiny shrimp. That is crazy.AARON: Until we get answers, tell me your life story in whatever way you want. It doesn't have to be like. I mean, hopefully not. Don't straight up lie, but wherever you want to take that.SARAH: I'm not going to lie. I'm just trying to think of ways to make it spicier because it's so average. I don't know what to say about it.AARON: Well, it's probably not that average, right? I mean, it might be average among people you happen to know.SARAH: Do you have any more specific questions?AARON: Okay, no. Yeah, hold on. I have a meta point, which is like, I think the people who are they have a thing on the top of their mind, and if I give any sort of open ended question whatsoever, they'll take it there and immediately just start giving slinging hot takes. But thenOther people, I think, this category is very EA. People who aren't, especially my sister, they're like, “No, I have nothing to talk about. I don't believe that.” But they're not, I guess, as comfortable.SARAH: No, I mean, I have. Something needs to trigger them in me. Do you know what I mean? Yeah, I need an in.AARON: Well, okay, here's one. Is there anything you're like, “Maybe I'll cut this. This is kind of, like narcissistic. I don't know. But is there anything you want or curious to ask?” This does sound kind of weird. I don't know. But we can cut it if need be.SARAH: What does the looking glass in your Twitter name mean? Because I've seen a bunch of people have this, and I actually don't know what it means, but I was like, no.AARON: People ask this. I respond to a tweet that's like, “What does that like?” At least, I don't know, once every month or two. Or know basically, like Spencer Greenberg. I don't know if you're familiar with him. He's like a sort of.SARAH: I know the know.AARON: He literally just tweeted, like a couple years ago. Put this in your bio to show that you really care about finding the truth or whatever and are interested in good faith conversations. Are you familiar with the scout mindset?SARAH: Yeah.AARON: Julia Galef. Yeah. That's basically, like the short version.SARAH: Okay.AARON: I'm like, yeah, all right. And there's at least three of us who have both a magnifying glass. Yeah. And a pause thing, which is like, my tightest knit online community I guess.SARAH: I think I've followed all the pause people now. I just searched the emoji on Twitter, and I just followed everyone. Now I can't find. And I also noticed when I was doing this, that some people, if they've suspended their account or they're taking time off, then they put a pause in their thing. So I was, like, looking, and I was like, oh, these are, like, AI people. But then they were just, like, in their bio, they were, like, not tweeting until X date. This is a suspended account. And I was like, I see we have a messaging problem here. Nice. I don't know how common that actually.AARON: Was. I'm glad. That was, like, a very straightforward question. Educated the masses. Max Alexander said Glee. Is that, like, the show? You can also keep asking me questions, but again, this is like.SARAH: Wait, what did he say? Is that it? Did he just say glee? No.AARON: Not even a question mark. Just the word glee.SARAH: Oh, right. He just wants me to go off about Glee.AARON: Okay. Go off about. Wait, what kind of Glee are we? Vaguely. This is like a show or a movie or something.SARAH: Oh, my God. Have you not seen it?AARON: No. 
I mean, I vaguely remember, I think, watching some TV, but maybe, like, twelve years ago or something. I don't know.SARAH: I think it stopped airing in, like, maybe 2015?AARON: 16. So go off about it. I don't know what I. Yeah, I.SARAH: Don't know what to say about this.AARON: Well, why does Max think you might have a take about Glee?SARAH: I mean, I don't have a take about. Just see the thing. See? No, not even, like, I am just transparently extremely lame. And I really like cheesy. I'm like. I'm like a musical theater kid. Not even ironically. I just like show tunes. And Glee is just a show about a glee club at a high school where they sing show tunes and there's, like, petty drama, and people burst into song in the hallways, and I just think it's just the most glorious thing on Earth. That's it. There are no hot takes.AARON: Okay, well, that's cool. I don't have a lot to say, unfortunately, but.SARAH: No, that's totally fine. I feel like this is not a spicy topic for us to discuss. It's just a good time.AARON: Yeah.SARAH: Wait.AARON: Okay. Yeah. So I do listen to Hamilton on Spotify.SARAH: Okay.AARON: Yeah, that's about it.SARAH: I like Hamilton. I've seen it three times. Oh.AARON: Live or ever. Wow. Cool. Yeah, no, that's okay. Well, what do people get right or wrong about theater kids?SARAH: Oh, I don't know. I think all the stereotypes are true.AARON: I mean, that's generally true, but usually, it's either over moralized, there's like a descriptive thing that's true, but it's over moralized, or it's just exaggerated.SARAH: I mean, to put this in more context, I used to be in choir. I went every Sunday for twelve years. And then every summer we do a little summer school and we go away and put on a production. So we do a musical or something. So I have been. What have I been? I was in Guys and Dolls. I think I was just in the chorus for that. I was the reverend in Anything Goes. But he does unfortunately get kidnapped in like the first five minutes. So he's not a big presence. Oh, I've been Tweedle dumb in Alice in Wonderland. I could go on, but right now as I'm saying this, I'm looking at my notice board and I have two playbills from when I went to Broadway in April where I saw Funny Girl and Hadestown.SARAH: I went to New York.AARON: Oh, cool. Oh yeah. We can talk about when you're moving to the United States. However.SARAH: I'm not going to do that. Okay.AARON: I know. I'm joking. I mean, I don't know.SARAH: I don't think I'm going to do that. I don't know. It just seems like you guys have got a lot going on over there. It seems like things aren't quite right with you guys. Things aren't quite right with us either.AARON: No, I totally get this. I think it would be cool. But also I completely relate to not wanting to. I've lived within 10 miles of one. Not even 10 miles, 8 miles in one location. Obviously gone outside of that. But my entire life.SARAH: You've just always lived in DC.AARON: Yeah, either in DC or. Sorry. But right now in Maryland, it's like right next to DC on the Metro or at Georgia University, which is in the trying to think would I move to the UK. Like I could imagine situations that would make me move to the UK. But it would still be annoying. Kind of.SARAH: Yeah, I mean, I guess it's like they're two very similar places, but there are all these little cultural things which I feel like kind of trip you up.AARON: I don't to. Do you want to say what?SARAH: Like I think people, I just like, I don't know. 
I don't have that much experience because I've only been to America twice. But people seem a lot more sincere, in a way that you don't really get here, where people are just never really being upfront. In America, I just got the impression that people have less of a veneer up, which is probably a good thing. But it's really hard to navigate if you're not used to it or something. I don't know how to describe it. AARON: Yeah, I've definitely heard this at least. And yeah, I think it's for better and for worse. SARAH: Yeah, I think it's generally a good thing. AARON: Yeah. SARAH: But it's like there's this layer of cynicism or irony or something that is removed, and when it's not there, everything just feels weak. I can't describe it. AARON: This is definitely, I think, also an EA rationalist thing. I feel like I'm pretty far on the spectrum, towards the end of: social niceties are fine, but don't obscure what you really think unless there's a really good reason to, or something. But it can definitely come across as being rude. SARAH: Yeah. No, but I think it's actually a good rule of thumb to... it's good to try not to obscure what you think most of the time, probably. I don't know, but I would love to go over temporarily for like six months or something and just hang out for a bit. I think that'd be fun. I don't know if I would go back to New York again. Maybe. I like the bagels there. AARON: I should have a place. Oh yeah, remember, I think we talked at some point. We can cut this out if you like, or if either of us doesn't want it in. But we discussed it: oh yeah, I should be having a place you can stay. I emailed the landlord like an hour before this. Hopefully, probably more than 50%, that is still an offer. Yeah, probably not for all six months, but I don't know. SARAH: I would not come and sleep on your sofa for six months. That would be definitely impolite and very weird. AARON: Yeah. I mean, my roommates would probably grumble. SARAH: Yeah. They would be like... AARON: Although I don't know. Who knows? I wouldn't be shocked if people were actually fine with it. Whatever. Somebody asked this as a question, so this is what he said: I might also be interested in hearing how different backgrounds... wait, sorry, this is not good grammar. Let me try to parse this: how does not having a super hardcore EA, AI, rationalist background shape how you think about or how you view AI and rationality? SARAH: Oh, that's a good question. I think it's more happening the other way around: the more I hang around in these circles, you guys are impacting how I think. AARON: It's definitely true for me as well. SARAH: Seeping into my brain and my language as well. I've started talking differently. I don't know. That's a good question, though. Yeah. One thing that I will say is that there are certain things that I find irritating about the EA style of doing things. One specific thing is, I don't know, the kind of hand-wringing about everything. And I know that this is kind of the point, right? But it's like, you know, when someone wants to take a stance on something, they feel the need to write like a 10,000 word blog post where they're thinking about the second and third and fifth order effects of this thing. And maybe this thing that seems good is actually bad for this really convoluted reason.
That's just so annoying.AARON: Yeah.SARAH: Also understand that maybe that is a good thing to do sometimes, but it just seems like, I don't know how anyone ever gets anywhere. It seems like everyone must be paralyzed by indecision all the time because they just can't commit to ever actually just saying anything.AARON: I think this kind of thing is really good if you're trying to give away a billion dollars. Oh yes, I do want the billion dollar grantor to be thinking through second and third order effects of how they give away their billion dollars. But also, no, I am super. The words on the tip of my tongue, not overwhelmed but intimidated when I go on the EA forum because the posts, none of them are like normal, like five paragraph essays. Some of them are like, I think one of them I looked up for fun because I was going to make a meme about it and still will. Probably was like 30,000 words or something. And even the short form posts, which really gets me kind of not even annoyed. I don't know, maybe kind of annoyed is that the short form posts, which is sort of the EA forum version of Twitter, are way too high quality, way too intimidating. And so maybe I should just suck it up and post stuff anyway more often. It just feels weird. I totally agree.SARAH: I was also talking to someone recently about how I lurked on the EA forum and less wrong for months and months and I couldn't figure out the upvoting system and I was like, am I being stupid or why are there four buttons? And I was like, well, eventually I had to ask someone because I couldn't figure it out. And then he explained it to me and I was like, that is just so unnecessary. Like, just do it.AARON: No, I do know what you mean.SARAH: I just tI think it's annoying. It pisses me off. I just feel like sometimes you don't need to add more things. Sometimes less is good. Yeah, that's my hot take. Nice things.AARON: Yeah, that's interesting.SARAH: But actually, a thing that I like that EA's do is the constant hedging and caveatting. I do find it kind of adorable. I love that because it's like you're having to constantly acknowledge that you probably didn't quite articulate what you really meant and that you're not quite making contact with reality when you're talking. So you have to clarify that you probably were imprecise when you said this thing. It's unnecessary, but it's kind of amazing.AARON: No, it's definitely. I am super guilty of this because I'll give an example in a second. I think I've been basically trained to try pretty hard, even in normal conversation with anybody, to just never say anything that's literally wrong. Or at least if I do caveat it.AARON: I was driving home, me and my parents and I, unless visited, our grandparents were driving back, and we were driving back past a cruise ship that was in a harbor. And my mom, who was driving at the time, said, “Oh, Aaron, can you see if there's anyone on there?” And I immediately responded like, “Well, there's probably at least one person.” Obviously, that's not what she meant. But that was my technical best guess. It's like, yes, there probably are people on there, even though I couldn't see anybody on the decks or in the rooms. Yeah, there's probably a maintenance guy. Felt kind of bad.SARAH: You can't technically exclude that there are, in fact, no people.AARON: Then I corrected myself. But I guess I've been trained into giving that as my first reaction.SARAH: Yeah, I love that. I think it's a waste of words, but I find it delightful.AARON: It does go too far. 
People should be more confident. I wish that, at least sometimes, people would say, “Epistemic status: Want to bet?” or “I am definitely right about this.” Too rarely do we hear, "I'm actually pretty confident here.SARAH: Another thing is, people are too liberal with using probabilities. The meaning of saying there is an X percent chance of something happening is getting watered down by people constantly saying things like, “I would put 30% on this claim.” Obviously, there's no rigorous method that's gone into determining why it's 30 and not 35. That's a problem and people shouldn't do that. But I kind of love it.AARON: I can defend that. People are saying upfront, “This is my best guess. But there's no rigorous methodology.” People should take their word for that. In some parts of society, it's seen as implying that a numeric probability came from a rigorous model. But if you say, “This is my best guess, but it's not formed from anything,” people should take their word for that and not refuse to accept them at face value.SARAH: But why do you have to put a number on it?AARON: It depends on what you're talking about. Sometimes probabilities are relevant and if you don't use numbers, it's easy to misinterpret. People would say, “It seems quite likely,” but what does that mean? One person might think “quite reasonably likely” means 70%, the other person thinks it means 30%. Even though it's weird to use a single number, it's less confusing.SARAH: To be fair, I get that. I've disagreed with people about what the word “unlikely” means. Someone's pulled out a scale that the government uses, or intelligence services use to determine what “unlikely” means. But everyone interprets those words differently. I see what you're saying. But then again, I think people in AI safety talking about P Doom was making people take us less seriously, especially because people's probabilities are so vibey.AARON: Some people are, but I take Paul Cristiano's word seriously.SARAH: He's a 50/50 kind of guy.AARON: Yeah, I take that pretty seriously.Obviously, it's not as simple as him having a perfect understanding of the world, even after another 10,000 hours of investigation. But it's definitely not just vibes, either.SARAH: No, I came off wrong there. I don't mean that everyone's understanding is just vibes.AARON: Yeah.SARAH: If you were looking at it from the outside, it would be really difficult to distinguish between the ones that are vibes and the ones that are rigorous, unless you carefully parsed all of it and evaluated everyone's background, or looked at the model yourself. If you're one step removed, it looks like people just spitting out random, arbitrary numbers everywhere.AARON: Yeah. There's also the question of whether P doom is too weird or silly, or if it could be easily dismissed as such.SARAH: Exactly, the moment anyone unfamiliar with this discussion sees it, they're almost definitely going to dismiss it. They won't see it as something they need to engage with.AARON: That's a very fair point. Aside from the social aspect, it's also a large oversimplification. There's a spectrum of outcomes that we lump into doom and not doom. While this binary approach can be useful at times, it's probably overdone.SARAH: Yeah, because when some people say doom, they mean everyone dies, while others mean everyone dies plus everything is terrible. And no one specifies what they mean. It is silly. But, I also find it kind of funny and I kind of love it.AARON: I'm glad there's something like that. 
So it's not perfect. The more straightforward thing would be to say P existential risk from AI comes to pass. That's the long version, whatever.SARAH: If I was in charge, I would probably make people stop using PDOOm. I think it's better to say it the long way around. But obviously I'm not in charge. And I think it's funny and kind of cute, so I'll keep using it.AARON: Maybe I'm willing to go along and try to start a new norm. Not spend my whole life on it, but say, I think this is bad for X, Y, and Z reasons. I'll use this other phrase instead and clarify when people ask.SARAH: You're going to need Twitter premium because you're going to need a lot more characters.AARON: I think there's a shorthand which is like PX risk or P AiX risk.SARAH: Maybe it's just the word doom that's a bit stupid.AARON: Yeah, that's a term out of the Bay Area rationalists.SARAH: But then I also think it kind of makes the whole thing seem less serious. People should be indignant to hear that this meme is being used to trade probabilities about the likelihood that they're going to die and their families are going to die. This has been an in-joke in this weird niche circle for years and they didn't know about it. I'm not saying that in a way to morally condemn people, but if you explain this to people…People just go to dinner parties in Silicon Valley and talk about this weird meme thing, and what they really mean is the ODs know everyone's going to prematurely die. People should be outraged by that, I think.AARON: I disagree that it's a joke. It is a funny phrase, but the actual thing is people really do stand by their belief.SARAH: No, I totally agree with that part. I'm not saying that people are not being serious when they give their numbers, but I feel like there's something. I don't know how to put this in words. There's something outrageous about the fact that for outsiders, this conversation has been happening for years and people have been using this tongue-in-cheek phrase to describe it, and 99.9% of people don't know that's happening. I'm not articulating this very well.AARON: I see what you're saying. I don't actually think it's like. I don't know a lot of jargon.SARAH: But when I first found out about this, I was outraged.AARON: I honestly just don't share that intuition. But that's really good.SARAH: No, I don't know how to describe this.AARON: I think I was just a little bit indignant, perhaps.SARAH: Yeah, I was indignant about it. I was like, you guys have been at social events making small talk by discussing the probability of human extinction all this time, and I didn't even know. I was like, oh, that's really messed up, guys.AARON: I feel like I'm standing by the rational tier because, it was always on. No one was stopping you from going on less wrong or whatever. It wasn't behind closed.SARAH: Yeah, but no one ever told me about it.AARON: Yeah, that's like a failure of outreach, I suppose.SARAH: Yeah. I think maybe I'm talking more about. Maybe the people that I'm mad at is the people who are actually working on capabilities and using this kind of jargon. Maybe I'm mad at those people. They're fine.AARON: Do we have more questions? I think we might have more questions. We have one more. Okay, sorry, but keep going.SARAH: No, I'm going to stop making that point now because I don't really know what I'm trying to say and I don't want to be controversial.AARON: Controversy is good for views. Not necessarily for you. No, thank you for that. Yes, that was a good point. I think it was. 
Maybe it was wrong. I think it seems right.SARAH: It was probably wrong.Shrimp Welfare: A Serious DiscussionAARON: I don't know what she thinks about shrimp welfare. Oh, yeah. I think it's a general question, but let's start with that. What do you think about shrimp? Well, today.SARAH: Okay. Is this an actual cause area or is this a joke about how if you extrapolate utilitarianism to its natural conclusion, you would really care about shrimp?AARON: No, there's a charity called the Shrimp Welfare Initiative or project. I think it's Shrimp Welfare Initiative. I can actually have a rant here about how it's a meme that people find amusing. It is a serious thing, but I think people like the meme more than they're willing to transfer their donations in light of it. This is kind of wrong and at least distasteful.No, but there's an actual, if you Google, Shrimp Welfare Project. Yeah, it's definitely a thing, but it's only a couple of years old. And it's also kind of a meme because it does work in both ways. It sort of shows how we're weird, but in the sense that we are willing to care about things that are very different from us. Not like we're threatening other people. That's not a good description.SARAH: Is the extreme version of this position that we should put more resources into improving the lives of shrimp than into improving the lives of people just because there are so many more shrimp? Are there people that actually believe that?AARON: Well, I believe some version of that, but it really depends on who the ‘we' is there.SARAH: Should humanity be putting more resources?AARON: No one believes that as far as I know.SARAH: Okay. Right. So what is the most extreme manifestation of the shrimp welfare position?AARON: Well, I feel like my position is kind of extreme, and I'm happy to discuss it. It's easier than speculating about what the more extreme ones are. I don't think any of them are that extreme, I guess, from my perspective, because I think I'm right.SARAH: Okay, so what do you believe?AARON: I think that most people who have already decided to donate, say $20, if they are considering where to donate it and they are better morally, it would be better if they gave it to the shrimp welfare project than if they gave it to any of the commonly cited EA organizations.SARAH: Malaria nets or whatever.AARON: Yes. I think $20 of malaria nets versus $20 of shrimp. I can easily imagine a world where it would go the other way. But given the actual situation, the $20 of shrimp is much better.SARAH: Okay. Is it just purely because there's just more shrimp? How do we know how much shrimp suffering there is in the world?AARON: No, this is an excellent question. The numbers are a key factor, but no, it's not as simple. I definitely don't think one shrimp is worth one human.SARAH: I'm assuming that it's based on the fact that there are so many more shrimp than there are people that I don't know how many shrimp there are.AARON: Yeah, that's important, but at some level, it's just the margin. What I think is that when you're donating money, you should give to wherever it does the most good, whatever that means, whatever you think that means. But let's just leave it at that. The most good is morally best at the margin, which means you're not donating where you think the world should or how you think the world should expend its trillion dollar wealth. All you're doing is adding $20 at this current level, given the actual world. 
And so part of it is what you just said, and also including some new research from Rethink Priorities.Measuring suffering in reasonable ranges is extremely hard to do. But I believe it's difficult to do a better job than raising priorities on that, given what I've seen. I can provide some links. There are a few things to consider here: numbers, times, and the enormity of suffering. I think there are a couple of key elements, including tractability.Are you familiar with the three-pronged concept people sometimes discuss, which encompasses tractability, and neglectedness?SARAH: Okay.AARON: Importance is essentially what we just mentioned. Huge numbers and plausible amounts of suffering. When you try to do the comparison, it seems like they're a significant concern. Tractability is another factor. I think the best estimates suggest that a one-dollar donation could save around 10,000 shrimp from a very painful death.SARAH: In that sense…AARON: You could imagine that even if there were a hundred times more shrimp than there actually are, we have direct control over how they live and die because we're farming them. The industry is not dominated by wealthy players in the United States. Many individual farmers in developing nations, if educated and provided with a more humane way of killing the shrimp, would use it. There's a lot of potential for improvement here. This is partly due to the last prong, neglectedness, which is really my focus.SARAH: You're saying no one cares about the shrimp.AARON: I'm frustrated that it's not taken seriously enough. One of the reasons why the marginal cost-effectiveness is so high is because large amounts of money are donated to well-approved organizations. But individual donors often overlook this. They ignore their marginal impact. If you want to see even a 1% shift towards shrimp welfare, the thing to do is to donate to shrimp welfare. Not donate $19 to human welfare and one dollar to shrimp welfare, which is perhaps what they think the overall portfolio should be.SARAH: Interesting. I don't have a good reason why you're wrong. It seems like you're probably right.AARON: Let me put the website in the chat. This isn't a fair comparison since it's something I know more about.SARAH: Okay.AARON: On the topic of obesity, neither of us were more informed than the other. But I could have just made stuff up or said something logically fallacious.SARAH: You could have told me that there were like 50 times the number of shrimp in the world than there really are. And I would have been like, sure, seems right.AARON: Yeah. And I don't know, if I…If I were in your position, I would say, “Oh, yeah, that sounds right.” But maybe there are other people who have looked into this way more than me that disagree, and I can get into why I think it's less true than you'd expect in some sense.SARAH: I just wonder if there's like… This is like a deeply non-EA thing to say. So I don't know, maybe I shouldn't say it, but are there not any moral reasons? Is there not any good moral philosophy behind just caring more about your own species than other species? If you're sorry, but that's probably not right, is it? There's probably no way to actually morally justify that, but it seems like it feels intuitively wrong. 
If you've got $20 to be donating 19 of them to shrimp and one to children with malaria, that feels like there should be something wrong with that, but I can't tell you what it is.AARON: Yeah, no, there is something wrong, which is that you should donate all 20 because they're acting on the margin, for one thing. I do think that doesn't check out morally, but I think basically me and everybody I know in terms of real life or whatever, I do just care way more about humans. I don't know, for at least the people that it's hard to formalize or specify what you mean by caring about or something. But, yeah, I think you can definitely basically just be a normal human who basically cares a lot about other humans. And still that's not like, negated by changing your $20 donation or whatever. Especially because there's nothing else that I do for shrimp. I think you should be like a kind person or something. I'm like an honest person, I think. Yeah, people should be nice to other humans. I mean, you should be nice in the sense of not beating them. But if you see a pigeon on the street, you don't need to say hi or whatever, give it a pet, because. I don't know. But yeah, you should be basically like, nice.SARAH: You don't stop to say hi to every pigeon that you see on the way to anywhere.AARON: I do, but I know most normal people don't.SARAH: This is why I'm so late to everything, because I have to do it. I have to stop for every single one. No exceptions.AARON: Yeah. Or how I think about it is sort of like a little bit of compartmentalization, which I think is like… Which is just sort of like a way to function normally and also sort of do what you think really checks out at the end of the day, just like, okay, 99% of the time I'm going to just be like a normal person who doesn't care about shrimp. Maybe I'll refrain from eating them. But actually, even that is like, I could totally see a person just still eating them and then doing this. But then during the 1% of the time where you're deciding how to give money away and none of those, the beneficiaries are going to be totally out of sight either way. This is like a neutral point, I guess, but it's still worth saying, yeah, then you can be like a hardcore effective altruist or whatever and then give your money to the shrimp people.SARAH: Do you have this set up as like a recurring donation?AARON: Oh, no. Everybody should call me out as a hypocrite because I haven't donated much money, but I'm trying to figure out actually, given that I haven't had a stable income ever. And maybe, hopefully I will soon, actually. But even then, it's still a part-time thing. I haven't been able to do sort of standard 10% or more thing, and I'm trying to figure out what the best thing to do or how to balance, I guess, not luxury, not like consumption on things that I… Well, to some extent, yeah. Maybe I'm just selfish by sometimes getting an Uber. That's totally true. I think I'm just a hypocrite in that respect. But mostly I think the trade-off is between saving, investing, and giving. Beast of the money that I have saved up and past things. So this is all sort of a defense of why I don't have a recurring donation going on.SARAH: I'm not asking you to defend yourself because I do not do that either.AARON: I think if I was making enough money that I could give away $10,000 a year and plan on doing that indefinitely, I would be unlikely to set up a recurring donation. 
What I would really want to do is once or twice a year, really try to prioritize deciding on how to give it away rather than making it the default. This has a real cost for charities. If you set up a recurring donation, they have more certainty in some sense of their future cash flow. But that's only good to do if you're really confident that you're going to want to keep giving there in the future. I could learn new information that says something else is better. So I don't think I would do that.SARAH: Now I'm just thinking about how many shrimp did you say it was per dollar?AARON: Don't quote me. I didn't say an actual thing.SARAH: It was like some big number. Right. Because I just feel like that's such a brainworm. Imagine if you let that actually get in your head and then every time you spend some unnecessary amount of money on something you don't really need, you think about how many shrimp you just killed by getting an Uber or buying lunch out. That is so stressful. I think I'm going to try not to think about that.AARON: I don't mean to belittle this. This is like a core, I think you're new to EA type of thinking. It's super natural and also troubling when you first come upon it. Do you want me to talk about how I, or other people deal with that or take action?SARAH: Yeah, tell me how to get the shrimp off my conscience.AARON: Well, for one thing, you don't want to totally do that. But I think the main thing is that the salience of things like this just decreases over time. I would be very surprised if, even if you're still very engaged in the EA adjacent communities or EA itself in five years, that it would be as emotionally potent. Brains make things less important over time. But I think the thing to do is basically to compartmentalize in a sort of weird sense. Decide how much you're willing to donate. And it might be hard to do that, but that is sort of a process. Then you have that chunk of money and you try to give it away the best you can under whatever you think the best ethics are. But then on the daily, you have this other set pot of money. You just are a normal person. You spend it as you wish. You don't think about it unless you try not to. And maybe if you notice that you might even have leftover money, then you can donate the rest of it. But I really do think picking how much to give should sort of be its own project. And then you have a pile of money you can be a hardcore EA about.SARAH: So you pick a cut off point and then you don't agonize over anything over and above that.AARON: Yeah. And then people, I mean, the hard part is that if somebody says their cut off point is like 1% of their income and they're making like $200,000, I don't know. Maybe their cut off point should be higher. So there is a debate. It depends on that person's specific situation. Maybe if they have a kid or some super expensive disease, it's a different story. If you're just a random guy making $200,000, I think you should give more.SARAH: Maybe you should be giving away enough to feel the pinch. Well, not even that. I don't think I'm going to do that. This is something that I do actually want to do at some point, but I need to think about it more and maybe get a better job.AARON: Another thing is, if you're wanting to earn to give as a path to impact, you could think and strive pretty hard. Maybe talk to people and choose your education or professional development opportunities carefully to see if you can get a better paying job. 
That's just much more important than changing how much you give from 10% to 11% or something. You should have this macro level optimization. How can I have more money to spend? Let me spend, like, I don't know, depends what life stage you are, but if you had just graduated college or maybe say you're a junior in college or something. It could make sense to spend a good amount of time figuring out what that path might look like.AARON: I'm a huge hypocrite because I definitely haven't done all this nearly as much as I should, but I still endorse it.SARAH: Yeah, I think it's fine to say what you endorse doing in an ideal world, even if you're not doing that, that's fine.AARON: For anybody listening, I tweeted a while ago, asking if anyone has resources on how to think about giving away wealth. I'm not very wealthy but have some amount of savings. It's more than I really need. At the same time, maybe I should be investing it because EA orgs don't feel like, or they think they can't invest it because there's potentially a lot of blowback if they make poor investments, even though it would be higher expected value.There's also the question of, okay, having some amount of savings allows me to take higher, potentially somewhat higher risk, but higher value opportunities because I have a cushion. But I'm very confused about how to give away what I should do here. People should DM me on Twitter or anywhere they have ideas.SARAH: I think you should calculate how much you need to cover your very basic needs. Maybe you should work out, say, if you were working 40 hours a week in a minimum wage job, like how much would you make then? And then you should keep that for yourself. And then the rest should definitely all go to the shrimp. Every single penny. All of it.AARON: This is pretty plausible. Just to make it more complicated, there's also the thing that I feel like my estimates or my best guesses of the best charities to give to over time has changed. And so there's like two competing forces. One is that I might get wiser and more knowledgeable as time goes on. The other one is that in general, giving now is better than giving later. All else equal, because I think for a couple of reasons, the main one just being that the charities don't know that you're going to give later.AARON: So it's like they can plan for the future much better if they get money now. And also there's just higher leverage opportunities or higher value per dollar opportunities now in general than there will be later for a couple of reasons I don't really need to. This is what makes it really complicated. So I've donated in the past to places that I don't think, or I don't think even at the time were the best to. So then there's a question of like, okay, how long do I save this money? Do I sit on it for months until I'm pretty confident, like a year.AARON: I do think that probably over the course of zero to five years or something, becoming more confident or changing your mind is like the stronger effect than how much good you give to the, or how much better it is for the charities to give now instead of later. But also that's weird because you're never committing at all.Sometimes you might decide to give it away, and maybe you won't. Maybe at that time you're like, “Oh, that's what I want. A car, I have a house, whatever.” It's less salient or something. Maybe something bad happened with EA and you no longer identify that way. Yeah, there's a lot of really thorny considerations. 
Sorry, I'm talking way too much.SARAH: Long, are you factoring AI timelines into this?AARON: That makes it even more sketchy. But that could also go both ways. On one hand, you have the fact that if you don't give away your money now and you die with it, it's never going to do any good. The other thing is that it might be that especially high leverage opportunities come in the future or something potentially you need, I don't know, whatever I can imagine I could make something up about. OpenPhil needs as much money as it can get to do X, Y and Z. It's really important right now, but I won't know that until a few years down the line. So just like everything else, it doesn't neatly wash out.SARAH: What do you think the AGI is going to do to the shrimp? I reckon it's probably pretty neat, like one shrimp per paperclip. Maybe you could get more. I wonder what the sort of shrimp to paperclip conversion rate is.AARON: Has anyone looked into that morally? I think like one to zero. I don't think in terms of money. You could definitely price that. I have no idea.SARAH: I don't know. Maybe I'm not taking this as seriously as I should be because I'm.AARON: No, I mean, humor is good. When people are giving away money or deciding what to do, they should be serious. But joking and humor is good. Sorry, go ahead.SARAH: No, you go ahead.AARON: I had a half-baked idea. At EA Global, they should have a comedy show where people roast everybody, but it's a fundraiser. You have to pay to get 100 people to attend. They have a bidding contest to get into the comedy show. That was my original idea. Or they could just have a normal comedy show. I think that'd be cool.SARAH: Actually, I think that's a good idea because you guys are funny. There is a lot of wit on this side of Twitter. I'm impressed.AARON: I agree.SARAH: So I think that's a very good idea.AARON: Okay. Dear Events team: hire Aaron Bergman, professional comedian.SARAH: You can just give them your Twitter as a source for how funny you are, and that clearly qualifies you to set this up. I love it.AARON: This is not important or related to anything, but I used to be a good juggler for entertainment purposes. I have this video. Maybe I should make sure the world can see it. It's like a talent show. So maybe I can do that instead.SARAH: Juggling. You definitely should make sure the world has access to this footage.AARON: It had more views than I expected. It wasn't five views. It was 90 or something, which is still nothing.SARAH: I can tell you a secret right now if you want. That relates to Max asking in the chat about glee.AARON: Yes.SARAH: This bit will also have to edit out, but me having a public meltdown over AI was the second time that I've ever blown up on the Internet. The first time being. I can't believe I'm telling you this. I think I'm delirious right now. Were you ever in any fandoms, as a teenager?AARON: No.SARAH: Okay. Were you ever on Tumblr?AARON: No. I sort of know what the cultural vibes were. I sort of know what you're referring to. There are people who like Harry Potter stuff and bands, like Kpop stuff like that.SARAH: So people would make these fan videos where they'd take clips from TV shows and then they edit them together to music. Sometimes people would edit the clips to make it look like something had happened in the plot of the show that hadn't actually happened. For example, say, what if X character had died? And then you edit the clips together to try and make it look like they've died. 
And you put a sad song, how to save a life by the fray or something, over the top. And then you put it on YouTube.AARON: Sorry, tell me what…"Hat I should search or just send the link here. I'm sending my link.SARAH: Oh, no, this doesn't exist anymore. It does not exist anymore. Right? So, say if you're, like, eleven or twelve years old and you do this, and you don't even have a mechanism to download videos because you don't know how to do technology. Instead, you take your little iPod touch and you just play a YouTube video on your screen, and you literally just film the screen with your iPod touch, and that's how you're getting the clips. It's kind of shaky because you're holding the camera anyway.SARAH: Then you edit together on the iMovie app of your iPod touch, and then you put it on the Internet, and then you just forget about it. You forget about it. Two years later, you're like, oh, I wonder what happened to that YouTube account? And you log in and this little video that you've made with edited clips that you've filmed off the screen of your laptop to ‘How To Save Life' by The Fray with clips from Glee in it, has nearly half a million views.AARON: Nice. Love it.SARAH: Embarrassing because this is like, two years later. And then all the comments were like, oh, my God, this was so moving. This made me cry. And then obviously, some of them were hating and being like, do you not even know how to download video clips? Like, what? And then you're so embarrassed.AARON: I could totally seem it. Creative, but only a reasonable solution. Yeah.SARAH: So that's my story of how I went viral when I was like, twelve.AARON: It must have been kind of overwhelming.SARAH: Yeah, it was a bit. And you can tell that my time, it's like 20 to eleven at night, and now I'm starting to really go off on one and talk about weird things.AARON: Like an hour. So, yeah, we can wrap up. And I always say this, but it's actually true. Which is that low standard, like, low stakes or low threshold. Low bar for doing that in recording some of the time.SARAH: Yeah, probably. We'll have to get rid of the part about how I went viral on YouTube when I was twelve. I'll sleep on that.AARON: Don't worry. I'll send the transcription at some point soon.SARAH: Yeah, cool.AARON: Okay, lovely. Thank you for staying up late into the night for this.SARAH: It's not that late into the night. I'm just like, lame and go to bed early.AARON: Okay, cool. Yeah, I know. Yeah, for sure. All right, bye. Get full access to Aaron's Blog at www.aaronbergman.net/subscribe

Pigeon Hour
#8: Max Alexander and I solve ethics, philosophy of mind, and cancel culture once and for all

Pigeon Hour

Play Episode Listen Later Nov 6, 2023 70:41


Logically Speaking
Building Business Continuity & Resilience with Max Alexander

Logically Speaking

Play Episode Listen Later Oct 30, 2023 53:10


Business continuity and cyber resilience are terms many professionals know, but they don't always understand what it takes for an organization to be resilient and maintain continuity. This week, Logically CRO Steve Rivera and JPMorgan Chase VP Max Alexander touch on threat actor TTPs hitting small and mid-size businesses, the recovery process after an attack, and everything in between. Tune in to this discussion to: discover common tactics, techniques, and procedures used by big-name threat actors; learn how to empower employees to make secure decisions and avoid unintentional insider threats; and understand why having an incident response and recovery plan in place (before an attack!) could save your company millions.

The Nonlinear Library
EA - The Lives We Can Save by Omnizoid

The Nonlinear Library

Play Episode Listen Later Sep 3, 2023 11:42


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Lives We Can Save, published by Omnizoid on September 3, 2023 on The Effective Altruism Forum. I work as a Resident Assistant at my college. Last year, only a few weeks into me starting, I was called at night to come help with a drunk student. I didn't actually help very much, and probably didn't have to be there. I didn't even have to write up the report at the end. At one point I went outside to let medical services into the building, but mostly I just stood in a hallway. The person in question was so drunk they couldn't move. They had puked in the bathroom and were lying in the hallway crying. They could barely talk. When Campus Safety arrived they kneeled down next to this person and helped them drink water, while asking the normal slew of questions about the person's evening. They asked this person, whose name I can't even remember, why they had been drinking so much. They said, in between hiccups and sobs, "friend doesn't want to be friend anymore." How do you describe that feeling? I don't think transcription can convey the misery and the drunkenness and the awful situation that had led to this awful situation. Someone drank so much that they could barely move, was lying curled in a hallway where all the other residents could and were watching, and was only able to muster out "friend doesn't want to be friend anymore" as they cried. Should I only care because I happened to be standing in that hallway on a late September evening? Had I remained in my room, laughing with my friends, would this person's struggle have been worth nothing? Max Alexander (this whole post is very worth reading)! It's sometimes hard to be motivated to help the world. The trip you forego, the fun you could have had with a friend, the nice things you could have bought are instead sent straight into the coffers of some charity that you've read about. It can feel sort of alienating when you think just of the number of people you have saved. Instead of thinking of numbers, think of stories. The people who make up the numbers - who make up the hundreds of thousands of lives saved by effective charities - are real, flesh-and-blood people, who matter just as much as you and I. We may not look into the gaunt faces of those who would have otherwise starved to death, we may not see their suffering with our eyes, but we know it is real. People are dying in ways that we can prevent. GiveWell top charities can save lives for only a few thousand dollars. It's hard to get your mind around that. I have a friend who has raised over 50,000 dollars for effective charities. 10 lives. 10 people. 10 people, many of them children, who will be able to live out a full life, rather than being snuffed out at a young age by a horrible painful disease. They will not have to lie in bed, with a fever of 105, slowly dying of malaria when they are five. They will have the chance to grow up. Who are these people? I do not know. But I can imagine their stories. I can imagine their stories because I can hear the stories of other people like this, people who are about to die. For example, on this Reddit thread, you can find the stories of lots of people who are about to die. Stories like these: Stage IV colon cancer here. Age 35. I'm a single mum to a 1-year-old and there is a 94% chance I'll be dead in 4 years. 
But there is still a wee bit of hope, so I try to hold onto that (hard to do most days). My days are filled with spending time with my baby and hoping that I live long enough that she'll remember me. She's pretty awesome and makes me laugh every day, so there is a lot of happiness in this life of mine. Reading these stories causes me to tear up. I think a lot of people have a similar response. They're so tragic - entire lives being snuffed out. The line "My days are filled with spending time with my baby and ho...

KELTEK | Pure Hardstyle
KELTEK | Pure Hardstyle | Episode 031

KELTEK | Pure Hardstyle

Play Episode Listen Later Jun 16, 2023 60:10


Tracklist:
00:00 | Intro
01:10 | 01. KELTEK - All Or Nothing
04:44 | 02. Refuzion - Sun Comes Up
06:32 | 03. D-Block & S-te-Fan - Someone To Believe In
09:13 | 04. Demi Kanon - Let Me Dance
10:52 | 05. Atmozfears and Toneshifterz - Illusion
12:44 | 06. Brennan Heart and Phuture Noize - When Daylight Strikes (Intents Kingdom of Unity 2023 Anthem)
15:56 | 07. Adrenalize - Toxic
17:39 | 08. Ghost Stories (D-Block & S-te-Fan) - Devil's Night
20:57 | 09. Frequencerz, Max Alexander and MC Synergy - Brothers In Crime
22:33 | 10. Divinez - Set You Free
24:58 | 11. Adaro - The Endless Voyage (Wish Outdoor Anthem 2023)
26:44 | 12. D-Sturb - Flow
29:37 | 13. Devin Wild - Holy Grounds
32:37 | 14. Dvastate - Rockin' With The Best
35:08 | 15. D-Charged - Relief
37:18 | 16. Code Black - Open Your Mind
39:18 | 17. Act of Rage and ATILAX - Raging City (Official Decibel Outdoor 2023 Anthem)
42:34 | 18. Sub Zero Project featuring MC Stretch - LFG PSYCHO
45:08 | 19. Rejecta - Resurrector
47:06 | 20. Ran-D and Invector - Intoxicated
50:00 | 21. Coone and Rooler - Return Of The Return
51:42 | 22. Warface, D-Sturb and E-Force featuring Artifact, Bloodlust and Killshot - Making History
54:39 | 23. Revive - I WISH
56:53 | 24. Aversion - Activation

Strawberries on fire
Navy SEAL als rolmodel

Strawberries on fire

Play Episode Listen Later Mar 2, 2023 67:48


While IJlien pedaled a good 30 kilometres to Berkel and back, Gaia spent the whole day wondering why she was cold in her spring sweater (with short sleeves, yes). IJlien is reading David Goggins' books and is happily shifting paradigms again: losing 50 kilos in three months, Navy SEAL training, Hell Week, ultra-running. Not that she plans to copy him, but cycling 15 kilometres really is just a matter of keeping going. Gaia is sceptical and wonders whether this is a healthy role model for people who already have unrealistic expectations of themselves. And who doesn't, as a well-meaning millennial? The pizza-bitterballen tasting is definitely going to happen, with IJlien, as a vegetarian, thinking she can neatly dodge it. She does demonstrate the physiological sigh from (of course!) Andrew Huberman, but that turns out to be a lot more effective on your own than when you practise it as a pair #cryingwithlaughter. Via a swimming-lesson debacle, Gaia calls on everyone to vote in the Provincial Council and water board elections, because those could have more effect on your (back garden) than you think. We do think the bus connection on Goeree-Overflakkee should stay. She also stopped by the Kruidvat to pick up some sleep aids, but won a small victory by not giving in to the coping mechanism of 'cover the emotion with a bag of crisps', instead letting the emotions in and going through a midnight crying fit. We close with a warm follow tip: Max Alexander is an 8-year-old fashion designer who inspires many a follower (read: 1.2 million) to choose what you enjoy doing. Do you follow us at @ditisdertigpodcast on Instagram? We'd like that!

Space Café Podcast
Max Alexander on Space Photography - Iconic Captures of Neil Armstrong, Stephen Hawking, and more on this episode

Space Café Podcast

Play Episode Listen Later Jan 31, 2023 73:51


SpaceWatch.Global is pleased to present: The Space Café Podcast #73: Max Alexander on Space Photography - Iconic Captures of Neil Armstrong, Stephen Hawking, and more on this episodeEpisode 073 features a special guest: Max AlexanderWe're excited to welcome world-renowned photographer, Max Alexander, to the Space Cafe podcast. With over 25 years of experience in editorial and commercial photography, Max has captured the likes of Neil Armstrong, Stephen Hawking, and Nelson Mandela. He specializes in science communication through photography and is a Fellow of the Royal Astronomical Society. From his work with the UK Space Agency to his photography for The Children's Society, Max has left a lasting impact with his powerful and inspiring images. In this episode, we'll delve into his journey as a photographer and his approach to capturing meaningful moments.Some selected photos Max mentioned in the episode you can find here!  The Space Cafe Podcast brings our signature brand of engaging and relevant content from across the global space economy to you via an exciting new format. Together with award-winning Executive Producer, Markus Mooslechner (Terra Mater Factual Studios), our fortnightly podcast features guests that matter.Each episode includes a review of important topics, as well as guest appearances and deep commentary from expert panelists from across the space sector.SPOTIFY:  Space Cafe Podcast Guest Tunes You can find us on Spotify and Apple Podcast!Please visit us at SpaceWatch.Global, subscribe to our newsletters. Follow us on LinkedIn and Twitter!

Space Café Podcast
EPISODE 73: Max Alexander on Space Photography - Iconic Captures of Neil Armstrong, Stephen Hawking, and More on the Space Cafe Podcast.

Space Café Podcast

Play Episode Listen Later Jan 31, 2023 73:51


SpaceWatch.Global is pleased to present: The Space Café Podcast #73: Max Alexander on Space Photography - Iconic Captures of Neil Armstrong, Stephen Hawking, and more on this episodeEpisode 073 features a special guest: Max AlexanderWe're excited to welcome world-renowned photographer, Max Alexander, to the Space Cafe podcast. With over 25 years of experience in editorial and commercial photography, Max has captured the likes of Neil Armstrong, Stephen Hawking, and Nelson Mandela. He specializes in science communication through photography and has been a Fellow of the Royal Astronomical Society. From his work with the UK Space Agency to his photography for The Children's Society, Max has left a lasting impact with his powerful and inspiring images. In this episode, we'll delve into his journey as a photographer and his approach to capturing meaningful moments.Some selected photos Max mentioned in the episode you can find here! The Space Cafe Podcast brings our signature brand of engaging and relevant content from across the global space economy to you via an exciting new format. Together with award-winning Executive Producer, Markus Mooslechner (Terra Mater Factual Studios), our fortnightly podcast features guests that matter.Each episode includes a review of important topics, as well as guest appearances and deep commentary from expert panelists from across the space sector.SPOTIFY: Space Cafe Podcast Guest TunesWho we are?SpaceWatch.Global is a digital magazine and portal for those interested in space, and the far-reaching impact that space developments have.While showcasing the technology that enables the industry to edge closer to the next frontier, SpaceWatch.Global also provides analysis, forecasts, and insight into the geopolitical implications of space developments. From space policy, exploration, and missions, to space weapons and technology, we provide a complete perspective on the emerging space sector as it expands into a global marketplace.Host: Markus MooslechnerGuest: Max AlexanderPublisher: Torsten Kriening SpaceWatch.Global GmbHMessedamm 2314055 BerlinGermany○ Website: www.spacewatch.global○ Email: podcast@spacewatch.global○ Exclusive distribution Partner: Bookwire - www.bookwire.de Hosted on Acast. See acast.com/privacy for more information.

The Gym Lords Podcast
Ep 874 Ramon & Kim Garcia, Blake Franklin, Max Alexander

The Gym Lords Podcast

Play Episode Listen Later Dec 13, 2022 83:48


In this episode we interview Ramon & Kim Garcia, Blake Franklin, and Max Alexander about their take on being a gym owner. Welcome to the Gym Lords Podcast, where we talk with successful gym owners to hear what they're doing that is working RIGHT NOW, and to hear lessons and failures they've learned along the way. We would love to share your story! If you'd like to be featured on the podcast, fill out the form on the link below. https://gymlaunchsecrets.com/podcast

GOOD DAY LIVE WITH MICHELLE AND KYLE
Max Alexander & Sherri Madison

GOOD DAY LIVE WITH MICHELLE AND KYLE

Play Episode Listen Later Aug 10, 2022 29:49


Focus TV
Max Alexander & Sherri Madison

Focus TV

Play Episode Listen Later Aug 10, 2022 29:49


TBA - Rope of Dope Presents
S4E5 TBA 2.0 - Max Alexander (business coach)

TBA - Rope of Dope Presents

Play Episode Listen Later May 1, 2022


Max Alexander is a business coach. Max is a former personal trainer at Fitness World, with such great success that he went from working with individual clients, to training the other trainers in how to attract more clients, to having gone independent today!

Ian Talks Comedy
John DeBellis (author Stand Up Guys, writer SNL '80 - '81)

Ian Talks Comedy

Play Episode Listen Later Feb 18, 2022 65:17


John joins me to talk about his book Stand Up Guys and making it like the book The Last Laugh; how comedians "learned" their craft; Elayne Boosler; his comedy mentors; Ronnie Shakes; Larry David; Richard Lewis; auditioning for the Improv; Bill Maher; crowd work; dealing with hecklers; Rita Rudner; Gilbert Gottfried; Andy Kaufman; not watching Man on the Moon; Richard Belzer; his Strat 'O' Magic Stand Up League; Bob Costas; Drew Carey; Max Alexander; Dennis Wolfberg; having strangers give Gilbert Gottfried a dollar; Saturday Night Live - Joe Piscopo, Gilbert Gottfried, Eddie Murphy & Hal Willner; writing Weekend Update; the SNL/Fridays rivalry trying to hurt John and Larry David's friendship; Billy Crystal Comedy Hour; Into the Night starring Rick Dees; DC Follies; John Roarke; Steve Barker; The Good Life with John Mendoza; writing a dark take on The Flintstones for Billy that was nixed by censors; Make Me Laugh; comedy today; Jackie Martling; his travel schedule; Joe Piscopo --- This episode is sponsored by · Anchor: The easiest way to make a podcast. https://anchor.fm/app

Media Path Podcast
TV Production & The Art of the Interview featuring Barry Kibrick

Media Path Podcast

Play Episode Listen Later Dec 2, 2021 62:41


Barry Kibrick's storied career has taken him from The Iditarod dog sled race to the halls of NBC, The Ellis Island Medal of Honor and three Emmy awards. AND he produced Fritz's groundbreaking variety show, It's Fritz! Barry's famed PBS show Between the Lines is now a podcast! He's conducted thousands of fascinating interviews with extraordinary luminaries, so there is much to discuss. Plus Fritz and Weezy are recommending House of Gucci, now in theaters, and Tick, Tick... Boom! on Netflix.
Path Points of Interest:
Barry Kibrick
Barry Kibrick's podcast Between the Lines
Between the Lines on Youtube
Between the Lines on PBS
House of Gucci - In Theaters and Streaming
House of Gucci by Sara Gay Forden
Tick, Tick... Boom!
The Real Story of ABBA: Bright Lights, Dark Shadows by Carl Magnus Palm
The Beatles: Get Back from Peter Jackson
Shirley, I Jest by Cindy Williams

Minnesota Fight Night
An interview with Marty "The Wolfman" Lindquist

Minnesota Fight Night

Play Episode Listen Later Sep 25, 2021 36:48


At age 52, Minnesota boxer Marty "The Wolfman" Lindquist shows no signs of slowing down. A popular combat sports veteran with a pro boxing record of 14-12, Lindquist returns to the squared circle on Saturday, Oct. 2, as part of a Valhalla Combat Sports event at Element Gym in St. Paul. His opponent, making his pro boxing debut, is Victor Josef Temnograd of Minneapolis. In the following interview, Lindquist talks about his long boxing and kickboxing career, his work as a boxing trainer at Valhalla and much more. He also gives the inside story behind his stunning 2006 10-second knockout of Max Alexander, a then-unbeaten rising star from Philadelphia. Though the result was later changed to a no-decision, Lindquist proved one thing that night: If you take the Wolfman lightly, you do so at your own risk. Intro music: “Ali Shuffle,” by the Toler/Townsend Band. Special thanks to Deb Toler. The full instrumental is available for purchase: music.apple.com/us/album/ali-shuffle/334927560?i=334927635 (1) Valhalla Combat Sports | Facebook --- Support this podcast: https://podcasters.spotify.com/pod/show/brian-johnson492/support

逐工一幅天文圖 APOD Taigi
Tī Stonehenge 天頂 ê 日出夏至 ft. 阿錕 (20210620)

逐工一幅天文圖 APOD Taigi

Play Episode Listen Later Jul 20, 2021 2:08


Solstice Sunrise over Stonehenge https://apod.tw/daily/20210620/ Today the Sun reaches its northernmost point in planet Earth's sky. That point is called a solstice. In many cultures this day marks a change of seasons: Earth's northern hemisphere moves from spring into summer, while the southern hemisphere moves from autumn into winter. More precisely, the solstice falls today in some parts of the world and tomorrow in others. This picture was taken during the summer solstice of 2008 at Stonehenge in the UK. It happens to capture a picture-perfect sunrise featuring mist, trees, clouds, stones placed there some 4,500 years ago, and a 4.5-billion-year-old glowing ball of light, the Sun. Even accounting for the precession of Earth's rotational axis over the millennia, the Sun will still climb over Stonehenge along an astronomically significant path. ——— This is the Taiwanese-language (Tâi-gí) podcast of NASA's Astronomy Picture of the Day. Original: https://apod.nasa.gov/ Taiwanese version: https://apod.tw/ Today's article — Image: Max Alexander, STFC, SPL; Music: PiSCO - 鼎鼎; Voice: 阿錕; Translation: An-Li Tsai (NCU); Source text: https://apod.nasa.gov/apod/ap210620.html Powered by Firstory Hosting

RD Talks
Fatal affair

RD Talks

Play Episode Listen Later Mar 10, 2021 15:34


He was a successful businessman. She was a suburban housewife. For 20 years, they kept quiet about their fatal affair. First published September 2008. Written by Max Alexander. Read by Zoe Meunier.

Crusader Sports Talk
Player Profile - Max Alexander SR

Crusader Sports Talk

Play Episode Listen Later Feb 10, 2021 3:02


From 0 To Hero
How to overcome insecurities with Max Alexander

From 0 To Hero

Play Episode Listen Later Dec 25, 2020 58:29


Hi Ohana, today I have singer-songwriter Max Alexander with me. We talked about the challenges of live gigs, guitar pedals, overcoming insecurities when singing, and much much more. He also shared a story of what to absolutely avoid before going into a recording studio. Have fun! Follow Max Alexander: Instagram @maxalexandermusic https://www.instagram.com/maxalexandermusic/ Facebook @maxalexandermusic https://www.facebook.com/maxalexandermusic YouTube Max Alexander https://www.youtube.com/channel/UCEnRMRkkFBxbBRumf7BAhSQ Follow me on Instagram @alise.koa And if you want to learn more about songwriting you can go to www.alisekoa.com --- Send in a voice message: https://anchor.fm/alisekoa/message

Count it up
How to build a product to fit your target market.

Count it up

Play Episode Listen Later Dec 3, 2020 40:56


Is your company actually solving the problems of your customers? In this episode of Count it Up!, David interviews Max Alexander, co-founder of Ditto. They discuss how the best way for businesses to succeed is by solving the problems of the market. #inventory #inventorymanagement #business #innovative #tech #airline #airport #construction #founder #innovation #budget #costsavings #value #accuracy #operations

C21Podcast
Adeel Amini, Max Alexander and Jonathan Jones

C21Podcast

Play Episode Listen Later Aug 4, 2020 35:57


Adeel Amini, founder of The TV Mindset, discusses a new initiative to provide freelancers with mental health support; Secret Cinema CEO Max Alexander explains why high-end TV series offer more scope for immersive events than movies; and Ember Films MD Jonathan Jones on new Netflix series Tiny Creatures.

Bildungsfern
BF20: Perspektivwechsel (mit Reinhard, Max, Alexander)

Bildungsfern

Play Episode Listen Later Apr 14, 2020 53:54


In today's episode, Reinhard is joined by guests Max and Alexander, who are completing technician training at our school and talk about their perspective on the school closure.

Monocle 24: The Monocle Weekly
Max Alexander, Stanley Donwood and Dr Tessa Roynon

Monocle 24: The Monocle Weekly

Play Episode Listen Later Mar 8, 2020 60:00


We meet Secret Cinema CEO Max Alexander to learn about experiential cinema and artist Stanley Donwood to discuss creativity in the anthropocene and his new book of linocuts, ‘Bad Island’. Plus, as a new documentary on Toni Morrison hits cinemas, we talk about the author’s legacy with Oxford academic Dr Tessa Roynon.

Three Mandys Podcast
Episode 65: New Year, New Max, Same Mandys

Three Mandys Podcast

Play Episode Listen Later Dec 31, 2019 58:25


This week we have our good friend Max Alexander back in the studio! We talk about the Home Alone series, when to go to the gym, and our TuTus. Make sure to check out our sponsor, Brandless! Use the link brandies.pxf.io/QxbnY to find out more!

Three Mandys Podcast
Episode 49: Mad Max and the City Councilman

Three Mandys Podcast

Play Episode Listen Later Sep 10, 2019 59:29


On this episode we are joined by Max Alexander and Sam Blackwood!  Listen as we navigate through 90's sitcoms, Rosie O'Donnell money, and fighting Dirty!  Also, Follow Sam Blackwood as he runs for Lafayette City Council!!  Thank you to our sponsor Brandless. Use the link brandless.pxf.io/QxbnY to find out more! Make sure to go see Mandy compete in the Wild Card round at Comedy Works New Faces Competition, Sept. 11th! Call Comedy Works in Denver and tell them you are going to see Mandy for guest tickets!!!! Link for Comedy Works Guest Tickets: www.comedyworks.com/comedians/new-f…ntest-wild-card

Scandal
Stan Lee's World: Protecting old folks & little dogs

Scandal

Play Episode Listen Later Dec 13, 2018 39:12


Stan Lee road manager Max Alexander tells what happened when Los Angeles police knocked on Lee's door to investigate allegations his daughter violently attacked him and his wife. Alan Duke also reveals an animal abuse case is pending against J.C. Lee. Reporter Mark Ebner shares observations from Stan Lee's exit interview. More details are at StanLeesWorld.com.

Happily Ever Aftermath
Roxanne (1987)

Happily Ever Aftermath

Play Episode Listen Later Dec 4, 2018 68:51


Polina counters Diana's Steve Martin movie from last episode with her pick Roxanne (1987). By the end, you will thoroughly know how she feels about Daryl Hannah's curly hair. Diana sings The Police within the first minute of the show. In this modern take on Edmond Rostand's classic play "Cyrano de Bergerac," C. D. Bales (Steve Martin) is the witty, intelligent and brave fire chief of a small Pacific Northwest town who, due to the size of his enormous nose, declines to pursue the girl of his dreams, lovely Roxanne Kowalski (Daryl Hannah). Instead, when his shy underling Chris McConnell (Rick Rossovich) becomes smitten with Roxanne, Bales feeds the handsome young man the words of love to win her heart. Also stars Shelley Duvall, John Kapelos, Fred Willard, Max Alexander, Michael J. Pollard, Steve Mittleman, Damon Wayans, Matt Lattanzi, Shandra Beri, Jean Sincere, Thom Curley, Ritch Shydner, and Kevin Nealon. (from Fandango.com and Wikipedia.org) The "alien abduction" scene that tickled Polina: https://youtu.be/pw5Y_7wtJmk One Person's Trash is Our Treasure's episode about Cyrano de Bergerac and Roxanne can be found at: https://bit.ly/2BRDyTK Find other amazing podcasts by searching #ladypodsquad on Twitter, Facebook, and all the social media platforms. Tweet us @HEAMCast, like us on Facebook @HappilyEverAftermath, and e-mail us at contact@heamcast.com.

The High Button Podcast
#136 Max Kirsh, Perks Halifax & DEEP TALK

The High Button Podcast

Play Episode Listen Later Jun 25, 2018 62:13


Max Alexander joins the High Button to talk about his love for entrepreneurship and how he stays motivated. Talking about passion, creativity and visualization, Max gives great advice on pushing through the ups and downs. A great listen for anyone looking to start a business or side hustle. Ep. #136

Bombing with Mike Dorval
26- Max Alexander

Bombing with Mike Dorval

Play Episode Listen Later Jul 25, 2016 71:47


In addition to a stand-up career that saw him on The Tonight Show with Johnny Carson and Merv Griffin, Max Alexander co-starred in the definitive movie about comedians, Punchline. We talk about all that AND Julio Iglesias on this week's Bombing!

Smart Mark Radio
Pro Wrestling Weekend: Dropkick Diabetes 2 preview

Smart Mark Radio

Play Episode Listen Later Jul 10, 2016 89:00


Max Alexander and Kip Paige join our hosts to preview the second annual Dropkick Diabetes show in Lisbon, Ohio, featuring Rhino and Tommy Dreamer. To donate, please go to https://www.gofundme.com/dropkickdiabetes2. Plus, time permitting, our hosts will discuss all the news in professional wrestling.

CooperTalk
Max Alexander - Episode 525

CooperTalk

Play Episode Listen Later Jun 27, 2016 60:02


Steve Cooper talks with comic/actor Max Alexander. Max is one of the few comics to have appeared on The Tonight Show with both Jay Leno... and Johnny Carson! His face is instantly recognizable from such movies as Forgetting Sarah Marshall with Jason Segel, Man on the Moon with Jim Carrey, Punchline with Tom Hanks and Roxanne with Steve Martin. He has appeared on many television shows, including The Nine on ABC, the Emmy Award-winning ABC drama The Practice, Hollywood Squares, and on HBO and Comedy Central. His voice can also be heard in the comedy hit Farce of the Penguins. As far as stand-up goes, he has performed with the best of the best: he hasn’t just shared the stage once or twice with icons like Frank Sinatra, Julio Iglesias and Tom Jones, he has delivered the goods consistently for well over 3,000 shows. He is also one of the top corporate comics in the country today and has performed over the years for such companies as McDonald’s, Buick, Home Depot, Acura and Pepsi, to name a few.

Yarn in the City
Episode 26: Counting Down to Christmas

Yarn in the City

Play Episode Listen Later Dec 1, 2015 49:08


We've had our last event of the year, the Bath Christmas Craft Crawl, which has inspired us to think about knitting gifts and gifts for knitters.

News and Events
The Bath Christmas Craft Crawl Recap!
5th December - Knit for Peace Knit-in at the Barbican
11th December - PPQ Christmas party

London Craft Guide Update
Now with the designers, and looking at an end-of-2015 or very-early-2016 release date. Thank you all for your patience and our apologies about the continued delay! Important note: if you preordered the book at the Yarn in the City Pop Up Marketplace, please email us!!!

What we’re working on
Rachel has almost finished two sets of fingerless mittens in Mind the Gap sport weight, and has decided (for the moment) that the other presents aren't due until just after Christmas, and can therefore wait. She is madly and obsessively working on Dark and Stormy by Thea Colman in SweetGeorgia Superwash Merino in Wasabi. Allison is making great progress on Grettir by Jared Flood, and is almost at the yoke. She's finished a quick, made-up cowl with Porpoise Fur handspun from the Lab Goddess Fibre Club, and has a pair of socks for her dad to work on; the pattern is from Kate Atherley's new book.

2015 Gift Guide – Tiny Treats
Yarn bowls - Emily Cross Ceramics, Little Wren Pottery, Hadley Clay, Earth Wool & Fire, and Kingfishersblue.
Sheep Shawl sticks by JUL Designs.
Sirka counter, available at YAK or Meadow Yarn.
Moth cards by Max Alexander, available at Prick Your Finger or Max’s World.
Silver crochet hooks by Lyn Roberts at A Yarn Story
Knitters Keep from Coco Knits at A Yarn Story
Project bags - The Little Grey Girl, Nicsknots, BritYarn
Knitters jewellery - Hello Sunshine, Max's World, Silverdashery
John Arbon's fabulous wool and alpaca socks
Grounded by Heather Ordover

Off the air, you can find us on Ravelry, Facebook, Twitter, Instagram and Pinterest. Please leave a review for us on iTunes or listen to us on Stitcher Radio, and don't hesitate to share your thoughts and feedback with us. We are always out and about, so come say hi at any of the events we're attending if you get a chance! Everyone is also more than welcome to join our Wednesday night knit group from 7-9 pm (or longer) upstairs at The Goat on the Rise, Clapham SW11 1EQ.

Music credits (all available on NoiseTrade):
Santa Clause is Strumming to Town - The Winter Failure
This Christmas - Paste Holiday Sampler 2014
Christmas is All Around (Love Actually) - Sleeping at Last

The Louie Anderson Podcast
Episode 19: Max Alexander talks about The Round, Merv Griffin and how doing Carson one night gave him work for over 20 years

The Louie Anderson Podcast

Play Episode Listen Later Dec 2, 2014 64:58


Max Alexander talks about The Round, Merv Griffin and how doing Carson one night gave him work for over 20 years